Title: Revisit the Power of Vanilla Knowledge Distillation: from Small Scale to Large Scale
Decision: Accept (poster)

Review 1:
Summary: This paper explores the power of vanilla distillation with large datasets and strong training recipes. It shows that stronger data augmentation and larger datasets can narrow the gap between vanilla KD and other meticulously designed methods. Extensive results demonstrate the power of vanilla KD.
Strengths: 1. This paper reveals the enormous potential of vanilla KD. It inspires the community to take a broader look at knowledge distillation instead of small models or datasets.
2. This paper takes an enormous number of experiments to explore vanilla KD and obtain many valuable results and SOTA models.
Weaknesses: 1. In practice, we usually train models under an ordinary training recipe, and under this setting vanilla KD performs worse. So it is still not clear which method should be applied in which setting. For example, LKBT [1] compares KD with another method, NKD, under various settings including stronger training recipes, and NKD performs much better than KD.
[1] Are Large Kernels Better Teachers than Transformers for ConvNets. ICML 2023
2. The paper lacks the baseline performance under a stronger recipe. Generally, KD is introduced for a small model and an ordinary training recipe. When the training recipe is stronger, the improvement that KD brings is smaller. In this paper, we cannot tell how much KD helps the students under a stronger training recipe.
3. The paper applies downstream tasks by replacing the backbones. However, DKD and DIST can be applied directly to the classification branch for detection distillation, and they perform much better than KD. How does vanilla KD perform under a stronger training recipe for detection? Does the conclusion remain the same?
4. The hyper-parameter $\alpha$ and temperature for KD are also important. This paper lacks a discussion about this.
5. Some typos, e.g., the performance of Res50 and MBNetV2 is reversed in Tab 1. Some details are missing; for example, the teacher BEiTv2 seems to be trained with ImageNet-21K. The paper needs to clarify this.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see the weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: **Response to weakness 1:** Thanks for your valuable comments. We first address which method should be applied in different settings. Through our empirical study, we observe a trend wherein a small training set prefers a knowledge distillation (KD) method with stronger regularization or priors, as it can effectively bring in more informative knowledge from the teacher model to enhance the performance of the smaller student model. On the other hand, with a large training set, the need for additional priors diminishes. The sheer abundance of diverse data in a larger dataset already provides sufficient information for the student model to learn from. In such cases, a simpler KD approach may suffice, as the teacher model can effectively capture the complexities present in the data without requiring extensive regularization or priors.
To verify the above analysis, we present the average confidence on all samples and the entropy of predictions of pretrained ResNet50/ResNet152 teacher models on CIFAR-100/ImageNet-1K. We use the two following measurements:
$
\max({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[\max_i(p_i)],
$
and
$
\text{entropy}({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[-\textstyle \sum_i p_i\log p_i],
$
where $p_i$ is the predictive probability of sample $x$ belonging to class $i$.
The average confidence score provides an insight into how certain the teacher's predictions are across all samples. As shown in the table below, the model trained on CIFAR-100 exhibits higher average confidence and a lower entropy value, suggesting that it is more confident in its predictions; however, this results in reduced mutual information, i.e., less knowledge transferred about the other classes.
||Top-1|$\max({p}(x))$|$\text{entropy}({p}(x))$|
|:-:|:-:|:-:|:-:|
|CIFAR-100|79.33|89.78|0.3362|
|ImageNet-1K|75.81|79.63|0.7593|
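As a side note, these two statistics are straightforward to compute from a model's softmax outputs. Below is a minimal sketch; the function name and toy inputs are our own illustration, not from the paper:

```python
import numpy as np

def confidence_and_entropy(probs):
    """Mean max-probability and mean predictive entropy over a set of samples.

    probs: (N, C) array of softmax outputs, each row summing to 1.
    Returns (E_x[max_i p_i], E_x[-sum_i p_i log p_i]).
    """
    probs = np.asarray(probs, dtype=np.float64)
    mean_conf = probs.max(axis=1).mean()            # average confidence
    log_p = np.log(np.clip(probs, 1e-12, None))     # guard against log(0)
    mean_ent = -(probs * log_p).sum(axis=1).mean()  # average entropy
    return mean_conf, mean_ent

# A sharper (more confident) distribution yields higher confidence, lower entropy:
sharp = confidence_and_entropy([[0.9, 0.05, 0.05]])
soft = confidence_and_entropy([[0.4, 0.3, 0.3]])
assert sharp[0] > soft[0] and sharp[1] < soft[1]
```

Under this reading, the lower entropy of the CIFAR-100 teacher corresponds to soft labels closer to one-hot targets, i.e., less "dark knowledge" available to transfer.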
Next, we address the question of why NKD [1] outperforms KD. Take the SLaK-T - Res50 teacher-student pair as an example: when distilling for 120 epochs on ImageNet, NKD achieves a top-1 accuracy of 78.57%, while KD achieves 77.05%. However, extending the distillation duration to 300 epochs leads to NKD achieving a top-1 accuracy of 80.24%, and in our implementation of this extended training (which was not reported in [1]), KD achieves a top-1 accuracy of 80.11%. Remarkably, the gap between KD and NKD narrows significantly with the increased number of training epochs, further corroborating our observation and aligning with the concept of the small data pitfall.
**Response to weakness 2:** In Table 1 of our main paper, we compared the performance of several baseline models using both ordinary and stronger training recipes. On small-scale datasets, the stronger training recipe improves the performance of vanilla KD, yet a significant performance gap remains between vanilla KD and recent KD baselines. On large-scale datasets, the stronger recipe also leads to improved performance for vanilla KD, resulting in a reduction of the performance gap. Building on the aforementioned observation, we introduce the concept of the "small data pitfall." Furthermore, as a response to ***weakness 1 of oNYG***, we conducted experiments using the lightweight MobileNet V3 Small model, reaffirming the validity of the small data pitfall identified in our main paper. For your convenience, we present the results here:
|Dataset|Teacher|Student|Method|Top-1|
|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|-|MobileNet v3 Small|-|64.76|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|DKD|**69.10**|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|DIST|67.72|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|KD|68.59|
|ImageNet-1K|-|MobileNet v3 Small|-|67.40|
|ImageNet-1K|BeiTv2-B|MobileNet v3 Small|DKD|67.36|
|ImageNet-1K|BeiTv2-B|MobileNet v3 Small|DIST|68.02|
|ImageNet-1K|BeiTv2-B|MobileNet v3 Small|KD|**68.05**|
**Response to weakness 3:** We followed the released DKD code to conduct object detection experiments, directly applying distillation to the classification branch. The results are presented in the following table. We used ResNet101 as the teacher backbone and ResNet18 as the student backbone. Among the results, vanilla KD and DIST achieved similar performance, while DKD outperformed the other two methods. Based on our small data pitfall assumption, we speculate that the performance gap is due to the limited scale of the COCO dataset. With a larger dataset, vanilla KD could potentially achieve more competitive performance. To further validate this, we plan to explore stronger training recipes (mosaic augmentation and large-scale jittering) and larger datasets (such as Objects365) for downstream tasks in our future work.
|Iteration|Method|AP|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|
|180K|From Scratch|33.26|53.61|35.26|
|180K|DKD|35.07|56.32|37.45|
|180K|DIST|34.52|55.64|37.33|
|180K|KD|34.75|55.95|37.31|
|540K|DKD|37.44|58.75|40.37|
|540K|DIST|37.01|58.00|40.07|
|540K|KD|36.99|57.92|39.87|
**Response to weakness 4:** Thanks for the valuable comments. We adopt a trivial setting for the hyperparameter $\alpha$, fixing its value to 1, as our primary focus is not on optimizing the configuration of vanilla KD, but rather on showcasing its latent potential. We believe that this straightforward setting is sufficient to support our conclusion. Furthermore, our ablation study, as presented in Table 4, has explored the scenario with $\alpha=0$, which yielded performance similar to that of $\alpha=1$. This suggests that the specific value of $\alpha$ has minimal impact on our main findings.
***Regarding the temperature parameter, please refer to our "Author Rebuttal" section.***
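For completeness, a common Hinton-style form of the vanilla KD objective shows where $\alpha$ and the temperature $T$ enter. This is a sketch under standard conventions (NumPy, our own helper names), not necessarily the paper's exact implementation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the class dimension."""
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def vanilla_kd_loss(student_logits, teacher_logits, labels, alpha=1.0, T=1.0):
    """alpha * CE(student, labels) + T^2 * KL(teacher_T || student_T).

    Setting alpha=0 keeps only the distillation term; the T^2 factor keeps
    gradient magnitudes comparable across temperatures.
    """
    ps_hard = softmax(student_logits)
    ce = -np.log(ps_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    pt = softmax(teacher_logits, T)
    ps = softmax(student_logits, T)
    kl = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=1).mean()
    return alpha * ce + (T * T) * kl
```

With this formulation, $\alpha$ only rescales the cross-entropy term, which is consistent with the observation above that $\alpha=0$ and $\alpha=1$ behave similarly when the teacher's soft labels already carry the label information.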
**Response to weakness 5:** Thanks for your valuable suggestion. The BEiTv2 teacher is trained with ImageNet-21K. More specifically, it is pretrained on ImageNet-1K and then finetuned on ImageNet-21K and ImageNet-1K successively. We will correct the typos and add more details about the BEiTv2 teacher in our final version.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal.
Comment: We thank the authors for the rebuttal.
Some of my concerns are addressed. However, the performance on the downstream tasks and the baseline performance under a strong training recipe without KD are still not satisfactory.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer aER7 part (1/2)
Comment: Thanks for your nice comments.
~~We are conducting more downstream tasks~~.
We apologize for the oversight in not including the results for the "baseline performance under a strong training recipe without KD" in our previous response. We misunderstood the target of your reference to "baseline". ~~As soon as the ongoing experiments are concluded, we will **promptly update the table below** and incorporate the results into Table 1 of our main paper.~~
on ImageNet-1K:
|Teacher-Student|Method|previous recipe|effective gain|direct gain|stronger recipe|effective gain|direct gain|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Res34-Res18|From Scratch|69.75|-|-|**71.91**|-|-|
|Res34-Res18|DKD|71.70|2.80|1.95|73.07|1.61|1.16|
|Res34-Res18|DIST|72.07|3.33|2.32|73.52|2.24|1.61|
|Res34-Res18|vanilla KD|70.66|1.30|0.91|73.46|2.16|1.55|
|Res50-MBV2|Scratch|68.87|-|-|**72.95**|-|-|
|Res50-MBV2|DKD|72.05|4.62|3.18|73.71|1.04|0.76|
|Res50-MBV2|DIST|73.24|6.35|4.37|74.26|1.8|1.31|
|Res50-MBV2|vanilla KD|68.58|-0.42| -0.29 |74.23|1.75|1.28|
on CIFAR-100:
|Teacher-Student|Method|previous recipe|effective gain|direct gain|stronger recipe|effective gain|direct gain|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Res56-Res20|From Scratch|69.09|-|-|**71.83**|-|-|
|Res56-Res20|DKD|71.97|4.17|2.88|73.10|1.77|1.27|
|Res56-Res20|DIST|71.78|3.89|2.69|74.51|3.73|2.68|
|Res56-Res20|vanilla KD|70.66|2.27|1.57|72.34|0.71|0.51|
|Res32x4-Res8x4|Scratch|72.50|-|-|**74.95**|-|-|
|Res32x4-Res8x4|DKD|76.32|5.27|3.82|78.15|4.27|3.2|
|Res32x4-Res8x4|DIST|75.79|4.54|3.29|77.69|3.66|2.74|
|Res32x4-Res8x4|vanilla KD|73.33|1.14|0.83|75.90|1.27|0.95|
---
Reply to Comment 1.1.2:
Title: Official Comment by Authors
Comment: Dear reviewer aER7:
We sincerely thank you for the review and comments.
We have provided corresponding responses and results, which we've tried our best to cover your concerns. Please let us know whether your concerns have been well addressed. We would like to further discuss with you if you still have any unclear parts of our work.
Best,
The Authors

---
Review 2:
Summary: The paper explores the effectiveness of knowledge distillation (KD) approaches for limited-capacity architectures based on small-scale datasets. The authors identify the "small data pitfall" in previous KD methods, which leads to underestimation of the power of the vanilla KD framework on large-scale datasets like ImageNet-1K.
Strengths: 1. Although this work does not propose a novel distillation method, the identification of the small data pitfall and the series of analyses based on it provide valuable insights for the vision community in the field of knowledge distillation.
2. The experiments are thorough and sufficient, and fairly persuasive.
3. The findings of this article are very important and may correct the research direction of KD.
Weaknesses: 1. This article mainly focuses on experimental comparison while neglecting theoretical analysis; it lacks an in-depth analysis of the underlying causes of the observations.
2. The authors have overlooked the phenomenon that the KD improvement on large-scale datasets (e.g., ImageNet-1K) is inherently small. Why is this? What is the impact of this on the "small data pitfall" proposed in this paper?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. For classification tasks, if that's still the case, what about other tasks?
2. Is there another possibility that this phenomenon is not solely due to the size of the data, but rather to the difficulty or performance bottleneck of the classification task?
3. Can you present the classification results on a specific dataset and compare DIST and vanilla KD in terms of whether certain key samples are classified correctly? What about using a subset of ImageNet-1K for training and distillation?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: # Response to Reviewer bACv part (1/2)
**Response to weakness 1:** Thanks for your valuable comments. Our observation suggests that, in the context of knowledge distillation tasks, smaller training sets exhibit a preference for methods featuring stronger priors. These methods effectively impart more informative knowledge [1,2,3] to the student, compensating for the limited data available. Conversely, larger training sets negate the need for such measures. The substantial volume of diverse data inherent to larger datasets naturally imparts ample information for the student model's learning. In such scenarios, a simpler knowledge distillation approach might be adequate, as the teacher model is adept at capturing the inherent complexities within the data, obviating the requirement for extensive priors. In addition, when comparing subsets and the complete ImageNet-1K dataset, the discrepancy between vanilla KD and other methods narrows as the training set scale increases. This observation leads us to believe that informative 'dark' knowledge can naturally manifest within larger datasets. Similarly, Stanton et al. [4] evaluate the fidelity of KD and point out that student fidelity will increase as the dataset grows: "Enlarging the distillation dataset beyond the teacher training data makes it easier for the student to identify the correct solution, but also makes an already difficult optimization problem harder."
We have conducted additional experiments in our response to ***weakness 1 of mQmQ***, and we paste the results in the table below. To verify our analysis, we show the average confidence on all samples and the entropy of predictions of pretrained Res50/Res152 teacher models on CIFAR-100/ImageNet-1K. We use the two following measurements:
$
\max({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[\max_i(p_i)],
$
and
$
\text{entropy}({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[-\textstyle \sum_i p_i\log p_i],
$
where $p_i$ is the predictive probability of sample $x$ belonging to class $i$.
The average confidence score provides an insight into how certain the teacher's predictions are across all samples. As shown in the table, the model trained on CIFAR-100 exhibits higher average confidence and a lower entropy value, suggesting that it is more confident in its predictions; however, this results in reduced mutual information, i.e., less knowledge transferred about the other classes.
||Top-1|$\max({p}(x))$|$\text{entropy}({p}(x))$|
|:-:|:-:|:-:|:-:|
|CIFAR-100|79.33|89.78|0.3362|
|ImageNet-1K|75.81|79.63|0.7593|
We will add these analyses to our next version.
[1] Huang, Tao, et al. "Knowledge distillation from a stronger teacher." Advances in Neural Information Processing Systems 35 (2022): 33716-33727.
[2] Zhao, Borui, et al. "Decoupled knowledge distillation." Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2022.
[3] Yang, Zhendong, et al. "Rethinking knowledge distillation via cross-entropy." arXiv preprint arXiv:2208.10139 (2022).
[4] Stanton, Samuel, et al. "Does knowledge distillation really work?." Advances in Neural Information Processing Systems 34 (2021): 6906-6919.
**Response to weakness 2:** If I understand correctly, one example of the phenomenon you mentioned is that most KD approaches bring only about a 1% improvement in top-1 accuracy on ImageNet-1K, but about a 3-5% improvement on CIFAR-100. Please correct me if I am mistaken. The reasons behind this phenomenon are two-fold. (1) CIFAR-100 is a small-scale dataset consisting of 50K/10K training/testing samples. It is hard for a model trained on such a small dataset to capture the real distribution well. Therefore, using the teacher's predictions as additional supervision improves the student's performance significantly. However, the ImageNet-1K dataset consists of 1.2M samples, where the diverse data itself is sufficient for the student model to learn the distribution well enough. In this case, the introduction of the teacher model only improves the student's performance marginally. (2) Existing approaches usually use small networks such as Res18 and WRN-16 on CIFAR-100, while they use large networks such as Res50 on ImageNet. The difficulty of optimizing models of different sizes (numbers of parameters) varies. Some studies [1] indicated that different optimization approaches also influence the results. The previously widely used 90-epoch protocol on ImageNet might now be considered insufficient. This also aligns with the outcomes from our 'stronger recipe' in Table 1. It seems that we need to use a larger-scale dataset to more comprehensively assess the performance of models and KD methods.
[1] Ross Wightman, Hugo Touvron, and Hervé Jégou. ResNet strikes back: An improved training procedure in timm. arXiv preprint arXiv:2110.00476, 2021.
**Response to question 1:** Thanks for your valuable comments. We conduct experiments on COCO (118K training images) object detection task to directly compare DKD, DIST, and vanilla KD. Adhering to DKD's implementation, we applied distillation to the classification branch, with ResNet101 serving as the teacher backbone and ResNet18 as the student backbone. The outcomes indicate that vanilla KD and DIST achieve comparable performance, while DKD surpasses the other two methods. If we consider our assumption of the 'small data pitfall,' we conjecture that this result could be attributed to the limited scale of the COCO dataset. In the future, we plan to validate this speculation on a larger dataset, Objects365, comprising 1720K training images.
|Iteration|Method|AP|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|
|180K|From Scratch|33.26|53.61|35.26|
|180K|DKD|35.07|56.32|37.45|
|180K|DIST|34.52|55.64|37.33|
|180K|KD|34.75|55.95|37.31|
|540K|DKD|37.44|58.75|40.37|
|540K|DIST|37.01|58.00|40.07|
|540K|KD|36.99|57.92|39.87|
### ***The part (2/2) can be found in "Author Rebuttal" at the top of this page.***
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for your rebuttal. There are still some concerns. The concept of the "small data pitfall" is not clear, e.g., the scale of datasets: what counts as small or large? Is CIFAR small vs. ImageNet large, or ImageNet small vs. SAM-11M/LAION-400M large?
The experiments on COCO express nothing valuable, just a comparison of some KD methods. Is COCO small or large? The KD methods improve the AP on COCO.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer bACv part (1/2)
Comment: Thank you for your valuable comments.
We acknowledge that we didn't sufficiently clarify the definition of "small" in our paper. The "small data pitfall" we observed in our main paper refers to scenarios where conclusions drawn from experiments on datasets that have not reached a certain threshold might not hold for larger datasets that have surpassed it, such as ImageNet and LAION. Our experiments have now narrowed down this threshold to 60% of the size of ImageNet, as shown in Table 14 of the supplementary material. This means that **different conclusions may emerge** once the **dataset size exceeds the threshold** (number of training samples > 0.7M). For example, DIST shows no significant improvements over the baseline vanilla KD. One of the key insights we aim to convey in our paper is that exclusively focusing on KD approaches with small-scale datasets may limit our comprehensive understanding of real-world scenarios. Admittedly, scaling up from ImageNet to even larger datasets, such as LAION, could introduce deviations in existing conclusions. However, our current computational resources do not allow us to perform experiments on such larger datasets. We will update the original paper to provide a clearer description of "small" and "large", emphasizing that once a sufficient quantity of training data is reached, vanilla KD's performance can be similar to that of carefully designed KD methods. To be specific, we will refine the original Lines 38-45 to: "We point out the small data pitfall in current knowledge distillation literature: once a sufficient quantity of training data is reached, different conclusions emerge. For example, when evaluated on CIFAR-100 (50K training images), KD methods meticulously designed on such datasets can easily surpass vanilla KD. However, when evaluated on datasets with larger scale, i.e., ImageNet-1K (1M training images), vanilla KD achieves on-par or even better results compared to other methods."
To further investigate the impact of data on knowledge distillation in the context of detection tasks, we are conducting additional ablation studies on the Pascal VOC and COCO datasets. Specifically, the Pascal VOC dataset consists of 20 object classes. Our training set is the combination of VOC 2007 trainval (5K) and VOC 2012 trainval (11K), with the validation set being VOC 2007 test (4.9K), following previous protocols. The training set of Pascal VOC (16K samples) is smaller than that of COCO (118K). Therefore, we also utilize 30% and 60% subsets of COCO in our experiments.
|Iteration|Method|Teacher-Student-Dataset|mAP|effective gain|direct gain|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|18K|From Scratch|N/A-Res18-VOC|42.88|-|-|72.79|43.82|
|18K|DKD|Res101-Res18-VOC|48.07|12.10|5.19|78.75|50.16|
|18K|DIST|Res101-Res18-VOC|47.64|11.10|4.76|78.28|50.01|
|18K|KD|Res101-Res18-VOC| 46.95 | 9.49 | 4.07 | 77.42 | 48.62 |
|Iteration|Method|Teacher-Student-Dataset|mAP|effective gain|direct gain|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|180K|From Scratch|N/A-Res18-30%COCO|27.32|-|-|46.05|28.19|
|180K|DKD|Res101-Res18-30%COCO|31.27|14.46|3.95|51.94|32.49|
|180K|DIST|Res101-Res18-30%COCO|31.67|15.92|4.35|52.28|33.25|
|180K|KD|Res101-Res18-30%COCO|30.32|10.98|3.00|50.51|31.61|
|Iteration|Method|Teacher-Student-Dataset|mAP|effective gain|direct gain|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|180K|From Scratch|N/A-Res18-60%COCO|30.97|-|-|50.78|32.80|
|180K|DKD|Res101-Res18-60%COCO|33.40|7.85|2.43|54.48|35.81|
|180K|DIST|Res101-Res18-60%COCO|33.18|7.14|2.21|54.01|35.46|
|180K|KD|Res101-Res18-60%COCO|32.98|6.49|2.01|53.43|34.92|
|Iteration|Method|Teacher-Student-Dataset|mAP|effective gain|direct gain|AP50|AP75|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|180K|From Scratch|N/A-Res18-COCO|33.26|-|-|53.61|35.26|
|180K|DKD|Res101-Res18-COCO|35.07|5.44|1.81|56.32|37.46|
|180K|DIST|Res101-Res18-COCO|34.52|3.79|1.26|55.64|37.33|
|180K|KD|Res101-Res18-COCO|34.75|4.48|1.49|55.95|37.31|
~~We will **update the table above once the experiments are completed**, and provide corresponding analyses for the object detection task.~~ While Objects365 is large enough and also available for object detection, we are concerned that we may lack sufficient computational resources to conduct experiments on it. Our primary focus remains on KD for classification, which is the more widely applied setting at present.
---
Reply to Comment 1.1.2:
Title: Official Comment by Authors
Comment: Dear reviewer bACv:
We sincerely thank you for the review and comments.
Please let us know whether your concerns have been well addressed. We would like to further discuss with you if you still have any unclear parts of our work.
Best,
The Authors
---
Reply to Comment 1.1.3:
Comment: Dear reviewer bACv,
We greatly appreciate your insightful comments and suggestions. As the discussion phase draws to a close, we are reaching out to inquire if you have any further feedback on our response. We are open and receptive to any additional queries or concerns you might have. Your feedback and ongoing engagement are of immense value to us.
In this paper, we delve into a critical question within the KD community: whether previous approaches remain effective in more intricate scenarios involving larger datasets. Our exploration of the conventional classification task indicates that conclusions drawn from datasets that haven't reached a specific threshold (e.g., CIFAR) might not apply to larger datasets that have exceeded it (e.g., ImageNet). We term this phenomenon the "small data pitfall". Through our experiments, we've now pinpointed this dataset size threshold to be around 60% of the ImageNet size, as evident in Table 14 of the supplementary material. This implies that once the dataset size exceeds this threshold (with over 0.7M training samples), different conclusions might emerge. For example, in the ImageNet classification task, DIST shows no significant improvements over the vanilla KD baseline. A key insight we aim to convey in our paper is that a sole focus on KD approaches using small-scale datasets could limit our comprehensive grasp of real-world scenarios. Furthermore, our additional experiments on detection tasks illustrate that for datasets with comparable difficulty levels, the performance gap between vanilla KD and DKD gradually diminishes as the number of training samples increases. This observation mirrors the "small data pitfall" noted in the classification task, where, as the dataset size grows, the performance gap between vanilla KD and other KD methods narrows. Additionally, we leverage vanilla KD to elevate the performance of various architectures like ResNet-50, ViT-Tiny, ViT-Small, and ConvNeXtV2 beyond their previously reported best results.
Best,
The Authors

---
Review 3:
Summary: This paper revisits vanilla knowledge distillation and presents an empirical analysis of the impact of model size, dataset scale, and training strategy on student performance in knowledge distillation. It identifies: 1) the gap between vanilla KD and other carefully designed KD methods gradually diminishes when adopting stronger data augmentation techniques and longer training iterations on large-scale datasets such as ImageNet-1K; 2) on small-scale datasets, vanilla KD consistently underperforms other KD approaches even when stronger training strategies and longer training iterations are used; and 3) logits-based methods outperform hint-based methods in terms of generalizability in more challenging cases. Based on these observations, this paper trains four different student models achieving SOTA performance on ImageNet-1K solely using the vanilla KD method.
Strengths: 1. Personally, I agree that vanilla KD can serve as a formidable contender when employed with large-scale datasets. Hints serve as valuable priors during training **only** when the available data is severely limited.
2. The authors diligently conducted extensive experiments to thoroughly examine the performance of vanilla KD across various factors, such as model capacity, dataset size, training epochs, learning rate, and regularization techniques, among others.
3. Vanilla KD is indeed a practical approach with widespread applications, making this paper highly beneficial for a diverse range of readers.
4. Notably, this paper presents state-of-the-art (SOTA) models trained using vanilla KD, which can significantly aid in further research.
Weaknesses: - The evaluation of vanilla KD in this work is solid and self-contained. When dealing with a small training set, it is customary to incorporate additional priors, regularization techniques, augmentations, or hints like previous KD methods. Naturally, with a larger dataset, we can leverage vanilla KD with less regularisation to attain comparable performance levels. However, it would be better if the authors can provide a systematical analysis for the choice of algorithms under Small Scale & Large Scale settings.
- The paper compares teacher models in two aspects: model size and dataset scale. Adopting a teacher model with more parameters is beneficial (ResNet152 vs. BEiTv2-L), but an extremely large teacher is harmful (BEiTv2-B vs. BEiTv2-L). As for the dataset scale, training the teacher on larger datasets is helpful. I wonder whether, if we continue to increase the dataset scale, it will show a trend similar to increasing the model size, i.e., the student performance decreases. If not, what leads to this difference?
- The conclusion of comparing KD with MIM is a bit weak, as it assumes a stronger teacher model is available. In practice, this may not always hold. For example, if we want to train a BEiTv2-L, perhaps BEiTv2-L itself is not a good teacher, and we cannot find one achieving higher accuracy. So I think KD is only preferable when training small- or medium-size models.
Minor typos:
1. "reflection on whether" in line 73
2. "results show that" in line 127
3. "experimental setup is as follows" in line 136
4. "other approaches" in line 200
5. "that trained" in line 243
6. "learn from" in line 304
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: **Response to weakness 1:** Thanks for your valuable suggestions. The trend is that a small training set prefers a knowledge distillation (KD) method with stronger regularization or priors, as it can effectively bring in more informative knowledge from the teacher model to enhance the performance of the smaller student model. On the other hand, with a large training set, the need for additional prior diminishes. The sheer abundance of diverse data in a larger dataset already provides sufficient information for the student model to learn from. In such cases, a simpler KD approach may suffice, as the teacher model can effectively capture the complexities present in the data without requiring extensive regularization or priors.
To verify the above analysis, we present the average confidence on all samples and the entropy of predictions of pretrained ResNet50/ResNet152 teacher models on CIFAR-100/ImageNet-1K. We use the two following measurements:
$
\max({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[\max_i(p_i)],
$
and
$
\text{entropy}({p}(x))=\mathbb{E}_{x\in\mathcal{X}}[-\textstyle \sum_i p_i\log p_i],
$
where $p_i$ is the predictive probability of sample $x$ belonging to class $i$.
The average confidence score provides insight into how certain the teacher's predictions are across all samples. As shown in the table, the model trained on CIFAR-100 exhibits higher average confidence and lower entropy, suggesting that it is more confident in its predictions; however, this also reduces the mutual information, i.e., the knowledge transferred about the other classes.
||Top-1|$\max({p}(x))$|$\text{entropy}({p}(x))$|
|:-:|:-:|:-:|:-:|
|CIFAR-100|79.33|89.78|0.3362|
|ImageNet-1K|75.81|79.63|0.7593|
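As a concrete reference, the two statistics above can be computed directly from a matrix of per-sample predictive probabilities. This is a minimal NumPy sketch, not the authors' code:

```python
import numpy as np

def confidence_and_entropy(probs):
    """probs: (N, C) matrix of per-sample predictive probabilities.
    Returns E_x[max_i p_i] and E_x[-sum_i p_i log p_i]."""
    conf = probs.max(axis=1).mean()
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return conf, ent

# toy example with two 3-class predictions
p = np.array([[0.9, 0.05, 0.05],
              [0.4, 0.3, 0.3]])
conf, ent = confidence_and_entropy(p)
```

A teacher that overfits a small dataset would show `conf` close to 1 and `ent` close to 0, matching the CIFAR-100 row above.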
> **Weakness 2:** increase the dataset scale
**Response to weakness 2:** To demonstrate this, we first analyze the impact of model size by comparing the average confidence over all samples and the prediction entropy of the ResNet152/BEiTv2-B/BEiTv2-L teacher models on ImageNet-1K.
||Top-1|$\max({p}(x))$|$\text{entropy}({p}(x))$|
|:-:|:-:|:-:|:-:|
|ResNet152|82.83|67.46|2.4418|
|BEiTv2-B|86.39|80.71|1.2868|
|BEiTv2-L|88.39|83.94|1.1172|
As depicted in the table, as the teacher model size increases, its predictions become less informative (higher confidence, lower entropy). Our hypothesis is that smaller teacher models may struggle to fit the dataset adequately, producing noisy soft labels that are less beneficial for the student, while larger teacher models tend to overfit the dataset, hindering the distillation process because their soft labels more closely resemble the one-hot ground truth and transfer less knowledge about other classes. Consequently, the medium-size model, BEiTv2-B, exhibits the best performance in our experiments.
Based on the analysis above, teacher models tend to overfit smaller datasets, leading to a lack of valuable information for distillation. Recent KD approaches have outperformed vanilla KD by introducing additional information on small-scale datasets. However, as the dataset scale increases, the overfitting issue diminishes, making it challenging to optimize the objectives of complex KD methods due to an abundance of information for the student to learn. On the contrary, the simplicity of vanilla KD, which solely relies on soft labels, becomes sufficient for distillation.
Consequently, we expect an improvement in the student's performance owing to the ample information provided by the increasing dataset scale. And we believe that as the dataset scale further increases, vanilla KD will achieve comparable performance to other KD baselines. The reduction in overfitting with larger datasets mitigates the need for more intricate KD approaches, reaffirming the effectiveness of vanilla KD for knowledge transfer in such scenarios.
**Response to weakness 3:** Indeed, there are instances where a more powerful teacher is not accessible. To assess the effectiveness of vanilla KD under such circumstances, we conduct experiments on ImageNet-1K, employing the identical architecture for both the teacher and student models. In this regard, we utilize the BEiTv2-B model as our chosen architecture and proceed to compare the performance of vanilla KD against the MIM pretraining method introduced by BEiTv2 [1], evaluating accuracy and training time consumption as key metrics. The GPU time measurements are conducted using a single NVIDIA Tesla V100 GPU.
|Method|Epoch|Top-1|GPU time (hours)|
|:-:|:-:|:-:|:-:|
|MIM|300 (pretrain) + 100 (finetune)|85.0|853|
|MIM|1600 (pretrain) + 100 (finetune)|85.5|3979|
|vanilla KD|300|84.6|575|
|vanilla KD|600|85.4|1151|
|vanilla KD|1600|85.7|3069|
Based on the above results, it is evident that the performance of vanilla KD is on par with that of MIM. For example, the student model trained through vanilla KD for 600 epochs attains a top-1 accuracy of 85.4%, only 0.1% lower than the equivalent model trained with MIM for 1600 epochs. However, the vanilla KD approach requires notably less training time. Furthermore, extending the training to 1600 epochs yields a top-1 accuracy of 85.7%, surpassing the MIM result by 0.2% under the same epoch setting. This underscores that vanilla KD remains competitive with MIM even when a more powerful teacher model is unavailable.
Specifically, we choose the BEiTv2-B with 85.5% top-1 accuracy (the 2nd line) as the teacher model in KD. In our reported GPU time, we omitted the training duration of the teacher model. Given that it is typically feasible to procure a pre-trained model with performance comparable to the student model, the necessity to train a teacher model from the ground up is minimized in most scenarios.
[1] Beit v2: Masked image modeling with vector-quantized visual tokenizers
**Response to weakness 4:** Thank you for thoroughly reviewing our submission and pointing out the mistakes. We will take great care to address these issues in our next version. Your feedback is invaluable in improving the quality of our work.
---
Rebuttal Comment 1.1:
Title: Replying to Rebuttal
Comment: Thank you for the feedback. I have thoroughly read the rebuttal and comments from the other reviewers. I don't have any additional questions and will maintain the initial score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer mQmQ
Comment: Dear Reviewer mQmQ,
We sincerely appreciate you taking the time to review our paper and our response. We will carefully follow the reviewer's advice and incorporate the addressed points in the updated version.
Best,
Authors of Paper 1852 | Summary: This paper investigates the effectiveness of vanilla Knowledge Distillation (KD) in large-scale datasets. The authors identify a "small data pitfall" which underestimates the power of vanilla KD on large-scale datasets and demonstrate that stronger data augmentation techniques and larger datasets can decrease the gap between vanilla KD and other KD variants.
Strengths: - **Originality**: This is an empirical study. It has no technical novelty but sheds some insights into the knowledge distillation method.
- **Quality**: Extensive experiments validate their conclusions.
- **Clarity**: This paper is well-structured and easy to follow.
- **Significance**: Knowledge Distillation is an important and interesting topic and exploring the impacts of the size of datasets for KD is beneficial for the community.
- New state-of-the-art results for ResNet-50, ConvNeXt-T architectures.
Weaknesses: - Lightweight models, such as MobileNetv3, are deemed critical model architectures due to their efficiency and compactness, which make them ideal for deployment on devices with limited computational resources. However, it appears that there is a lack of dedicated experiments specifically for these lightweight models.
- Two related papers are missing[1][2]
[1] Yang, Zhendong, et al. "Rethinking knowledge distillation via cross-entropy." arXiv preprint arXiv:2208.10139 (2022).
[2] Huang, Tianjin, et al. "Are Large Kernels Better Teachers than Transformers for ConvNets?." ICML. 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Temperature is an important hyper-parameter for the KD method. What value is used for the temperature in all experiments? How does the temperature affect vanilla KD performance from small to large datasets?
- Does the conclusion still hold in lightweight models such as MobileNetv3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1:** Lightweight models, such as MobileNetv3, are deemed critical model architectures due to their efficiency and compactness, which make them ideal for deployment on devices with limited computational resources. However, it appears that there is a lack of dedicated experiments specifically for these lightweight models.
>
> **Question 2:** Does the conclusion still hold in lightweight models such as MobileNetv3?
**Response to weakness 1 & question 2:** Thank you for your valuable suggestion. We have opted to employ MobileNetv3 as a representative lightweight model for our experimental investigations, as shown in the following table. The outcomes reveal that on the CIFAR-100 dataset, vanilla KD falls short of the leading baseline, DKD, by 0.51% in terms of top-1 accuracy. Nevertheless, on the ImageNet-1K dataset, vanilla KD emerges as the top performer. These results reaffirm the validity of the small data pitfall observed in our main paper.
| Dataset | Teacher | Student | Method | Top-1 |
| :---------: | :--------: | :----------------: | :----: | :-------: |
| CIFAR-100 | - | MobileNet v3 Small | - | 64.76 |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | DKD | **69.10** |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | DIST | 67.72 |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | KD | 68.59 |
| ImageNet-1K | - | MobileNet v3 Small | - | 67.40 |
| ImageNet-1K | BeiTv2-B | MobileNet v3 Small | DKD | 67.36 |
| ImageNet-1K | BeiTv2-B | MobileNet v3 Small | DIST | 68.02 |
| ImageNet-1K | BeiTv2-B | MobileNet v3 Small | KD | **68.05** |
> **Weakness 2:** Two related papers are missing
**Response to weakness 2:** Thank you for bringing the overlooked related papers to our attention. [1] decomposes the original KD loss into two components and proposes NKD loss to discard the challenging-to-optimize portion. [2] points out that large-kernel ConvNets are more effective teachers than Transformers for small-kernel ConvNets. We will incorporate references to these two papers in the related work section of our next version.
> **Question 1:** Temperature is an important hyper-parameter for the KD method. what value is used for temperature in all experiments? How does the temperature affect the vanilla KD performance from small to large datasets?
**Response to question 1:** In our experiments, we follow the implementation of DKD to employ the temperature setting of T=4 for all experiments on CIFAR-100 and T=1 for those on ImageNet-1K.
To assess the influence of temperature in knowledge distillation (KD), we conducted experiments on both the CIFAR-100 and ImageNet-1K datasets. As indicated in the table below, the effect of temperature on student performance is non-monotonic. However, a discernible trend emerges: larger temperature values (T > 1) tend to be more effective for small-scale datasets, while a smaller temperature value (T=1) is preferable for larger-scale datasets. We posit that smaller-scale datasets often lead to teacher models overfitting, resulting in predictions with low entropy. Consequently, softening these predictions with a higher temperature can yield more valuable information for the student. Conversely, in the context of larger-scale datasets, teachers inherently provide less certain predictions, and elevating uncertainty further by using a higher temperature setting could hinder student learning.
| Dataset | Teacher | Student | Method | Top-1 |
| :---------: | :--------: | :----------------: | :------: | :---: |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | KD (T=1) | 67.07 |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | KD (T=2) | 69.35 |
| CIFAR-100 | ResNet32x4 | MobileNet v3 Small | KD (T=4) | 68.59 |
| ImageNet-1K | BeiTv2-B | ResNet50 | KD (T=1) | 80.96 |
| ImageNet-1K | BeiTv2-B | ResNet50 | KD (T=2) | 80.39 |
| ImageNet-1K | BeiTv2-B | ResNet50 | KD (T=4) | 80.53 |
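For reference, a minimal NumPy sketch of the temperature-scaled vanilla KD objective discussed above, following the standard Hinton-style formulation (this is an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the class dimension."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=1.0):
    """KL(teacher || student) between temperature-softened
    distributions, scaled by T^2 as in the standard formulation."""
    p_t = softmax(teacher_logits, T)
    log_ratio = np.log(p_t + 1e-12) - np.log(softmax(student_logits, T) + 1e-12)
    return (p_t * log_ratio).sum(axis=1).mean() * T * T
```

A higher `T` flattens both distributions before matching, which is what injects extra information from an overconfident teacher on small datasets, and why `T=1` can suffice when the teacher's predictions are already soft.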
We have also added experiments concerning the temperature settings for DKD and DIST on ImageNet-1K. The specific results are provided below. We will incorporate these findings into Table 16 in our supplementary material.
| Dataset | Teacher | Student | Method | Top-1 |
| :---------: | :------: | :------: | :--------: | :---: |
| ImageNet-1K | BeiTv2-B | ResNet50 | DIST (T=1) | 80.76 |
| ImageNet-1K | BeiTv2-B | ResNet50 | DIST (T=2) | 80.46 |
| ImageNet-1K | BeiTv2-B | ResNet50 | DIST (T=4) | 80.31 |
| ImageNet-1K | BeiTv2-B | ResNet50 | DKD (T=1) | 80.94 |
| ImageNet-1K | BeiTv2-B | ResNet50 | DKD (T=2) | 80.76 |
| ImageNet-1K | BeiTv2-B | ResNet50 | DKD (T=4) | 80.70 |
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. Most of my concerns have been addressed. Therefore, I increase my score to 6.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer oNYG
Comment: Dear Reviewer oNYG,
We sincerely appreciate you taking the time to review our responses and contributing to improving this paper. We will carefully follow the reviewer's advice and incorporate the addressed points in the updated version.
Best,
Authors of Paper 1852 | Rebuttal 1:
Rebuttal: # Response to all reviewers
We thank all four reviewers for their constructive feedback, which greatly improved the quality of our paper.
### **Response to Reviewer bACv, part (2/2)**
**Response to question 2:** Thanks for the insightful question. First, we analyzed the impact of task difficulty. In Figure 1 of our main paper, we compared the performance of vanilla KD and two baselines on the complete ImageNet-1K dataset as well as its two subsets. These subsets were obtained through stratified random sampling from the same 1000 classes, thus ensuring that the difficulty level (the number of categories) of tasks remains consistent across the datasets. The results illustrated that as the dataset scale increased, the performance gap between vanilla KD and the baselines diminished. This suggests that the observed phenomenon is driven by the dataset size rather than inherent task difficulty.
Next, we explored the performance bottleneck of the classification task. On the smaller-scale CIFAR-100 dataset, the baselines achieved superior results compared to vanilla KD. This implies that vanilla KD's performance has not yet reached a bottleneck (student can obtain higher accuracy). In the context of the larger-scale ImageNet-1K dataset, our experiments, which utilized an extended training schedule as detailed in Section 3.6, demonstrated that the student's performance did not reach a bottleneck even with training epochs extended to 4800. Since most of our experiments utilized 300/600/1200 epochs, we can confidently discount the performance bottleneck of the classification task as a factor influencing our conclusion.
**Response to question 3:** Thanks for the valuable comments. For the presentation, please refer to our "Author Rebuttal" section where we attached the PDF to show the cases.
As for using a subset of imagenet-1k, we show the results of using a subset of ImageNet-1K, i.e., 30% and 60%, to distill the student model. The results on validation set can be found in Table 14 in our supplementary material. When comparing the outcomes between subsets and the complete ImageNet-1K dataset, the discrepancy between vanilla KD and other methods narrows as the training set scale increases.
We choose the students trained in Figure 1 of our main paper (ResNet50 trained on 30%/60%/full ImageNet-1K) to compare the differences between DIST and vanilla KD. As the table below shows, students trained by DIST and KD achieve similar overall performance. However, these two students give different predictions on more than 3000 samples. The result indicates that students trained via vanilla KD and DIST can be very different at the single-sample level.
The marker "$\checkmark$" indicates the student trained by the corresponding method gives correct predictions, and marker "$\times$" indicates it gives wrong predictions. The adopted dataset is the validation set of ImageNet-1K, which consists of 50K samples. For example, the first row, DIST ($\checkmark$), indicates that when trained with a subset (30%) of ImageNet, there are 39872 samples correctly classified by DIST. The fourth row, DIST ($\checkmark$) & KD ($\times$), indicates that when trained with a subset (30%) of ImageNet, there are 1904 samples correctly classified by DIST but misclassified by KD.
||subset (30%)|subset (60%)|ImageNet (100%)|
|:-:|:-:|:-:|:-:|
|DIST ($\checkmark$)|39872|40665|40912|
|KD ($\checkmark$)|39781|40668|41137|
|DIST ($\checkmark$) & KD ($\checkmark$)|37968|38982|39270|
|DIST ($\checkmark$) & KD ($\times$)|1904|1683|1642|
|DIST ($\times$) & KD ($\checkmark$)|1813|1686|1867|
|DIST ($\times$) & KD ($\times$)|8315|7649|7221|
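The per-sample breakdown in the table above can be computed with a simple mask comparison over the two students' hard predictions. A minimal sketch (the helper name is hypothetical, not the authors' code):

```python
import numpy as np

def agreement_breakdown(pred_a, pred_b, labels):
    """Count validation samples by the correctness pattern of two
    students' hard predictions (here, method A vs. method B)."""
    a_ok = pred_a == labels
    b_ok = pred_b == labels
    return {
        "A correct": int(a_ok.sum()),
        "B correct": int(b_ok.sum()),
        "both correct": int((a_ok & b_ok).sum()),
        "A only": int((a_ok & ~b_ok).sum()),
        "B only": int((~a_ok & b_ok).sum()),
        "both wrong": int((~a_ok & ~b_ok).sum()),
    }

# toy example: four samples, two disagreeing models
out = agreement_breakdown(np.array([0, 1, 0, 0]),
                          np.array([0, 0, 2, 0]),
                          np.array([0, 1, 2, 3]))
```

The four exclusive cells ("both correct", "A only", "B only", "both wrong") partition the validation set, matching the last four rows of the table.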
We have included visualizations of specific cases in the attached PDF. We randomly selected four categories and displayed samples that are correctly classified by both DIST and KD, correctly classified by DIST but misclassified by KD, correctly classified by KD but misclassified by DIST, or misclassified by both. Upon analyzing the visualizations, we observed that vanilla KD tends to learn the distribution of the teacher: in cases like the fourth row of the "Binoculars" category, the class mispredicted by vanilla KD is the same as the teacher's prediction. We speculate that this behavior is attributable to the KL divergence loss, which encourages the student to directly match the teacher's distribution. In contrast, DIST learns more implicit similarity relationships, which could result in a more explicit difference from the teacher's predictions.
### **Response to Reviewer aER7 weakness 4, part (2/2)**
Regarding the temperature parameter, we conducted supplementary experiments on both the CIFAR-100 and ImageNet-1K datasets to investigate its impact on the performance of knowledge distillation (KD). From the results, the impact of temperature on student performance is not monotonic, but there is a trend that a larger temperature value is better for the small-scale dataset, while a smaller temperature value is better for the large-scale dataset. We speculate that teacher models tend to overfit small-scale datasets; hence, predictions softened by a higher temperature can provide more useful information for the student. However, on large-scale datasets, teachers already give uncertain predictions, and further increasing the uncertainty by using a higher temperature setting would impede the learning of the student.
|Dataset|Teacher|Student|Method|Top-1|
|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|KD(T=1)|67.07|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|KD(T=2)|69.35|
|CIFAR-100|ResNet32x4|MobileNet v3 Small|KD(T=4)|68.59|
|IN-1K|BeiTv2-B|ResNet50|KD(T=1)|80.96|
|IN-1K|BeiTv2-B|ResNet50|KD(T=2)|80.39|
|IN-1K|BeiTv2-B|ResNet50|KD(T=4)|80.53|
Pdf: /pdf/a46b4f9335b0aa50e36ac47f5f41de704c57d347.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Competitive Algorithm for Agnostic Active Learning | Accept (poster) | Summary: This paper studies agnostic active learning with a finite hypothesis space. The goal is to achieve a competitive query complexity with an optimal algorithm.
Strengths: Interesting and important problem.
The approximation hardness based on Set-Cover is a nice addition.
--- rebuttal comment ---
After the authors clarified my concerns and added more results, I raised my score from 3 to 7.
Weaknesses: The authors seem to be unaware of "Minimax analysis of active learning" by Steve Hanneke and Liu Yang [JMLR 2015]. Hanneke and Yang introduce the star number (a combinatorial dimension), which characterizes various active learning settings (realizable, different noise settings, including the one here) leading to near-tight lower and upper bounds. This seems to make the analyses here a special case of the achieved results. E.g., on a quick look Theorem 8 seems to have a similar competitive ratio as the proposed algorithms: roughly $d\log (1/\varepsilon)$ (with $d = VC$) instead of $\log |H| \geq d$.
Similarly, the discussed bounds $\mathcal{O}(m\log H)$ have been devised by Hegedűs, Tibor. "Generalized teaching dimensions and the query complexity of learning." [COLT 1995] long before Dasgupta and Nowak.
Claiming polynomial runtime is misleading, as one typically would mean polynomial in the instance space $X$ (or the sample, like in standard PAC) and not the hypothesis space $\mathcal{H}$, as is here the case.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The authors write $|H|$ multiple times implying that the hypothesis space is finite but also use packings / coverings of $H$. Do the results generalize to hypothesis spaces with infinite size?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Related work.** We appreciate the additional references, which we will include and discuss in the paper. But you are wrong about the relationship between our results and those of Hanneke-Yang '15: they actually give a much weaker form of "optimality". See our general response for further discussion, and for an example where their algorithm takes $H$ queries while ours takes $\log H$.
**Runtime.** For the runtime, also see our general response. We can make the theorem statement more explicit if you feel it is misleading.
**On $|H|$ vs a net, and whether $H$ can be infinite in size.** The *core* of our algorithm expects a finite hypothesis space and gives an $O(\log |H|)$ approximation. But the algorithm does work on general hypothesis spaces of infinite size: the very first step of the algorithm is to find a $2\eta$-packing of $H$, and thereafter only this net is considered. So as long as you can compute a finite packing, the algorithm applies.
the algorithm applies. | Summary: This paper applied a multiplicative weight update / generalized binary search style algorithm to solve agnostic active learning. It proposes a novel "capping" approach to the weight (over hypotheses) to ensure the potential function always grows by some amount, and it proves this amount is lower bounded by $\Omega(1/m^*)$ where $m^*$ is the theoretical optimal instance-dependent label complexity. In the end, it shows that the proposed algorithm is competitive in the sense that its label complexity is at most, roughly speaking, $\tilde{O}(m^* \log |H|)$. And it also shows that removing the extra "log(|H|)" factor is NP-hard.
--
I've read the author's response. It addressed my major concern about related work so I increased my score to 7. I would encourage the authors to provide a comprehensive comparison with related works, including directly comparing bounds in different settings. I'm still not very convinced about the argument around computational efficiency, but that should not block this paper from being accepted.
Strengths: - The proposed algorithm is a novel and nontrivial modification of the classical multiplicative weight update algorithm. The analysis from a competitive perspective also looks new to me in the setting of agnostic active learning.
- I briefly checked the proof and they look sound to me.
- Understanding the theoretical lower and upper bounds of instance dependent label complexity bounds for agnostic active learning is an interesting problem, and this paper makes a solid contribution by providing an algorithm whose label complexity is close to the optimal with an additional log(|H|) factor.
Weaknesses: 1. The additional factor of log(|H|) makes the label complexity of the proposed algorithm look not very strong to me: it can be worse than existing algorithms in many scenarios. The argument that getting rid of this factor is NP-hard does not look very relevant to me, since the main contribution of this paper is more on the information-theoretic side (the proposed algorithm is already by no means computationally efficient, in the sense that it is polynomial in the number of hypotheses (or its epsilon cover at best) and the size of the example space X), and it is known that even learning linear classifiers without strong assumptions on noise is NP-hard.
2. It would be clearer if there were more discussion about how good the label complexity is, comparing it with known upper and lower bounds in different settings. Especially, there are a few papers (e.g. Hanneke, Steve, and Liu Yang. "Minimax analysis of active learning." J. Mach. Learn. Res. 16.1 (2015): 3487-3602.) that give a minimax analysis of active learning. I would like to see how your results compare with theirs. It would also be interesting if you could mention how the proposed algorithm works under noise assumptions (bounded, Tsybakov, etc.).
3. This paper focuses on the case where $\epsilon \gtrapprox \eta$, but it looks to me the case where $\epsilon$ is smaller can also be interesting (though of course the label complexity won't be exponentially better w.r.t. 1/\epsilon). It would be interesting if the theoretical results could cover that as well.
4. This paper is overall clear, but some wordings are a bit confusing to me unless I check the detailed proofs. E.g., the proof sketch in line 149 - 154 ("algorithm", "majority label", "inputs", "independent").
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I would like the authors to comment on the weaknesses 1~3 I mentioned above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. On the relevance of time complexity.** We respectfully disagree that our contribution is mainly on the information-theoretic side. We view it as a structural observation of how one can adapt a Bayes-inspired/multiplicative-weights algorithm to get the frequentist agnostic learning guarantee.
From an information-theoretic perspective, one could do an exponential brute force like "consider all possible algorithms, and all possible settings of labels, and find the minimax optimal algorithm". This would work, optimally, without saying anything about the problem. The polynomial restriction on the runtime forces us to learn something about the problem.
There are interesting cases (e.g. noisy binary search) where $H$ is small enough that the algorithm is feasible. And beyond that, we believe it should be feasible to adapt our algorithm to some structured infinite $H$; certainly multiplicative weights has been widely adapted.
See our general response for more on why we think a polynomial dependence in this general problem is interesting.
**2. Related work.** See the general response for an example comparing our result to Hanneke-Yang '15.
For weaker noise models, the optimal complexity could be much smaller, and our algorithm won't always match it. For example, suppose there is $\eta/10$ probability mass on extremely informative points, with the rest of the points being very uninformative. In the agnostic setting, you can't trust the informative points because the adversary probably messes them all up, and so our algorithm won't sample them very often. In the bounded noise setting, you should just sample those points many times.
**3. Another regime.** Yes, $\varepsilon < \eta \ll 1$ could be interesting, where given existing bounds we suppose the goal would be to save a $\Theta(\eta)$ factor relative to the passive ERM.
It's a pretty different regime, though. Consider the noisy binary search input (i.e., the hypothesis class is 1d threshold functions): for $\varepsilon > \eta$, you do some robust binary search to home in on the correct $O(\varepsilon)$ region in $O(\log \frac{1}{\eta})$ queries. For $\varepsilon < \eta$, you start with this phase to get within $O(\eta)$ of the truth, then do passive sampling over the $O(\eta)$-size region of ambiguity. With $N$ samples you will typically get error $\frac{\eta}{\sqrt{N}}$ on each hypothesis, so you need about $O(\frac{\eta^2}{\varepsilon^2} + \log \frac{1}{\eta})$ queries.
So for noisy binary search there are two phases: one covered by this paper, and one that's no-longer-adaptive to refine $\varepsilon$. It seems plausible that general problems can be solved with such an approach.
---
Rebuttal Comment 1.1:
Title: Thanks for the response!
Comment: I'm satisfied with the response about "Relation to Hanneke-Yang '15", but I would like to see your comment on the point about the extended teaching dimension raised by reviewer orr4.
I have a few follow up questions:
- Re: "On the relevance of time complexity": It is still unclear to me why "an exponential time optimal algorithm would be trivial by brute force". In particular, we want an algorithm with low label complexity w.r.t. an **unknown** distribution, I don't see how brute force could deal with that. Is there any reference to this aspect?
- Could you comment on my point that "the additional factor of log(|H|) makes the label complexity of the proposed algorithm look not very strong"?
---
Reply to Comment 1.1.1:
Comment: See our response to orr4 about the distributional extended teaching dimension result of Hanneke '07.
Q1: We want a low label complexity w.r.t. an unknown distribution over $Y \mid X$, relative to the optimal label complexity over all distributions over $Y \mid X$. So for every algorithm, we can determine its performance on every distribution over $Y \mid X$, then take the algorithm with minimax complexity. The running time is horrendous, but it's computable to arbitrarily good approximation.
Q2: Our NP hardness result shows we could not hope for a polynomial time algorithm that avoids the $\log |H|$ factor.
It's the best one can do even in the realizable, exact setting.
Moreover:
1. Competing algorithms (e.g. by Hanneke) cannot even achieve that factor.
2. There are interesting cases, like noisy binary search, where it isn't too big.
3. There are cases (also like noisy binary search) where the algorithm outperforms the generic bound. | Summary: The paper studies agnostic active learning by proposing a competitive algorithm that achieves at most a $\log H$ multiplicative factor on top of the optimal query lower bound of $m^*$. While similar result was known for the realizable setting, this paper makes a step toward understanding the agnostic setting. As a complementary result, it also shows that it is NP-hard to improve the $O(\log H)$ overhead both in realizable and agnostic settings.
Strengths: The paper proposed a strategy that actively learns the target function in the agnostic setting with competitive sample complexity. Given that similar strategies only handle the realizable setting, it makes a significant contribution to the active learning problems in general settings. The proofs seem sound. The paper is well written.
Weaknesses: The paper is clear in the description of the algorithm and the analysis of how it queries. However, the computational cost of the algorithm is vague. In Theorem 1.1, it claims that the algorithm runs in polynomial time, but it seems not so at least in the dimension parameter d.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can the author comment on the exact running time of the algorithm on each parameter, for example, $d, m, |H|, |X|, \frac{1}{\epsilon}$, (or any other important parameters)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Runtime.** We should have given the runtime more precisely, and we appreciate the comment. The main cost is
from checking if any $\epsilon$-ball of hypotheses has 80\%
probability, after each label seen. Naively this takes about
$O(|H|^2(|\mathcal{X}| + m))$ time, because it takes $|H|^2|\mathcal{X}|$ time to compute the
distances between all hypotheses and then $|H|^2 m$ time to try
all the balls in all iterations.
However, one should be able to optimize this by making that step
randomized and approximate. This would make the analysis more
annoying, but shouldn't change the result other than the constant
factors. One could just sample $O(\log \frac{1}{\delta'})$ random
hypotheses, and $O(1/\epsilon)$ random $x$, and see if at least 90\%
of the hypothesis pairs are empirically within $O(\epsilon)$ of each
other.
With this optimization, the algorithm should have an overall runtime
around $\tilde{O}(|H| m + m / \epsilon)$: the first term is
for $m$ rounds of multiplicative weights over $|H|$, and the second term is
the optimized identification of heavy balls. Additionally, if one wants a net to improve the approximation from $O(\log |H|)$ to $O(\log |N(H, \eta)|)$, add however much time it takes to compute this net at the beginning (at most $|H|^2/\eta$ with a greedy algorithm, though typically one would write the net down explicitly for a given hypothesis class).
But the analysis of this variant is a little hairy, and in our opinion
optimizing the algorithm for generic hypothesis classes is not as interesting as getting much more significant speedups
for specific hypothesis classes.
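To make the naive check concrete, here is a rough numpy sketch of the "heavy $\epsilon$-ball" test described above (illustrative code of ours, not the paper's implementation; the 0/1 label-matrix representation of $H$ and all names below are assumptions):

```python
import numpy as np

def heavy_ball_exists(H, px, weights, eps, mass=0.8):
    """Naive check: does some eps-ball of hypotheses carry >= `mass` probability?

    H: (n, |X|) 0/1 matrix of hypothesis labels (assumed representation).
    px: probability of each point x under D_X.
    weights: current distribution over hypotheses (e.g. multiplicative weights).
    """
    H = np.asarray(H, dtype=float)
    px = np.asarray(px, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Pairwise disagreement distance: dist[i, j] = Pr_x[h_i(x) != h_j(x)],
    # computed exactly in O(|H|^2 |X|) time as in the naive bound above.
    disagree = (H[:, None, :] != H[None, :, :]).astype(float)
    dist = disagree @ px
    # Probability mass of the eps-ball centered at each hypothesis.
    ball_mass = (dist <= eps).astype(float) @ weights
    return bool((ball_mass >= mass).any())
```

Running this after every label observed gives the $O(|H|^2(|\mathcal{X}| + m))$ total cost; the randomized variant would replace the exact pairwise computation with sampled hypotheses and points.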
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer LeWc
Comment: Thank you very much for your response. It is very clear to me now. I encourage the authors to add these comments in their revision. My review and score remain unchanged. | Summary: The authors provide an algorithm for learning in the presence of agnostic noise. The algorithm finds a hypothesis that gets error $O(\eta)$ where $\eta$ is the noise parameter. The algorithm requires querying specific points, so it uses a stronger oracle than the standard active learning. The algorithm is possibly inefficient for many classes as it is polynomial in the number of hypotheses (or to the size of the cover). Their algorithm is optimal up to log factors, and they provide a lower bound showing that it is NP-Hard to improve this.
Strengths: This work provides a query-efficient algorithm with tight results (upper and lower bound) for the agnostic setting.
Weaknesses: The setting in their analysis is not active learning but learning with queries, which is a stronger oracle. So the title of the paper/abstract is misleading. In my opinion, active learning is a completely different model from the one the authors use. Also, for example, the papers cited in lines 41-46 are for active learning and not for the model the authors analyze. I think the authors should be clearer about this. Regarding the NP-hardness result: this result makes sense when $|H|$ is polynomial in $\epsilon$ and the dimension. But $|H|$ is almost always exponentially large, e.g., for classes of $d$-dimensional functions.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Definition of Active Learning
-------
You appear to be concerned that we assume we know $D_X$ and can query
$(Y | X = x)$, while some previous work defines active learning as:
given all the $x_i$ in a dataset, pick a subset of them to see the
corresponding $y_i$. However, in the limit of infinitely many
unlabeled data points, these are equivalent: the set of all $x_i$
gives us the distribution $D_X$, and we can sample any $x$ in our
dataset, which means any $x$ in the support. So if we don't care about
the number of unlabeled examples, as the prior work also doesn't, then
we may as well assume our model.
Bounds on the unlabeled data complexity are a very interesting
question which we intend to pursue in followup work. It gets somewhat
complicated, because the OPT also depends on the amount of unlabeled
data, so we believe focusing on labeled complexity makes sense.
About the NP-Hardness result
--------
As we discuss in the general response, at our level of generality one
cannot avoid a runtime polynomial in $|H|$. And without the
$\log |H|$ slack in number of samples, our NP-hardness result shows one
cannot hope to avoid time *exponential* in $|H|$.
Faster algorithms under structural assumptions on $H$ are a good
direction for future work, but our results here are the best one could
hope for. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments. We would like to emphasize that we give the first algorithm for active agnostic learning that is competitive with the optimal algorithm for a given input (unlabeled data and hypothesis class). There are a couple general points we would like to clarify, particularly about a prior minimax work by Hanneke and Yang which does *not* get this.
Relation to Hanneke-Yang '15
-------------------------
A couple reviewers pointed to the minimax analysis of Hanneke-Yang
'15, which we had missed. Thanks! The distinction is that the
Hanneke-Yang upper bound is close to the optimal complexity of the
*worst case $D_X$*, while ours is relative to the optimal complexity
of our actual $D_X$. As a result, our algorithm can be *much* better,
as the following example shows:
Define a hypothesis class of $N$ hypotheses $h_1, \dotsc, h_N$ and $(N + \lg N)$ data points $x_1, \dotsc, x_{N + \lg N}$, so that for each hypothesis $h_j$:
* The labels of the first $N$ points express $j$ in unary
* The labels of the last $(\lg N)$ points express $j$ in binary
So, for example, when $N = 16$, $h_6$ will have labels:
00000100000000000110
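As a sanity check, the unary-plus-binary label construction can be reproduced in a few lines (a hypothetical sketch; the function name is ours, and $N$ is assumed to be a power of two):

```python
def labels(j, N):
    # First N points: unary encoding of j (a single 1 at position j).
    unary = "".join("1" if i == j else "0" for i in range(1, N + 1))
    # Last lg N points: binary encoding of j (N assumed a power of two).
    lg_n = N.bit_length() - 1
    binary = format(j, f"0{lg_n}b")
    return unary + binary

print(labels(6, 16))  # -> 00000100000000000110
```

An optimal learner that knows this structure queries only the last `lg_n` positions and reads off $j$ directly.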
Now consider the realizable ($\eta = 0$) setting, and suppose that
the $x_i$ are fairly uniform (e.g., min probability
$\frac{1}{10 N}$) and $\epsilon = \frac{1}{20 N}$. Then you need to
identify $h_j$ exactly by querying the labels.
The obvious, optimal strategy is: query the last $\lg N$ bits and read
off the answer. Our algorithm will basically do this, getting
$\Theta(\lg N)$ complexity. RobustCAL, the algorithm analyzed in HY15,
will not: the first $O(N/\lg N)$ points it sees will be in the unary
region, and it will label almost every one of them, for $\Theta(N/\lg N)$
complexity.
So in this example, our algorithm is optimal at $O(\lg N)$, and our
general theorem's guarantee is $O(\lg^2 N)$ which is pretty close.
HY15's algorithm actually takes $\Theta(N/\lg N)$ queries, and their
general guarantee is $O(N \lg^2 N)$.
You might be confused: how can HY15 claim minimax optimality for an
algorithm with such poor behavior? Their minimax lower bound on this
example is actually $N$. This is because if $D_X$ happened to be
uniform on the first $N$ points (i.e., only the unary ones), you
really would need $N$ queries to recover the hypothesis.
But in active learning, you *know* your unlabeled data $x_i$. So the
algorithm can see that it has access to the great points at the end
and should choose to query them. Our algorithm does; RobustCAL and $A^2$
do not.
On Polynomial Running times
-----
A couple reviewers point out that usually $|H|$ is very large, so a
polynomial dependence on $|H|$ isn't great. However:
* The problem setting we consider is fully general hypothesis classes.
Therefore the *input* has size linear in $|H|$, so one cannot hope to
avoid this dependence.
* There are interesting cases (e.g. binary search) where the
hypothesis class isn't huge, so the algorithm is feasible.
* The prior work in this setting (e.g. $A^2$) also depends polynomially
on $|H|$.
* Getting an *exponential time* optimal algorithm would be trivial by
brute force, so requiring polynomial time is what forces us to
understand something interesting about the problem.
We certainly agree that getting faster algorithms for more restrictive
hypothesis classes is an important line of future work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy | Accept (poster) | Summary: The paper presents results on the nonexistence of efficient learning algorithms for depth-$3$ ReLU networks with smoothed parameters and Gaussian input. It also proves that in general, there is no efficient learning algorithm for depth-$2$ ReLU networks with smoothed parameters and smoothed inputs (the smoothing is applied to a specially constructed input, which is Bernoulli). The neural networks they analyze, unlike those in previous works, have a ReLU unit at the single output.
Strengths: The paper is theoretical in nature and so it is good that it is rigorous in its analysis and seeks to provide enough detail to understand its proofs. The paper presents its results in a logical order.
Weaknesses: I will first introduce my main concerns, and then other important observations to address. (I must say that I haven't followed Section 4 in enough detail, though I do have some observations based on what I read.)
1) I am trying to understand the relevance of the learning problem studied by the paper. When talking about learning, from what I can recall, I have basically seen two approaches in the literature:
Case a) When training a neural network, e.g. using SGD, one of the things we are concerned about is the testing performance of the network after being trained for some number of iterations or number of epochs.
Case b) I have seen other works that are concerned on generalization issues in terms of how close is the empirical loss (with respect to samples of the input data) from the population loss (defined by the expected value with respect to the input data distribution), which such closeness depending on different parameters such as the number of samples being used, the width of the network, the depth of the network, etc.
Both of these cases tackle the issue of how well a neural network learns and generalizes. However, the paper seems to tackle the problem of actually learning a given neural network (see Definitions 2.1 and 2.2), where the true parameter of the neural network is chosen adversarially (and sometimes smoothed afterwards). In the paper, essentially, the learning is done by some algorithm that outputs some hypothesis that seeks to approximate the adversarial neural network. However, how is this relevant in the context of training neural networks and their testing/generalization performance? The type of works in Case a) and Case b), though theoretical, are trying to understand some practical issues in neural network learning that is relevant to the general ML community, but I don’t see such immediate connection in this paper. In which situations do we care about “learning a neural network” instead of “training a neural network so it learns”? This needs more motivation and clarification.
Also, the hypothesis $h$ that is supposed to learn the neural network in Definitions 2.1 and 2.2 seems to belong to any arbitrary class of functions. Is this right? If so, why isn’t this intractable to compute? In my mind, it would make sense for $h$ to belong to the same class of neural networks as defined by $N_\theta$ --- maybe it is the case in Section 4, but I just don’t know; all I know is that in Section 2 and 3, this is not clear to me.
Curiously, thinking about Case a) above, the paragraph that starts at line 102 mentions stochastic gradient descent (SGD), which makes me think that previous works have addressed the question of learning in a more practically motivated way, since SGD is used in practice. Can the authors also comment on this? How does it relate to your paper?
2) The paper addresses computational complexity, and from reading the paper it seems that such computation is linked to the algorithm $\mathcal{L}$. The computation by $\mathcal{L}$ seems to be related to how many times the algorithm needs to access the oracle in order to compute the final hypothesis $h$. This number of times is, I believe, the sample complexity of the algorithm. So, are we talking about computational complexity or sample complexity in the paper? Is the paper concerned with sample complexity at all? This must be made clear in the paper, probably from the beginning. As far as I can see, the concept of sample complexity was only referenced in line 251 of Section 4 without much more detail about its relevance. Please address this, since sample complexity is very important in ML and learning in general.
3) The first part of Section 4 explains how the proof used by the authors is related to [15] and what is the additional challenge that the authors present, i.e., the handling of smoothing. However, besides the first paragraph of Section 4, there is no more indication of whether some of the constructions being used throughout the proof correspond to ones in [15] or not. It would be nice to know which parts of the proof were taken from previous papers that also study learning of neural networks, and which parts weren’t.
4) This comment applies to networks of depth $k$ where $k>2$. When considering the neural network in Definitions 2.1 and 2.2, as well as in the statement of Theorem 3.1, there is no mention of how the neurons are distributed in the feedforward network. Do all hidden layers have the same width? Is the number of neurons per layer irrelevant in terms of learning? Since the paper is concerned with presenting negative results, I guess they consider specific constructions of the networks in order to prove their results. Is that correct? If so, could this be mentioned? In general, could there be some mention of the specific topology of the networks in terms of the widths per layer?
5) Naturally, the paper recognizes that it might be possible to obtain results of efficient learning for different assumptions and topologies, such as not including an activation function at the output. I believe the authors should investigate this case for depth-$3$ networks because this will strengthen their paper's contribution. For example, if efficient learning is demonstrated for depth-$3$ networks without an activation function at the output, this will be very interesting because it further elucidates the role of the extra activation in efficient learning! I know that theoretical works are hard to do because proofs can be very non-trivial; however, since there is at least one previous work showing the efficiency of learning depth-$2$ networks [6], how difficult would it be to adapt their proof to the case of depth-$3$ networks?
Other observations:
1) The paper mentions that the neural network architecture they focus on have a ReLU at the single output layer. Previous works seem to focus on linear output, i.e. regression. Is having a ReLU at the output layer closer to real world applications? What is the motivation for it?
2) Definitions 2.1 and 2.2 say that $(x,y)$ is drawn i.i.d.; however, it seems that only $x$ must be drawn i.i.d., since $y$ is a deterministic function of $x$. Please check.
3) I think a proof outline is needed in Section 4, probably right before subsection 4.1 to better understand how the rest of the proof is built. Though the subsection titles in section 4 indicate what is being done, how they are pieced together in order to better understand the overall proof is not clear to me.
Minor:
1) In the abstract, line 9 mentions the word “hard”. It would be better to instead use expressions in terms of efficiency of the computation, etc.
2) Line 35, it seems that it should say “bounded from below”.
3) In line 86 it says that considering standard Gaussian distributions is “perhaps the most natural choice of an input distribution”, why is that? In real world applications, practitioners don’t care about this type of input. It may be good to insert a better motivation for it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please, see above, in the section "Weaknesses" of this review.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their efforts.
Regarding the weaknesses:
1. The paper studies PAC learning, as defined in Definitions 2.1, 2.2 and 2.3. Thus, we study whether there exists an efficient algorithm that returns w.h.p. a hypothesis with a small population loss. In this common notion of learning, the learning algorithm can be any efficient algorithm, and it is not restricted to standard algorithms such as SGD. Also, the returned hypothesis is not restricted. Therefore, our hardness results imply that (under Assumption 2.1) there is no efficient algorithm that returns w.h.p. a hypothesis with a small population loss. As a special case, it implies that SGD cannot learn neural networks efficiently (even under the smoothed analysis framework, Gaussian inputs, depth-3, etc.). In other words, the fact that our definitions do not have any restrictions on the algorithm and the returned hypothesis only makes our hardness results stronger. In this sense, our results are stronger than the hardness results that are specific to gradient methods, which are mentioned in the paragraph that starts at line 102.
2. Our results consider computational complexity. That is, we show (under Assumption 2.1) that there is no poly-time algorithm that learns the considered networks under the smoothed analysis framework. In the proof, we assume that such a poly-time learning algorithm exists, and therefore this algorithm must also have a polynomial sample complexity, and we use it to obtain a poly-time algorithm that breaks the PRG given a polynomial number of examples. Thus, we refer to the sample complexity of the algorithm in the proof for technical reasons, but the theorems proved in the paper consider computational complexity. Our results do not rule out the existence of learning algorithms with a polynomial sample complexity but super-polynomial time.
3. Indeed, in the first paragraph of Section 4 we compare our proof to [15]. While we build on the technique from [15], none of the parts in the proof is taken directly from [15], except for Lemma A.1 in the appendix.
4. The construction considers a specific architecture, but by a simple argument (which we use in the proof of Corollary 3.1 in Appendix B) we can see that the hardness results hold even if all layers are of the same width (e.g., when all layers are of width d, where d is the input dimension). We will add a remark about this issue in the camera-ready version.
5. Indeed, it is not clear whether learning depth-3 networks without activation in the output in the smoothed-analysis framework is hard. Despite efforts by the community, the existing algorithms for learning non-degenerate depth-2 networks were not extended to depth 3, and it is unknown whether such an extension is possible. Intuitively, we feel that obtaining such an extension will be challenging. We agree that it is an important direction for future research.
Regarding the other observations:
1. In real-world applications, practitioners can choose whether to include activation in the output or not. We note that our hardness results can be easily shown to apply also in the case of depth-4 networks without activation in the output neuron (by adding a linear layer to the depth-3 network from our construction).
2. Sure, we can just draw $x$ i.i.d. and then $y$ is a deterministic function of $x$.
3. We will try to clarify this.
---
Rebuttal Comment 1.1:
Title: Reponse
Comment: Thanks to the reviewers for replying to my review. I still have some concerns.
1) The authors mentioned "In real-world applications, practitioners can choose whether to include activation in the output or not" as a response to the motivation of having a ReLU in the output layer. This is not convincing. I still believe the authors have not properly motivated the use of ReLU at the output layer, and I believe it is not well-motivated from a practical perspective either. This lack of motivation would make the problem studied in the paper less relevant. To the best of my knowledge, people use smooth activations in the output layer (including linear output). Here are some possible reasons why. Firstly, the output layer should output values in the range specified by the problem of interest -- i.e., whether it is classification or regression, the output must be in the range of the predictive values of interest. ReLU only outputs positive values, disregarding whatever negative value that comes into play. Secondly, ReLU is not a smooth function, and it may be possible that even small changes in the input are able to create abrupt changes in the output of the network, something not ideal, for example, in regression problems. The authors must find use cases for ReLU in the output layer, preferably in the literature, otherwise, I really don't see much relevance in studying such architectures. What is the motivation for the authors to focus on ReLU? It seems an arbitrary choice so far. There is no indication in the paper thus far.
Now, I note that the authors have said "We note that our hardness results can be easily shown to apply also in the case of depth-4 networks without activation in the output neuron (by adding a linear layer to the depth-3 network from our construction)." Can the authors provide more explanation about this? How can we know this is actually doable? The authors wrote "by adding a linear layer to the depth-3 network from our construction", but, isn't the depth-3 network already supposed to have a ReLU layer at the end? What was the point of studying ReLU at the output layer then?
Also, networks with linear output layers have already been studied -- the authors mentioned the work [1], for example. Though it seems that [1] studies depth-3 networks, I wonder how much it takes to extend the work to depth-4. In any case, studying deeper networks with linear output layers that go beyond depth-4 seems to be an incremental contribution at best.
2) I now focus on response number 4. From what I gather from the authors' response, it seems that a specific network architecture is used in the proofs, and that an extension is possible to layers of equal width. Can more information be provided on this architecture? It is nowhere to be found in the main paper (unless I missed it). I find it a bit misleading that nothing is said about the specificity of the architecture in the main paper (other than having ReLU as the activation function, including in the output layer, and the depth), which could make the reader believe that the results hold for any depth-3 neural network (or even depth-4 for linear outputs, as specified by the authors). I want to see how restrictive the architecture used in the proofs is in terms of the widths of its layers, the input dimension, etc. The authors mentioned that their methods can be extended to architectures of equal width, namely the width of the input dimension. How is this possible? Moreover, in practical applications, neural networks tend to be wide, and so often they have widths larger than the input dimension.
I will update my score as our discussion develops.
---
Reply to Comment 1.1.1:
Comment: Regarding the activation in the output neuron and the choice of ReLU in general:
- We used the ReLU activation because it is the most commonly used activation, and because all previous works on hardness of improper learning used it.
- Some previous works on hardness of learning also included activations in the output neuron (see [14,15]).
- The result easily extends to depth-4 networks without activation in the output. We will add a remark about it.
Why does the result apply to depth-4 without activation in the output?
- This follows from the same proof. The only difference is that in the reduction, the target network is the depth-4 network obtained from the depth-3 network in the current reduction, by connecting the current output neuron to the new output neuron with an edge of weight 1. The proof will still hold for the same arguments as in the current proof.
- The point of having ReLU in the output is that it allows us to show hardness for depth-3 networks rather than depth-4.
- I suppose that you mean [11] (rather than [1]). They indeed studied depth-3 networks without an activation in the output. However, they did not consider the smoothed-analysis framework, which is the focus of our paper. Their result does not imply hardness in the smoothed-analysis framework. Moreover, despite some efforts that we made in this direction, we do not see a way to extend their technique to this setting.
- In general, showing a hardness result for depth $i$ is stronger than showing hardness for depth $j>i$.
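To illustrate why the depth-4 extension is immediate, here is a toy numpy sketch (our own illustration, not code from the paper) showing that attaching a weight-1, activation-free output layer leaves the computed function unchanged:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def depth3_with_output_relu(x, W1, b1, W2, b2, w3, b3):
    # Depth-3 ReLU network with a ReLU on the single output neuron.
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return relu(w3 @ h2 + b3)

def depth4_linear_output(x, W1, b1, W2, b2, w3, b3):
    # The same network with the old output neuron connected to a new,
    # activation-free output neuron by an edge of weight 1 (and zero bias).
    return 1.0 * depth3_with_output_relu(x, W1, b1, W2, b2, w3, b3) + 0.0
```

Since the two networks agree on every input, hardness of learning the depth-3 class with output activation transfers to the depth-4 class without one.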
Regarding the architecture:
- As we discuss in the preliminaries, we consider networks where all layers are fully connected and have bias terms. The theorem shows that (under our assumption) learning such networks (in the smoothed-analysis framework) cannot be done in time polynomial in $d,p,B,1/\epsilon,1/\tau$. In order to show this, it suffices to show that learning is hard already under some specific widths of the layers. So for the correctness of our theorem, we do not really care about the widths.
- The construction in our proof does not have equal widths, because as we discussed in the previous point we did not need it in Theorem 3.1. However, in the proof of Corollary 3.1 we needed equal widths (the width of the input dimension), and hence in this proof we explain how our construction from the proof of Theorem 3.1 can be easily extended to handle equal widths. The details about the widths of the construction in the proof of Theorem 3.1 are also summarized in the proof of Corollary 3.1 (although these technical details are not very interesting).
- A similar extension to the one described in the proof of Corollary 3.1 can be done for widths larger than the input dimension. | Summary: This paper studies the computational complexity of learning 3-layer neural networks under the standard Gaussian distribution. Specifically, under a standard cryptographic assumption on the existence of local pseudorandom generators, the authors show that there is no poly-time algorithm that can learn 3-layer networks under the smoothed analysis framework. As a corollary, they show learning 3-layer networks is hard even with assuming a lower bound on the smallest singular values of the weight matrices.
Strengths: - It is an interesting question to understand the worst-case hardness of learning neural networks, and under what assumptions learning is possible. Prior work has shown 2-layer networks can be learned under the smoothed analysis framework. The current paper, however, shows that learning 3-layer networks in the smoothed setting is hard, which is quite an interesting result.
- At a high level the proof appears to be sound, however I admit that I do not work on complexity theory and am unable to verify the proofs fully.
Weaknesses: The novelty of the paper seems limited, particularly in light of the related work [11]. [11] shows that learning three-layer networks is hard, under the Learning With Rounding (LWR) assumption. [11] considers three-layer networks with no activation in the last layer, while the current paper considers networks with an activation in the last layer, but this last-layer activation seems like a very minor difference. Can the authors please comment further on the novelty of the current paper in comparison to this prior work?
[11] S. Chen, A. Gollakota, A. R. Klivans, and R. Meka. Hardness of noise-free learning for two-hidden-layer neural networks. arXiv preprint arXiv:2202.05258, 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I find the main text of the paper, in particular section 4, to be quite difficult to understand. The proof of the main theorem is written in a lengthy paragraph format, and as such I think the paper could be improved by organizing the exposition in section 4. Some concrete suggestions are the following:
- The authors could add a high-level proof sketch at the beginning of section 4 before diving into the details of the argument. This could be particularly helpful to readers who are not familiar with the argument of [15], and complexity theory more generally (and may make the paper more suitable for a NeurIPS audience).
- Key lemmas (such as Lemmas A.4 - A.6) could be stated explicitly in the main text, to make more clear what the various steps in the proof are.
- Both the examples oracle and algorithm $\mathcal{A}$ could be written in algorithm format, to make more clear what exactly they are computing.
- I am a bit confused about what the point of the examples oracle is in Section 4.3, and how this is being used by the algorithm. Could you please clarify this further?
- As mentioned in the weaknesses section, the paper would benefit from additional comparison to [11], in particular the novelty and a comparison between the LWR and PRG cryptographic assumptions.
[15] A. Daniely and G. Vardi. From local pseudorandom generators to hardness of learning. In Conference on Learning Theory, pages 1358–1394. PMLR, 2021.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
Regarding the comparison to [11]:
In [11], the authors showed hardness of learning depth-3 networks under the Gaussian input distribution, but the neural network in their construction is degenerate. Hence, their result does not imply hardness of learning non-degenerate networks, and does not imply hardness under the smoothed-analysis framework. Moreover, despite some efforts that we made in this direction, we do not see a way to extend their technique to these settings. The novelty in the current paper is that we establish hardness results for non-degenerate networks and hardness under the smoothed-analysis framework. We view this contribution as significant and surprising, as for depth-2 networks the non-degeneracy assumption makes the problem solvable in poly time. As the reviewer mentioned, there are additional distinctions between our results and [11], which we view as more minor, such as different cryptographic assumptions and the existence of activation in the output neuron. As we discuss in the paper, our cryptographic assumption is considered established, but we do not think that there is a formal way to make a comparison to their LWR assumption (it is essentially a matter of taste).
Regarding the readability of the proof in Section 4: We will try to apply the reviewer's suggestions to make the proof a bit easier to follow.
Regarding the examples oracle in Section 4.3: As we define in Definition 2.2, a learning algorithm has access to an examples oracle, and returns w.h.p. a hypothesis that performs well on fresh examples from the examples oracle. In our proof, we assume that we have an efficient learning algorithm for smoothed depth-3 neural networks, and we run it with the examples oracle defined in Section 4.3 in order to break the PRG as follows. If the hypothesis returned by the learning algorithm performs well on fresh examples from the examples oracle then the data is pseudorandom, and otherwise it is random. If the reviewer has further questions on this issue we will be happy to elaborate more.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you to the authors for answering my questions. The novelty of the submission in comparison to the prior work is now more clear to me, and as such I am inclined to increase my score. | Summary: The submission studies the classical problem of constructing ReLU-activated neural networks, specifically from the learning point of view. Previous work has established the existence of an efficient algorithm for learning depth-2 ReLU networks under the Gaussian distribution assuming a non-degenerate weight matrix. The authors provide a lower bound that rules out an extension of this result to depth-3 ReLU networks even under the "smoothed analysis" framework, where one assumes the presence of random noise to avoid the use of "degenerate" instances in the hardness reduction. As a second result, the authors provide a lower bound that rules out the extension of the aforementioned algorithm on the Gaussian distribution to general smoothed distributions.
Strengths: The construction of ReLU-activated neural networks is a central topic at NeurIPS, and other lower-bound results on the problem have also appeared in the proceedings. The article is well-written and the results are non-trivial.
Weaknesses: My main issue with the submission is that the newly obtained lower bounds seem to only shift the boundaries of intractability by a very small step compared to what was known previously. In particular, previous results of Daniely and Vardi [15] already ruled out (under the same complexity-theoretic assumptions) an efficient learning algorithm for the setting studied in this submission, with the distinction being that their lower bound does not operate in the "smoothed analysis" framework. The same work of Daniely and Vardi [15] also established a lower bound that matches the second result in this submission, but once again without the use of the "smoothed analysis" framework. In this sense, it seems to me that the main contribution of the submission is showing that previously established lower bounds do not require the use of purely degenerate cases. In combination with the fact that the lower-bound proof itself builds on the approach introduced by Daniely and Vardi [15], I cannot help but feel that the present submission is somewhat incremental in nature.
Also, a clearer and more to-the-point comparison of the submission's results to those obtained in the previous work(s) of Daniely and Vardi would have been appreciated... but that is only a minor (and perhaps subjective) point.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A; however, the authors are of course welcome to respond to the individual points raised in the review.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their efforts.
Our hardness results show that learning neural networks under the Gaussian distribution is hard already for non-degenerate instances. This is in contrast to all previously known hardness results for learning neural networks. Since for depth-2 networks the non-degeneracy assumption makes the problem solvable in polynomial time, we think that our hardness result for depth-3 is surprising. Also, from the technical point of view, the required construction is highly non-trivial, and differs significantly from [15]. Thus, we believe that the contribution of the current paper is significant both at the conceptual and technical levels.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response; I have read it and will keep it in mind during the discussion. | Summary: This paper addresses the complexity of learning neural networks, a very fundamental problem in learning theory. Previously, some complexity and efficient solvability results were known for networks of depth 2. The paper shows that when the depth is increased to 3, the learning problem becomes computationally intractable (under a hypothesis on the existence of local pseudorandom generators) even when one uses the assumptions (e.g., Gaussian input distribution, non-degeneracy of the weight matrix) that lead to efficient algorithms for the depth-2 case. The paper also shows that the learning problem for depth-2 networks is hard under a smoothed analysis framework (where both the input distribution and the network parameters are perturbed).
Strengths: (a) Understanding the boundary between hard and easy cases of learning neural networks is an important problem in learning theory. This is a very nice contribution to that area.
(b) The paper provides a nice summary of previous work, which makes it easier to understand the context for the contributions.
(c) The technical results are presented very well.
Weaknesses: This reviewer can't see any weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (1) You prove hardness results for depth-3 neural nets. Can the proofs be extended (in a simple way) to show that the hardness results also hold for depth-k neural nets for any k >= 3?
(2) Assumption 2.1 (on the existence of local PRGs with certain properties) is used in your proofs. You cite references that provide evidence for the assumption. It will be useful to the readers if you can include a short discussion of this evidence in the paper itself. Are there any consequences if Assumption 2.1 does not hold?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review.
Regarding the questions:
1. Yes, the result can be easily extended to any $k \geq 3$ by appending to our construction additional layers. We will add a remark about it in the camera-ready version.
2. Indeed, we referred the reader to papers where this assumption is discussed. One notable evidence for our assumption was shown by Applebaum [1]. He showed that our assumption follows from a variant of Goldreich’s one-wayness assumption (Goldreich, 2000). In addition, there is a concrete candidate for a local PRG, which is based on the XOR-MAJ predicate, and was shown to be secure against all known attacks. A more detailed discussion of our assumption can be found in [15, Section 2.2]. We will add some details on this subject in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: I have gone through the rebuttal. My questions/concerns have been addressed in a satisfactory fashion. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null
No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling | Accept (poster) | Summary: This paper studies the logarithmic pooling method for prediction using expert advice. At each step, $m$ experts report distributions $p^1, p^2, \ldots, p^m$ over a size-$n$ domain. The goal is to make predictions with a vanishing regret in terms of the log loss.
The usual logarithmic pooling returns an aggregated distribution $p^*$ such that each $p^*(j)$ is proportional to $\prod_{i=1}^m[p^i(j)]^{1/m}$, the geometric mean of the predicted probabilities. This paper considers a generalization in which the exponent for each expert is replaced by $w_1, w_2, \ldots, w_m$ that sum up to 1 (instead of $1/m$ identically), and the goal is to learn the right set of weights to achieve a low regret.
Formally, each round of the prediction proceeds as follows: First, algorithm selects weights. Then, adversary chooses the experts' forecasts and the outcome subject to a *calibration* condition. The two steps are in sequential order, so that the forecaster is prevented from outputting an arbitrary distribution.
The main result of the paper (Theorem 3.2) states that the regret can be upper bounded by roughly $m^{3/2}n\sqrt{T}\log T$ in the $T \gg m, n$ regime. The algorithm uses online mirror descent with the Tsallis entropy regularizer.
Strengths: The problem setting is quite natural and well-motivated. The authors did a good job in explaining why certain assumptions are necessary by providing illustrative examples. While the solution is based on online mirror descent, the analysis of the algorithm requires several new ideas that seem to be non-trivial and of independent interest.
Weaknesses: - The scope of the setting is limited to logarithmic pooling and the log loss.
- The tightness of the bound is unclear; an $\Omega(\sqrt{T})$ lower bound is proved for a restricted class of algorithms, while the polynomial dependence on $m$ and $n$ might be sub-optimal.
- Several aspects of the setting don't seem sufficiently convincing; see questions below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The setting assumes that the adversary chooses the forecasts and outcome after seeing the weights selected by the algorithm. If both parties act simultaneously, would the setting become significantly easier? (In particular, the obstacle of Example 1.1 still seems to be there?)
- What if the weights don't need to sum up to 1? Would that change the expressivity of the aggregation? (Multiplying $w_1, \ldots, w_m$ by the same factor seems to give a different distribution.)
- Lines 63--70 argue that we need to keep the algorithm from seeing the experts' advice, so that the algorithm cannot "output an essentially arbitrary function of the forecasts". What would be the practical motivation/justification of this choice?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The main limitations are the assumptions made regarding the setting (e.g., the adversarial choices of outcomes and forecasts must satisfy the calibration condition). These have been clearly pointed out in the abstract and introduction and, the assumptions are formally stated in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these comments. In response to the questions:
- Question 1: Our interpretation of this question is: what would happen if we changed our setting so that the aggregation algorithm chooses weights at the same time as the adversary chooses the probability distribution, such that neither has knowledge of the other’s choice? In that case, we would seemingly end up with a game in which optimal play may involve randomized strategies. You are correct that without the calibration property, the obstacle of Example 1.1 remains. (The adversary chooses which expert reports the extreme report at random, and then the bad outcome of the example happens with probability 50%.) It would be interesting to analyze this game in the presence of the calibration property, though somewhat less standard. The setting would be less adversarial than ours, and we are able to exhibit a no-regret algorithm even in our (more adversarial) setting.
- Question 2: Yes -- by allowing weights that do not sum to 1, new aggregate forecasts may be obtained. Logarithmic pooling with weights adding to a constant other than 1 is attested in the literature; see e.g. [Satopaa et al., 2013] (“Combining multiple probability predictions using a simple logit model”). We conjecture that our techniques can be extended to find weights for logarithmic pooling in this broader setting, though doing so appears to be nontrivial.
- Question 3: See our response to (3.) in the global rebuttal. Briefly, we argue that our goal is to study logarithmic pooling, and by allowing the aggregator to pick weights after seeing forecasts (i.e. allowing the weights to depend on the forecasts), the aggregation method becomes arbitrary, rather than being logarithmic pooling. Our setting is analogous to the well-studied setting of choosing weights for the optimal linear pool (see e.g. [Cesa-Bianchi and Lugosi, 2006, Section 3.3]), but for logarithmic pooling instead of linear pooling.
---
Rebuttal Comment 1.1:
Title: Thank you for your reply!
Comment: I would like to thank the authors for the clarification. I don't have further questions, and my overall evaluation remains positive. | Summary: Summary of the paper
====================
* The prediction setting explored in this work is harder than the usual "prediction with experts advice" (henceforth abbreviated PwE) setting (e.g. as in the Cesa-Bianchi and Lugosi book), since the learner is required to reveal the expert weights (w_t) *before* observing the expert advice (adversarially chosen by the environment) and the outcome (in the standard PwE model, the mixture prediction is to be done before observing the outcome but after observing the advice). This is crucial for the lower-bound in Example 1.1.
* The form of weighted logarithmic pooling considered in this paper to aggregate the advice of the experts (Lines 47-48) is also natural and considered in the previous literature (as minimizing weighted average KL divergence to the expert advice, having external Bayesianality, maintaining log-concavity of densities etc).
* The algorithm proposed is again efficient and well-studied in the literature; online mirror descent with the Tsallis entropy regularizer (as in Zimmert and Seldin, 2018).
* To beat the lower-bound (Example 1.1) and achieve sublinear regret, the condition imposed on the adversarial joint-sampling of expert advice and outcome is that all the experts must be calibrated (for each expert, the conditional probability of the outcome conditioned on that expert's advice must match the advice distribution).
* With the calibration assumption and an appropriate step-size schedule, OMD+Tsallis is shown to achieve $\tilde{O}(\sqrt{T} \log T)$ regret (for $T \gg m$).
Strengths: * The adversarial prediction setting explored in this work is harder than the usual PwE setting. The form of weighted logarithmic pooling considered is well-studied in the literature (especially in the context of combining Bayesian priors) and this work seems to give the first non-trivial online adversarial prediction regret bound for the logarithmic pooling with respect to the the log loss (to the best of my knowledge).
* The algorithm proposed (OMD with Tsallis regularizer) is efficient and well-studied in the literature.
* The calibration condition proposed for the experts is extensively studied in the Bayesian inference literature (in the sense that Bayesian posteriors are naturally calibrated).
* Quite a bit of detail has been provided for the proofs in the main paper, and these are reasonably sound and well-written.
Weaknesses: * Not much motivation is given for the harder adversarial setting (requiring the aggregator to predict the weights before seeing the expert forecasts).
* While the calibration condition is well-motivated in the existing literature w.r.t Bayesian posteriors, the authors have not provided sufficient motivation for why it is practical when treating experts as learners (in a prediction-with-expert-advice setting). It may be particularly problematic when the experts are learners/prediction models which are decoupled from the generation of the adversarial outcomes. With the log loss, no-regret online predictors are of course possible even with adversarial outcomes (using the Laplace or Krichevsky-Trofimov estimators for instance, as in the Cesa-Bianchi and Lugosi book Chapter 9), but it may not be very useful to aggregate such no-regret experts with logarithmic pooling.
* There are no experimental results in the paper, nor are any practical applications discussed.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Lines 66-70 show how forcing the aggregator to predict before seeing the expert forecasts disallows some form of manipulation by the aggregator (mimicking a linear mixture). Could you please elaborate on why such a situation should be disallowed (when using the aggregator for either theoretical or practical purposes)?
* Could you please provide some discussion on why calibration is still practical in the PwE setting when the experts are learners which are decoupled from the adversarial generation of outcomes (i.e. the expert $i$ has fix the forecast distribution $p^i_t$ at time $t$ before seeing the adversarial outcome $j_t$)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these comments. A few responses:
- Regarding the first weakness (and first question), see our response to “Question 3” in the global rebuttal. Briefly, we argue that our goal is to study logarithmic pooling, and by allowing the aggregator to pick weights after seeing forecasts (i.e. allowing the weights to depend on the forecasts), the aggregation method becomes arbitrary, rather than being logarithmic pooling. Our setting is analogous to the well-studied setting of choosing weights for the optimal linear pool (see e.g. [Cesa-Bianchi and Lugosi, 2006, Section 3.3]), but for logarithmic pooling instead of linear pooling.
- We further justify our modeling choices (the calibration property and logarithmic pooling) in the global rebuttal. Briefly, we argue that the calibration property often holds in practice (e.g. in modern deep neural networks) and so studying prediction with advice from calibrated experts from a theoretical standpoint is well-motivated. We then argue that logarithmic pooling makes particular sense when experts are calibrated, because logarithmic pooling “takes more seriously” confident forecasts from experts, as compared with linear pooling.
- In response to the second question: in addition to theoretical reasons to expect experts to be calibrated (see e.g. [Foster and Vohra, 1997] or [Blasiok et al., 2023] (“When Does Optimizing a Proper Loss Yield Calibration?”)), in practice we see calibration in a variety of settings. For example, modern deep neural networks are usually calibrated (see the discussion in the global rebuttal). This raises the question: how can we adapt the standard theoretical model of online prediction with expert advice -- which by default gives full power to the adversary -- to model calibrated experts? Our model attempts to do this by still giving the adversary a lot of power while constraining it just enough to guarantee that experts are calibrated.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your detailed rebuttal. I would keep my score (tending towards acceptance) for now. But I will definitely consider your clarifications (which do address some of my concerns) if required during further discussions. | Summary: This paper investigates the logarithmic pooling method for minimizing log loss and introduces the OMD algorithm utilizing Tsallis entropy as a regularizer to update weights for the logarithmic pooling method. By assuming calibrated forecasts, the paper demonstrates that the proposed algorithm ensures a sub-linear regret. Additionally, this paper establishes a corresponding lower bound, indicating that their regret bound is optimal in terms of its dependence on the number of time steps $T$.
Strengths: 1. Learning weights for logarithmic pooling studied in this paper is novel as far as I know, which appears to be a crucial direction, particularly due to its natural alignment with log loss.
2. The results and methodologies presented in this paper seem to be novel and the techniques used in this paper may be of independent interest to the community.
3. This paper is well-written. The algorithm is well-motivated, and the theorems presented are rigorous and sound.
Weaknesses: 1. It is important to include a discussion of the relevant literature on online portfolio selection in the related work, as it addresses a similar problem to this paper, namely learning parameters using log loss.
2. It is worth noting that the lower bound only applies to OMD with a constant step size, which may limit its effectiveness. Additionally, the optimality regarding the number of experts (m) and outcomes (n) should be discussed in this paper.
3. Providing a more intuitive explanation for the choice of Tsallis entropy as the regularizer would enhance the paper's clarity and value.
4. This paper can be improved by presenting a detailed example to illustrate the significance of the logarithmic pooling method with log loss.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How about the efficiency of the algorithm? Can the algorithm be executed efficiently?
2. Why is there no need to project our decision, as stated in footnote 5?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these comments. We address the comments in the “Weaknesses” and “Questions” sections.
Weaknesses:
1. Thanks for the suggestion -- we agree, and will take care to do so in the final version, should our paper be accepted. We already cite [Cover, 1991] (“Universal Portfolios”), but that’s just a start. Are there particular papers that you would recommend citing?
2. We agree that this is a limitation of our work, though we would like to highlight Footnote 3 from our supplement: “While Algorithm 1 does not always have a constant step size, it does so with high probability. The examples that prove [our lower bound] cause $\Omega(\sqrt{T})$ regret in the *typical* case, rather than causing unusually large regret in an atypical case. This makes our comparison of Algorithm 1 to this class [of constant step size OMD algorithms] fair.”
3. Thanks -- we agree. Here’s a brief intuition for our choice of Tsallis entropy. The natural first online learning algorithm to try for our problem is OMD with the negative entropy regularizer; this is equivalent to Hedge, which is a variant of multiplicative weights. Unfortunately, when attempting to use the negative entropy regularizer, we could not rule out a failure mode in which two experts alternate in making big mistakes (assigning very low probabilities to the eventual outcome) in a way that causes nearly all weight to be assigned to the mistaken expert. For example, Expert 1 makes a big mistake that causes 99% of the weight to be on Expert 2 in the next round. Then, Expert 2 makes a big mistake that causes 99.99999% of the weight to be on Expert 1 in the next round. Then Expert 1 makes a big mistake that causes 99.99999999999999999% of the weight to be on Expert 2 in the next round, and so on. This failure mode is more plausible with the negative entropy regularizer than with Tsallis entropy, because Tsallis entropy is “steeper” near the boundary of the simplex, which makes it difficult for OMD to end up assigning nearly all weight to a single expert. However, we do not have a formal proof that the negative entropy regularizer cannot be made to work.
4. Thanks -- we would be happy to add an example. Here’s a simple example: consider two experts and two outcomes. Expert 1 reports the distribution (98%, 2%); Expert 2 reports (50%, 50%). With equal weights, the logarithmic pool of these forecasts is (87.5%, 12.5%). The log loss is then 0.134 if Outcome 1 happens and 2.079 if Outcome 2 happens.
Questions:
1. Yes, the algorithm is efficient, as we argue below. We can include this argument in the final version.
The only nontrivial step is finding the weight vector satisfying the equation on the last line of the algorithm. To do so, it is first necessary to compute the gradient of the loss. See Equation (5) in the supplement for a formula for the gradient of the loss, which makes it clear that the gradient can be computed in time $O(mn)$. After that, we need to find the weight vector $\mathbf{w}$ that satisfies the equation on the last line.
This can be done efficiently through local search: essentially, the goal is to find weights $(w_1, \dots, w_m)$ such that the vector $(w_1^{\alpha - 1}, \dots, w_m^{\alpha - 1})$ is equal to a target vector $\mathbf{v}$ plus a constant $c$ times the all-ones vector. That is, we need to simultaneously solve the equation $w_i^{\alpha - 1} = v_i + c$ for all $i$, with weights that add up to 1. (Here, the $v_i$ are knowns and the $w_i$ and $c$ are unknowns.)
We start by finding $c$, by solving the equation $\sum_i (v_i + c)^{1/(\alpha - 1)} = 1$. Such a $c$ exists because the left-hand side of this equation is continuous and monotone decreasing and goes from infinity to zero as $c$ ranges from $-\min_i v_i$ to infinity. We can solve for $c$ very efficiently, e.g. with Newton’s method. Once we know $c$, we know each $w_i$: we have $w_i = (v_i + c)^{1/(\alpha - 1)}$. This algorithm thus takes $O(mn)$ time.
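A minimal sketch of this step (helper name ours; bisection is used in place of Newton's method for simplicity, and the Tsallis parameter is assumed to satisfy $0 < \alpha < 1$, so the exponent $1/(\alpha - 1)$ is negative and the left-hand side is decreasing in $c$):

```python
def solve_weights(v, alpha=0.5, tol=1e-12):
    """Solve sum_i (v_i + c)^(1/(alpha-1)) = 1 for c, then set
    w_i = (v_i + c)^(1/(alpha-1)).  Assumes 0 < alpha < 1, so the
    exponent is negative and the sum is continuous and strictly
    decreasing in c on (-min_i v_i, infinity)."""
    e = 1.0 / (alpha - 1.0)
    total = lambda c: sum((vi + c) ** e for vi in v)
    lo = -min(v) + 1e-9           # total(lo) is enormous
    hi = lo + 1.0
    while total(hi) > 1.0:        # expand until the root is bracketed
        hi += hi - lo
    while hi - lo > tol:          # bisection on the monotone function
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total(mid) > 1.0 else (lo, mid)
    c = (lo + hi) / 2
    return [(vi + c) ** e for vi in v]

w = solve_weights([0.3, 0.7, 1.2], alpha=0.5)
print(round(sum(w), 6))  # weights sum to 1 up to tolerance
```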
2. In general, projecting the weight vector is necessary when there is no weight vector such that the gradient of R at that weight vector is equal to the target vector. However, every vector is attained as the gradient of R at some weight vector. We prove this in our answer to your previous question by exhibiting an algorithm for finding the weight vector.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: I would like to thank the authors for the detailed response. I would keep my score as a (weak) accept. | Summary: The paper studies no-regret learning in the setting of logarithmic pooling of experts with the logarithmic loss. In this setting, there is a set of experts each outputting a distribution $p_i^{t}$ over outcomes in some finite set $Y$. The task of the learner is to output a vector $w^{t}$. An outcome $y$ in $Y$ is then observed and the learner suffers loss $\sum_i w^{t}_i \log p^{t}_i (y)$ where the sum is over the experts. The loss is then compared to the best choice of weights $w$ in hindsight. Under an additional assumption that the expert probabilities are "calibrated" (which the paper argues is necessary), the paper presents $\sqrt{T}$ regret algorithm. Further, the paper is the first to consider regret in the logarithmic pooling setting and to connect calibration of experts to regret in this setting.
Strengths: The main contributions of the paper in my opinion is the initiation of study of logarithmic pooling + logarithmic loss to the study of online learning. The paper justifies this setting by noting various nice properties this method of aggregation is known to satisfy. Further the relationship of calibration to the regret in this setting is also interesting. It is the key condition under which their algorithm has non-trivial regret.
Weaknesses: The main issue I have with the setting is that beyond the presentation in terms of pooling, this setting seems like an instance of standard online linear optimization. As the paper notes, the main difference here is the a priori unbounded sizes of the loss vectors. Viewed in this light, calibration is the same as assuming that the losses are bounded. The issue really is whether this is a priori clear or not. The point that makes me lean towards the view that calibration is "just assuming" boundedness is that no average notion of calibration (for example, calibration on average across time steps) seems to be sufficient (it seems that the counterexample can be modified to be calibrated on average). Thus, the interpretation as simply having "good experts" is less clear and seems more like assuming bounded losses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - It would be very helpful to elaborate the usual setting of learning with the log loss for comparison since that is well established area of online learning and information theory. The reason this would be helpful is that in the usual setting the actions are unrestricted but it turns out to be (essentially) optimal to consider "linear pooling". The fact that linear pooling comes up naturally in that setting would be a good constrast and good place to compare linear and logarithmic pooling.
- The comment about linear pooling and logarithmic regret needs to be considered a bit more carefully. In the usual notion, the learner does not compete against all convex combinations of experts. In fact, there can be polynomial difference between the two cases.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these comments. A few points and clarifications:
* We disagree that our setting is an instance of *linear* optimization. The log loss of a forecast is not a linear function of the forecast, nor of the weights that the aggregator assigns to the various experts.
* We disagree that the calibration property places a bound on the losses of the experts (or of the aggregate forecast). That’s because an expert’s forecast could assign an arbitrarily small probability to some outcome -- and if that outcome happens, then they could realize an arbitrarily large loss. The calibration property does guarantee that the probability of this occurring is small: in particular, that the probability of loss $\ell$ is only $\exp(-\ell)$. The fact that losses in our setting are stochastic, even if bounded in expectation, complicates the analysis and makes it far from obvious that standard learning algorithms for finding the best mixture of forecasters can be successful in our setting.
* We would conjecture that our results could be adapted to go through for experts that are calibrated on average. The natural notion of calibration-on-average would look something like: when an expert assigns a 1% chance to an outcome, when averaged across time steps it is the case that the probability of the outcome is 1%. (Perhaps on some time steps it is 2% and on others it is 0%, but on average it’s 1%.) It seems that Example 1.1 cannot be adapted to the case of calibrated-on-average experts. For example, if the adversary chooses for an expert to assign $\exp(-T)$ probability to some outcome, then the probability of that outcome occurring is at most $T \exp(-T)$, since there are T time steps over which the expert must be calibrated on average. This is up from $\exp(-T)$ in the case of calibration at each time step, but still very small.
Here are our answers to the two questions:
* Thanks for prompting us to further justify logarithmic pooling. We do so in the global rebuttal (Question 2). To summarize, we believe that logarithmic pooling contrasts favorably to linear pooling in the context of calibrated experts. This is because logarithmic pooling “takes more seriously” confident forecasts from experts. Imagine two calibrated experts and three outcomes. Suppose that Expert 1 has strong evidence against Outcome 1, and so reports (0.04%, 49.98%, 49.98%), and that Expert 2 has strong evidence against Outcome 2, and so reports (49.98%, 0.04%, 49.98%). Accounting for both experts’ evidence would mean placing low probabilities on the first two outcomes and most probability mass on the third outcome. Logarithmic pooling successfully does this, returning (2.7%, 2.7%, 94.6%). By contrast, linear pooling would return (25.01%, 25.01%, 49.98%).
* Thanks for pointing this out. On lines 152-153, we are referring to the subsection titled “A Mixture Forecaster for Exp-concave Losses” of [Cesa-Bianchi and Lugosi, 2006], specifically Theorem 3.3, which addresses the setting of competing with the best (linear) mixture of forecasters. That section addresses bounded, exp-concave loss functions. The log loss is not bounded, and we cited [Cover, 1991] for the log loss. Upon a closer look, it seems that [Cover, 1991] (cited on line 154) attains logarithmic regret in a slightly different setting (closer to the standard online convex optimization setting than to that of learning a mixture of experts). Do you know of a source that proves a no-regret guarantee in the prediction with expert advice setting, when choosing weights for a linear pool to compete with the best linear pool in hindsight? If not, and if we cannot find such a source, then we will take care to clarify the difference between our setting and that of [Cover, 1991].
---
Rebuttal Comment 1.1:
Comment: We thank the author for the detailed response and apologize for the delay in the response.
1. Regarding linear pooling, I do not know of an explicit reference, but people do study competing against "general" experts, and the comment I made was about even simple cases where "taking the convex hull" increases the regret. See https://arxiv.org/pdf/2303.07279 for an example.
2. I agree with the point about approximate calibration.
3. The reason I was saying that this was similar to linear optimization is the following: the actions are the weights $w$ and the loss suffered is $\sum_i w_i \log(p^i_j)$, right? I understand that the loss has some semantics related to predictions, but in the end, is the game not similar to predicting $w$ and suffering a loss against a loss vector $(\ldots, \log(p^i_j), \ldots)$? This is what I was saying: in this notation, calibration seems like a bound on the "size" of the loss. Am I misunderstanding something?
---
Reply to Comment 1.1.1:
Comment: Response to 1: Thanks for the reference! We will take a look.
Response to 3: Ah yes, I believe there's a misunderstanding, and I have a guess about where the misunderstanding is coming from. The issue is that the normalizing constant $c$ in the formula for the logarithmic pool depends on the weight vector. Recall (line 30) that $c$ is the normalizing constant that makes the probabilities add to $1$. Since the logarithmic pool $p_j^*(\mathbf{w})$ is defined as $c \prod_{i = 1}^m (p_j^i)^{w_i}$, the normalizing constant $c$ can be written as
$$c = \frac{1}{\sum_{j = 1}^n \prod_{i = 1}^m (p_j^i)^{w_i}},$$
which depends on $\mathbf{w}$.
The loss suffered by the aggregator is
$$\log p_j^*(\mathbf{w}) = \log \left( c \prod_i (p_j^i)^{w_i} \right) = \log c + \sum_i w_i \log(p_j^i).$$
If $c$ were independent of $\mathbf{w}$, then it would be correct to say that our problem is an instance of linear optimization. However, since $c$ depends on $\mathbf{w}$ (according to the formula above), this is not the case.
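To make the dependence on $\mathbf{w}$ concrete, here is a small numerical sketch (with hypothetical forecasts of our choosing, not from the paper) showing that the log-pool loss is not linear in the weights precisely because of the normalizer:

```python
import numpy as np

def log_pool_loss(w, forecasts, j):
    """Log loss of the logarithmic pool at realized outcome j.

    The normalizer c = 1 / g.sum() depends on the weight vector w.
    """
    f = np.asarray(forecasts, dtype=float)
    g = np.prod(f ** np.asarray(w, dtype=float)[:, None], axis=0)  # unnormalized pool
    return -np.log(g[j] / g.sum())

# Two hypothetical experts over two outcomes.
p1, p2 = [0.9, 0.1], [0.2, 0.8]
j = 0  # realized outcome

l_a = log_pool_loss([1.0, 0.0], [p1, p2], j)   # equals -log 0.9
l_b = log_pool_loss([0.0, 1.0], [p1, p2], j)   # equals -log 0.2
l_mid = log_pool_loss([0.5, 0.5], [p1, p2], j)

# If the loss were linear in w, l_mid would equal (l_a + l_b) / 2. It does not:
print(round(l_mid, 4), round((l_a + l_b) / 2, 4))  # 0.5108 0.8574
```

Here the equal-weight pool assigns probability exactly $0.6$ to outcome 0 (since $\sqrt{0.9 \cdot 0.2} / (\sqrt{0.9 \cdot 0.2} + \sqrt{0.1 \cdot 0.8}) = 3/5$), so its loss $-\log 0.6$ differs from the average of the endpoint losses.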
I think the paper would be made clearer by our clarifying this point, and we will do so for the final version, should the paper be accepted. Thanks!
Rebuttal: We would like to thank all reviewers for their thoughtful comments. In this global rebuttal, we will address three recurring questions:
1. Some reviewers were interested in further justification of the calibration property.
2. Some reviewers were interested in further justification of using logarithmic pooling to aggregate experts’ forecasts, especially in light of the fact that low regret is attainable with a simple weighted average (as we mention in lines 153-154).
3. Some reviewers asked for further explanation of lines 63-70, in which we argue that at each time step, the algorithm should choose the experts’ weights without seeing the forecasts.
In the paper, we note some theoretical and empirical considerations that justify logarithmic pooling (lines 27-43) and present the calibration property as a natural condition under which there is hope for low regret in the face of challenges presented by requiring logarithmic pooling. In this rebuttal, we present an additional and orthogonal justification: we first justify the calibration property and then justify logarithmic pooling as a particularly sensible aggregation method for calibrated experts. To briefly summarize:
1. In practice, modern deep neural networks are calibrated. Our work addresses the aggregation of such networks’ outputs from a theoretical standpoint by balancing the desire for worst-case guarantees (as is typical for theory) with practical observations about neural networks’ performance.
2. Logarithmic pooling makes sense *in particular* when experts are calibrated, because it takes confident predictions seriously, especially as compared with linear pooling.
3. Learning optimal weights for linear pooling is a well-studied question. Our argument in favor of logarithmic pooling suggests addressing the same question for logarithmic pooling. However, if the weights to a logarithmic pool are allowed to depend on the forecasts, then the aggregation method is no longer a logarithmic pool; it can be an arbitrary function.
Elaborating point by point:
1. In lines 85-89, we give two theoretical justifications for the calibration property. Here we note another justification: in practice, modern deep neural networks are well-calibrated when trained on a proper loss function such as log loss. This is true for a variety of tasks, including image classification and language modeling. (See e.g. [Blasiok et al., 2023] (“When Does Optimizing a Proper Loss Yield Calibration?”) or [Kadavath et al., 2022] ("Language Models (Mostly) Know What They Know").)
Now, suppose that we wish to use an ensemble of off-the-shelf neural networks for some prediction or classification task. We trust the networks to be calibrated, but we may not know beforehand which networks have the highest levels of expertise. (As an extreme example, suppose there are a thousand classes, all equally likely. A network that always assigns 100% probability to the correct class has more expertise than a network that always assigns a uniform distribution over the classes, even though both are calibrated.) Learning to aggregate these networks’ forecasts is well-motivated and nontrivial. The standard fully adversarial setting is one reasonable (and well-trodden) theoretical approach to this problem. However, integrating the empirical observation that neural networks are well-calibrated into the theoretical analysis yields a new, well-motivated setting with the potential for stronger results than are possible without assuming calibration.
2. In lines 27-43, we gave a few theoretical and empirical justifications for using logarithmic pooling. We believe that logarithmic pooling makes *particular* sense when the experts are calibrated. Intuitively, this is because the logarithmic pool pulls the aggregate toward more confident forecasts. We give one example in lines 32-33: if Expert 1 reports probability distribution (0.1%, 99.9%) over two outcomes and Expert 2 reports (50%, 50%), then the logarithmic pool (with equal weights) is roughly (3%, 97%). This makes more sense than the linear pool (i.e. arithmetic average), which would be roughly (25%, 75%): if Expert 1 is calibrated (as we have assumed and justified above), then a (0.1%, 99.9%) forecast entails very strong evidence in favor of Outcome 2 over Outcome 1. Meanwhile, Expert 2’s forecast gives no evidence either way. An aggregate of very strong evidence for Outcome 2 with no evidence either way ought to still be fairly strongly in favor of Outcome 2. (Indeed, there is a natural interpretation of logarithmic pooling in terms of averaging experts’ Bayesian *evidence*; see [Neyman and Roughgarden, 2023].)
As another example, suppose that Expert 1 reports (0.04%, 49.98%, 49.98%) and Expert 2 reports (49.98%, 0.04%, 49.98%) (a natural interpretation: Expert 1 found strong evidence against Outcome 1 and Expert 2 found strong evidence against Outcome 2). If both experts are calibrated, a sensible aggregate should assign nearly all probability to Outcome 3. Linear pooling does not accomplish this; meanwhile, logarithmic pooling returns roughly (2.7%, 2.7%, 94.6%), which is much more sensible.
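Both examples above are easy to verify numerically. The following minimal sketch (numpy; the function names are ours, not the paper's) computes each pool with equal weights:

```python
import numpy as np

def linear_pool(forecasts, w):
    """Weighted arithmetic average of the experts' distributions."""
    return np.average(np.asarray(forecasts, dtype=float), axis=0, weights=w)

def log_pool(forecasts, w):
    """Weighted geometric average, renormalized to sum to 1."""
    f = np.asarray(forecasts, dtype=float)
    g = np.prod(f ** np.asarray(w, dtype=float)[:, None], axis=0)
    return g / g.sum()

w = [0.5, 0.5]
# Three-outcome example: each expert has strong evidence against one outcome.
p1 = [0.0004, 0.4998, 0.4998]
p2 = [0.4998, 0.0004, 0.4998]
print(np.round(log_pool([p1, p2], w), 3))     # ~ [0.027 0.027 0.946]
print(np.round(linear_pool([p1, p2], w), 4))  # [0.2501 0.2501 0.4998]

# Two-outcome example from lines 32-33: one confident and one uninformative expert.
q1, q2 = [0.001, 0.999], [0.5, 0.5]
print(np.round(log_pool([q1, q2], w), 2))     # ~ [0.03 0.97]
```

The logarithmic pool concentrates mass on Outcome 3 in the first example and preserves Expert 1's strong evidence in the second, matching the figures quoted above.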
3. The problem of choosing weights for a *linear* pool to compete with the best weights in hindsight is well-studied (see e.g. [Cesa-Bianchi and Lugosi, 2006, Section 3.3]). Above, we justified the logarithmic pool as a better alternative to linear pooling in the case of calibrated experts. It is then natural to study the analogous problem for logarithmic pooling in place of linear pooling; this is the problem we study. If the algorithm’s weights were allowed to depend on the experts’ forecasts, the resulting probability distribution (as a function of experts’ forecasts) might look nothing like logarithmic pooling. If the algorithm is to compete with the benchmark of a “best possible logarithmic pool”, it makes sense to restrict the algorithm to (weighted) logarithmic pools of the experts’ reports. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Counterfactual Conservative Q Learning for Offline Multi-agent Reinforcement Learning | Accept (poster) | Summary: The paper addresses the problem of offline multi-agent reinforcement learning. A way to make conservative updates to the Q-function is proposed, extending, non-trivially, CQL to multi-agent settings. Theory and experiments clearly validate the approach.
Strengths: The method is both well motivated as well as theoretically and empirically validated. The method is experimentally shown to be substantially better than all the state-of-the-art baselines considered.
Weaknesses: The introduction can be improved in terms of written language. Moreover, I would like to see a couple of straightforward independent learning baselines added to the StarCraft results. See my questions below.
Minor comments:
- I do not understand the last sentence of 3.3;
- Some sentences to improve: figure 1(a), first sentence; first paragraph of the introduction;
- I suggest giving an intuitive meaning to Theorems 1 and 2 after they appear, as well as an intuitive meaning of the whole theory of Section 4.3 in the end of the Section.
- typos: "temporature" in the caption of Figure 3(a); capitalize "conclusion" in the title of Section 6.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In Section 4.1, what are the optimal Q-values of the Toy MDP ?
- What is $\beta$ ? It appears first in Section 3.3 and I don't see it defined, even though it seems quite important in the sequel (for instance, it appears twice in the proposed algorithm).
- Why is the method called counterfactual ? I suggest adding an intuitive explanation to the document.
- What is the performance of single agent independent Q-learning (actually DQN) and of independent CQL (actually DQN) on the SMAC environment ? I believe it would be quite important to make clear on the table of the results that coordination is really necessary to perform the task and that ICQL is not enough.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: I can not see the limitations of the work nor I foresee a negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comment!
●**Q1: Typos, sentence improvement, and intuition explanation.**
As per your suggestion, we have fixed the typos and reorganized the sentences. The intuition behind our theory is as follows: Theorems 4.1 and 4.2 in our paper illustrate how the extent of value-function underestimation differs between CFCQL and MACQL. When MACQL samples $n$ OOD actions, it produces significantly larger gaps between the estimated V and the true V than CFCQL does. Additionally, Theorem 4.4 demonstrates that a greater gap between value functions leads to a larger discrepancy between the performance of the trained policy and that of the behavior policy. These updates and explanations will be reflected in the revised script.
●**Q2: The explanation of the last sentence of 3.3.**
We apologize for the unclear statement. We want to convey that with a large enough hyper-parameter $\alpha$ in Eq. 2 as the training loss, we can obtain a $Q$ function whose corresponding value function is lower than the true value function $V^{\pi}(s)=E_{\pi}[\sum_{i=0}^{\infty}\gamma^i r_i]$ of the same MDP at every state. Based on this, the overestimation problem in single-agent offline RL can be resolved.
●**Q3: In Section 4.1, what are the optimal Q-values of the Toy MDP?**
The Multi-Agent Markov Decision Process (MMDP) proposed in Section 4.1 is an infinite-horizon task, wherein the maximum reward attainable at each step is 1, contingent upon all agents reaching and remaining in state S2. The discount factor $\gamma$ is set to 0.8, and consequently, the optimal Q-value is $1/(1-0.8)=5$, as illustrated in Fig. 1(b) with a dashed line.
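As a quick sanity check, the value of 5 follows from the closed form of the discounted geometric series (using the toy example's numbers):

```python
gamma = 0.8
# Once all agents reach and stay in S2, a reward of 1 accrues every step,
# so the optimal value is sum_{i >= 0} gamma**i = 1 / (1 - gamma).
q_star = 1.0 / (1.0 - gamma)
truncated = sum(gamma**i for i in range(200))  # finite-horizon approximation
print(round(q_star, 6))  # 5.0 (the dashed line in Fig. 1(b))
```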
●**Q4: What is $\beta$? It appears first in Section 3.3 and I don't see it defined, even though it seems quite important in the sequel (for instance, it appears twice in the proposed algorithm).**
We sincerely apologize for the unclear definition in our paper and any resulting confusion it may have caused. In our work, $\beta$ denotes the behavior policy responsible for generating the dataset D. Note that $\beta$ is typically an implicit distribution; in our proposed algorithm, we approximate it with an empirical policy obtained via behavior cloning.
●**Q5: Why the method is called counterfactual? I suggest adding an intuitive explanation to the document.**
We refer to our method as 'counterfactual' because its structure bears resemblance to counterfactual methods commonly used in multi-agent reinforcement learning (MARL). This involves obtaining each agent's counterfactual baseline by marginalizing out a single agent's action while keeping the other agents' actions fixed. The intuitive rationale behind employing a counterfactual-like approach is that by individually penalizing each agent's out-of-distribution (OOD) actions while holding the other agents' actions constant from the datasets, we can effectively mitigate the out-of-distribution problem in offline MARL with reduced pessimism. As per your suggestion, we will add the explanation to the revised version.
●**Q6: What is the performance of single agent independent Q-learning (actually DQN) and of independent CQL on the SMAC environment?**
We conducted additional experiments to demonstrate the performance of independent Q-learning (IDQN) and independent CQL (ICQL) on the SMAC environment as listed below. As expected, IDQN proved to be inadequate in handling the offline tasks due to the absence of a specialized mechanism to address the distribution shift issue. ICQL demonstrated promising performance, primarily attributed to its pessimistic design, yet it fell short in achieving high-level performance due to the limitations in coordination imposed by the independent learning paradigm. The supplementary experiments underscore the essentiality of integrating pessimism and CTDE paradigm within the offline MARL domain.
| Map | Dataset | CFCQL| IDQN | ICQL |
| ------- | --------- | ------- | --------- | ---------------- |
|5m_vs_6m|Medium| **0.29±0.05** |0.00±0.00| 0.21±0.04|
||Medium_Replay|**0.22±0.06**|0.00±0.00 | 0.21±0.03|
||Expert|**0.84±0.03**|0.00±0.00| 0.73±0.06|
||Mixed|**0.76±0.07** |0.00±0.00| 0.72±0.07|
|6h_vs_8z|Medium| **0.41±0.04**|0.00±0.00| 0.35±0.04|
||Medium_Replay|**0.21±0.05**|0.00±0.00 | 0.09±0.05|
||Expert|**0.7±0.06**|0.00±0.00| 0.49±0.10|
||Mixed|**0.49±0.08** |0.00±0.00|0.33±0.06 |
●**Q7: Lack of discussions on limitations and broader impacts:**
Due to the page limitation, we have put these to Appendix E.
---
Rebuttal Comment 1.1:
Title: Acknowledgment of rebuttal
Comment: I thank the author for their response. In light of the answers and the the author discussion with other reviewers, I am keeping my score. | Summary: Offline Reinforcement Learning in the multi-agent setting suffers from the combined effects of distribution shift and increasing number of agents. An exponential blowup of the action space in addition to Out-Of Distribution (OOD) actions hinders performance of RL agents. The work tackles these phenomena by proposing CounterFactual Conservative Q Learning (CFCQL), an algorithm that learns conservatively from static offline datasets. CFCQL differs from the naive multi-agent CQL approach as it conservatively trains each agent separately using the pessimistic regularization. Regularization terms are then linearly combined for global value estimation. This prevents agents from being excessively conservative while maintaining theoretical guarantees of underestimation and safe policy improvement as in CQL. In practice, weighted contributions of conservative penalties are realized either using a one-hot encoding or softmax with temperature scaling. In the case of continuous actions, counterfactual Q function updates of the gradient are utilized. Experiments are carried out on a range of discrete and continuous multi-agent benchmarks with promising improvements.
Strengths: * The paper presents an amenable combination of design choices.
* Empirical results are promising and extensive.
Weaknesses: * **Writing and Presentation**: My main concern is the writing and presentation of the paper. The paper is not well written and presented in an unorganized manner. Technical claims made by authors are vague and informal. Explanations are not well supported by intuition or insights and the frequency of grammatical errors is too high. Specifically, sections 1 and 2 motivate the work with high-level and vague statements. Section 3.3 does not formally explain the offline RL problem (of maximizing the expected discounted return), behavior policy or distribution shift. Section 4 builds the algorithm using informal vocabulary. Section 4.4 presents theoretical guarantees without any intuition or interpretation and section 6 summarizes the paper informally. In my view, the paper's presentation requires significant attention.
* **Dataset Ablations**: I am having trouble understanding results on dataset ablations presented in figure 3 (b). Ideally, as the number of data samples grow, the performance of agents improves. With larger static datasets agents have access to more in-distribution samples and a broader coverage of the underlying MDP. However, figure 3 (b) demonstrates that performance of agents drops for 50,000 samples. It would be helpful if the authors can explain this result or its interpretation. In its current form, it appears that the approach may not scale well to larger multi-agent learning datasets for real-world applications.
* **Baselines**: While the paper includes relevant multi-agent learning baselines, it is worth noting that none of these baselines were designed for offline RL. All methods were developed as off-policy or pure online RL methods that aggregate new experience. With that said, the only baseline of interest is MACQL since it leverages the CQL penalty designed for static datasets. Authors should consider offline-RL baselines that may be adapted for multi-agent learning. For instance, IQL[1] and TD3-BC[2] are state-of-the-art offline RL methods which might be useful for comparison. Similarly, AWR[3] is another method that imitates dataset interactions.
* **Transfer and Finetuning**: The paper claims that CFCQL addresses distribution shift and generalizes better. However, this claim is not well supported. Ideally, robustness and generalization ability of an algorithm are tested by transferring it to new unseen scenarios. Authors should consider finetuning agents or adapting them to a new task even if on a small toy example. This will help verify the claims of CFCQL addressing distribution shift.
While experiments and results are promising, overall presentation and writing of the paper needs significant improvement.
## Minors
* line 17: man-made -> synthetic
* line 23: ~from~
* line 24: ~the~
* line 24: highly -> high
* line 37: agent number -> number of agents
* line 44: ~just~
* line 55: bounded from below -> lower bounded
* line 63: too much -> excessive
* line 66: contributes CQL -> contributes to CQL
* line 69: sampled in the dataset -> sampled in-distribution
* line 70: agents number -> number of agents
* line 72: man-made -> synthetic
* line 79: advantagous -> advantageous
* line 149: dataset distribution -> behavior policy
* Figure 1: stays -> stay
* line 171: even worse -> more significantly
* line 175: need learn -> need to learn
* line 178: exponentially -> exponential
* line 183: style -> update
* line 198: contributes regularization -> contributes its regularization
* line 201: in the dataset -> in-distribution
* line 238: performances -> performance
* line 257: contributes penalty -> contributes to penalty
* line 258: style -> encoding
* line 307: agents number -> number of agents
* line 314: basically -> mostly
* Figure 3: temporature -> temperature
* line 372: conlcusion -> Conclusion
* line 373: lack of -> lacks
* line 374: from theories -> using theory
* line 375: exponentially -> exponential
* line 379: exponentially -> exponential
* line 381: achieve -> achieves
* line 381: Some -> Ablation studies
* line 382: also made -> conducted
[1]. Kostrikov et. al., Offline Reinforcement Learning with Implicit Q-Learning, ICLR 2022.
[2]. Scott Fujimoto, Shixiang Shane Gu, A Minimalist Approach to Offline Reinforcement Learning, NeurIPS 2021.
[3]. Peng et. al., Advantage Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning, arXiv 2020.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: * What is the formal problem definition of offline RL? What is a behavior policy? What is distribution shift?
* What is the interpretation of Figure 3 (b)? How does an increase in the size of dataset lead to a decrease in agent performance? How does CFCQL scale for larger data samples?
* Can you please explain the reasoning behind off-policy/online RL methods being relevant baselines? Can CFCQL be compared to IQL, TD3-BC or AWR even if on small toy tasks?
* How does CFCQL address distribution shift? How does CFCQL generalize to new task? Can CFCQL be adapted to new multi-agent tasks or transferred to different kinds of agents following pretraining?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Authors have discussed limitations and future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and detailed syntax error check!
●**Q1: Presentation problem. What is the formal problem definition of offline RL? What is a behavior policy? What is distribution shift?**
We apologize for the unclear statement, and we are committed to improving the presentation. To achieve this, we will seek assistance from native speakers.
As stated in Section 3.1, the online MARL problem considered in this work is formulated in the context of a Partially Observable Markov Decision Process (POMDP), and the objective is to maximize the cumulative reward $E_{\pi}[\sum_{t=0}^{\infty}\gamma^t r(s_t, a_t)]$. In offline MARL, the primary distinction is that the training data is a static dataset of transitions, denoted by $D=\{(s_t, o^i_t, a^i_t, s_{t+1}, o^i_{t+1}, r^i_t)\}$, where $i \le n$ is the agent index and $t$ is the time step. Additionally, the agent no longer has the ability to collect data by interacting with the environment.
The behavior policy $\beta$ denotes the distribution over states and actions in D; the transitions in D can be interpreted as being sampled according to the behavior policy, i.e., $s\sim d^\beta(s)$, $a\sim \beta(a|s)$, where $d^\beta(s)$ is the state visitation distribution under policy $\beta$.
Offline RL algorithms encounter the distribution shift issue. The distribution of the behavior policy generally differs from the distribution of the learned policy. Consequently, when updating the Q-values with Bellman backups, the actions sampled from the learned policy may be out-of-distribution (OOD) actions, potentially leading to the overestimation of Q-values[1]. Due to the inability to interact with the environment to correct such errors, the distribution shift issue becomes a major challenge in offline RL.
●**Q2: What is the interpretation of Figure 3 (b)? How does an increase in the size of dataset lead to a decrease in agent performance? How does CFCQL scale for larger data samples?**
The decrease in performance on larger datasets (50k trajectories) in Figure 3 (b) is caused by under-fitting. To ensure fairness, the maximum number of training steps for all datasets and algorithms on the SMAC environment is fixed at $1\times 10^7$; as a result, training may terminate before convergence is reached, which leads to under-fitting.
To verify our claim aforementioned, we retrained the CFCQL on datasets containing 5,000 or 50,000 trajectories until convergence. The performances of the converged agents and the steps taken to achieve convergence are presented in the table below. The results demonstrate that larger datasets contribute to improved convergence performances, thus confirming the scalability of CFCQL for larger data samples.
|Performance|Medium|Expert|Mixed|
|-----------|--------|--------|---------|
|5k(showed in the paper)|0.41±0.04|0.7±0.06|0.49±0.08|
|50k(showed in the paper)|0.36±0.13|0.63±0.09|0.5±0.13|
|5k(converged)|0.43±0.07|0.75±0.05|0.65±0.08|
|50k(converged)|**0.47±0.07**|**0.79±0.07**|**0.67±0.07**|
|Convergence Steps($\times 10^7$)|Medium|Expert|Mixed|
|-----------|--------|--------|---------|
|5k|2|2|2|
|50k|2|2|5|
Regarding the Medium-Replay dataset, we discovered a bug in our code: the maximum number of trajectories in the Medium-Replay datasets is 5,000 instead of 50,000. Since only 5,000 trajectories are generated when agents are trained to medium performance, we cannot fill a replay buffer with 50,000 trajectories for this kind of dataset. Therefore, the training results pertaining to Medium-Replay datasets with 50,000 trajectories are invalid. We deeply apologize for this mistake and have rectified it in our code. We will revise our paper accordingly.
●**Q3: How does CFCQL address distribution shift? How does CFCQL generalize to new task?**
We acknowledge that the distribution shift problem in offline RL is distinct from the generalization problem. In offline RL, distribution shift refers to the divergence between the trajectory distribution generated by the behavior policy $\beta$ (i.e., the dataset) and the imagined distribution of trajectories generated by the current training policy $\pi$. The main challenge in offline RL is to provide accurate feedback to the current policy $\pi$ using the data distribution induced by another policy $\beta$, which constitutes the distribution shift problem. We tackle this problem with a counterfactual regularizer that penalizes actions not sampled from the dataset, i.e., the left part of Eq. 4 in the original paper.
In our work, both the dataset and our trained policy share the same task and reward function, and we do not require agents to possess generalization abilities for transferring to a new task. Instead, we focus on addressing the distribution shift problem to effectively learn from offline data and improve policy performance within the given task.
●**Q4: Can you please explain the reasoning behind off-policy/online RL methods being relevant baselines? Can CFCQL be compared to IQL, TD3-BC or AWR even if on small toy tasks?**
Due to the page limit, please refer to the **public rebuttal block** with name "Author Rebuttal by Authors" on the top of this page for the explanation of lacking baselines.
Reference:
[1] Fujimoto, Scott, David Meger, and Doina Precup. "Off-policy deep reinforcement learning without exploration." International conference on machine learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Comments
Comment: I thank the authors for providing a detailed response. After going through authors responses and other reviewers' comments, my concerns regarding dataset ablations still remain unaddressed.
* **Dataset Ablations** - I am struggling to understand the under-fitting issue of CFCQL towards the behavior policy. Assuming the policy is trained for $10^7$ steps, 50k samples still present a broader coverage of the MDP for the same policy. Intuitively, if the behavior policy is kept the same and the number of samples is increased, the data distribution remains unchanged and does not affect the performance of the downstream agent. Irrespective of underfitting, the CQL algorithm is known to find performant policies even from sub-optimal unconverged behavior policies [1, 2]. Thank you for bringing up the `medium-replay` dataset composition. From my understanding, `medium` datasets can be filled up to 50k samples by simply letting the policy run in the environment and collecting samples from different seeds. Note that in this setting we do not make gradient updates to the policy. Nevertheless, it is worth looking into the dataset composition and exact counts of samples in each task dataset.
[1]. Kumar et. al., A Workflow for Offline Model-Free Robotic Reinforcement Learning, CoRL 2021.
[2]. Kumar et. al., Should I Run Offline Reinforcement Learning or Behavioral Cloning?, ICLR 2022.
---
Reply to Comment 1.1.1:
Title: Response to Your Comments
Comment: We appreciate your feedback and apologize for the misleading aspects of our rebuttal. Firstly, we would like to clarify that the “underfitting” issue is directed at the training policy $\pi$, not the behavior policy $\beta$. The “$1\times 10^7$ training steps” indicate that we train the downstream agents for $1\times 10^7$ steps on each dataset, as described in the original paper. As demonstrated in the previous rebuttal section, $1\times 10^7$ steps are insufficient for the downstream agents to converge (see the second table in the previous rebuttal section). When trained to convergence, the policy trained on a 50k dataset slightly outperforms the policy trained on a 5k dataset (refer to the first table in the previous rebuttal section). These findings align with your expectation in the first comment that, "Ideally, as the number of data samples increase, the performance of agents should improve."
Regarding the medium dataset issue, we have reviewed its composition and can confirm it aligns with your suggestion to "let the policy run in the environment and collect samples from various seeds." If any aspect of our statement remains unclear, please provide further feedback, and we will respond promptly. | Summary: This paper proposes a novel offline multi-agent reinforcement learning algorithm called Counterfactual Conservative Q-Learning (CFCQL) to address the overestimation issue and achieve team coordination at the same time. The algorithm calculates conservative regularization for each agent separately in a counterfactual way and then linearly combines them to realize an overall conservative value estimation. The paper compares CFCQL and MACQL theoretically and shows that CFCQL is advantageous to MACQL on the performance bounds and safe policy improvement guarantee as the agent number is large. The paper conducts experiments on commonly used multi-agent environments to demonstrate that CFCQL outperforms existing methods on most datasets and even with a remarkable margin on some of them.
Strengths: - Theoretical comparison of the proposed algorithm with existing methods to show its advantages in terms of performance bounds and safe policy improvement guarantee.
- The paper provides a detailed explanation of the proposed algorithm and its counterfactual approach to conservative value estimation.
- Conducting experiments on multiple environments to demonstrate the effectiveness of the proposed algorithm.
Weaknesses: - The paper seems to be a simple combination of CQL and MARL.
- Although the paper shows that the proposed algorithm is theoretically superior to existing methods when the agent number is large, it does not provide empirical evidence to support this claim.
typos
- Section "conclusion" -> "Conclusion"
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How to get $a_i$ in eq.7? Or what's the range of the summation?
- To my understanding, does CFCQL update $Q$ for each agent's action iteratively instead of jointly in MACQL? In this case, does the order of the updates matter? And why sample only OOD actions from one agent instead of two or more if two agents' action are highly related?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not explicitly mention any limitations of the proposed method. The paper has no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback!
●**Q1: No empirical evidence to support the superiority of our method for a large agent number $n$.**
We apologize for any unclear experimental instructions in our paper. The evidence supporting our claim can be found in Section 5.1, specifically in the demo labeled 'Equal Line'. In Figure 2, it is evident that the performance of MACQL degrades significantly as the number of agents increases, while the performance of CFCQL remains relatively stable. These results strongly support the conclusion we presented in Section 4.3, highlighting that CFCQL provides better policy improvement guarantees compared to MACQL when the number of agents $n$ is sufficiently large.
●**Q2: The paper seems to be a simple combination of CQL and MARL.**
While CFCQL shares some similarities in loss form with CQL, it differs fundamentally from a simple combination of CQL and MARL (MACQL). Our theoretical analysis demonstrates that MACQL leads to an exponential increase in pessimism as the number of agents increases, resulting in an overly pessimistic value function. To address this issue, a potential solution is to separately penalize each agent's out-of-distribution (OOD) actions, allowing the penalty term to grow linearly rather than exponentially.
CFCQL successfully adapts the counterfactual method to offline multi-agent settings by independently penalizing each agent's OOD actions while keeping the other agents' actions sampled from datasets. This adaptation holds promise in alleviating the out-of-distribution problem with reduced pessimism. Moreover, the empirical results from our experiments further validate the superiority of CFCQL over MACQL. These unique contributions distinguish CFCQL and demonstrate its effectiveness in multi-agent scenarios.
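As a back-of-the-envelope illustration of the scaling argument above (a counting sketch for intuition only, not the paper's formal result): penalizing joint OOD actions ranges over the full joint action space, while the counterfactual scheme penalizes one agent's actions at a time.

```python
def joint_action_count(num_actions: int, num_agents: int) -> int:
    # Joint penalization (MACQL-style) ranges over |A|**n action combinations.
    return num_actions ** num_agents

def counterfactual_action_count(num_actions: int, num_agents: int) -> int:
    # Counterfactual penalization (CFCQL-style) touches only n * |A| single-agent actions.
    return num_actions * num_agents

# With 5 actions per agent, the gap widens rapidly as agents are added.
assert joint_action_count(5, 4) == 625
assert counterfactual_action_count(5, 4) == 20
```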
●**Q3: How to get $a_i$ in eq.7? Or what's the range of the summation?**
We sincerely apologize for the lack of clarity in our paper and any resulting confusion it may have caused. The summation range of $a_i$ in Eq. 7 corresponds to the action space of agent $i$, which is similar to the concept in CQL. In terms of implementation details, for the discrete action space we perform the summation directly over the action space using the standard torch.logsumexp() function, which allows us to efficiently compute the required summation. For the continuous action space, we employ an importance sampling method inspired by CQL. Specifically, we generate action samples from both a uniform-at-random distribution (Unif(a)) and the current policy. These action samples are then used in conjunction with importance sampling to compute the summation.
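To make the discrete-action case concrete, here is a minimal pure-Python sketch of a CQL-style logsumexp regularizer (the Q-values are hypothetical; the actual implementation uses torch.logsumexp over the agent's action dimension):

```python
import math

def logsumexp(values):
    # Numerically stable log-sum-exp, mirroring torch.logsumexp.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

# Hypothetical Q-values for agent i over a small discrete action space.
q_values = [1.0, 2.0, 0.5, -1.0]
q_dataset_action = q_values[1]  # Q-value of the action observed in the dataset

# CQL-style conservative term: push down on all actions via logsumexp,
# push up on the in-dataset action; the gap is always non-negative.
penalty = logsumexp(q_values) - q_dataset_action
assert penalty >= 0.0
```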
●**Q4: Does CFCQL update $Q$ for each agent's action iteratively instead of jointly in MACQL? In this case, does the order of the updates matter?**
In CFCQL, the symbol $Q$ refers to the joint state-action function shared by all $n$ agents, which undergoes simultaneous updates involving both the counterfactual terms and the TD error. And we train the agents simultaneously by taking the summation of all $n$ counterfactual terms as the training loss.
●**Q5: Why sample only OOD actions from one agent instead of two or more if two agents' action are highly related?**
The number of agents with OOD actions directly influences the disparity between the estimated value functions and the true value functions. It is possible to sample OOD joint actions for two agents, as MACQL does. However, as illustrated in Theorems 4.1 and 4.2 in our paper, this leads to an increased order of pessimism, which is squared compared to CFCQL, and as more agents are involved, the induced pessimism explodes exponentially. Moreover, as Theorem 4.3 demonstrates, a greater gap between the estimated and true value functions leads to a larger discrepancy between the performance of the learned policy and the optimal policy, which implies that the huge gap introduced by MACQL will severely affect the performance of the learned policy. Therefore, sampling OOD joint actions from two or more agents may perform much worse than sampling just a single OOD action from one agent. This is our reason behind the choice of sampling OOD actions from only one agent.
This paper addresses challenges in Offline Multi-Agent Reinforcement Learning (MARL), which suffers from severe distribution shift issues and high dimensionality. To overcome these problems, the authors propose a novel MARL algorithm, CounterFactual Conservative Q-Learning (CFCQL). Unlike conventional methods that treat all agents as a single high-dimensional entity, CFCQL separately computes conservative regularization for each agent in a counterfactual manner and linearly combines them for an overall conservative value estimation. The authors demonstrate that CFCQL maintains the underestimation property and performance guarantee similar to single-agent methods but offers improved regularization and safe policy improvement bounds that are independent of the agent number. This method is thus advantageous, especially with large agent counts. Experimental validation on various environments and datasets shows that CFCQL outperforms existing methods significantly.
Strengths: * The structure of the presentation is generally clear.
* The considered problem is interesting and of importance.
Weaknesses: * The motivation for the algorithm is unclear, e.g. why considering counterfactual is helpful for offline MARL?
* The theoretical analysis is not well elaborated in context. E.g. the implications of theorems can be better elaborated.
* Some terms are not clearly defined, e.g. “PI”, “e_i” in Eq.(8).
* How algorithm 1 works with continuous or discrete action space is not clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * The presentation needs to be improved in terms of clarity, and rigor.
* The method is still employing centralized training, which limits its scalability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback!
●**Q1: The motivation for the algorithm is unclear, e.g., why considering counterfactual is helpful for offline MARL?**
We apologize for any confusion resulting from the unclear statement in our paper.
The motivation for this study is rooted in the observation that directly transferring CQL into Multi-Agent settings, wherein the joint action is penalized, leads to an exponential increase in pessimism as the number of agents increases. Consequently, this results in an overly pessimistic value function.
To mitigate this issue, a potential solution is to separately penalize each agent's out-of-distribution (OOD) actions; the penalty term then grows linearly instead of exponentially. This idea aligns perfectly with the counterfactual approach in MARL [1]. Specifically, counterfactual approaches aim to obtain each agent's counterfactual baseline by marginalizing out the agent's action while keeping the other agents' actions fixed, in order to address the challenges associated with multi-agent credit assignment.
In the context of our study, we successfully adapted the counterfactual method in offline multi-agent settings. This is achieved through the separate penalization of each agent's OOD actions, while holding the other agents' actions still sampled from datasets. Theoretically, this adaptation holds promise in alleviating the out-of-distribution problem with reduced pessimism. Moreover, the empirical results from our experiments serve to further validate the effectiveness of the counterfactual method in the domain of Offline Multi-Agent RL.
●**Q2: The theoretical analysis is not well elaborated in context, e.g., the implications of theorems can be better elaborated.**
In this paper, we aim to theoretically analyze the pessimism of CFCQL and highlight its advantages over MACQL. In Theorem 4.1, we rigorously demonstrate that our proposed updating rule, as represented by Eq. 4, leads to a lower bound of the value function. This finding validates the pessimistic property of CFCQL, which is necessary for offline settings. In Theorem 4.2, we conduct a comparative study of the scale differences between CFCQL and MACQL in terms of their degree of pessimism. Our analysis reveals that the degree of pessimism exhibited by MACQL surpasses that of CFCQL, and that the scale difference between the two methods increases with the number of agents. Building upon the findings of Theorem 4.2, we discuss the performance bound and safe policy improvement guarantee of both methods in Theorems 4.3 and 4.4, which allows us to showcase the dominance of CFCQL over MACQL, highlighting the superiority of our proposed approach.
●**Q3: Some terms are not clearly defined, e.g., “PI”, “e_i” in Eq. (8).**
We sincerely apologize for the lack of clarity in our paper and any resulting confusion it may have caused. Allow us to clarify the notation in question. In line 281, "PI" stands for "policy improvement," denoting the iterative progress of policy update in the context of Reinforcement Learning. Additionally, in Eq.8, the symbol “e_i” represents a one-hot vector, wherein the i-th element assumes a value of 1 while all other elements remain 0.
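For concreteness, the one-hot vector $e_i$ described above can be constructed as follows (an illustrative snippet, not taken from the paper's code):

```python
def one_hot(i: int, n: int) -> list:
    # e_i: a length-n vector with a 1 at index i and 0 elsewhere.
    return [1 if j == i else 0 for j in range(n)]

assert one_hot(2, 5) == [0, 0, 1, 0, 0]
```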
●**Q4: How algorithm 1 works with continuous or discrete action space is not clear.**
We sincerely apologize for any confusion that may have arisen from our algorithm implementation. Allow us to clarify the algorithm implementation for both discrete and continuous action spaces in our work.
For the discrete action space, we utilized the QMIX algorithm, a well-known Multi-Agent Reinforcement Learning (MARL) Q-Learning method specifically designed for discrete action spaces as our backbone algorithm. Upon QMIX, we implement our counterfactual pessimism by the updating rule in Eq. 7.
For the continuous action space, we opted for the MADDPG algorithm, a MARL algorithm based on deterministic policy gradient, as our backbone algorithm. In this setting, in addition to the replacement of update rule for the Q function (as denoted by Eq. 7), we also adapted the update rules for the policy function $\pi$ (as represented by Eq. 10) to align with the nature of the continuous action space scenario.
We hope that these clarifications suffice to remove any ambiguity surrounding our algorithm implementation.
Reference:
[1] Foerster J, Farquhar G, Afouras T, et al. Counterfactual multi-agent policy gradients[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1).
---
Rebuttal 2:
Title: Further concerns
Comment:
Thank you for the authors' response. I echo the Reviewer dhyd's concerns regarding the writing quality, particularly the informal and inconsistent notations. Many methodological details are absent as well.
After reviewing the authors' response, I have additional concerns:
1. The distinctions among $\bar{Q}$, $\hat{Q}$, and ${Q}$ are not explained.
2. Line 6, which mentions “updating by $\hat{\mathcal{T}}_{\hat{\theta}}$ and $\hat{\mathcal{T}}_{\hat{\theta}}^{\pi_{\psi_t}}$”, is unclear; the exact difference between these two operators is missing.
3. The term "Ablation study" in Figure 3 seems misplaced; it appears to merely examine hyperparameters.
4. The rationale for introducing the parameter $\lambda$ in Eq (4) is unclear. Moreover, the definition of $\bar{\mathcal{E}}$ in Eq.(4) is omitted.
5. The term “counterfactual” is concerning to me as it seems to refer to a marginal distribution without $a^i$. The paper lacks a clear explanation of what "counterfactual Q-learning" means.
6. The reviewer did not find explanation of the relations between policy $\pi, \mu$ and $\beta$ in line 181. Why is the update of $\beta^{-i}$ not included in Algorithm 1, which is used in Eq.7?
---
Rebuttal Comment 2.1:
Title: Reply to the comment of Reviewer rt2G
Comment: Thank you for your thorough proofreading! We apologize for the oversight in explaining certain symbols, and we're committed to addressing this issue and ensuring their proper clarification. Your feedback is greatly appreciated, and we'll strive to make the necessary improvements.
**1. The distinctions among $\bar{Q}$, $\hat{Q}$, and $Q$ are not explained. The definition of $\bar{\mathcal{E}}$ in Eq. (4) is omitted.**
In principle, "\hat" refers to the symbols involved in MACQL, like the empirical $Q$ function, the empirical Bellman operator, the empirical TD error, etc. And "\bar" refers to the corresponding symbols involved in CFCQL.
Concretely, we employ $\hat{Q}$ to represent the empirical Q function after each iteration in MACQL, and the $\bar{Q}$ to represent the empirical Q function after each iteration in CFCQL. Similarly, $\bar{\mathcal{E}}$ serves a comparable role, signifying the empirical TD error in the loss of CFCQL, analogous to the function of $\hat{\mathcal{E}}$ in MACQL.
Regarding $Q$, it usually represents a variable that needs to be optimized (Eq. 1, 2, 3, 4, 7), or the actual $Q$ function obtained via exact policy evaluation (Theorem 4.1), distinguished from the empirical $Q$. We will give a detailed explanation of these symbols in the revised manuscript.
**2. What's the difference between the two operators in Algorithm 1, Line 6?**
These are two Bellman operators suitable for discrete and continuous action space, respectively. The main difference is that $\mathcal{T}$ uses the IGM principle to find the actions with max $Q^{tot}$ in the next time step, and $\mathcal{T}^{\pi}$ uses a parameterized policy to output the actions of the next time step. For detailed explanation, please refer to Section 3.2 "Value Functions in MARL".
**3. The term "Ablation study" in Figure 3 seems misplaced.**
Thank you for bringing this to our attention. We have made the necessary correction, now stating "hyperparameters examination."
**4. The rationale for introducing the parameter $\lambda$.**
The rationale behind parameter $\lambda$ is rooted in the understanding that uniformly penalizing the out-of-distribution (OOD) actions of each agent might not be optimal, considering that the OOD degrees across agents may vary. It's important to note that Theorem 4.2 necessitates only $\sum_i \lambda_i = 1$. Consequently, we introduce parameter $\lambda$ to impose higher penalties to agents displaying more significant deviations from the dataset. This is achieved under the constraint $\sum_i \lambda_i = 1$, ensuring that value function overestimation is avoided.
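One simple way to realize such a weighting under the constraint $\sum_i \lambda_i = 1$ is a softmax over per-agent OOD scores. This is an illustrative sketch of the idea, not necessarily the paper's exact scheme (the `temperature` parameter is our own addition):

```python
import math

def ood_weights(ood_scores, temperature=1.0):
    # Softmax over per-agent OOD degrees: agents deviating more from the
    # dataset receive a larger lambda_i, and the weights sum to 1,
    # satisfying the constraint required by Theorem 4.2.
    exps = [math.exp(s / temperature) for s in ood_scores]
    z = sum(exps)
    return [e / z for e in exps]

lams = ood_weights([0.1, 0.5, 2.0])
assert abs(sum(lams) - 1.0) < 1e-9   # constraint: weights sum to one
assert lams[2] == max(lams)          # most-OOD agent gets the largest weight
```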
**5. A clear explanation of what "counterfactual Q-learning" means.**
In MARL, counterfactual refers to considering the return of $a^i$ during the training process, taking into account the situations where other agents take different actions. Therefore, the marginal distribution of the $Q$ value without $a^i$ is obtained by taking the expectation over different actions of the other agents, and it only measures the impact of action $a^i$ on the $Q$ value [1]. In our context, when we pessimistically evaluate the $Q$ value for agent $i$, we keep the other agents' actions sampled from the dataset, which is equivalent to only considering the pessimistic impact of $a^i$ on $Q$ values. Therefore, our approach shares similar principles with the counterfactual method, so we adopted this term to refer to our method.
**6.The relations between $\pi$, $\mu$ and $\beta$. Why is the update of $\beta^{-i}$ not included in Algorithm 1?**
First, let us clarify the significance of $\beta$. $\beta$ denotes an implicit distribution over the dataset $D$, signifying that the dataset can be considered as being collected by executing the behavior policy $\beta$ in the environment. Consequently, $\beta$ remains fixed and cannot be updated. On the other hand, $\mu$ primarily emerges within the theoretical analysis of policy evaluation, and it can denote any policy different from the behavior policy $\beta$. The notation $\pi$ refers to the learning policy. In practical algorithms, we often opt for $\mu=\pi$ for simplicity's sake, which is also consistent with the treatment in CQL [2]. For ease of understanding, you can equate $\mu$ and $\pi$, treating them as the same, that is, the learning policy. This clarification will be accentuated in the revised manuscript.
Reference:
[1] Foerster J, Farquhar G, Afouras T, et al. Counterfactual multi-agent policy gradients[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1).
[2] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 1179-1191. | Rebuttal 1:
Rebuttal: ●Q1: Explanation on current baselines and more baseline results.
We acknowledge that the baselines used in our paper (OMAR, ICQ, and MADTKD) are designed for **offline** MARL, not off-policy MARL. We recognize that our paper lacks sufficient comparison with single agent offline RL methods except TD3-BC, the results of which have been listed in the original paper, Table 3 and Appendix Table 2.
Furthermore, following the request of Reviewer dhyd, we conducted additional experiments and have now incorporated the results of IQL and AWR into our comparison on **all** datasets appeared in the paper. To ensure better agent cooperation in continuous action spaces, similar to CFCQL, we applied the CTDE paradigm to both IQL and AWR. This involved utilizing a centralized Q/Value function while maintaining decentralized policies. Additionally, to enhance the stability of AWR, we adopted the more robust actor-critic alternate, AWAC[1].
The results on MaMuJoCo are listed below:
|Datasets|CFCQL|TD3-BC|IQL|AWAC|
|----------|--------|---------|-----|-------|
|Random|**39.7±4.0**|7.4±0.0|7.4±0.0|7.3±0.0|
|Med-Rep|**59.5±8.2**|27.1±5.5|58.8±6.8|30.9±1.6|
|Medium|80.5±9.6|75.5±3.7|**81.3±3.7**|71.2±4.2|
|Expert|**118.5±4.9**|114.4±3.8|115.6±4.2|113.3±4.1|
The results on MPE are listed below:
|Maps|Datasets|CFCQL|TD3-BC|IQL|AWAC|
|------|----------|--------|---------|-----|-------|
|CN|Random|**62.2±8.1**|9.8±4.9|5.5±1.1|0.5±3.7|
||Med-Rep|**52.2±9.6**|15.4±5.6|10.8±4.5|2.7±3|
||Medium|**65.0±10.2**|29.3±4.8|28.2±3.9|25.7±4.1|
||Expert|**112±4**|108.3±3.3|103.7±2.5|103.3±3.5|
|PP|Random|**78.5±15.6**|5.7±3.5|1.3±1.6|0.2±1.0|
||Med-Rep|**71.1±6**|28.7±20.9|23.2±12|8.3±5.3|
||Medium|**68.5±21.8**|65.1±29.5|53.6±19.9|50.9±19.0|
||Expert|**118.2±13.1**|115.2±12.5|109.3±10.1|106.5±10.1|
|World|Random|**68±20.8**|2.8±5.5|2.9±4.0|-2.4±2.0|
||Med-Rep|**73.4±23.2**|17.4±8.1|41.5±9.5|8.9±5.1|
||Medium|**93.8±31.8**|73.4±9.3|70.5±15.3|63.9±14.2|
||Expert|**119.7±26.4**|110.3±21.3|107.8±17.7|107.6±15.6|
The results on SMAC are listed below. No results are provided for TD3-BC on SMAC, since TD3-BC is only applicable to continuous action spaces, while SMAC has a discrete action space.
| Map | Dataset | CFCQL| IQL | AWAC |
| ------- | --------- | ------- | --------- | ---------------- |
|2s3z|Medium| **0.40±0.10** |0.16±0.04|0.19±0.05 |
||Medium_Replay|**0.55±0.07**|0.33±0.06 | 0.39±0.05|
||Expert|**0.99±0.01**|0.98±0.03| 0.97±0.03|
||Mixed|**0.84±0.09**|0.19±0.04| 0.14±0.04|
|3s_vs_5z|Medium| **0.28±0.03** |0.20±0.05|0.19±0.03 |
||Medium_Replay|**0.12±0.04**|0.04±0.04 | 0.08±0.05|
||Expert|**0.99±0.01**|**0.99±0.01**|**0.99±0.02** |
||Mixed|**0.60±0.14**|0.20±0.06|0.18±0.03 |
|5m_vs_6m|Medium| **0.29±0.05** |0.25±0.02|0.22±0.04 |
||Medium_Replay|**0.22±0.06**|0.18±0.04 |0.18±0.04 |
||Expert|**0.84±0.03**|0.77±0.03|0.75±0.02 |
||Mixed|0.76±0.07 |0.76±0.06| **0.78±0.02**|
|6h_vs_8z|Medium| 0.41±0.04|0.40±0.05| **0.43±0.06**|
||Medium_Replay|**0.21±0.05**|0.17±0.03 | 0.14±0.04|
||Expert|**0.7±0.06**|0.67±0.03| 0.67±0.03|
||Mixed|**0.49±0.08** |0.36±0.05|0.35±0.06 |
We have observed that CFCQL consistently outperforms IQL and AWAC, particularly on the Random dataset. This outcome aligns with our expectations, since both IQL and AWAC can be considered variants of weighted behavior cloning, and their performance heavily depends on the quality of the dataset used for training. In contrast, CFCQL does not require the learned policy to closely match the behavior policy, which enables it to excel on datasets of lower quality.
Reference:
[1] Nair, Ashvin, et al. "Awac: Accelerating online reinforcement learning with offline datasets." arXiv preprint arXiv:2006.09359 (2020). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining | Accept (poster) | Summary: This work proposes RapidBERT to train BERT in a faster way. Unlike previous acceleration methods, this work employs several recently popular efficiency-oriented Transformer designs as the basic modifications of the RapidBERT architecture, including FlashAttention, ALiBi, BF16, GLU, and low-precision LayerNorm. The training dataset is C4. Training takes 1.13 hours on 8 A100-80GB GPUs.
Results on the GLUE NLU benchmark show that, under a fair comparison except for the number of training steps, RapidBERT achieves more efficient training FLOPs than previous methods.
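For context, ALiBi (one of the modifications listed in the summary) replaces learned position embeddings with a linear attention-score penalty proportional to query-key distance. A minimal sketch of a single bias term (in the real method the slope is a fixed per-head constant):

```python
def alibi_bias(slope: float, query_pos: int, key_pos: int) -> float:
    # ALiBi adds a negative bias to the attention score proportional to the
    # query-key distance, instead of using learned position embeddings.
    return -slope * abs(query_pos - key_pos)

# Nearer keys are penalized less, so attention naturally favors nearby tokens.
assert alibi_bias(0.5, 10, 7) == -1.5
assert alibi_bias(0.5, 10, 10) == 0.0
```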
Strengths: - This work employs several recently released efficient Transformer techniques in BERT training, yielding RapidBERT.
- RapidBERT achieves NLU ability similar to vanilla BERT.
Weaknesses: - For a fair comparison, the authors should also report RapidBERT results pre-trained on English Wikipedia and the Books Corpus. (The original BERT study trained on English Wikipedia and the Books Corpus.)
- Similar studies have been proposed in the same faster BERT pretraining track; it would be better to have a comprehensive, detailed comparison with them to help readers learn more about this field.
- In line 176, this work says that "we modify the vocab size from 30522 to 30528" inspired by MEGATRON so that the vocab size is a multiple of 64, and leads to a non-trivial throughput speedup. However, in MEGATRON's work, they pad the vocab because of the 8-way model parallelism. But this work does not employ any tensor parallelism or pipeline parallelism, could you explain why such an operation can lead to a speedup in the training of RapidBERT?
- Most of the new techniques (e.g., FlashAttention) tried in this work have already been employed and integrated into the BERT pretraining stage in some frameworks, e.g., Megatron and DeepSpeed [1][2]. In this sense, this work is not especially novel and may not be the first to do this; it would be better to discuss the differences with these similar works. (Actually, it is inspiring to see more work in the efficient LM pretraining field, so this point is not very influential; I just brought it up.)
[1] https://github.com/NVIDIA/Megatron-LM
[2] https://github.com/microsoft/DeepSpeed
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Besides the added modules (e.g., FlashAttention, ALiBi, etc.), there are also other recently released efficient Transformer architectures (e.g., RoPE, DeepNorm, etc.); have the authors tried them in RapidBERT? Could such methods help accelerate the training period further? It would be interesting to see if these methods could help.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Similar studies have been proposed in the same faster BERT pretraining track; this work does not compare with them in much detail.
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments.
> The authors should also post the RapidBERT results pre-trained on the English Wikipedia and the Books Corpus
The primary focus of our work was to show that certain architectural modifications and training choices lead to both a speedup and an improvement in downstream accuracy. We were less concerned with the well-worn path of “beating” the GLUE accuracy of the original BERT, or with getting the highest GLUE scores (which requires larger models trained for much longer). Even if we had chosen to train on 40 epochs of English Wikipedia and Books Corpus, it would be difficult to benchmark directly against the original BERT, which had many differences. For example, our BERT baseline and many other BERT variants such as RoBERTa use an MLM pretraining objective instead of MLM + NSP, a larger batch size (the original paper used a batch size of 256), etc.
We chose to run all experiments on the same, high quality, contemporary dataset (C4) so that we could explore the combined effects of architectures and training configuration (as opposed to data quality).
> Similar studies have been proposed…comprehensive comparison with them in detail to help readers learn more about this field.
We have included an extended comparison with similar studies in the updated manuscript.
> In line 176, this work says that "we modify the vocab size from 30522 to 30528" inspired by MEGATRON so that the vocab size is a multiple of 64, and leads to a non-trivial throughput speedup...could you explain why such an operation can lead to a speedup in the training of RapidBERT?
This was simply an empirical observation that we made when benchmarking our models, even in the absence of model parallelism. GPUs compute matrix multiplication by breaking matrices into tiles, and padding the vocab size aligns the matrix dimensions to a multiple of a fast tile size. CUDA has a heuristic which it uses to pick what tile size it uses for matrix multiplication, and it often chooses a suboptimal one when there is a “weird” matrix side length. Because of this, we and others have found that manually padding matrix dimensions can lead to non-trivial performance gains.
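Concretely, the padding described above amounts to rounding the vocabulary size up to the next multiple of 64 (a minimal sketch of the arithmetic):

```python
def pad_vocab_size(vocab_size: int, multiple: int = 64) -> int:
    # Round up so the embedding / output-projection matrix dimension aligns
    # with GPU-friendly tile sizes, letting CUDA pick a fast matmul kernel.
    return ((vocab_size + multiple - 1) // multiple) * multiple

assert pad_vocab_size(30522) == 30528  # the padding used in the paper
```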
> it would be better to discuss differences with these similar works
We thank the reviewer for this comment, and have updated our manuscript to include a discussion of DeepSpeed and MEGATRON
> some recently released efficient transformer architectures (eg, RoPE, DeepNorm, etc)
We have a growing list of proposed efficiency improvements to transformer architectures, and are excited to explore this in future work. | Summary: The paper benchmarks several architectural changes to BERT that allow for more efficient pretraining. More specifically, the paper adds FlashAttention, ALiBi position representations, and GLU activations to the original BERT architecture. For fair comparison, they re-implement the baseline BERT-Base using the same pretraining hardware. They show that with these architecture modifications they can achieve better performance on downstream tasks while using less pretraining time. Their base model’s performance is similar to BERT-base in (Devlin et al., 2018) using approximately 1 hour of pretraining on 8 A100 GPUs, costing $20 on a standard cloud provider. They also benchmark the training cost of each architecture modification, showing that reducing the sizes of the GLU matrices helps to significantly improve training throughput.
Strengths: The paper provides empirical evidence that combining FlashAttention, ALiBi position representations, and GLU activations reduces the training cost by 50%. Their ablation on GLU will benefit researchers working in related areas.
Weaknesses: The contribution of this paper is limited. The proposed architecture modifications are all adopted from prior work. Their primary findings mostly come from tuning the hyperparameters of GLU. And their current findings depend on multiple architecture modifications, which is complicated and makes their scientific findings unclear. I’d hope that the authors can provide more experiments showing why all these modifications are necessary and whether it’s possible to simplify it further for a clearer takeaway message.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. What is “Pareto Optimal”? This term is mentioned multiple times but never formally defined in the paper
2. There appears to be a throughput difference in Figure 5 between “lpLN+GLU+ALiBi” and “GLU 3072”. What leads to the difference?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The contribution of this paper is limited.
> I’d hope that the authors can provide more experiments showing why all these modifications are necessary.
We have added many additional ablations to demonstrate the individual contributions of each of the changes to the architecture in the Author Rebuttal PDF. This is our best attempt to address the reviewer’s feedback. However, this feedback is vague and unactionable; we hope the reviewer will give us concrete and constructive feedback after the author response period so that we can improve our paper.
> The contribution of this paper is limited. The proposed architecture modifications are all adopted from prior work.
We strongly but respectfully disagree with this comment. The literature is full of papers that propose a single, novel improvement to an architecture like BERT (e.g., the many papers we cite and build on). However, there is very little work assessing what this collection of papers actually amounts to: whether these improvements fit together and combine to produce something even stronger, or whether they are different, mutually exclusive paths to the same gains. We believe that it is crucial to take stock of scientific progress in the manner that we have in our paper. We hope the reviewer recognizes the importance of this kind of contribution.
> their current findings depend on multiple architecture modifications, which is complicated and makes their scientific findings unclear.
> simplify it further for a clearer takeaway message
We strongly but respectfully disagree with the reviewer’s assumption that there should be a simple, straightforward result. None of the neural network architectures we train today are simple; each comprises a multitude of small design choices and tweaks that combine to lead to a strong result. ResNets, for example, rely on a combination of residual connections, batchnorm, global average pooling, specific initialization schemes, and particular sets of data augmentation. The goal of our paper is establishing a new baseline for future work on improving BERT, and that baseline must combine all of the best techniques available. Our proposed baseline is no more complicated than any other state-of-the-art architecture in use today.
> Their primary findings mostly come from tuning the hyperparameters of GLU.
Our primary contribution is building and extensively benchmarking one of the fastest architectures for BERT pretraining. We believe this is an important contribution to the academic and engineering communities, as fast, cheap pretraining enables ML practitioners to iterate quickly.
We believe that the reviewer has misunderstood our primary contribution to be the change of the dimension of the intermediate layer for the Gated Linear Unit.
> What is “Pareto Optimal”?
We thank the reviewer for bringing to our attention that we do not explicitly define the term “Pareto optimal,” and have fixed this in the updated manuscript.
In order to make meaningful improvements in neural network training efficiency, ML practitioners must be able to compare between different choices of network architectures, hyperparameters, and training algorithms. One straightforward way to do this is to characterize the tradeoff between accuracy and training time with a “tradeoff curve” or a “Pareto curve.” Pareto curves can be generated by varying the length of training for each model configuration; longer training runs take more time but tend to reach higher quality. For a fixed model and task configuration, this method of generating tradeoff curves is an estimate of the theoretical Pareto frontier, i.e. the set of all of the best possible tradeoffs between training time and accuracy, where any further attempt to improve one of these metrics worsens the other.
We therefore consider one model Pareto optimal relative to another if it has superior accuracy across different training budgets while keeping everything else fixed. Many studies advertise novel architecture approaches without measuring wall clock time, or show an improvement for a single training duration. In our paper we show that RapidBERT-Base is Pareto optimal relative to our BERT-Base baseline for both short training durations and long training durations (Figure 2). We also show that BERT-Large and RapidBERT-Large are not Pareto optimal relative to BERT-Base and RapidBERT-Base for training durations under ~30 hours.
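A minimal sketch of how such an empirical frontier can be extracted from measured (wall clock time, accuracy) checkpoints; the function name and the data points are illustrative, not from our experiments:

```python
def pareto_frontier(points):
    """Return the (time, accuracy) points not dominated by any other.

    A point dominates another if it is at least as fast and at least
    as accurate, and strictly better on one of the two metrics.
    """
    # Sort by time ascending; break ties by accuracy descending.
    pts = sorted(points, key=lambda p: (p[0], -p[1]))
    frontier, best_acc = [], float("-inf")
    for time, acc in pts:
        if acc > best_acc:  # strictly improves on every faster point
            frontier.append((time, acc))
            best_acc = acc
    return frontier

# Illustrative checkpoints: (pretraining hours, GLUE score)
runs = [(1.0, 78.0), (2.0, 79.6), (3.0, 79.5), (4.0, 80.4)]
print(pareto_frontier(runs))  # -> [(1.0, 78.0), (2.0, 79.6), (4.0, 80.4)]
```

The point (3.0, 79.5) is dropped because the faster (2.0, 79.6) checkpoint already achieves higher accuracy.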
> throughput difference in Figure 5
In Figure 5A, “lpLN+GLU+ALiBi” is bert-base with the additional pieces of low precision LayerNorm, GLU, and ALiBi on top of the classic BERT architecture. In Figure 5B, “GLU 3072” indicates the complete RapidBERT architecture, which includes full-sized GLU, unpadding, FlashAttention, optimal vocab size, masked language modeling ratio of 30%, etc. We indicated this in the caption for Figure 5B (“Throughput of the ‘complete’ RapidBERT-Base with different intermediate sizes for GLU”), but will elaborate on this to be clearer.
---
Rebuttal 2:
Title: Check rebuttal
Comment: @Reviewer WcGd,
Does the rebuttal address your concerns? Could you read it and update your review accordingly? | Summary: The paper introduces RapidBERT, an architecture and training paradigm for pretraining BERT-style language models that is cost-effective. The proposed approach incorporates several modifications into the conventional transformer encoder block, including FlashAttention, Attention with Linear Biases, Gated Linear Units, Unpadding Module, and a low precision LayerNorm. The pretraining process avoids the Next Sentence Prediction task and follows the RoBERTa practices, using the C4 dataset with a 30% masking ratio in MLM.
RapidBERT demonstrates faster convergence and achieves a better accuracy versus time Pareto curve during pretraining. Their base model consistently outperforms the original BERT model on average across the GLUE dev tasks. Remarkably, the training process only takes 1.13 hours using 8 A100 GPUs, resulting in a cost of approximately $20.
The authors show that RapidBERT is Pareto-optimal when compared to BERT for both base and large models. They also highlight the necessity of extensive training for larger models, as RapidBERT-Base outperforms both BERT-Large and RapidBERT-Large for a significant portion of their training.
In addition, the paper includes a comprehensive ablation analysis of the design choices made in RapidBERT and evaluates the throughput of each architecture. The results indicate that the GLU modification leads to a decrease in throughput, while ALiBi has minimal impact on throughput. On the other hand, incorporating low precision LayerNorm significantly improves throughput.
Overall, the paper presents RapidBERT as an efficient and effective approach for pretraining BERT-style language models, showcasing its superior performance, cost-effectiveness, and the benefits of the proposed modifications.
Strengths: - The paper introduces a time and compute-efficient approach for pretraining BERT-like models, offering significant advantages over previous works. This approach holds promise for pretraining task-specific language models that do not require high parameter requirements, making it highly practical and valuable in various applications.
- The implementation details provided in the paper are clear and comprehensive, contributing to the reproducibility of the research. The authors effectively explain the concepts and methodologies, ensuring a thorough understanding of the proposed approach.
- The research conducted on related work demonstrates a comprehensive exploration of the existing literature. The design choices made in this study are grounded in well-established prior works, and the ablation study further strengthens the validity and effectiveness of the proposed modifications.
- The paper presents an intriguing finding regarding the impact of model size on performance in specific domains. The authors discover that larger models may not always yield superior results due to limited data availability and increased compute time. This insight, demonstrated by the Pareto optimality of RapidBERT-Base over BERT-Large and RapidBERT-Large for a significant portion of the training, adds a valuable contribution to the understanding of model performance and scalability.
Weaknesses: ~- The approach primarily combines existing techniques, such as FlashAttention, GLUs, and low precision LayerNorm. While the authors introduce the novel aspect of maintaining a 30% masking ratio, the overall novelty of the architectural choices is limited.~
~- It would have been beneficial to include a discussion comparing the proposed approach with the original RoBERTa model as a baseline. This comparison would provide a clearer understanding of the improvements achieved by RapidBERT and highlight its unique contributions.~
- The paper lacks a discussion on how the architectural choices made in RapidBERT could facilitate the development of larger language models, such as GPT-3.5. Exploring the potential scalability and benefits of these choices for larger models would enhance the practical relevance and implications of the research.
- The ablation studies do not investigate the effects of changing the masking ratio. Including an analysis of different masking ratios would offer insights into the choice behind keeping the ratio to be 30%.
- The paper would benefit from discussions on the comparison with RoBERTa, the scalability of the approach to larger models, and the effects of different masking ratios in the ablation studies.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can you provide a rationale for considering BERT as a "strong" baseline in your study? Why was RoBERTa not chosen as the baseline for comparison, and what factors led to the selection of BERT as the benchmark model?
~- It would be helpful to gain insights into the absence of numerical instabilities in bfloat16. Could you provide any intuition or theoretical reasoning behind why it doesn't happen in your case?~
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The discussion on limitations is satisfactory. There are some limitations discussed in the appendix regarding potential training stability issues on the larger models. Potential negative societal impacts (ease of pre-training could lead to ease of more biased/inappropriate models) is also discussed in broader impacts section. General limitations with any pre-trained LMs will apply to RapidBERT.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and suggestions.
> approach primarily combines existing techniques…the overall novelty of the architectural choices is limited
We strongly but respectfully disagree with the reviewer’s concerns about novelty. The literature is full of papers that propose a single, novel improvement to an architecture like BERT. (E.g., the many papers we cite and build on.) However, there is very little work assessing what this collection of papers actually amounts to: whether these improvements fit together and combine to produce something even stronger, or whether they are different, mutually exclusive paths to the same gains. We believe that it is crucial to take stock of scientific progress in the manner that we have in our paper, and we emphasize the novelty of (1) the nontrivial work we needed to perform to get these many improvements to work well together and (2) the fact that we sought to combine many existing methods rather than invent yet another new technique of uncertain value. We hope the reviewer recognizes the importance of this kind of contribution and the novelty inherent in performing this work.
> It would have been beneficial to include a discussion comparing the proposed approach with the original RoBERTa model as a baseline
We appreciate the reviewer’s comments regarding RoBERTa, and have elaborated on this in the updated version of the manuscript. RoBERTa is a “training recipe” for BERT that preserves the network architecture but changes hyperparameters and training data. The RoBERTa paper showed that the training dataset is incredibly important; for example, RoBERTa trained on 350GB of data while BERT only trained on 16GB of data. The RoBERTa paper also showed that the next sentence prediction (NSP) objective was superfluous, and showed that increasing the batch size led to accuracy gains. These RoBERTa training choices have become the standard in the field, and our general training recipe choices resemble RoBERTa more than the original BERT (as we mention in the paper).
Since our goal was to show that both our architectural and training recipe choices led to strict accuracy vs. wall clock time Pareto improvements, we wanted to make sure that we were comparing our models with a very strong baseline. Therefore both BERT and RapidBERT used the same dataset (C4), tokenizer, learning rate schedule, batch size and device microbatch size, hardware, etc. The initial untrained BERT architecture and the RoBERTa architecture are the same, so the setup would be the same for our strong baseline. We have included more reported RoBERTa values in the Appendix of our updated manuscript as a helpful reference point.
> how the architectural choices … could facilitate the development of larger language models
We thank the reviewer for making this point, and have expanded on this in our updated manuscript. All of the architectural and configuration choices made in RapidBERT can in principle be applied to larger models, although they might have different effects on throughput. Additionally, they should not pose any issues with different forms of parallelism (some of the modifications we explored were used successfully at large scale in frameworks like MEGATRON). For example, Gated Linear Units should have no issues with tensor parallelism as long as the matrix multiplications are a multiple of the TP world size. As models are scaled, LayerNorm becomes a smaller portion of the total compute, so low precision LayerNorm might not matter as much.
> The ablation studies do not investigate the effects of changing the masking ratio
Multiple studies have shown that the change to the MLM masking ratio from 15% (in the original BERT) to 30% leads to a small but significant improvement in downstream GLUE performance. In “Should You Mask 15% in Masked Language Modeling?” Wettig et al. find that constant MLM masking ratios above 15% lead to improved average GLUE and SQuAD scores for bert-base and bert-large. Similarly, in the recently published paper “Dynamic Masking Rate Schedules for MLM Pretraining,” Ankner et al. find that a constant MLM masking ratio of 30% consistently outperforms a MLM masking ratio of 15% for BERT-base. We have elaborated on this in the updated manuscript.
We have also included an updated figure in the Author Rebuttal PDF showing that an MLM ratio of 30% leads to a small improvement in the downstream GLUE score over MLM 15% without affecting the wall clock time.
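For concreteness, a simplified sketch of constant-ratio MLM masking. This is illustrative only: every selected position becomes [MASK], omitting the 80/10/10 replacement rule of the full BERT recipe, and the function name is hypothetical:

```python
import random

MASK_ID = 103  # the [MASK] token id in the bert-base-uncased vocabulary

def mask_tokens(token_ids, mask_ratio=0.30, seed=0):
    """Mask a fixed fraction of positions for MLM pretraining.

    Simplified sketch: every sampled position is replaced by [MASK];
    the real BERT recipe replaces 80% with [MASK], 10% with random
    tokens, and keeps 10% unchanged.
    """
    rng = random.Random(seed)
    n_mask = max(1, int(len(token_ids) * mask_ratio))
    positions = set(rng.sample(range(len(token_ids)), n_mask))
    masked = [MASK_ID if i in positions else tid
              for i, tid in enumerate(token_ids)]
    # Labels hold the original ids at masked positions and the
    # conventional ignore index (-100) everywhere else.
    labels = [tid if i in positions else -100
              for i, tid in enumerate(token_ids)]
    return masked, labels
```

With a 20-token input and a 30% ratio, 6 positions are masked per sequence.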
> the absence of numerical instabilities in bfloat16.
As we were prototyping our RapidBERT architecture and training recipe, we were surprised to find that low precision (bfloat16) LayerNorm afforded significant speedups without compromising stability. While instabilities such as loss spikes are strongly coupled to the learning rate schedule and other hyperparameters, we found low precision LayerNorm to be a strict Pareto improvement when benchmarking wall clock time and downstream accuracy in our hyperparameter regime. We don’t have theoretical intuitions for why numerical instabilities didn’t occur in our hyperparameter regime. We should note however that our learning rate schedule was not overly aggressive (warmup followed by linear decay with peak values well within the range of published studies), and that this could have mitigated loss spikes. Finally, we should also note that the NVIDIA Apex library and Megatron both use a form of low precision LayerNorm in their code, but it is not documented in any papers that we could find. We have elaborated on this in our updated manuscript. | Summary: This paper proposes a new efficient recipe for training BERT, matching the original performance of BERT on GLUE in ~1h on 8x A100. To do so, the authors leverage a number of architectural/implementation improvements: FlashAttention, ALiBi, GeGLU, `bf16` layer norm, unpadding, and tweaks to the masking ratio. The authors find that their recipe is optimal even when considering longer runs, and that it transfers well to BERT-Large.
Strengths: * **S1.** Reducing the costs associated with training language models can help practitioners iterate faster, improving downstream research outcomes. This makes this paper potentially valuable to the community, as it improves upon previously introduced similar recipes such as CrammingBERT.
* **S2.** The authors detail their contributions and open-source their code, making these results reproducible and enabling the community to build upon them.
* **S3.** The authors feature an updated baseline (BERT-Base) that helps for fair comparisons in their setup.
Weaknesses: * **W1. It is difficult to untangle individual contributions to the final result.**
* **W1.1.** The proposed recipe visibly has some positive impact on GLUE score, as Figure 3 shows that it achieves significantly better performance than BERT-Base after the same amount of training. However, that impact is never quantified in the paper. Here, the ablations should not only focus on throughput, but also on how the proposed interventions might impact the GLUE score.
* **W1.2.** The proposed baseline is very strong (which is a positive point), and it would be good in Table 1 to also showcase the time it takes to reach a 79.6 GLUE score. Furthermore, it would be interesting to identify what makes the baseline such a strong one (with a final score much higher than BERT-Base). Is it the change in data to C4 (which the authors acknowledge as an important factor l161)? Something else?
* **W1.3.** l112 the authors discuss using ALiBi to pretrain with a shorter sequence length and extrapolate at test time. It's unclear if this ends up being used, and if it is included in the ALiBi ablations.
* **W1.4.** l251 it is disappointing to not ablate adequately every component, especially since the value of this paper lies in having a potentially systematic approach to performance improvements of BERT models.
* **W2. The Pareto frontiers described may be slightly misleading, as they do not account for LR schedule.** The so-called Pareto optimality is obtained by taking points from the same run, instead of having one run per pretraining budget on the plot. This approximation penalises intermediate budgets, as they are evaluated with an incomplete LR schedule. While I don't think this has a significant influence in this work, since the authors discuss Pareto optimality so much, this should at the very least be clearly discussed as a limitation to avoid misleading other authors. This issue is particularly relevant, as it has led to significant misunderstandings around scaling laws for instance (see Hoffmann et al., 2022).
* **W3.** The paper feels a bit repetitive, as if it had been stretched to fit the 9 pages of content. Section 3.2 and 3.3 are particularly egregious in this regard, and more time could instead be spent on ablations.
* **W4.** (minor nits) l98 reference to Triton should cite "Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations" (Tillet et al., 2019); l145 the sentence "results from NVIDIA and others" is confusing, as the final work cited is not from NVIDIA -- there should be a citation somewhere for the NVIDIA results.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: This has the potential to be a valuable paper to the community, by systematically identifying and detailing interventions that can accelerate the training of large language models like BERT. Reducing iteration costs is in particular a great enabler for researchers. Unfortunately, the lack of systematic ablations on GLUE score and the limited throughput ablations make this paper fall somewhat short of its promise. Accordingly, I am rating it as a **Borderline Reject (4)** but would be willing to increase my score to an accept should some of my concerns be addressed.
**EDIT: following rebuttal, I have updated my score to a Weak Accept (6).**
* **Q1.** Could the authors explain why Figure 3 shows a significant improvement in GLUE score for RapidBERT over BERT? Would it be possible to ablate for this improvement, to ultimately better understand which intervention improves throughput and which improves "modelling performance" (i.e., GLUE score)?
* **Q2.** Could the authors explain why the baseline used is so much above the original BERT-Base?
* **Q3.** Could the authors clarify whether they use for RapidBERT a shorter training seqlen to accelerate training as proposed?
* **Q4.** Could the authors provide further performance ablations for FlashAttention/unpadding?
* **Q5.** (more of a suggestion) Could the authors better describe the limitation of their approach regarding Pareto-optimality?
Small suggestions:
* l13/14 in the abstract "When pretrained from scratch on the C4 dataset, this base model achieves the downstream average GLUE score of 79.6 in 1.13 hours on 8 A100 80 GB GPUs at a cost of roughly USD 20" it would be good to mention this is the GLUE score achieved by the original BERT; for instance "RapidBERT achieves the same downstream average GLUE score as the original BERT (79.6) in 1.13 hours on 8 A100 80 GB GPUs at a cost of roughly USD 20".
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: While there is no explicit section, some limitations in terms of scope are discussed in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and suggestions, and we try to address them here.
**W1** One of the primary goals in this paper is to develop a high-performance BERT architecture and recipe for pretraining on commercially available high-end hardware (A100-80GB GPUs). We agree with the reviewer that ablations should focus on downstream performance in addition to throughput, and have included updated plots with the individual effects of GLU, low precision LayerNorm, etc. on both wall clock time and GLUE scores.
**W1.1 (How ablations affect GLUE score)** We ran further ablations in response to the reviewer’s comments and have plotted downstream GLUE accuracy as a function of measured pretraining wall clock time (WCT). We provide figures in the Author Rebuttal PDF. The patterns here shed light on the individual effects of various architectures (e.g. BERT+GLU, BERT+low precision LayerNorm) and training configurations (e.g. BERT + 30% masking ratio). On average, all methods seem to provide a slight accuracy boost to BERT-base. Increasing the masking ratio to 30% leads to a slight accuracy boost while not affecting the WCT, and turning off dropout in the attention layer leads to a slight improvement in both accuracy and WCT. Low precision LayerNorm leads to a significant speedup. Gated Linear Units (GLU) add more parameters to the model and lead to a significant slowdown while providing an accuracy boost. As a point of reference, we also benchmark the full RapidBERT as well as RapidBERT without FlashAttention and with dropout set to 0.1 (the standard BERT-base configuration).
**W1.2 (Strong baseline)** Thank you for this suggestion. We will include the time it takes for our strong baseline to reach a 79.6 GLUE score in Table 1. We are confident that the C4 dataset is the main reason for the strong BERT baseline relative to the original BERT (Devlin et al. 2018). The original BERT was trained for 40 epochs on Wikipedia + Books Corpus (16GB of text for a single epoch). C4 contains Wikipedia and Books Corpus, as well as much more data (roughly 350 GB). Besides the use of C4, we note some other differences from the original BERT paper: 8xA100 80 GB GPUs, MLM pretraining objective instead of MLM + NSP, a sequence length of 128 throughout the entirety of pretraining (instead of starting with 128 and then expanding to 512 towards the end of pretraining), larger batch size (the original paper used a batch size of 256), an increased vocab size of 30,528 (the original used a vocab size of 30,000, while the default Hugging Face vocab size for the bert-base-uncased tokenizer is 30,522), and the use of PyTorch 1.13. We used the original bert-base-uncased tokenizer for all our models; it is likely that a more modern tokenizer would improve GLUE results as well.
**W1.3 (ALiBi)** Our main motivation for including ALiBi was to allow for long sequence extrapolation and/or finetuning in downstream use cases. There is increasing demand in the ML community for long context windows and alternatives to positional embedding schemes. We agree with the reviewer that our submission did not decisively show that ALiBi extrapolates well at test time. Pretraining for both BERT and RapidBERT was done at a sequence length of 128, while finetuning was done at a sequence length of 512 (we have made this clear in the updated manuscript). Unfortunately the GLUE benchmark does not have many examples with sequence lengths above 512 tokens, and is therefore not a great benchmark for testing sequence length extrapolation.
**W1.4 (Ablations)** We agree with the reviewer that ablations here should not only focus on throughput but also downstream GLUE accuracy. We have provided updated experiments on GLUE score vs. wall clock time for BERT, BERT+GLU, BERT+low precision LayerNorm, BERT+ mlm 30% ratio, BERT+no attention dropout, RapidBERT, and RapidBERT-FlashAttention in the Author Rebuttal PDF. In the final version of the manuscript, we will also include results for BERT+ALiBi, BERT+ALiBi+GLU, etc.
**W2 (Pareto Curve)** The Pareto plots in Figures 1-4 are constructed by taking points from the same runs (with the same learning schedule), instead of having one run per pretraining budget on the plot. We did this for cost reasons; it would have been too expensive to do every run separately. We emphasize that this approximation penalizes intermediate budgets, as they are evaluated with an incomplete LR schedule, so we expect that our results would be even stronger had we used the learning rate schedule corresponding to the training time for each point.
**W3 (Repetitive)** We appreciate this feedback, and we will cut some of the unnecessary language from sections 3.2 and 3.3.
**W4 (References)** Thank you for these suggestions. We have updated the manuscript to properly cite the Triton paper and the correct NVIDIA reference for unpadding (which was NVIDIA’s MLPerf v1.1 submission).
**Q1:** We hope that the additional ablation experiments shed light on the large gap between RapidBERT and BERT in Figure 3. Each of the individual architecture modifications lead to either a downstream accuracy boost or a speedup; combined (RapidBERT) they lead to the full Pareto improvement over BERT.
**Q2:** We have addressed this question in comment W1.2
**Q3:** We pretrain with sequence length 128 for both the baseline BERT and for RapidBERT. We therefore do not count this as a speedup method. We then finetune both BERT and RapidBERT with sequence length 512.
**Q4:** We have provided some FlashAttention ablations in the Author Rebuttal PDF.
**Q5:** We address the question of Pareto optimality in W2
---
Rebuttal Comment 1.1:
Title: Answer to the rebuttal
Comment: First, I would like to thank the authors for providing an extensive rebuttal to each reviewer, as well as additional results.
I believe that this additional information+discussions significantly improve the quality and value of the paper. I will trust that the authors update their paper based on the rebuttals they have provided.
Accordingly, I have updated my score to a **Weak Accept (6)**.
Re: as a small clarification on W2, I agree that this does not impact the results presented -- I just pointed to the need for mentioning this explicitly (that intermediary results are penalised), as I believe this is an important point of confusion in the community.
---
Reply to Comment 1.1.1:
Title: We thank the reviewer for their comments and appreciate the updated score.
Comment: We thank the reviewer for their comments and appreciate the updated score.
We agree with the reviewer that the nuances of learning rate schedules and Pareto curves are an important point of confusion in the community, and will make sure to discuss this explicitly in the final version of the paper. | Rebuttal 1:
Rebuttal: We ran further ablations in response to reviewer comments and have plotted downstream GLUE accuracy as a function of measured pretraining wall clock time. The comments here describe the additional experiments in detail.
The patterns in Figures R1 and R2 shed light on the individual effects of various architectures (e.g. BERT+GLU, BERT+low precision LayerNorm) and training configurations (e.g. BERT + 30% masking ratio). On average, all methods seem to provide a slight accuracy boost to BERT-base. Increasing the masking ratio to 30% leads to a slight accuracy boost while not affecting the WCT, while turning off dropout in the attention layer (BERT+drpt=0) leads to a slight improvement in both accuracy and WCT. Low precision LayerNorm (BERT+lpLN) leads to a significant speedup (i.e. a shift to the left). Gated Linear Units (BERT+GLU) add more parameters to the model and lead to a significant slowdown while providing an accuracy boost. As a point of reference, we also benchmark the full RapidBERT as well as RapidBERT without FlashAttention and with attention dropout set to 0.1 (the standard BERT-base configuration).
All BERT-Base models here were pretrained on C4 for 70,000 steps with batch size 4096, and microbatch size 256 on 8xA100 80GB GPUs. All models were initialized with the same seed and shared all other hyperparameters including the bert-base uncased tokenizer, the learning rate schedule, AdamW as the optimizer, etc. Final pretraining checkpoints were then finetuned on GLUE following the details in the appendix of our paper. The points represented in these GLUE plots are final finetuning checkpoints.
These plots highlight the importance of benchmarking with Pareto curves, as it is not possible to tell from these plots alone whether training BERT-base for 2 more hours leads to better performance than BERT+GLU (for example).
* BERT-base: standard BERT-base (110M parameters) with attention dropout=0.1 and feedforward dropout=0.1, vocab size set to 30522, MLM=15% (all Hugging Face standard configurations)
* BERT+drpt=0: standard BERT-base, except that the dropout in the attention layer is set to 0 instead of the default 0.1
* BERT+GLU: standard BERT-base, with GLU for the feedforward component of the encoder block
* BERT+lpLN: standard BERT-base, except with low precision LayerNorm (bfloat16)
* BERT+mlm30: standard BERT-base, except with a masked language modeling masking ratio of 30%
* RapidBERT: the complete RapidBERT detailed in the paper, including GLU (where the dimension of the intermediate layer is 3072), ALiBi, low precision LayerNorm, unpadding, MLM 30%, vocab size 30528 (a multiple of 64) and the attention dropout=0.
* RapidBERT-FlashAttn+drpt=0.1: RapidBERT _without_ Flash Attention and _with_ the attention dropout=0.1
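As a rough illustration of the BERT+GLU modification listed above, here is a minimal numpy sketch of a gated feedforward block; the shapes, the GELU-style gate, and all names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def gelu(z):
    # tanh approximation of GELU, used here as the gate nonlinearity
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

def glu_ffn(x, w_in, w_gate, w_out):
    """Gated-Linear-Unit feedforward sketch: the intermediate activation
    is gated elementwise by a second projection (w_gate), which is the
    extra-parameter cost noted above for BERT+GLU."""
    return (gelu(x @ w_in) * (x @ w_gate)) @ w_out

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))        # (batch, hidden)
w_in = rng.standard_normal((8, 16))    # hidden -> intermediate
w_gate = rng.standard_normal((8, 16))  # gating projection (the added parameters)
w_out = rng.standard_normal((16, 8))   # intermediate -> hidden
out = glu_ffn(x, w_in, w_gate, w_out)  # shape (2, 8)
```

The extra `w_gate` matrix is why the list above notes that GLU "adds more parameters to the model" relative to a plain feedforward block.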
Pdf: /pdf/403669202969318eb36dc78a5c67a6246ceb4a2a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a training recipe that can train a BERT-style encoder model efficiently (1.13 hours on 8 GPUs). The recipe combines several techniques including a higher masking rate for MLM, bf16, optimized vocabulary size. The model is trained on the C4 dataset for 1.13 hours and achieves 79.6 on GLUE. The paper conducts ablation studies to characterize properties of the architecture design choices.
Strengths: * The paper presents very impressive results of pre-training a BERT-like model very efficiently without loss in accuracy. In general, I believe that this is a solid outcome and it is definitely a good addition to the community, as it can enable more researchers to pretrain custom BERT models from scratch.
* The paper presents ablation experiments that quantify the impact of each proposed change.
* The paper is well written and easy to follow. Replicating the results should not be hard.
Weaknesses: * The experiments are only conducted on GLUE tasks; it is not clear how well the trained model transfers to other datasets.
* There is no fair comparison (i.e., under the same hardware setup) of the proposed recipe and other efficient training methods such as CrammingBERT.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The paper uses a 30% MLM masking ratio for RapidBERT instead of 15% for BERT. How do you compare the improvement from this change to the gains from your architectural modifications?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I didn’t see a clear potential negative societal impact of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions.
> The experiments are only conducted on GLUE tasks; it is not clear how well the trained model transfers to other datasets.
We chose to focus exclusively on the GLUE benchmark for multiple reasons. Since GLUE is a classic benchmark, fine-tuning on GLUE makes it easy to compare results across both recent and more “classic” papers (e.g. the original BERT). On a more practical level, the data presented in our paper represents hundreds of GPU hours, and we had to make the pragmatic decision to extensively benchmark on one dataset instead of benchmarking on multiple datasets in a limited way.
> There is no fair comparison (i.e., under the same hardware setup) of the proposed recipe and other efficient training methods such as CrammingBERT.
While some of the architecture modifications in our paper are shared with Cramming BERT, including FlashAttention and Gated Linear Units (GLU), we consider our work complementary in the following ways:
* We try to build and benchmark the fastest pretraining recipe with the highest performance on top-end hardware, while Cramming BERT focuses on the low-resource scenario (one consumer GPU for one day). In that sense, it focuses on one lower-performance point on the Pareto frontier, whereas we focus on the entire Pareto frontier.
* We evaluate multiple model sizes (i.e. both RapidBERT-Base and RapidBERT-Large), while Cramming BERT only analyzes BERT-Base
* We share full Pareto curves for BERT Base and Large as well as throughput and accuracy ablations (see Author Rebuttal PDF for further ablations). We believe this extensive data will be exceedingly useful to practitioners. We have already open-sourced our code and hope that the results from hundreds of GPU hours will be useful to the ML community
> The paper uses a 30% MLM masking ratio for RapidBERT instead of 15% for BERT. How do you compare the improvement from this change to the gains from your architectural modifications?
Multiple studies have shown that the simple change to the MLM masking ratio from 15% (in the original BERT) to 30% leads to a small but significant improvement in downstream GLUE performance. In “Should You Mask 15% in Masked Language Modeling?” Wettig et al. find that constant MLM masking ratios above 15% lead to improved average GLUE and SQuAD scores for bert-base and bert-large. Similarly, in the recently published paper “Dynamic Masking Rate Schedules for MLM Pretraining,” Ankner et al. find that a constant MLM masking ratio of 30% consistently outperforms a MLM masking ratio of 15% for bert-base. We have elaborated on this in the updated manuscript.
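The masking-ratio change under discussion is a one-line modification to the MLM data pipeline; a simplified numpy sketch (illustrative only — real BERT masking also keeps or randomizes a fraction of the selected positions):

```python
import numpy as np

def mlm_mask(token_ids, ratio, mask_id, rng):
    """Replace a fixed fraction of positions with the [MASK] id.
    ratio=0.15 is the original BERT setting; the papers cited above
    find a constant ratio of 0.30 slightly better downstream."""
    ids = np.asarray(token_ids).copy()
    n_mask = int(round(len(ids) * ratio))
    masked_pos = rng.choice(len(ids), size=n_mask, replace=False)
    ids[masked_pos] = mask_id
    return ids, masked_pos

rng = np.random.default_rng(0)
tokens = np.arange(100)                       # stand-in token ids
masked, pos = mlm_mask(tokens, 0.30, -1, rng) # 30 of 100 positions masked
```

Because only the masking step changes, the wall clock time per step is unaffected, consistent with the figure described below.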
We have also included an updated figure in the Author Rebuttal PDF showing that an MLM ratio of 30% leads to a small improvement in the downstream GLUE score over MLM 15% without affecting the wall clock time. | null | null | null | null | null | null |
No Representation Rules Them All in Category Discovery | Accept (poster) | Summary: This paper works on generalized category discovery, a setting that requires classifying unlabelled samples according to the taxonomy defined by the labeled set. This work identifies that a shortcut exists in previous benchmarks, and methods overlooking the classification metric defined by the labeled set could still perform well. It thus proposes the CLEVR-4 benchmark, in which the taxonomy of categories is varied across 'shape', 'texture', 'color', or 'count'. Based on this benchmark, it evaluates the performance of previous methods in terms of pre-training/representation and pseudo-labeling and proposes $\mu$GCD. The technique shows better performance on both the proposed benchmark and prior benchmarks.
Strengths: *Originality & significance.* The proposed benchmark is absolutely novel and original. It identifies a key issue in current GCD benchmarks: unsupervised classification algorithms are generally specialized for semantics, and the task could be easily solved without relying on the labeled set. The benchmark is in line with a recent trend of aligning machine outputs with human requirements: what is the definition needed by humans? I am happy to see this benchmark and the revisiting of previous works that it enables.
*Quality.* The benchmark is accompanied by careful evaluations of different aspects of previous methods (representation & classification), and analytical figures (fig. 3/6/7) that help understand the source of improvement. Ablation study and error bars are also provided, which is good and makes the contribution sound.
*Clarity* The presentation is natural and fluent, and I did not find issues in delivery.
*Reproducibility.* Technical details are well provided to facilitate reproduction. Hope to see the dataset soon.
Weaknesses: I have to say that my major concerns are anticipated and well discussed in the appendix. There is one small concern on methodology and I list it in the next section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: One concern on methodology is whether $\mu$GCD could be listed as a major contribution. It is somehow technically incremental to me and is more like SimGCD Plus (momentum teacher & weak-strong aug) rather than a new method. I understand it suits the proposed benchmark better, and I feel okay with the overall contribution of this paper, the only concern is whether could it be listed as a side-by-side contribution with the benchmark.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: have been discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our work!
We first clarify that we will publicly release all code and the Clevr-4 dataset upon paper acceptance. Secondly, regarding the phrasing of our method (**“one concern is whether $\mu$GCD could be listed as a major contribution”**), we propose to clarify our methodological contribution as: “we present a novel algorithm for GCD, $\mu$GCD, which extends the SimGCD method with a mean-teacher based approach”. We are happy to answer any further concerns should they arise.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors and I confirm my positive recommendation. I have read the responses from other reviewers and I look forward to seeing the final version (e.g., discussions with DMC & CS).
---
Rebuttal 2:
Title: The novelty of the contributed benchmark is not absolute
Comment: To make sure that the community is not guided by a misconception from this review of the paper, I would like to point out that the novelty of the contributed Clevr-4 benchmark is **not absolute**, as similar lines of research have been conducted in domains like DMC and CS.
But I agree with the point that the proposed benchmark could help with the current research on GCD.
Yet limitations like the small number of classes could limit the potential usefulness of the proposed clevr-4 dataset in practice.
---
Rebuttal Comment 2.1:
Comment: Thanks for pointing out those related works. I was not aware of DMC & CS before and I indeed find them highly related. I agree that taking them into the discussion could certainly benefit this paper, which is promised in the authors' reply. And the major contribution of bringing such a trend into the GCD community is still acceptable. I am also aware of the limitations of the proposed dataset in terms of its small scale and synthetic nature, which are also discussed in the paper. Overall, the aforementioned improvements could definitely make it a better work, but from my expertise, it already reaches the bar of NeurIPS. | Summary: This paper contributes one new dataset and one new method for category discovery. The dataset is designed so that each image can be clustered into 4 different clusterings based on attributes of the image (shape, count, texture, and color); this design can be used to reveal the difficulty of unsupervised clustering and the necessity of category discovery methods. The proposed method extends the SimGCD method by adding an EMA teacher and strong-weak augmentations.
Strengths: 1. I really like the discussion of the limitation of unsupervised clustering method on self-supervised representations.
2. The design of the Clevr-4 dataset is interesting, and could be a direction for future works.
Weaknesses: 1. I would like to note a few related fields, (1) deep multiple clustering [R1, R2] which deals with unsupervised clustering for different clustering criteria, and (2) conditional similarity [R3, R4]
2. The proposed method seems to be a straightforward extension of the SimGCD method; I think it is not significant enough to be a claimed contribution.
3. I think the discussion of the experiments of previous methods on Clevr-4 is not comprehensive. As mentioned in lines 262-269, augmentations seem to play an important role for category discovery, so I think the discussion on Clevr-4 should contain some experiments using different augmentations for different splits.
[R1] AugDMC: Data Augmentation Guided Deep Multiple Clustering, https://arxiv.org/abs/2306.13023
[R2] A Diversified Attention Model for Interpretable Multiple Clusterings, TKDE2022
[R3] Learning Similarity Conditions Without Explicit Supervision, CVPR 2019
[R4] Towards Latent Attribute Discovery From Triplet Similarities, ICCV 2019
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am interested in how applying the semi-supervised k-means algorithm to the pretrained models in tab 2 would perform, because using semi-supervised k-means is IMO the easiest way to inject a certain clustering criterion into the model.
2. Since the proposed method is similar to SimGCD, how about applying the strong augmentation to SimGCD or GCD and comparing with that performance? It seems that the strong augmentation plays an important role in the performance, as shown in line 2 of tab 6.
3. I am also interested in the performance of tuning the augmentations used by the SimGCD and GCD methods on Clevr-4. The paper claims that `misaligned augmentations degrade performance`, yet there is no experiment supporting this claim.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the methodology contribution of this paper is limited, the proposed method seems like a combination of FixMatch and SimGCD.
The proposed Clevr-4 might be of interest to the community, but as I discussed in the weakness and questions section, I think currently the discussion on Clevr-4 is not comprehensive enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback, and hope to address their concerns as follows:
**“How would applying semi-supervised $k$-means algorithm on pre-trained models in tab 2 perform”**:
We have re-evaluated a representative selection of the backbones from Tab 2 in the paper, in both unsupervised and semi-supervised k-means settings, for all four Clevr-4 taxonomies. The results are shown in the Author Response PDF (Fig R2). Overall, we find that though semi-supervised k-means can sometimes marginally improve performance on some taxonomies, it is *far from sufficient* to overcome the bias from the model’s pre-training. We suggest that this points to the utility of Clevr-4 as a test-bed for GCD, as well as a probing set for biases in pre-trained representations.
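The semi-supervised variant differs from plain k-means only in pinning labelled points to their ground-truth clusters. A toy numpy sketch makes this concrete; the deterministic initialisation and all names here are illustrative assumptions, not the exact baseline implementation:

```python
import numpy as np

def ss_kmeans(X, y_lab, n_lab, k, iters=20):
    """Semi-supervised k-means sketch: the first n_lab rows are labelled
    and stay pinned to their ground-truth cluster; only unlabelled rows
    are re-assigned each iteration."""
    centers = X[:k].astype(float).copy()   # simple deterministic init (illustrative)
    assign = np.zeros(len(X), dtype=int)
    assign[:n_lab] = y_lab
    for _ in range(iters):
        # squared distances from every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign[n_lab:] = d[n_lab:].argmin(1)   # labelled rows never move
        for c in range(k):
            members = X[assign == c]
            if len(members):
                centers[c] = members.mean(0)
    return assign, centers

# Two well-separated blobs; one labelled point anchors each cluster id.
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 1.0], [10.0, 11.0]])
assign, centers = ss_kmeans(X, y_lab=np.array([0, 1]), n_lab=2, k=2)
```

The pinned assignments inject the labelled taxonomy into the clustering, but — as the results above show — this alone is far from sufficient to override a biased representation.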
**“The discussion on Clevr-4 should contain some experiments of using different augmentation for different splits”**:
In the Author Response PDF (Tab R1), we have now included experiments of the GCD baseline trained on different taxonomies with different augmentations. Specifically, for the *count* split, we trained the model with ‘CutOut’, which blacks out a large portion of the image and hence hides a number of the objects. On the *color* split, we trained the model with ‘ColorJitter’, which disturbs the colors in the image. As such, these augmentations (though commonly used in the vision literature) are ‘misaligned’ with the respective taxonomies, and hence result in substantial performance degradation. We find that such failure modes, though quite intuitive, are difficult to isolate with existing GCD (or SSL) datasets, thus underlining the use case for Clevr-4.
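The failure mode described above can be reproduced with a toy sketch of CutOut-style erasing (a numpy stand-in; actual training would use standard vision transforms, e.g. those provided by torchvision):

```python
import numpy as np

def cutout(img, size, rng):
    """Black out a square patch. On the Clevr-4 *count* taxonomy this is
    a 'misaligned' augmentation: the patch can hide objects and thereby
    change the very attribute the label depends on."""
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out

rng = np.random.default_rng(0)
img = np.ones((8, 8))          # stand-in image: 64 'object' pixels
aug = cutout(img, 4, rng)      # 16 pixels erased wherever the patch lands
```

Analogously, a ColorJitter-style transform perturbs exactly the attribute that defines the *color* taxonomy, which is why both lead to the degradation reported in Tab R1.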
We thank the reviewer for the suggestion, and we will update our manuscript to include this analysis and discussion.
**“...how about applying the strong augmentation to SimGCD or GCD and compare with that performance”**:
This SimGCD experiment is almost exactly the same as the experiment from L(4) of the ablation in Tab 6. In this case, we train our model with $\omega(t)$ always equal to 0, thereby using the weights from the last iteration as the ‘teacher’. Here, we find that performance degrades by 3%, pointing to the importance of the slowly updated mean-teacher used in $\mu$GCD. We thank the reviewer for raising this point, and we will clarify it in the text.
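For concreteness, the mean-teacher update under discussion can be sketched in a few lines of numpy (a minimal illustration; the real $\omega(t)$ schedule and model weight structure are more involved):

```python
import numpy as np

def ema_update(teacher, student, omega):
    """Mean-teacher update: teacher <- omega * teacher + (1 - omega) * student.
    omega == 0 reduces to copying the last-iteration student (the degraded
    setting described above); omega near 1 gives a slowly evolving teacher.
    Lists of arrays stand in for model weights."""
    return [omega * t + (1.0 - omega) * s for t, s in zip(teacher, student)]

teacher = [np.array([1.0, 2.0])]
student = [np.array([3.0, 4.0])]
copied = ema_update(teacher, student, 0.0)  # teacher becomes the student
slow = ema_update(teacher, student, 0.9)    # teacher moves only slightly
```

Setting $\omega(t) = 0$ throughout collapses the teacher onto the student, which is the ablation whose 3% drop is reported above.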
**“I would like to note a few related fields…[deep multiple clustering (DMC), conditional similarity (CS)]”**:
We thank the reviewer for pointing out these related problems, they are certainly relevant and we will discuss them fully in the updated manuscript. We will also conduct investigations to see if techniques from these fields can be brought into GCD. Similarly, we hope that our introduced Clevr-4 benchmark can also find use in DMC/CS research.
**“The proposed method seems to be a straightforward extension of the SimGCD”**:
We thank the reviewer for the comment and we propose to clarify our methodological contributions as: “we present a novel algorithm for GCD, $\mu$GCD, which extends the SimGCD method with a mean-teacher based approach”.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: **1. Results of semi-supervised $k$-means:**
Thanks for performing the experiments, I think this could be a valuable add-on for the paper.
**2. Results on using aligned augmentations:**
I think given the performance boost of using an aligned augmentation, this should definitely be mentioned in the main paper.
As a reader of the paper, my first thought on the different taxonomy of categories is that augmentations could play a key role here.
I would like to also note a few references on how augmentations influence the learned representations [a,b].
Adding these discussions would make the paper stronger.
I would encourage the author to include these discussions, as they further complete the paper.
Given that the response resolves my concern, and these changes to the paper is easy to make, I would remain positive about this paper.
[a] Amortised Invariance Learning for Contrastive Self-Supervision, ICLR 2023.
[b] Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks.
---
Reply to Comment 1.1.1:
Title: Further author response to reviewer dGiY
Comment: We thank the reviewer for the response and are glad that we could resolve their concern. Regarding their further raised points:
**“[SS-K-Means experiments] could be a valuable add on to the paper”** and **“I would encourage authors to include these discussions [on mis-aligned augmentation]”**:
Yes, we will include both of these sets of experiments, and corresponding discussions, in the updated manuscript. While the augmentation results are described in L268, we agree with the reviewer that the message is strengthened with the results provided.
We will also include discussion of further literature the reviewer suggests [a,b]. As with the DMC and CS literature, we hope that Clevr-4 can be complementary to these papers, providing a test-bed for controlled experimentation.
[a] Amortised Invariance Learning for Contrastive Self-Supervision, ICLR 2023
[b] Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks.
---
Rebuttal 2:
Title: Kind reminder to reviewer dGiY
Comment: Kind reminder: the "Limitations" section is not intended for the reviewers' opinions (this should be part of "Weaknesses"), and it simply asks whether limitations are discussed in the paper (yes or no). The original hint is as follows:
> Have the authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work (refer to the checklist guidelines on limitations and broader societal impacts: https://neurips.cc/public/guides/PaperChecklist)? If not, please include constructive suggestions for improvement. Authors should be rewarded rather than punished for being up front about the limitations of their work and any potential negative societal impact.
---
Rebuttal Comment 2.1:
Comment: The original hint does not say "Limitations section is not intended for the reviewers' opinions". I would like to point out that this is only reviewer aYTM's personal interpretation of the limitations sections. And this interpretation should not be used to argue with other reviewers.
The contents I put in the limitation section are indeed my view of the limitation of the paper presented at submission time.
---
Rebuttal Comment 2.2:
Title: Reasons of the limitations
Comment: **Limitation 1**
> I think the methodology contribution of this paper is limited, the proposed method seems like a combination of FixMatch and SimGCD.
This limitation refers to the original claim of the paper
> We present a novel method for GCD, µGCD, inspired by the mean-teacher algorithm.
According to https://neurips.cc/public/guides/PaperChecklist on the first point:
>Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
I think this is a limitation because the claim in the paper is not accurate, also as pointed out in reviewer aYTm's review, the proposed muGCD method is not very novel.
**Limitation 2**
> The proposed Clevr-4 might be of interest to the community, but as I discussed in the weakness and questions section, I think currently the discussion on Clevr-4 is not comprehensive enough.
This is based on the claim in the paper:
> Clevr-4 contains four independent taxonomies and can be used to better study the category discovery problem.
Since the Clevr-4 benchmark is synthetic, and limited in the number of categories per taxonomy, I argue that this claim is also not accurate and is a limitation of the paper.
Referring to https://neurips.cc/public/guides/PaperChecklist, the fourth point :
> Reflect on the scope of your claims
The claim that Clevr-4 is helpful for better studying the category discovery problem is small in scope (synthetic and few categories).
Also, from the results of using aligned augmentations, we can see that Clevr-4 actually shows that augmentations matter, which is a studied problem in representation learning [a,b].
[a] Amortised Invariance Learning for Contrastive Self-Supervision, ICLR 2023.
[b] Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks. | Summary: 1. This paper tackles the problem of generalized category discovery (GCD) and identifies the drawbacks of existing methods that they are verified only with labels for a single clustering of the data. In such a case, the model may simply perform unsupervised clustering, not correctly using the available labels.
2. A synthetic dataset, named “Clevr-4” is proposed which contains four independent taxonomies, i.e., shape, texture, color or count.
3.The authors demonstrate the limitations of unsupervised clustering in the GCD setting and propose a new method, µGCD, based on mean teachers, which outperforms existing baselines on Clevr-4 and sets a new state-of-the-art on the Semantic Shift Benchmark (SSB).
Strengths: 1. The motivation of this paper is clear and interesting. The authors propose that existing GCD methods may simply perform unsupervised clustering based on the natural grouping of the data. The issue is overlooked by existing methods and is indeed worth investigating.
2. A new benchmark is proposed which provides four independent sets of labels for a common dataset, which is valuable to extensively verify GCD methods’ ability to extrapolate the taxonomies specified by the labeled data using different pre-trained feature extractors.
Weaknesses: 1. Additional recent methods should be compared:
Pu N, Zhong Z, Sebe N. Dynamic Conceptional Contrastive Learning for Generalized Category Discovery[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 7579-7588.
2. Fig. 2 is misleading. Why are there three 'student predictions'? Do the arrows between them mean that the three losses act sequentially, rather than all together?
3. Since the proposed method adopts a two-stage training strategy, i.e., the proposed method could be regarded as an additional 100 epochs of training based on GCD, the training cost could become a concern.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Additional ablation study on lambda_1 which is used to balance the supervised and unsupervised loss terms, should be added to verify the hyperparameter sensitivity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The motivation of verifying GCD methods on taxonomies beyond merely the semantic labels is reasonable. The experiments and analysis are comprehensive. A limitation of the method would be its limited technical novelty. Code is not provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and respond to their concerns as follows:
**“Additional recent methods should be compared…[DCCL @ CVPR 2023]”**
We thank the reviewer for bringing this interesting paper to our attention, we were unaware of DCCL @ CVPR 2023 as CVPR 2023 was held after the NeurIPS submission deadline. We will include its results and discuss it in the related work. However, we note that its performance is lower than SimGCD and $\mu$GCD on the Semantic Shift Benchmark (against which we compare), thus confirming that our claim of ‘SoTA’ on the SSB remains valid, even after this paper is taken into account.
**“Additional ablation study on $\lambda_1$…”**:
We note that we inherit the value of $\lambda_1$ from prior work (the GCD baseline and SimGCD). However, we have now also ablated its effect on $\mu$GCD in the Author Response PDF (Fig R1). Overall, we find that while performance degrades at extreme values of $\lambda_1$ – i.e. near 0 (with only the unsupervised loss) or near 1 (with only the supervised loss) – $\mu$GCD is robust to changes in $\lambda_1$ in the range $0.1 \leq \lambda_1 \leq 0.4$.
We thank the reviewer for raising this point and we will include this analysis in the updated manuscript.
**“...the training cost [of $\mu$GCD] could become a concern”**
We found the training time of all models to be feasible under an academic compute budget, with models being trained on a single NVIDIA M40 or P40 in roughly 4 hours on Clevr-4, and in roughly 15 hours on the SSB datasets (see Appendix L250). Furthermore, we found that the losses of all baseline methods had plateaued after 200 epochs, verifying that the improved performance of $\mu$GCD is not simply due to longer training.
**“Fig.2 is misleading. Why are there three 'student predictions'?”**:
The student model only produces one set of predictions, and we show these three times in the diagram to visualize the three losses (which are summed and optimized jointly). We agree that the figure may be confusing and will update the diagram, and accompanying text, to clarify.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their answers to my questions. Overall, I think it is interesting to propose the "taxonomy issue" which is neglected by recent GCD research. Conceptually, the proposed problem is novel; technically, muGCD is less novel. I would like to keep my positive rating.
---
Rebuttal 2:
Title: Kind reminder to reviewer s1RT
Comment: Kind reminder: the "Limitations" section is not intended for the reviewers' opinions (this should be part of "Weaknesses"), and it simply asks whether limitations are discussed in the paper (yes or no). The original hint is as follows:
> Have the authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work (refer to the checklist guidelines on limitations and broader societal impacts: https://neurips.cc/public/guides/PaperChecklist)? If not, please include constructive suggestions for improvement. Authors should be rewarded rather than punished for being up front about the limitations of their work and any potential negative societal impact. | Summary: This paper addresses generalized class discovery (GCD) from a unique perspective. The authors argues that current GCD benchmarks are unable to ascertain whether models are using the available labels to solve the GCD task, or simply solving an unsupervised clustering problem. In light of this, this paper introduces a new dataset where the data can be clustered according to different rules, i.e., object count, shape or texture. This paper also proposes a simple method that effectively solves such a problem.
Strengths: - The motivation of this paper is sound.
- The paper is well-written and easy to follow.
- The contribution is convincing.
Weaknesses: - The introduced dataset is focused on synthetic data. It could be more convincing to evaluate on real-world data.
- The proposed method is simple yet powerful, but it's unclear which mechanism solves the GCD problem when multiple clustering rules are available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The proposed method is a simple reinvention of existing techniques that are common in the semi- and self-supervised literature. As mentioned above, the question is how such a simple method solves the proposed GCD problem when different clustering rules are acceptable?
I thank the authors for the rebuttal, which addressed most of my concerns. Please include the modifications mentioned in the discussion in the final version.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have been carefully discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's feedback on our work. We hope to address their concerns as follows:
**“The introduced dataset is focused on synthetic data. It could be more convincing to evaluate on real-world data”**:
We found it difficult to find an appropriate real dataset which contains sufficiently complete annotations to construct several taxonomies over the same images. The closest dataset we found was CUB-200-2011, which contains annotations for different attribute types (e.g., ‘head color’, ‘bill shape’, etc.). However, upon investigation, we found the annotations to be too noisy to build a proper benchmark.
We discuss this issue in the limitations (L331) and the difficulties of constructing real-world datasets of this form in Appendix 6.1.
Given the difficulties of finding an appropriate real dataset, we consider that the advantages of a synthetic dataset — which allows precise manipulation of the images and hence a more controlled study of the GCD problem — outweigh the disadvantages.
**“...unclear which mechanism [in muGCD] solves the GCD problem when multiple clustering rules are available”**:
In Fig 3 (left), we find that the GCD baseline method can, to some extent, learn a representation which distinguishes images according to the desired taxonomy. Our intuition is to begin from this feature space and train a linear head on top with strong pseudo-labels (generated with the mean-teacher approach), so that the head becomes specialized for the desired taxonomy. We thank the reviewer for raising this important point, and we will update the manuscript to extend our discussion in L216-234.
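For illustration, a minimal numpy sketch of the mean-teacher pseudo-labelling idea described above, assuming a linear head trained on top of frozen backbone features. This is a hedged sketch, not the authors' implementation; all names, shapes, and hyperparameters (8 samples, 16-d features, 4 clusters, momentum 0.99) are hypothetical.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Mean-teacher update: teacher weights track an exponential
    moving average (EMA) of the student weights."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))       # frozen backbone features for a batch
student_w = rng.normal(size=(16, 4))   # linear head over 4 hypothetical clusters
teacher_w = rng.normal(size=(16, 4))

for _ in range(10):
    pseudo = softmax(feats @ teacher_w)               # teacher's soft pseudo-labels
    probs = softmax(feats @ student_w)                # student predictions
    grad = feats.T @ (probs - pseudo) / len(feats)    # cross-entropy gradient
    student_w -= 0.5 * grad                           # student steps toward pseudo-labels
    teacher_w = ema_update(teacher_w, student_w)      # teacher follows slowly
```

In practice the teacher would score weakly-augmented views while the student is trained on strongly-augmented views; the sketch omits augmentation for brevity.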
---
Rebuttal Comment 1.1:
Comment: The point about how muGCD solves the problem of multiple clustering GCD is great.
I also share this concern, and I think the response here is a bit vague.
Since we have observed a huge performance gain from changing the augmentations, doesn't that mean the real mechanism for solving GCD lies in the augmentations the model uses (as augmentations influence the representation space)?
---
Rebuttal Comment 1.2:
Comment: I thank the authors for the response. I understand the difficulty of obtaining real-world data. I agree with Reviewer dGiY that the current response about the real mechanism is still vague. I would appreciate it if more theoretical/empirical analysis could be provided to strengthen the motivation of this paper. At least, readers could still benefit from a discussion of which augmentation favors which kind of taxonomy, if the true mechanism is indeed the augmentation.
---
Reply to Comment 1.2.1:
Comment: We understand the concerns of reviewers **V8PZ** and **dGiY**.
Concretely, there are two sources of external information (aside from the raw images) which we use to learn the taxonomy:
(1) The ground-truth labels for the ‘Old’ classes
(2) The augmentations in the contrastive loss
Both sources are used in some way in the baselines, which explains our finding that the GCD baseline can ‘to some extent’ identify the correct taxonomy (Fig 6 and the comment above).
We suggest that $\mu$GCD outperforms these baselines by: (1) integrating the labels into a stronger pseudo-labelling framework (see L226); and (2) carefully selecting augmentations for use with a ‘strong/weak’ augmentation strategy (see L262). We note that, while augmentations are important (see L262, results in Tab R1), the design choices in the pseudo-labelling framework are *also important* (see ablations in Tab 6).
We thank both reviewers for raising this point, and *we will provide further analysis on these factors in the final version*. For instance, we will provide visualizations of the learned feature space (similar to Fig 6) when different design choices are made, e.g., when mis-aligned augmentations are used. | Rebuttal 1:
Rebuttal: **Global response**:
We sincerely thank all reviewers for the time they spent reviewing our manuscript, and for their thoughtful feedback. We are encouraged that the reviewers found: the ideas *‘unique’* and *'overlooked by previous methods'* (V8PZ, s1RT); our proposed dataset *‘interesting’*, *'valuable'* and *'absolutely novel and original'* (dGiY, s1RT, aYTM); and overall our analysis to be *'comprehensive'* and our contributions *'sound'* and *'convincing'* (s1RT, V8PZ, aYTM).
We have provided detailed responses to individual reviewers below, and have provided additional experiments suggested by the reviewers in the Author Response PDF. We also clarify that we plan to publicly release all code and the Clevr-4 dataset.
Pdf: /pdf/55cb6bcf6d10fcc9e304beb340eb162da7064751.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Many-body Approximation for Non-negative Tensors | Accept (poster) | Summary: The authors propose a tensor decomposition approach for non-negative tensors, inspired by many-body interactions in physics, based on the Legendre decomposition and convex optimization with a natural gradient-based algorithm. The tensor is interpreted as a probability measure over the multidimensional discrete space given by its indices, and the complexity is regulated by the order and map of interactions between variables (this makes it possible to avoid the difficult problem of choosing the optimal rank of the decomposition). Experimental results demonstrate the good properties of the proposed method, in terms of accuracy and computation time, in comparison with a number of alternative low-rank approximation methods.
Strengths: The proposed method looks novel and interesting, and it has benefits like a convex optimization formulation and a lack of rank parameters to tune. The work is well structured, written in a good style and easy to read.
Weaknesses: 1. From the presented numerical experiments, the actual advantages of the proposed approach over baselines are not quite clear to me. For the case of a dataset with graphic images, what is the actual quality of the low-rank approximation (it would be useful to give examples of reconstructed images)?
2. As far as I know, there are effective rank-adaptive methods of low-rank approximations (rank-adaptive TT-cross method, rank-adaptive tensor ring, etc.), see, e.g., "Adaptive rank selection for tensor ring decomposition" (2021). Will the proposed approach be superior to such modern methods?
3. I did not see estimates of the computational complexity of the proposed method. What dimensions limit the applicability of the approach, i.e., can we use it, for example, for the 100-dim tensors?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. [Line 23] "More recently, tensor networks [4] have been introduced...". I do not fully agree with this position. Tensor networks have been used for a very long time in the scientific community, including quantum applications (e.g., DMRG), and also the noted decompositions (CP, Tucker) represent a case of the tensor network. It seems to me that this paragraph should be substantially reformulated. It also seems appropriate to note such decompositions as MERA, PEPS, etc.
2. [Line 268] It might be worth refining the positioning of the graphics so that the section title (3.2) isn't at the end of the page.
3. [Line 330] "error[s] are..."
4. [Lines 349-350] "visualizing interactions between modes ... to see activated interactions between modes". Perhaps this sentence should be reformulated to avoid repeating the phrase.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The method can only be applied to tensors with non-negative elements. There are also likely to be problems associated with high computational complexity when trying to apply this approach to significantly multidimensional data arrays.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: I appreciate your comments. We provide our point-by-point response to each of your comments in the following.
### In the "Weaknesses" section
> 1. From the presented numerical experiments, the actual advantages of the proposed approach over baselines are not quite clear to me.
Many-body approximation has two significant practical advantages over baselines, the global convergence and intuitive model design:
- Many-body approximation always converges to the globally optimal solution, as described in lines 112-114 and Section A. Thus we do not have to rerun the algorithm with different initial values. In contrast, the results presented in Section 3.3 for the baselines are the best results after multiple trials with random initial values (see line 326).
- Many-body approximation has an advantage in terms of model selection. For example, it does not require hyperparameter tuning, as described in Figure 14 and Section E.2, while still achieving recovery fit scores comparable to the baselines. This can be a significant advantage of the proposed method in practice. In traditional low-rank approximation, hyperparameter tuning based on prior knowledge is difficult because the semantics of the rank is unclear. In contrast, the interactions in many-body approximation can be tuned easily by considering the expected associations among the modes of a given tensor (see more discussion in Sections 3.1 and C.2 in Appendix).
> 1. For the case of a dataset with graphic images, what is the actual quality of the low-rank approximation?
Please refer to the global response for the reconstruction images. Also, Figure 11 in Appendix shows that the proposal has the better reconstruction quality for the traffic dataset compared to two traditional low-rank approximations, non-negative Tucker decomposition and non-negative tensor-train decomposition.
> 2. Will the proposed approach be superior to rank-adaptive methods?
We would like to argue that our proposal has two significant advantages over rank-adaptive methods: convexity and intuitive interaction tuning.
Rank-adaptive methods typically repeat low-rank approximation so that the reconstruction error becomes smaller than a threshold. This strategy requires a threshold as an additional hyperparameter and more computational cost, as shown in Table III in [a]. In addition, since it involves non-convex optimization, each iteration of low-rank approximation cannot find the global optimum due to initial-value dependency, as described in Section IV in [a]. Moreover, interpreting or understanding the meaning of the obtained rank is difficult as it is not on the original feature space. In contrast, since our proposal is based on convex optimization, there is no initial-value dependency; thus, we do not have to run the algorithm multiple times with random initial values to find a better approximation. Furthermore, as described in Sections 3.1, 3.2, and C.1, the interaction is more interpretable than the traditional rank.
[a] Sedighin, Farnaz, Andrzej Cichocki, and Anh-Huy Phan. "Adaptive rank selection for tensor ring decomposition." IEEE Journal of Selected Topics in Signal Processing 15.3 (2021): 454-463.
> 3. I did not see estimates of the computational complexity of the proposed method.
As described in line 117, our method is based on the natural gradient, and its computational complexity is cubic with respect to the number of parameters to be optimized. For example, the number of parameters for cyclic two-body approximation is described in equation (12). Also, we provide a computational speed comparison in Figure 10 in Appendix.
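As a rough illustration of why the cost is cubic in the number of parameters (a hedged sketch with hypothetical names and sizes, not the authors' implementation): one natural-gradient step solves a linear system in the Fisher information matrix, which costs $O(p^3)$ for $p$ free parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50                               # number of non-zero natural parameters (illustrative)
grad = rng.normal(size=p)            # ordinary gradient of the KL objective
A = rng.normal(size=(p, p))
fisher = A @ A.T + np.eye(p)         # stand-in symmetric positive-definite Fisher matrix

# One natural-gradient step: solve fisher @ step = grad, an O(p^3) operation.
step = np.linalg.solve(fisher, grad)
theta = np.zeros(p)
theta -= step                        # theta <- theta - fisher^{-1} @ grad
```

The cubic scaling comes from the dense solve; structure-exploiting or approximate Fisher inverses would reduce it, as the authors note for larger problems.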
> 3. What dimensions limit the applicability of the approach, i.e., can we use it, for example, for the 100-dim tensors?
It depends not on the dimensionality but on the number of non-zero natural parameters. Please also refer to the discussion about computational complexity in line 117. For huge-size tensors and much higher-order interactions, we need to consider efficient variations of the natural gradient, which can be our future work.
### In the "Questions" section
> 1. Tensor networks have been used for a very long time in the scientific community, including quantum applications (e.g., DMRG), and also the noted decompositions (CP, Tucker) represent a case of the tensor network. It seems to me that this paragraph should be substantially reformulated. It also seems appropriate to note such decompositions as MERA, PEPS, etc.
Thank you for pointing it out. We agree that tensor networks have a long history and have been used to evaluate wave functions and partition functions in physics. What we wanted to say in that paragraph is that tensor networks, introduced in physics, have recently been used in the machine learning community as well. To avoid misleading readers, we will revise that paragraph as follows:
More recently, tensor networks, initially introduced in physics, have become popular in the machine learning community because they enable intuitive and flexible model design; examples include tensor train decomposition [24], tensor ring decomposition [34], and tensor tree decomposition [2]. Nowadays, tensor structures traditional in physics, such as MERA [b] and PEPS [c], are also used in machine learning.
[b] 10.1088/2632-2153/abffe8 [c] 10.1103/PhysRevB.103.125117
We will also address issues 2, 3, and 4 in the camera-ready version. We appreciate your careful reading.
### In the "Limitations" section
> There are also likely to be problems associated with high computational complexity when trying to apply this approach to significantly multidimensional data arrays.
Although this is a general problem of tensor factorizations, we would like to clarify that, as shown in Figure 10 in Appendix, our method works on tensors with tens of millions of elements (20×20×20×20×20×20 = 64,000,000) on an Intel Xeon Gold 5218 CPU with 128GB of memory (see Section E). Moreover, we have provided a way to handle larger tensors in Section B-2 in Appendix.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and the accompanying graphic images.
1. I do not fully agree with your statement `the interaction is more interpretable than the traditional rank`. For example, in Lines 292-295 you repeatedly refer to the intuition behind the choice of interactions, but such intuition is available only in a rather limited number of practical problems.
2. With regard to computational complexity, I meant the general form of the expected dependence on the problem's dimension (the order of the tensor in terms of your work) and the number of elements in each mode. For example, for tensor ring and tensor train decompositions, the complexity is usually positioned as linear in these quantities (subject to limited rank). Your approach, as far as I understand, has a significantly higher complexity, however, for the case of the dimensions considered in your work, this is not a problem.
3. I noticed that in Section 3.3, in the paragraph "Synthetic data", you missed the link to the Figure from the appendix; I advise you to add it to the final text.
However, these comments are not fundamental. The approach you suggested seems interesting to the scientific community, and you provided relevant clarifications to my initial comments in the review, so I'm raising my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: We appreciate your careful reading of our response and acknowledging our contributions.
We will revise our text according to your third suggestion in the camera-ready version. | Summary: The paper proposes a new type of tensor decomposition, based on considering an interaction network among the different modes of a tensor. Nonnegative tensors are considered and factorized based on a Legendre decomposition, which may be viewed as representing the tensor elements by an underlying probability distribution. The paper then proposes to model the tensor in terms of interactions, each of which couples two or more modes. These interactions may be represented by a tensor network, which contains no rank modes, e.g., T_{ijkl}=U_{ij}V_{jk}W_{kl}. By contrast, typical tensor decompositions, include auxiliary/contracted modes. Using the theory of probability distributions, the authors argue that the resulting optimization problem to factorize the tensor according to interaction network, is convex. A gradient based method is used for optimization. Connections are made to other types of tensor networks, demonstrating that the interaction network arises by imposing additional structure on such networks. Experimental results show that, on the datasets considered, the interaction/many-body approximation method can be competitive in terms of accuracy.
Strengths: * The proposed tensor decomposition type appears to be new and is interesting / has connections to probability theory and physics.
* The use of probability theory to demonstrate convexity of the optimization problem is innovative; however, it appears to me that the assertion of convexity is incorrect, see weaknesses. [After reviewing the rebuttal, I follow at least the high-level motivation for why the approach is convex and revised my evaluation.]
Weaknesses: * The Legendre decomposition is defined vaguely in the paper and supplementary material, and as far as I can tell this is not a broadly known concept (discussed only in a recent line of ML papers). This makes the paper not self-contained, and makes it difficult to ensure correctness of the arguments. The optimization method and proof of convexity of the optimization problem are also proven hastily / with lack of good definition/theorem/lemma/proof structure, in particular restate the theorems being used from prior work (e.g., when discussing results based on linear dependence of parameters in the distribution).
* My main concern is that from a tensor point of view, it seems impossible that decomposing a tensor into a many-body approximation for an arbitrary interaction network leads to a convex optimization problem. Here is a counterexample. Consider a sixth order tensor with 3 interaction terms, each acting on two independent modes, T_{ijklmn}=U_{ij}V_{kl}W_{mn}. If the dimensions of modes j, l, and n are 1, we've obtained a rank-1 approximation of the tensor T'_{ikm}=T_{i1k1m1}=U_{i1}V_{k1}W_{m1}. Its well known that rank-1 approximation, which also corresponds to the tensor eigenvalue problem, is nonconvex (often has many local minima). [After reviewing the rebuttal, I see that KL-divergence enables convexity based on prior results.]
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please provide a counterargument to my assertion above that the optimization problem is in fact nonconvex. If I have somehow overlooked something on this, I would be much more supportive of the paper. If its indeed nonconvex, the paper needs to be corrected and re-evaluated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful reading and constructive comments. As described below, our proposal is guaranteed to be convex.
> My main concern is that from a tensor point of view, it seems impossible that decomposing a tensor into a many-body approximation for an arbitrary interaction network leads to a convex optimization problem. Its well known that rank-1 approximation, which also corresponds to the tensor eigenvalue problem, is nonconvex.
Our argument is correct because we use the KL divergence, not the Frobenius norm. As you demonstrate and we describe in Section 2.5, our proposal includes rank-1 approximation. However, instead of the Frobenius norm, our proposal optimizes the KL divergence from an empirical distribution (a given normalized non-negative tensor) onto a model manifold (a set of interaction-reduced tensors). When the cost function is defined with the Frobenius norm, the best rank-1 approximation is NP-hard, as seen in Equation (28) in [a], which is non-convex. In contrast, when the cost function is the KL divergence, it becomes a convex problem, as shown in [b]. To avoid this misunderstanding, we will add the following sentence in line 243 in the camera-ready version:
Finding the rank-1 tensor that minimizes the Frobenius norm from a given tensor is known to be an NP-hard problem [a]. However, it has been reported that finding the rank-1 tensor that minimizes the KL divergence from a given tensor is a convex problem [b].
[a] DOI: 10.1145/2512329
[b] DOI: 10.1109/ACSSC.2017.8335432
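A numerical illustration of this distinction (a hedged sketch, independent of the paper's code): for a normalized non-negative tensor viewed as a joint distribution, the KL-optimal fully factorized (rank-1) approximation is known in closed form — the outer product of the mode marginals — which is what makes the KL case tractable. The tensor shape and random seed below are arbitrary.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two normalized non-negative tensors."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(0)
P = rng.random((4, 5, 6))
P /= P.sum()                      # normalized non-negative tensor = joint distribution

# KL-optimal fully factorized approximation: outer product of the marginals.
m0, m1, m2 = P.sum(axis=(1, 2)), P.sum(axis=(0, 2)), P.sum(axis=(0, 1))
Q = np.einsum('i,j,k->ijk', m0, m1, m2)

# Any other normalized product distribution has a larger (or equal) KL divergence.
r0, r1, r2 = (rng.random(n) for n in P.shape)
R = np.einsum('i,j,k->ijk', r0 / r0.sum(), r1 / r1.sum(), r2 / r2.sum())
assert kl(P, Q) <= kl(P, R)
```

This closed-form minimizer is exactly why no iterative, initialization-dependent search is needed in the rank-1 KL case, unlike the Frobenius-norm case.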
> The Legendre decomposition is defined vaguely in the paper and supplementary material, and as far as I can tell this is not a broadly known concept. This makes the paper not self-contained, and makes it difficult to ensure correctness of the arguments.
We agree that Legendre decomposition is not a broadly known concept. Thus, we will make a new section, “Formal definitions” in Appendix and add the following formal definition of Legendre decomposition in the camera-ready version.
__Definition 1__: Legendre Decomposition [29]
For a normalized non-negative tensor $\mathcal{P}$ and a set of indices $B$, the Legendre decomposition of $\mathcal{P}$ is $\mathcal{Q}$ defined as
$ \mathcal{Q} = \operatorname{argmin}_{\mathcal{Q} \in \boldsymbol{\mathcal{B}}} D_{\mathrm{KL}} (\mathcal{P},\mathcal{Q}) $
where $\boldsymbol{\mathcal{B}} = \{ \mathcal{Q} \mid \theta_{i_1,\dots,i_D} = 0 \ \ \mathrm{ if }\ \ (i_1,\dots,i_D)\notin B \text{ for the } \theta\text{-parameters of } \mathcal{Q}\}$.
In the same way, we will give a formal definition of many-body decomposition as __Definition 2__ in the camera-ready version.
> The optimization method and proof of convexity of the optimization problem are also proven hastily / with lack of good definition/theorem/lemma/proof structure, in particular restate the theorems being used from prior work.
We appreciate your constructive comments for readability.
- For the optimization method :
The optimization method is thoroughly described in Section 2.1.1, Section A, and Algorithm 1. It is a Newton-type method that optimizes the KL divergence from a given distribution. We are happy to clarify further if you could point out any specific unclear parts.
- For convexity of the proposal:
We have discussed in Section A to carefully introduce the necessary definitions to prove the argument, i.e., geodesic, flatness, and projection. We will add the following theorem in Appendix B with proof to further clarify our claim.
__Theorem 1__: The solution of many-body approximation is always unique, and the objective function of many-body approximation is convex.
__Proof__:
As we see in Definition 2, the objective function of the proposed many-body approximation is the KL divergence from an empirical distribution (a given normalized tensor) to a subspace $\boldsymbol{\mathcal{B}}$. We can immediately prove that a subspace $\boldsymbol{\mathcal{B}}$ is $e$-flat from the definition of the flatness [1, Chapter 2] for any $B$. The KL divergence minimization from a given distribution to $e$-flat space is called $m$-projection (see the second paragraph in Section A). The $m$-projection onto $e$-flat subspace is known to be convex and the destination is unique (see the third paragraph in Section A). Thus, the optimal solution of the many-body approximation is always unique, and the objective function is convex.
---
It is widely known that maximum likelihood estimation of ordinary Boltzmann machines without hidden variables is a convex problem. Since we optimize the KL divergence, the proposed many-body approximation is also maximum likelihood estimation that approximates a non-negative tensor, which is regarded as an empirical distribution, by an extended Boltzmann machine without hidden variables, as described in lines 147-153. As with ordinary Boltzmann machines, the maximum likelihood estimation of extended Boltzmann machines can be globally optimized. __Theorem 1__ is a general proof of this using Information Geometry. We will add this description in Appendix in the camera-ready version.
> when discussing results based on linear dependence of parameters in the distribution.
In the second paragraph titled “Flatness and projections” in Appendix A, the sentence “It is known that a subspace with linear constraints on natural parameters $\theta$ is $e$-flat” describes the general condition for $e$-flatness. However, to prove the convexity of the proposal, we need only the particular condition “A subspace with some of its natural parameters fixed at 0 is $e$-flat”, which is obvious from the definition of $e$-flatness. To simplify, we will replace the former sentence in Appendix A with the latter. In the same way, in line 98, “it is known that a subspace with linear constraints on natural parameters $\theta$ is flat, called $e$-flat [1, Chapter 2]” will be replaced with “we can introduce a concept of flatness for a set of tensors” to increase readability.
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments and clarifications. I was not aware of the results regarding convexity of low-rank tensor product approximation with the KL divergence. Thanks also for pointing out the subsections where some things are clarified; I had overlooked a couple of things in my prior pass. I have updated my review to a weak accept as I now see why correctness of convexity should follow.
I refrained from a full accept, as it remains unclear to me why, even if convexity follows from the KL divergence, the proposed method is better than alternative tensor decompositions with a KL-divergence objective. As far as I can tell this is not addressed in the experimental comparison (only tensor ring is considered, but based on prior work at least CP or tensor train with KL divergence seem like plausible alternatives). The paper should also describe the prior results regarding convexity of low-rank tensor decomposition with KL divergence as an objective and survey for which decompositions this applies / is possible. This point seems to be a core motivation of the whole approach/analysis.
---
Reply to Comment 1.1.1:
Comment: We appreciate your careful reading of our response and acknowledging our contributions.
> Why does that make the proposed method better than alternative tensor decompositions with a KL divergence objective?
We would like to clarify that low-rank tensor decompositions are non-convex except for the rank-1 case, even if the objective function is defined with the KL divergence. In contrast, our proposal is always convex, which makes it more favorable in terms of stability.
> This is not addressed in the experimental comparison (only tensor ring is considered, but based on prior work at least CP or tensor train with KL divergence seem like plausible alternatives).
In Figure 11 in Appendix, we have already provided comparisons of the proposal with non-negative Tucker decomposition (NTD) and non-negative tensor train decomposition (NTTF) when optimized with the Frobenius norm. We here further provide fit scores of NTD and non-negative CP decomposition (CPAPR) using the same protocol as in Figure 11(a), where the objective functions are defined with the KL divergence from the input tensor to the reconstructed tensor instead of the Frobenius norm.
#### NTD with the KL divergence
| Rank | # Parameters | Fit score (worst) | Fit score (best) |
| ---- | ---- | ---- | ---- |
| [1,1,1,1] | 69 | 0.835 | 0.835 |
| [5,5,5,5] | 965 | 0.835 | 0.894 |
| [6,6,6,6] | 1704 | 0.835 | 0.901 |
| [11,11,11,11] | 15389| 0.835 | 0.916|
#### Non-negative CP decomposition (CPAPR) with the KL divergence
| Rank | # Parameters | Fit score (worst) | Fit score (best) |
| ---- | ---- | ---- | ---- |
| 1 | 69 | 0.835 | 0.835 |
| 3 | 204 | 0.835 | 0.877 |
| 10 | 680 | 0.888 | 0.910 |
(For larger ranks, the algorithm did not converge.)
#### Many-body approximation
| Body | # Parameters | Fit score |
| ---- | ---- | ---- |
| One | 64 | 0.835 |
| Cyclic-two | 1052| 0.907 |
| Two | 1418| 0.917 |
| Three | 11762| 0.973 |
Hyperparameters for CPAPR are set to default values of the TensorLy implementation, and those for NTD are described in Section E.2.
Please note that NTD and CPAPR are convex only if their ranks are [1,1,1,1] and 1, respectively, and hence the worst and the best scores are the same in this case.
> The paper should also describe the prior results regarding convexity of low-rank tensor decomposition with KL divergence as an objective and survey for what decompositions this is applied / is possible.
The low-rank tensor decomposition with the KL divergence is non-convex except for the rank-1 case. We will add this description in the camera-ready version. | Summary: This work introduces a novel approach to non-negative tensor decomposition, termed "many-body approximation," which specifically addresses the relationship among modes of tensors. It is formulated as a variant of Legendre decomposition and realized through globally minimizes the Kullback-Leibler divergence. Additionally, this work explores the connections between the proposed method and existing methods, using interaction representation. A series of experimental results substantiate the superiority of the proposed method over other existing ones in dealing with tensor completion applications.
Strengths: 1. This paper is comprehensive and includes all the essential components. The structure is logical, and the design principle is effectively and adequately illustrated.
2. The proposed method is a novel rank-free method, which is particularly advantageous as identifying an appropriate low-rank structure can be highly challenging in some practical applications.
3. This paper visualizes the presence or absence of interactions between modes and further demonstrates the interpretability of interactions among all the modes.
Weaknesses: 1. The proposed decomposition is only applicable to non-negative tensors. This may restrict the method's effectiveness and generalizability when dealing with tensors that contain negative values.
2. Some theoretical analysis of the proposed "many-body approximation" method is missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to derive some theoretical results to support the merits of the proposed "many-body approximation" method in dealing with tensor completion and approximation problems?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the Weaknesses/Questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback.
> Some theoretical analysis about the proposed "many-body approximation" method is missed. Is it possible to derive some theoretical results to support the merits of the proposed "many-body approximation" in dealing with tensor completion and approximation problems?
As described in lines 112-114 in the main text and the second paragraph titled “Flatness and projections” in Section A, we have derived theoretical results for the many-body approximation using information geometry, namely the uniqueness of the solution and the global convergence of the natural gradient method. These theoretical results directly lead to practical merits: we do not have to run the algorithm multiple times with random initial values to find a better approximation.
---
Rebuttal Comment 1.1:
Comment: Thanks so much for your clarifications about the issue of theoretical analysis. I shall raise my rating from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We appreciate your acknowledgement of our contributions. | Summary: The authors introduce a new, energy-model based approach to the decomposition of non-negative tensors. They compare their method to mainstay techniques.
Strengths: The technique seems solid and reasonable, but how solid and reasonable is hard to assess (see below).
Weaknesses: As presented, it's hard for me to determine how novel and distinct the presented method is. I work extensively with tensors, but as practical tools, so I am not fully versed in the techniques and advantages of the various decomposition methods. For me to determine this, I would need a much more extensive literature review (perhaps adding a table) and a much more thorough and extensive comparison of the new technique to existing techniques (to highlight the similarities, differences, advantages, and disadvantages).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My questions all relate to the similarities and structures of the other techniques.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is not an issue. The work presented by the authors is a general tool.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments about our novelty and impact.
According to your suggestion, we have prepared a table comparing the proposed method with major tensor decomposition methods: CP decomposition, Tucker decomposition, tensor train decomposition, and tensor ring decomposition. If there is any other information to be added, please let us know.
| Models | Global Optimal | Uniqueness of solution | Required parameter | Remarks |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| CP decomp. | No | No | CP-rank $r$ | NP-hardness, ill-posed |
| Tucker decomp. | No | No | Tucker rank $(r_1, \dots,r_D)$ | |
| Tensor train decomp. | No | No | Train rank $(r_1, \dots,r_{D-1})$ | Optimal rank is imbalanced |
| Tensor ring decomp. | No | No | Ring rank $(r_1, \dots, r_D)$ | Cyclic structure |
| Many-body Approx. | __Yes__ | __Yes__ | Interactions, or $m$ of $m$-body | Only for non-negative tensors|
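To make the last row concrete, here is a minimal NumPy sketch (illustrative only, not the TensorLy-based code used in our experiments) of the simplest case: the one-body ($m = 1$) approximation of a normalized non-negative tensor. This is the KL projection onto the independence model and is given by the outer product of the mode marginals, which is why this case is convex with a unique global optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 5, 6))
P /= P.sum()  # treat the non-negative tensor as a joint distribution

# Mode marginals of P
p1, p2, p3 = P.sum(axis=(1, 2)), P.sum(axis=(0, 2)), P.sum(axis=(0, 1))

# One-body (m = 1) approximation: the KL projection of P onto the
# independence model is the outer product of its mode marginals.
Q = np.einsum("i,j,k->ijk", p1, p2, p3)

def kl(P, Q):
    """KL divergence between two normalized non-negative tensors."""
    return float(np.sum(P * np.log(P / Q)))

# Any other member of the independence family has a larger KL divergence:
# perturb one factor (and renormalize) to get a competing candidate.
q1 = p1 * rng.uniform(0.8, 1.2, size=p1.shape)
q1 /= q1.sum()
Q_other = np.einsum("i,j,k->ijk", q1, p2, p3)
```

In this sketch `Q` reproduces every mode marginal of `P` exactly and attains a smaller KL divergence than the perturbed candidate `Q_other`, illustrating the global optimality and uniqueness claimed in the table for this case.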
---
Rebuttal Comment 1.1:
Title: Referee Response
Comment: The table is helpful, although it would be nice to have it compare other facets than just global optima and uniqueness of solution, and including additional discussion on the details would have been helpful as well. I'll keep my mildly positive review score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your additional feedback on our response and positive review.
Here we provide further discussion of tensor decomposition methods.
In general, it is difficult to identify the low-rank structure of a given dataset, so the question of how to choose a low-rank decomposition model is hard to answer. This is why it is nontrivial to state the general advantages and disadvantages of each model, despite various studies on optimization methods and regularization. To alleviate this difficulty in model selection and to provide intuitive modeling, we have introduced the concept of body and formulated a decomposition that does not rely on ranks.
We will add the above discussion to the Appendix in the camera-ready version. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful feedback and constructive suggestions. We responded to individual reviewers and look forward to further discussions.
Also, to address the concern raised by Reviewer wHyr, we submit a PDF file of reconstructed images of the COIL dataset to compare our proposal with low-rank approximations, namely non-negative Tucker decomposition (NTD) and non-negative tensor train decomposition (NTT).
Pdf: /pdf/981c360ea40f383b09e11ecaa386245349618375.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a novel many-body decomposition, based on Legendre decomposition, for non-negative tensors. Instead of specifying a rank parameter, the proposed method uses an energy-based model to capture the interactions among modes.
Strengths: 1. The paper is overall well-written and easy to follow; the figures explaining the interactions are especially helpful.
2. The work is technically sound and aims at an interesting angle of tensor decomposition which is worth more attention from the research community.
3. The proposed method doesn't require a rank parameter, which saves the headache of parameter tuning found in many existing tensor analyses.
Weaknesses: 1. The novelty of the proposed method is limited given the existing Legendre decomposition.
2. The empirical advantages of the proposed methods are not sufficiently demonstrated.
3. The rank parameter is not necessarily required in traditional tensor methods, for example, tensor nuclear norm based approach.
4. The details of the proposed algorithm, LBTC, should be more emphasized in the main text.
5. How does the computational cost of the proposed approach compare to that of traditional tensor decompositions? Many fast methods have been proposed for rank-based decomposition, such as tensor CUR decompositions.
6. If I understand right, the proposed method is indeed a generalized rank-1 approximation.
7. "interactions between modes" -> "interactions among modes"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. As the authors already stated in the paper, the proposed method can only be applied to non-negative tensors.
2. Compared to rank-based tensor decomposition, the proposed method seems to require more expert knowledge about the application itself.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback. We provide our point-by-point response to each of your comments in the following.
### In the "Weaknesses" section
> 1. The novelty of the proposed method is limited given the existing Legendre decomposition.
Although our proposal is based on Legendre decomposition, we have the following nontrivial novel contributions:
- As described in lines 48-49, Legendre decomposition does not provide factorization, while the proposed many-body approximation factorizes tensors into a product form that enables us to observe components of the data. For example, as seen in Figure 3, our proposal factorizes the COIL image dataset into two components, the color tendency of each object (e) and the shape of each object (f). We also provide another example with the traffic dataset in Figure 13 in Appendix.
- As described in lines 44-45, we have revealed that the $n$-body parameters control interactions among modes. Such a tight connection between natural parameters and interactions among modes was not known in the literature on Legendre decomposition.
- We have proved a theoretical connection between Legendre decomposition and low-rank approximation (Section 2.4).
> 2. The rank parameter is not necessarily required in traditional tensor methods, for example, tensor nuclear norm based approach.
Thank you for pointing this out. You are right that nuclear norm-based methods do not require the rank parameter. We will add the following text in line 32:
Although there is an attempt to avoid rank tuning by approximating the rank with trace norms [2], it still requires other hyperparameters, the weights of the unfolded tensors; hence such an approach does not fundamentally solve the problem.
--
Please note that nuclear norm-based methods require other hyperparameters instead of ranks. In our experiments, the baseline methods SiLRTC and SiLRTC-TT in Section 3.2 are typical nuclear norm-based methods, and their performance depends on hyperparameters, as explained in Section E.2 and Figure 14. These dependencies were also noted in their original papers (for example please refer to Section VI. A in [2]).
[2] Bengua, J. A., Phien, H. N., Tuan, H. D., and Do, M. N. (2017). Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Transactions on Image Processing, 26(5):2466–2479
> 3. The empirical advantages of the proposed methods are not sufficiently demonstrated.
We have demonstrated the empirical advantages of the proposed method in terms of hyperparameter tuning and model selection in Sections 3.2 and C.2, initial value dependency in Figure 11 in Appendix, and tensor completion in Section 3.2, which can be significant merits in practical situations.
> 4. The details of the proposed algorithm, LBTC, should be more emphasized in the main text.
Thank you for your suggestion. In the camera-ready version, we will add a new section 2.6 to describe LBTC, to which we put lines 269-280 and Algorithm 2.
> 5. How is the computational cost of the proposed approach, compared to the traditional tensor decompositions? There were many fast methods have been proposed for rank based decomposition, such as tensor CUR decompositions.
As described in line 117, our method is based on the natural gradient, and its computational complexity is cubic with respect to the number of parameters to be optimized, that is, the total complexity is $O(\gamma |B|^3)$, where $\gamma$ is the number of iterations. Due to the second-order convergence property of Newton's method, the iteration number $\gamma$ is usually small (Section C.3 in Appendix). For many low-rank decomposition methods, the computational complexity is cubic with respect to the rank. For example, the computational cost of tensor train decomposition of a $d$-order tensor of size $n \times n \times \dots \times n$ is $O(dnr^3)$, where the rank is $(r,r,\dots,r)$ [a].
Please note that your suggested acceleration techniques based on SVD, such as CUR decomposition, cannot directly incorporate the non-negative condition in decomposition. As seen in Figure 10 in Appendix, we have also compared the runtime of cyclic two-body approximation methods, NTR-MM and NTR-lraMM, which are known to be faster than traditional tensor ring decomposition [32].
[a] Oseledets, Ivan V. "Tensor-train decomposition." SIAM Journal on Scientific Computing 33.5 (2011): 2295-2317.
[32] Yu, Y., Xie, K., Yu, J., Jiang, Q., and Xie, S. (2021b). Fast nonnegative tensor ring decomposition based on the modulus method and low-rank approximation. Science China Technological Sciences, 64(9):1843–1853.
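To make the cost comparison above concrete, here is a minimal NumPy sketch (illustrative only; the matrix below is a random positive-definite stand-in for the actual Fisher information matrix) of the dense solve that dominates each natural-gradient iteration and gives the cubic $O(|B|^3)$ term.

```python
import numpy as np

# One natural-gradient step: solve F @ delta = g, where F plays the
# role of the |B| x |B| Fisher information matrix over the optimized
# parameters. A dense solve costs O(|B|^3), i.e. cubic in the number
# of parameters, once per iteration.
B = 50                                   # number of parameters (illustrative)
rng = np.random.default_rng(2)
A = rng.normal(size=(B, B))
F = A @ A.T + B * np.eye(B)              # positive-definite stand-in for the Fisher matrix
g = rng.normal(size=B)                   # gradient of the objective

delta = np.linalg.solve(F, g)            # the O(|B|^3) dense solve
```

Because of the second-order convergence noted above, only a few such solves are typically needed, so the cubic per-iteration cost is paid a small number of times.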
> 6. If I understand right, the proposed method is indeed a generalized rank-1 approximation.
Your understanding is correct. We have discussed that point in Section 2.5. In this paper we have shown the significant potential of a generalization of rank-1 approximation by formulating it as a many-body approximation, which achieves results comparable to traditional non-convex low-rank approximation.
> 7. "interactions between modes" -> "interactions among modes"
Thank you for pointing that out. We will revise that in the camera-ready version.
### In the "Limitations" section
> Compared to rank-based tensor decomposition, the proposed method seems to require more expert knowledge about the application itself
In low-rank approximation, we often gradually increase the rank to enlarge model capability. In the same way, if we have no prior or domain knowledge about a tensor, we can gradually increase $m$ of $m$-body approximation and get more accurate results. Hence, even when the user knows nothing about the data, the proposed method is also easy to use.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. It has addressed most of my concerns. I will keep my positive score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your careful reading of our response and acknowledging our contributions. | null | null | null | null | null | null |
Amortized Reparametrization: Efficient and Scalable Variational Inference for Latent SDEs | Accept (poster) | Summary: The authors develop a reparameterization scheme for affine SDEs which is advantageous in both memory cost and runtime. Specifically, the authors consider splitting the computation into a series of different chunks such that each can be effectively parallelized and integrated individually by taking the expectation with respect to a uniform random variable over the time interval. This is possible since the SDE is linear, so a mean and covariance matrix provide a full description of the data at all time marginals and individual trajectories are not needed. The points are then combined using an interpolation technique based upon deep kernels. The authors finally consider experiments to highlight the utility of the method: synthetic experiments and a real experiment on time-series data that compare the number of function evaluations and the accuracy of the methods.
Strengths: The idea of transforming the sequential computations into parallelizable expectations is appealing and the numerical results suggest this approach provides a significant computational speed up. This is a useful extension beyond the usual sequential nature of ODE which cannot be parallelized and could be particularly useful and impactful in certain cases. Additionally, empirical results suggest that the modifications provide increased stability. The authors also show the utility of the method on a real dataset where the performance is superior compared to most of the baselines. Finally, the paper is fairly well written and the presentation is intuitive.
Weaknesses: The method only applies to SDEs where the marginal descriptions are known, as the authors make an assumption that the evolution of the latent space is given by a Markov Gaussian process. However, this is not the case for most SDEs, which makes the algorithm limited in scope. From the paper, it is not clear where such an assumption would hold that the latent distribution should be Gaussian everywhere. This question was not studied, and I think that's the major weakness of the paper. While the computations are easier to perform, it is mainly due to the simpler model. SDEs are particularly useful when sampling from a complicated distribution, but in this case the latent distribution is Gaussian for all time marginals. Maybe to strengthen this, it would be helpful to show the types of stochastic processes in the observable space that can be modeled (e.g. by applying Ito's lemma to the latent process). This would then provide a more concrete statement on the expressiveness of the model.
The experimental evaluation is somewhat limited since the evaluation is only on two synthetic datasets and one real dataset. I would be curious to see how the other methods compare on the pendulum dataset. Additionally, I think some more synthetic datasets would be useful to describe the utility of the method, particularly on non-affine SDEs, to see how well the approximation performs. It would be helpful to see a comparison to backpropagating through the solver rather than the adjoint method, which the authors mention is numerically unstable. It would be useful to understand how backpropagation through the solver (possibly the most common method when a fine step size is not used and memory constraints are not a concern) compares to the proposed method. Finally, the authors mention in line 184 that the linear assumption of the SDE is not a limiting factor, but the experiments do not seem to be extensive enough to validate such a claim.
Other types of methods that are related and should be mentioned are [1, 2]. These methods consider similar strategies, with [1] considering a Gaussian process in the latent space and [2] considering a linear SDE approximating the latent evolution (using some of the same results mentioned in this paper based on casting linear SDEs in terms of their moments).
[1] Fortuin et al 2020 AISTATS, GP-VAE: Deep Probabilistic Time Series Imputation
[2] Duncker et al 2019 ICML, Learning Interpretable continuous-time models of latent stochastic dynamical systems
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How does the interpolation between the windows affect the quality of the sample path? This point was not discussed thoroughly and it seems like it might have impact on the sample quality.
Were there any comparisons to non-adjoint based back-propagation through the solver? For example, using just an Euler solver, how does the method compare with backpropagation through that?
Line 147: “While the previous sections have demonstrated how to replace a differential equation with an integral …” Can the authors please expand upon this? When solving a differential equation, the integral is approximated as a sum, as the authors have done. Is this comment due to the fact that you are not using an adaptive solver but you are using an expectation to compute the integral? Or is it more that you do not require the value of $z(t)$ at some later time, only the expectation?
Is there an ablation on changing the window size/number of windows? How does this parameter affect the performance?
Why is the drift parameterized as a neural network of z on line 519? I thought the drift was linear in z.
Is there a way to verify that the linear SDE assumption is a valid one? Are there some equivalence classes one can derive for the case studied here and a possibly more general class of SDEs?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors provide a discussion on the issues associated with the linear SDE assumption.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The method only applies to SDEs where the marginal descriptions are known,
as the authors make an assumption that the evolution of the latent space
is given by a Markov Gaussian process.**
Thank you for your comments. Before addressing specific comments, we would like to
make an important clarification. We agree that SDEs are useful when sampling from a
complicated distribution.
In the present work, we only make the assumption that the posterior (i.e. the latent state *given*
a batch of observations) can be approximated by a Gaussian process.
The *generative model* we use in our work is characterized by a nonlinear SDE with time dependent diffusion meaning that
predictions from the model will follow a complicated non-Gaussian distribution in general [L67-L74].
In order to help ensure that readers don’t leave with the same confusions, we have added the following clarifying section to the updated version of the manuscript.
[L175] “**Summary of Assumptions.**
In the previous sections we introduced an ELBO which, when maximized,
leaves us with a generative model in the form of a nonlinear latent SDE
with time-dependent diffusion and an approximation to the latent
state in the form of a Gaussian process.
To reiterate, we only assume that the approximating posterior, i.e. the
distribution over the latent state given a batch of observations, is a Gaussian
process; this is an assumption that is commonly made in the context
of nonlinear state estimation, for example [A,B].
When making predictions, we sample from the nonlinear SDE which characterizes
the generative model.”
**Other types of methods that are related and should be mentioned are [1, 2].**
Thank you for suggesting the papers from Fortuin et al and Duncker et al. We have added both to
our list of references.
In contrast to [1], in our work the generative model is
defined by a nonlinear SDE.
Regarding [2], the major difference between this approach and our work is that they still
rely on solving ODEs to estimate gradients (see Section 4.1 in their paper). The main consequence of Theorem 1 is that we can eliminate the differential equality constraints required by this approach.
As we have argued throughout
our work, solving differential equations as a part of an iterative optimization method
is computationally challenging [L26 – L30].
**How does the interpolation between the windows affect the quality of the sample path?**
Apologies if we have misunderstood your question.
The required data sampling frequency will be highly problem dependent. From a theoretical perspective our approach should perform similarly to other approaches for inferring latent SDEs
as the sampling frequency is decreased.
**Were there any comparisons to non-adjoint based back-propagation through the solver?
For example, using just an Euler solver, how does the method compare with backpropagation through that?**
When backpropagating through a forward solver, we only expect to save evaluations of the model in estimating the adjoint (not in estimating the solution trajectory). This means that all of the challenges associated with methods for solving initial value problems as a part of an iterative optimization procedure remain [L26 – L30].
In addition, it is worth noting that
gradients computed via backpropagation of a forward solver are not consistent with the adjoint ODE in general [C].
In our work we infer a latent SDE using unbiased gradients of a lower bound on the evidence.
It is worth reiterating that in example 4.1 we considered training a neural ODE with an adaptive stepping tolerance of $10^{-2}$, $10^{-4}$, and $10^{-6}$. When decreasing the tolerance we find that the standard NODEs struggled to achieve a comparable validation accuracy to our approach.
**“While the previous sections have demonstrated how to replace a differential equation with
an integral …” Can the authors please expand upon this? When solving a differential equation,
the integral is approximated as a sum, as the authors have done.
Is this comment due to the fact that you are not using an adaptive solver but you are
using an expectation to compute the integral? Or is it more that you do not require the value
of $z(t)$ at some later time, only the expectation?**
Casting an initial value problem in the form $x(t) = x(0) + \int_0^t f(x(s),s) ds$ and invoking appropriate approximations for the integrand results in a sequential time-marching scheme. In Line 147, we are drawing the reader’s attention to the fact that equation (6) is a standard integral with respect to time that can be evaluated in parallel. This representation underpins the outstanding computational advantages offered by our approach.
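As a toy numerical illustration of this point (a sketch added for clarity, not our actual implementation; the closed-form mean $m(t) = \sin t$ and drift $f(z) = z$ are arbitrary illustrative choices), a residual integral of the form $\int_0^T (\dot m(t) - f(m(t)))^2\,dt$ equals $T\,\mathbb{E}_{t \sim U(0,T)}[(\dot m(t) - f(m(t)))^2]$, so it can be estimated with independent, fully parallel evaluations rather than sequential time-marching:

```python
import numpy as np

T = 2 * np.pi
m = np.sin              # illustrative closed-form posterior mean
dm = np.cos             # its exact time derivative
f = lambda z: z         # illustrative drift of the latent model

# Monte Carlo estimate of the residual integral: sample time points
# uniformly on [0, T]; each evaluation is independent of the others,
# so the whole batch can be computed in parallel.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, T, size=200_000)
mc_estimate = T * np.mean((dm(t) - f(m(t))) ** 2)

# For this choice the integral is analytic:
# \int_0^{2 pi} (cos t - sin t)^2 dt = 2 pi
exact = 2 * np.pi
```

With 200,000 samples the unbiased Monte Carlo estimate lands within about one percent of the analytic value, and the variance shrinks as more parallel evaluations are added.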
**”Is there an ablation on changing the window size/number of windows? How does this parameter affect the performance?”**
It is challenging to systematically select the partitions of the temporal axis. The
closest analogy to this hyperparameter in the standard supervised learning setting is the batch size; however, this isn’t the full story.
Intuitively, there is a trade-off between the complexity of the encoder and having enough observations
in each partition such that the assumption of a Gaussian process for the latent state
is reasonable. For example, given two observations in each partition, the encoder only needs
to interpolate between the latent states over a short time window (meaning the dependence
on $t$ could be relatively simple).
However, using only
two observations might make it challenging to approximate the latent state using
a Gaussian process (think of the pendulum problem, it is difficult to estimate the
velocity using only a few frames).
Furthermore, like the batch size in the standard supervised learning setting, the parameter
$M$ has an effect on the variance of the gradients.
For our work, we maximized the number of elements in each partition
such that fast approximations of the gradients of the ELBO can be calculated on the hardware available to us.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My main concern on the linear SDE is partially addressed, and I have increased my score, and I apologize for missing that in the initial review. My other concern about the window parameter was also partially addressed, but I would recommend including additional discussion on this in the revision since this appears to be important for achieving good performance. | Summary:
This paper introduces a method for identifying latent stochastic models from discrete time observations.
The authors compute unbiased approximations to gradients of the evidence lower bound in order to identify parameters and estimate the state of latent stochastic differential equations.
They propose to combine a recently introduced (by the same authors; an anonymized copy was provided) reparametrisation of the ELBO
in terms of time dependent linear SDEs and amortisation to reduce the computational demands.
The resulting approach does not require numerical integration of the approximate SDE or the ODEs defining the evolution of the central moments (as required by previous approaches), and seems to perform on par or better than existing frameworks.
Although the method is limited by the assumptions of linear covariances and additive noise, the merging of amortisation with the reparametrisation of the ELBO is a rather interesting contribution. The performance of the method in the numerical experiments seems to be on par with existing approaches. However, in most numerical examples the noise employed was negligible relative to the dynamic range of the systems considered.
Strengths: - The authors eliminate the need for numerically integrating the approximate differential equations by replacing the initial value problem with an integral resulting in reduced computational demands.
- The approach avoids numerical instabilities often encountered in adjoint methods.
- the proposed solution has a time and memory cost that depends only on the number of parallel evaluations of the ELBO, i.e., scales independently with the amount of data, the length of the time series, and the stiffness of the approximation to the differential equations.
Weaknesses:
- The approximate process is a linear SDE, and thus probably cannot effectively capture nonlinear dynamics that may result in multimodal transition densities.
- To my understanding the proposed framework requires a diagonal covariance matrix (but I am happy to be corrected if this is not the case) for the diffusion process, as well as a diagonal time dependent covariance for the approximate process. How would you tackle problems with interactions in the diffusion component with your framework?
- I consider the diffusion term for the experiment of the Lorenz system substantially small ($\sigma = 10^{-5}$) considering the state space volume the state of the system spans.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How do you select the partitions of the temporal axis and how does this selection influence the obtained results?
- In line 498 in the supplement you mention that the encoder design is interpretable. Can you explain what you mean with this?
- Could you please explicitly mention in the appendix what observation model was employed in each of the numerical experiments?
- The time complexity of the algorithm depends on the number of evaluations $R$. Can you provide a systematic evaluation of how the number of evaluations $R$ influences the performance of the framework?
- What kinds of issues emerge in your framework if you don't split the observations into chunks?
- How does the method perform for decreasing sampling frequency?
- Does the method require the entire state of the system to be accessible to the observation model, or could also a lower dimensional projection of the latent state work?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
- The proposed framework is limited only to systems with diagonal covariances.
- As far as I understand, they employed only a Gaussian observation model in all experiments.
---
[1] Duncker, Lea, et al. "Learning interpretable continuous-time models of latent stochastic dynamical systems." International Conference on Machine Learning. PMLR, 2019.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Our response to specific questions and comments are provided below. One important clarification is that we only make the assumption
that the approximate posterior (i.e. the latent state given the observations) is Gaussian.
We only use this approximate posterior to encode the observations to the latent space.
The generative model is not required to be a Gaussian process. In fact, the generative model that we use in this work is a nonlinear SDE with additive noise, which makes our approach fairly flexible and capable of dealing with complex, real-world dynamical systems [L67-L74].
**The approximate process is a linear SDE, and thus probably cannot effectively capture nonlinear dynamics that may result in multimodal transition densities.**
Because the generative model is a nonlinear SDE, predictions will result in
multimodal transition densities [L184-L186].
To help ensure that readers don’t leave with the same confusion, we have added the following paragraph to the revised main text.
[L175] “**Summary of Assumptions.**
In the previous sections we introduced an ELBO which, when maximized,
leaves us with a generative model in the form of a nonlinear latent SDE
with time-dependent diffusion and an approximation to the latent
state in the form of a Gaussian process.
To reiterate, we only assume that the approximating posterior, i.e. the
distribution over the latent state given a batch of observations, is a Gaussian
process; this is an assumption that is commonly made in the context
of nonlinear state estimation, for example [A,B].
When making predictions, we sample from the nonlinear SDE which characterizes
the generative model.”
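To make this point concrete, the following toy simulation (illustrative only; the double-well drift and all parameters here are hypothetical, not the model from the paper) shows how sampling a nonlinear SDE with Euler–Maruyama yields a bimodal transition density even though each simulation increment is Gaussian:

```python
import numpy as np

def euler_maruyama(drift, sigma, z0, t1, n_steps, n_paths, rng):
    """Sample paths of dz = drift(z) dt + sigma dW with the Euler-Maruyama scheme."""
    dt = t1 / n_steps
    z = np.full(n_paths, z0, dtype=float)
    for _ in range(n_steps):
        z = z + drift(z) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return z

rng = np.random.default_rng(0)
# Double-well drift z - z^3: paths started at the unstable point z = 0 settle
# near +1 or -1, so the transition density p(z(t) | z(0)=0) is bimodal even
# though every Euler-Maruyama increment is Gaussian.
samples = euler_maruyama(lambda z: z - z**3, 0.4, 0.0, 5.0, 500, 2000, rng)
frac_positive = np.mean(samples > 0.0)
```

Starting at the unstable equilibrium $z=0$, roughly half of the sampled paths settle in each well, so a histogram of `samples` has two modes.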
**To my understanding the proposed framework requires a diagonal covariance matrix ...?**
The framework, in general, does not require that the covariance matrix be diagonal.
The diffusion process and the covariance are required to be diagonal in order to scale
as $O(d)$ where $d$ is the dimension of the latent state.
If we are willing to give up some computational efficiency, we could use a full covariance
matrix and full diffusion matrix ($O(d^3)$ time cost).
**I consider the diffusion term for the experiment of the Lorenz system substantially small considering the state space volume the state of the system spans.**
We agree that the diffusion term is small. The purpose of this section was to
demonstrate the numerical instability of adjoint methods for long time series and
to show that our approach does not suffer from these same instabilities.
**How do you select the partitions of the temporal axis ...?**
It is challenging to systematically select the partitions of the temporal axis. The
closest analogy to this hyperparameter in the standard supervised learning setting is the batch size; however, this isn’t the full story.
Intuitively, there is a trade-off between the complexity of the encoder and having enough observations
in each partition such that the assumption of a Gaussian process for the latent state
is reasonable. For example, given two observations in each partition, the encoder only needs
to interpolate between the latent states over a short time window (meaning the dependence
on $t$ could be relatively simple).
However, using only
two observations might make it challenging to approximate the latent state using
a Gaussian process (think of the pendulum problem, it is difficult to estimate the
velocity using only a few frames).
Furthermore, like the batch size in the standard supervised learning setting, the parameter
$M$ has an effect on the variance of the gradients.
For our work, we maximized the number of elements in each partition
such that fast approximations of the gradients of the ELBO can be calculated on the hardware available to us.
**Could you please explicitly mention in the appendix what observation model was employed ...?**
Thank you for pointing this out – we have added an explicit description
of the likelihood for each problem in the appendix. For your reference,
we assumed a Gaussian likelihood for 4.1, 4.2, & 4.3 and a Bernoulli likelihood for 4.4 (because
the frames were black and white).
**Can you provide a systematic evaluation of how the number of evaluations
influences the performance of the framework?**
Thank you for this question. First, it is worth mentioning that even for one evaluation of the model we are left with an unbiased estimate for the gradient of the ELBO. Increasing the number of evaluations only serves to decrease the variance of gradient estimates.
We have added an additional numerical study to the supplementary material to explore this effect. For the problem we considered, we found that convergence with $S=10$, $S=50$, and $S=100$ was similar. The numerical study is described in greater detail in the “General Response” section.
**What kinds of issues emerge in your framework if you don’t split the observations into chunks?**
We provide a brief discussion in the manuscript [L125-128]. The main issue with
not splitting the observations into chunks is that we need to compute and store
an approximation to the latent state over the entire time series.
In addition, splitting the observations into chunks also allows us to efficiently approximate the ELBO using a batch of datapoints rather than the entire dataset.
**How does the method perform for decreasing sampling frequency?**
This will be highly problem-dependent. From a theoretical perspective, our
approach should perform similarly to other approaches for inferring latent SDEs
as the sampling frequency is decreased.
**Does the method require the entire state of the system to be accessible to the observation model...?**
The method should be able to approximate the data so long as the data can be approximated
by a nonlinear SDE with additive diffusion. Regarding reconstructing hidden states, this
is a challenging inverse problem beyond the scope of the current work. | Summary: This paper proposes a new inference method for latent SDE models. Due to the continuous nature of SDEs, inference methods for latent SDE models are usually expensive, and the cost grows as the dataset becomes larger or the time series become longer. Existing methods also require differential equation solvers, which makes them unfriendly to GPU parallelization. The approach proposed in this work uses an amortized Markov Gaussian process as a variational posterior for inference over the latent SDE. The proposed variational objective allows for factorization over time steps and therefore alleviates the need to iterate over all time steps. Under such a framework, the cost at each iteration depends only on the number of Monte Carlo samples utilized for gradient estimation. The authors then empirically evaluate the efficiency of their method on a wide range of tasks, in which the proposed method shows much better performance in terms of computational cost, numerical stability, and convergence compared with existing methods.
Strengths: - The paper is well-motivated and the proposed approach, to the best of my knowledge, is very sensible and well supported by empirical evaluations.
- The authors provide a thorough discussion of existing methods and include them in the experiments as baseline methods.
- The paper is well written and the proposed method is presented clearly.
Weaknesses: Some technical details seem to be discussed too briefly in the main text, for example, the implementation of the encoder and the use of a deep kernel to "interpolate between latent vectors". I can hardly understand the meaning without referring to the appendix. In Remark 2, the authors mention "make use of a nested Monte Carlo scheme along with stratified sampling to reduce the number of decoder evaluations and the variance of gradients," but the meaning of stratified sampling and nested Monte Carlo in this context is not discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am not familiar with the literature of latent SDE but I wonder whether the proposed variational posterior is more biased than existing approaches. Assuming computational cost is not an issue, would the baseline approaches be more favorable to the proposed method?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Nothing I can think of other than the limitation discussed by the authors at line 182.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We appreciate that you found our approach to be well-motivated, sensible
and well-supported by empirical evaluations. Responses to your specific questions and
concerns are provided below.
**Some technical details seem to be discussed too briefly in the main text, for example, the implementation of the encoder and the use of deep kernel for “interpolate between latent vectors”. I can hardly understand the meaning without referring to the appendix. In Remark 2, the authors mention “make use of a nested Monte Carlo scheme along with stratified sampling to reduce the number of decoder evaluations and the variance of gradients.” But the meaning of stratified sampling and nested Monte Carlo in this context is not discussed.**
The point that some technical details are discussed too briefly in the main text
is well-taken. The revisions which address this concern are provided
below along with the line number where this new section would appear in the original text.
[L141-145] “To reiterate, the probabilistic encoder is a function which takes in $M$
observations from a particular partition along with a time stamp, $t$,
and outputs a mean and covariance as an estimate for the latent state
at that particular time.
In principle, any function which can transform a batch of snapshots and a time
stamp into a mean and covariance could be used as an encoder in our work.
In our implementation, we use deep neural networks to encode $x_i^{(j)}$ for
$i\\in\\mathcal{I}$ where $\\mathcal{I}$ contains some temporal neighbours
of $x_i$ into a single latent vector.
This approach yields a set of latent vectors associated with each observation
in the partition $h_i$ for $i=1,2,\\dots, M$.
We then interpolate between each latent vector using a deep kernel based
architecture to construct the posterior approximation for any time stamp
in the partition; see Appendix~D for details.”
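As an informal sketch of the deep-kernel interpolation step (a toy stand-in, not the architecture of Appendix D; the feature map and its parameters are hypothetical):

```python
import numpy as np

def features(t, w):
    # Hypothetical learned feature map standing in for the "deep" part of the kernel.
    return np.tanh(np.outer(t, w))

def deep_kernel_interpolate(t_query, t_obs, h_obs, w, length_scale=1.0):
    """Interpolate latent vectors h_i, given at observation times t_obs, to
    arbitrary query times using an RBF kernel evaluated on learned features."""
    phi_q = features(np.atleast_1d(t_query), w)   # (Q, F) query-time features
    phi_o = features(np.atleast_1d(t_obs), w)     # (M, F) observation-time features
    d2 = ((phi_q[:, None, :] - phi_o[None, :, :]) ** 2).sum(-1)
    weights = np.exp(-0.5 * d2 / length_scale**2)  # (Q, M) kernel weights
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ h_obs                         # (Q, D) interpolated latents

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 1.0, 5)       # M = 5 observations in one partition
h_obs = rng.standard_normal((5, 3))    # one latent vector per observation, D = 3
w = rng.standard_normal(8)             # hypothetical feature-map parameters
h_interp = deep_kernel_interpolate(0.37, t_obs, h_obs, w)
```

Because the kernel weights are normalized, each interpolated latent is a convex combination of the per-observation latents $h_i$, and the learned feature map controls how observations are weighted across time.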
[L167-173] “In the case that
evaluations of the SDE drift term were relatively
cheap compared to decoder evaluations (for example in the case the dimension
of the latent state is much smaller than the dimension of the data),
we found it useful to increase the number of samples used to approximate the
integral over time without increasing the number of samples from the
variational posterior.
To do so we made use of a nested Monte Carlo scheme
to approximate the second term in the ELBO,
$$
( t_1^{(j+1)} - t_1^{(j)} )E_{p(\\epsilon)p(t)}||r_{\\theta,\\phi}(T(t,\\epsilon, \\phi),t)||^2_{C_\\theta(t)}
$$
$$\\quad \\approx \\frac{( t_1^{(j+1)} - t_1^{(j)} )}{RS} \\sum\_{k=1}^R \\sum\_{l=1}^S ||r_{\\theta,\\phi}(T(t^{(k,l)},\\epsilon^{(k)}, \\phi),t^{(k, l)})||^2_{C_\\theta(t^{(k, l)})}
$$
where, again, each $\\epsilon^{(k)} \\sim \\mathcal{N}(0, I)$ and each
$t^{(k,1)}, t^{(k, 2)}, \\dots, t^{(k, S)}\\sim \\mathcal{U}(t_1^{(j)}, t_1^{(j+1)})$.
In addition, because the integral over time is one-dimensional we used
stratified sampling to draw from $\\mathcal{U}(t_1^{(j)}, t_1^{(j+1)})$
in order to further reduce the variance in the integral over time.”
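For concreteness, the estimator above can be sketched as follows (a toy integrand stands in for the residual norm; this is an illustration, not our implementation):

```python
import numpy as np

def stratified_uniform(a, b, s, rng):
    """One draw per equal-width stratum of [a, b]; lower variance than i.i.d. uniforms."""
    return a + (b - a) * (np.arange(s) + rng.random(s)) / s

def nested_mc(f, a, b, r, s, rng):
    """Nested Monte Carlo estimate of (b - a) * E_{eps, t}[f(eps, t)]:
    R outer draws eps ~ N(0, I), each reused across S stratified draws of t."""
    total = 0.0
    for _ in range(r):
        eps = rng.standard_normal()
        total += np.mean(f(eps, stratified_uniform(a, b, s, rng)))
    return (b - a) * total / r

rng = np.random.default_rng(0)
# Toy integrand standing in for the residual norm ||r(T(t, eps, phi), t)||^2.
f = lambda eps, t: (eps * np.sin(t)) ** 2
# Since E[eps^2] = 1, the exact value is the integral of sin^2 over [0, pi] = pi/2.
estimate = nested_mc(f, 0.0, np.pi, r=800, s=50, rng=rng)
```

Here each expensive outer sample (playing the role of a draw from the variational posterior) is reused across many cheap stratified time samples, which is exactly the trade-off described above.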
**I wonder whether the proposed variational posterior is more biased than existing approaches. Assuming computational cost is not an issue, would the baseline approaches be more favorable to the proposed method?**
Thank you for this question. A summary of the assumptions made by our approach as compared to reference [13] is provided on lines 175-194.
To summarize, the assumptions we make with respect
to this approach are that (1) the posterior over the latent state can be approximated by
a Gaussian process and (2) the diffusion matrix is strictly time-dependent and not
a function of the state.
Regarding the first assumption, it is important to emphasize that this assumption
does not require that the *prior* (or generative model) define a Gaussian process, it only
requires that the *posterior* (i.e. the state given a batch of observations) be well
approximated by a Gaussian process.
This is an assumption that is commonly used in the state estimation literature for
a wide variety of nonlinear state estimation tasks [A,B]. Because we are seeking
to infer a latent state using more observations than states, we believe that
this assumption does not limit our approach as compared to [13].
With regards to the second assumption, it is true that our approach is strictly
more biased than [13]. Our approach is not capable of learning latent SDEs
with state dependent diffusion processes [L189].
All this is to say that our approach makes more assumptions than [13].
However, we would like to emphasize that our approach is significantly more efficient than the approach of [13]. This enables the application of our approach to complex, high-dimensional systems on a limited computational budget. Even in situations where there are no strict limits on the computational budget, the computational efficiency of our approach would allow for more detailed hyperparameter tuning studies to maximize performance.
In order to help ensure that readers don’t leave with the same questions, we have provided the following summary of the assumptions made by our approach in the updated draft.
[L175] “**Summary of Assumptions.**
In the previous sections we introduced an ELBO which, when maximized,
leaves us with a generative model in the form of a nonlinear latent SDE
with time-dependent diffusion and an approximation to the latent
state in the form of a Gaussian process.
To reiterate, we only assume that the approximating posterior, i.e. the
distribution over the latent state given a batch of observations, is a Gaussian
process; this is an assumption that is commonly made in the context
of nonlinear state estimation, for example [A, B].
When making predictions, we sample from the nonlinear SDE which characterizes
the generative model.” | Summary: The present work proposes a time- and memory-efficient method to perform inference in a directed probabilistic model in the presence of a latent stochastic process with an intractable posterior (path) distribution. Learning is performed using a variational auto-encoding approach, in which an approximate latent posterior process is selected from the class of Markov-Gaussian processes that are solutions to linear non-autonomous (Itô) SDEs. An evidence lower bound objective (ELBO) is derived that removes the need for numerical solvers when approximating gradients. Included experiments on toy and real-world time series data show similar performance compared to methods based on adjoint sensitivities, but with two orders of magnitude fewer evaluations and with a time and memory cost that scales independently of the amount of data, the total length of the time series, and the stiffness of the approximate differential equations.
Strengths: I agree that a huge drawback of those existing models is the dependency on stochastic optimization methods that require backpropagating gradients through the numerical (stochastic) differential equation solver and hence rely on adjoint methods.
It is the attempt itself to tackle these roadblocks by combining a variety of ideas that rely on model simplification that I particularly appreciate. Especially under the lens of Bayesian inference, it is recommended to question the need for unbounded expressivity when learning latent dynamics with continuous-time models. This consideration is interesting on its own and adds progress to the community, e.g., regarding applicability on low-performance hardware.
The authors propose improvements in time and memory cost by combining the following three major strategies: (1) simplifying the function class available for learning latent dynamics while reducing the expressiveness of the latent (SDE) models used, (2) exploiting the nature of sequential data such as time series that allow divide-and-conquer strategies such as the analysis of small sliding observation windows, and (3) introducing an optimized objective (i.e. an ELBO) by exploiting the reparameterization trick within the model used.
This method builds on a novel combination of well-known techniques and is original to the best of my knowledge. Further, the paper is well positioned with respect to existing work, the content is well organized and, with a few exceptions, has no spelling or grammatical flaws. Empirical evidence is provided through experiments on toy data as well as established benchmark data from real-world applications; however, I would classify them as small-scale.
Weaknesses: It is not the value of the approach that I am questioning here, but rather shortcomings related to the clarity of the submission and its technical soundness, which I will address with examples in the following.
1. (2.1 Problem description)
- I am missing a clear definition of the observation model, e.g., cf. [Vrettas et al. 2015; Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations; Section II.A].
- Considering time $t_i \in \mathbb{R}$ instead of $\mathbb{R}_{+}$ is not wrong but unusual.
- Minor ambiguities make understanding difficult in total: $dz$ vs $dz(t)$ and $\beta$ vs $\beta(t)$; the definition of $\theta$ is missing.
- Is there any motivation for the stochastic variational inference approach?
- I would regard $q_\phi(z(t)| x_i,\dots,x_{i+M}) \approx p_{\theta}(z(t)|\mathcal{D})$ as a local approximate posterior with respect to the time variable; are there necessary and/or sufficient conditions on an observed stochastic process that guarantee the validity of this approximation?
2. (2.2 Evidence lower bound)
- (l. 96f) The following statement: "An added complication in our specific case comes from the fact that the latent state is defined by a stochastic process rather than a random-vector as is more typical." is poorly worded. You probably intend to say that, typically, stochastic variational inference considers vector-valued random variables to define latent states, whereas your approach requires function-valued random variables, i.e., stochastic processes. What complications are expected here?
- It would be advantageous in combination with Eq. (1) to mention that the time dependency of the drift coefficient is necessary to account for a possible non-stationary observation process $x(t)$ that is driven by the latent stochastic dynamics of $z(t)$; cf. [32; Section 3].
- (Eq. (3)): To derive this result, the authors refer to unpublished work without stating the authors, cf. [14], but they included the reference in the supplementary material submitted. Notwithstanding the fact that I cannot call this good practice, this result is based on known facts and the derivation could have been included in the supplementary material of the present work, for example. Besides that, the derivation in [14] only concerns the case of a constant dispersion matrix $\Sigma$ with respect to time $t$ whereas $L(t)$, e.g., in Eq. (1) depends on time.
- (Eq. (4)): Section A of the supplementary material indicates that this result follows by applying properties concerning a Lyapunov equation and by using the Kronecker sum. Following the arguments myself raised difficulties; in my opinion, some of the given identities only hold in case $A(t)$ is symmetric, i.e., $A(t) = A^\top(t)$. Last but not least, titling the result a Theorem overstates its contribution.
- (Remark 2): I would like to see the form of the prior SDE. In the setting of stochastic variational inference, it is important to carefully design the dispersion matrices of the prior and posterior SDEs in order to guarantee that the KL divergence is finite in value.
- Honestly, I still don't get how the authors include the fact that $m_\varphi$ and $\dot{m}_\varphi$, as well as $S_\varphi$ and $\dot S_\varphi$, are related through an ODE. All four are included in $r_{\theta,\varphi}$, and, e.g., Section G.1 only shows the architecture design for $m$ and $S$.
3. (Experiments)
- (Figure 2): Can you quantify "a similar validation accuracy"? Does the right plot only contain the ARCTA probabilistic prediction? Section F.1 does not contain the promised information, did you mean Section G.1?
- (Section 4.2; l. 259ff.): The evaluation protocol is unclear to me; why are there multiple adjoint methods included and do you only optimize ARCTA for 2000 iterations? Can the quality be evaluated using error measures, e.g. MSE? A very similar experiment was presented in [13] proposing an adjoint method for SDEs. The results are promising. Can you please take these results into account?
- (Section 4.3): From my point of view, the motion capture dataset under consideration is quite delicate, since I know its shortcomings very well from my own experiments with it in the past. In my opinion, it is the small overall number of time series contained (16 training, 3 validation, and 4 testing) that frequently causes severe overfitting. I am still bothered by severe instabilities when reproducing the results from [30 & 13]; therefore, I also view the results presented here with caution. There are 4 times 50 = 200 predictive posterior outputs by the model, evaluated on the test dataset; given four test time series, you only provide three single dimensions, which is strange. Secondly, I would like to know why the evaluation of the results is carried out in terms of the RMSE, requiring all benchmark results to be translated.
Finally, I would like to reiterate my appreciation for the authors' interest in expanding the field of learning latent SDEs in the setting of stochastic variational inference. I'm sure that by working on the deficiencies mentioned, the work will gain in value and will deliver the desired contribution to the community in a new round.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - (l. 32f.) "... replacing ordinary differential equation (ODE) solvers with integration methods ..." But don't those numerical ODE solvers build on integration methods?
- In terms of the total number of time series samples included, the experiments dealt with are probably rather small-scale. To really highlight the added value of the proposed method, I would like to see large-scale experiments like the PhysioNet 2012 experiment in [12].
- Minor typos:
- (l. 28) parareal -> parallel
- (l. 54/67) time-series -> time series
- (l. 66) Description -> description
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors address the limitations of the approach regarding the model's expressivity compared to existing stochastic adjoint sensitivity methods in Chapter 3 of their work.
I would appreciate some remarks on limitations concerning the recognition module, i.e., the encoder. As stated in Section D, data sparsity is only accepted to a certain extent; cf. (l. 495f.) "It is worth noting that without this deep-kernel our approach struggled to achieve good validation accuracy on the datasets we considered". This suggests that the correct choice of the encoder is crucial and, on the other hand, raises doubts about the contribution of the latent stochastic dynamics. Is the proposed approach limited to interpolation and time series classification tasks, or may we also tackle extrapolation problems?
No evidence was found that the authors addressed any potential negative societal impact. Please make the effort to include this mandatory information in your work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
**“The authors refer to unpublished work without stating the authors, cf. [14], but they included the reference in the supplementary material submitted. Notwithstanding the fact that I cannot call this good practice, this result is based on known facts and the derivation could have been included in the supplementary material of the present work, for example."**
For papers which are in review but not available as a non-anonymous preprint, the recommendation from the 2023 call for papers is to, “include a copy of the cited anonymized submission in the supplementary material and write ‘Anonymous et al. [1] concurrently show.’” The paper included in the supplementary material is currently under review and not available as a non-anonymous preprint. Our apologies but we are not sure what known facts you are referring to since no supporting references are listed in your comment.
**"No evidence was found that the authors addressed any potential negative societal impact. Please make the effort to include this mandatory information in your work."**
In the paper checklist we answered n/a for the question on broader impacts. The paper checklist states, “if you develop a generic algorithm for optimizing neural networks, you do not need to mention that this could enable people to train models that generate Deepfakes faster.” In our perspective, our work can be seen as providing a method for training neural SDEs more quickly than the status quo.
**“Parareal -> Parallel”**
Sorry, this is not a typo. Parareal methods are a class of time-marching schemes with a history spanning over two decades; see, for example, reference [6].
**“Following the arguments myself raised difficulties; in my opinion some given identities only hold in case of A(t) is symmetric. Last but not least titling the result Theorem overstates its contribution.”**
The theorem is applicable when A(t) is nonsymmetric. We believe that this is an important theoretical result with significant practical impact. This result allows us to recast expectations under linear SDEs entirely in terms of their marginal distributions without requiring differential equality constraints like in [D]. From a practical perspective, this allows us to evaluate expectation under linear SDEs using either ($A$, $b$, ($m(0)$, $S(0)$)) or $(m(t), S(t))$.
[D] Archambeau, C., Opper, M., Shen, Y., Cornford, D., Shawe-taylor, J. “Variational Inference for Diffusion Processes.” In: Advances in Neural Information Processing Systems, vol. 20. (2007).
**“Honestly I still don't get how the authors include the fact, that m and dm/dt as well as S and dS/dt and are related through an ODE?”**
Apologies if we have misunderstood your question.
In lines 102-104 we provide an explanation for this standard result on SDEs along with a link to a textbook for the benefit of readers who are new to this topic.
**”Can you quantify "a similar validation accuracy"? Does the right plot only contain the ARCTA probabilistic prediction? Section F.1 does not contain the promised information, did you mean Section G.1?”**
Apologies for the confusion. This should have been section G.1. The right plot only contains one ARCTA prediction. The left plot contains the validation accuracy vs. number of function evaluations.
We trained 10 ARCTA models and 10 NODEs with different tolerances for the adaptive stepping schemes on the same dataset with different random seeds. The purpose of this study was to show that our approach requires fewer evaluations of the model on average to achieve a similar validation accuracy. To make our claim more conservative, we now state that our approach "requires more than one order of magnitude fewer evaluations of the model..." Since submitting the original study, we have also improved our hyperparameter selection for our approach. The new Figure 2 is attached.
**“A very similar experiment was presented in [13] proposing an adjoint method for SDEs. The results are promising. Can you please take these results into account?”**
The numerical study in [13] considers a fundamentally different problem. The authors in this work collect data for a short time window of 1 second. The purpose of the numerical study in our work is to demonstrate that adjoint sensitivities for chaotic systems can blow up when the time window of interest is increased (we show this for the case when the time window is set to 10, 50, and 100 seconds). See references [10] & [11] for more details on this phenomenon.
**“In my opinion it is the overall size of time series contained (16 training, 3 validation and 4 testing) that frequently causes severe overfitting ... Therefore, I also view the results presented here with caution. There are 4 times 50 = 200 predictive posterior outputs by the model, evaluated on the test dataset; given four test time series you only provide three single dimension, that is strange.”**
We evaluated the test accuracy on the entire test dataset. We plot predictions on three trajectories for illustrative purposes (instead of plotting 200 trajectories). Your comment that the dataset is sensitive due to the fact that there are only 3 validation and 4 testing trajectories is well-taken. We have updated our claim to: “Looking to the table, we see that our method performs similarly to other state-of-the-art methods.” We converted to RMSE due to concerns that MSE might exaggerate performance differences between approaches on a quick reading.
**I would appreciate some remarks on limitations concerning the recognition module, i.e., the encoder. As stated in Section D, data sparsity is only accepted to a certain extent and cf. (l 495f.)**
Regarding the encoder design when we say, "It is worth noting that without this deep-kernel our approach struggled to achieve good validation accuracy on the datasets we considered," this was intended to contrast the encoder with and without a deep kernel specifically.
---
Rebuttal Comment 1.1:
Comment:
I thank the authors very much for their response and for the detailed discussion
of my questions and concerns.
(ad "The authors refer to unpublished work ...")
My apologies for too strong a comment. What I addressed by "known facts" is the
general form of the ELBO given in [14, Eq. (8)], which you gave references for.
Indeed, what is new is the reparameterization for the expectations involved, presented
starting at l. 468 in [14].
(ad $A$ symmetric) Given the following Lyapunov equation:
$$AS + SA^\top = W.$$
Rewriting this equation and using vector representation gives
$$(I \otimes A + A \otimes I) \text{vec}(S) = \text{vec}(W)$$
with $I$ denoting the identity matrix. Hence, we can obtain the following
solutions
$$ \text{vec}(S) = (I \otimes A + A \otimes I)^{-1}\text{vec}(W). \quad (\star) $$
Following the proof starting at l. 408, we have:
$$\dot{S} = -AS - SA^\top + L\Sigma L^\top.$$
Simple algebraic transformation gives
$$AS + SA^\top = L\Sigma L^\top - \dot{S} . \quad (\ast \ast)$$
Hence, $(\star)$ gives
$$ S = \text{vec}^{-1}((I \otimes A + A \otimes I)^{-1}\text{vec}(L\Sigma L^\top- \dot{S})).$$
But we want to solve $(\ast \ast)$ for $A$. Therefore, transposing Eq. $(\ast
\ast)$ we obtain
$$S^\top A^\top + AS^\top = L\Sigma^\top L^\top - \dot{S}^\top.$$
Since $S = S^\top, \Sigma = \Sigma^\top$, we have
$$S A^\top + AS^\top= L\Sigma L^\top - \dot{S}^\top.$$
Now, if $A$ is symmetric, i.e., $A = A^\top$, $(\star)$ gives
$$ A = \text{vec}^{-1}((I \otimes S + S \otimes I)^{-1}\text{vec}(L \Sigma L^ \top- \dot{S}^\top)),$$
similar to Eq. (16). This derivation was the reason for asking if $A$ needs to
be symmetric. Even though I am convinced that my derivation is correct, apologies are
given if you prove me wrong here.
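This point can also be checked numerically (illustrative code, not from the submission): the Kronecker-sum formula recovers $A$ exactly when $A$ is symmetric, while for a nonsymmetric $A$ it returns a different, symmetric matrix satisfying the same Lyapunov relation, i.e., $A$ is not identifiable without the symmetry assumption:

```python
import numpy as np

def recover_A(S, W):
    """Solve A S + S A^T = W for A via (I (x) S + S (x) I) vec(A) = vec(W),
    which presumes A = A^T; vec stacks columns (numpy order='F')."""
    I = np.eye(len(S))
    K = np.kron(I, S) + np.kron(S, I)
    return np.linalg.solve(K, W.flatten('F')).reshape(S.shape, order='F')

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = M @ M.T + 4.0 * np.eye(4)     # symmetric positive definite S

A_sym = M + M.T                    # symmetric A: recovery is exact
A_rec = recover_A(S, A_sym @ S + S @ A_sym.T)

A_non = M                          # nonsymmetric A: the formula returns a different,
W = A_non @ S + S @ A_non.T        # symmetric matrix that satisfies the same
A_alt = recover_A(S, W)            # Lyapunov equation, so A is not identifiable
```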
(ad “Honestly I still don't get how the authors include the fact, that m and
dm/dt ...") Sorry for my deficiency in explicitness at this point! I give it a second
try; in l. 41f. the authors state "... our approach removes the requirement of
solving differential equations entirely." Now, following l. 102 - l. 104,
necessary information for the approach, i.e., $m_\phi(t), \ S_\phi(t)$, are
connected to ODEs. I am wondering, don't we still need to solve those?
---
Reply to Comment 1.1.1:
Comment: Thank you for carefully reviewing the Theorem and for sharing your work. You are correct that the theorem makes the implicit assumption that the matrix $A$ is symmetric. We now state this clearly in the manuscript. This implicit assumption does not affect any results since we never parametrize $A$ directly; instead, we always parametrize $m$ & $S$.
Since $m$ and $S$ are parametrized as explicit functions of time, we can efficiently compute $\frac{dm}{dt}$ and $\frac{dS}{dt}$ via automatic differentiation to estimate the ELBO. Note that under this parametrization the ELBO contains only a standard integral with respect to time and does not depend on differential equality constraints. This is in contrast to prior approaches, which implicitly parametrize $m$ & $S$ by directly parametrizing $A$ & $b$ and therefore require solving ODEs. | Rebuttal 1:
Rebuttal: ## General Comment
We would like to open by thanking the reviewers for their careful consideration of our work.
We understand that providing quality reviews is time-consuming, so we are thankful for your efforts.
Here we summarize the main changes made to the paper.
We address comments and questions from referees in separate comments below.
**Updates to the main text**
1. We recognize that some descriptions of technical details were too terse in the main text. We have used what space we have left to clarify details on the encoder, the Monte-Carlo based approximation to the evidence lower bound, and the assumptions made by our approach. Please see the detailed responses for clarifications on these updates.
2. We have edited the paper to ensure that the details of the numerical study design are described more clearly. In particular, we explain that section 4.1 shows the validation accuracy curves vs. the total number of function evaluations when training on the same dataset using 10 random seeds for each approach.
3. We have softened some claims following suggestions from reviewers. In particular, for section 4.3 we now claim that “our method performs similarly to other state-of-the-art methods.” In section 4.1 we now claim that our approach “requires more than one order of magnitude fewer evaluations of the model (NFE) than the neuralODE”. We believe these adjusted claims more precisely present our results without affecting the main message.
4. We have also improved performance for the numerical study in section 4.1 after additional hyperparameter tuning. The updated Figure 2 is provided in the attachment.
We have added the following references into the main text in order to support these new sections.
[A] Barfoot, TD, Forbes, JR, Yoon, DJ. "Exactly sparse Gaussian variational inference with applications to derivative-free batch nonlinear state estimation." In: The International Journal of Robotics Research 39(13), 1473-1502 (2020).
[B] Barfoot, TD. "State Estimation for Robotics: A Matrix Lie Group Approach." Cambridge University Press, UK (2017).
[C] Alexe, M, Sandu, A. "On the discrete adjoints of adaptive time stepping algorithms." In: Journal of Computational and Applied Mathematics 233, 1005-1020 (2009).
**Additional numerical study**
In response to reviewer questions regarding the effect of parameters in the Monte-Carlo approximation to the gradient (Eq. 9), we will provide a new numerical study in the supplementary material. In this numerical study we trained a latent SDE on data from a four-dimensional predator-prey model. Keeping all other hyperparameters the same ($R=1$, $M=256$), we varied the number of nested Monte-Carlo samples $S=(10, 50, 100)$. The results are summarized in Figure 10 in the attachment. The left figure shows the validation RMSE vs. the iteration count. From this figure we see that the model is insensitive to the choice of $S$ in terms of validation RMSE for this problem. The right figure shows the validation RMSE vs. the total number of function evaluations of the model. We see that while different values of $S$ do not seem to impact the validation RMSE at convergence, this hyperparameter can have a large impact on the total number of function evaluations.
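For illustration only, the structure of such a nested Monte-Carlo estimator (outer samples over time, $S$ inner samples per time point) can be sketched as follows; the integrand and the mean/deviation functions are toy stand-ins, not our actual model:

```python
import random

# Toy nested Monte-Carlo estimate of a time-integrated expectation
# int_0^T E_{z ~ N(m(t), s(t)^2)}[f(z)] dt, using M outer (time) samples
# and S inner samples per time point.
def nested_mc(f, m, s_dev, T, M, S, rng):
    total = 0.0
    for _ in range(M):
        t = rng.uniform(0.0, T)                    # outer sample: a time point
        inner = sum(f(rng.gauss(m(t), s_dev(t)))   # inner: S draws from q(z | t)
                    for _ in range(S)) / S
        total += inner
    return T * total / M                           # Monte-Carlo integral estimate

rng = random.Random(0)
# E[z^2] = 1 for z ~ N(0, 1), so the integral over [0, 1] is 1.
est = nested_mc(lambda z: z * z, lambda t: 0.0, lambda t: 1.0,
                T=1.0, M=2000, S=10, rng=rng)
assert abs(est - 1.0) < 0.1
```

Note that the total number of function evaluations is $M \times S$, which is why $S$ affects cost even when it barely affects the converged estimate.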
Pdf: /pdf/74687ef9527dd7aefc91d7b8d938f83be64411aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Residual Scheduling: A New Reinforcement Learning Approach to Solving Job Shop Scheduling Problem | Reject | Summary: This paper studied the problem of learning a graph neural network based policy network as a construction heuristic for solving job shop scheduling problems. This paper proposed an idea called residual scheduling to remove irrelevant operations and machines from the graph based state representation. This has been shown experimentally to perform well on several benchmark job shop scheduling problem instances.
Strengths: With the fast advancement of deep learning technologies, it becomes increasingly interesting to develop deep neural network models that can effectively solve complex combinatorial optimization problems, including job shop scheduling problems. The newly developed deep learning system in this paper appears to be very effective and highly competitive in performance, compared to similar approaches from the literature.
Weaknesses: The idea of using graph neural networks or other forms of deep neural networks trained through reinforcement learning to solve job shop scheduling problems has been studied in many past research works. The main text of this paper lacks a comprehensive review of these research works, making it hard to clearly understand the key technical novelty and contribution of this paper, compared to other recently published works.
The development of the residual scheduling technique is not strongly motivated in this paper. It is hard to understand why it is essential to remove irrelevant operations and machines from the graph-based state representation. The development of this technique also appears to be largely intuitive and lacks thorough theoretical analysis. Hence, the technical contribution of this development remains largely questionable.
The authors stated that in order for the process of learning the construction heuristic to be formulated as an MDP, they introduced several attributes for the operation nodes and machine nodes in the graph based state representation. However, it remains largely unknown whether, with the introduced attributes, the state representation can satisfy the Markov property of the MDP. The importance of using any newly introduced attributes should be more thoroughly evaluated experimentally. The associated technical contributions should be clarified and strongly justified.
Some other key aspects of the new system design should also be justified more. For example, the use of GIN needs to be supported with more convincing reasons. The focus on learning a construction heuristic rather than an improvement heuristic, which is gaining increasing popularity and attention, should be better justified and experimentally validated in this paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: What are the key novelties of this research work, compared to existing deep learning based methods for combinatorial optimization, including job shop scheduling?
At the theoretical level, why is it essential to remove irrelevant operations and machines from the graph based state representation?
With the newly introduced attributes, why will the state representation satisfy the Markov property of the MDP?
What are the theoretical and empirical advantages of learning a construction heuristic, in comparison to learning an improvement heuristic?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I do not have any concerns regarding this question.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
>The idea of using graph neural networks or other forms of deep neural networks trained through reinforcement learning to solve job shop scheduling problems has been studied in many past research works. The main text of this paper lacks a comprehensive review of these research works, making it hard to clearly understand the key technical novelty and contribution of this paper, compared to other recently published works.
Due to the page limit, our review of previous works is in the Introduction section (L48 to L71), and we briefly introduce our contribution in L72 to L75.
>The development of the residual scheduling technique is not strongly motivated in this paper. It is hard to understand why it is essential to remove irrelevant operations and machines from the graph based state representation. The development of this technique also appears to be very highly intuitive and lacks thorough theoretical analysis. Hence, the technical contribution of this development remains largely questionable.
About the contribution, please read those in the section of “Author Rebuttal by Authors” above.
>The authors stated that in order for the process of learning the construction heuristic to be formulated as an MDP, they introduced several attributes for the operation nodes and machine nodes in the graph based state representation. However, it remains largely unknown whether, with the introduced attributes, the state representation can satisfy the Markov property of the MDP. The importance of using any newly introduced attributes should be more thoroughly evaluated experimentally. The associated technical contributions should be clarified and strongly justified.
See the reply to the Question section.
>Some other key aspects of the new system design should also be justified more. For example, the use of GIN needs to be supported with more convincing reasons. The focus on learning a construction heuristic rather than an improvement heuristic, which is gaining increasing popularity and attention, should be better justified and experimentally validated in this paper.
See the reply to the Question section.
**Questions**
>What are the key novelties of this research work, compared to existing deep learning based methods for combinatorial optimization, including job shop scheduling?
The comparison to existing work is described in the section of “Author Rebuttal by Authors” above.
>At the theoretical level, why is it essential to remove irrelevant operations and machines from the graph based state representation?
This issue is described in the section of “Author Rebuttal by Authors” above.
>With the newly introduced attributes, why will the state representation satisfy the Markov property of the MDP?
In this paper, the state representation satisfies the Markov property (of the MDP), as in other works such as L2D and ScheduleNet. As many texts (e.g., Sutton and Barto's) state, to satisfy the Markov property: (1) state transition probabilities must not depend on past states and actions, and (2) rewards must not depend on past states and actions.
Our state representation satisfies the Markov property for the following reasons:
1) State transition probabilities do not depend on past states and actions: For each state s, the transition to the next feasible state s' depends solely on the current operation node features and machine node features (note that when calculating the action selection probability, we only use the node features). This does not include information from earlier states; hence the probability of state transitions is independent of past states and actions. The removal of nodes does not affect this argument.
2) Rewards do not depend on past states and actions: For any state s, the immediate reward for transitioning from state s to the next state is the increase in makespan after executing the selected action. This design ensures that the reward depends only on the current state and the selected action, and is independent of earlier states and actions.
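To make this concrete, here is a toy construction-style scheduling step (a hypothetical single-machine simplification, not our actual environment) in which both the transition and the reward are functions of the current state and action only:

```python
# Toy construction step: state = (remaining operation durations, makespan so far);
# action = index of the operation to dispatch; reward = negative makespan increase.
def step(state, action):
    remaining, makespan = state
    dur = remaining[action]
    new_remaining = remaining[:action] + remaining[action + 1:]
    new_makespan = makespan + dur          # single machine: ops run back to back
    reward = -(new_makespan - makespan)    # increase in makespan, negated
    return (new_remaining, new_makespan), reward

s = ([3, 1, 2], 0)
s, r = step(s, 1)      # dispatch the duration-1 operation first (SPT-style choice)
assert s == ([3, 2], 1) and r == -1
```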
---
Rebuttal 2:
Title: Thank the authors for their response
Comment: I would like to thank the authors for taking time to respond to my comments. Based on the response, I think my main concerns have not been properly addressed. The discussion on the Markov property did not seem to provide any additional insights on the system design and its novelty. The comparison to improvement heuristics both theoretically and empirically was not addressed with sufficient depth and details.
---
Rebuttal Comment 2.1:
Comment: >I would like to thank the authors for taking time to respond to my comments. Based on the response, I think my main concerns have not been properly addressed. The discussion on the Markov property did not seem to provide any additional insights on the system design and its novelty.
Thank you for your inquiry. In this paper, we simply state that for the solution-construction approach, the state representation satisfies the Markov property (of the MDP). Some other works, such as L2D and ScheduleNet, are also based on this approach (called construction heuristics). We do not claim novelty on this point (similarly to ScheduleNet), so we simply put it into the Appendix. We claim novelty and significance for "Residual Scheduling," as described in the section "Author Rebuttal by Authors." We also claim that ours performs best among all works using construction heuristics.
>The comparison to improvement heuristics both theoretically and empirically was not addressed with sufficient depth and details.
ScheduleNet paper (Park et al 2021) has mentioned about the two types of heuristics as follows (Page 2 and 20): “the RL approaches can be categorized into: (1) improvement heuristics which learns to revise a complete solution iteratively to obtain a better solution; and (2) construction heuristics learns to construct a solution …. The improvement heuristics typically can obtain better performance than the construction heuristics as they find the best solution iteratively through the repetitive solution revising/searching process. However, improvement heuristics require expensive computations than construction heuristics.”
So, this paper did not make more comparisons on this issue.
To our knowledge, the improvement heuristics in L2S (Zhang et al. 2022) perform relatively well (probably the best) in the works of JSP with improvement heuristics. Here are some more points.
* Although improvement heuristics perform well for JSP with a high step count (like 5000), these steps cannot be parallelized well. In contrast, most construction heuristics can construct solutions in parallel to further improve performance. For example, our RS+100 for FJSP performs much better than RS and can be run completely in parallel.
In fact, we have also run experiments with RS+100 on JSP for comparison with their work. The following table shows the performance of RS+100 as well as others (including L2S-500, L2S-1000, L2S-2000, and L2S-5000) on the TA datasets. The results show that ours still outperforms L2S (even L2S-5000) on large cases (50x15, 50x20, 100x20). This result also shows the superiority of our RS approach.
|Size|15x15|20x15|20x20|30x15|30x20|50x15|50x20|100x20|Avg|
|-|-|-|-|-|-|-|-|-|-|
|RS|0.148|0.165|0.169|0.144|0.177|0.067|0.100|0.026|0.125|
|RS+100|0.109|0.111|0.117|0.108|0.141|**0.035**|**0.064**|**0.005**|0.086|
|L2S-500|0.093|0.116|0.124|0.147|0.175|0.110|0.130|0.079|0.122|
|L2S-1000|0.086|0.104|0.114|0.129|0.157|0.090|0.114|0.066|0.108|
|L2S-2000|0.071|0.094|0.102|0.110|0.140|0.069|0.093|0.051|0.091|
|L2S-5000|**0.062**|**0.083**|**0.090**|**0.090**|**0.126**|0.047|0.065|0.030|**0.074**|
* L2S has been used for JSP so far. It is unclear whether it can be applied to FJSP well at least at this moment. | Summary: The paper introduces a novel approach called residual scheduling for solving the Job-shop scheduling problem (JSP) and its variant, flexible JSP (FJSP), focusing on removing irrelevant machines and jobs from the consideration set. Despite these problems being NP-hard, the proposed method demonstrates state-of-the-art performance across standard benchmarks, even performing well when scaled to larger problem sizes, achieving a zero gap in 49 out of 50 instances with more than 150 jobs on 20 machines.
Strengths: The method developed claims that for the 98% of the cases, the zero gap is achieved for fairly large instances.
Weaknesses: The zero gap is mentioned. Upon reading, readers find out that the gap refers to the "makespan gap." Understandably, a significant bulk of existing papers on job-shop scheduling focus on makespan rather than tardiness. Ignoring tardiness may lead to poorer on-time delivery, which is a weakness in itself, but with respect to the "makespan gap" measure it is not clear whether a zero gap will result in a zero (or small) gap if tardiness is considered or, generally, if the standard duality or MIP gaps are used instead. The gap is misleading at best.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper claims to beat SOTA heuristics. Lagrangian heuristics are definitely missing. How does the new method compare against Lagrangian heuristics? Understandably, Lagrangian relaxation has been used for a long time; the reviewer is asking about recent rather than historical Lagrangian heuristics, since their quality may differ.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In light of the above comments, the limitations are 1. only makespan is considered, 2. only one type of gap seems to be introduced and considered, and 3. major heuristics are not used/compared with.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
>The zero gap is mentioned. Upon reading, readers find out that the gap refers to "makespan gap." Understandably, significant bulk of existing papers on job-shop focus on makespan rather than tardiness. Ignoring tardiness may lead to poorer on time delivery, which is a weakness in itself, but with respect to the "makespan gap" measure it is not clear whether zero gap will result in zero (or small gap) is tardiness is considered or, generally, if the standard duality or MIP gaps are used instead. The gap is misleading at best.
To prevent misunderstanding, we will use “makespan gap” instead of “gap”.
Since the majority of research in JSP/FJSP focuses on minimizing makespan as the objective, we also choose to optimize this objective to facilitate the comparison with other studies. The calculation of the gap is explicitly defined in equation (6).
Besides, in our experiments, as Figure 4 shows, RS obtains optimal solutions for all instances with sizes larger than 100x15. All details of the optimal solutions are listed in Appendix Tables 16 and 17.
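For concreteness, a minimal sketch of the makespan gap computation, assuming the standard definition (intended to match Eq. (6)):

```python
# Makespan gap, assuming the standard definition:
# gap = (C_method - C_best) / C_best, where C_best is the optimal (or best
# known) makespan.  A zero gap means the method matched the optimum.
def makespan_gap(c_method: float, c_best: float) -> float:
    return (c_method - c_best) / c_best

assert makespan_gap(1230.0, 1230.0) == 0.0              # optimal found
assert abs(makespan_gap(1353.0, 1230.0) - 0.10) < 1e-9  # 10% above optimum
```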
**Questions**
>The paper claims to beat SOTA heuristics. Lagrangian heuristic is definitely missing. How does the new method compare against Lagrangian heuristic? Understandably, Lagrangian relaxation has been used for a long time, the reviewer is asking about recent rather than historical Lagrangian heuristics, since their quality may differ.
Our paper only claims to achieve SOTA among construction heuristics, not among all heuristics. Some other heuristics (e.g., GA, tabu search, Lagrangian heuristics, etc.) achieve better (smaller) gaps, but these methods usually take much more time. For example, [1] (see below) takes 3700, 5000, 4200, and 7500 seconds (about 1-2 hours) to run inference on TA25 (20x20), TA30 (30x15), TA40 (30x20), and TA50 (50x15), respectively, whereas we can run inference on these instances within 2 seconds.
[1] Kotary, J., Fioretto, F., & Van Hentenryck, P. (2022, June). Fast approximations for job shop scheduling: A lagrangian dual deep learning method. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 7, pp. 7239-7246).
---
Rebuttal Comment 1.1:
Comment: The reviewer continues to have reservations regarding the choice of makespan. While it is acknowledged (by both the reviewer and the authors) that a significant portion of existing job-shop literature centers on makespan rather than tardiness, this choice restricts practical applicability, especially concerning on-time delivery—a factor crucial for customer satisfaction. Note that the acknowledgment of this predominant focus on makespan is a mere observation and does not constitute a rigorous scientific discovery. On the other hand, the discourse on LR heuristics/methods has somewhat improved and the results presented are promising. The most the reviewer can do in light of these concerns is adjust their rating from "borderline reject" to "borderline accept".
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive re-evaluation and your acknowledgment of our results for makespan. Yes, we totally agree that other objectives like tardiness (as well as machine utilization, setup cost, energy consumption, etc. [1]) are also important for practical applications, especially scheduling for manufacturing plants, which we are in fact also working with. Our research towards RS is actually based on our observation and analysis of these practical applications, with objectives like makespan as well as tardiness, machine utilization, and setup cost. In this paper, we target makespan simply because it is easier to make comparisons with existing job-shop research on makespan (including datasets and benchmarks). As John McCarthy put it in 1990, chess is "the Drosophila (the fruit fly) of Artificial Intelligence": we usually want to work on a simplified problem first (makespan only) before a more complicated one (including tardiness, etc.). We hope the above answer addresses your concern about considering makespan only. Thank you very much again.
[1] Xiong, Hegen, et al. "A survey of job shop scheduling problem: The types and models." Computers & Operations Research (2022) | Summary: This paper proposed DRL based method to learn dispatching polices for (flexible) job-shop scheduling problems (JSP/FJSP). The main idea is to remove the completed operations from the state embedding, which is called residual scheduling, so as to improve the representation accuracy. The DRL agent uses a graph representation, which is processed by a Graph Neural Network (GNN) architecture. Experiments on JSP and FJSP benchmarks show that the proposed residual scheduling scheme outperforms recent DRL baselines.
Strengths: 1. The idea of residual scheduling makes great sense and is interesting.
2. The method is generally applicable to both JSP and FJSP, which are important scheduling problems.
3. Good empirical performance, comparing to recent DRL based scheduling methods.
Weaknesses: 1. The main weakness is that the technical contribution is incremental. While the residual scheduling idea is interesting and novel, a large part of the proposed method is similar to existing works. Specifically, the graph representation and heterogeneous graph neural network in Sections 3.2 and 3.3 are similar to the heterogeneous graph and heterogeneous GNN in (Song et al., 2023). This is not mentioned in Section 3, and the differences between the proposed method and existing works are not discussed.
2. Some design choices need to be justified. Please see the below questions.
3. Empirical evaluation needs to be improved.
* It is unclear whether the models in other works (e.g. L2D, SchN, DRL-G) are retrained using the same dataset as the proposed method. If not, then directly comparing their performance (even on the same benchmark instances) is not fair due to different training data.
* The discussion for Figure 4 is not surprising. It is well known that for JSP, problems with larger $n/m$ ratios are easier to solve (Taillard 1993). That is why the gaps on large problems in Figure 4 are smaller. Actually this is true for most algorithms, and cannot be claimed as a major advantage of the proposed method.
* Training time is not reported.
4. The authors made several inappropriate statements in the paper, mainly in introduction. The authors should be more precise about the related concepts.
* In the first paragraph, it is better to describe JSP as a combinatorial optimization problem, instead of mathematical optimization.
* In the second paragraph, Constraint Satisfaction Problem (CSP) should be not stated as a type of mathematical optimization. In addition, Constraint Programming (CP) here is more suitable than CSP.
* In the fourth paragraph, the approaches that automate the design of heuristics should be hyperheuristics, instead of metaheuristics.
5. The language needs to be improved. Besides, there are quite a few grammar errors and typos.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why use fully connected edges to link the operations in the same job? How about using only precedence relationship as a directed edge?
2. The second graph below Figure 3, for flexible problems, how to define the ready status for an operation considering there could be multiple compatible machines?
3. According to Figure 3(b), the graph embedding $h_{\mathcal{G}}^{(k)}$ is not used in the action score prediction as in Figure 3(c). So what is $h_{\mathcal{G}}^{(k)}$ used for?
4. In the experiments, to generate number of jobs and machines, why use the uniform distribution $\mathcal{U}(3, N)$ and $\mathcal{U}(3,n)$ (should be $\mathcal{U}(3,M)$ for machines)? The value 3 seems arbitrary.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors did not give a particular discussion on the limitations. One limitation could be the computational efficiency. As shown in Table 13 in the appendix, the runtime increase rapidly with the number of jobs. This could affect both training and inference efficiency, and limit its applicability to larger problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
>The main weakness is that the technical contribution is incremental. While the redisual scheduling idea is interesting and novel, a large part of the proposed method is similar to existing works. Specifically, the graph representation and heterogeneous graph neural network in Section 3.2 and 3.3 is similar to the heterogeneous graph and heterogeneous GNN in (Song et al., 2023).
This is not mentioned in Section 3, and the differences between the proposed method and existing works are not discussed.
First, thank you for finding the novelty of RS interesting. To our knowledge, none of the previous works with construction heuristics (including L2D (Zhang et al., 2020), ScheduleNet (Park et al., 2021a), Song et al. (2023), and Park et al. (2021b)) considered a residual state. The significance and importance of RS are described in the common reply section above.
Ours is different from Song et al. (2023) as described in the section of “Author Rebuttal by Authors” above.
>It is unclear whether the models in other works (e.g. L2D, SchN, DRL-G) are retrained using the same dataset as the proposed method. If not, then directly comparing their performance (even on the same benchmark instances) is not fair due to different training data.
In fact, many previous works did not release their training data or source code (only L2D is open-source). However, most of them used a similar procedure to generate training data, as mentioned in Section 4.1. In this paper, we use the same procedure to generate our training data and directly use their reported results for comparison. In addition, we also use public datasets like TA, MK, etc. for comparison.
>The discussion for Figure 4 is not surprising. It is well known that for JSP, problems with larger $n/m$ ratios are easier to solve (Taillard 1993). That is why the gaps on large problems in Figure 4 are smaller. Actually this is true for most algorithms, and cannot be claimed as a major advantage of the proposed method.
Yes, it is understandable that large JSP cases are easier to solve. However, Figure 4 mainly shows that RS is much better than L2D and actually achieves the same makespan as OR-Tools on those large JSP cases.
>Training time is not reported.
It takes about one day to train with 200,000 episodes. We will add it in the revision.
>In the first paragraph, it is better to describe JSP as a combinatorial optimization problem, instead of mathematical optimization.
We said this because many articles consider combinatorial optimization a subfield of mathematical optimization. We will rephrase it in the revision.
>In the second paragraph, Constraint Satisfaction Problem (CSP) should be not stated as a type of mathematical optimization. In addition, Constraint Programming (CP) here is more suitable than CSP.
We will rephrase it in the revision.
>In the fourth paragraph, the approaches that automate the design of heuristics should be hyperheuristics, instead of metaheuristics.
Thanks for pointing this out. We will rephrase it to “a generic approach to search within a search space of problem solutions”.
**Questions**
>Why use fully connected edges to link the operations in the same job? How about using only precedence relationship as a directed edge?
With fully connected edges, the embedding of one operation node encompasses the information of all operations of the same job, thus potentially representing the job embedding, e.g., including information about the remaining operations to be processed.
>The second graph below Figure 3, for flexible problems, how to define the ready status for an operation considering there could be multiple compatible machines?
The ready status for an operation is on as long as there exists at least one available machine that can process the operation at the time. So, it is the same for multiple available machines.
>According to Figure 3(b), the graph embedding $h_{\mathcal{G}}^{(k)}$ is not used in the action score prediction as in Figure 3(c). So what is $h_{\mathcal{G}}^{(k)}$ used for?
The variable $h_{\mathcal{G}}^{(k)}$ represents all of the hidden embeddings of the graph, including all $h_{O\in\mathcal{G}}^{(k)}$ and $h_{M\in\mathcal{G}}^{(k)}$. We will add the definition in the revision.
>In the experiments, to generate number of jobs and machines, why use the uniform distribution $\mathcal{U}(3, N)$ and $\mathcal{U}(3, n)$ (should be $\mathcal{U}(3, M)$ for machines)? The value 3 seems arbitrary.
Since the number of jobs, $n$, is greater than the number of machines, $m$, in most datasets of past works, we let $m\sim\mathcal{U}(3, n)$ such that $m\leq n$.
We chose 3 as the lower bound for the number of jobs or machines simply because instances with no more than 2 jobs and 2 machines are most likely trivial cases.
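A minimal sketch of this sampling scheme (the function name and use of Python's `random` module are our own illustration, not the authors' code):

```python
import random

def sample_instance_size(max_jobs, seed=None):
    """Sample (n_jobs, n_machines) with 3 <= m <= n <= max_jobs,
    mirroring n ~ U(3, N) and m ~ U(3, n) from the rebuttal."""
    rng = random.Random(seed)
    n = rng.randint(3, max_jobs)  # number of jobs, n ~ U(3, N)
    m = rng.randint(3, n)         # number of machines, guaranteed m <= n
    return n, m
```

Drawing `m` conditioned on `n` is what enforces the `m <= n` property the authors describe.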
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, which addressed my concern. I increased my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive response towards acceptance. In the case of acceptance, we will revise this paper accordingly based on your valuable comments. Again, thank you, and we are confident that this paper is worthy of this conference. | Summary: This paper proposes a deep reinforcement learning-based constructive heuristic to solve the (Flexible) Job Shop Scheduling Problem. An instance of the problem is represented as a graph and fed into a Graph Neural Network-based model which outputs a score for each candidate (operation-machine) pair. The model is trained with the REINFORCE algorithm with as baseline a classic Priority Dispatching Rule-based heuristic. The novelty of the paper lies in the update of the state after each action: the graph is updated by removing the operations which have already been executed to focus on the most relevant information, which is the residual operations and remaining times of the ongoing ones. The proposed approach is experimentally shown to outperform RL-based constructive heuristics on classic JSSP and FJSSP benchmarks.
Strengths: 1. Sound and interesting idea of removing irrelevant information from the state
1. The paper is fairly clear and I appreciated the illustrations Fig 1-3.
1. The model was proposed for the JSSP and easily adapted to the Flexible JSSP
1. The approach outperforms deep RL-based construction heuristics on classic benchmarks
Weaknesses: 1. Limited novelty: the main contribution is the definition of the residual state at each step of the construction process by removing irrelevant operations and resetting the time reference. This seems to me an incremental improvement of the approach L2D [1], which already proposed a similar state graph representation and the use of the Graph Isomorphism Network architecture for the JSSP.
1. The proposed approach seems very specific to (F)JSSPs (state representation, baseline) and it's not clear what could be transferable to DRL heuristics for solving other optimization problems.
1. In the experiments, the presented non-learning-based baselines seem pretty weak: only greedy (see more in Questions)
1. Lots of English typos (I noted a few per page)
[1] C Zhang et al, Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning, Neurips 2020
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Where do the attributes of edges (L205) appear in the GNN model (Sec 3.3)?
1. The paper claims: “Interestingly, using RS, the average gaps are nearly zero for the collections with sizes larger than 100 … A strong implication is that our RS approach can be scaled up for job sizes and even reach the optimal for sufficient large job count.” Do you have an idea of why the model would work better on unseen large instances versus instances of the same size as the training ones? Does OR-tools return the optimal solutions for these larger instances? Can it be that the quality of the reference solutions decreases and therefore the “optimality” gap becomes smaller?
1. Are there stronger non-learning-based baselines for the JSSP other than the greedy PDR heuristics presented in Table 2? To be able to appreciate the performance of the proposed approach, it would be useful and more convincing to compare it to the best heuristics for this problem, beyond simple greedy ones. For example, maybe [2]?
1. Appendix, Algorithm 1, Line 6: to compute this makespan at state s_t, given action a_t, do you do a rollout of the current policy \pi_{\theta} with the updated parameter \theta? This would mean doing a rollout until the end of both the baseline and current policy at each step of the trajectory? How long did the training take?
1. How is the average computation time computed? (Table 13) In particular, was the policy applied to each instance individually or were instances batched?
1. The discussion L322 about the time it takes for RS/L2D/ScheduleNet versus OR-Tools can be a bit misleading: OR-Tools is an exact solver. It spends a lot of time proving the optimality of the solution. It could probably return a good-quality solution much faster if used as a heuristic. In addition, it runs on CPUs and not GPUs, so just comparing the time is not really fair.
[2] CY Zhang, A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem, Computers & Operations Research, 2007
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Not explicitly addressed by the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
>Limited novelty: the main contribution is the definition of the residual state at each step of the construction process by removing irrelevant operations and resetting the time reference.
Please see the section of “Author Rebuttal by Authors” above.
>This seems to me an incremental improvement of the approach L2D [1], which already proposed a similar state graph representation and the use of the Graph Isomorphism Network architecture for the JSSP.
To the best of our knowledge, none of the previous works with construction heuristics (including L2D (Zhang et al., 2020), ScheduleNet (Park et al., 2021a), Song et al. (2023), and Park et al. (2021b)) considered the residual state (RS). In addition, ours also differs from L2D as described in the section “Author Rebuttal by Authors” above.
>The proposed approach seems very specific to (F)JSSPs (state representation, baseline) and it's not clear what could be transferable to DRL heuristics for solving other optimization problems.
This paper focuses on (F)JSSPs. Based on this, we expect to extend it to other optimization problems in the future.
**Questions**
>Where do the attributes of edges (L205) appear in the GNN model (Sec 3.3)?
The attributes of edges (L202~205) are concatenated into each $O\to M$ & $M\to O$ message passing, as mentioned in Equation (3), which embeds the attributes into $h$. To clarify this, we will update the equation as follows.
$h_{M_1}^{(k+1)}=MLP_{MM}^{(k+1)}((1+\epsilon)h_{M_1}^{(k)})+MLP_{OM}^{(k+1)}((h_{O_{1,1}}^{(k)}||\bar{T_{1,1,1}})+(h_{O_{2,3}}^{(k)}||\bar{T_{2,3,1}})+(h_{O_{3,1}}^{(k)}||\bar{T_{3,1,1}}))$
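As an illustrative sketch of this update (the MLP stand-ins, shapes, and function names are our own assumptions, not the paper's code), the machine-node update with concatenated edge attributes could look like:

```python
import numpy as np

def machine_update(h_M, h_ops, t_edges, mlp_mm, mlp_om, eps=0.0):
    """GIN-style update for one machine node: a self term plus the sum of
    messages from compatible operation nodes, where each operation
    embedding is concatenated with its processing-time edge attribute
    before the operation-to-machine MLP."""
    self_term = mlp_mm((1.0 + eps) * h_M)
    messages = sum(mlp_om(np.concatenate([h_o, t]))
                   for h_o, t in zip(h_ops, t_edges))
    return self_term + messages
```

The concatenation `[h_o, t]` is where the edge attribute $\bar{T}$ enters the message, matching the equation above.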
>Do you have an idea of why the model would work better on unseen large instances versus instances of the same size as the training ones?
Yes. From our observation, makespans become larger for larger problem sizes, so the gaps (normalized by the makespans as in the definition in Equation (6)) become smaller. The reason why the model would work better on unseen large instances is argued in the section “Author Rebuttal by Authors” above.
>Does OR-tools return the optimal solutions for these larger instances?
Given half a day of computation, OR-Tools returned optimal solutions for all but about 20 instances. Interestingly, optimal solutions were obtained for all instances with sizes larger than 100x15.
>Can it be that the quality of the reference solutions decreases and therefore the “optimality” gap becomes smaller?
As stated above, optimal solutions were obtained for all instances with sizes larger than 100x15, so we do not think this is the issue.
>Are there stronger non-learning-based baselines for the JSSP other than the greedy PDR heuristics presented in Table 2? To be able to appreciate the performance of the proposed approach, it would be useful and more convincing to compare it to the best heuristics for this problem, beyond simple greedy ones. For example maybe [2]?
Many non-learning methods, like [2] that you mentioned, solve the problems with low gaps (like OR-Tools) but actually take an unstably long time to do so. For example, for the instances swv11–swv15, our method takes 1–3 seconds on average, while the method of [2] took about 1 hour (as reported in [2]).
[2] CY Zhang, A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem, Computers & Operations Research, 2007.
>Appendix, Algorithm 1, Line 6: to compute this makespan at state s_t, given action a_t, do you do a rollout of the current policy \pi_{\theta} with the updated parameter \theta? This would mean doing a rollout until the end of both the baseline and current policy at each step of the trajectory? How long did the training take?
In lines 5 & 6, we simply use a baseline (MWKR in most of our experiments) for the rollout, not our (current) policy. (Note: MWKR is much faster than our policy, and our training dataset is (10,10).) Training takes about one day with 200,000 episodes.
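A one-line sketch of the return signal this implies (the function name is our own illustration; the rebuttal describes the return as the normalized makespan advantage over the baseline policy):

```python
def normalized_advantage(policy_makespan, baseline_makespan):
    """Normalized makespan advantage of the learned policy over a baseline
    rollout (e.g., the MWKR dispatching rule): positive when the policy's
    schedule is shorter than the baseline's."""
    return (baseline_makespan - policy_makespan) / baseline_makespan
```

Because minimization is the goal, a smaller policy makespan yields a larger (positive) return.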
>How is the average computation time computed? (Table 13) In particular, was the policy applied to each instance individually or were instances batched?
The computation time is calculated for each individual instance. Then, all of these times are averaged.
>The discussion L322 about the time it takes for RS/L2D/ScheduleNet versus OR-Tools can be a bit misleading: ......
>Probably it could return a good quality solution much faster if used as a heuristic.
>In addition it runs on CPUs and not GPUs therefore just comparing the time is not really fair.
For this question, we reran our program on a CPU and found that our CPU version (with a single thread) takes roughly twice the computation time of our GPU version (as shown in the paper). The details are shown below.
|Size|15x15|20x15|20x20|30x15|30x20|50x15|50x20|100x20|
|:-|-|-|-|-|-|-|-|-|
|CPU time (s)|1.30|1.55|1.45|3.67|4.65|10.04|13.10|51.47|
|GPU time (s)|0.47|0.83|0.91|1.93|2.21|5.3|6.96|27.32|
Now, for fairness, we let OR-Tools run with one thread within the same time budget ($T$) as above for each group of the dataset (the row of CPU time). The obtained makespans have the gaps shown in the following table. From the table, ours clearly outperformed OR-Tools by a large margin when limiting OR-Tools' running time to $T$. Even at 2$T$ and 4$T$, ours also clearly outperformed OR-Tools except for some small cases, like 15x15 and 20x15. For some large instances like 100x20, 50x20, and 50x15, the table shows that longer times do not help much.
|Size|15x15|20x15|20x20|30x15|30x20|50x15|50x20|100x20|
|:-|-|-|-|-|-|-|-|-|
|makespan gap by OR-Tools ($T$) |0.159|0.245|0.229|0.297|0.312|0.207|0.251|0.143|
|makespan gap by OR-Tools (2$T$) |0.121|0.214|0.202|0.267|0.292|0.199|0.248|0.143|
|makespan gap by OR-Tools (4$T$) |0.094|0.162|0.171|0.226|0.263|0.189|0.241|0.143|
|makespan gap by RS ($T$) |0.148|0.165|0.169|0.144|0.177|0.067|0.100|0.026|
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their precise answers. I increase my score to 6. | Rebuttal 1:
Rebuttal: Dear all reviewers, we appreciate your valuable comments. We would like to address the common concerns raised by reviewers.
**Novelty and contribution.**
In RS, the model simply focuses on the remaining non-dispatched operations, so the problem size shrinks as the process proceeds. This potentially leads to more accurate prediction, as illustrated below. For example, for a 20x10 instance, the original problem size at the beginning is 20x10; however, the size is reduced to 10x10 after scheduling half of the operations in RS. Scheduling at this point is like scheduling an instance of size close to 10x10 (without the extra information of the 20x10 instance). Since this problem size (about 10x10) has been trained on as described in Section 4.1, our method is able to predict well at this point (presuming the model is well trained). In brief, removing irrelevant operations allows the model to focus on the most critical parts of the problem and capture essential patterns and features, leading to better prediction.
In contrast, for other scheduling methods (like L2D and ScheduleNet), their graphs still include information of size close to 20x10 at this point. For example, a node for an operation that has already finished still exists with status “assigned” in ScheduleNet, while in RS the node is completely removed. Note: the current state of these methods obviously includes a lot of extra information, which usually requires more training to predict well.
Our experiments also justify the expected improvements over other methods. This is one of the major contributions in this paper.
|Method|Problem|Connection of operation nodes|Machine nodes|Action|Model|
|:-|-|-|-|-|-|
|L2D|JSP|precedences |no|choose operation|homogeneous GIN|
|ScheduleNet|JSP/mTSP|fully connected|yes|choose machine-operation pair|homogeneous GAT|
|Song [1] |FJSP|precedences |yes|choose machine-operation pair|heterogeneous GAT|
|RS|JSP/FJSP|fully connected|yes|choose machine-operation pair|heterogeneous GIN|
**Similarity to existing works.**
Here, we list the differences in the table above and will add more discussion in the revised version.
*(To reviewer 8Fub)* For L2D, ours is different in the following senses:
1) Nodes: RS uses two kinds of nodes (operation nodes and machine nodes), while L2D only uses operation nodes (and uses edges to link those with the same machines). Thus, RS uses heterogeneous GIN for two kinds of nodes, while L2D uses homogeneous GIN.
Besides, RS and L2D use different features. As mentioned in Section 3.2, RS uses three features for operation nodes, two features for machine nodes, and one edge feature for each machine-operation edge. L2D uses two features for operation nodes and has no machine nodes.
2) Actions: In RS, we choose among the eligible machine-operation pairs, i.e., "assigning an operation to an available machine", while L2D only considers the eligible operations, i.e., "assigning an available operation". L2D only considers the JSSP, so its straightforward scheme won’t work for the FJSP.
3) Rewards/Returns: In RS, the reward/return is the normalized advantage makespan with respect to a baseline policy, while L2D’s reward is the quality difference between two states.
*(To reviewer e9mT)* For (Song et al., 2023)[1], ours is different in the following senses.
1) Nodes and features: Both RS and (Song et al., 2023) use two kinds of nodes (operation nodes and machine nodes), but (Song et al., 2023) uses additional dummy operation nodes (start and end nodes). RS fully connects operation nodes belonging to the same job, while (Song et al., 2023) uses precedences to link nodes.
2) RS uses heterogeneous GIN for two kinds of nodes, while (Song et al., 2023) uses heterogeneous GAT. We use GIN instead, since it is shown in (Xu et al., 2019)[2] that GIN has strong discriminative power.
[1] Wen Song, Xinyang Chen, Qiqiang Li, and Zhiguang Cao, Flexible Job-Shop Scheduling via Graph Neural Network and Deep Reinforcement Learning, IEEE Trans. Ind. Informatics 2023.
[2] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka, How Powerful are Graph Neural Networks? ICLR 2019. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
StreamNet: Memory-Efficient Streaming Tiny Deep Learning Inference on the Microcontroller | Accept (poster) | Summary: This paper presents methods for speeding up patch-based inference on microcontrollers. The method creates a buffer to selectively save intermediate values that are traditionally discarded in patch-based computation. This allows StreamNet to balance latency and memory consumption. There are 1d and 2d variants of this optimization.
The paper additionally proposes a method for skipping the computation of padding data and a framework for auto-tuning the hyperparameters of StreamNet. The framework searches for patch hyperparameters that meet certain memory constraints while minimizing latency.
Strengths: - Pure runtime optimization targeted at the most important MCU metrics (latency and memory)
- State-of-the-art performance vs. a strong baseline (MCUNetv2)
- Enables a new dimension for latency-memory tradeoffs
- Includes an algorithm for automatically searching the newly created search space.
Weaknesses: - Given the tradeoff between latency and memory consumption that StreamNet unlocks, it would be easier to interpret results as points on a latency-memory Pareto curve rather than cross-referencing tables and charts. The table in the appendix that gave the speedup at nearly equal memory was the most informative in understanding the benefits, but it was buried.
- Only compares against MCUNetv2 and not the other patch-based optimization mentioned in the related work.
- Relies heavily on existing work (MCUNetv2 and TinyEngine)
- The contributions in the intro should be reworded to be consistent
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is this method purely a runtime/compiler optimization, or are there any model architecture implications (e.g. on accuracy)?
- How might one redesign MCU class models to maximize the benefits of StreamNet? If StreamNet more easily optimizes certain layer configurations, how does that impact existing Pareto curves?
- Will StreamNet be open-sourced?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: How long does it take to run the auto-tuning framework? How does it compare to black-box search methods?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you all for the valuable comments! In our revised version, we addressed all of the reviewer comments. The details of our reply and the changes are presented in the following.
D.1 Is this method purely a runtime/compiler optimization, or are there any model architecture implications (e.g. on accuracy)?
Yes, StreamNet is purely a runtime/compiler solution. The codegen of StreamNet yields the streaming buffer used when transferring data at runtime, without changing the original model structure. As a result, StreamNet produces the same output and accuracy as the original model.
D.2 How might one redesign MCU class models to maximize the benefits of StreamNet?
It is possible to use Neural Architecture Search (NAS) to discover new DNN model parameters that meet a given memory budget on MCUs. Unlike NAS, StreamNet can be applied to existing DNN models without changing the structure of the original model, and it eliminates the model training process used by NAS. Furthermore, the StreamNet compiler automatically analyzes the structure of the DNN model to create streaming buffers and reduces the unnecessary recomputation incurred by patch-based inference. As a result, StreamNet reduces the burden of manually crafting a new DNN model to meet the requirements of MCUs.
D.3 Will StreamNet be open-sourced?
StreamNet will be open-sourced after the paper is published.
D.4 How long does it take to run the auto-tuning framework? How does it compare to black-box search methods?
The StreamNet auto-tuning method takes about one minute to discover the best parameters for reorganizing the structure of the streaming buffer and fitting a DNN model in the MCU. The black-box (brute-force) method first enumerates all combinations of the n_patch and split_idx parameters. Second, it compiles these combinations with all feasible configurations of streaming levels (x or y streaming). Finally, it selects the parameters that consume the minimum number of MACs while meeting the SRAM memory budget of the MCU. Unlike the black-box method, StreamNet leverages Pareto-front optimization to reduce the size of the search space. As a result, StreamNet auto-tuning achieves a geometric-mean speedup of 1.29X over the black-box method on the Apple Silicon M1 CPU using a single thread.
| Benchmark | StreamNet Auto tuning | Brute Force | Speedup |
|----------------------|------------------------|--------------|----------|
| mcunet-vww0 (MV0) | 51.17s | 83.00s | 1.62X |
| mcunet-vww1 (MV1) | 46.47s | 77.86s | 1.68X |
| mcunet-vww2 (MV2) | 95.42s | 107.44s | 1.13X |
| mcunet-in0 (MI0) | 91.68s | 134.30s | 1.46X |
| mcunet-in1 (MI1) | 59.17s | 93.85s | 1.59X |
| mcunet-in2 (MI2) | 90.87s | 92.39s | 1.02X |
| mcunet-in3 (MI3) | 61.11s | 64.95s | 1.06X |
| mcunet-in4 (MI4) | 124.40s | 126.61s | 1.02X |
| mbv2-w0.35 (MB2) | 73.28s | 101.98s | 1.39X |
| proxyless-w0.3 (PL) | 65.90s | 79.89s | 1.21X |
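A minimal sketch of the Pareto-front pruning described above (the dictionary fields and helper names are our own illustration, not StreamNet's code): dominated configurations are dropped before search, and the winner is the feasible configuration with the fewest MACs.

```python
def pareto_front(candidates):
    """Keep only configurations not dominated in (peak SRAM, MACs): a
    candidate is dropped if another uses no more SRAM and no more MACs,
    with at least one strictly smaller."""
    front = []
    for c in candidates:
        dominated = any(
            o["sram"] <= c["sram"] and o["macs"] <= c["macs"]
            and (o["sram"] < c["sram"] or o["macs"] < c["macs"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

def best_under_budget(candidates, sram_budget):
    """Among Pareto-optimal configurations fitting the SRAM budget, pick
    the one with the fewest MACs (None if nothing fits)."""
    feasible = [c for c in pareto_front(candidates) if c["sram"] <= sram_budget]
    return min(feasible, key=lambda c: c["macs"]) if feasible else None
```

Pruning dominated points first is what shrinks the (n_patch, split_idx, streaming-level) search space relative to brute force.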
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: The authors' response resolves my main concerns with the paper.
I will say that the speed-up over the brute force method isn't particularly large, and the brute force method seems easier to parallelize over multiple threads or cores. Maybe worth de-emphasizing the auto tuning framework as a contribution given these results?
---
Reply to Comment 1.1.1:
Comment: It is good to know our responses resolve your main concerns in our paper. We will fine tune the contributions of our paper based on your comments. Thank you. | Summary: This paper introduces StreamNet, a novel approach designed to eliminate the performance bottleneck associated with patch-based inference, which incurs additional computational overheads due to overlapping patches. StreamNet comprises two techniques, StreamNet-1D and StreamNet-2D, each offering a different trade-off between memory overhead for buffering intermediate results and computational overhead reduction. The authors also propose an auto-tuning framework to derive an inference schedule, which is a composition of the two techniques, given the memory constraints of a Microcontroller Unit (MCU). Evaluation results indicate that the proposed approach significantly improves upon prior work in terms of memory usage and latency.
Strengths: - Clear and comprehensive illustration of the method: The paper provides good illustrations of the proposed methods, making them easy to understand. The examples used to explain the methods are also helpful in allowing readers to understand the advantages of the proposed methods and the trade-off between design elements.
- Intuitive and straightforward methods: The proposed StreamNet-1D and StreamNet-2D techniques are intuitive and straightforward, offering a clear path to reducing computational overheads by buffering tensors shared between patches.
- Practicality and on-device evaluation: The paper demonstrates the practicality of the proposed methods through on-device evaluation, which is a strength. However, more comprehensive testing is needed.
Weaknesses: - Limited experimental settings: The experimental setting does not seem to cover the common user scenario of employing patch-based inference. This limits the generalizability of the results and makes it difficult to assess the full potential of the proposed methods. Future work should include more diverse testing scenarios to fully evaluate the effectiveness and applicability of StreamNet. In my opinion, two control variables, input resolution and number of patches, are missing in the experiments.
* Patch-based inference is favorable in settings with large input resolution where the overhead of re-computation can be amortized by the large patch size. For instance, the re-computation overhead reported in MCUNetV2 is only 10% for MobileNetV2, but such overhead appears significantly higher in this paper. What causes the discrepancy requires further discussion.
* The number of patches used also significantly influences the re-computation overheads and memory usage. It would be intriguing to observe the trade-off between latency and memory usage achieved by the proposed method in comparison to the baselines.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Could you provide more insight into how the current experimental setup mirrors different user scenarios, particularly those that commonly employ patch-based inference?
2. Could you provide the original latency and memory usage of the models without patch-based inference as reference data points?
3. How would varying the input resolution and the number of patches impacts the performance of StreamNet?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you all for the valuable comments! In our revised version, we addressed all of the reviewer comments. The details of our reply and the changes are presented in the following.
C.1 The re-computation overhead reported in MCUNetV2 is only 10% for MobileNetV2, but such overhead appears significantly higher in this paper. What causes the discrepancy requires further discussion.
The MCUNet-v2 model zoo does not include the MobileNet-v2 model used in the MCUNet-v2 paper, so we cannot directly compare the result reported in the MCUNet-v2 paper with that in the StreamNet paper. In addition, the recomputation overhead of patch-based inference (MCUNet-v2) becomes worse as the peak SRAM memory usage decreases. MCUNet-v2 uses patch-based inference to increase the MACs by 10% while decreasing the peak SRAM memory usage by 87.5% compared to layer-wise tensor memory allocation. Although MCUNet-v2 can further decrease the SRAM memory usage of TinyML models on MCUs, it only reports the SRAM memory usage at the 10% MAC increase, because the amount of recomputation in MCUNet-v2 increases significantly when the SRAM memory usage is minimized. Unlike MCUNet-v2, StreamNet minimizes SRAM memory usage without significantly increasing the number of MACs in TinyML models.
C.2 Could you provide more insight into how the current experimental setup mirrors different user scenarios, particularly those that commonly employ patch-based inference?
The SRAM memory usage can drop significantly with patch-based inference at the cost of only a few additional MACs. For instance, the original PL model has 38.3 M MACs and uses 259 KB of SRAM. Patch-based inference increases the MACs by 8.8% (to 41 M) and decreases the SRAM usage of the PL model to 158 KB. Unlike MCUNet-v2, StreamNet needs only 39.1 M MACs while using 92 KB of SRAM. As a result, StreamNet requires fewer MACs and less SRAM than patch-based inference, since it removes the recomputation overhead of patch-based inference while keeping SRAM usage small.
C.3 Could you provide the original latency and memory usage of the models without patch-based inference as reference data points?
The original latency was obtained from MCUNet-v1, which does not use patch-based inference. To achieve a fair latency comparison of TinyML models, StreamNet and MCUNet-v1 use the same back-end library. MCUNet-v1 reduces the peak SRAM usage of TinyML models by performing in-place depthwise operations instead of using the patch-based method. Table 2 shows that StreamNet-2D incurs about 25% runtime overhead over MCUNet-v1 for processing streaming buffers. Moreover, StreamNet-2D saves about 42.3% of SRAM usage over MCUNet-v1 by using patch-based inference with 2D streaming buffers.
| Model | Original SRAM Memory Usage (KB) | Original latency (ms) | StreamNet-2D SRAM Memory Usage (KB) | StreamNet-2D latency (ms) |
|--------|-------------------------|---------------|-------------------------|---------------|
| MB2 | 295 | 367 | 66 | 417 |
| PL | 259 | 542 | 95 | 676 |
| MI0 | 49 | 82 | 30 | 99 |
| MI1 | 96 | 169 | 47 | 188 |
| MI2 | 215 | 869 | 169 | 1,168 |
| MI3 | 260 | 1,048 | 208 | 1,444 |
| MI4 | 416 | 1,371 | 236 | 1,762 |
| MV0 | 59 | 86 | 29 | 101 |
| MV1 | 92 | 159 | 44 | 225 |
| MV2 | 174 | 718 | 143 | 961 |
C.4 How would varying the input resolution and the number of patches impact the performance of StreamNet?
The following two figures present the latency and memory variations of StreamNet-2D when changing the number of patches in the PL model. As illustrated in Figure I, the execution time of the PL model does not fluctuate significantly when varying the number of patches on the STM32F767 MCU. In Figure II, the SRAM memory usage is sensitive to the value of split_idx: for instance, the SRAM usage at the setting (2, 12) is 50.5% smaller than at (2, 2). The SRAM usage also drops when increasing the number of patches; for example, in Figure II, the SRAM usage of (44, 4) is 14.4% smaller than that of (2, 4). Furthermore, the size of the patches grows with the input resolution when the values of n_patch and split_idx are unchanged. Hence, StreamNet-2D requires more stream-buffer space to store tensor data as the input resolution grows, but its recomputation overhead does not increase.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, which clarifies my main concerns. I am comfortable raising my rating to a weak accept. | Summary: The patch-based inference is widely employed for TinyML models on resource-constrained microcontroller units (MCUs), which significantly reduces memory requirements compared to layer-based inference. However, path-based inference can lead to a substantial increase in Multiply-Accumulates (MACs), as it introduces a great deal of redundant computation among adjacent patches. This paper introduces StreamNet, a solution that curtails repeated computation by memorization. Furthermore, StreamNet can auto-tune the area to be skipped during path-based inference, enhancing overall efficiency.
Strengths: 1. The paper provides a comprehensive explanation of the benefits and drawbacks of patch-based inference.
2. The proposed method is clearly articulated and easy to understand.
3. StreamNet not only reduces MACs by leveraging the memory of the overlapped computation area, but it also introduces an auto-tuning framework to streamline these areas.
4. StreamNet outperforms the state-of-the-art methods at a relatively minor expense of additional memory space.
5. The auto-tuning framework possesses the ability to further accelerate inference automatically if more memory space is made available.
Weaknesses: 1. It would be better to list the potential overlapped area percentage for several commonly used tinyML models.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I enjoyed reading the paper, and the idea is well present, and the gains are intuitively making sense. My only question is that what is the reuse distance of patches? Is there any case that the stream buffer is not sufficient to capture all the reuses?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you all for the valuable comments! In our revised version, we addressed all of the reviewer comments. The details of our reply and the changes are presented in the following.
B.1 What is the reuse distance of patches?
In StreamNet, the reuse distance is the number of patches executed between producing data in one patch and reusing it in another. For example, in Figure 3 of the StreamNet paper, the data of patch 0 is reused by patch 1 in StreamNet-1D, so the reuse distance of StreamNet-1D is 1. In addition, as illustrated in Figure 4, the data of patch 1 is reused by patch 3 in the horizontal direction of StreamNet-2D, so the reuse distance in that direction is 2: StreamNet-2D reuses the data after passing two patches. In general, the reuse distance in the horizontal direction of StreamNet-2D equals the value of n_patch.
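As a rough illustration (a hypothetical sketch, not the authors' implementation), the reuse distances described above can be modeled by patch positions in a row-major execution schedule:

```python
def reuse_distances(n_patch):
    """Reuse distances, in number of executed patches, assuming a
    row-major schedule over an n_patch x n_patch grid of patches
    (an illustrative model, not StreamNet's actual scheduler)."""
    # Patch (r, c) executes at step r * n_patch + c.
    step = lambda r, c: r * n_patch + c
    # 1D-style reuse: the next patch in the same row consumes the border.
    d_next = step(0, 1) - step(0, 0)   # always 1
    # 2D-style reuse: the patch one row down consumes the other border.
    d_row = step(1, 0) - step(0, 0)    # equals n_patch
    return d_next, d_row

# With n_patch = 2, patch 1's data is reused by patch 3 (distance 2),
# matching the example in the rebuttal.
assert reuse_distances(2) == (1, 2)
assert reuse_distances(4) == (1, 4)
```

This makes concrete why the 2D stream buffer must hold a full row of patch borders (reuse distance n_patch), while the 1D buffer only bridges adjacent patches.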
B.2 Is there any case that the stream buffer is not sufficient to capture all the reuses?
StreamNet-2D typically captures all the reuses of patch-based inference in 2D convolutions and depthwise convolutions. However, StreamNet-2D needs more SRAM memory to store its 2D stream buffer than StreamNet-1D, so it increases the peak SRAM usage and may run out of SRAM on MCUs. To address this challenge, StreamNet mixes StreamNet-1D and StreamNet-2D to reduce the peak SRAM usage on MCUs. Hence, StreamNet does not capture all the reuses when both StreamNet-2D and StreamNet-1D are used within a Convolutional Neural Network (CNN) model.
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed my concerns. Thanks. | Summary: Patch-based inference for MCUs induces a large number of redundant MACs compared to layer-wise processing because of overlapping patches. To address this problem, this work designs StreamNet, which employs a stream buffer to eliminate the redundant computation of patch-based inference. StreamNet uses 1D and 2D streaming processing and an auto-tuning framework to significantly improve the performance of patch-based inference with minimal demands on the MCU's SRAM memory space.
Strengths: 1) The proposed StreamNet removes the computing redundancy in prior patch-based DNN processing frameworks and improves DNN inference performance significantly.
2) The paper is well organized and easy to follow. The experiments are sufficient and to the point.
Weaknesses: StreamNet essentially introduces additional buffers to explore the redundant computing results induced by patched DNN processing without compromising the memory requirements. The major contribution will be the DNN computing system or implementation optimization and the novelty is relatively limited.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1) According to the experiments, we notice that the performance speedup is sensitive to the neural network architecture, e.g., kernel sizes and feature map sizes. However, the benchmarks are mostly obtained from NAS, and it is difficult to evaluate how representative these models are. Could you provide more details about the models, such as the number of layers, model sizes, and accuracy?
2) Transformer models are increasingly utilized, will the proposed framework be applicable to transformer models?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you all for the valuable comments! In our revised version, we addressed all of the reviewer comments. The details of our reply and the changes are presented in the following.
A.1 Could you provide more details about the models such as the number of layers, model sizes, and accuracy?
Our benchmark models were obtained from the MCUNet model zoo and are used for real-world applications such as visual wake words (VWW) and image classification on MCUs. MCUNet uses Neural Architecture Search (NAS) to fine-tune the accuracy, memory usage, and MAC count of TinyML models to meet the requirements of MCUs. The number of layers, model size, and top-1 accuracy of the MCUNet models are presented in the following table. These models can serve as representatives of real-world TinyML applications on MCUs and faithfully reflect the performance improvements of our StreamNet.
| Benchmark | number of layers | model sizes (MB) | Top-1 Accuracy |
|----------------------|-------------------|-------------------|-----------------|
| mcunet-vww0 (MV0) | 55 | 0.37 | 87.3% |
| mcunet-vww1 (MV1) | 51 | 0.43 | 88.9% |
| mcunet-vww2 (MV2) | 83 | 0.64 | 91.8% |
| mcunet-in0 (MI0) | 51 | 0.75 | 40.4% |
| mcunet-in1 (MI1) | 55 | 0.64 | 49.9% |
| mcunet-in2 (MI2) | 67 | 0.73 | 60.3% |
| mcunet-in3 (MI3) | 67 | 0.74 | 61.8% |
| mcunet-in4 (MI4) | 63 | 1.73 | 68.0% |
| mbv2-w0.35 (MB2) | 64 | 0.75 | 49.0% |
| proxyless-w0.3 (PL) | 76 | 0.75 | 56.2% |
A.2 Will the proposed framework be applicable to transformer models?
Since the receptive field of the convolution operation works in a sliding-window manner, patch-based inference can divide the entire input into multiple small tiles and reduce the peak SRAM memory usage by storing only one of the small patches at a time. Unlike convolution, in a transformer model the receptive field covers the entire input, so the tiling method used by patch-based inference does not apply to transformer models. StreamNet aims to decrease the recomputation caused by overlapping patches in patch-based inference, which does not arise in transformer models. Hence, we would need to redesign StreamNet to decrease SRAM memory usage on transformer models. | Rebuttal 1:
Rebuttal: Thank you all for the valuable comments! In our revised version, we addressed all of the reviewer comments. The details of our reply and the changes are presented in the following.
C.1 The re-computation overhead reported in MCUNetV2 is only 10% for MobileNetV2, but such overhead appears significantly higher in this paper. What causes the discrepancy requires further discussion.
The MCUNet-v2 model zoo does not include the MobileNet-v2 model used in the MCUNet-v2 paper, so we cannot directly compare the result reported there with that in the StreamNet paper. In addition, the recomputation overhead of patch-based inference (MCUNet-v2) becomes worse as the peak SRAM memory usage decreases. MCUNet-v2 uses patch-based inference to increase MACs by 10% while decreasing peak SRAM memory usage by 87.5% compared to layer-wise tensor memory allocation. Although MCUNet-v2 could further decrease the SRAM memory usage of TinyML models on MCUs, it only reports the SRAM usage at the operating point where MACs increase by 10%, because the amount of recomputation in MCUNet-v2 grows significantly when the SRAM usage of TinyML models is minimized. Unlike MCUNet-v2, StreamNet minimizes SRAM memory usage without significantly increasing the MACs of TinyML models.
C.2 Could you provide more insight into how the current experimental setup mirrors different user scenarios, particularly those that commonly employ patch-based inference?
SRAM memory usage can drop significantly under patch-based inference at the cost of only a few extra MACs. For instance, the original PL model performs 38.3 M MACs and uses 259 KB of SRAM. Patch-based inference increases the MACs by 8.8% (to 41 M) and decreases the SRAM usage of the PL model to 158 KB. In comparison, StreamNet performs 39.1 M MACs while using 92 KB of SRAM. As a result, StreamNet needs fewer MACs and less SRAM than patch-based inference, since it removes the recomputation overhead of patch-based inference while keeping the SRAM footprint small.
Unlike the MCU used in the StreamNet paper, another user scenario involves the STM32L412 MCU, which has 256 KB of SRAM and 1 MB of flash memory. With Google TensorFlow Lite for Microcontrollers (TFLM) and layer-wise tensor memory allocation, the MI4 model has a peak SRAM usage of 426 KB and performs 126 M MACs. TFLM therefore raises an out-of-memory exception on the STM32L412 MCU, since the model's SRAM usage exceeds the MCU's SRAM capacity. MCUNet-v2 performs patch-based inference to reduce SRAM usage on the MCU: the MI4 model then uses only 129 KB of SRAM, but the MACs grow to 1942 M. To fit the same SRAM constraint, StreamNet-2D runs the MI4 model with 241 KB of SRAM and 126 M MACs, reducing MACs by 15.41x by capturing all reuses in the MI4 model. StreamNet-1D runs the model with 354 M MACs and a peak SRAM usage of 150 KB, similar to the MCUNet-v2 result on MI4. Typically, we can adjust the MCUNet-v2 parameters to trade more SRAM usage for less recomputation on overlapping patches. For instance, to fit the SRAM capacity of the STM32L412 MCU, we choose (n_patch, split_idx) = (5, 19), the setting whose SRAM usage is closest to the memory constraint; with these parameters, the MI4 model performs 458 M MACs with a peak SRAM usage of 239 KB. It is thus possible to tune the patch-based inference parameters to meet an MCU's memory constraint. However, patch-based inference exposes extremely high recomputation overhead when the peak SRAM usage is lowered or when running on an MCU with small SRAM.
Unlike patch-based inference, StreamNet eliminates a large portion of this recomputation overhead without significantly increasing SRAM memory usage.
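The MI4 numbers quoted in this answer can be sanity-checked with a few lines of arithmetic (values copied from the answer above; variable names are ours):

```python
# MI4 model in the STM32L412 scenario described above (MACs in millions).
layerwise_macs = 126    # TFLM layer-wise: 426 KB SRAM (out of memory on this MCU)
patch_macs     = 1942   # MCUNet-v2 patch-based: 129 KB SRAM
stream2d_macs  = 126    # StreamNet-2D: 241 KB SRAM

# StreamNet-2D's MAC reduction over patch-based inference.
reduction = patch_macs / stream2d_macs
assert round(reduction, 2) == 15.41   # the 15.41x quoted above

# Equivalently: patch-based recomputation inflates MACs by over 15x
# relative to layer-wise inference on this model.
assert patch_macs / layerwise_macs > 15
```

This confirms that the 15.41x figure is simply the ratio of patch-based MACs to StreamNet-2D MACs.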
C.4 How would varying the input resolution and the number of patches impact the performance of StreamNet?
The following two figures present the latency and memory variations of StreamNet-2D when changing the number of patches in the PL model. As illustrated in Figure I, the execution time of the PL model does not fluctuate significantly when varying the number of patches on the STM32F767 MCU. As shown in Figure II, the SRAM memory usage is sensitive to the value of split_idx: for instance, with the (2, 12) setting it is 50.5% smaller than with (2, 2). The SRAM memory usage also drops when the number of patches increases; for example, the usage for (44, 4) is 14.4% smaller than for (2, 4). Furthermore, the size of the patches grows with the input resolution when n_patch and split_idx are kept fixed. Hence, StreamNet-2D requires more stream buffer space to store tensor data, but its recomputation overhead does not increase as the input resolution grows.
Pdf: /pdf/d8353fb94b1864f297663783fd63dc7f1f88ea6c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Module-wise Training of Neural Networks via the Minimizing Movement Scheme | Accept (poster) | Summary: This paper proposes a new method for greedy layer-wise or module-wise training of neural networks, which is compelling in constrained and on-device settings where memory is limited, but which suffers from a stagnation problem. Experimental results show that their method improves the accuracy of module-wise training for various architectures.
-------After Rebuttal-------
I thank the authors for the answers to my comments and questions. Most of my concerns are properly addressed. Therefore I raise my score.
I still think the term "early overfitting" should be used carefully in the paper. I agree with the phenomenon that vanilla module-wise training performs very well in the early layers but stagnates and gets overtaken later; however, this may be due to insufficient mutual information between the learned features and the inputs, rather than early-layer overfitting. Generally, I think the early modules, which contain fewer learnable parameters, are unlikely to overfit a large-scale dataset such as ImageNet.
Strengths: 1. Overall, the idea is new and the proposed method is theoretically sound.
2. The paper is well-written and easy to follow.
Weaknesses: On Line 34, the authors claim that the early modules can overfit and learn more discriminative features than end-to-end training, destroying task-relevant information, so that deeper modules do not improve the test accuracy significantly, or even degrade it. However, the experimental results in this paper are not sufficient to support this assumption, especially regarding the overfitting problem.
As the listed experiments are all conducted on small or medium-scale datasets, the overfitting problem might occur in the early layers. However, for large-scale datasets such as ImageNet, the early layers are more likely to underfit due to the insufficient model capacity of the first few layers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The wall-time training cost is also an important evaluation metric for training algorithms. I suggest the authors provide a comparison of the training time between their proposed TRGL and other related methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer N1H7,
Thank you for your valuable review. We answer your remarks below.
**Weaknesses**
**1. Evidence for early overfitting**
This is indeed an interesting question. Our experiments in Figures 2 and 3 demonstrate that vanilla module-wise training performs very well in the early layers but stagnates and gets overtaken later for the datasets considered in the paper.
Most importantly, on line 33 we cite four papers [36, 5, 53, 40] that make the same observation. The arguments developed in each paper are detailed below. We therefore considered this well established in the literature. We propose to add this discussion of the evidence for the early overfitting phenomenon to the paper.
[36] compare the accuracy after each layer to that of end-to-end training and find that layer-wise training overfits and outperforms E2E in early layers, but stagnates later while E2E improves dramatically.
[53] explore this in more detail. They find that "greedy SL (module-wise training) contributes to dramatically more discriminative features with the first one (or few) local module(s), but is only able to slightly improve the performance with all the consequent modules. In contrast, the E2E learned network progressively boosts the linear separability of features throughout the whole network with even more significant effects in the later layers, surpassing greedy SL eventually". [53] also show that module-wise training destroys a larger amount of mutual information between the learned features and the inputs (and between the learned features and the outputs) compared to end-to-end training in the early modules.
Also, [40] find that the first module "may learn only too coarse features to classify complex images, which may be less meaningful representations for upper layers". They solve this by making the first module deeper, which reduces the memory savings of the method, and [5] solve the problem by making the auxiliary network deeper.
**2. Large scale experiments**
Since submitting the paper, we have run experiments on ImageNet (ResNet-101 split into 2 modules trained in parallel). Results are below and in the additional PDF file. This validates that the conclusions drawn on the smaller datasets still hold for large datasets such as ImageNet. We did not experiment with more than 2 modules on ImageNet due to resource limitations.
| Dataset | Parallel VanGL | Parallel TRGL (ours) | DGL | Sedona | InfoPro | E2E |
|----------|----------------|----------------------|-------|--------|---------|-------|
| ImageNet | 78.11 | **79.41** | 78.47 | 79.28 | 78.15 | 78.71 |
**Questions**
**1. Training time**
This was included in the paper but probably not prominently enough, in section 4.2 starting line 259:
Vanilla parallel module-wise training (denoted VanGL in the paper) slightly increases training time: epoch time increases by 6% with 2 modules and by 16% with 16 modules compared to end-to-end training. With our regularization added, TRGL is only 2% slower than VanGL for any number of modules. This is comparable to the InfoPro baseline, which reports a time overhead between 1% and 27% compared to end-to-end training.
Note that InfoPro is the most comparable method to ours in this regard as it also saves memory at the price of slowing down training a little bit, while having performances that are close to or better than end-to-end training.
---
Rebuttal 2:
Title: After review update
Comment: Indeed the term 'destruction of information' as in [53] in our rebuttal is better than 'overfitting' to describe this phenomenon.
---
Rebuttal 3:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look and accordingly comment, and updated your review?
Thanks,
-Area Chair | Summary: The paper explores the module-wise training of neural networks via the Minimizing Movement Scheme. This approach aims to overcome the stagnation problem often encountered in layer-wise training, leading to improved accuracy and reduced memory usage. The authors compare their results with those of other methods and demonstrate the effectiveness of their approach on various architectures, including ResNets, Transformers, and VGG.
Strengths: 1. The introduced methodology effectively addresses the frequently observed stagnation issue associated with layer-wise training. As a result, there is a noteworthy enhancement in model accuracy along with a commendable reduction in memory usage.
2. Establishing a connection between neural network training and optimal transport theory is not only intellectually stimulating, but it also presents intriguing theoretical insights.
3. Generally, the manuscript is well-structured and clearly articulated. However, some convoluted notations used in Section 2.2 could potentially be simplified for improved reader comprehension.
Weaknesses: 1. The paper is currently missing a detailed convergence analysis. It's vital to elucidate whether the overall training process will indeed converge in order to reinforce the reliability and robustness of the proposed approach.
2. The study could benefit from a more comprehensive ablation analysis, specifically, an exploration on the impact of variations in the regularization penalty. This would further validate the robustness of the model and provide additional insights into its behavior under different conditions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer X5WU,
Thank you for your valuable review. We answer your remarks below.
**Weaknesses**
**1. Convergence**
This is indeed an important question, and it will be discussed further in the final version. In Section 2.2, we have assumed that the training of the individual modules converges.
Indeed, the training of shallow networks is known to converge [A,B,C,D,E]. Papers [5,6,24,F,G] show the convergence of vanilla module-wise (both parallel and sequential) training by chaining existing convergence results for shallow modules. We have not considered the impact of the regularization on this, but it is a simple quadratic regularization that makes the target more convex and [29] show the beneficial effects of this transport regularization on training a network end-to-end.
Assuming therefore that the training of each shallow module converges with the number of epochs, the assumptions under which we have convergence as the regularization weight parameter $\tau$ goes to zero and the number of modules $k$ increases are discussed in Section 2.2 (lines 130 to 136). Convergence as the number of modules increases is the central point in module-wise training as we precisely want the modules to build upon each other to solve the task, and this is exactly what we demonstrate.
[A] *Understanding Deep Neural Networks with Rectified Linear Units*, Arora et al., ICLR 2018
[B] *Breaking the Curse of Dimensionality with Convex Neural Networks*, Bach, JMLR 2017
[C] *Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods*, Janzamin et al, arXiv 2016
[D] *Learning One-hidden-layer Neural Networks with Landscape Design*, Ge et al., ICLR 2018
[E] *Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps*, Du et al., arXiv 2018
[F] *A Provably Correct Algorithm for Deep Learning that Actually Works*, Malach et al., arXiv 2018
[G] *Provable Bounds for Learning Some Deep Representations*, Arora et al., ICML 2014
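For reference, the minimizing movement (JKO) step underlying the scheme discussed in this answer can be written in its standard form (standard notation from the optimal transport literature, not reproduced from the paper):

```latex
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho}\; F(\rho) \;+\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k)
```

where $F$ is the objective over distributions and $W_2$ is the 2-Wasserstein distance; as $\tau \to 0$, the iterates approximate the gradient flow of $F$, which is why the limit $\tau \to 0$ with an increasing number of modules $k$ is the natural convergence regime discussed above.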
**2. Impact of variations in the regularization penalty**
We already provide an exploration of the impact of variations in the regularization penalty in Figure 4 in Appendix H (this is mentioned in Section 5) on the STL10 dataset. We found that "TRGL performs better than VanGL for values of $\tau$ from 0.03 to 100 and is then roughly equivalent to it for values up to 5000."
We have run a similar experiment using the Swin-Tiny Transformer split in 4 modules trained in parallel on CIFAR100 (line 4 in Table 4), with similar results. The figure is in the additional PDF joined to the general answer above.
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look and accordingly comment, and updated your review?
Thanks,
-Area Chair | Summary: The paper proposes a new regularization for module-wise training based on the distance between the input and output of each module.
Strengths: 1. The proposed regularization is quite straightforward and easy to apply.
2. The paper connects the proposed regularization with optimal transport via theoretical analysis.
3. Some experimental results are good, like adding more modules on STL-10.
Weaknesses: 1. Although the author shows the connection between the proposed regularization and optimal transport, it does not provide insights regarding convergence. Whether the proposed regularization helps the convergence of the multi-module training might be more interesting for larger audiences.
2. The authors argue that 'residual connections are already biased towards small displacements.' However, residual connections are not biased toward small displacements in the early layers, as shown in [28]. As a result, how does the proposed method control each module's regularization strength? If the strengths are all set to the same value, the regularization may be harmful to early layers. In addition, the selection of the regularization strength becomes more complex as the number of modules K increases.
3. The improvements over the previous method are only obvious on STL-10, and STL-10 is not a large dataset. In addition, larger datasets and models are omitted. I think memory saving will be more meaningful for larger datasets and models. As a result, the scalability of the proposed method is not examined.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Why is the memory saving of the proposed method often less than that of other methods? Where does this overhead come from?
2. Some notations are not introduced properly, for example the $\#$ notation and the difference between $T_k$ and $T_k^{\tau}$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer U1e3,
Thank you for your valuable review, we answer your remarks below.
**Weaknesses**
**1. Convergence**
This is indeed an important question, and it will be discussed further in the final version. In Section 2.2, we have assumed that the training of the individual modules converges.
Indeed, the training of shallow networks is known to converge [A,B,C,D,E]. Papers [5,6,24,F,G] show the convergence of vanilla module-wise (both parallel and sequential) training by chaining existing convergence results for shallow modules. We have not considered the impact of the regularization on this, but it is a simple quadratic regularization that makes the target more convex and [29] show the beneficial effects of this transport regularization on training a network end-to-end.
On the theoretical side, assuming therefore that the training of each shallow module converges with the number of epochs, the assumptions under which we have convergence as the regularization weight parameter $\tau$ goes to zero and the number of modules $k$ increases are discussed in Section 2.2 (lines 130 to 136). Convergence as the number of modules increases is the central point in module-wise training as we precisely want the modules to build upon each other to solve the task, and this is exactly what we demonstrate.
In practice, experiments reported on Fig 2 and Fig. 3 show the importance of the regularization term for the convergence to a solution as the number of modules increases.
[A] *Understanding Deep Neural Networks with Rectified Linear Units*, Arora et al., ICLR 2018
[B] *Breaking the Curse of Dimensionality with Convex Neural Networks*, Bach, JMLR 2017
[C] *Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods*, Janzamin et al, arXiv 2016
[D] *Learning One-hidden-layer Neural Networks with Landscape Design*, Ge et al., ICLR 2018
[E] *Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps*, Du et al., arXiv 2018
[F] *A Provably Correct Algorithm for Deep Learning that Actually Works*, Malach et al., arXiv 2018
[G] *Provable Bounds for Learning Some Deep Representations*, Arora et al., ICML 2014
**2. Setting the regularization strength**
It is true that in [28] the early modules are *less biased* towards small displacement values, but regularizing them the same as for deeper layers was beneficial in [29].
We discuss in Section 3.3 and Appendix D the possible need to have a different regularization strength for each module (i.e a different weight $\tau_k$ is given to module $k$), and use an algorithm inspired by the method of multipliers that was also used in [29] to adjust these values $\tau_k$.
Indeed as you indicate "the selection of the regularization strength becomes more complex when increasing the number of modules $K$", but this is true for all module-wise training methods that add a term to the local loss. The method of multipliers algorithm we use allows us to start all $\tau_k$ from the same initial value and then let them change at different rates during training (Section 3.3 and Appendix D). **This way the number of hyper-parameters required by our method is 3 and does not increase with the number of modules $K$**.
This is more principled than other methods that also add a term to the local loss. For instance, InfoPro [53] have two hyper-parameters per module and they simply assume that they change linearly from the first to the last module and then choose the 4 needed values manually from a fixed set.
Additionally, while observing the behaviour of this algorithm, we noticed that a simple pattern for the values $\tau_k$ works well in most cases and deduced a simple heuristic: use a value $\tau$ for the first $K/2$ modules and double it (i.e use $2 \tau$) for the last $K/2$ modules. A **single hyper-parameter $\tau$** has then to be fixed and this can be done manually or through any search or cross validation method.
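The heuristic can be written as a one-line schedule (a hypothetical helper for illustration, assuming an even number of modules $K$):

```python
def tau_schedule(tau, K):
    """Regularization weight per module: tau for the first K/2 modules,
    2 * tau for the last K/2 (the heuristic described above)."""
    return [tau if k < K // 2 else 2 * tau for k in range(K)]

# With tau = 0.1 and K = 4 modules:
assert tau_schedule(0.1, 4) == [0.1, 0.1, 0.2, 0.2]
```

Only the single scalar `tau` then needs to be chosen, manually or via any search or cross-validation method.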
**3. Large scale experiments**
Since submitting the paper we have run experiments on ImageNet (ResNet-101 split into 2 modules trained in parallel. Results are below and in the additional PDF file. This validates that the conclusion drawn on the smaller datasets still holds for large datasets such as ImageNet. We did not experiment with more than 2 modules on ImageNet due to resource limitations.
| Dataset | Parallel VanGL | Parallel TRGL (ours) | DGL | Sedona | InfoPro | E2E |
|----------|----------------|----------------------|-------|--------|---------|-------|
| ImageNet | 78.11 | **79.41** | 78.47 | 79.28 | 78.15 | 78.71 |
**Questions**
**1. Memory savings**
There might be a misunderstanding here. The memory savings are not less than those of the other methods; they are most often higher. InfoPro saves more memory in only 2 out of 7 cases (see Tables 4 and 5). InfoProL saves more memory in 2 out of 3 cases, but many of the other methods (Sedona, DDG, FR) do not claim to save memory at all compared to end-to-end training.
Please note that VanGL is simply vanilla module-wise training without any added regularization or auxiliary network, so it is normal that it uses less memory than all other methods.
**2. Notations**
Sorry about that; we will add the definition of the pushforward measure $\#$. $T_k$ is simply the module trained in vanilla module-wise training (VanGL), so there is no dependence on $\tau$, which is the weight given to the regularization in TRGL.
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair
---
Rebuttal 3:
Comment: I want to thank the authors for their detailed response. They addressed my concerns, and I raised my score to weak accept. | Summary: TRGL offers a promising approach to module-wise training that addresses the stagnation problem, improves accuracy, and reduces memory usage in constrained and on-device settings. The method introduces a module-wise regularization inspired by the minimizing movement scheme for gradient flows in distribution space. By minimizing the kinetic energy of modules along with the training loss, TRGL encourages modules to change their inputs as little as possible, preserving task-relevant information.
Strengths: (1) TRGL significantly reduces memory usage compared to end-to-end training, ranging from 10% to 60% less memory. This makes it particularly suitable for constrained settings and on-device training where memory resources are limited.
(2) The authors provide theoretical analysis, proving that TRGL leads to more regular and stable greedy modules that progressively minimize the loss.
(3) TRGL can be applied to various network architectures, especially those using residual connections such as ResNets and vision transformers.
Weaknesses: (1) TRGL introduces an additional regularization term to the training objective, which increases the computational complexity of the training process. The calculation of the kinetic energy and its incorporation into the loss function may require additional computational resources, leading to longer training times. How about the actual training time compared to other methods?
(2) what is VanGL? is it just a module-wise training without the regularization terms?
(3) In my opinion, the performance gain is marginal compared to vanilla VanGL.
(4) The principle of finding Tau is vague.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This increased complexity may result in longer training times and higher computational requirements, which could be a limitation in resource-constrained environments or when training large-scale models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Ad5i,
Thank you for your valuable review. We answer your remarks below.
**Weaknesses**
**(1) Training time**
Of course, like any other method that adds a term to the local loss (so most other methods), training time is slightly increased in favor of saving memory, which is the real hard constraint when training on a small device. We do discuss this in Section 4.2, though probably not prominently enough, and quantitatively compare to other methods. Starting line 259:
“Vanilla parallel module-wise training (denoted VanGL in the paper) does slightly slow down training. Epoch time increases by 6% with 2 modules and by 16% with 16 modules compared to end-to-end training. By adding our regularization, TRGL is only slower than VanGL by 2% for any number of modules. This is comparable to InfoPro which reports a time overhead between 1 and 27% compared to end-to-end training.”
Note that InfoPro is the most comparable method to ours in this regard as it also saves memory at the price of slowing down training a little bit, while having performances that are close to or better than end-to-end training. The real limitation for training on mobile devices is memory since the increase in training time is small as stated above: 2% compared to VanGL which has already been used for on-device training in [50,51] as indicated in the paper.
**(2) Meaning of VanGL**
Yes this is as you say. “VanGL” is defined on line 209: "We call vanilla greedy module-wise training with the same architecture but without our regularization VanGL, and we include its results in all tables for ablation study purposes"
**(3) Ablation**
The gains are not marginal on parallel training in Section 4.1. They are higher than 1 percentage point in 9 out of the 13 experiments, and higher than 2 percentage points in 6 out of the 13 experiments. This is confirmed by experiments on the larger ImageNet dataset that we added in the general answer to all reviewers above, where the improvement in Top 1 accuracy is 1.3 percentage points.
They are indeed marginal on sequential training in the full data regime, but quite big in the small data regime (up to 6 percentage points gained, Tables 10 and 12 in Appendix G).
Compared to other methods, our experiments cover more cases (parallel and sequential with many or few modules) and show that our method is more reliable as at least it never hurts performance. This is already discussed in Section 5 where we say starting line 294: "The results show that our approach works in all settings (parallel and sequential with many or few modules), whereas other papers don’t test their methods in all settings, and some fail in different settings from the original one in subsequent papers (e.g. delayed gradients methods when the number of modules increases [25] and PredSim in [40]). Also, for parallel training in Section 4.1, the improvement from the regularization compared to VanGL increases with the number of modules (so as the memory savings increase and module-wise training becomes more useful)."
**(4) Choosing hyper-parameter $\tau$**
We experimented with two strategies for choosing the parameter $\tau$. A method of multipliers is used to adaptively change $\tau$ during training, differently for each module, and is detailed in Section 3.3 and Appendix D. A reference is made to [29], which also uses this method to adaptively change a regularization weight during training. This is more principled than other methods that also add a term to the local loss. For instance, InfoPro [53] has two hyper-parameters per module and simply assumes that they change linearly from the first to the last module, then chooses the 4 needed values manually from a fixed set.
Besides, we also experimented with a simple heuristic that works well in practice and involves manually setting a single hyper-parameter: we use a value $\tau$ for the first $K/2$ modules and double it (i.e. use $2\tau$) for the last $K/2$ modules. So only $\tau$ has to be chosen. Again, this is mentioned in Section 3.3.
**Limitations**
The real limitation for training on mobile devices is memory since the increase in training time is small as stated above: 2% compared to VanGL which has already been used for on-device training [50,51] as indicated in the paper.
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their valuable insights. We have addressed in the individual answers the points raised by the reviewers. Since some comments are shared by different reviewers, we summarise here the main answers:
**1. Experiments on ImageNet (Reviewers U1e3 and N1H7)**
The reviewers asked for experiments on larger datasets. Since submitting the paper we have run experiments on ImageNet (ResNet-101 split into 2 modules trained in parallel). Results are below and in the additional PDF file. This validates that the conclusion drawn on the smaller datasets still holds for large datasets such as ImageNet. We did not experiment with more than 2 modules on ImageNet due to resource limitations.
| Dataset | Parallel VanGL | Parallel TRGL (ours) | DGL | Sedona | InfoPro | E2E |
|----------|----------------|----------------------|-------|--------|---------|-------|
| ImageNet | 78.11 | **79.41** | 78.47 | 79.28 | 78.15 | 78.71 |
**2. Training time (Reviewers Ad5i and N1H7)**
The reviewers asked about the increase in wall-time training cost caused by our regularization. This was included in the paper but not prominently enough. In section 4.2 we say:
Vanilla parallel module-wise training (denoted VanGL in the paper) does slightly increase training time. Epoch time increases by 6% with 2 modules and by 16% with 16 modules compared to end-to-end training. By adding our regularization, TRGL is only slower than VanGL by 2% for any number of modules. This is comparable to the InfoPro baseline, which reports a time overhead between 1 and 27% compared to end-to-end training.
Note that InfoPro is the most comparable method to ours in this regard as it also saves memory at the price of slowing down training a little bit, while having performances that are close to or better than end-to-end training.
**3. Convergence (Reviewers U1e3 and X5WU)**
Indeed this is an important question and this will be further discussed in the final version. In Section 2.2, we have assumed that the training of the individual modules converges.
Indeed, the training of shallow networks is known to converge [A,B,C,D,E]. Papers [5,6,24,F,G] show the convergence of vanilla module-wise (both parallel and sequential) training by chaining existing convergence results for shallow modules. We have not considered the impact of the regularization on this, but it is a simple quadratic regularization that makes the target more convex and [29] show the beneficial effects of this transport regularization on training a network end-to-end.
Assuming therefore that the training of each shallow module converges with the number of epochs, the assumptions under which we have convergence as the regularization weight parameter $\tau$ goes to zero and the number of modules (denoted $k$, line 131) increases are discussed in Section 2.2 (lines 130 to 136). Convergence as the number of modules increases is the central point in module-wise training, as we precisely want the modules to build upon each other to solve the task, and this is exactly what we demonstrate.
In practice, experiments reported on Fig 2 and Fig. 3 show the importance of the regularization term for the convergence to a solution as the number of modules increases.
[A] *Understanding Deep Neural Networks with Rectified Linear Units*, Arora et al., ICLR 2018
[B] *Breaking the Curse of Dimensionality with Convex Neural Networks*, Bach, JMLR 2017
[C] *Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods*, Janzamin et al, arXiv 2016
[D] *Learning One-hidden-layer Neural Networks with Landscape Design*, Ge et al., ICLR 2018
[E] *Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps*, Du et al., arXiv 2018
[F] *A Provably Correct Algorithm for Deep Learning that Actually Works*, Malach et al., arXiv 2018
[G] *Provable Bounds for Learning Some Deep Representations*, Arora et al., ICML 2014
**4. Choosing hyper-parameter $\tau$ (Reviewers Ad5i and U1e3)**
We discuss in Section 3.3 and Appendix D the possible need to have a different regularization strength for each module (i.e a different weight $\tau_k$ is given to module $k$), and use an algorithm inspired by the method of multipliers (that was also used in [29]) to adjust these values $\tau_k$.
The method of multipliers algorithm we use allows us to start all values $\tau_k$ from the same initial value and then let them change at different rates during training. **This way the number of hyper-parameters required by our method is 3 and does not increase with the number of modules $K$**.
This is more principled than other methods that also add a term to the local loss. For instance, InfoPro [53] has two hyper-parameters per module and simply assumes that they change linearly from the first to the last module, then chooses the 4 needed values manually from a fixed set.
Additionally, while observing the behaviour of this algorithm, we noticed that a simple pattern for the values $\tau_k$ works well in most cases and deduced a simple heuristic: use a value $\tau$ for the first $K/2$ modules and double it (i.e. use $2\tau$) for the last $K/2$ modules. A **single hyper-parameter $\tau$** then has to be fixed, and this can be done manually or through any search or cross-validation method.
**5. Evidence for early overfitting (Reviewer N1H7)**
Besides our experiments in Figures 2 and 3, we cite 4 papers [36,5,53,40] on line 33 that make the same observation. The arguments developed in each paper are detailed below in the individual answer to Reviewer N1H7. We therefore considered that this was well established in the literature. We propose to add to the paper the discussion, given in the individual answer to Reviewer N1H7, of the evidence in the literature for this phenomenon.
**6. Further sensitivity to hyper-parameter experiment (Reviewer X5WU)**
See Appendix H and a new experiment in the additional PDF
Pdf: /pdf/4b26bb7a64fb8eeaf8fe503f7f6effcd8bd82552.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Composing Parameter-Efficient Modules with Arithmetic Operation | Accept (poster) | Summary: This paper studies how to combine parameter-efficiently tuned models in the parameter space. The authors define two kinds of arithmetic operations for parameter ensembling: addition and negation. They evaluate their methods on distribution generalization, multi-tasking, detoxifying, and domain transfer. Extensive experiments are conducted to support the claims.
Strengths: + The paper is clear and easy to follow.
+ The paper studies an interesting topic, i.e., parameter ensemble for LLMs.
Weaknesses: - The idea is not novel. Parameter ensemble (weight averaging) has been proposed for a while, and both addition and negation methods are included in previous works [1,2,3,4]. Especially, there is an important missing reference [1] that defines linear interpolation in the parameter space, which I believe is a more general method that encompasses both "addition" and "negation".
- Some explorations have already been conducted before, e.g., distribution generalization [1, 3], multi-tasking and detoxifying [1], and domain transfer [4]. I think the authors should explicitly discuss the difference and novel findings compared with existing literature.
- Although the authors study the effects of many aspects, the analysis is partial and doesn't reach saturation, so it is hard to make strong deductions from it. I think the authors should delve deeper into the superficial findings to better understand why parameter averaging brings these benefits.
Please correct me if I misunderstood anything.
[1] Rofin, Mark, Nikita Balagansky, and Daniil Gavrilov. "Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned Language Models." arXiv preprint arXiv:2211.12092 (2022).
[2] Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." International Conference on Machine Learning. PMLR, 2022.
[3] Qin, Yujia, et al. "Exploring Mode Connectivity for Pre-trained Language Models." arXiv preprint arXiv:2210.14102 (2022).
[4] Ilharco, Gabriel, et al. "Editing Models with Task Arithmetic." arXiv preprint arXiv:2212.04089 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and comments. Nevertheless, we kindly disagree with the reviewer, especially on the “novelty” assessment. It seems that there might be some misconceptions regarding our work's motivations and its differentiation from prior studies. We aim to elucidate our motivations and underscore the novelty of our contributions from three perspectives:
1. *Motivation*: The primary drive behind this research is the composition of parameter-efficient modules as opposed to merely ensembling every parameter from fully fine-tuned models. We posit that the composition of PEMs stands apart as a research problem from traditional full model ensembling, and we view PEM composition as a way towards modular deep learning, which is more flexible and presents a realm of intriguing future prospects, distinct from standard full model ensembling. Our work aims to make a simple first attempt towards this goal. Therefore, our paper makes unique contributions to the direction of PEM composition and modular deep learning, while most of the papers mentioned by the reviewer (most of them are already cited in the submission, and we will cite [4] for completeness) focus on ensembling fully fine-tuned models.
2. *Method*: We are the *first* to design both addition and negation operators for PEM composition. While our addition operator takes the form of a simple average, our negation operators, especially the one for $\rm{(IA)^3}$, are not trivial to design and diverge from the one in [1,4] that cater to full parameter ensembling and just naively negates all the parameters in a linear interpolation scheme – our negation operators for both LoRA and $\rm{(IA)^3}$ (Eq 5 and Eq 6) do not trivially follow a linear interpolation formulation as in previous work. For example, in linear interpolation of [4], their formulation only covers negating ALL the parameters of the module and assigning a weight to it, and both Eq 5 and Eq 6 in our submission do not follow this formulation. In our theoretical explanation in the paper and empirical ablation results in the response to Q2 of Reviewer EMdb, we show that our negation operator design is necessary and naively negating the parameters with a linear interpolation formulation as in previous study is ineffective in PEM composition.
3. *Empirical Results*:
* The reviewer mentioned that our empirical explorations have been conducted before – we respectfully disagree. A few previous works [2,3] only conducted simple parameter addition of PEM modules in limited settings; other papers mentioned by the reviewer, such as [1], involve both addition and negation operators but are confined to full model ensembling. However, their conclusions and observations do not seamlessly transfer to the PEM context, given that we have shown that their negation operation does not work effectively for composing PEMs. This solidifies our position as the *first* to introduce and scrutinize negation operators tailored for PEM composition and to assess the method in a range of scenarios.
* Particularly, we are the *first* to study PEM composition in instruction tuning based on modern LLMs such as LLaMA. This not only underscores our method's applicability but also its relevance to the challenges faced in today's LLM landscape.
* In previous work on ensembling parameters of full fine-tuned models, it is commonly believed that the parameters to be averaged need to stem from the same initialization (pretrained model checkpoint). Our paper explores the setting where this assumption does not hold in the LoRA experiment, thus we think that our empirical study on the initialization effect also provides new insights.
[1] Ilharco, Gabriel, et al. "Editing models with task arithmetic." ICLR. 2022.
[2] Qin, Yujia, et al. "Exploring Mode Connectivity for Pre-trained Language Models." EMNLP. 2022.
[3] Ponti, Edoardo Maria, et al. "Combining Parameter-efficient Modules for Task-level Generalisation." EACL. 2023.
[4] Rofin, Mark, Nikita Balagansky, and Daniil Gavrilov. "Linear interpolation in parameter space is good enough for fine-tuned language models." arXiv preprint arXiv (2022). | Summary: The authors propose composing parameter-efficient modules (primarily LoRA and IA3 modules) directly in weight space, and show that simple linear combinations are able to achieve good performance in distributional generalization, multitasking, unlearning/de-toxifying, and domain transfer. The authors also show their approach works for both T5 and LLaMA models. Analysis of the approach suggests that their lora compositions are not as sensitive to different initializations as adapters.
Strengths: Straight-forward method, and well-tested across a number of diverse settings (different modules, different tasks, different underlying models).
The paper is clear and well-structured, and the initialization experiments in the analysis section are interesting. I think the exploring merging of parameter-efficient modules is important and interesting work.
Weaknesses: No glaring weaknesses. I think how lambda is chosen for tasks needs to be explained, unless I missed it - comparing Table 2 and Figure 2, it seems like lambda for Table 2 is > .5, so explaining how you chose the number is important. I see for the unlearning experiment you did tune lambda somewhat (Appendix A). If you have to tune lambda in all cases, this makes this approach more difficult to apply in practice due to the extra tuning required.
The domain transfer experiment in 4.5 is unclear to me - do you also train a module on the classification datasets to get theta^{amazon_cls/yelp_cls}? Looking at Figure 3, it seems lambda = 1.0 does best, which implies just using a trained classifier module directly with no composition, right?
Results are mostly over classification tasks - it would be interesting to see e.g. the multitasking-style experiments applied to tasks more challenging than GLUE, although I think this isn’t necessary for this work. However, the GPT-4 helpfulness scores seem promising.
Using GPT-4 for ratings is still a relatively new practice, and I think it is not yet well enough established that you can rely on it without having at least some testing of how well it correlates with human ratings - if you are not careful about things like positional bias this can result in results that may not correlate well with human judgement. At the very least, it would be useful to (a) check agreement with GPT-4 over a small subset of annotations, (b) compute the variation of GPT-4 scores over a set of different prompts to ensure no glaring biases.
Overall, I lean accept for this work, and think these weaknesses are mostly around clarifications, rather than weaknesses in the method. If the authors answer my questions well and make these clarifications, I am happy to accept.
Edit: I have raised my score after carefully reading the author's response and the other reviews. My issues have mostly been cleared up by rebuttal. Please see my response to the author rebuttal for more details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How exactly is lambda chosen for each setting? What is your tuning / choice procedure?
- Why does it seem like lambda = 1.0 does best in Figure 3? Does this mean task composition is not useful in this case?
- In Figure 2, why do the performances on RTE/MNLI at lambda = 0/1 not match the values in Table 2?
- How well do GPT-4’s helpfulness scores correlate with human judgement?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss their limitations to a reasonable extent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: How $\lambda$ is chosen for each setting
As mentioned in Line 137 of the submission, $\lambda$ is generally tuned on the validation set. Specifically, for classification tasks, we vary $\lambda$ from 0 to 1 in increments of 0.02 on a validation set with a limited amount of data. Similar $\lambda$ tuning when combining parameters is commonly practiced in [1], [2] and [3] as well. For negation settings in Section 4.4, we follow [4] and vary $\lambda$ from 1 to 0 with a step size 0.1 to find the maximum $\lambda$ to satisfy the requirement that the difference of PPL scores between the detoxified model and GPT-2 baseline should not exceed 0.5 (as noted in Line 517 of the appendix) – we note that a detoxified validation dataset is not required here since PPL is computed on WikiText-103. For detoxifying LLaMA, $\lambda$ can often be determined based on one or two outputs – we initially set $\lambda$ to 1 and generate sentences to evaluate their normality and linguistic proficiency manually. If the output is not satisfactory, we decrease the $\lambda$ value by 0.1 for the next iteration. Therefore, selecting the $\lambda$ value incurs minimal cost.
We agree with the reviewer that the necessity of tuning $\lambda$ is certainly a limitation of our method when applied in practice, as we already acknowledged at the end of the submission. Further exploration on automatically reweighting parameters without tuning is an important research direction that we leave as future work.
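For concreteness, the grid search described above can be sketched as follows. This is our illustration, not code from the paper; `evaluate` is a placeholder callback that scores a candidate $\lambda$ on the validation set (higher is better):

```python
def tune_lambda(evaluate, step=0.02):
    """Sweep lambda from 0 to 1 in increments of `step` (0.02 in the
    rebuttal) and keep the value with the best validation score."""
    best_lam, best_score = 0.0, float("-inf")
    lam = 0.0
    while lam <= 1.0 + 1e-9:
        score = evaluate(lam)
        if score > best_score:
            best_lam, best_score = lam, score
        lam = round(lam + step, 10)  # avoid float drift in the grid
    return best_lam, best_score

# Toy example: a concave validation score peaking near lambda = 0.6
lam, score = tune_lambda(lambda l: -(l - 0.6) ** 2)
print(lam)  # -> 0.6
```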
### Q2: Do we train a module on the classification datasets?
To clarify, in the domain transfer experiment we did not train a module on the target classification dataset. The procedure is as follows: if we require θ_{yelp_cls} but lack Yelp classification data, and we have existing θ_{amazon_cls}, θ_{yelp_lm}, and θ_{amazon_lm}, we can obtain θ_{yelp_cls} by performing addition and negation operations among these three modules with the equation in Line 232.
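As a hypothetical illustration of this kind of module arithmetic (treating each module's parameters as a flat vector, with placeholder names; the actual equation in Line 232 of the paper may weight the terms differently):

```python
def transfer_module(theta_src_cls, theta_tgt_lm, theta_src_lm, lam):
    """Sketch of domain transfer by module arithmetic: start from the
    source-domain classifier and add a down-weighted difference of the
    target- and source-domain language-modeling modules.
    With lam = 1 this reduces to the source classifier alone, matching
    the flattening of the curves near lambda = 1 in Figure 3."""
    return [
        lam * c + (1.0 - lam) * (c + t - s)
        for c, t, s in zip(theta_src_cls, theta_tgt_lm, theta_src_lm)
    ]

# lam = 1 recovers the source classifier unchanged
print(transfer_module([1.0, 2.0], [0.5, 0.5], [0.2, 0.1], 1.0))  # -> [1.0, 2.0]
```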
### Q3: $\lambda=1$ does the best in Figure 3?
Actually, $\lambda=1$ is not the best value in Figure 3 – as $\lambda$ approaches 1, the curve becomes flat, making it difficult to distinguish which point is optimal. The optimal $\lambda$ values in Figure 3 and used in our experiment are as follows: in T5-base, 0.86 for Yelp and 0.98 for Amazon; in T5-small, 0.88 for Yelp and 0.96 for Amazon. As shown in the equation of Line 232, the classification module makes a more significant contribution, while the subtraction of the language modeling modules remains crucial but less prominent in the domain transfer setting.
### Q4: In Figure 2, why do the performances on RTE/MNLI at $\lambda$ = 0/1 not match the values in Table 2?
The numbers in Figure 2 and Table 2 are actually consistent: Figure 2 displays the variations in validation results of the composed LoRA for both MNLI and RTE tasks as $\lambda$ varies. For $\lambda$=0, the composed LoRA exhibits the same behavior as the LoRA trained on RTE, yielding 81.2 in RTE validation (red line in Figure 2, the leftmost point) and 54.7 in MNLI validation (blue line in Figure 2, the leftmost point). Conversely, for $\lambda$=1, the composed LoRA is equivalent to the LoRA trained on MNLI, achieving 75.8 in RTE validation (red line in Figure 2, the rightmost point) and 86.8 in MNLI validation (blue line in Figure 2, the rightmost point). These numbers match the ones in Table 2.
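The interpolation described above amounts to an element-wise weighted average of the two modules' parameters. A minimal sketch (our illustration with placeholder names; in the paper the operation is applied to the LoRA parameters themselves):

```python
def merge_modules(theta_rte, theta_mnli, lam):
    """Element-wise interpolation of two modules' parameter vectors:
    lam = 0 recovers the RTE module, lam = 1 the MNLI module."""
    return [(1.0 - lam) * a + lam * b for a, b in zip(theta_rte, theta_mnli)]

print(merge_modules([1.0, 0.0], [0.0, 1.0], 0.5))  # -> [0.5, 0.5]
```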
### Q5: How well do GPT-4’s helpfulness scores correlate with human judgement?
Thank you for pointing out the potential bias issue of GPT-4 evaluation! To mitigate the concerns on GPT-4 scoring and supplement more convincing results on the helpfulness evaluation, we conduct pairwise human evaluation to compare the helpfulness of Alpaca-LoRA and our composed model. In our annotation scheme, we present the annotators the query and the responses from two models anonymously in a random order, and ask for a vote on the winner or tie.
We begin our human evaluation by first assessing the inter-annotator agreement rate on a randomly sampled set of 50 examples out of 200 pairs of responses in the experiment. Three paper authors perform pairwise votes (model name is anonymized) and then we compute the agreement rate using tie-discounted accuracy following [5]. To compute the agreement rate between human and GPT-4, we convert our original GPT-4 scoring in the submission to pairwise votes similarly to [6]. As a result, the inter-annotator agreement rate of human-human is **78%**, human-GPT4 is **82%**, which are considered high and acceptable as observed in [5] and [6] as well.
Subsequently, we acquired the results of the human evaluation for all 200 pairs of responses. The win rates of the detoxified merge module were 36% for toxic instructions and 27% for normal instructions with a 40% and 42% tie rate, indicating that the negation operation did not negatively impact the model's performance much in terms of helpfulness. These results are also consistent with our original GPT-4 scoring in the submission. We have shown the full results table including human eval in Response to Q2 of Reviewer wusG. We will add the human eval experiment to the next revision of the paper.
[1] Matena, Michael S., and Colin A. Raffel. "Merging models with fisher-weighted averaging." NeurIPS. 2022.
[2] Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." ICML. 2022.
[3] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." CVPR. 2022.
[4] Ilharco, Gabriel, et al. "Editing models with task arithmetic." ICLR. 2022.
[5] Zhou, Chunting, et al. "Lima: Less is more for alignment." arXiv preprint arXiv (2023).
[6] Zheng, Lianmin, et al. "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena." arXiv preprint arXiv (2023).
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Hi, thanks for the detailed response! I realise now I found table 2 confusing due to the shared column/row names (and 'merge' being somewhat similar to 'avg' semantically) - it would be good to label the second column (detailing the trained module rte/mnli/merge) as something like 'module used' or 'module trained'. Your clarification clears up my confusion with table 2/figure 2, thanks. Similarly, thank you for the clarification with figure 3. It might be worth discussing the curve flattening in the paper text, as it's a useful insight that the subtraction of language modelling modules has a smaller effect in the domain transfer setting.
I'm satisfied that the paper is novel and while the method is somewhat incremental, I think the experiments are solid, interesting, and useful for future work looking into module merging. So long as the authors do include the extra details and experiments given in rebuttals in the final version, I am happy to raise my score to accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for your endorsement! In the next revision, we will make sure to include the details and experiments given in the rebuttal, and clarify table 2 as suggested. | Summary: This paper proposes an efficient way to adapt pre-trained language models using parameter-efficient fine-tuning (PEFT). Instead of fully fine-tuning these models, the authors develop lightweight modules for each dataset, resulting in compact modules with varied skills. These modules are combined using linear arithmetic operations in the weight space, providing flexible module composition without needing extra training. This composition technique is applied to achieve distribution generalization, multi-tasking, detoxification, and domain transfer. The authors further extend this method to detoxify Alpaca-LoRA, a large language model. Empirical results suggest that this approach can create more effective modules that perform better than existing ones.
Strengths: - the paper is scientifically sound and easy to read
- the idea of adapting pre-trained language models is interesting
Weaknesses: - Without deviations and statistical tests, it is challenging to ascertain whether the model surpasses LoRA, given the close performance results.
- Utilizing GPT-4 for model evaluation could potentially introduce bias (e.g., see: https://arxiv.org/abs/2305.17493), as GPT-4 is inherently biased.
- The work appears to be an iteration of the paper from Pfeiffer et al., 2023, with restricted novelty.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: how would you emphasize the fundamental novelty of the paper comparing it with the one from Pfeiffer et al.?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors have sufficiently acknowledged the potential limitations of their work. They have considered the inherent biases or safety concerns that may be present in the Parameter-Efficient Modules (PEMs) they used. They've also touched upon the black-box nature of neural networks that might inadvertently introduce toxicity in certain scenarios.
They identified two main limitations: the restriction to identical PEM architectures and similar module initialization in most experiments, and the necessity of tuning the weight hyperparameter λ. They have also provided a direction for future work to address these limitations, which includes exploring different PEM architectures, varied module initialization, and automated computation of the weight hyperparameter.
Regarding potential negative societal impacts, the authors do not explicitly discuss this. However, they do allude to safety and bias issues inherent in the PEMs they utilize, which indirectly covers potential societal concerns. It would be beneficial for the authors to further discuss potential negative societal impacts explicitly in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Deviations and statistical tests for close results
Thanks for the advice! In our submission, we only conducted statistical tests for the domain transfer experiment in Table 4. We understand that some results in Table 1 are close and statistical tests are necessary there as well. We plan to set different random seeds and repeat the experiment multiple times to calculate the standard deviation. Due to the large number of experiments that need to be carried out for Table 1 and the limited time, we will include deviations for Table 1 in the next revision.
### Q2: Potential bias from using GPT-4 as the judge
Thanks for the pointer, and we agree that using GPT-4 as a judge may introduce biases. To mitigate concerns on this issue, we further ran a pairwise human evaluation comparing the helpfulness of *Alpaca-LoRA* and *Merge* in Table 5 of the detoxifying experiment.
Specifically, we conducted a manual evaluation of a total of 200 pairs of responses from our experiment in Section 4.6, covering responses to both toxic and non-toxic instructions generated by the original Alpaca-LoRA and the detoxified merge module. The human evaluation follows the design of LIMA [1]: we presented the annotators with two responses in random order and asked them to choose from three options: 'Model A wins', 'Model B wins', or 'Tie'. Initially, three evaluators, who are the authors themselves, assessed 50 of the pairs to calculate their agreement rate using the tie-discounted accuracy following [1], which was found to be *78%*. A close-to-80% agreement rate is considered high and acceptable among human annotators, as practiced in LIMA, Chatbot Arena [2] and MT-bench [2]. After ensuring the agreement rate is reasonable, the authors annotated the remaining 150 pairs.
The results of the manual evaluation are shown in the following table.
The win rates of the detoxified merge module are 36% on toxic instructions and 27% on normal ones, with tie rates of 40% and 42% respectively, which aligns with the original GPT-4 scoring in our submission. The results imply that the negation operation did not significantly sacrifice the model's helpfulness.
| Method | Toxicity score $\downarrow$ | | Toxic generation (\%) $\downarrow$ | | Helpfulness score (GPT-4) $\uparrow$ | | Helpfulness win/tie/lose rate (Human, \%) | |
|---|---|---|---|---|---|---|---|---|
| | toxic | normal | toxic | normal | toxic | normal | toxic | normal |
| Alpaca-LoRA | 0.321 | 0.008 | 20 | 0 | 6.85 | 7.87 | 24/40/36 | 31/42/27 |
| Merge | 0.158 | 0.008 | 6 | 0 | 7.13 | 7.63 | 36/40/24 | 27/42/31 |
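For clarity, the tie-discounted agreement used above can be sketched as follows (a minimal illustration of the LIMA-style metric; the function and label names are our own, not from the paper):

```python
def tie_discounted_accuracy(ann_a, ann_b):
    """Agreement rate as in LIMA: full credit when the two annotators
    agree, half credit when exactly one of them labels a tie, zero
    otherwise."""
    score = 0.0
    for a, b in zip(ann_a, ann_b):
        if a == b:
            score += 1.0
        elif "tie" in (a, b):
            score += 0.5
    return score / len(ann_a)
```

On four pairs where the annotators agree twice, split once on a tie, and disagree once, this yields (1 + 0.5 + 1 + 0) / 4 = 0.625.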
### Q3: Novelty of the paper compared to Pfeiffer et al.
The Pfeiffer et al. (2023) paper [3] mentioned by the reviewer is a comprehensive literature review of modular deep learning. Our study addresses the problem listed under the 'Merging Modular Models' subsection of its Section 9.1 (Future Work). They neither explored nor conducted experiments on modular composition methods, so our work is substantially different from theirs.
A more related work to ours from Pfeiffer et al. is AdapterFusion [4]. We highlight the following key novelty of our paper compared to AdapterFusion:
1. The methodologies are essentially different – while we perform composition on the weight space and merge multiple PEMs into one PEM, AdapterFusion operates on top of outputs of PEMs and does not merge the parameters. Also, our method is based on the composition of addition/negation operators that we first proposed in this paper but did not exist in AdapterFusion.
2. AdapterFusion needs additional training on extra training data while our method is training-free.
3. AdapterFusion was proposed for multi-task settings and only focused on “adding” abilities of different modules. However, our paper designs both addition and negation operators and explores more diverse and flexible settings where AdapterFusion could not be trivially applied, such as the detoxifying and domain transfer settings.
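As an illustration of the methodological difference in point 1, weight-space composition can be sketched as below (a hypothetical minimal example, not the exact implementation; modules are represented as flat parameter dicts):

```python
def add_pems(theta_a, theta_b, lam=0.5):
    # Weight-space addition of two parameter-efficient modules: a
    # lambda-weighted combination of matching parameters, producing a
    # single merged module with no extra training or training data.
    return {name: lam * theta_a[name] + (1 - lam) * theta_b[name]
            for name in theta_a}
```

AdapterFusion, by contrast, learns an attention mechanism over the *outputs* of the modules and never merges their parameters.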
[1] Zhou, Chunting, et al. "LIMA: Less Is More for Alignment." arXiv preprint, 2023.
[2] Zheng, Lianmin, et al. "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena." arXiv preprint, 2023.
[3] Pfeiffer, Jonas, et al. "Modular Deep Learning." arXiv preprint, 2023.
[4] Pfeiffer, Jonas, et al. "AdapterFusion: Non-Destructive Task Composition for Transfer Learning." EACL. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I've changed the rating to "Accept" | Summary: This paper proposes an approach to compose parameter-efficient finetuning modules without requiring additional training. The modules can be added to combine capabilities, or negated to remove some abilities from the model. The paper shows how different combinations of modules may be used in multiple scenarios such as out-of-distribution generalization, multi-task learning, detoxification and domain transfer. The authors also show that their approach may detoxify an instruction-tuned language model.
Strengths: While there are existing approaches to combine adaptors or parameter-efficient finetuning modules, the introduction of the negation operator allows for more complex operations.
The approach is evaluated in multiple scenarios (distribution generalization, multi-tasking, unlearning, domain transfer) and for two different types of parameter-efficient finetuning modules. It is generally reasonably effective.
The paper is fairly easy to follow. The experiments are mostly ordered in increasing order of complexity.
Being able to combine parameter-efficient finetuning modules without additional training allows for a cheap and flexible mechanism to adapt large language models.
Weaknesses: The main weakness of the paper (in my opinion) is the lack of comparison to existing approaches (e.g. Pfeiffer et al. 2021, Wang et al. 2022 (already cited)), especially for tasks where only the addition operator is needed. While the proposed approach is simpler than those requiring additional training, it is unclear whether performance is worse, comparable or superior.
For multi-task experiments, there is a noticeable drop in performance for RTE. While this may not be surprising, this is still somewhat concerning, especially given the limited comparison to other approaches.
[Minor] Given the weight $\lambda$, the addition operator is actually a weighted average operator.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: How does the proposed approach compare to existing work combining PEFT modules (even if some of them may not use LoRA or (IA)$^3$ directly)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Comparison to other PEM combining work
This is a good point. There have indeed been some works that combine multiple PEMs in the past, such as Pfeiffer et al. 2021 [1] and Wang et al. 2022 [2] mentioned by the reviewer. However, these approaches are not comparable to ours because (1) they require additional training and access to training data, and (2) they were only demonstrated on “addition” settings to combine abilities and could not be trivially applied to our settings that involve negation operations, such as detoxification. This is why we did not include them in our experiments. Even so, we agree with the reviewer that it may be helpful to include their results as well to understand the potential performance gap. Therefore, we run AdapterFusion [1] in our multi-task setting and show the results below.
| Method | | RTE | MNLI | Avg. |
|---|---|---|---|---|
| LoRA | AdapterFusion | 84.8 | 86.2 | 85.5 |
| | RTE | 81.2 | 54.7 | 68.0 |
| | MNLI | 75.8 | 86.8 | 81.3 |
| | Merge | 78.7 | 86.3 | 82.5 |
We utilized the same LoRA modules employed in our PEM composition as the base PEMs in AdapterFusion. We then trained AdapterFusion on the combined RTE and MNLI training datasets in a standard multi-task training setting as described in [1], with the training hyperparameters following [1]. As shown in the table, the AdapterFusion method yielded strong results, with an RTE accuracy of 84.8 and an MNLI accuracy of 86.2. AdapterFusion outperforms our method by 3.0 points in terms of the average performance on MNLI and RTE, which is expected due to the extra multi-task training. We would like to emphasize again that previous works like AdapterFusion require training and access to the respective training data, so their results are not directly comparable and can only serve as a reference point. In future revisions, we will add AdapterFusion results to our other settings where only the addition operator is needed (i.e., the distribution generalization and the multi-tasking settings).
### Q2: Noticeable drop in RTE of multi-task experiments
Despite a slight drop in performance on each individual task, we would like to note that the combined model performs well across multiple tasks, which can be viewed as an advantage over a single model that excels in only one task. Moreover, such single-task performance drop is also commonly observed in previous work on model merging [3,4,5]. Therefore, we think that although the performance drop on a single task in multi-task settings is not ideal, it is understandable at this stage. More advanced composition methods to fill this gap are left as future work.
[1] Pfeiffer, Jonas, et al. "AdapterFusion: Non-Destructive Task Composition for Transfer Learning." EACL. 2021.
[2] Wang, Yaqing, et al. "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning." EMNLP. 2022
[3] Jin, Xisen, et al. "Dataless Knowledge Fusion by Merging Weights of Language Models." ICLR. 2023.
[4] Qin, Yujia, et al. "Exploring Mode Connectivity for Pre-trained Language Models." EMNLP. 2022.
[5] Ilharco, Gabriel, et al. "Editing models with task arithmetic." ICLR. 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and for reporting additional results.
For Q1, I agree that these other methods have a distinct advantage by using training data, so the results may not be directly comparable. I would suggest to more clearly demonstrate that "the corresponding training data is often unavailable" (from the intro).
---
Reply to Comment 1.1.1:
Comment: Thanks for the suggestion! We will add clarification on this to the intro in the next revision. Since the original review says that "the main weakness of the paper is the lack of comparison to existing approaches" and we have reached consensus after the rebuttal that the previously mentioned approaches may not be directly comparable to our method, does it change your mind on the original review rating? | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and comments, and we reply to the comments of each reviewer separately in the respective thread. Due to time limitations we could only address major points, but we’ll make sure to reflect all advice in future revisions. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors proposed to perform an arithmetic combination of PEFT Modules. Suggested combinations were evaluated on distribution generalization, multitasking, detoxifying, and domain transfer tasks. Authors showed that combining PEFT Modules produces new modules with desired attributes.
Strengths: - The proposed method is interesting for practitioners
- Experiments are mainly well designed
- The paper is well-written and easy to follow
Weaknesses: - For Table 1, it would also be highly beneficial to include fine-tuning results on the full dataset to understand how merging modules compares to it.
- It would be interesting to see ablations on the design choices of the PEM negation operator. The claim that one could not naively negate the weights of LoRA (L120) seems unsupported by any evidence. An analysis of results with different types of negation would strengthen the paper. Furthermore, the FFT negation in Section 4.4 implies exactly such naive negation of all weights.
- While the authors explored a wide range of different tasks to understand the performance of the proposed approach, I found that the paper needed an in-depth analysis of the results. E.g., for detoxifying (Section 4.4), the only available results are the final toxicity score and PPL of the obtained model, though there are more automatic metrics for text generation, such as the number of distinct n-grams, which add more dimensions to understanding performance. Furthermore, while discussing detoxifying, there are many fascinating things to do with negation. E.g., for FFT, negation with a weight larger than $1$ has been explored (https://arxiv.org/abs/2211.12092).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please, refer to the weaknesses section
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: –
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Fine-tuning results on the full dataset for Table 1
Thanks for your advice! We ran LoRA-tuning on the combination of the two subsets $s_0$ and $s_1$ for the eight GLUE tasks described in Table 1, and show the results in the following table (denoted as "full dataset"). Not surprisingly, we observe that merging modules slightly underperforms fine-tuning on the full dataset, implying that more advanced composition methods are worth further exploration to improve performance. However, we note that full-dataset tuning requires access to both subsets during training and represents a different setting from our merging experiments, so its numbers should be viewed as a reference point only. We will add the full-dataset tuning results to Table 1 for completeness in future revisions.
| Method | | MNLI | RTE | SST-2 | MRPC | QNLI | QQP | CoLA | STS-B |
|---|---|---|---|---|---|---|---|---|---|
| LoRA | full dataset | 76.8 | 76.5 | 92.8 | 88.0 | 84.8 | 80.6 | 0.55 | 0.90 |
| | $s_0$ | 71.4 | 72.2 | 92.2 | 86.3 | 83.1 | 79.0 | 0.50 | 0.88 |
| | $s_1$ | 72.3 | 69.0 | 91.9 | 87.7 | 83.0 | 80.8 | 0.51 | 0.89 |
| | $m$ | 73.5 | 75.8 | 92.2 | 88.0 | 83.3 | 81.1 | 0.52 | 0.89 |
### Q2: Ablation analysis of the PEM negation operator
Thanks for your advice! We conduct ablation experiments on LoRA and $\rm{(IA)^3}$ negation in the setting of Section 4.4. The following table presents the results where we naively negate all weights of the modules (denoted as “weight-negated”). For LoRA, the results after naively negating are identical to the original toxic LoRA because they are theoretically equivalent as we described in Line 120 of the submission. As for $\rm{(IA)^3}$, when naively negating the scaling vector $l$, it causes a catastrophic performance drop. When we vary the value of $\lambda$ in the range of 0.1 to 1, we could not find an appropriate $\lambda$ where the model generates fluent text – in contrast, the model always produces incomprehensible output that results in a high PPL. As a consequence, the toxicity score and metrics related to toxic generation become meaningless with a high PPL. These results demonstrate that negating the weights of PEMs naively is not effective for PEM composition.
| Method | Toxicity score $\downarrow$ | Toxic generation (\%) $\downarrow$ | PPL $\downarrow$ | Distinct n-grams $\uparrow$ |
|---|---|---|---|---|
| GPT-2 | 0.10 | 5.8 | 16.44 | 0.467 |
| toxic-FFT | 0.59 | 50.2 | 16.46 | 0.509 |
| toxic-LoRA | 0.43 | 34.3 | 17.00 | 0.474 |
| toxic-$\rm{(IA)^3}$ | 0.26 | 20.5 | 17.33 | 0.510 |
| negated-FFT $(\lambda=0.5)$ | 0.04 | 2 | 16.94 | 0.490 |
| negated-LoRA $(\lambda=1)$ | 0.01 | 0.1 | 16.67 | 0.467 |
| negated-$\rm{(IA)^3}$ $(\lambda=0.6)$ | 0.03 | 0.9 | 16.92 | 0.488 |
| weight-negated-LoRA $(\lambda=1)$ | 0.43 | 34.3 | 17.00 | 0.474 |
| weight-negated-$\rm{(IA)^3}$ $(\lambda=1)$ | 0.11 | 8.7 | 843.83 | 0.265 |
| weight-negated-$\rm{(IA)^3}$ $(\lambda=0.6)$ | 0 | 0 | 5.91E+04 | 0.021 |
| weight-negated-$\rm{(IA)^3}$ $(\lambda=0.1)$ | 0 | 0 | 3.00E+09 | 0.008 |
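The equivalence noted for LoRA in Q2 can be checked directly: since the LoRA update is the low-rank product $BA$, flipping the sign of every factor is a no-op, while flipping a single factor negates the update. A minimal numerical sketch (illustrative only, not the paper's exact operator):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 2))  # LoRA factors; the update is dW = B @ A
A = rng.normal(size=(2, 8))

# Naively negating every LoRA weight leaves the merged update unchanged,
# so the toxic behaviour is untouched:
assert np.allclose((-B) @ (-A), B @ A)

# Negating only one factor does flip the sign of the update:
assert np.allclose(B @ (-A), -(B @ A))
```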
### Q3: More automatic metrics for text generation in detoxifying
Thanks for the advice! We use PPL and toxicity scores mainly following the setting in [1]. We agree with the reviewer's suggestion and introduce distinct n-grams (n=1) as an additional measure for assessing the diversity of the generated text. We use the Distinct method from PaddleNLP and the table above presents the results. Our analysis reveals that incorporating negation does not diminish the n-grams score compared to the GPT-2 baseline, which implies that the diversity of the generated text remains consistent with the original model.
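For reference, the distinct n-gram measure can be computed along these lines (a common whitespace-tokenized definition; PaddleNLP's Distinct implementation may differ in tokenization details):

```python
def distinct_n(generations, n=1):
    # Ratio of unique n-grams to total n-grams across all generated texts;
    # higher values indicate more diverse generations.
    total, unique = 0, set()
    for text in generations:
        toks = text.split()
        grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / max(total, 1)
```

For example, `distinct_n(["a b a"], n=1)` gives 2 unique unigrams out of 3 total, i.e. 2/3.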
### Q4: There are many fascinating things to do with negation for detoxifying
Thanks for the pointers and we agree! Our paper is mainly to study the general effect of composition in various settings, and detoxifying is just one of them that we take as an example. Thus, we only explored relatively simple variants in all our experiments to keep the paper concise, and we leave further study on more composition variants in different settings as future work.
[1] Ilharco, Gabriel, et al. "Editing models with task arithmetic." ICLR. 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I'm more than satisfied with it.
---
Reply to Comment 1.1.1:
Comment: We are glad that you are satisfied with our response! If our response (partially) addressed your concerns, would you like to consider updating the review rating accordingly? | null | null | null | null | null | null |
Pairwise Causality Guided Transformers for Event Sequences | Accept (poster) | Summary: The paper addresses the limited exploration of incorporating causal knowledge into deep learning models for temporal event sequences. The authors propose a novel approach to enhance the performance of transformer-based models in multivariate event sequences by injecting pairwise qualitative causal knowledge. They establish a framework for causal inference using a transformer architecture and provide theoretical justification for their approach. Experimental results demonstrate that their approach outperforms existing models by effectively leveraging knowledge about causal pairs, leading to improved prediction accuracy
Strengths: - The paper investigates a very relevant and important topic: incorporating causal knowledge into transformers. The proposed approach represents a valid contribution to this domain.
- The experimental section shows that the proposed approach outperforms all the baselines (both neural and non-neural), although in some cases by a small margin.
Weaknesses: - I find that the structure of the write-up could be improved. Sections 3 and 4 are quite technical and while being well-written, I find that the core contribution of the paper is somewhat hidden and not clearly presented. For example, I would have emphasized more the training details, including the proposed loss function
- The impact of the choice of the $\alpha$ value is not discussed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the Weaknesses section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: "Broader Impact and Limitations" section provided at the end of the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to specific questions below.
**Training Details.** We have included more training details in Section 3 of the Appendix (see "Model Implementation and Training"). Due to space limitations, we could not include the details around implementation and training in the main paper. We will try to make some edits in this regard and also make our core contributions clearer by incorporating a summary of our methods in Section 4 in the revised manuscript.
**Trade-off for $\alpha$.** The $\alpha$ value controls a trade-off between the loglikelihood and the incompatibility term in Equations 3 and 4.
As in many penalty-based constrained optimization approaches: when $\alpha \rightarrow \infty$, the optimal solution is pushed to satisfy the pairwise causal knowledge constraint; when $\alpha \rightarrow 0$, the objective is domain-knowledge-free. In our experiments, we select the best $\alpha$ value as the one yielding the largest next-event prediction loglikelihood on the validation subset.
The selected values range from 0.001 - 10 in various datasets, which are included in Table 2 of the Appendix.
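This selection procedure can be sketched as follows (hypothetical names; `val_loglik_of` stands in for the full train-then-validate loop for a given $\alpha$):

```python
def composite_loss(neg_loglik, incompatibility, alpha):
    # Eq. 3/4-style objective: alpha trades off data fit against the
    # pairwise-causal incompatibility penalty.
    return neg_loglik + alpha * incompatibility

def select_alpha(candidates, val_loglik_of):
    # Pick the alpha whose trained model attains the largest next-event
    # prediction loglikelihood on the validation subset.
    return max(candidates, key=val_loglik_of)
```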
---
Rebuttal Comment 1.1:
Comment: Thanks for reading my review. I appreciate your efforts in addressing my concerns. I will keep my score unchanged.
---
Rebuttal 2:
Comment: Dear Reviewer vJUR,
Could you check whether your concerns were properly addressed by the authors' response, or at least acknowledge you read the response?
Thank you,
The AC | Summary: The paper considers incorporating qualitative pairwise-casual relations into transformer based models for capturing temporal event sequences. The unbiassed estimation of the proposed measure, is ensured with theoretical justification. The experiments are conducted on both synthetic data, and several real event sequences, to demonstrate the superior performance of the proposed model in capturing multivariate event observations, compared with related methods.
Strengths: - To me, it seems to be the first attempt to consider incorporating pairwise causal relations into a neural autoregressive model, although some previous work have considered modelling event sequences by inducing logic rule into temporal point processes.
- To me, it seems quite important to provide theoretical justifications regarding the unbiassed estimation of the proposed measure.
Weaknesses: - L19: To me, it should be, "without" -> "with"
Some notations such as time (positions in a sequence), seem to be confusing!
- L125: I would consider $i$ to refer to the indices of the events in a sequence. Hence, I would prefer to use $i$ to denote the $i$-th event, while using $t_i$ to denote the corresponding timestamp. After reading Fig. 1, I am still confused by the notation, as I cannot understand what kind of event sequences you are modelling, e.g., equally-spaced or irregularly-spaced events. I would consider most real event data, including the several real-world datasets used in your experiments, to be irregularly-spaced. Thus, I am curious how you express an event occurring at a continuous-valued time using only $l_i$. If the data is equally-spaced, it is redundant to introduce $N_k$.
- L159: I cannot fully understand how the sample/batch size $B$ affects the calculation of the propensity score in Eq. 1, as there is no variable with subscript $j$.
- L160: It is a bit weird to suddenly start to introduce transformer based event sequence models. A subtitle could be added.
- For Sec.5, it would be better to use past verbs.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: I have asked some key questions that significantly affect my fully digesting this paper, in the <Weaknesses>.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: To me, causal inference and causal discovery using temporal event sequences, is a really big and complicated topic. I would see more thorough discussion and justification, regarding more complicated scenarios, although the authors provide some suggestions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and address the key questions below. Importantly, we wish to clarify some misunderstandings that will hopefully clarify matters.
**Clarification about "Without" or "With" in L19.** It should indeed be "without" , as we have written. Our approach focuses on event sequences "without" meaningful timestamps. There are many applications, for instance, many well-known natural language processing (NLP) approaches for extracting events and event sequences from textual corpora (such as ref 34 in our paper). It is easier to extract sequences and often impossible to obtain timestamps for events from text, since they are typically not mentioned in the source corpora. For some of the real-world datasets in the experiments, we assume that the timestamps are either not provided or too noisy to be useful.
**Equally-spaced Events vs. Irregularly-spaced Events in L125.** We clarify the confusion here: we model sequences where only the position of each event in the sequence is known, not its timestamp. Such event sequences can be considered univariate categorical time series data. Our approach thus does not require modeling events at continuous-valued times.
The introduction of $N_k$ is unrelated to whether events are equally spaced; rather, it denotes the number of events in the $k$-th sequence, which is needed because event sequences have varying lengths.
**Batch Size and Eq.1 in L159.**
Subscript $j$ indexes sequences in the sampled batch $B$; $|B|$, the cardinality of $B$, is the upper limit of the sum in Eq. 1.
**Introduction of Transformer in L160.** In the paragraph just above L160 on the introduction of transformers, we have introduced neural autoregressive models. Transformer is a prominent neural autoregressive model. We can add a subtitle to make the transition more smooth in the revised version.
**Tense in Section 5.**
We will modify the content in Section 5 to past tense in the revised version.
**Limitations and Discussion.**
We note that we discussed limitations in the main paper; we also included more discussion around causal inference in Appendix (see "Causal Inference Assumptions").
We hope we have clarified several aspects and resolved misunderstandings such as the nature of the data and scope of work. We request the reviewer to consider increasing the score if he/she feels the clarifications have helped re-evaluate the work suitably.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Most of my concerns have been addressed. I decide to increase my score to bordeline accept.
---
Rebuttal 2:
Comment: Dear Reviewer vJUR,
The authors have provided a response to your review comments. Could you see whether your concerns were properly addressed, or at least acknowledge you read it?
Many thanks,
The AC | Summary: The paper proposes a method to incorporate additional background information into transformers that is pairwise causal i.e. event Z affects event Y. They do this for temporal event sequence data where the data is non-stationary and this casual relationship can be confounded by additional events.
Strengths: The paper is well written and seems to be sufficiently novel although I do not have the broadest knowledge of this area.
Weaknesses: No notable weaknesses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors use a upper triangular mask in the self attention block to ensure that only future events attend to the past and past event cannot attend to the future. How does this compare to just using RNN style architectures? Self attention has the benefit of allowing everything to attend to everything and is useful in scenarios where there are global relationships or the relationships are not well understood by humans and cannot be “built in” to the architecture a priori e.g. words in a sentence. However, for sequential events or other “time series” like data, local relationships should dominate and there would be some understanding of the relationships between events?
In Theorem 4, shouldn't the true effect include the term $-\mathbb{E}(P(l_{i+1}=y \mid l_i \neq z))$? This is what's stated in Line 184, and the subsequent proof sketch also suggests as much.
What is p*(l_i) in equations 3 and 4? How is the injected qualitative knowledge any different from a regularization term?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to specific questions below.
**RNN vs. Transformer.**
Our setting is general to neural network architectures for (event) sequences. To this end, RNN-style architectures fit our framework. We choose transformer-based models for practical reasons; for instance, in epidemiology and healthcare, event interactions commonly involve long-range dependencies (e.g., chronic disease). The popular transformer-based models capture long-range dependencies, allow efficient parallelization during training, and give rise to a few interesting theoretical insights, e.g., Theorem 3.
We will include some ablation experiments with RNNs or variants in the revised manuscript to explore the locality effect.
**Theorem 4, True Effect.**
The true effect should be $\mathbb{E}(P(l_{i+1}=y|l_i = z))-\mathbb{E}(P(l_{i+1}=y|l_i \neq z))$. Thank you for spotting the typo.
**$p^{*}(l_{i})$ in Equations 3 and 4.**
$p^*(l_i)$ is the probability of observing the $i$-th event with label $l_i$ conditioned on history $h_{i-1}$, as modeled by the transformer model/architecture. Specifically, Equation (2) provides a sequence-level overview of the matrix computation involved in transformers for a generic sequence {$l_1$,$l_2$,...,$l_{n-1}$,$l_n$}. Thus for the $i$-th event, we can find the associated $i$-th column in the embedding matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$. $\mathbf{X}$ is composed of position embeddings and type embeddings. The position embedding matrix $\mathbf{P} \in \mathbb{R}^{ d \times n} $ (ref 51 in our paper) is computed as:
$$
\mathbf{P}_{j,i} =
\begin{cases}
\cos\left(i/10000^{\frac{j-1}{d}}\right) & \text{if $j$ is odd} \\\\
\sin\left(i/10000^{\frac{j}{d}}\right) & \text{if $j$ is even}
\end{cases}
$$
where $i$ is the position and $d$ is the dimension of the encoding. Type embeddings are obtained through the product of a trainable embedding matrix $\mathbf{U} \in \mathbb{R}^{d \times |\mathcal{L}|}$ and the one-hot encoded vector $\mathbf{e_i} \in \\{0,1\\}^{|\mathcal{L}|}$ for $l_i$, i.e. $\mathbf{X} = (\mathbf{U}\mathbf{E}+\mathbf{P})^T$ where $\mathbf{E} =[\mathbf{e_1},\mathbf{e_2},...,\mathbf{e_n}]$. Let the output from the $B$-block transformer according to Equation 2 be $\mathbf{H} \in \mathbb{R}^{d \times n}$. Then $p^*(l_{i} = m)$, where $m \in \mathcal{L}$, is modeled by a multinomial distribution and can be expressed as follows:
$$
p^{*}(l_{i} = m ) =
\frac{exp(\mathbf{W}(m,:) ^{\intercal} \mathbf{H} (:,i) +\mathbf{b}(m) )}{\sum_m exp(\mathbf{W}(m,:) ^{\intercal} \mathbf{H}(:,i) +\mathbf{b}(m))}
$$
where $\mathbf{W}(m,:)$ is the $m$-th row of the corresponding trainable weight matrix, $\mathbf{H}(:,i)$ is the $i$-th column, and $\mathbf{b}(m)$ is the $m$-th entry of the corresponding bias term.
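The embedding and output-head computation described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the description in the rebuttal, not the paper's actual code; the dimensions, the identity stand-in for the transformer blocks, and all variable names are assumptions.

```python
import numpy as np

def positional_embedding(d, n):
    # P[j, i] (1-indexed): cos(i / 10000^((j-1)/d)) if j is odd,
    #                      sin(i / 10000^(j/d))     if j is even
    P = np.zeros((d, n))
    for i in range(1, n + 1):
        for j in range(1, d + 1):
            if j % 2 == 1:
                P[j - 1, i - 1] = np.cos(i / 10000 ** ((j - 1) / d))
            else:
                P[j - 1, i - 1] = np.sin(i / 10000 ** (j / d))
    return P

rng = np.random.default_rng(0)
d, n, L = 8, 5, 4                          # embed dim, sequence length, |labels|
U = rng.normal(size=(d, L))                # trainable type-embedding matrix
E = np.eye(L)[:, rng.integers(L, size=n)]  # one-hot label matrix, shape (L, n)

# Stand-in for the B-block transformer output H (d x n); a real model would
# apply the attention blocks of Equation (2) to these embeddings.
H = U @ E + positional_embedding(d, n)

# Output head: multinomial distribution over labels for the i-th event
W = rng.normal(size=(L, d))
b = rng.normal(size=L)
logits = W @ H[:, 0] + b                   # scores for event i = 1
p = np.exp(logits) / np.exp(logits).sum()  # p*(l_1 = m) for each label m
```

The final two lines mirror the softmax expression above: each label score is the inner product of a row of $\mathbf{W}$ with a column of $\mathbf{H}$, plus a bias, normalized over labels.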
**Injected Qualitative Knowledge vs. Regularization Term.**
Injected qualitative knowledge can be viewed as a carefully designed regularization term in our setting. This is perhaps most effective when the number of sequences is small, say on the scale of a few tens to a few hundred.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer WQuf,
The authors have provided a response to your review comments. Could you see whether your concerns were properly addressed, or at least acknowledge you read it?
Many thanks,
The AC
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. I will keep my score the same as I think it is already reflective of the quality of this work. | Summary: This paper focuses on the multivariate event sequences, where different types of events occur sequentially. The authors present an approach that leverages pairwise qualitative causal knowledge to enhance the performance of transformer-based models in handling multivariate event sequences. Specifically, the authors formulate the occurrence of events as causal inferences and show how to obtain unbiased estimates theoretically. The proposed method is validated in both synthetic and real-world datasets.
Strengths: The authors have conducted extensive experiments, both in synthetic datasets and real-world datasets to validate their approach.
The utilization of the LLM for generating event sequences is an interesting idea.
The related work is well cited and discussed in the paper.
Weaknesses: The proposed theory appears to be a straightforward adaptation based on the existing theory of causal inference and inverse probability weighting. The incompatibility framework introduced in this paper primarily focuses on the design of the loss function, which consists of two terms. The first term aims to maximize the likelihood, while the second term serves as an unbiased estimator. However, the second term has already been introduced in previous works, as acknowledged by the authors. The main theoretical contribution of this paper seems to be formulating the causal inferences in the context of multivariate event sequences, as presented in Definition 1. However, the formulation provided in Definition 1 appears to be a straightforward and expected outcome, lacking significant novelty.
The improvement of the proposed method PC-TES seems marginal compared to TES.
Some of the mathematical expressions in the paper do not make sense. Here are the specific expressions that require attention: Line 165, Equation (2): In the right-hand side of Equation (2), the triangular attention mask $\mathbb{M}$ appears outside the softmax function, resulting in an unnormalized result. In my opinion, it should be $\sigma[(W^i_K X )^T (W^i_Q X) \odot \mathbb{M} ]$; Line 172: Please check the right-hand side of the equation, I think it should be $\sum_{k \leq j} \tilde{X}_{ik} \tilde{X}_{kj}$
The writing in this paper requires significant improvement. Many expressions, including words and mathematical equations, are confusing. Here are the specific points that need attention: Line 140, Definition 1: Definition 1 is quite confusing, and it would be helpful if the authors provide a concrete definition of $h_i$. If I understand correctly, $h_i \triangleq \\{l_1, l_2, \ldots, l_{i-1}, l_i\\}$; Line 152, Definition 2: The authors should introduce the concept of propensity score and provide an intuitive explanation. It may be beneficial to move Assumption 1 before Definition 2 and present a causal graph. If I understand correctly, the causal graph should be: $h_i \rightarrow Z$, $h_i \rightarrow Y$, $Z \rightarrow Y$. The propensity score acts as a mediator between $h_i$ and $Z$.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Line 95: The word "of" is duplicated. One instance should be removed.
The ablation study was only performed on synthetic datasets. Why was it not conducted on real-world datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are discussed in the Section Broader Impact and Limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to specific questions and comments below.
**Novelty of Proposed Approach.**
We argue that our theory is not a straightforward adaptation; rather, it is based on a careful formulation and design of mainstream transformer networks under the framework of causal inference. To arrive at formal results such as the unbiased estimator of Theorem 4, we carefully formulated the problem of causal pair injection in event sequences through precise definitions under this particular setting, and cautiously made some assumptions regarding time-varying confounding.
We are not aware of prior work around applying causal inference in our setting.
Furthermore, we note that the second term in the loss was not introduced in previous work. Prior approaches (such as ref 14 in our paper) only suggest adding a penalty for deviation based on incompatibility for event sequences with timestamps, as modeled by temporal point processes.
**Performance Improvement.** The proposed PC-TES consistently improves over TES in all experiments and in some cases the improvement is statistically significant. For the synthetic datasets in Table 1, this improvement over TES can go as high as 5.3\% relative to the prediction of TES on synth-3. A two sample t-test with unequal variance shows that the two sample means are statistically different from each other (with p-value of 0.016), which implies the significance of the improvement. For real-world datasets, PC-TES consistently improves by up to 4.8\% on Defi dataset, relative to TES. For LLM generated event sequences, such improvement is 3.3\%, when using 5 event pairs in Table 3. All results firmly indicate the superior performance of our approach.
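For concreteness, the two-sample t-test with unequal variances mentioned above (Welch's t-test) can be sketched in plain Python. The per-seed scores below are purely illustrative placeholders, not the paper's actual results, and the helper name `welch_t` is our own.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for unequal-variance samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical per-seed scores for two methods (NOT the paper's numbers)
tes    = [29.1, 29.4, 28.9, 29.2, 29.0]
pc_tes = [30.5, 30.9, 30.4, 30.7, 30.6]
t, df = welch_t(pc_tes, tes)   # a large |t| at these df implies a small p-value
```

The p-value is then obtained from the t-distribution with `df` degrees of freedom (e.g., via `scipy.stats.t.sf`).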
**Mathematical Expressions.**
We make the following clarifications about mathematical expressions:
*Line 165, upper triangular attention mask.* In Line 165, the triangular attention mask $\mathbb{M}$ needs to appear outside the softmax function. The goal is to ensure only future events are allowed to attend to past history and past events are not allowed to gain information from future instances via attention. On the contrary, including the triangular attention mask $\mathbb{M}$ in the softmax operation will result in the computed term not being triangular, due to the "softness" of the softmax. This would violate our definition of propensity in the transformer and thus violate Theorem 2.
*Line 172, masked attention.* Our expression is equivalent to the suggested one, $\sum_{k \leq j} \tilde{X}_{ik} \tilde{X}_{kj}$. Note we only need to sum up to the diagonal entry $(j,j)$ from the masked attention matrix $\tilde{A}$. We will change this to $\sum_{k \leq j} \tilde{X}_{ik} \tilde{X}_{kj}$ to be more mathematically precise.
*Line 152, Definition 2 and Assumption 1.* We present Definition 2 first (and then Assumption 1) because the notion of a propensity score is itself a key quantity of interest in causal inference and the potential outcome framework in particular. In an ideal setting where covariates $h_i$ do not affect the outcome or treatment, we would not need to adjust for them. More commonly however, this is not the case.
Assumption 1 is of practical importance and thus our goal is to tackle this problem using the propensity score as the "mediator" between $h_i$ and $Z$. We will consider moving Assumption 1 before Definition 2 and present the causal graph according to the reviewer: $h_i \rightarrow Z, h_i \rightarrow Y, Z \rightarrow Y$ where $h_i \triangleq$ {$l_1$, $l_2$, …, $l_{i – 1}$, $l_i$}.
**Line 95, repetition of the word "of".**
We thank the reviewer for spotting the typo and will remove the duplicate word.
**Ablation.**
We only considered synthetic experiments for ablation studies due to space limitations and also because they are more easily controlled.
We thank the reviewer for the suggestion and will also include some ablation experiments on real-world datasets in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my review.
Your response has clarified some of my concerns. While I still find the formulation and results rather straightforward, I do believe there is novelty in applying the causal inference method within your setting.
I still have some questions. Please find them below.
Performance Improvement
For the synthetic datasets in Table 1, the performance gains of PC-TES over TES are relatively small, at just 0.6%, 1.9%, 5.3%, and 0.7% respectively. Half of these improvements are less than 1%, which seems marginal. Additionally, the proposed PC-TES appears to have higher variance than TES. For example, TES has -113.95 (3.05) compared to PC-TES with -113.15 (3.66). With the modest gains and higher variance observed, it is difficult to state that PC-TES is significantly better than TES.
Mathematical Expressions
Regarding the upper triangular attention mask on line 165, I am not fully convinced by your response. Specifically, the statement that "including the triangular mask $\mathbb{M}$ in the Softmax operation will result in the computed term not being triangular" appears incorrect. For simplicity, let's denote $(W^i_K X)^T W^i_QX$ as $P$. Since $\mathbb{M}$ is upper triangular, $P \odot \mathbb{M}$ remains upper triangular after element-wise multiplication. Applying the Softmax operation on an upper triangular matrix necessarily maintains the upper triangular structure. Excluding $\mathbb{M}$ from Softmax fails to normalize the result properly. Please let me know if I am misunderstanding your position, but I believe my interpretation of the math is accurate here. I would appreciate further clarification on this issue.
---
Reply to Comment 1.1.1:
Title: Further Clarification to Review's Comments.
Comment: Thank you for your interest and additional questions about the performance improvement of our proposed PC-TES algorithm compared to TES, as well as observations about the softmax operation. We appreciate your feedback and would like to provide further clarifications about the points you raised.
**Performance Improvement for the Synthetic Datasets in Table 1.**
-Magnitude of Performance Gains:
While we acknowledge that the improvements in the synthetic datasets presented in Table 1 may appear modest, it's important to consider the context of the problem and the significance of even minor enhancements. In many real-world applications, even a fractional improvement can lead to substantial practical benefits. Additionally, the presented percentage improvements of 0.6\%, 1.9\%, 5.3\%, and 0.7\% correspond to distinct synthetic datasets with varying levels of complexity and patterns. The improvement of 5.3\% relative to TES on synth-3, for instance, indicates that PC-TES can effectively capture intricate patterns that TES struggles to model accurately. We believe that even these seemingly small gains may have important practical implications.
-Variability and Higher Variance:
We appreciate your observation regarding the variance in the results. The increased variance observed in PC-TES compared to TES is a direct consequence of the improved modeling capacity of PC-TES. By enhancing the model's ability to capture nuanced patterns, PC-TES could potentially lead to more variable predictions. The higher variance does not necessarily undermine the validity of the improvement; rather, it could reflect the algorithm's adaptability to a wider range of scenarios and data patterns.
-Statistical Significance and Marginality:
You rightly mention that half of the improvements are less than 1\%, which might be considered marginal.
We note that the statistically significant improvement of 5.3\% relative to TES on synth-3, as confirmed by the two-sample t-test with unequal variance (p-value of 0.016), demonstrates that the observed gains are not merely coincidental fluctuations.
**Mathematical Expressions Regarding the Upper Triangular Attention Mask on Line 165.**
Let $\mathbf{P} \odot \mathbb{M}$ be an upper triangular matrix $\mathbf{B} \in \mathbb{R}^{n \times n}$.
The softmax operation along a dimension of $\mathbf{B}$ can be expressed (without loss of generality, let this dimension be the column dimension) as follows:
\begin{equation}
\text{Softmax}(\mathbf{B}(:,j))(i) = \frac{\text{exp}(\mathbf{B}(i,j))} {\sum_{k=1}^{n} \text{exp}(\mathbf{B}(k,j))}.
\end{equation}
for **all** $j \in \\{1,2,...,n\\}$. The above equation shows the $i$-th entry of the output vector after applying softmax on the $j$-th column of $\mathbf{B}$. $\mathbf{B}(i,j)$ is the $(i,j)$ entry of $\mathbf{B}$. It is easy to show that the output of the softmax operation on an upper triangular matrix $\mathbf{B}$ is not upper triangular by showing there exists **some** $i \in \\{1,2,...,n\\}$ with $i>j$ such that $\text{Softmax}(\mathbf{B}(:,j))(i) \ne 0$. Take $j=1$ and $i=2$. $\text{Softmax}(\mathbf{B}(:,1))(2) > 0$ holds whether $\mathbf{B}(2,1)$ is 0 or not, since $\text{exp}(\mathbf{B}(2,1)) > 0$ for any $\mathbf{B}(2,1) \in \mathbb{R}$. Hence the output of the softmax operation on $\mathbf{B}$ is not upper triangular.
The following example is randomly generated by PyTorch:

```python
import torch

m = torch.nn.Softmax(dim=1)
triup = torch.triu(torch.randn(3, 3))
output = m(triup)
```

where the upper triangular matrix `triup` is

$$\begin{pmatrix} 0.8651 & 0.5936 & -0.4257 \\\\ 0.0000 & -0.6275 & 1.1915 \\\\ 0.0000 & 0.0000 & -0.9110 \end{pmatrix}$$

and the output from the softmax operation is

$$\begin{pmatrix} 0.4908 & 0.3741 & 0.1350 \\\\ 0.2072 & 0.1106 & 0.6822 \\\\ 0.4163 & 0.4163 & 0.1674 \end{pmatrix},$$

which is not upper triangular.
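The structural point can also be checked by comparing the two masking orders directly. This is an illustrative NumPy sketch using an elementwise mask, as in the discussion above, not the paper's exact Equation (2); `colsoftmax` is our own helper name.

```python
import numpy as np

def colsoftmax(A):
    # Column-wise softmax: each column of the output sums to 1
    E = np.exp(A - A.max(axis=0, keepdims=True))
    return E / E.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
n = 3
P = rng.normal(size=(n, n))        # unnormalized attention scores
M = np.triu(np.ones((n, n)))       # upper triangular mask

after = colsoftmax(P) * M          # mask OUTSIDE softmax: stays upper
                                   # triangular, but columns are no longer
                                   # normalized
inside = colsoftmax(P * M)         # elementwise mask INSIDE softmax: zeroed
                                   # entries contribute exp(0)=1, so the result
                                   # is NOT upper triangular

assert np.allclose(np.tril(after, k=-1), 0)
assert not np.allclose(np.tril(inside, k=-1), 0)
```

The first variant preserves the triangular structure at the cost of normalization, which reflects the trade-off being debated in this thread.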
In conclusion, the upper triangular attention mask $\mathbb{M}$ needs to appear outside the softmax function to guarantee that the output from equation (2) is indeed upper triangular. We hope this is clarifying. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes to use pairwise event causality pairs to improve the performance of transformer-based models, based on the intuition that causal knowledge encodes useful information like “event Z amplifies future occurrences of event Y”. Experiments demonstrate the performance of the proposed method.
Strengths: 1. The intuition of causality helps event sequence prediction makes sense.
2. The experimental evaluation covers many aspects of the methods.
Weaknesses: 1. The statements of some assumptions are not precise enough,
2. Some theoretical aspects of the method are not clear to me.
Technical Quality: 1 poor
Clarity: 4 excellent
Questions for Authors: 1. About Assumption 1. What do you mean by “time confounding”? Is there a definition that is more mathematically precise, like a variable $T$ that affects the probability $p(Z,Y)$?
2. How many “correct” pairs can you obtain, considering the so-called “causal event pairs”? This seems to be important since it is what the whole method is based on.
3. About Thm 4. It seems to be a trivial one since the “bias” of the estimator directly follows from the estimator under IPW. Is there anything missing?
4. Some theoretical analysis of the window $w$ and the sequence length could also be presented so that your method’s theoretical performance is clearer to the readers.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We will work towards further improving the general clarity of the paper.
We address the reviewer's specific questions below.
**Time Confounding and Assumption 1.**
Time confounding in our context is analogous to time-varying confounding in treatment effect estimation in a traditional time series setting. Mathematically, time confounding means that for any instance $i$, the covariates under Definition 1, history $h_i$ = {$l_1$,$l_2$,...,$l_i$} affects both the probability of future occurrence of treatment event $p(l_{i+1}=z|h_i)$ and that of future occurrence of outcome event (at least once) in the next window $w$ given the treatment and covariates, i.e., $ p( I(y)^w_{i+1} |l_{i+1}, h_i)$. For window $w=1$, this outcome is $p(l_{i+2}=y|h_{i+1})$. Denoting the treatment as $Z$ at $i+1$ and outcome as $Y$ at $i+2$, the generation mechanism follows the causal graph which visually captures the time-varying confounding
$h_i \rightarrow Z $, $h_i \rightarrow Y$, $Z \rightarrow Y$.
**Number of "Correct" Pairs.**
The causal pairs are determined by domain experts or through other sources of background knowledge. In our experiments, we incorporated a single pair for the synthetic event sequence datasets, and multiple pairwise information for the LLM-generated event sequence datasets. In practice, our model is capable of incorporating as many pairs as possible; this will naturally depend on the application at hand.
**Thm 4 and Bias.**
In response to the reviewer's comment, we appreciate his/her attention to Theorem 4, which states the unbiasedness of our estimator. We agree that the unbiasedness of the estimator directly follows from the use of inverse probability weighting (IPW) in our model specifically for event sequences.
However, we would like to highlight the significance of Theorem 4 in the context of our paper. While it may seem straightforward that the "bias" of the estimator follows the estimator under IPW, it is crucial to formally establish the unbiasedness property through a well-defined theorem. This theorem serves as a formal proof of the statistical property of our proposed estimator, giving confidence to readers and researchers about its validity and correctness.
Furthermore, Theorem 4 also provides theoretical insights into the working mechanism of our model, demonstrating that incorporating causal event pairs into the modeling of event sequences with transformers, along with the use of IPW, indeed results in an unbiased estimator.
In summary, Theorem 4 may appear intuitive given the use of IPW, but its inclusion in the paper is necessary to provide a formal proof of the unbiasedness property and to reinforce the credibility of our proposed method. We will make sure to clarify the importance of Theorem 4 and its contribution to the paper in the revised version to address the reviewer's concern adequately.
**Theoretical Analysis on Window and Length.**
We discuss the computational aspect of the selection of window $w$ for the outcome in an event sequence of length $n$. The outcome according to Definition 1 is the probability of occurrence of $y$ (at least once) in the next window $w$ given the treatment and covariates. In our paper as well as in the Appendix, we showed empirical results from selecting two appropriate window sizes -- $w=1$ and $w=2$, respectively. While choosing larger windows might be feasible, the selection of smaller sizes $w=1$ and $w=2$ is preferred for computational efficiency. The window selection in this case is relevant to the following probabilistic query: "the probability of not observing any event $Y$ in the next $w$ events" (see ref 50 for further details). Thus the (worst case) computational cost is $\mathcal{O}((|\mathcal{L}|-1)^{n-2})$ for the largest window $w=n-2$ in our setting, according to Definition 1. Our empirical evaluation of the incompatibility loss is a function of $w$; we note that we could potentially marginalize over $w$ to obtain the factual and counterfactual outcomes, and thus the treatment effect and incompatibility loss. This is a promising direction for future research.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fbyg,
The authors have provided a response to your review comments. Could you see whether your concerns were properly addressed, or at least acknowledge you read it?
Many thanks,
The AC
---
Rebuttal Comment 1.2:
Title: Thanks for the response
Comment: Thanks for your time to the response. Some of my concerns like theoretical insights have been cleared, and I would keep the score as it is to express my opinion on this paper. | null | null | null | null | null | null |
Compression with Bayesian Implicit Neural Representations | Accept (spotlight) | Summary: ## Summary
The authors propose an approximate correlation-communication approach to compress Bayesian INRs. Two practical techniques, prior fitting and posterior refinement, are proposed. It achieves R-D performance comparable to the SOTA INR method on image compression. Moreover, audio compression is also considered.
========================================
## Post rebuttal
The authors have addressed my concern wrt to the experiments, in a quite unexpected way. I raise my rating to accept.
Strengths: ## Strength
* The idea of using correlation communication to compress image INRs is novel. The compression of Bayesian INRs is also novel and likely to have impact outside the compression community. Furthermore, it also considers audio compression and achieves reasonable results.
Weaknesses: ## Weakness
* To the best of my knowledge, there is no algorithm that communicates correlation using a finite number of samples. The adaptive rejection sampling of [Harsha 2010, The Communication Complexity of Correlation] and the Poisson Functional Representation of [Li 2018, Strong Functional Representation Lemma and Applications to Coding Theorems] both require an infinite number of samples in the worst case to communicate the exact posterior. As [Theis 2021, Algorithms for the Communication of Samples] has shown in Fig. 2, the expected code length of relative entropy coding [Flamich 2020], even if it converges exponentially fast, still has quite some distance to the true posterior. And the expected code length of [Flamich 2020] is also inferior to that of [Harsha 2010] [Li 2018], which is $D + \log D + \log\log D + \ldots$ [Flamich 2020] does not emphasize this gap, while it is supposed to negatively affect the performance of the proposed approach. And this inaccurate posterior gap also exists for A* coding [Flamich 2022]. Then my question is: how bad is this posterior approximation? Or to say, if we assume that we can use the infinite-sample optimal PFR to achieve the exact posterior, what is the R-D curve like?
* Another weakness is that the advantage of the proposed approach over MSCN [Schwarz 2022, Meta-Learning Sparse Compression Networks] is unclear. From the perspective of R-D performance, the proposed approach is only comparable to MSCN on Kodak and is slightly outperformed by MSCN on CIFAR. However, the encoding time of the proposed approach seems to be quite long due to the progressive posterior refinement. As neither the authors nor the original MSCN paper report the encoding time of MSCN, I cannot evaluate the advantage of the proposed approach over MSCN. Or to say, if the authors can show that the runtime of the proposed approach is significantly faster than MSCN's, this paper is more acceptable. This is especially true as the authors claim that their approach is simpler (L54, 66, 253). They need evidence, such as faster runtime, to support their claim.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The limtation is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and the valuable questions. We respond to their concerns below.
**Concerns regarding channel simulation:** While the reviewer states important questions about the quality of our coding scheme's posterior approximation quality (which we address below), we also note that the reviewer conflates exact and approximate channel simulation. Harsha et al.'s rejection sampler, PFR and A* coding are all exact channel simulation algorithms, whose optimal, practically achievable expected codelengths are at most $\mathbb{I}[\mathcal{D} : w] + \log(\mathbb{I}[\mathcal{D} : w] + 1) + 5$ bits (Theorem 1; Li & El Gamal, 2018), where $\mathbb{I}[\mathcal{D} : w]$ denotes the mutual information between the data $\mathcal{D}$ and the INR weights $w$. Furthermore, given mild constraints that are virtually always satisfied in practice (bounded target-proposal density ratio), all these algorithms terminate with probability 1. However, as we mention in line 96, we use depth-limited A* coding in our work, which is an approximate channel simulation algorithm with finite runtime and the bias of which is formally characterised by Theorem 3.2 of Havasi et al. [23].
> How bad such posterior approximation it is? Or to say, if we assume that we can use infinite sample optimal PFR to achieve exact posterior, what is the R-D curve like?
We thank the reviewer for this question. First, as we explain in lines 174-180, we use our finetuning procedure precisely to mitigate the approximation gap. However, we have also added an extra group of experiments to show the theoretical R-D curve in Figure 2 in the rebuttal document that we will also include in the camera-ready version of our paper. In these new experiments, we still finetune the model parameters progressively. However, instead of A* coding, we directly sample from the posterior distribution of the current block and assume that the sample is transmitted to the decoder at maximum efficiency, using $D_{KL}[Q || P]$ bits. We see that on the CIFAR-10 dataset, the theoretically optimal performance is 1 dB higher than the practical performance, demonstrating that there is still much room for improvement for our coding scheme.
In addition, we conducted another experiment to investigate how the PSNR value changes during the finetuning process, the results of which are shown in the rebuttal document as Figure 3. We compressed some of the parameters using A* coding, directly sampled the rest from the posterior distributions, and used their corresponding KL divergence to estimate the coding cost. Interestingly, the PSNR tends to increase as the finetuning process goes on, however, it tends to drop when the finetuning process is nearing completion. This phenomenon occurs because, at the initial finetuning stage, the finetuning gain is more than the loss from A* coding, as many uncompressed groups can be finetuned to correct the errors of A* coding. But when the finetuning process nears completion, there are fewer uncompressed groups which could compensate for the bad sample of A* coding. Therefore, the general PSNR curve tends to decrease when it approaches the end of finetuning.
In summary, these two figures show that while A* coding's sample results may have a distance to the accurate posterior, our proposed progressive finetuning strategy effectively mitigates most of these discrepancies.
> Another weakness is that the advantage of the proposed approach over MSCN is unclear.
We recently caught an error in the evaluation of the MSCN paper, which we confirmed with the authors in private communication. In particular, the image compression performance they report on the Kodak dataset has some errors. Since they cannot directly apply their method to the high-resolution images due to computational constraints, they split the images into patches and encoded them separately. However, they calculate the PSNR for the images by averaging the PSNR values of their patches, which is incorrect. The correct approach would have been to reassemble the patches first and compute the PSNR on the whole image. Thus, COMBINER actually performs much better than MSCN on the Kodak dataset (more than 1.5 dB gain).
> As the authors claim that their approach is simpler, they need evidence, such as faster runtime, to support their claim.
We thank the reviewer for their suggestion; we elaborate on COMBINER's simplicity and runtime below.
**Simplicity:** Our claim refers to the technical simplicity of our method, which consists only of two parts: (1) train a variational Bayesian implicit neural representation on some data and (2) transmit a sample from the INR's weight posterior to encode the data it represents.
**Runtime:** After improving the practical implementation of entropy coding, we have updated our method's encoding and decoding times in Tables 1 and 2 of the one-page rebuttal document. Both are now acceptable for practical application. Specifically, on the CIFAR-10 dataset, our largest-bitrate model takes around 33 minutes to encode 500 images, i.e., around 3.96 seconds per image. The table below compares COMBINER with previous methods, COIN [10] and COIN++ [11] (coding times as reported in the COIN++ paper).
| | COIN | COIN++ | COMBINER (Ours) |
|-------------------|-------------------|-------------------|-------------------|
| Encoding Time | 2.97 s | 0.095 s | 3.96 s |
| Decoding Time | 0.46 ms | 1.29 ms | 3.90 ms |
_We are happy to address any further questions the reviewer might have. However, should we have addressed the reviewer's concerns adequately, we kindly ask them to consider raising their score._
---
Rebuttal Comment 1.1:
Comment: * First of all, if the exact posterior is used, the rate should be $D_{KL} + \log D_{KL} + \log\log D_{KL} + \dots$ bits. The current curve of Figure 2 in the rebuttal document is then wrong, as its rate is $D_{KL}$ bits, which is only true for asymptotic channel simulation, not one-shot channel simulation as in this paper. I suggest the authors consider this one-shot overhead and show the corrected figure in a later version.
* Second, I am glad to see that the authors address my concern related to MSCN, in a very very unexpected way. Currently I am more confident with accepting this paper, and I will raise my rating to accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer rYFn
Comment: We thank the reviewer for their response to our rebuttal; we address the reviewer's first comment below.
> First of all, if the exact posterior is used, the rate should be $D_{KL} + \log D_{KL} + \log\log D_{KL} + \dots$ bits. The current curve of Figure 2 in the rebuttal document is then wrong, as its rate is $D_{KL}$ bits, which is only true for asymptotic channel simulation, not one-shot channel simulation as in this paper. I suggest the authors consider this one-shot overhead and show the corrected figure in a later version.
We believe there are two points of slight misunderstanding here, which we address below. Please let us know if we did not interpret the reviewer's comment correctly.
**The reported curve in Figure 2:** We believe the reviewer assumes that the dashed line in Figure 2 is supposed to describe the efficiency of an exact channel simulation protocol, for which the codelength would include some additional overhead terms besides the KL divergence, as the reviewer rightly mentions. However, this is not our intention. As described in Section 3.3 of the paper, we use an approximate protocol such that the codelength is always approximately equal to the KL divergence (within each block). Hence, what we compare in Figure 2 is the quality of approximate samples against exact ones at the same coding cost. In other words, the dashed line describes the practically unattainable scenario in which our scheme always yields exact samples, instead of approximate ones, at the same codelength as before.
We also note that if we included the extra log terms in the rate calculation of the idealised sampler, the theoretically ideal curve would look worse, since the extra terms would shift the curve to the right, thereby making COMBINER look better.
**The overhead terms in exact channel simulation:** We believe the reviewer is referring to the limiting case of Li and Vitanyi's universal prefix-free codes to encode the index returned by the sampler, yielding a codelength bound of approximately $D_{KL} + \log D_{KL} + \log\log D_{KL} + \dots$ in the one-shot case. Via Jensen's inequality, this would yield a bound of approximately $I[\mathcal{D} : w] + \log I[\mathcal{D} : w] + \log\log I[\mathcal{D} : w] + \dots$ in the average case, where $I[\mathcal{D} : w]$ denotes the mutual information between the data $\mathcal{D}$ and the INR weights $w$ that encode it. However, using the zeta distribution approach described in Appendices A and B in Li & El Gamal (2018), we can reduce the average-case bound to approximately $I[\mathcal{D} : w] + \log (I[\mathcal{D} : w] + 1) + 5$, which is a better bound!
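For reference, the codelength bounds discussed above can be written out explicitly (this is a restatement of the discussion, not a new result; $|C|$ denotes the codelength of the transmitted index):

```latex
% One-shot bound with Li & Vitanyi's universal prefix-free code on the index:
|C| \;\lesssim\; D_{KL}[Q \| P] + \log D_{KL}[Q \| P] + \log\log D_{KL}[Q \| P] + \dots
% Average case, via Jensen's inequality:
\mathbb{E}\,|C| \;\lesssim\; I[\mathcal{D} : w] + \log I[\mathcal{D} : w] + \log\log I[\mathcal{D} : w] + \dots
% Tighter average-case bound with the zeta-distribution index code
% (Li & El Gamal, 2018, Appendices A--B):
\mathbb{E}\,|C| \;\le\; I[\mathcal{D} : w] + \log\!\big(I[\mathcal{D} : w] + 1\big) + 5
```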
## References
M. Li and P. Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed. New York: Springer-Verlag, 2008.
C. T. Li and A. El Gamal (2018). Strong functional representation lemma and applications to coding theorems. IEEE Transactions on Information Theory, 64(11), 6967-6978. | Summary: This paper proposes overfitting variational Bayesian neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding, which enables direct optimization of the rate-distortion performance by minimizing the $\beta$-ELBO. Moreover, an iterative algorithm for learning prior weight distributions is introduced and a progressive refinement process for the variational posterior is employed for improved performance. Experiments were conducted on image and audio compression.
Strengths: - the work encodes data with variational Bayesian implicit neural representations, which enables direct optimization of the rate-distortion performance by minimizing the $\beta$-ELBO.
- the work presents an iterative algorithm for learning prior weight distributions and a progressive refinement process for the variational posterior for improved performance.
Weaknesses: The presentation can be improved, e.g.,
- $\theta_p$ indicates $\mu_p, \sigma_p$, which should be clearly described.
- Some typos, e.g., Line 183: represent -> represents; Line 190: the to -> the; Line 209: the its -> its; Line 296: subscript "2".
- Title of [5] is missed, "Auto-Encoding Variational Bayes". Similar problem in [35]. And [35] actually repeats [34].
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - It would be nice to compare the proposed method with the concurrent work [52] published at ICML 2023.
- Will the encoding time be an obstacle to applying the proposed method to video compression or 3D data compression?
- Code availability helps reproducibility although enough implementation details are provided.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and detailed review. We will carefully proofread the paper again and correct the typos and references in our final version. Moreover, we address the reviewer's questions below.
> It would be nice to compare the proposed method with the concurrent work [52] published in ICML 2023.
VC-INR [52] adopts a MAML framework similar to those used by prior works, including MSCN [13] and COIN++ [11], and achieves impressive results. Our approach is novel and differs from all of these works; thus, a performance gap with that paper remains. However, we would like to note the significant gap in complexity between the two methods: besides the MAML-based approach, VC-INR relies on a complex gating mechanism, low-rank weight factorization, and variational auto-encoders that encode the weight factors to obtain its good performance. COMBINER, on the other hand, simply encodes a variational INR with relative entropy coding.
> Will the encoding time be the obstacle for applying the proposed method to video compression or 3D data compression?
Thanks for your question. Although videos and 3D data usually have more pixels/points than images, our method still applies to these data formats. As a simple, practical solution, we can sample a subset of pixels during training without significantly compromising performance. For example, when training the model prior on a high-resolution image dataset, we sample 25% of the pixels to accelerate training, which we found to have negligible impact on the final performance. On the other hand, we can reduce the number of iterations during the progressive finetuning process. As shown in Figure 1 of the rebuttal document, as we decrease the iteration number, the finetuning time drops from nearly 1 hour (corresponding to nearly 30000 iterations) to 334 seconds (fewer than 3000 iterations), while sacrificing only 0.3 dB of reconstruction quality.
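The pixel-subsampling idea can be sketched as follows (a hypothetical helper of our own, not the paper's code): at each training step, draw a random 25% of coordinate–value pairs from the signal and fit the INR on just those.

```python
import numpy as np

def sample_pixel_batch(image, frac=0.25, rng=None):
    """Sample a random fraction of (coordinate, value) pairs from an image.

    Training an INR on such subsets each step approximates full-image
    training at a fraction of the cost; frac=0.25 mirrors the 25% used
    when learning the model prior on high-resolution images.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    coords = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1
    ).reshape(-1, 2)
    idx = rng.choice(coords.shape[0], size=int(frac * coords.shape[0]), replace=False)
    return coords[idx], image.reshape(-1, *image.shape[2:])[idx]

img = np.zeros((8, 8, 3))
xy, rgb = sample_pixel_batch(img, frac=0.25)
print(xy.shape, rgb.shape)  # → (16, 2) (16, 3)
```

The same scheme extends to video (t, x, y) or 3D (x, y, z) coordinates, which is why the larger coordinate grids of those modalities need not blow up the per-step cost.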
> Code availability helps reproducibility although enough implementation details are provided.
It took us some time to clean up the code after submitting the manuscript. We have now finished cleaning up part of the code and have sent an anonymous link to the AC, as guided by the NeurIPS rebuttal policy.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the response. | Summary: The paper improves INR-based (image) compression by introducing majorly two techniques: 1) a relative entropy coding based model compressing framework as an alternative to commonly used quantization - entropy coding pipeline; 2) a semi-amortized approach to train the model prior which is similar to beta-VAE and enables RD tradeoff with relative entropy coding. With the proposed novel pipeline, one can adopt COIN-like INR compression with better RD performance because it gets rid of quantization errors.
Strengths: The paper is well-written, with dedicated figures, and easy to follow. Overall I like the story of adopting beta-VAE/REC in INR coding methods. It is intuitive that, given a dataset, all data neural representations share some common knowledge, so an amortized approach should exist; a meta-model describing the prior model distribution is therefore sound. This re-connects compression with VAEs, though in a different way than feature-based learned image compression methods, and this re-connection makes finer rate control of INRs more promising. I believe this work contributes to the community and can inspire future studies.
Weaknesses: My major concern is practicality. It seems that to train a model prior, we must first train many model posteriors. As also discussed by the authors, this makes the entire training (encoding) time extremely slow. Is it possible to use this model posterior to further speed up the encoding of samples outside the training set? I.e., we may imagine efficiently finetuning a new INR (posterior) from the obtained prior. If this finetuning requires less time to converge, the enlarged encoding time may be amortized.
Another issue for me is the somewhat marginal performance improvement. Is such a large encoding time for the model prior worth it, instead of simply training a COIN/COIN++ model longer to compensate for the quantization error? It would be better to report the converging speed of the models, e.g., something similar to PSNR-iteration curves.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations, which is convincing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and feedback on our work; we address the reviewer's concerns below.
> Seems that to train a model prior, we should first train many model posteriors. As also discussed by the authors, this training makes the entire training (encoding) time extremely slow. Is it possible to adopt the model posterior of training samples to further speed up the encoding of samples out of the training set? We may imagine efficiently finetuning a new INR (posterior) from the obtained prior.
This appears to be a misunderstanding; we don't need to re-learn the prior each time we encode an image. We learn the model prior once from a few training images and their corresponding posteriors, akin to learning the weights of a neural network before deployment. Then, during test time, we fix the prior we learnt earlier and only optimize the variational posterior of the INR corresponding to the data.
We experimented with training a new INR model initialized from the established prior. However, we observed its convergence speed is similar to an appropriately randomized initialization. Consequently, given a model prior and a test image, we trained the corresponding INR posterior from scratch. However, it is an interesting question whether we could find better initialization heuristics for our INRs to speed up convergence. We will note this as a possible avenue for future work in the camera-ready version of the paper.
> Is it worth costing such a large encoding time to compensate for the quantization error?
At the cost of increased encoding time, our method delivers two critical improvements over previous methods: (1) joint rate-distortion optimization and (2) memory-efficient training. The first is critical because the development path of VAE-based compression indicates that joint rate-distortion optimization is necessary for superior performance. The second means we do not need to crop high-resolution signals, such as Kodak images, into patches as previous methods do. Consequently, implicit neural representations have much more potential to capture the whole image's correlation structure and deliver better compression performance.
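Schematically, the joint rate-distortion optimization in point (1) is the $\beta$-ELBO over the variational weight posterior $q$ (notation ours; $f_w$ denotes the INR with weights $w$, $\mathcal{D}$ the data, and $\Delta$ the distortion measure):

```latex
\min_{q}\;
\underbrace{\mathbb{E}_{w \sim q}\big[\Delta(\mathcal{D}, f_w)\big]}_{\text{distortion}}
\;+\;
\beta \,\underbrace{D_{KL}\big[q(w) \,\|\, p(w)\big]}_{\text{rate}}
```

where sweeping $\beta$ traces out the rate-distortion curve, and $D_{KL}[q \| p]$ is (approximately) the cost of relative entropy coding a posterior sample under the prior $p$.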
Compression with implicit neural representations is a new and promising paradigm in data compression, and many researchers are currently interested in whether INR-based compression can surpass VAE-based compression in terms of rate-distortion performance. The situation is similar to when Minnen et al. [3] first employed an autoregressive context model to improve compression performance, which increased decoding time dramatically; several follow-up works solved this issue soon after. We believe our work provides insights into this field as a new INR-based compression approach with several advantages, and we are hopeful that many of our method's practical limitations will be removed by future work.
> It would be better to report the converging speed of the models e.g. something similar to PSNR-iteration curves.
We appreciate your constructive suggestion. We conducted additional experiments to address this, varying the number of finetuning iterations. Our analysis on a Kodak image (Kodim03.png) is shown as a PSNR-versus-iteration curve in Figure 1 of our rebuttal document: performance improves marginally as iterations increase, while the finetuning time grows linearly with the iteration number. The results in Figure 2b of our main paper correspond to the highest iteration number. Notably, we can reduce the encoding time at the small cost of a 0.3 dB drop in PSNR by reducing the iteration number to 10%.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns. I maintain my acceptance rating. | Summary: This paper addresses the problem of lossy data compression (evaluation is on images and audio) using implicit neural representations (INRs). In this approach, a neural network is designed that maps coordinates (e.g., x,y locations in an image or time for audio) to samples (RGB values or audio amplitude). Then the overfit network itself is the representation that is stored or transmitted since the data can be approximately recovered by running the network over appropriate coordinates.
INRs for data compression are not new, but this paper includes two key novelties: (1) a variational Bayesian approach is adopted, which (2) allows the method to jointly optimize for rate (the entropy of the neural network parameters) and distortion (the quality of the reconstructed data). Previous INRs for compression used less powerful formulations and separately optimized for rate and distortion, which typically leads to worse overall performance (and the empirical evaluation in this paper demonstrates that here).
Within this setup, the authors describe an iterative algorithm for learning the prior over network weights, and they describe a progressive refinement method that improves performance by dividing the network parameters into blocks and conditionally coding each block given previous blocks.
The method is evaluated on images (CIFAR-10 and Kodak) and audio (LibriSpeech). In both cases, the method is shown to outperform previous INR-based models. For images, it does not outperform the best VAE-based compression methods.
Strengths: Originality: the problem is not new and jointly optimizing rate and distortion is quite common in neural compression in general, but I have not seen it done before for INRs. Nor have I seen an approach that uses variational Bayes and relative entropy coding with INRs.
Quality and clarity are both excellent.
I think the significance of this paper is quite high for the neural compression community and especially researchers looking at INRs. VAE-based (often called "nonlinear transform coding" or NTC) is the dominant approach right now, but INRs are quite interesting and there's a growing literature using such methods for compression. This work is significant because it does the right thing (joint optimization of rate and distortion), which has not been done before with INRs, and because it sets a new SOTA for compression with this approach.
Weaknesses: 1. Obviously the paper would be stronger if the empirical results were better. Focusing on Fig. 2b (RD curves on the Kodak image set), COMBINER trails CVPR2020 by more than 2dB at 0.2 bpp (a huge gap), and CVPR2020 is no longer a SOTA approach. That said, COMBINER is SOTA for INR-based methods (as far as I know), which is an important result, and I think research can get stuck in a local minimum if we reviewers require new approaches to outperform established ones too soon in the research cycle, e.g., Fig. 2b shows that COMBINER slightly outperforms the VAE-based method from ICLR2018.
2. Neural model compression is a large subfield so some discussion or comparison with a model compression method (beyond simple weight quantization) seems like an important baseline that's missing.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Section 5.1 ("Models") discusses the model architecture saying that it was empirically determined. How sensitive are the results to the base model architecture, and how easily can the encoding process prune (reduce parameter entropy to near zero) large parts of a model architecture that is much larger than the RD optimal model?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and constructive feedback and respond to the reviewer's concerns below.
> Obviously, the paper would be stronger if the empirical results were better. COMBINER is SOTA for INR-based methods (as far as I know), which is an important result, and I think research can get stuck in a local minimum if we reviewers require new approaches to outperform established ones too soon in the research cycle.
Thanks for the positive comments. As the reviewer says, it is often unrealistic to expect a new approach to surpass all well-established approaches in a short time, but we do believe that our paper provides insights into this field, and we will keep improving our method. We believe this framework will soon be developed to achieve more impressive compression performance.
> Neural model compression is a large subfield so some discussion or comparison with a model compression method (beyond simple weight quantization) seems like an important baseline that's missing.
Havasi et al. [23] applied relative entropy coding to model compression and demonstrated state-of-the-art performance, comparing against several model compression baselines. Another paper [Ref1] also compared against model compression methods and found that relative entropy coding is more effective for Bayesian model compression than other approaches, especially when the model size is small; this is shown, for example, in Table 1 of [Ref1], where Minimal Random Code Learning achieves the best performance on LeNet5-Caffe. As these prior works already demonstrate the superior performance of relative entropy coding for model compression, we omitted such comparisons from our paper.
## References
[Ref1] Scalable Model Compression by Entropy Penalized Reparameterization. Oktay et al., ICLR 2020.
---
Rebuttal Comment 1.1:
Comment: > Havasi et al. [23] applied relative entropy coding to model compression and demonstrated state-of-the-art performance...
The authors' reliance on previous results comparing relative entropy coding to other model compression methods seems sufficient. It would be good to add a reference to (Oktay 2020) and make it clear in the paper that they're relying on results from that paper and (Havasi 2019).
I don't see any reason to adjust my rating. | Rebuttal 1:
Rebuttal: We extend our gratitude to all the reviewers for their comprehensive feedback and time spent reviewing our manuscript. It is heartening that all the reviewers agree that the idea proposed in this paper is novel and valuable. We have addressed their concerns in our respective responses.
In addition, after submitting the manuscript, we cleaned up the code and improved the practical implementation. On the one hand, we provide the AC with an anonymous link to our code for reproducibility, as guided by the NeurIPS rebuttal policies. On the other hand, we also update the practical encoding and decoding times, as shown in Tables 1 and 2 of the one-page rebuttal document. Specifically, encoding 500 CIFAR-10 images now takes from 13 to 33 minutes (0.91 to 4.45 bpp), i.e., only a few seconds per image. For high-resolution Kodak images, the encoding time is now only 21.5 minutes at 0.07 bpp. Moreover, by adjusting the number of iterations during the progressive finetuning phase, we can reduce the finetuning time by 90%, with only a minimal 0.3 dB performance trade-off. We will include these statistics in the final version of our paper.
Pdf: /pdf/fe8f6020a796413dfa76c97f42e3fc5191646a10.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a new method for compressing general signals, by using Variational Bayesian implicit neural representations. It proposes an algorithm for learning a prior distribution over the implicit representation weights, as well as a pipeline for inferring the posterior distribution corresponding to every given signal to be encoded. The authors employ several tricks (e.g. Relative Entropy Coding) borrowed from previous works, that are meant to increase compression efficiency.
The method is evaluated over two image datasets (namely the CIFAR10 dataset of very small images and the Kodak dataset containing 24 larger images) and one audio dataset, while comparing performance with several image compression methods and the classic MP3 audio compression method, respectively.
Strengths: Generally speaking, the paper is presented well - despite the fact that compression is not my main expertise, I was able to follow and understand most of it.
The combination of the different ideas used in this method is interesting and appealing.
Limitations, as well as concurrent work, are discussed candidly.
Weaknesses: Besides the method's complexity, which could (at least partially) be attributed to my lack of expertise in this field, I found two major flaws:
Performance:
The performance curves presented in Fig. 2 and 5 in the paper do not indicate an advantage of the proposed method over many of the existing methods, in terms of the rate-distortion trade-off. In the case of audio (Fig. 5), some advantage is shown only over the classic MP3 compression, and only for part of the kbps range.
Speed/Complexity:
The encoding process of this method is long and expensive. Besides being a big limitation (as noted by the authors) and hence a weakness of the method compared to some of the other ones, I'm missing some detailed comparison of these aspects.
A third, relatively more minor flaw has to do with the experimental setup itself, as experiments were conducted over only two small image datasets and one audio dataset, with comparison only against MP3 in the latter case.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As I'm not entirely familiar with this field, the authors are welcome to direct my attention to certain points they feel that I overlooked, and I'll gladly consider increasing my initial rating.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations were candidly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort to review our paper and address your concerns below.
## Performance
In our paper, we focus on outperforming previous INR-based compression methods. The performance of VAE-based methods is shown in Figure 2 with dotted lines for reference, as in all previous INR-based compression papers [10][11][12].
While VAE-based methods have long been dominant in neural image compression, compressing with INRs has emerged as a promising alternative but has certain limitations given its novelty and short development history. As Reviewer D8f2 commented, research would get stuck in a local minimum if we require new approaches to outperform established ones too soon in the research cycle. Our proposed method supports joint rate-distortion optimization, a significant improvement compared to previous INR-based compression methods.
## Complexity
First, after improving the practical implementation of entropy coding, we have updated our method's encoding and decoding times in Tables 1 and 2 of the one-page rebuttal document. Both are now acceptable for practical application. Specifically, on the CIFAR-10 dataset, our largest-bitrate model takes around 33 minutes to encode 500 images, i.e., around 3.96 seconds per image. The table below compares COMBINER with previous methods, COIN [10] and COIN++ [11] (coding times as reported in the COIN++ paper).
| | COIN | COIN++ | COMBINER (Ours) |
|-------------------|-------------------|-------------------|-------------------|
| Encoding Time | 2.97 s | 0.095 s | 3.96 s |
| Decoding Time | 0.46 ms | 1.29 ms | 3.90 ms |
Second, we can reduce the number of iterations during the finetuning process to shorten the encoding time. As shown in Figure 1 of the rebuttal document, we can reduce the iteration number from about 30000 to fewer than 3000 while the PSNR drops by only 0.3 dB. This demonstrates that progressive finetuning, which constitutes the bulk of the encoding time, can be shortened substantially while sacrificing only a little compression performance.
## Experimental Setup
Evaluating on the Kodak dataset of 24 images is a standard setting in data compression research: compression models are not very sensitive to imbalanced data distributions, and the image distribution of the Kodak dataset is relatively representative.
In addition, most previous INR-based works are constrained by MAML's computational demands and hence cannot be directly applied to high-resolution image compression as COMBINER can; therefore, these works report their performance on the low-resolution CIFAR-10 dataset. Since these works are our important baselines, we follow the same setting and also compare results on CIFAR-10 for consistency.
_We are happy to answer any further questions the reviewer might have. However, if we answered the reviewer's questions adequately, we kindly invite the reviewer to consider raising their score._
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I now feel comfortable raising my score, following your clarification regarding the distinction with regard to INR-based methods (due to their relative novelty). However, as this argument still does not explain the slight disadvantage with respect to the INR-based MSCN method on the CIFAR dataset, and since it's unclear whether the proposed method is better than MSCN in terms of speed (as noted by reviewer rYFn), I'll settle for borderline accept. | null | null | null | null | null | null |
Cross-Scale MAE: A Tale of Multiscale Exploitation in Remote Sensing | Accept (poster) | Summary: The authors propose a self-supervised method based on ViT masked autoencoders (MAE), namely Cross-Scale MAE, to improve the representations learnt by remote sensing models. It includes a scale augmentation of each input image to learn features at different scales in the encoder while overcoming the need for multiscale aligned data. A constraint is also imposed by exploiting information consistency across multiscale images, improving structural- and semantic-level representations.
The proposed Cross-Scale MAE method includes both discriminative and generative approaches.
Experiments have shown better performance on downstream tasks compared to remote sensing self-supervised MAE methods, namely SatMAE and Scale-MAE. The model has been optimized using xFormers to improve its computational efficiency without performance degradation.
Strengths: 1/ The Cross-Scale MAE method has been clearly explained. It combines both discriminative and generative approaches to learn scale-invariant representations. The cross-scale consistency loss aims to maximize information at different scales while minimizing the sharing information between images at different locations. The cross-scale prediction and reconstruction losses improve the consistency and effectiveness of the decoder to retrieve the semantic information. This original combination is agnostic to the backbone used. It may tackle the actual multiscale problems related to various resolutions in satellite data.
2/ The presented results have shown that Cross-Scale MAE outperforms competing methods on several datasets according to the KNN classification accuracy computed on the learnt representations, a metric highlighting the relevance of those representations.
The ablation studies have been rigorously conducted showing the impact of each proposed loss term and the global combination, the effect of the contrastive loss used to constrain the encoder and the influence of the scale augmentation methods.
3/ Experiments on the computational time and memory usage conducted in the Appendices are appreciated.
4/ This work could have a significant impact since large volume of remote sensing data are publicly available without labels. There is an urgent need of efficient and powerful pretrained models for many remote sensing applications, including climate change and global warming understanding and monitoring.
Weaknesses: 1/ Additional experiments presented in the appendices have led to similar conclusions to those presented in the main document. However, one may notice that the proposed method outperforms SatMAE on average but not consistently, considering the presented standard deviations in Appendices Figures 3, 5 and 8. Standard deviations in the main document, including for competing methods, would have been appreciated to highlight the consistency of the results.
2/ L231 "SatMAE is a current state-of-the-art Masked Auto-Encoder (MAE) for remote sensing imagery" The GFM model [1] could also be considered as a strong competing MAE method outperforming SatMAE. This method has not been compared to Scale-MAE, which could be interesting for the community and could potentially highlight the relevance of the proposed method.
[1] M. Mendieta et al., GFM: Building Geospatial Foundation Models via Continual Pretraining. In Arxiv 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions
1/ Figure 3: why performance curves of Scale-MAE are not included?
2/ The experiments have been conducted considering several fixed scale ratios. Have the performances been quantified using a random scale ratio at each batch during training?
Comments
1/ Figure 2: AT blocks are not defined, it is not clear in (b) what $f_{e,n}$ is.
2/ Typo: lack of consistency between "multiscale" and "multi-scale"
3/ Typo: lack of consistency between "SatMAE" and "Sat-MAE"
4/ Typo: $\tau$ in Eq. 3 is not defined.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have neither detailed the limitations nor the potential negative societal impacts (military or surveillance applications) of the proposed method. The publication of the code and the pre-trained model was not mentioned either and will condition the final rating of the submission for reproducibility reasons and the accessibility to the public.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your meaningful comments. Regarding your questions and comments, our answers are listed as follows.
W1. On moving figures from Supplement to the main text. Thanks for the suggestion. We will move one of these figures to the main text and highlight the following: The graphs in the supplementary material are generated by conducting 25 runs for each sample in the testing dataset. In each of these 25 runs for a specific sample, the sample is cropped to a different random scale before being input into the models. For an individual run, the same image scale and masking configuration are used as inputs to the models being compared to ensure a fair comparison.
W2. On comparison to GFM. The GFM model excels in representation learning through continual learning, aiming to enhance large language model applicability to satellite images via knowledge distillation. Although GFM's source code is not yet available, we recognize its reported segmentation mIoU of 0.753 on the Vaihingen dataset. In a rough comparison, Cross-Scale MAE achieved an accuracy of 0.7603 on the same task. We will include the GFM paper in our related work for comprehensive context.
Q1. On missing Scale-MAE in Fig. 3. Thanks for pointing this out. Scale-MAE was not open-sourced until recently, so we are now able to add its performance to Fig. 3. Please check Fig. 2 of the rebuttal.
Q2. On using random scale ratio. Yes, indeed! The model randomly selects a scale ratio from 0.2-0.8 at each batch during the pre-training.
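The per-batch random scale selection described above can be sketched as follows. This is a minimal illustrative version only: it uses nearest-neighbour striding in place of a proper resize, and all names are hypothetical rather than the paper's actual code.

```python
import random
import numpy as np

def scale_augment(batch, lo=0.2, hi=0.8, seed=None):
    """Downsample every image in a batch by one random scale ratio.

    batch: array of shape (N, H, W, C). A single ratio is drawn per batch,
    mirroring the per-batch random scale selection described in the rebuttal.
    Nearest-neighbour striding stands in for a real resize operation.
    """
    rng = random.Random(seed)
    ratio = rng.uniform(lo, hi)          # one ratio in [lo, hi] per batch
    step = max(1, round(1.0 / ratio))    # stride approximating 1/ratio
    return batch[:, ::step, ::step, :], ratio

batch = np.zeros((8, 64, 64, 3))
aug, ratio = scale_augment(batch, seed=0)
```

In practice the original and downsampled views would both be patchified and fed to the shared encoder, so the network sees the same content at two scales.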
To the comments:
1. We have improved the model structure figure (Figure 2 of the paper) and posted it in the rebuttal file as Figure 3; please check.
2-4. Thanks for pointing out the typos, we will correct them in the revision.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their valuable rebuttal.
The additional results and materials will improve the submission significantly, including new results comparing to other SSL methods and extensive comparisons with the competing methods.
The updated architecture figure is also appreciated.
Considering this additional work and the answers provided to the other reviewers, I would be inclined to increase my rating towards acceptance.
However, it will be conditioned on the concern mentioned in "Limitations", which has not been addressed by the authors: "The publication of the code and the pre-trained model was not mentioned either and will condition the final rating of the submission for reproducibility reasons and the accessibility to the public."
Could you please provide a feedback on this topic?
---
Reply to Comment 1.1.1:
Title: Reply to the Limitations
Comment: Thank you for swiftly providing feedback on our rebuttal. To your concerns, we answer as follows:
Regarding the limitation and next step:
(1). In our augmentation, we currently focus on spatial augmentation (scaling) while maintaining consistent channel content across two augmentations. However, the complexity of remote sensing scenes extends beyond just scale differences. Variations in source images include both scale and channel disparities. To address this challenge, we currently retain shared channels, like RGB, while discarding differing channels. This approach, although maintaining consistency, could lead to loss of valuable information from dropped channels. In forthcoming endeavors, we aim to devise solutions for this issue.
(2). Our multi-level contrastive learning employs diverse strategies across levels—leveraging both positive and negative samples in the encoder level, and exclusively positive samples in the decoder level. This strategy currently yields optimal performance, although the underlying mechanisms remain unexplored. In future research, we intend to delve into the intricacies of this strategy to gain deeper insights.
Regarding the publication of the code and the pre-trained model:
Thank you for inquiring. We are strong advocates of open source principles, and we are committed to sharing our code and pre-trained models on GitHub following the submission. | Summary: This paper proposed a flexible self-supervised learning (SSL) framework that yields robust representations named Cross-Scale MAE by enforcing cross-scale information consistency at structural and semantic levels without needing aligned multiscale remote sensing imagery. And this paper deploys xFormers to realize the Cross-Scale MAE.
Strengths: 1. This paper designed Cross-Scale MAE to enhance the consistency of the representations obtained from different scales for the multi-scale objects in remote sensing.
2. This paper deploys xFormers to realize the Cross-Scale MAE, which provides a viable reference for other models to be trained in a single GPU.
Weaknesses: Lack of clarity in expressing the specific problem to be solved by the designed model
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What exactly is the semantic understanding of remote sensing image understanding, and are there any specific applications? such as remote sensing image classification, and remote sensing image change detection,...
2. In Section 4, are there some visualization results and the related analysis to show the advantages of the Cross-Scale MAE?
3. In Section 4, lines 235-238, “In Section 5.1” and “Section 5.2” have errors.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your meaningful comments. Regarding your questions and comments, our answers are listed as follows.
To the questions:
Q1. On semantic understanding: We would like to explain our viewpoint by addressing them at two levels of representations used in the loss. In the paper, we used two contrastive losses at different levels of representation; one is the output of the encoder (feature 1), and the other is the output after layernorm of the last self-attention block in the decoder (feature 2), which is used to map to the reconstructed image. Comparing the two representations, we could say that feature 1 is a relatively 'lower' feature than feature 2 and thus contains more structural information. Feature 2 is relatively 'closer' to the reconstructed image, which includes more semantic information than feature 1. We will clarify this by emphasizing that this is a “relative” definition.
Due to the usage of cross-scale contrastive loss at the encoder and a cross-scale predictive loss at the decoder, the extracted representation would possess both discriminative and representative characteristics. Hence, the learned representation can serve well for downstream tasks like classification and segmentation. In the Supplement, we posted more results, including the downstream classification task (Supplement C) and zero-shot transfer learning performance on CoCo (Supplement D). We have since added comparisons with more models and on more downstream tasks. In the rebuttal file, please refer to Table 1 (for comparing segmentation performance with SOTA self-supervised segmentation approaches).
Q2. On visualization results: We have included visualization results and related analyses in the Supplement. Supplement A (Figures 1 & 2) provided visualizations of the advantages of our method pre-trained on the fMoW dataset. We also showcased results from a model pre-trained on the CoCo dataset in Figure 4 of Supplement D.1. Finally, in Supplement D.2, Figure 6, we provided a visualization of the zero-shot performance of our method, where a model pre-trained on the CoCo dataset is tested against samples from the fMoW test set, without any additional fine-tuning on CoCo.
Q3. Thanks for pointing out the typos made in Sec. 4. We will make sure to correct it in the revision.
---
Rebuttal Comment 1.1:
Title: Final Rate
Comment: Thank you for your response, I have no further questions and will maintain my rating. | Summary: This paper proposes a Cross-Scale MAE to tackle the multiscale problem in remote sensing images. The triplet loss is designed including cross-scale consistency loss at the encoder, cross-scale prediction loss at the decode, and reconstruction loss. Comparative experiments show competitive performances.
Strengths: 1. This paper introduces the cross-scale consistency for image representation.
2. The scales are aligned in both the encoder and decoder.
3. Compared experiments show its superiority.
Weaknesses: 1. The authors claim the semantics have been aligned in this method. However, the object semantics described in Fig. 1 are vague, because there are no semantic inputs or guidance (object labels, etc.). Therefore, it is inaccurate to claim that the features output by the decoder are semantics. Normally, the decoder is used for image reconstruction tasks (Fig. 2), and the obtained features should be low-level structural information.
2. Multiscale is an old research topic. In this paper, methods such as MAE and contrastive learning are applied to align the multi-scale enhanced inputs, most of these techniques exist in contrastive learning. I am not optimistic about the novelty of the paper.
3. In the experiment, it is only compared with the MAE method in remote sensing. Other contrastive learning methods and the MAE method in machine learning have not been discussed yet.
4. The experimental accuracies in Table 2 should align with their original papers (Sat-MAE, Scale-MAE). K-NN testing alone seems insufficient.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Fig. 1 cannot convey effective information. It just shows aircraft at different sizes, which is common sense, and conveys very little useful information. I think this figure could be integrated into your research motivation or method innovation, rather than remaining just three remote-sensing images.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your meaningful comments. Regarding your questions and comments, our answers are listed as follows.
W1. On “semantic” vs. “structural”: We agree with the reviewer about the meaning of low-level features. We would like to explain our viewpoint by addressing them at two levels of representations used in the loss. In the paper, we used two contrastive losses at different levels of representation; one is the output of the encoder (feature 1), and the other is the output after layernorm of the last self-attention block in the decoder (feature 2), which is used to map to the reconstructed image. Comparing the two representations, we could say that feature 1 is a relatively 'lower' feature than feature 2 and thus contains more structural information. Feature 2 is relatively 'closer' to the reconstructed image, which includes more semantic information than feature 1. We will clarify this by emphasizing that this is a “relative” definition.
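The cross-scale consistency objective described here is a contrastive loss over two views. As a hedged illustration only (not the paper's implementation), a standard InfoNCE-style loss, with matching rows as positives, all other rows as negatives, and a temperature playing the role of the undefined τ in Eq. 3, can be written as:

```python
import numpy as np

def info_nce(z1, z2, tau=0.07):
    """InfoNCE-style contrastive loss between two views' embeddings.

    z1, z2: (N, D) L2-normalised embeddings of the same N samples under two
    scale augmentations. Row i of z1 and row i of z2 are positives; every
    other pairing is a negative. Illustrative sketch, not the paper's code.
    """
    logits = z1 @ z2.T / tau                               # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))              # positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
z = z / np.linalg.norm(z, axis=1, keepdims=True)
loss_matched = info_nce(z, z)            # identical views: lowest loss
loss_mismatched = info_nce(z, z[::-1])   # shuffled positives: higher loss
```

Applying such a loss at the encoder output (with negatives) and a positives-only variant at the decoder output corresponds to the two levels the authors describe.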
W2. On the novelty of the paper: To clarify, we are not 'aligning' multi-scale inputs, but extracting shared representations. Scale augmentation guarantees alignment.
While our pipeline may seem straightforward at first glance, it solves three challenging problems. First, given that masking in MAE can be interpreted as a type of augmentation, are there other types of augmentations that can enhance the robustness of representation learning? This paper figures out an intuitive scale augmentation approach that enables scale-invariant representation. Our ablation study (Table 3 of the main text) showed the significant performance gain between using scale augmentation (2nd row) and without it (1st row).
Second, given the multiple-scale inputs, most existing pipelines would extract features independently from these inputs without exploiting the cross-scale correlation. In the proposed pipeline, we designed three losses that exploit this cross-scale correlation at two levels, the cross-scale consistency loss at the encoder embed and the cross-scale predictive loss at the decoder embed. In fact, how to use the contrastive loss to calculate the cross-scale consistency loss is a contribution of its own. The ablation study in Table 3 of the main text also showed the significant contribution of each of the cross-scale loss function (Table 3, rows 3-5).
Third, the incorporation of xFormers into our pipeline enables the pre-training process to be performed on single GPU, providing a practical pipeline for ViT-based model training. For additional xFormer insights, please refer to Supplement E.
Essentially, compared to SatMAE which is the first paper that applies MAE to extract representations from satellite images with single and fixed scale, Scale-MAE and the proposed Cross-Scale MAE are also based on MAE, but focus on the multi-scale problem. Specifically, Scale-MAE develops the Ground Sample Distance (GSD) position encoding and applies multi de-convolution after decoder to reconstruct images of different scales. Nonetheless, Scale-MAE integrates the scale information into the network via hard coding of known GSD. And, the de-convolution can only result in a specific scale ratio. The proposed Cross-Scale MAE designs the network to learn the information across different scales. With scale augmentation and multi-level contrastive loss between the scale pair and masked patches reconstruction, Cross-Scale MAE can learn informative and consistent representation across different scales without the need of known GSD.
W3. On comparing with other contrastive learning and MAE methods: We have added more comparisons in the rebuttal file, including more contrastive learning frameworks and MAE-based framework. Please check Tables 1 and 2 of the rebuttal file.
W4. On KNN testing being insufficient. We completely agree that having kNN evaluation is insufficient. Due to the usage of cross-scale contrastive loss at the encoder and a cross-scale predictive loss at the decoder, the extracted representation would possess both discriminative and representative characteristics. Hence, downstream tasks should include both classification and segmentation performance. We have since conducted additional experiments on segmentation that further demonstrate consistent improvements over SOTA self-supervised segmentations. The results are shown in Table 1 of the rebuttal.
In addition, the Supplement includes more results, such as the downstream classification task (Supplement C) and zero-shot transfer learning performance on CoCo (Supplement D).
Q1.On Figs. 1 and 2: Thanks for the suggestion. We have redrawn Fig. 2 and combined information from Fig. 1. Please see Fig. 3 in the rebuttal file for a revised illustration of the system architecture.
---
Rebuttal 2:
Title: Final Rating
Comment: The final rating is Reject.
As a pretrained model, the transferability to diverse downstream remote sensing tasks is significant; however, this manuscript only presented image classification performance. The claim of superiority over SatMAE and Scale-MAE is not well supported. Please check their settings; they evaluated their pretrained MAEs on various remote sensing tasks, not only image classification.
I agree with Reviewer UGSM. This manuscript presented a good practice of combining conventional algorithms and validated the approach on image classification. I thank the authors for their effort. However, it brings little knowledge improvement to me. I also agree with Reviewer 14Vd; we have our own criteria for novelty. I don't think this combination can meet the standard of NeurIPS's audience.
Besides, for downstream results in Supplementary, I have some suggestions for authors to improve this manuscript.
First, reconstruction performance may be a result we do not really care about. For SSL, transferred performance on downstream, non-pretext tasks should be the focus, so better reconstruction performance carries little weight here. But I appreciate the reconstruction results the authors showed.
Given the limited novelty and insufficient downstream transfer experiments, I have to reject this manuscript in this round, though I think this manuscript is promising if the authors can provide desired experiments and more thoughtful theoretical analysis.
Thank the authors for their manuscript and detailed rebuttals. | Summary: This paper presents Cross-Scale MAE, a self-supervised model built upon the Masked Auto-Encoder (MAE), which tackles the challenges in remote sensing image understanding (such as extensive coverage, hardware limitations, and misaligned multiscale images) by learning relationships between data at different scales during pretraining. By introducing scale augmentation and enforcing information consistency across multiscale images at both structural and semantic levels, the proposed model demonstrates improved performance in downstream tasks compared to standard MAE and other existing remote sensing MAE methods.
Strengths: Cross-Scale MAE combines discriminative and generative learning approaches, benefiting from self-supervised and multiscale representation learning advances. The novel approach of scale augmentations and multi-level cross-scale consistency constraints ensures consistent and meaningful representations, enabling enhanced remote sensing image understanding.
Weaknesses: Some points are confusing, which prevents the readers from understanding the main ideas. Also, the compared methods are not enough to illustrate the advantages of the proposed cross-scale MAE.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Introduction should be simplified. The main contributions and motivation should be highlighted as clearly and simply as possible.
2. From the current version, I just learn that the authors combine some existing techniques to handle their tasks, limiting the novelty of the proposed model.
3. The caption of Fig. 2 is incomplete and meaningless. Also, the information conveyed by Fig. 2 does not align well with the description in the text, resulting in limited useful information for readers when interpreting the figure.
4. Please explain Eq. 1 in detail. If it is proposed by yourselves, please clarify the rationality and significance. Or, please cite the original literature at least.
5. The multiscale augmentation scheme described in Section 3.2 is just a simple down-sampling. I fail to comprehend the reason behind its inclusion as a section.
6. The details of the encoder and decoder are missing.
7. The reasons for the three terms in Eq. 7 are the same should be explained and confirmed.
8. The compared models are insufficient to confirm the usefulness of cross-scale MAE.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See “Questions.”
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1&2)
On simplifying the Introduction and clarifying contribution and novelty: As suggested, we will remove Fig. 1 and integrate the information there to Fig. 2. We have also redrawn Fig. 2 with more detailed captions to clarify the contribution. Please refer to Fig. 3 of the rebuttal.
Essentially, compared to SatMAE which is the first paper that applies MAE to extract representations from satellite images with single and fixed scale, Scale-MAE and the proposed Cross-Scale MAE are also based on MAE, but focus on the multi-scale problem. Specifically, Scale-MAE develops the Ground Sample Distance (GSD) position encoding and applies multi de-convolution after decoder to reconstruct images of different scales. Nonetheless, Scale-MAE integrates the scale information into the network via hard coding of known GSD. And, the de-convolution can only result in a specific scale ratio. The proposed Cross-Scale MAE designs the network to learn the information across different scales. With scale augmentation and multi-level contrastive loss between the scale pair and masked patches reconstruction, Cross-Scale MAE can learn informative and consistent representation across different scales without the need of known GSD. Additionally, we leverage xFormer on a single GPU for pre-training efficiency.
While our pipeline may seem straightforward at first glance, it solves three challenging problems. First, given that masking in MAE can be interpreted as a type of augmentation, are there other types of augmentations that can enhance the robustness of representation learning? This paper figures out an intuitive scale augmentation approach that enables scale-invariant representation. Our ablation study (Table 3 of the main text) showed the significant performance gain between using scale augmentation (2nd row) and without it (1st row). Second, given the multiple-scale inputs, most existing pipelines would extract features independently from these inputs without exploiting the cross-scale correlation. In the proposed pipeline, we designed three losses that exploit this cross-scale correlation at two levels, the cross-scale consistency loss at the encoder embed and the cross-scale predictive loss at the decoder embed. In fact, how to use the contrastive loss to calculate the cross-scale consistency loss is a contribution of its own. The ablation study in Table 3 of the main text also showed the significant contribution of each of the cross-scale loss function (Table 3, rows 3-5). Third, the incorporation of xFormers into our pipeline enables the pre-training process to be performed on single GPU, providing a practical pipeline for ViT-based model training.
Q3. On Fig. 2: Thanks for the suggestion. We will remove Fig. 1. We redrew Fig. 2 with more descriptive caption. Please see Fig. 3 in rebuttal.
Q4. On Eq. 1: This is standard positional encoding. We will make sure to add reference and clarify this is part of the background of basic setting, not our proposed work.
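For reference, the standard fixed sinusoidal positional encoding (Vaswani et al., 2017) that Eq. 1 presumably corresponds to can be sketched as follows; the function name and the 14x14-patch example are illustrative, and an even embedding dimension is assumed:

```python
import numpy as np

def sinusoidal_position_encoding(num_pos, dim):
    """Standard fixed positional encoding:
    PE[p, 2i] = sin(p / 10000^(2i/dim)), PE[p, 2i+1] = cos(p / 10000^(2i/dim)).
    Assumes dim is even."""
    pos = np.arange(num_pos)[:, None]                    # (num_pos, 1)
    div = np.power(10000.0, np.arange(0, dim, 2) / dim)  # (dim/2,)
    pe = np.zeros((num_pos, dim))
    pe[:, 0::2] = np.sin(pos / div)
    pe[:, 1::2] = np.cos(pos / div)
    return pe

pe = sinusoidal_position_encoding(196, 768)  # e.g. 14x14 patches, ViT-Base width
```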
Q5. On the necessity of having Sec. 3.2 as a section: We've dedicated a section to this process due to its pivotal role in our framework. Different from Scale-MAE's complex position encoding, we use a more intuitive approach—scale augmentation with random ratios—to exploit scale information. This enables the network to learn from data. Leveraging scale pairs, we devise multi-stage contrastive learning. Moreover, our framework transforms into a multi-modality fusion tool, accommodating inputs from diverse sensing modalities, like WV and Sentinel 2, or Sentinel 2 and Landsat, without the need for synthesis.
Q6. Our encoder and decoder architecture follows MAE and SatMAE's method, adopting ViT as the backbone. For the encoder, a ViT processes visible patches, embedding them with positional embeddings. Transformer blocks are utilized, excluding masked patches and negating mask tokens. ViT Base employs an embedding dimension of 768 with 12 transformer blocks, each having 12 attention heads. In ViT Large, the embedding dimension is 1024, and the encoder comprises 24 transformer blocks with 16 attention heads.
The decoder incorporates all tokens, including encoded visible patches and mask tokens, representing missing patches. Mask tokens are shared, learned vectors. Although also using Transformer blocks, the decoder is smaller and shallower than the encoder, enhancing efficiency without compromising meaningful representation learning. During pre-training, the decoder guides the encoder, fostering image reconstruction. Both ViT Base and ViT Large maintain an identical decoder. It has an embedding dimension of 512, with 8 layers of 16 attention heads each. Furthermore, Fig. 2 has been updated for an enhanced encoder/decoder depiction. Please see Fig. 3 of the rebuttal.
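As a compact summary of the architectural details in this answer (the numeric values are taken directly from the text above; the dictionary layout itself is only an illustrative convention):

```python
# ViT encoder/decoder hyperparameters as stated in the rebuttal answer.
VIT_CONFIGS = {
    "vit_base_encoder":  {"embed_dim": 768,  "depth": 12, "num_heads": 12},
    "vit_large_encoder": {"embed_dim": 1024, "depth": 24, "num_heads": 16},
    "decoder":           {"embed_dim": 512,  "depth": 8,  "num_heads": 16},
}
```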
Q7. Regarding the terms in Eq. 7: These combined losses comprehensively optimize the proposed Cross-Scale MAE model. The cross-scale consistency loss ($L_{cc}$) ensures uniform and dependable features across diverse scales. The decoder's cross-scale prediction loss ($L_{cp}$) highlights the model's versatile scaling prediction, promoting robustness. Lastly, the reconstruction loss ($L_{re}$) guarantees faithful input-output representation, preserving vital details. Together, these losses offer a holistic optimization strategy, each addressing distinct challenges in multi-scale feature alignment and extraction.
Q8. On compared models being insufficient to showcase the proposed: In the Supplement, we posted more results, including the downstream classification task (Supplement C) and zero-shot transfer learning performance on CoCo (Supplement D). We have since added comparisons with more models and on more downstream tasks. In the rebuttal file, please refer to Table 1 (for comparing segmentation performance with SOTA self-supervised segmentation approaches), and Table 2 (for adding linear classification comparisons to three more models).
---
Rebuttal Comment 1.1:
Title: Final Rate
Comment: Thank you for the authors' response. Although they have provided additional explanations and experiments to enrich their work, the proposed methods mostly consist of piecing together existing techniques, and their level of innovation still falls short of the requirements set by NIPS. Therefore, my final decision is to reject the paper. | Rebuttal 1:
Rebuttal: We appreciate all reviewers' insightful comments.
First, we wish to clarify that our supplementary material, accompanying the main paper, encompasses a wealth of substantial experimental outcomes. This includes the visualization of multiscale representation benefits, downstream task analyses, zero-shot transfer learning demonstrations, and an evaluation of xFormers. We believe that several concerns raised by reviewers can be addressed effectively through the supplementary material.
Furthermore, to comprehensively address reviewer queries, we've curated a rebuttal document. This document integrates additional experiment results as requested by reviewers and incorporates refined figures in accordance with their suggestions.
We trust that our responses and explanations address your inquiries and uncertainties. Feel free to inquire further about our explanations.
Pdf: /pdf/8e6333204062a0abdc462535c0ac769dc68db746.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a Masked-Auto-Encoder approach for training remote sensing data at different scales. The model is pretrained on the FMoW-RGB and then the representations assessed through KNN classification performance on four other remote sensing datasets. Comparisons are made to Sat-MAE and Scale-MAE. Ablation studies are performed to analyze the impact of the different parts of the loss function. Performance is seen to be higher than Sat-MAE across different amounts of data and on different datasets and generally higher than Scale-MAE on different datasets (performance with different amounts of data not shown).
Strengths: The multi-scale problem in remote sensing is a key challenge that rightly deserves research focus.
The writing is mostly clear, although some sentences or pronoun references could be made clearer in places.
The ablation studies indicate the contribution of the various architectural components.
KNN performance as compared to SAT-MAE is very promising.
A range of datasets at different scales are analyzed.
Weaknesses: Unlike natural images, the scale in most remote sensing tasks is known because of the geoinformation (this is similar to the situation in microscopy where the zoom/scale is known). One question is whether features across scales is actually the best approach in this regard because the scale from a given sensor is fixed. A very relevant approach is given by https://github.com/mahmoodlab/HIPT. Although originally developed for microscopy images, the same notion around handling large image with a known scale size is explored.
In the related works, there is no reference to remote-sensing-specific work in this field, including approaches like SeCo. A broad review article is mentioned, but these other methods are not evaluated for comparison.
Given the comparison to Sat-MAE and Scale-MAE, it would be beneficial to discuss these explicitly in the related works and how the current method differs from them.
It would be beneficial to perform the analysis over multiple runs so that confidence intervals and the significance of performance differences can be established.
Figure 3 (performance as a function of data) is only shown in comparison to Sat-MAE, which Figure 2 shows to substantially underperform Scale-MAE. Why not show results from Scale-MAE here as well?
While KNN classification can be used to compare the learned representations, it ultimately does not measure the main goal of SSL models, which is to learn a representation that performs better on downstream tasks. KNN evaluation is one measure, but it is also critical to analyze with a (full) frozen decoder as well as a fine-tuned decoder.
Details around the training protocol (e.g., learning rate) are missing. Training on a single GPU is mentioned in 4.3, but it is unclear if this setup is used throughout the work, or only done after the fact as a demonstration that it can be done.
Even though multiple datasets are explored, pretraining is done on a single dataset and then transferred to others. How does changing which dataset is used impact results? Furthermore, how does training on multiple datasets with different resolutions work? In theory this could be the greatest advantage especially if pretraining on low-resolution data (which is often publicly available) could transfer to very high-resolution datasets.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The caption for Figure 2 could be more descriptive explaining the different components.
It might be more helpful to make Table 5 a chart so the saturation of the performance can be visualized. It's difficult to tell from the raw numbers if the model has finished training.
The authors reference the varying number of channels used in remote sensing imagery in the introduction, but then do not explore it here. How can the method be used to handle imagery from sources with a different number of channels.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are not explicitly discussed although possible next steps are briefly mentioned. No negative societal impacts are expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Our answers to your questions and comments are listed below.
To the weaknesses:
W1. On scale being fixed: We completely agree that remote sensing images usually come with a known scale that is fixed for a specific sensing modality. This is why existing practice usually has to pre-train different models to handle images with different scales/resolutions. By introducing scale augmentation as input and cross-scale losses, the Cross-Scale MAE model is able to handle images of various scales using just ONE model. This makes the model "scale-invariant" and greatly increases its generalization capacity. Additionally, even though the paper is motivated by remote sensing applications, the same network can be used to process natural images, where scale information is usually not known a priori. The experimental results in Supplement D, where the network is pre-trained on COCO, validate the model's effectiveness under scale changes.
Furthermore, from the ablation study summarized in Tab3 of the main text, we also showed the significant performance gain between using scale augmentation (2nd row) and without it (1st row).
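To make the scale-augmentation idea above concrete, here is a minimal sketch (illustrative only, not the actual Cross-Scale MAE augmentation pipeline, which the paper describes separately): a scale pair can be synthesized from a single image by resampling it at two different factors, so the model sees the same content at different scales without needing the ground-truth GSD.

```python
import numpy as np

def scale_pair(img: np.ndarray, factors=(1, 2)):
    """Synthesize a pair of views of the same image at two scales
    via simple strided (nearest-neighbor) downsampling.
    NOTE: illustrative sketch only; the paper's actual augmentation
    (random cropping, resizing, etc.) differs in detail."""
    views = []
    for f in factors:
        views.append(img[::f, ::f])  # keep every f-th row and column
    return views

# Toy 8x8 "image" standing in for a remote sensing patch.
img = np.arange(64, dtype=np.float32).reshape(8, 8)
hi, lo = scale_pair(img)
print(hi.shape, lo.shape)  # (8, 8) (4, 4)
```

Cross-scale losses would then be computed between the representations of `hi` and `lo`, encouraging scale-consistent features.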
W2. On related works like SeCo: We will include references to more related works like SeCo in addition to the broad review article. We also added comparisons to more methods to Tab3 in Supplement C. The new table (Tab2 of rebuttal) shows Cross-Scale MAE outperforms SeCo and Scale-MAE in the linear classification task.
W3. On more discussion of SatMAE and Scale-MAE: We will add the following details. SatMAE is the first paper to apply MAE to extract representations from satellite images, but at a single, fixed scale. Scale-MAE and the proposed Cross-Scale MAE are also based on MAE, but focus on the multi-scale problem. Specifically, Scale-MAE develops the Ground Sample Distance (GSD) positional encoding and applies multiple de-convolutions after the decoder to reconstruct images at different scales. Nonetheless, Scale-MAE integrates the scale information into the network by hard-coding the known GSD, and the de-convolution can only produce a specific scale ratio. The proposed Cross-Scale MAE instead designs the network to learn information across different scales. With scale augmentation, a multi-level contrastive loss between the scale pair, and masked patch reconstruction, it learns informative and consistent representations across different scales without needing a known GSD.
W4. On the necessity of multiple runs: When employing a ViT backbone, reporting variance across multiple training runs is not customary, echoing practices in related works like SatMAE and Scale-MAE. This is because loss convergence is consistent and the considerable compute resources needed for training discourage repeated runs of the same model setup. Our development process involved training multiple models, which revealed a consistent convergence pattern across them; hence we opted against repeated runs of the same final model configuration. Testing runs, however, did undergo such scrutiny; the results are available in the Supplement (Figs. 3, 5, 8). These graphs stem from 25 runs per testing-set sample, with each run involving varied cropping and scaling. In every run, uniform image scale and masking are employed for fair comparison.
W5. On adding Scale-MAE performance to Fig. 3: The Scale-MAE model was not open-sourced until recently. We have thus been able to add its performance curve to Fig. 3. Please check Fig. 2 of the rebuttal.
W6. On more downstream tasks: We completely agree that kNN evaluation alone is insufficient. Because of the cross-scale contrastive loss at the encoder and the cross-scale predictive loss at the decoder, the extracted representation possesses both discriminative and representative characteristics. Hence, downstream evaluation should include both classification and segmentation performance. We have since conducted additional experiments on segmentation that further demonstrate consistent improvements over SOTA self-supervised segmentation methods. The results are shown in Tab1 of the rebuttal.
W7. On details of the training protocol: The training details, including hyperparameter choices, can be found in Supplement C (2nd paragraph); we should have referred to it in the main text. It is important to note that all other hyperparameters remained consistent with those used for the final models presented in the MAE and SatMAE papers we built upon. Regarding the GPU setup, we confirm that the models were trained on a single GPU throughout the entirety of this work.
W8. On using multiple datasets with different resolutions for training: The fMoW dataset is indeed a multi-scale dataset, with the Ground Sample Distance (GSD) varying from 0.3m to 3m. Also, during training, we assume the actual GSD is unknown and apply scale augmentation to synthesize image pairs with different scales. The trained model can therefore handle input images with a wide range of scale variation: it handles both single-scale and multi-scale datasets without needing the ground-truth GSD. Besides fMoW, we also pre-trained the model on COCO to evaluate its effectiveness on natural images. The results are provided in Supplement D.
Q1. On Fig2: We redrew Fig2 with a more descriptive caption. Please see Fig3 of the rebuttal.
Q2. On Tab5: We converted Tab5 to a line chart. Please see Fig1 in rebuttal.
Q3. On imagery with different channels: This is indeed a challenging problem and has been dealt with in areas like domain adaptation and channel harmonization, where different channels are preprocessed to map to a reference set of channels. Cross-Scale MAE cannot fundamentally solve this problem. For now, we pick the subset of bands shared by different sensing modalities, i.e., RGB and near-infrared. | null | null | null | null | null | null |
LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference | Accept (poster) | Summary: The paper propose a novel framework called LinGCN, which reduces the multiplication depth and optimize the performance of Homomorphic Encryption based GCN inference. According to the evaluation results, LinGCN shows promising results in both latency speedup and inference accuracy over existing approaches.
Strengths: * The paper is well-written, with illustrative figures on the framework and evaluation results.
* I'm not an expert in privacy-preserving computation, but the evaluation results seem promising to me. The proposed framework not only improves the latency speedup but also the inference accuracy. Behind the stage is their proposed differentiable structural linearization, which reduces the multiplication depth in the HE process.
Weaknesses: * The authors should better provide analytical results on the reduced computational cost and the utility loss. I mean although the intuition and the methodology design is convincing, some analytical statements may further support the empirical results.
* The authors should also discuss how the security of the HE-based GCN inference process is influenced with the modified designs proposed in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness part above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the weakness part above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your valuable insight.
### Response to weakness 1:
With a reduced multiplication depth, computational cost and latency are also reduced. Analytically, the modulus Q is smaller at a shallower depth, so the ciphertext polynomial coefficients have a lower bit size, which reduces data-loading time and the computational latency of each operator. Further analytical details on computation and latency reduction can be found in **Table 2** in the **appendix**.
### Response to weakness 2:
The proposed LinGCN framework **does not impact** the security level of the CKKS-based HE framework. For all experiments with multiplication-depth reduction, we guarantee a fixed 128-bit security level. Further details can be found in Table 1 in the appendix. Here we provide a brief explanation:
First, we must guarantee a security level of at least 128 bits, which is determined by the encryption parameters N and Q. With one level saved, the budget for Q also decreases.
For example, with N=16384, a maximum Q of 438 bits guarantees a 128-bit security level. A non-optimized 3-ST-GCN model requires Q=509 bits to complete HE inference (N=32768). Each pruned activation saves 33 bits of Q. Thus, after pruning 3 activation layers of this model, Q = 509 - 33*3 = 410 bits. We can then reduce N from 32768 to 16384 and still guarantee a 128-bit security level.
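The budget arithmetic above can be checked with a short calculation (a sketch using only the numbers quoted in this response; the 33-bits-per-level saving is the figure reported here, not a general CKKS constant):

```python
# Numbers quoted in the response above (illustrative constants).
Q_INITIAL = 509          # bits needed by the non-optimized 3-ST-GCN model (N = 32768)
BITS_PER_LEVEL = 33      # bits of Q saved per pruned activation level
Q_MAX_N16384 = 438       # maximum Q keeping 128-bit security at N = 16384

def q_after_pruning(num_pruned_levels: int) -> int:
    """Remaining modulus budget (in bits) after pruning activation levels."""
    return Q_INITIAL - BITS_PER_LEVEL * num_pruned_levels

q = q_after_pruning(3)
print(q)                   # 410
print(q <= Q_MAX_N16384)   # True: N can drop from 32768 to 16384
```

Since 410 fits under the 438-bit ceiling, the ring dimension N can be halved while keeping 128-bit security, which is where the latency reduction comes from.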
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I have read the comments and will keep my score. | Summary: To improve the efficiency of HE-based PPML for GCN, in this paper, the authors propose LinGCN, an end-to-end framework for non-linear reduction and polynomial replacement. LinGCN features 3 key elements, including 1) a differentiable structural linearization algorithm, 2) a compact node-wise polynomial replacement policy, and 3) finer-grained operator fusion for node-wise activation functions. The authors demonstrate good improvement over the baseline CryptoGCN.
Strengths: 1. The paper is well-written and the motivation is well-explained.
2. The authors demonstrate both higher accuracy and lower latency compared to the baseline CrypoGCN.
3. The ablation experiments are comprehensive.
Weaknesses: 1. Only one dataset is shown in the paper. This makes it doubtful for how well the proposed techniques generalize to other datasets, e.g., Cora?
2. Similar techniques have all been proposed by previous methods. For example, ReLU linearization and polynomial replacement are widely studied for CNNs. The proposed fusion technique is also widely used in plaintext CNN inference. Although the synchronized linearization is unique to GCNs, it still makes the proposed method incremental.
3. It is unclear why LinGCN demonstrates better accuracy compared to CryptoGCN. Although KD is helpful, more analysis needs to be provided.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Why can LinGCN achieve better accuracy compared to CryptoGCN? What if similar distillation is leveraged for CryptoGCN as well?
2. How is the proposed linearization method different from those used for CNNs, like Selective Network Linearization (SNL), RRNet, etc.?
3. How does the proposed method generalize to other datasets or tasks?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful review.
### Response to question 1 (Combined response with weakness 3):
Please refer to **Global Response (ii)** for further detail.
### Response to question 2 (Combined response with weakness 2):
Thanks for the valuable insight. To the best of our knowledge, our LinGCN framework is the first attempt to apply fine-grained **structural linearization** in a CKKS-based HE setting for GCNs to significantly reduce the multiplication depth. Although LinGCN shares some similarities with existing CNN works such as SNL and RRNet, the fundamental problem we are trying to solve is different. LinGCN focuses on leveraging **structural linearization** to **reduce the multiplication depth** in the CKKS-based HE setting, which benefits all HE-based operators (refer to Table 2 in the appendix). SNL and RRNet partially replace ReLU with linear or polynomial functions to reduce the nonlinear-operator latency of private inference under an MPC setting; those CNN methods cannot benefit the CKKS-based HE setting, as discussed in the manuscript from line 157 to line 175. LinGCN is the first to significantly advance the benchmark for fine-grained multiplication-depth reduction, representing a new baseline for accelerating HE-based GCN private inference, an emerging field that has been far less explored than its CNN counterpart. We believe our new results and insights will help the community move forward in this important field.
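To make the structural-linearization idea concrete, here is a toy sketch (the hard 0/1 gates and all names here are illustrative placeholders, not the actual LinGCN implementation, which trains the gates with gradient-based learning and L0 regularization as described in Section 3.2 of the paper): each node carries a gate that selects either ReLU or the identity, and nodes whose gate is "off" no longer contribute a nonlinearity, which is what allows a multiplication level to be removed.

```python
import numpy as np

def gated_activation(x: np.ndarray, gate: np.ndarray) -> np.ndarray:
    """Per-node choice between ReLU (gate=1) and identity (gate=0).
    In training, `gate` would be a learnable parameter pushed toward
    0/1 by an L0-style sparsity penalty; here it is fixed for illustration."""
    relu = np.maximum(x, 0.0)
    return gate * relu + (1.0 - gate) * x

x = np.array([[-2.0, 3.0],
              [-1.0, -4.0]])       # features of 2 graph nodes
gate = np.array([[1.0], [0.0]])    # node 0 keeps ReLU, node 1 is linearized
out = gated_activation(x, gate)
print(out)  # [[ 0.  3.] [-1. -4.]]
```

When every gate in a layer reaches 0, that layer's activation disappears entirely and one multiplication level of the CKKS budget is freed.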
### Response to question 3 (Combined response with weakness 1):
The proposed LinGCN is evaluated on the NTU-XVIEW skeleton joint dataset, one of the largest graph datasets evaluated under a CKKS-based HE scheme so far. We also evaluate the performance using three model variants: STGCN-3-128, STGCN-3-256, and STGCN-6-256, where the first number is the layer count and the second is the channel count of the last STGCN layer.
Per reviewer request, we provide an extended evaluation of LinGCN framework on GCN model for **Flickr dataset**. The Flickr dataset consists of 89,250 nodes, 989,006 edges, and 500 feature dimensions. This dataset's task involves categorizing images based on descriptions and common online properties. Please refer to **Global Response (i)** for further details.
---
Rebuttal 2:
Comment: Dear Reviewer,
May we kindly inquire if the responses have sufficiently addressed your concerns, or if further explanations or clarifications are needed? Your time and efforts in evaluating our work are greatly appreciated.
Best
---
Rebuttal Comment 2.1:
Comment: Thank the author for the clarification.
My concern about the novelty of the linearization is still not fully addressed. While methods like SNL or RRNet cannot help reduce the multiplication depth, the main reason is their ReLU pruning pattern, not their methods. There are other papers, e.g., "Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference", which simultaneously reduce the ReLU counts and the depth of the network. The advantages of the proposed method compared with these works are not very clear.
Hence, I tend to keep my original score for the paper.
---
Reply to Comment 2.1.1:
Comment: We very much appreciate your time and further insight, and would like to provide our explanation as follows:
First, thanks a lot for pointing us to the new reference [1]. However, we note that [1] became available on arXiv on Apr 26th and was published at CVPR 2023 (Jun 18-22, 2023), while this NeurIPS work was submitted on May 11th. Given the short interval, we are unable to (though we would like to) include a comparison with [1] in the paper submission or during the current rebuttal period. Also, according to **the NeurIPS 2023 policy on comparisons to recent work**-*"Papers appearing less than **two months** before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline"*-[1] is considered concurrent work.
Second, we would like to further emphasize the technical difference between our method and [1], and why our method outperforms existing works like [1]: Work [1] removes whole ReLU layers based on sensitivity to reduce the model depth, and focuses on MPC-based PI acceleration. However, the sensitivity is obtained from the original full model, which may lead to inconsistency when nonlinearity is aggressively pruned. In principle, the method used by [1] is similar to that of CryptoGCN [2], whose key is sensitivity checking and pruning of nonlinearities. However, as our comparison results in Figure 1 (page 2) confirm, that approach is less effective than our LinGCN framework. This is because LinGCN uses gradient-based structural linearization (Section 3.2, pages 4-5) with L0 regularization during the training phase, which is different from, and superior to, the sensitivity-checking-based method.
We sincerely hope those explanations address your concern. We could include the quantitative comparison and discussion in the updated version based on the reviewer’s further advice.
[1] Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference
[2] CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference | Summary: LinGCN optimizes HE-based GCN inference by reducing multiplication levels through a differentiable structural linearization algorithm and a compact node-wise polynomial replacement policy, both guided by a two-level distillation from an all-ReLU teacher model. LinGCN also improves HE solutions for GCN private inference, enabling finer operator fusion for node-wise activation functions.
Strengths: 1 Important and well-motivated problem
2 Experiments show a clear improvement in latency and accuracy compared to prior work.
Weaknesses: While the concepts of structural linearization and the replacement of ReLU with a polynomial function are not novel in themselves, their application to Graph Convolutional Networks (GCN) is noteworthy. However, it's important to mention that the degree of novelty in this approach is not particularly significant, given the pre-existing knowledge in this field.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The curves in Figure 1 is confusing. Why the blue curve encompasses a variety of points (red stars, orange dots, and blue dots)? Similarly, the black curve also exhibits this characteristic.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations of the current work nor does it discuss any negative societal impact. Concentrating solely on non-linear operations may not be sufficient for achieving PPML-efficient architectures. This is particularly true in light of emerging lighter cryptographic topologies for non-linear operations, such as those discussed in the following work:
Huang, Zhicong, et al. "Cheetah: Lean and fast secure {two-party} deep neural network inference." 31st USENIX Security Symposium (USENIX Security 22). 2022.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your valuable comments.
### Response to weakness:
Thanks for the important feedback. Our main contribution, **structural linearization** for multiplication-depth reduction, is intrinsically different from the existing linearization methods (unstructured linearization, layer-wise linearization) used in CNN works and CryptoGCN. **Structural linearization**-based multiplication-depth reduction especially benefits the CKKS-based HE setting. To the best of our knowledge, none of the SOTA CNN works (e.g., SNL, DELPHI) have adopted the **structural linearization** method. The polynomial replacement itself is only one of the contributions of our LinGCN framework. Our major contributions are summarized in **Global Response (ii)**.
### Response to question:
Thanks for the suggestions. As indicated in the figure legend, the blue curve is the Pareto frontier of the LinGCN framework. Red stars denote the STGCN-3-128 model under the LinGCN framework, orange dots the STGCN-3-256 model, and blue dots the STGCN-6-256 model. The black dots and black curve represent the model performance data points collected from CryptoGCN and its Pareto frontier. We will include more descriptions in the legend and make the figure clearer in the final version.
### Response to limitations:
Cheetah requires the client's assistance during computation because it uses an MPC + HE framework, while our setting does not require the client's assistance because we use a CKKS-based HE scheme for private inference. The ciphertext multiplication method used in [1] is BFV, which requires the client to decrypt the extracted LWE ciphertext, re-encrypt intermediate results, and send the new ciphertexts to the server before the server can perform the next layer's computation. Client computation and communication are the bottleneck of the method proposed in [1]. The major latency bottleneck in our CKKS-based HE system is the **multiplication depth**, which is associated with both linear and nonlinear operations, not the nonlinearity itself. Hence, structural linearization helps reduce the ring size while maintaining the same security level, and makes all operator latencies smaller. The detailed operator latency breakdown under **multiplication-depth reduction** can be found in Appendix Table 2. The table shows that rotation, not the nonlinear operator itself, is the major performance bottleneck.
[1] Huang et al., "Cheetah: Lean and fast secure two-party deep neural network inference," USENIX Security 2022. | Summary: This study presents an approach for enhancing the efficiency of private inference ( using homomorphic encryption (HE)) in Spatiotemporal graph convolutional networks through fine-grained and structured pruning/dropping of non-linearity. The proposed method consists of two main steps. Firstly, ReLUs are eliminated by substituting them with identity/linear operations, reducing the multiplicative depth required for HE operations. Subsequently, the remaining non-linearities are replaced with HE-friendly quadratic activations.
Strengths: 1. The idea of optimizing ReLUs with a fine-grained and structured approach is promising. Typically, fine-grained optimizations tend to lack structure, which can reduce the overall benefits of optimization.
2. The paper is well-structured and easy to understand. The proposed method is explained clearly and organized, making it accessible to readers. The authors provide detailed explanations and build a strong foundation for their approach, ensuring it can be easily grasped.
Weaknesses:
**Limited novelty (compared to CryptoGCN)**:
Compared to the CryptoGCN paper (NeurIPS'22), this paper demonstrates limited novelty. Both papers propose techniques to reduce the multiplicative depth in STGCN and replace ReLUs with quadratic activations. However, while CryptoGCN employs fine-tuning to recover the accuracy drop, this paper utilizes logit and feature-based distillation to address the accuracy drop resulting from the reduction in non-linearity.
For a fair comparison, particularly regarding the improvement in accuracy, an ablation study should be included in the paper to demonstrate the benefits of logit-based knowledge distillation, feature-based knowledge distillation, and the combined usage of both techniques compared to CryptoGCN.
**CKKS implementation is used rather than rotation-free HE**
In this paper, the authors used the CKKS implementation of Homomorphic Encryption (HE), which incurs a significant total latency due to rotation operations. Table 2 in the Appendix demonstrates that **approximately 90%** of the total HE latency is consumed by rotation operations in the 12-STGCN-6-256 model. This highlights the inefficiency of the CKKS implementation for private inference, especially when considering the availability of rotation-free HE techniques [1] and their implementation in Microsoft SEAL.
**Minor corrections**
As stated in lines #157 to #161, the optimizations implemented in SNL, DELPHI, and SAFENet are sub-optimal for achieving high efficiency in HE. It is important to note that these optimizations primarily aim to minimize the cost of non-linearity in Multi-Party Computation, under the assumption that intensive HE operations can be carried out offline.
1. Huang et al., "Cheetah: Lean and fast secure two-party deep neural network inference," USENIX Security 2022.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The sensitivity analysis (Figure 5) shows that ReLUs in the penultimate layers (layer 4) are highly effective, which aligns with similar findings in CNN studies like DeepReDuce (ICML'21) and SENet (ICLR'23). What does it imply about feature learning within CNNs vs GCNs?
2. Is the better accuracy of LINGCN over CryptoGCN only due to CryptoGCN's use of two-level Knowledge Distillation? Both these methods use second-degree polynomial activation (instead ReLUs).
3. Does the accuracy mentioned in the paper refer to floating-point accuracy (in plaintext) or fixed-point accuracy (after converting weights/biases and activation to fixed-point for operations in the ciphertext domain)?
**Note** I am open to increasing the score if most concerns are addressed in the rebuttal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are not discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your constructive comments.
### Response to limitation 1, novelty:
CryptoGCN's approach to employing layer-wise nonlinear pruning and polynomial replacement results in a substantial degradation in accuracy and an insufficient reduction in multiplication depth. Our proposed LinGCN framework is different from CryptoGCN. LinGCN employs: (i) **Differentiable structural linearization**. (ii) Node-wise polynomial activation functions augmented by distillation. (iii) Exploration of nonlinear reduction via polynomial sequences. (iv) Homomorphic Encryption (HE) for enabling finer-grained operator fusion.The LinGCN is designed to significantly reduce multiplication depth and yield much better performance compared to CryptoGCN.
The effects of these strategies have been empirically demonstrated in the ablation study, as illustrated in Figure 6 (Page 9), which encompasses the (a) replacement sequence, (b) node-wise or layer-wise nonlinear reduction, (c) logit-based distillation, and (d) feature-based distillation.
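As a toy illustration of the node-wise polynomial replacement mentioned above (the coefficients below are arbitrary placeholders, not the trained ones from the paper), a degree-2 activation's only ciphertext-ciphertext product is x*x, which is what makes such activations HE-friendly compared with a non-polynomial ReLU:

```python
import numpy as np

def poly2_activation(x: np.ndarray, a=0.25, b=0.5, c=0.0) -> np.ndarray:
    """HE-friendly degree-2 activation a*x^2 + b*x + c.
    Under CKKS, the single x*x product is its only ciphertext-ciphertext
    multiplication; a, b, c are illustrative placeholder coefficients."""
    return a * x * x + b * x + c

x = np.linspace(-2.0, 2.0, 5)
print(poly2_activation(x))  # smooth, ReLU-like curve on [-2, 2]
```

In LinGCN these polynomial coefficients are learned node-wise and guided by distillation from an all-ReLU teacher, rather than fixed as in this sketch.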
### Response to limitation 2, encryption scheme:
Our research has a different threat model setting in comparison to the setting described in [1]. Specifically, the rotation-free Homomorphic Encryption (HE) described in [1] requires client assistance for private inference, assuming that clients have significant computational capabilities and can actively participate in the encrypted computation process. This method, which leverages the BFV scheme for ciphertext multiplication, requires the client to decrypt the extracted LWE ciphertext, re-encrypt intermediate results, and transmit new ciphertexts to the server for subsequent layer computation. Overall, the framework in [1] introduces computational and network communication bottlenecks from the client side.
In contrast, our methodology is focused on an HE without-client-aid setting, in which the server solely requires the client to encrypt and transmit input data once. Then, the server autonomously conducts the necessary computations.
Our HE without-client-aid setting and the MPC+HE setting in [1] represent two orthogonal strategies for implementing Privacy-Preserving Machine Learning (PPML). Each approach caters to distinct private inference scenarios. The primary objective of our LinGCN framework is to alleviate the server's computational burden within the context of an HE without-client-aid setting.
[1] Huang et al., "Cheetah: Lean and fast secure two-party deep neural network inference," USENIX Security 2022.
### Response to limitation 3, minor corrections:
Thanks for the suggestion. Existing works such as SNL, DELPHI, and SAFENet focus on latency reduction of online inference with MPC-only frameworks, and those optimizations cannot directly benefit the CKKS-based HE offline inference framework. We will clarify this further in the final version.
### Response to question 1:
The results of our experiment indicate that the last layer's nonlinearity is more prone to be linearized, whereas the 4th layer serves as the most vital nonlinear layer. This observation is consistent with works in CNN, as evidenced in studies such as SNL, DeepReDuce, and SENet. Within these models, the importance trend of nonlinearity layers appears to follow a pattern that fluctuates from low to high and then reverts to low across the network layers.
We have further conducted an evaluation on a new graph dataset (i.e., Flickr). Please refer to the **Global Response (i)** for more details. We will incorporate these discussions in the final version.
### Response to question 2:
Please refer to the **Response to limitation 1** and **Global Response (ii)**.
### Response to question 3:
The accuracy mentioned in our paper refers to the floating-point accuracy (in plaintext). We use a 2^33 scaling factor to transform the input into fixed-point form for the encoding & encryption settings, which is the same as SOTA, e.g., CryptoGCN. As mentioned in CryptoGCN, the large scaling factor and bit-width margin ensure that ciphertext inference has the same accuracy as plaintext inference. We will include further discussion about plaintext/ciphertext encoding and accuracy in the final version.
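To illustrate why a large scaling factor preserves accuracy, here is a minimal fixed-point encoding sketch (our own illustration, not the actual CKKS encoder used in the paper): with a 2^33 scale, the round-trip quantization error is bounded by 2^-34, far below anything that could change a model prediction.

```python
# Hedged sketch of fixed-point encoding with a large scaling factor, as
# described in the rebuttal (2^33, following CryptoGCN). Not CKKS itself.
SCALE = 2 ** 33

def encode(x: float) -> int:
    """Map a float to its fixed-point integer representation."""
    return round(x * SCALE)

def decode(v: int) -> float:
    """Recover the (approximate) float from the fixed-point encoding."""
    return v / SCALE

x = 0.123456789
err = abs(decode(encode(x)) - x)
# Rounding changes the integer by at most 0.5, so the decoded error is
# at most 0.5 / 2^33 = 2^-34.
assert err <= 2 ** -34
```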
---
Rebuttal Comment 1.1:
Title: Rebuttal's response
Comment:
Thank you for the rebuttal, and I acknowledge the Author's effort in producing results on the new dataset. I'm convinced with the ablation study presented in Figure 6c and Figure 6d, and it is evident that the accuracy benefits (over CryptoGCN) of the proposed method mainly stem from the (relatively) loss-less reduction to multiplicative depth.
I have *only one question* about the study presented in Figure 5 (an extension to the previously asked Q1): What is the Authors' intuition behind Layer 4's criticality being the highest? Figure 5 is a **good observation**; however, the Authors should have presented the **insights** behind it. In general, the semantic information presented in a layer of a network increases from the initial layer to the deeper layers, and the redundancy in features is higher in the deeper layers.
---
Reply to Comment 1.1.1:
Title: Insights and Intuition Behind Figure 5: Importance of Nonlinear Layers
Comment: We appreciate your thoughtful comments and your recognition of our efforts.
The model in Figure 5 is STGCN (STGCN-3-256), with 3 STGCN layers and 6 nonlinear layers. Nonlinear layers 1-2 reside in the first STGCN layer, layers 3-4 in the second, and layers 5-6 in the third (last) STGCN layer.
The STGCN layer employs the GCNConv operator to capture the relationships and information among node neighbor features [1]. Unlike CNNs, which may benefit from deeper layers in most cases, GCNs exhibit a limit on how many layers can be stacked [1]. Too many stacked GCN layers may cause node representations to become too similar; this phenomenon is called **over-smoothing** [1]. In overly deep GCNs, as each layer aggregates information from neighboring nodes, distant nodes start to influence each other, potentially resulting in a loss of model expressiveness and diminished performance [1]. The over-smoothing problem may be exacerbated if nonlinear layers are aggressively linearized. Intuitively, it is therefore preferable to preserve nonlinearity after every few linearized GCN layers to prevent the over-smoothing effect from worsening.
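The over-smoothing effect of stacked linear aggregations can be demonstrated with a tiny numerical sketch (our own toy example with numpy, not code from the paper): repeatedly averaging each node's features with its neighbors, with no nonlinearity in between, collapses all node representations toward the same vector.

```python
import numpy as np

# Toy illustration of over-smoothing on a 4-node path graph: purely linear
# neighbor averaging makes node features nearly identical.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)   # adjacency with self-loops
P = A / A.sum(axis=1, keepdims=True)        # row-normalized propagation

H = np.random.default_rng(0).normal(size=(4, 8))  # random node features
spread_before = H.std(axis=0).mean()        # feature spread across nodes

for _ in range(50):                         # 50 linear aggregation steps
    H = P @ H                               # no nonlinearity in between
spread_after = H.std(axis=0).mean()

# The spread across nodes collapses by orders of magnitude: all node
# representations have become nearly the same vector.
assert spread_after < 1e-3 * spread_before
```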
In Figure 5, the 4th nonlinear layer is situated at the second (middle) STGCN layer of STGCN-3-256, which may be crucial for mitigating the over-smoothing problem associated with prior deep linearized STGCN layers. During the automatic structural linearization process, the gradient propagation is orchestrated to strike a **balance between two goals**: (i) preserving the feature structural information at the deeper layers, similar to techniques used in CNN based model (SNL[2], DeepReduce[3], and SENet[4]), and (ii) mitigating the **over-smoothing** effect resulting from the deep linearized GCN layers. Consequently, the middle layer's (nonlinear layer 4 in Figure 5) nonlinearity emerges as the most vital component within the STGCN architecture. This phenomenon is also observable in the STGCN-6-256 model nonlinear reduction result, where we observe that the 5th and 6th nonlinear layers (reside within the 3rd STGCN layer) are most important for the network's nonlinearity.
We will incorporate this discussion and corresponding references into the final version.
[1] Simple and Deep Graph Convolutional Networks, ICML, 2020
[2] Selective Network Linearization for Efficient Private Inference, ICML, 2022
[3] DeepReduce: ReLU Reduction for Fast Private Inference, ICML, 2021
[4] Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference, ICLR, 2023 | Rebuttal 1:
Rebuttal: ## Global Response:
We truly appreciate your valuable and constructive comments. We have made a substantial effort to address your concerns and enrich our experiments during the rebuttal phase. Below are the responses to two common concerns:
### (i). New dataset evaluation:
Without loss of generality, we extended our evaluation to the Flickr dataset, a representative node classification dataset widely used in GNN tasks. It consists of 89,250 nodes, 989,006 edges, and 500 feature dimensions. The dataset's task involves categorizing images based on their descriptions and common online properties.
For the security setting, we assume that node features are user-owned and the graph adjacency list is public. The Flickr dataset has a larger adjacency list but a smaller feature dimension compared to the NTU-XVIEW dataset. We utilize three GCN layers with 256 hidden dimensions. Each GCN layer has a structure similar to the STGCN layers used in our paper. We conduct full-batch GCN training to obtain ReLU-based baseline model accuracies of 0.5492/0.5521 on the validation/test sets.
We obtain the accuracy/latency tradeoff detailed in the following table.
| Num. of nonlinear layers in GCN layers | Accuracy (val/test) | Latency (s) |
|:---------------------------------------:|:----------------------:|:-----------:|
| 6 | 0.5281/0.5275 | 4290.93252 |
| 2 | 0.5247/0.5266 | 2740.93779 |
| 1 | 0.5269/0.5283 | 2525.79771 |
We observe that the proposed LinGCN framework substantially diminishes the number of effective nonlinear layers and thus reduces the multiplication depth, with comparable accuracy. This leads to an expedited private inference within the GCN model (1.7 times speedup). The experiments on the Flickr dataset substantiate the generalization capability of the LinGCN framework, reinforcing the robustness of our proposed methods.
### (ii) Contributions and ablation study:
The superior performance of the LinGCN framework over CryptoGCN is not solely attributable to KD. As Figure 6(c)(d) (page 9) in the ablation studies reveal, a two-level KD process contributes to an approximate 1% improvement in accuracy. The LinGCN framework achieves both enhanced accuracy and reduced multiplication depth through a multifaceted approach that includes: (i) **Differentiable structural linearization**. (ii) Node-wise polynomial activation functions, coupled with distillation. (iii) Nonlinear reduction through polynomial sequence exploration. (iv) Extended Homomorphic Encryption (HE) solutions to enable finer-grained operator fusion.
Collectively, these contributions render the LinGCN's performance markedly superior to CryptoGCN. Figure 6 in the Ablation study further elucidates the effects of these factors on the results, showcasing the (a) replacement sequence, (b) node-wise or layer-wise nonlinear reduction, (c) logit-based distillation, and (d) feature-based distillation. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Evolving Standardization for Continual Domain Generalization over Temporal Drift | Accept (poster) | Summary: Mitigating the temporal distribution shift problem is meaningful in realistic. This paper introduces the problem of continual domain generalization over temporal drift (CDGTD), aiming to address the issue of potential temporal distribution shifts in unseen test domains. The authors propose an Evolving Standardization method that utilizes a multi-scale attention module to forecast the distribution of the unseen test domain. They then mitigate the distribution shift by standardizing features using the generated statistics of the test domain. To the best of my knowledge, the method is novel and shows promising results in the experiments section.
Strengths: 1. The paper exhibits a commendable structure and offers clear explanations throughout.
2. The authors conduct ablation studies to examine the individual components of their proposed method.
3. To the best of my knowledge, the method presented in this paper is novel.
4. Experiments results confirm the proposed method works well.
Weaknesses: 1. The authors only selected a subset of datasets from the Wilds-Time benchmark for their experiments, it would be valuable to include results on MIMIC-Readmission and MIMIC-Mortality datasets as well, considering wilds-time is the most relevant benchmark for this problem.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Figure 3 (b) shows that the method is not sensitive to hyperparameters \alpha and \lambda on yearbook datasets which is pretty strange to me. Do the authors have some intuitions about that? If I am a user using your method, Could the authors provide suggestions or considerations for selecting appropriate values for these hyperparameters? In particular, I am curious about the rationale behind the specific choices of {0.1, 0.5, 1.0, 1.5, 2.0} for the ablation experiment.
2. What is the difference between your experimental settings and wilds-time's eval-stream? Why not directly follow wilds-time for the experiments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: NIL
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Sincerely thanks for your efforts in reviewing the paper. Below, we respond to your questions in detail.
> **Q1:** The used datasets are only a subset of datasets from the Wilds-Time benchmark. It would be valuable to include MIMIC-Readmission and MIMIC-Mortality datasets.
**A1:** Thanks for your suggestion. Actually, we use almost all the datasets provided in the Wild-Time benchmark, except for the MIMIC-Readmission and MIMIC-Mortality datasets. This omission is not deliberate: these are restricted-access resources, and we had trouble acquiring credentialed access to the two medical datasets. Since Yearbook, fMoW, Huffpost and Arxiv are all from the Wild-Time benchmark and our method has achieved comparable and even superior performance on the four datasets, we think these results are sufficient to validate the effectiveness of our method. Yet, of course, more results are always more convincing, and we will leave a successful application for credentialed access and the results on the two medical datasets as future work.
As compensation for the missing results on these two datasets, we additionally run our method on some other datasets (2-Moons, Online News Popularity (ONP) and Electrical Demand (Elec2)) that are used by previous temporal DG methods GI [2] and DRAIN [3]. These datasets also exhibit distribution shifts over time. Please refer to the global reply **R1** for the detailed dataset description. **Table 4** in the PDF file gives the results, where the misclassification errors of the baselines are reported from DRAIN [3]. We see that our method EvoS still outperforms the most recent method DRAIN.
> **Q2:** Explanation for the insignificant hyperparameter sensitivity and suggestion for hyperparameter values.
**A2:** Thanks. The hyperparameters $\alpha$ and $\lambda$ control sampling truncation range and the tradeoff of adversarial loss $\mathcal{L}\_{adv}$, respectively. For the normal distribution, values within one standard deviation from the mean make up 68.27%, two deviations account for 95.45%, and three deviations cover 99.73%. Since $\alpha=3$ is akin to no truncation (Variant G in the ablation study of the paper), we limit $\alpha$ to 2 in the sensitivity experiment, using an interval of 0.5 for other values. For $\lambda$, as $\mathcal{L}\_{adv}$ is of similar magnitude as cross-entropy loss $\mathcal{L}\_{ce}$, we vary $\lambda$ by 0.5 intervals in both directions around $\lambda=1.0$.
Actually, in the original Figure 3(b) of the paper, sensitive regions exist (e.g., $\lambda \in [1.0, 2.0]$). However, due to the narrow value range, sensitivity might not be evident. In **Figure 2(b)** of the PDF file, we extend hyperparameter ranges to $\alpha \in \\{0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0\\}$ and $\lambda \in \\{0.1, 0.5, 1.0, 1.5, 2.0, 5.0, 10.0, 15.0, 20.0\\}$. These new results exhibit similar conclusions that EvoS is more sensitive to $\lambda$ than $\alpha$, and large $\lambda$ values significantly worsen performance. Thus, in practice, we recommend selecting $\lambda$ from a range $\leq$1 through validation. For $\alpha$, its effect is modest; users can directly use $\alpha=1$, which balances diversity and representativeness, as about 68.27% of the data lie within one standard deviation from the mean in a normal distribution.
> **Q3:** Difference between our experimental setting and wilds-time's eval-stream.
**A3:** Thanks. Wild-Time [1] provides two evaluation strategies: Eval-Fix and Eval-Stream, and our experimental setting uses the Eval-Fix. Eval-Fix denotes that the model is evaluated on a single, fixed train-test time split. This evaluation strategy offers a quick evaluation protocol and is the primary evaluation strategy in Wild-Time. Thus, we adopt the Eval-Fix strategy in our experiments.
Specifically, given a sequence of domains with total length $\mathcal{T}$ and the split timestamp $T\_s$, the average performance $Avg\_{fix}$ (i.e., "OOD avg." in our paper) and worst performance $Worst\_{fix}$ (i.e., "OOD worst" in our paper) under the strategy of Eval-Fix are defined as
$Avg\_{fix} = \frac{1}{K} \sum_{i=T\_s+1}^{T\_s+K} {Acc(\mathcal{D}^i)}, Worst\_{fix} = \min\_{i\in \\{T\_s+1, \cdots, T\_s+K\\}} {Acc(\mathcal{D}^i)},$
where $K=\mathcal{T} - T\_s$ is the number of future domains to be evaluated, and $Acc(\mathcal{D}^i)$ is the accuracy on domain $\mathcal{D}^i$.
As for Eval-Stream, it denotes the evaluation with data stream, i.e., the model is evaluated at each timestamp using the average and worst performance on the next $K$ timestamps. Concretely, given a sequence of domains with total length $\mathcal{T}$, the average performance $Avg\_{stream}$ and worst performance $Worst\_{stream}$ under the strategy of Eval-Stream are defined as
$Avg\_{stream} = \frac{1}{\mathcal{T}-K} \sum_{i=1}^{\mathcal{T}-K} \frac{1}{K} \sum_{j=i+1}^{i+K} Acc_{i}(\mathcal{D^j}), Worst\_{stream} = \frac{1}{\mathcal{T}-K} \sum_{i=1}^{\mathcal{T}-K} \min_{j\in \\{i+1, \cdots, i+K\\}} Acc_{i}(\mathcal{D^j}),$
where $Acc_{i}(\mathcal{D^j})$ is the accuracy on domain $\mathcal{D^j}$ when using the model at the $i$-th timestamp. Hence, the Eval-Fix that we adopt can be viewed as a single timestamp evaluation within Eval-Stream, where the model is only evaluated at $T\_s$ and $K=\mathcal{T}-T\_s$.
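The two strategies can be sketched in a few lines of code (a hedged illustration: `acc` is a hypothetical accuracy table where `acc[i][j]` stands for $Acc_i(\mathcal{D}^j)$, and all values below are made up, not results from the paper).

```python
# Eval-Fix: the model at split timestamp T_s is evaluated once on all
# future domains T_s+1..T (domains are 1-indexed, as in the formulas).
def eval_fix(acc, T_s, T):
    scores = [acc[T_s][j] for j in range(T_s + 1, T + 1)]
    return sum(scores) / len(scores), min(scores)

# Eval-Stream: at each timestamp i, evaluate the current model on the next
# K domains, then average the per-timestamp average/worst scores.
def eval_stream(acc, K, T):
    avgs, worsts = [], []
    for i in range(1, T - K + 1):
        scores = [acc[i][j] for j in range(i + 1, i + K + 1)]
        avgs.append(sum(scores) / K)
        worsts.append(min(scores))
    n = T - K
    return sum(avgs) / n, sum(worsts) / n

# Illustrative table: acc[i][j] = accuracy on domain j of the model at i.
acc = {1: {2: 0.8, 3: 0.6}, 2: {3: 0.7, 4: 0.9}, 3: {4: 0.5}}
avg_fix, worst_fix = eval_fix(acc, T_s=2, T=4)       # Eval-Fix at T_s = 2
avg_stream, worst_stream = eval_stream(acc, K=2, T=4)
assert (round(avg_fix, 9), worst_fix) == (0.8, 0.7)
```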
To further verify the effectiveness of our method, in **Table 3** of the PDF file, we additionally provide the results when using the Eval-Stream evaluation strategy. Due to time limitations, we only provide results on the Yearbook and Huffpost datasets. According to the results, we see that EvoS still outperforms other baselines, showing that our method is better at handling the problem of continual domain generalization over temporal drift.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifying comments. I will keep my score. I suggest include MIMIC-Readmission and MIMIC-Mortality datasets in your future version to ensure a comprehensive evaluations. | Summary: This paper introduces a problem formulation for Continual Domain Generalization over Temporal Drift (CDGTD) and proposes the Evolving Standardization (EvoS) method to address the challenge of gradually shifting data distributions over time, aiming to generalize to unseen domains that are not too far into the future. The EvoS method characterizes the evolving pattern of feature distribution and mitigates distribution shift by standardizing features with generated statistics of the corresponding domain. It utilizes a multi-scale attention module (MSAM) to learn the evolving pattern under sliding time windows of varying lengths. MSAM can also generate statistics of the current domain based on the previous domains' statistics and the learned evolving pattern. The paper demonstrates the efficacy of EvoS through experiments on multiple real-world datasets, including images and texts.
Strengths: 1. The paper aims to addresses an important problem in machine learning, namely, Continual Domain Generalization over Temporal Drift. The proposed algorithm, EvoS, is a new approach which characterizes the evolving pattern of feature distribution and mitigates the distribution shift by standardizing features with generated statistics of the corresponding domain.
2. The paper highlights the necessity of learning evolving patterns to address temporal distribution shift.
3. The paper conducts experiments on multiple real-world datasets, including images and texts, to validate the efficacy of the proposed EvoS method.
Weaknesses: 1. The paper presents a setting where domains arrive sequentially, and models can only access the data of the current domain. However, the authors argue that we should learn from the current source domain to adjust models and generate target domains. This setting contradicts the idea of Domain Generalization, where data from multiple domains may be available simultaneously, or models can access historical data from previous domains.
2. However, the authors of the paper do not provide sufficient context or motivation for the research problem and fail to distinguish this setting from continuous learning and test-time adaptation.
3. The EvoS method introduced in this paper is notably more complex than other baselines. However, the authors do not provide a detailed analysis of the computational requirements of the proposed method.
4. EvoS outperforms other baselines in its tailor-made tasks, which makes the efficacy of EvoS unconvincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing the paper. Below, we address your concerns in detail.
> **Q1:** This setting contradicts the idea of Domain Generalization.
**A1:** Thanks. Firstly, we want to clarify that our CDGTD is a challenging and practical variant of conventional DG. Hence, its setting naturally differs.
Past DG methods focus on static, discrete scenarios with fixed domains and sudden shifts among them. They train a model on multiple domains simultaneously offline, striving for ideal but hardly achievable generalization on any unseen domain. In the real world, new data continuously arises over time. A realistic way is to leverage the fresh domain along with old domains to enhance model generalization. One option is to store past domains and re-train the model using standard DG methods. Yet this is inefficient and does not allow reuse of the previous model. Besides, retaining all historical domains requires substantial memory, especially in scenarios with rapid data accumulation.
Considering these limitations, we draw inspiration from continual learning (CL) to consider DG in a dynamic and continuous configuration. In CDGTD, the training domain evolves dynamically over time following some patterns, and domains are assumed to arrive sequentially to mimic the realistic scenario where new training domains emerge, while access to only the current domain minimizes storage cost. Like CL, the model starts from the previous timestamp's state and incrementally trains on the current domain. This makes the previously trained model reusable and boosts training efficiency.
Overall, CDGTD aims at a more practical DG scenario that simultaneously considers the **dynamics of training domains**, **training efficiency** for frequently updating model for better generalization and **storage burden**.
> **Q2:** Motivation for CDGTD and differences with continuous learning (CL) and test-time adaptation (TTA).
**A2:** Thanks. Actually, in the introduction, we have explained why CDGTD is introduced. For your convenience, we summarize it below.
**Motivation of CDGTD.** In standard DG, models are trained offline on fixed source domains to achieve broad generalization on any unseen domain. However, such ideal generalization is tough to achieve. In the real world, new data continually emerges over time, offering a chance to enhance the generalization of a previously trained model. But standard DG methods struggle to update models efficiently with fresh data: their static domain setup and offline training mode require starting model training anew with both new and prior data, leading to low efficiency and high storage cost for historical data. Instead, a more practical way is to adopt the training paradigm of CL, but with the goal of generalizing to future unseen domains. This reduces storage cost and enhances training efficiency. Thus, we introduce CDGTD to efficiently and effectively address temporal DG.
**Differences with CL.** The biggest difference is the objective. CL aims to learn new tasks while retaining performance on old tasks, whereas CDGTD prioritizes generalization on future unseen domains. These different goals yield distinct challenges. CL faces catastrophic forgetting of task-specific knowledge, while CDGTD tackles modeling underlying evolutionary patterns of temporal domains, and how to utilize these patterns to mitigate distribution shifts in forthcoming domains.
**Differences with TTA.** One difference is the focused optimization phase. TTA optimizes the source-pretrained model during testing phase with test data, while CDGTD optimizes the model during training phase with sequential training domains. Another difference is the data. TTA's arriving test data usually comes from the same domain or distribution, while CDGTD's test domain evolves over time, showing distribution shifts. Moreover, TTA emphasizes in-time adaptation using test data, while CDGTD focuses on generalization without using test data.
> **Q3:** Analysis of the computational requirements of the method.
**A3:** Thanks. Please refer to the global reply **R2** for the detailed analysis of memory and time complexity.
> **Q4:** EvoS outperforms other baselines in its tailor-made tasks, which make the efficacy of EvoS unconvincing.
**A4:** Firstly, the challenging CDGTD setting remains mostly unexplored, so it is inevitable that we run related methods in our experimental setup for comparison. To ensure fairness, we have applied the same training and evaluation settings to all compared methods.
Secondly, as described in Section 4.1, our experimental tasks mainly come from the existing Wild-Time benchmark (NeurIPS'22) [1]. This benchmark offers multiple **real-world** datasets displaying distribution shifts over time, including the Yearbook, fMoW, Huffpost, and Arxiv datasets we use. By contrast, prior temporal DG methods GI [2] and DRAIN [3] use either synthetic (e.g., 2-Moons) or small (e.g., Shuttle) datasets. Thus, we opt to assess performance on more challenging and realistic datasets.
Thirdly, the evaluation strategy is not specially tailor-made by us. It is from the Eval-Fix strategy in Wild-Time [1]. Specifically, given a domain sequence of length $\mathcal T$ and a split timestamp $T\_s$, the model trains on $\mathcal{D}^1$ to $\mathcal{D}^{T_s}$ sequentially and evaluates on future domains $\mathcal{D}^{T\_s+1}$ to $\mathcal{D}^{\mathcal T}$. Under Eval-Fix, average performance $OOD_{avg}$ and worst performance $OOD_{worst}$ are defined as
$OOD_{avg} = \frac{1}{\mathcal T - T\_s} \sum_{i=T\_s+1}^{\mathcal T} {Acc(\mathcal{D}^i)}, OOD_{worst} = \min_{i\in \\{T_s+1, \cdots, \mathcal T\\}} {Acc(\mathcal{D}^i)}.$
Finally, to address your concerns about convincingness, we present the generalization results on the last domain for 2-Moons, ONP, and Elec2 datasets from DRAIN [3] in **Table 4** of the PDF file. Notably, all baselines' results are reported from DRAIN. The outcomes show EvoS's superior performance on these non-tailored tasks, affirming its effectiveness.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' responses. I appreciate the clarifications provided regarding the proposed settings in this paper. I find the setting to be meaningful. However, I still have concerns regarding the complexity of the model. Therefore, I will incrementally raise my rating. Nonetheless, I remain open to reassessing and further improving my rating if the other reviewers and ACs consider these concerns to be of lesser significance.
---
Reply to Comment 1.1.1:
Title: Further response to concerns about the complexity of the model
Comment: We sincerely thank you for recognizing our clarifications of the proposed setting. Regarding your concerns about the complexity of the model, we would like to provide a further clarification. In fact, our model is not complex to implement: it contains only three modules (the multi-scale attention module, the feature standardization module, and the adversarial learning module), along with a basic backbone network (e.g., DenseNet-121 for fMoW) for feature extraction.
* For the multi-scale attention module (MSAM), each $\mathcal{A}_w$ has a structure similar to a single multi-head self-attention (MHSA) layer in transformer models. The only special handling is the processing of input tokens, where a sliding time window of length $w$ is applied to the input tokens to aggregate information at scale $w$.
* The feature standardization module simply leverages the statistics (mean and variance) generated by MSAM to mitigate the distribution shift via a normalization operation.
* The adversarial learning module follows common practice in DA/DG works, but features are sampled from the domain distributions preserved in the memory pool $\mathcal{M}$ to address the issue of unavailable historical data.
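The standardization and memory-pool ideas can be sketched together in a few lines. This is our simplified reading of the modules, not the released code: features of each arriving domain are standardized with its statistics, and only a (mean, std) pair per domain is kept, which is what yields the small $\mathcal{O}(T \cdot d_f)$ memory cost discussed below.

```python
import numpy as np

# Simplified sketch (our assumption): per-domain feature standardization
# with a memory pool that stores only two d_f-dim vectors per domain.
d_f = 16
memory_pool = []                       # (mean, std) per domain, not raw data

rng = np.random.default_rng(0)
for t in range(5):                     # 5 sequentially arriving domains
    feats = rng.normal(loc=t, scale=1.0 + 0.1 * t, size=(64, d_f))
    mu, sigma = feats.mean(axis=0), feats.std(axis=0)
    memory_pool.append((mu, sigma))    # store statistics only
    standardized = (feats - mu) / (sigma + 1e-5)
    # standardized features are (approximately) zero-mean
    assert abs(standardized.mean()) < 1e-6

# Memory grows with the number of domains T, not with dataset size.
assert len(memory_pool) == 5
```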
Overall, each module has its own role and is not complicated to implement. In addition, we detailedly analyze from the model, time and memory complexities below.
**Model complexity.** Below, we provide the model complexity (measured by the number of parameters) of our method EvoS and other two temporal DG methods (LSSAE [2] and DRAIN [3]) on the three image classification datasets: Yearbook, RMNIST and fMoW.
-----------------Parameters (MB) of different methods----------------
| Method | Yearbook | RMNIST | fMoW |
| --- | --- | --- | --- |
| backbone | 4-layer CNN in [1] | ConvNet in [2]| DenseNet-121 |
| LSSAE [2] | 4.70 | 23.25 | 90.92
| DRAIN [3] | 7.51 | 184.29 | 1113.85 |
| EvoS | **1.94** | **15.81** | **56.11**|
LSSAE introduces a Sequential Autoencoder to explore the underlying continuous structure in the latent space of deep neural networks, where the complicated VAE and LSTM networks require many parameters. DRAIN [3] needs to encode and decode the entire network's parameters, which requires a great number of parameters for large backbone networks. By contrast, our EvoS is overall less complex than these temporal DG methods, and is friendlier to relatively large backbone networks.
**Time complexity.** In global reply **R2**, we provide the time complexity of our MSAM, which is approximately $\mathcal{O}(W \cdot (T^2 d_f + T \cdot d_f^2))$. $W$ is the number of multi-head attention modules in MSAM, $T$ is the number of training domains and $d_f$ is the dimension of deep features. In implementation, $W$ is set to a relatively small value ($W=3$) in our paper. Meanwhile, for conventional transformers, the time complexity of a single MHSA layer is $\mathcal{O}(n^2 d + n \cdot d^2)$, where $n$ is the number of input tokens and $d$ is the feature dimension of input tokens. We can see that the time complexity of our MSAM is equivalent to multiplying the time complexity of a single MHSA layer in conventional transformers by a small value of $W$. In other words, it can be seen as $W$ ($W=3$ in our paper) layers of MHSA, which is a quite tiny module.
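As a toy illustration of the sliding-window token handling described above (our assumption about the mechanism, not the released implementation), the multi-scale aggregation over $T$ domain tokens can be sketched as:

```python
import numpy as np

# Hypothetical sketch: for each window length w, the T domain tokens are
# aggregated under a sliding time window before a standard self-attention
# layer; with W scales, the cost is roughly W times one MHSA layer.
def sliding_window_tokens(tokens, w):
    """Average T domain tokens under a sliding window of length w."""
    T = len(tokens)
    return np.stack([tokens[max(0, i - w + 1): i + 1].mean(axis=0)
                     for i in range(T)])

T, d_f = 10, 8
tokens = np.arange(T * d_f, dtype=float).reshape(T, d_f)
multi_scale = [sliding_window_tokens(tokens, w) for w in (1, 2, 3)]  # W = 3
# scale w = 1 leaves tokens unchanged; larger w smooths over more domains
assert np.allclose(multi_scale[0], tokens)
```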
**Memory complexity.** In global reply **R2**, we give the memory complexity of our memory pool $\mathcal{M}$, which is $\mathcal{O}(T\cdot d_f)$. $T$ is the number of training domains and $d_f$ is the feature dimension. In practice, $d_f$ is usually much smaller than the dimensions of original inputs. For example, the dimensions of an image in the fMoW dataset are $224\times 224 \times 3 = 150,528$, while the dimension of pooled features in DenseNet-121 is $1,024$, about $0.007$ times of the former. Besides, only two vectors need to be stored per domain. Hence, the memory cost of $\mathcal{M}$ is relatively small. The following table lists the memory cost of $\mathcal{M}$ and the increment of GPU memory consumption for our EvoS on different datasets with batch size 64.
----------Table 1: Memory cost (MB) of the memory pool $\mathcal{M}$ in EvoS ----------
| Method | Yearbook | RMNIST | fMoW |
| --- | --- | --- | --- |
| EvoS | 5.25 | 9.00 | 32.00 |
----------Table 2: GPU memory consumption (GB) of different methods----------
| Method | Yearbook | RMNIST | fMoW |
| --- | --- | --- | --- |
| backbone | 4-layer CNN in [1] | ConvNet in [2]| DenseNet-121 |
| IncFinetune | 1.72 | 1.84 | 10.69 |
| EvoS | 1.94 | 2.09 | 11.04 |
| Increment $\Delta$ (GB)| 0.22 | 0.25 | 0.35 |
From the above three aspects of complexity, we think our method has an acceptable level of complexity.
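As a quick sanity check, the arithmetic behind these numbers can be reproduced in a few lines of Python (the 4-byte float assumption for the stored statistics is ours and only illustrative):

```python
# Dimensions quoted above: an fMoW image vs. DenseNet-121 pooled features.
image_dim = 224 * 224 * 3          # raw input dimensionality
feature_dim = 1024                 # pooled feature dimension d_f
print(image_dim)                   # 150528
print(round(feature_dim / image_dim, 3))  # 0.007

# The memory pool stores two d_f-dimensional vectors (mean, std) per domain,
# so its footprint grows as O(T * d_f). Assuming 4-byte floats:
def pool_bytes(num_domains, d_f=1024, bytes_per_float=4):
    return 2 * num_domains * d_f * bytes_per_float

print(pool_bytes(10) / 1024)       # 80.0 KB for 10 domains
```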
Refs:
[1] Wild-time: A benchmark of in-the-wild distribution shift over time. In NeurIPS, 2022.
[2] Generalizing to evolving domains with latent structure-aware sequential autoencoder. In ICML, 2022.
[3] Temporal domain generalization with drift-aware dynamic neural networks. In ICLR, 2023. | Summary: This paper introduces the problem of continual domain Generalization over Temporal Drift (CDGTD), where the domain distribution gradually changes over time and the model needs to generalize to new domains in the near future with training domains sequentially arriving. And this paper also proposes an Evolving Standardization (EvoS) approach to predict and adapt the data distribution of future domains by utilizing a multi-scale attention module to generate the mean and standard deviation of features in current domain and conducting feature standardization based on generated statistics to mitigate the domain shift. Extensive experiments on both image recognition and text classification are done to validate the effectiveness of the proposed method.
Strengths: 1. The introduction of the problem of continual domain generalization over temporal drift (CDGTD) is innovative and meaningful. Unlike existing temporal domain generalization approaches, CDGTD incorporates incremental scenarios that mimic real-world situations where training domains arrive sequentially. This assumption adds an extra challenge to generalization, as the model needs to mitigate catastrophic forgetting during the continual learning process.
2. The proposed method EvoS is novel to me. Especially, the inclusion of multi-scale attention module to generate evolved feature statistics is intriguing, where sliding time windows of different lengths are cleverly used to integrate multiscale information for better characterization of evolving patterns.
3. Overall, the quality of the paper is commendable. The authors have conducted a thorough ablation study to examine the impact of various components, and they have also made the code and datasets available in the supplementary material.
4. The paper is mostly well-organized and well-written.
5. The experimental results on image recognition and text classification datasets in the CDGTD setting demonstrate state-of-the-art generalization performance across a diverse range of tasks.
Weaknesses: 1. I am wondering whether the number of heads (n_h) and the feature dimension of heads (d_h) are the same in each attention module A_w, since the authors only briefly describe the values of n_h and d_h in MSAM. Yet, there are multiple attention modules in MSAM and the input dimension of each attention module is different. More details are necessary to make this clearer.
2. In Eq. (12) and (13), it seems that the random sampling process is only performed once at the beginning of each training phase. Wouldn't this reduce the diversity and representativeness of the proxy samples for historical domains?
3. As the number of historical domains increases, will the adversarial training suffer from the imbalance problem? What if we randomly select one historical domain in each iteration to participate in the adversarial training?
4. The specific training way of “Offline” and “IncFinetune” in Table 1 and 2 should be provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Minor issue, typos:
Line 299: Finally, the results of variant H and EvoS compares ...
Line 309 and 319: MASM -> MSAM
Line 321: \beta and \lambda controls ...
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations of the proposed approach have been discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your efforts in reviewing the paper as well as your constructive comments. Below, we do our utmost to address your concerns.
> **Q1:** I am wondering whether the number of heads ($n\_h$) and the feature dimension of heads ($d\_h$) are the same in each attention module $\mathcal{A}\_w$, since the authors just simply describe the value of $n\_h$ and $d\_h$ in MSAM. Yet, there are multiple attention modules in MSAM and the input dimension of each attention module is different. More details are necessary to make it clearer.
**A1:** Thanks for pointing this out. Actually, in the practical implementation, we use the same feature dimension of heads $d\_h$ for each attention module $\mathcal{A}\_w$ in MSAM, $w=1,2,\cdots, W$, while the number of heads in attention module $\mathcal{A}\_w$ is set to $w\cdot n\_h$ to accommodate the different input dimension of each attention module. The specific values of $d\_h$ and $n\_h$ are provided in Section D.2 of the appendix ($d\_h=8$ for all datasets, and $n\_h=16$ for Yearbook, $n\_h=32$ for RMNIST, $n\_h=64$ for fMoW, $n\_h=128$ for Huffpost and Arxiv). In the revision, we will make these implementation details clearer.
> **Q2:** In Eq. (12) and (13), it seems that the random sampling process is only performed once at the beginning of each training phase. Wouldn't this reduce the diversity and representativeness of the proxy samples for historical domains?
**A2:** Thanks. In fact, the random sampling process is performed in each iteration to provide diverse and plentiful features for representing historical domains. Concretely, in each iteration, we randomly sample a batch of features from every preserved domain distribution in the memory pool $\mathcal{M}$, and use them for the adversarial training to mitigate the overfitting of the model to the current domain and enhance the generalizability of the feature encoder. This point will be clarified in the revision.
> **Q3:** As the number of historical domains increases, will the adversarial training suffer from the imbalance problem? What if we randomly select one historical domain in each iteration to participate in the adversarial training?
**A3:** Thanks for your comment. Although the total number of sampled features is greater than the number of features from the current domain, the averaging operation in computing the adversarial loss balances the contribution of each feature to the adversarial training, which avoids the quantity imbalance problem. As for the results when randomly selecting one historical domain distribution in each iteration to participate in the adversarial training, we provide them in **Table 1** of the uploaded PDF file, where one distribution is randomly selected from the memory pool $\mathcal{M}$ in each iteration to sample a batch of $B$ features for calculating the adversarial loss $\mathcal{L}_{adv}$. From the results, we see that randomly selecting one historical domain distribution performs worse. This may be because the feature space to be aligned changes frequently under this scheme, making the optimization challenging. By contrast, it is more stable to simultaneously leverage all the preserved domain distributions in the memory pool $\mathcal{M}$ in each iteration for the adversarial training.
> **Q4:** The specific training way of “Offline” and “IncFinetune” in Table 1 and 2 should be provided.
**A4:** Thanks for your advice. "Offline" denotes merging all training domains into one domain and training the model on the merged domain with the cross-entropy loss for classification tasks. "IncFinetune" represents incrementally fine-tuning the model, i.e., the model is sequentially trained on each training domain. In other words, at each timestamp, the model is initialized to its state at the previous timestamp and then fine-tuned on the domain at the current timestamp. We will add a detailed description of the training of the two baselines in the revised paper.
> **Q5:** Minor issue, typos: Line 299: Finally, the results of variant H and EvoS compares ... Line 309 and 319: MASM -> MSAM Line 321: \beta and \lambda controls ...
**A5:** Thanks. We have proofread the paper carefully and revised the paper thoroughly to correct the typos.
---
Rebuttal 2:
Comment: The rebuttal has addressed my concerns. I believe the proposed setting is inspiring for real-world dynamic environment. As a result, I would like to raise my score to defend my recommendation. | Summary: This paper considers domain generalization over temporal-drift data where the model is trained online and required to generalize to the unseen future domain. The proposed method, Evolving Standardization (EvoS), assumes each domain follows a Gaussian distribution and utilizes transformers to capture the temporal drift by predicting the Gaussian mean and variance of the current domain given historical domains’ means and variances. Specifically, a multi-scale attention module (MSAM) is proposed, which utilizes a sliding window to aggregate domain statistics within a given time scope and makes the transformer predict based on the aggregated domains. The Gaussian statistics generator (namely, feature encoder) is shared over domains and trained in an adversarial way to achieve domain-invariant property. The proposed method is tested on several real-world temporal DG datasets and achieved good performance.
Strengths: 1. The proposed problem, which seems like a combination of evolving/temporal domain generalization and online continual learning, is interesting and novel.
2. The proposed method is shown to achieve optimal performance over various existing methods.
3. The paper is in general clearly written and easy to follow.
Weaknesses: 1. The technical contribution is incremental. The single-scale attention is a review of transformer models and domain-invariant adversarial loss is a common tool in either DA/DG works. The key idea of the multi-scale attention module, which is a sliding window-based aggregation of multiple domains’ statistics, is relatively straightforward. Probably some theoretical analyses such as the generalization error bound of the proposed method on the future domain can strengthen the overall technical contribution.
2. The usage of the memory pool is similar to episodic memory in continual learning replay-based methods, and I am wondering about the memory complexity/cost, especially considering the case where the number of temporal domains is large. Also, in online continua learning episodic memory is typically assumed fixed and small to guarantee practical interests, and I am curious if a similar assumption is made in this work.
3. The baselines from the continual learning domain used in this paper are not SOTA (the latest one is A-GEM from 2019), and I encourage the author to compare them with more recent SOTA CL methods. In addition, continual domain adaptation is also a closely related area to this paper, and some methods such as CIDA [1] and CDOT [2] are also interesting to explore.
4. I would encourage the author to consider visualizing the trained model in a gradual manner, such as the visualization of the decision boundary on the Rotated 2-Moons dataset in papers of e.g., CIDA, GI, and DRAIN. Such visualization can serve as qualitative analysis and better demonstrate if the method can truly capture the underlying temporal drift of data.
[1] Wang, Hao, Hao He, and Dina Katabi. "Continuously indexed domain adaptation." Proceedings of the 37th International Conference on Machine Learning. 2020.
[2] Jimenez, G. O., Gheche, M. E., Simou, E., Maretic, H. P., and Frossard, P. Cdot: Continuous domain adaptation using optimal transport. arXiv preprint arXiv:1909.11448, 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to my “Weaknesses” section for the questions.
My final score will largely depend on the rebuttal and discussion with other reviewers. I am willing to increase my score if my questions are adequately addressed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As far as I have checked, I did not find potential negative societal impacts in this work. I have listed my concerns in the "Weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your efforts in reviewing the paper. Below, we respond to your questions in detail.
> **Q1:** The technical contribution is incremental.
**A1:** Thanks. Firstly, although transformers can model sequential relationships, they require the inputs themselves to be sequential, like sentence words in NLP or image patches in ViT. In our case, the sequentiality spans domains rather than lying within a domain, so directly applying a transformer to the current domain's data fails to capture the evolving temporal patterns. Our efficient and novel solution is to model each domain's distribution using deep feature statistics (mean, variance) and then use the statistics of seen domains as transformer inputs to generate the next domain's statistics. This captures the evolving pattern while avoiding a heavy storage burden for historical data.
Secondly, we further design the simple yet effective Multi-Scale Attention Module (MSAM) to accommodate and integrate the evolving pattern over different lengths of observation intervals, which has not been considered by previous temporal DG methods GI [2] and DRAIN [3]. Though the implementation of MSAM is relatively straightforward, the results of variant H *vs.* EvoS in the ablation study of the paper show that MSAM is indeed beneficial.
Thirdly, while the domain-invariant adversarial loss is prevalent in DA/DG works, its application is challenging in our context due to inaccessible historical domains. We address this by modeling domain distributions at the deep feature level and utilizing truncated-sampled features as proxies of historical domains. Moreover, we further alleviate the distribution shift by standardizing features via the predicted evolving statistics. This approach leverages the temporally evolving pattern, an aspect overlooked by prior DA/DG methods.
Lastly, beyond our method, another contribution of ours is the introduced problem of Continual Domain Generalization over Temporal Drift (CDGTD). This is a more challenging and practical variant of DG, considering the **dynamic nature of training domains**, the **efficiency of frequent model updates** with newly collected domain data, and the **storage burden** for historical data.
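To make the first point concrete, the statistics-as-tokens idea can be sketched in a few lines (an illustrative NumPy simplification with names of our own, not the paper's implementation):

```python
import numpy as np

def domain_stats_tokens(domain_features):
    """Turn a list of per-domain feature matrices of shape (N_t, d_f) into a
    token sequence of concatenated (mean, std) statistics, shape (T, 2*d_f).
    These statistics tokens, rather than raw samples, would feed the
    transformer, so no historical data needs to be stored."""
    tokens = [np.concatenate([X.mean(axis=0), X.std(axis=0)])
              for X in domain_features]
    return np.stack(tokens)

rng = np.random.default_rng(1)
T, d_f = 5, 32
# Domains of varying size with a slowly drifting mean, mimicking temporal shift:
domains = [rng.standard_normal((100 + 10 * t, d_f)) + 0.1 * t for t in range(T)]
tokens = domain_stats_tokens(domains)
print(tokens.shape)  # (5, 64)
```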
> **Q2:** The memory complexity/cost and memory pool size.
**A2:** Thanks for your comment. Please refer to the global reply **R2** for the detailed memory complexity/cost. As for the memory pool size, since the datasets used in our paper have a moderate number of domains and the memory cost is small, we do not restrict the size of the memory pool $\mathcal{M}$ in our experiments. Yet, as you note, a fixed memory pool size is practical when considering a lifelong process, i.e., $T \rightarrow \infty$. Thus, we additionally conducted an experiment where the memory pool $\mathcal{M}$ is implemented as a FIFO queue with different fixed sizes $L$. That is, only the statistics of up to the $L$ most recent historical domains can be stored. The results on the Yearbook dataset in **Figure 2(a)** of the PDF file demonstrate that our method generally performs well and that a relatively large memory pool size is better for temporal domain generalization. So in practice, we recommend making the memory pool size as large as possible under an affordable memory cost. In the revision, we will clarify this.
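The fixed-size FIFO variant maps naturally onto a bounded deque; a minimal sketch (class and method names are ours; plain Gaussian sampling is used here for simplicity, whereas the paper mentions truncated sampling):

```python
from collections import deque
import numpy as np

class FIFOMemoryPool:
    """Stores (mean, std) statistics for up to the L most recent domains."""
    def __init__(self, max_domains, d_f):
        self.pool = deque(maxlen=max_domains)  # oldest domain evicted first
        self.d_f = d_f

    def add_domain(self, features):
        # features: (N, d_f) array of deep features from the finished domain
        self.pool.append((features.mean(axis=0), features.std(axis=0)))

    def sample(self, batch_size, rng):
        # Draw proxy features from every preserved distribution each iteration
        return [mu + sigma * rng.standard_normal((batch_size, self.d_f))
                for mu, sigma in self.pool]

pool = FIFOMemoryPool(max_domains=3, d_f=8)
rng = np.random.default_rng(0)
for t in range(5):                      # five sequential training domains
    pool.add_domain(rng.standard_normal((64, 8)) + t)
print(len(pool.pool))                   # 3: only the L most recent domains kept
```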
> **Q3:** Results for more recent SOTA CL method and continual DA methods CIDA and CDOT.
**A3:** Thanks for your advice. We have added the results of CIDA, CDOT and a more advanced CL method SGP (AAAI'23) [4] on dataset Yearbook and RMNIST in **Table 2** of the PDF file. From the results, we can observe that the three methods obtain inferior performance on future domains.
For CIDA, it simultaneously takes the sample and time (i.e., [x, t]) as the input and aims to learn time-invariant feature representations via a two-party adversarial game. In this method, the temporal distribution shift is assumed to be characterized by the value of time, discarding other features' dependency on time and on other confounding factors. As a result, it fails to accurately capture the complex temporal drift.
For CDOT, it applies the regularized optimal transport to transport the most recent labeled samples to the future using a learned coupling from past data. Its performance depends on how closely the transported images resemble the true target image, showing limited ability in addressing the persistent temporal drift.
And for SGP [4], it combines orthogonal gradient projections with scaled gradient steps along significant gradient spaces from prior tasks to enhance new learning and minimize forgetting. Despite the same mode of sequential training, our goal differs from SGP. Our EvoS focuses on generalizing to upcoming unseen domains, whereas SGP aims to excel in both past and present tasks. Consequently, lacking specific temporal drift handling, SGP exhibits lower performance on future domains.
By contrast, our EvoS shows superior performance, owing to the better modeling of the evolutionary pattern in temporal domains via MSAM and the mitigation of the distribution shift by standardizing features with generated statistics.
> **Q4:** Visualization of the decision boundary on Rotated 2-Moons dataset.
**A4:** Good suggestion. **Figure 1** of the PDF file gives the result. The 2-Moons dataset in GI [2] is used, where each moon consists of 100 instances, and 10 domains (0 to 9) are obtained by sampling 200 data points from the 2-Moons distribution and rotating them counter-clockwise in units of $18^\circ$. In Figure 1, the model is sequentially trained until the $t$-th domain is finished, and then we visualize the decision boundary on current domain $\mathcal{D}^t$ and the next future domain $\mathcal{D}^{t+1}$, $t=5,6,7,8$. According to the result, the decision boundary generated by our method successfully adapts to future domains, showing that our method can capture the underlying temporal drift of data. In the revision, we will add this experiment.
---
Rebuttal 2:
Title: Looking forward to seeing your feedback!
Comment: Dear reviewer Nwuj,
We have discussed **the contribution and memory complexity** of our approach in detail. Additionally, we have provided fresh results encompassing **a recent state-of-the-art CL method**, as well as continual DA methods (**CIDA and CDOT**), along with **visualizations of evolving decision boundaries**. We hope our responses have addressed your questions and concerns. The rebuttal period is about to end. If you have any other questions or suggestions, please let us know and we will be more than happy to respond as soon as possible. Looking forward to your feedback. Thanks so much!
Best regards,
Authors
---
Rebuttal 3:
Comment: I appreciate the responses from the authors and my previous concerns about the complexity analyses and more empirical results have been addressed. I have increased my score accordingly. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their efforts in reviewing our paper and for the constructive suggestions. We are more than encouraged that reviewers find
* our proposed problem of Continual Domain Generalization over Temporal Drift (CDGTD) to be **interesting** and **novel** (*Reviewer Nwuj*), **innovative** and **meaningful** (*Reviewer P1pd*), **important** (*Reviewer uaS8*),
* our method to be **novel** and **intriguing** (*Reviewer P1pd*), **new** and with **efficacy** (*Reviewer uaS8*), **novel** and **promising** (*Reviewer YTjU*),
* our paper to be **clearly written** (*Reviewer Nwuj*) and with **commendable structure** (*Reviewer YTjU*) and **quality** (*Reviewer P1pd*),
* the ablation study to be **thorough** (*Reviewer P1pd*),
* and the performance to be **state-of-the-art** across a diverse range of tasks (*Reviewer P1pd*).
As for the concerns and suggestions of each reviewer, we have done our utmost to address them and responded to each of them in detail. And a one-page PDF file has been uploaded containing the relevant figures and tables mentioned in the responses. Please refer to the PDF file for detailed results, if needed. Below are some global replies (**R**) that may be used in the separate response to different reviewers:
> **R1: Introduction to the dataset used in the rebuttal.**
Apart from the datasets in our paper, we additionally ran our method on the datasets below with temporal shifts, which are used by previous temporal DG methods [2, 3]. Descriptions of these datasets are given below.
* **2-Moons** is a variant of the 2-entangled moons dataset by rotating data counter-clockwise in units of $18^\circ$ to construct 10 domains, where the rotation angle is used to simulate the temporal shift.
* **Online News Popularity (ONP)** summarizes a heterogeneous set of features about articles published by Mashable in a period of two years. The dataset is split into 6 domains by time and the goal is to predict the number of shares in social networks (popularity).
* **Electrical Demand (Elec2)** contains information about the demand of electricity in a particular province. Following [2, 3], the first 30 domains are used (two weeks as one time domain) and the task is to predict if the demand of electricity in each period (of 30 mins) was higher or lower than the average demand over the last day.
For these datasets, following [2, 3], we use the last domain for testing and the rest for training.
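As an illustration, the rotation-based construction of temporal domains (as in 2-Moons) can be sketched in pure NumPy, with a crude crescent generator of our own standing in for the actual dataset:

```python
import numpy as np

def rotate(points, degrees):
    """Rotate 2-D points counter-clockwise by the given angle."""
    theta = np.deg2rad(degrees)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T

rng = np.random.default_rng(0)
# A crude stand-in for the two entangled moons (200 points per domain):
t = rng.uniform(0, np.pi, 100)
moon0 = np.c_[np.cos(t), np.sin(t)]
moon1 = np.c_[1 - np.cos(t), 0.5 - np.sin(t)]
base = np.vstack([moon0, moon1]) + 0.1 * rng.standard_normal((200, 2))
labels = np.r_[np.zeros(100), np.ones(100)]

# 10 domains obtained by rotating counter-clockwise in units of 18 degrees,
# so the rotation angle simulates the temporal shift:
domains = [rotate(base, 18 * k) for k in range(10)]
print(len(domains), domains[0].shape)
```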
> **R2: The complexity analysis of our method.**
**Memory complexity** mainly comes from the memory pool $\mathcal{M}$. Assuming that there are $T$ historical domains and the feature dimension is $d_f$, the memory complexity of the memory pool is $\mathcal{O}(T\cdot d_f)$. In practice, the dimension of pooled features is usually much smaller than that of the original inputs after processing by the deep neural network. For example, the dimension of an image in the fMoW dataset is $224\times 224 \times 3 = 150,528$, while the dimension of pooled features in DenseNet-121 is $1,024$, about $0.007$ times the former. Besides, only two vectors need to be stored per domain. Hence, the memory cost of $\mathcal{M}$ is relatively small compared with sample replay-based CL methods. Concretely, the memory cost of $\mathcal{M}$ for the relatively large dataset fMoW is $32$ MB, and the GPU memory consumption of the whole method EvoS increases by $0.35$ GB over IncFinetune (10.69 GB for IncFinetune and 11.04 GB for EvoS).
**Time complexity** mainly comes from the multi-scale attention module (MSAM). Taking one of the attention modules $\mathcal{A}_w$ in MSAM as an example, we assume that $d_f$ is the feature dimension of the input tokens, $d_h$ and $n_h$ are the feature dimension and the number of heads, and $n_i$ is the number of input tokens in $\mathcal{A}_w$. Then the time complexity comprises:
* the transformation of input tokens to their query, key and value embeddings: $\mathcal{O}(n_i \cdot d_f \cdot d_h \cdot n_h)$,
* the calculation of attention weight matrix: $\mathcal{O}(n_i \cdot d_h \cdot n_i \cdot n_h)$,
* the multiplication of attention weight matrix and value matrix: $\mathcal{O}(n_i \cdot n_i \cdot d_h \cdot n_h)$,
* the conversion of the feature dimension of the attended value embeddings back to the input dimension: $\mathcal{O}(n_i \cdot d_h \cdot d_f \cdot n_h)$.
Thus, the time complexity is $\mathcal{O}(n_i \cdot d_f \cdot d_h \cdot n_h) + \mathcal{O}(n_i \cdot d_h \cdot n_i \cdot n_h) + \mathcal{O}(n_i \cdot n_i \cdot d_h \cdot n_h) + \mathcal{O}(n_i \cdot d_h \cdot d_f \cdot n_h)\approx\mathcal{O}((n_i^2 + n_i \cdot d_f) \cdot d_h \cdot n_h).$
Since $n_i$ is no larger than the number of training domains $T$, and $d_h \cdot n_h$ is usually set to $d_f$ in transformers, the time complexity of MSAM can be roughly approximated as $\mathcal{O}(W \cdot (T^2 d_f + T \cdot d_f^2))$, where $W$ is the number of multi-head attention modules in MSAM. In our implementation, $W$ is set to a relatively small value ($W=3$) and the time complexity is acceptable.
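The four itemized terms can be tallied numerically to confirm that they match the quoted aggregate form (term names are ours):

```python
def attention_flops(n_i, d_f, d_h, n_h):
    """Per-term operation counts for one attention module A_w, as itemized above
    (constant factors such as the 3 separate Q/K/V projections are dropped)."""
    qkv_proj   = n_i * d_f * d_h * n_h   # tokens -> query/key/value embeddings
    attn_mat   = n_i * n_i * d_h * n_h   # attention weight matrix
    attn_value = n_i * n_i * d_h * n_h   # weights x value matrix
    out_proj   = n_i * d_h * d_f * n_h   # back to the input dimension
    return qkv_proj + attn_mat + attn_value + out_proj

# With d_h * n_h = d_f (the usual transformer convention mentioned above),
# the total is proportional to (n_i^2 + n_i * d_f) * d_f:
n_i, d_f, d_h, n_h = 16, 128, 8, 16
total = attention_flops(n_i, d_f, d_h, n_h)
bound = 2 * (n_i**2 + n_i * d_f) * (d_h * n_h)
print(total == bound)  # True: the itemized terms match the aggregate form
```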
**Global references:**
[1] Wild-time: A benchmark of in-the-wild distribution shift over time. In NeurIPS, 2022.
[2] Training for the future: A simple gradient interpolation loss to generalize along time. In NeurIPS, 2021.
[3] Temporal domain generalization with drift-aware dynamic neural networks. In ICLR, 2023.
[4] Gobinda Saha, Kaushik Roy. Continual Learning with Scaled Gradient Projection. AAAI, 2023.
Pdf: /pdf/f8e8fd711a8eacbfb84dfff81c19f47193f4dca7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Cross-Moment Approach for Causal Effect Estimation | Accept (spotlight) | Summary: This work proposes a cross-moment approach to estimating the average causal effect with latent confounders in linear SCM. One proxy variable of the latent confounder can be observed. In contrast to prior research (e.g., difference-in-difference) that requires stringent assumptions, this work shows that the causal effect can be identified and estimated using cross moments between the treatment, the outcome, and the proxy variable. It also discusses when the effect with latent confounder cannot be identified. Experiments on both synthetic and real-world data show its effectiveness.
Strengths: 1. This work introduces simple arithmetic operations on the cross moments to estimate causal effects with latent confounders in linear SCMs. It sidesteps two conventional challenges in the field: solving an OICA problem, which may get stuck in bad local optima, and the biased estimation of DiD.
2. It is technically sound and the idea is clearly and concisely described.
3. It can have significance in the field and practical importance given the prevalence of the studied problem.
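For concreteness, the confounded linear SCM studied here can be simulated in a few lines; the coefficients are ours, and this sketch only illustrates why naive second-moment estimators are biased. It is not the paper's Cross-Moment algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Linear SCM with latent confounder U and proxy Z (illustrative coefficients):
beta = 2.0                                        # true causal effect of D on Y
U = rng.exponential(1.0, n)                       # non-Gaussian latent confounder
Z = 1.5 * U + rng.normal(0.0, 1.0, n)             # proxy: driven by U only
D = 0.8 * U + rng.normal(0.0, 1.0, n)             # treatment, confounded by U
Y = beta * D + 1.2 * U + rng.normal(0.0, 1.0, n)  # outcome

# Naive OLS of Y on D is biased by the back-door path through U:
naive = np.cov(D, Y)[0, 1] / np.var(D)

# A simple second-moment ratio using the proxy is biased too: it converges to
# beta + 1.2/0.8 = 3.5, which is why higher-order cross moments and
# non-Gaussianity are needed to identify beta itself.
ratio = np.cov(Y, Z)[0, 1] / np.cov(D, Z)[0, 1]
print(naive > beta and ratio > beta)  # both estimators exceed the true effect
```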
Weaknesses: 1. The major concern is the evaluation. The baselines included are quite weak and old, e.g., KP14 published in 2014. Why the OICA method [SGKZ20] has very poor performance is not clear. Other baselines may include proximal causal inference [1], for example. Also, for real-world data experiments, other baselines are not included. And the differences between the two methods when x is not included are not explained.
2. The limitations of the proposed approach are not discussed.
3. It is not clear how broadly applicable this method is.
[1] Mastouri, A., Zhu, Y., Gultchin, L., Korba, A., Silva, R., Kusner, M., ... & Muandet, K. (2021, July). Proximal causal learning with kernels: Two-stage estimation and moment restriction. In International Conference on Machine Learning (pp. 7512-7523). PMLR.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Why SGKZ20 is very poor and what makes KP14 and the proposed approach much better than it?
2. What are the limitations of the proposed approach?
3. What are the potential applications of the proposed method?
I acknowledge I read the authors' response and I keep my positive score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitations are discussed. Consider shortening Related Work and add limitations in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding comments in weaknesses**
1. [Concerns about evaluations] Regarding the method in [SGKZ20], as mentioned in lines 70-71, it is based on solving an over-complete independent component analysis (OICA) problem which, in practice, can get stuck in bad local minima and return wrong results. In our experiments, we observed that this is often the case: it could not find the true mixing matrix and thus has quite poor performance. We will add this explanation to the revised paper. The work mentioned by the reviewer requires at least two proxy variables $W,Z$, which may not be available in some settings. Moreover, such methods often need "Completeness" assumptions (as in [MGTT18,TYC+20]), which in the discrete setting are equivalent to the matrices $P(U|Z,d)$ and $P(W|U)$ being invertible. Nevertheless, we compared our algorithm with the recent work [TYC+20] in proximal causal inference requiring two proxy variables, and the results are given in the attached file. The settings of the experiments in Figure 2 and Figure 3 of the attached file are exactly the same as those of Figure 3(b) and Figure 5(a) in the submitted paper, respectively. As can be seen, the Cross-Moment algorithm outperforms both methods in [KP14] and [TYC+20] when two proxy variables are available.
2. [About limitations] Please refer to the global response.
3. [Applicability] As mentioned in the Introduction (lines 50-60), the causal graph we consider is applicable to settings of negative outcome control in the SCM framework as well as DiD in the potential outcome framework. Moreover, our experiments on a real dataset showcase our approach's effectiveness in the DiD setting.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions and doubts. The authors have agreed to include the explanation for the poor performance of the method in [SGKZ20] in the next version and have also presented additional results showing that their algorithm can outperform more recent models. I believe including both will improve the paper, and hence I change my score to "Weak Accept". However, it is still not clear to me why the compared methods differ between the synthetic data and the real-world data.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading the responses. We will add what was suggested by the reviewer to the revised paper. Regarding your question, please note that in the real dataset, we only have access to one proxy variable (the employment level before the rise in the minimum wage). The methods in [KP14] or [TYC+20] require at least two proxy variables. Moreover, it is unclear which of the covariates could serve as the proxy variable W in these works.
Strengths: Estimating the causal effect becomes challenging in the presence of unmeasured confounders.
The proposed sufficient identification condition of this paper is novel.
The analysis in this paper is presented in a logical manner.
This paper is clearly written.
Weaknesses: The model is restricted as a linear causal model.
The sufficient identification condition is applicable only to a single unmeasured confounder, and the proposed method may not be suitable for cases involving multiple latent variables.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Identification:
1. Is the causal effect \beta identifiable under the non-Gaussianity assumption? This is not clear to me. In my opinion, the reason why Assumptions 2 and 3 are introduced instead of a non-Gaussianity assumption is that one needs to use the cross-moments. Am I correct?
2. If Z directly affects D, the identification of the causal effect of D on Y may be affected. It would be helpful to investigate and discuss the potential implications of this scenario in the paper.
3. In the current setting, the paper assumes the existence of a single unmeasured confounder U. However, it is worth exploring and addressing the situation where there are multiple unmeasured confounders U. This could enhance the comprehensiveness and applicability of the proposed method.
Related work:
The following paper may be related to this work and deserves discussion.
Shuai, K., Luo, S., Zhang, Y., Xie, F., & He, Y. "Identification and Estimation of Causal Effects Using non-Gaussianity and Auxiliary Covariates." arXiv preprint arXiv:2304.14895 (2023).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Estimating the causal effect using the proposed method requires the availability of one proxy variable that is associated with the unmeasured confounder but does not directly affect the treatment variable. However, in practical scenarios, it can be challenging to identify or obtain such a proxy variable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding comments in weaknesses**
1. [Linearity assumption] Please refer to the global response.
2. [Single latent confounder] Please refer to the global response.
**Questions**
1. As shown in Theorem 1, the causal effect is identifiable if the condition in (5) is satisfied for the latent confounder. Moreover, as mentioned in lines 173-175, under a mild assumption (Assumption 3), this condition is equivalent to non-Gaussianity of the latent confounder. Please note that non-Gaussianity of the latent confounder always implies the condition in (5). Moreover, in our results, we do not have any constraint on the type of distributions of the other observed variables in the system. Assumption 2 is indeed introduced so that all the cross moments are well-defined.
2. According to [SGKZ20], with non-Gaussian exogenous noises, if there is an edge from $Z$ to $D$ with the causal coefficient $\rho$, another model which is consistent with the same distribution over observed variables exists. The linear SCM equations for this model are as follows: $U:=\epsilon_z$, $Z:=U+\alpha_z\epsilon_u$, $D:=-(\alpha_d/\alpha_z)U+(\alpha_d/\alpha_z+\rho)Z+\epsilon_d$, $Y:=-(\gamma/\alpha_z)U+(\gamma/\alpha_z)Z+\beta D+\epsilon_y$. Incidentally, the causal effect in the second model is also $\beta$. Thus, without any prior knowledge about the causal structure, we can identify $\beta$ uniquely using the method in [SGKZ20] (which is based on OICA) as both models have the same causal effect from $D$ to $Y$. Regarding the Cross-Moment method, it can be shown that the same Eq. (4) holds for the causal graph with the additional edge from $Z$ to $D$. As a future work, it is interesting to devise a cross-moment method to identify $\alpha_d/\alpha_z$ in this setting.
3. We addressed this comment in the global response.
4. Thanks for mentioning it. We will discuss (Shuai et al. 2023) in the related work.
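The observational equivalence claimed in answer 2 can be verified in a small simulation. This is a hedged sketch: the coefficient values, the noise distributions, and the form of the first model (the causal graph with the extra edge from $Z$ to $D$) are assumptions reconstructed from the discussion, not taken from the paper.

```python
# Numerical check of the two SCMs in answer 2 above: given the same exogenous
# noises, both models generate identical (Z, D, Y), hence the same
# observational distribution and the same causal effect beta. Coefficient
# values and noise distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
a_z, a_d, g, b, rho = 2.0, 1.5, 0.7, 3.0, 0.5
e_u, e_z, e_d, e_y = (rng.exponential(1.0, n) - 1.0 for _ in range(4))

# Model 1: the causal graph with the extra edge Z -> D (coefficient rho)
U1 = e_u
Z1 = a_z * U1 + e_z
D1 = a_d * U1 + rho * Z1 + e_d
Y1 = b * D1 + g * U1 + e_y

# Model 2: the alternative model from the rebuttal, driven by the same noises
U2 = e_z
Z2 = U2 + a_z * e_u
D2 = -(a_d / a_z) * U2 + (a_d / a_z + rho) * Z2 + e_d
Y2 = -(g / a_z) * U2 + (g / a_z) * Z2 + b * D2 + e_y

assert np.allclose(Z1, Z2) and np.allclose(D1, D2) and np.allclose(Y1, Y2)
```

Both models also share the same coefficient $\beta$ on the edge $D \to Y$, which is the point of the rebuttal: the observational distribution does not distinguish the models, but the causal effect of interest is the same in either one.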
**Regarding comments in limitations**
As we discussed in the Introduction (lines 50-60), the causal graph we consider is applicable to settings in negative outcome control in the SCM framework as well as DiD in the potential outcome framework. Moreover, our experiments on the real dataset showcase our approach's effectiveness in the DiD setting.
---
Rebuttal Comment 1.1:
Title: Regarding question 2
Comment: Thanks for your response! Regarding your example in Q2, what if we assume the faithfulness assumption, can we identify those two models from observed variables? In my view, the reason why the causal effect of $D$ on $Y$ can be uniquely identified is that the observed descendants of $U$ are not the same as the descendants of $D$. Please correct me if I'm wrong.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading the responses. Under the faithfulness assumption, if the observational distribution is generated based on the original causal graph (with just an edge from $Z$ to $D$), the second model that we proposed in the response violates faithfulness assumption as $Z$ and $Y$ should be d-separated given $D$ and $U$ which is not the case in the causal graph of this model. Thus, under the faithfulness assumption, the original model is uniquely identifiable. Regarding uniquely recovering $\beta$, as mentioned by the reviewer, the latent confounder $U$ does not have the same observed descendants as $D$. Otherwise, we can swap their corresponding exogenous noises and get a new model with a different causal effect similar to the example in Section 4.1 of [SGKZ20]. | Summary: This paper introduces an innovative technique for estimating the causal effect of a treatment on an outcome within linear structural causal models. This method utilizes cross moments, which are statistical moments derived from the joint distribution of the treatment and outcome variables, to quantify the causal effect. The authors demonstrate that this approach can relax the conventional assumption of a common trend in the difference-in-difference estimator, allowing for causal effect estimation in scenarios where traditional methods may fall short. To validate the effectiveness of the proposed method, the authors provide both simulation studies and a real-world application. These empirical analyses showcase the promising potential of this novel approach for estimating causal effects in linear structural causal models.
Strengths: * Novelty: The paper introduces an innovative approach to estimating causal effects in linear structural causal models with latent confounders by leveraging cross moments. This method deviates from conventional approaches and exhibits the potential to yield more precise estimates within specific contexts.
* Rigor: The authors establish a rigorous theoretical framework for their proposed method, delineating the conditions that allow for the identification of the causal effect and the applicability of the method. Furthermore, they substantiate their claims through comprehensive simulation studies and a real-world application, bolstering the robustness and effectiveness of the approach.
* Significance: Estimating causal effects is a crucial task across various domains, and the proposed method holds substantial importance as it can potentially deliver more accurate estimates in specific scenarios. By addressing the limitations of traditional approaches, this method offers a valuable contribution to the field of causal effect estimation.
Weaknesses: The authors should compare their proposed method to more existing methods for estimating causal effects, such as negative outcome control. This will help readers understand how the proposed method compares to existing methods in terms of accuracy and efficiency.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding comments in weaknesses**
As we mentioned in the global response, please note that most of these methods require at least two proxy variables and also further "Completeness" assumptions. Nevertheless, in the attachment of the global response, we did additional experiments comparing with the more recent method in [TYC+20]. The settings of the experiments in Figures 2 and 3 in the attached file are exactly the same as Figure 3(b) and Figure 5(a) in the submitted paper, respectively. As can be seen, the Cross-Moment algorithm outperforms both methods in [KP14] and [TYC+20] when two proxy variables exist.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: The authors successfully addressed my questions in the rebuttal stage, and I would like to keep my score to vote for acceptance. | Summary: The authors consider the estimation of the causal effect in a linear SCM with independent errors when there is a latent confounder U and one proxy variable of U (negative control outcome).
They generalize the DiD literature by relaxing the assumption of common trends and propose a general identification formula under the stated structural assumptions.
The authors show that under some restrictions on the latent confounder moments and on the U-Z, U-D relations, causal effects are uniquely identified nonparametrically using the identification formula. Furthermore, they propose a general estimation algorithm based on the cross-moments of Z and D.
In addition, the authors provide an "impossibility result" which shows that under a fully Gaussian linear SCM, causal effects cannot be uniquely identified.
They illustrate their proposed method in simulation and a data example.
Strengths: Correcting the bias due to residual confounding using proxy variables (negative controls) is an emerging topic in causal inference. The authors consider the more difficult task that seeks to correct the bias with only one proxy variable.
Under the assumed linear structural model with independent errors, the authors utilized a well-known identity that can be thus used as an identification formula. Theorem 1, which provides the uniqueness guarantees, is novel and motivates a computationally easy estimation algorithm, which is an improvement in comparison to other proximal learning methods.
The formal results are rigorous and nontrivial. The theoretical guarantees provide a meaningful illustration of the limitations of the proposed identification formula.
The paper is well-written and easy to follow.
Weaknesses: The authors provide results only for linear SCM with independent errors. Both linearity and exogenous errors are fairly strong assumptions that are not likely to hold in practice. Moreover, the theoretical results heavily depend on both assumptions and are not likely to extend to other SCMs.
On line 117, D is assumed to be a binary treatment. On line 119, the authors explicitly define the causal estimand of interest as the average causal effects on the treated. However, the SCM (line 133) states that $D = \alpha_dU +\varepsilon_d$, which, coupled with the assumption of independent zero mean errors (line 134), yields that
$$\Pr(D=1)=E[D]=\alpha_dE[U] + E[\varepsilon_d]=0$$
That is, if D is binary, the SCM implies that it is a deterministic random variable that equals 0 with probability one.
In addition, in Algorithm 1, $num$ is identical for all $n$ whenever $D$ is binary.
The authors are most likely well aware of this inconsistency since in the simulation study $D$ is not taken to be a binary variable.
Their proposed method works well for non-binary D, but causal estimands should be adjusted accordingly.
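The reviewer's derivation above can be reproduced numerically. The coefficient value and noise distributions below are illustrative choices, not taken from the paper.

```python
# Illustrative check of the reviewer's point: in the linear SCM
# D = alpha_d * U + eps_d with zero-mean independent errors, E[D] = 0,
# so a binary D taking values in {0, 1} would have Pr(D = 1) = 0,
# i.e., it would equal 0 almost surely.
import numpy as np

rng = np.random.default_rng(0)
n, alpha_d = 1_000_000, 1.5
U = rng.exponential(1.0, n) - 1.0       # zero-mean, non-Gaussian confounder
eps_d = rng.uniform(-1.0, 1.0, n)       # zero-mean exogenous noise
D = alpha_d * U + eps_d
assert abs(D.mean()) < 0.01             # E[D] = 0 up to sampling error
```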
Section 3.2 (lines 209-228) is well known in the literature (see for example the recent review by Roth et al. 2023, "What’s trending in difference-in-differences? A synthesis of the recent econometrics literature").
Experiments under misspecification (linearity, additional latent variables, etc.) are not presented. The robustness of the proposed methods is an open question.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the data example, estimation using the cross-moments algorithm is performed on the residuals of the outcome~covariates regression. Are there any theoretical guarantees (e.g., similar to Theorem 1) when covariates are included? Do the covariates also need to have a linear relation to the treatment/outcome/negative control for the uniqueness of $\beta$?
Proximal learning (Tchetgen Tchetgen et al., cited by the authors) provides a flexible approach for estimating causal effects with latent confounders when there are at least two proxy variables. In practice, many studies do have more than one possible proxy variable. Do you think it is possible to extend the cross-moments algorithm to scenarios with more than one negative control (e.g., under a linear SCM)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As already stated, the theoretical results strongly rely on the linearity and independent error assumptions. The authors did not adequately address those limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding comments in weaknesses**
1. [About linearity and independent exogenous noises assumptions] Independence of exogenous errors is the main and standard assumption in structural causal models (SCM) which is based on the principle of independent mechanism (For more discussion on why this is a reasonable assumption, please see Section 2.1 in [1]). Regarding the linearity assumption, please refer to the global response.
[1] Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. Elements of causal inference: foundations and learning algorithms. The MIT Press, 2017.
2. [About binary treatments] Section 2.1 (lines 117, 119) refers to the DiD approach and does not pertain to the assumption we state later in our linear SCM setting (lines 133-134). As mentioned in line 134, without loss of generality, we can assume that all the variables have a mean equal to 0, as we can always achieve this by centering the data. It is noteworthy that our method is applicable to both discrete and continuous variables that satisfy the linear equations in (3). In the DiD setting, for instance, if there is a binary variable $D$ that takes values 0 or 1 with probability $1/2$ each, it can be represented in the SEM by a binary variable that equals $-1/2$ or $1/2$ with probability $1/2$ each. Please note that the experiments for the Minimum Wage and Employment dataset are done for the binary variable $D$, which shows that our method works well for the binary case. We will clarify these details in the revised paper.
3. [The reference "Roth et al. 2023"] Thanks for mentioning the reference. We will discuss it in the revised paper.
4. [Experiments under misspecification] Regarding additional latent variables, please refer to the global response.
About the linearity assumption, we carried out some experiments under misspecification of the linear relations. For the edges from the latent confounder $U$ to the observed variables $Z,D,Y$, we replaced the linear relations with nonlinear functions. In particular, $Z= 10\tanh(\alpha_zU/a)+\epsilon_z$, $D= 10\tanh(\alpha_dU/a)+\epsilon_d$, and $Y= \beta D+ 10\tanh(\gamma U/a)+\epsilon_y$, where $a$ is some constant in $[2,10]$. Please note that we kept the linear relation from $D$ to $Y$ as it is challenging to quantify the causal effect with a single value if the relation is nonlinear. In fact, in the nonlinear case, the causal effect depends on the value of the treatment. For instance, one possible candidate to capture the causal effect is $\partial \mathbb{E}[Y|do(D:=d)]/\partial d$, which is a function of $d$.
The average relative error against the parameter $a$ is depicted in Figure 1 in the attachment of the global response. The relations from the latent confounder to the observed variables become more nonlinear for lower values of $a$. The Cross-Moment method still has a decent performance compared with the baselines.
**Questions**
1. In the case of covariates with linear relations, we can show that the causal effect can be identified uniquely if the latent confounder is not an ancestor of any covariates in the system. We will add a remark about this in the paper and also provide proof in the appendix. The main idea in the proof is to regress dependent variables on the covariates and reduce the problem to the one considered in the paper.
2. In our experiments (see lines 316-323), we extended our method to work with more than one proxy variable. As we mentioned above, the proposed method can be applied to linear SCMs with any number of latent confounders as long as for any latent confounder $U$, there exists a proxy variable such as $Z$, which is not an ancestor of $D$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The authors addressed my questions and concerns. | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their time and valuable feedback. In the following, we provide a global response to some of the concerns/questions raised in the review.
**About linearity assumption:** The linearity assumption is present in a large body of research in causal discovery/inference (for instance, see the survey in [1]), as it is important to see which learning tasks are feasible in this setting. Moreover, in practical scenarios where the available data is sparse, which is often the case in fields such as social science or medical research, linear models may serve as a good starting point. The use of more sophisticated models typically requires a larger dataset to train effectively. However, in situations where such extensive data is not available, the linear model can provide valuable insights, despite its simplicity. Therefore, while linear models may not capture the full complexity of real-world systems, they continue to be valuable tools in the domain of causal discovery and inference due to their interpretability and feasibility, especially when data is limited.
[1] Shimizu, Shohei. "LiNGAM: Non-Gaussian methods for estimating causal structures." Behaviormetrika 41 (2014): 65-98.
**About multiple latent confounders:** Regarding additional latent variables, the proposed method can be applied to linear SCMs with any number of latent confounders as long as for any latent confounder $U$, there exists a proxy variable $Z$, which is not an ancestor of $D$ (under a subtle change in Eq. (4)). The main idea is to adjust the values of $Cov(D,Y)$ and $Var(D)$ in Eq. (4) using proxies for each of the corresponding latent variables, similar to our treatment in the submitted version.
It is noteworthy that most previous work required at least two proxy variables $W,Z$, and often also came with "Completeness" assumptions (such as in [MGTT18,TYC+20]), which in the discrete setting are equivalent to the matrices $P(U|Z,d)$ and $P(W|U)$ being invertible.
We will add a remark about the above discussion in the revised paper. Please note that if a latent confounder does not come with any proxy variable, one can construct SCMs in which the causal effect is not identifiable.
Pdf: /pdf/ccdebc9a4fd5d45fa8a869b18c4c15096a86e373.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Quantification of Uncertainty with Adversarial Models | Accept (poster) | Summary: The authors propose a new method for estimating epistemic uncertainty. They propose to adversarially search for modes of the posterior distribution. Empirically, they demonstrate that their uncertainty performs well on OOD detection.
Strengths: The empirical results of this approach on OOD detection are very promising.
Weaknesses: - The method performs well on ood detection. Is the resulting uncertainty also well calibrated?
- L60: The argument as to why ensembles and BNNs underperform is not clear to me. The authors say that they miss important posterior modes. Can the authors elaborate on this?
- L142: Can the authors demonstrate that the current methods underestimate uncertainty? Just because some methods don't explore all modes doesn't mean that they underestimate uncertainty. E.g., they may overestimate the width of other modes.
- Lack of comparison with other ensemble methods, e.g. [1]. [1] in particular shows similar results on the two moon example. This makes me wonder whether the main impact of the method comes from the mixture of gaussians, as done in [1], or from the adversarial search. I guess it would be important to quantify that.
[1] Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for this assessment of our work. Regarding the stated weaknesses:
- **Calibration:** Thank you for proposing this interesting direction. Your intuition was right, we found that our method indeed improves upon the other considered baseline methods, although it was not directly designed to improve calibration of the predictive distribution. The detailed results measuring the expected calibration error (ECE) and the calibration plots for the ImageNet-1K validation dataset are given in Table 2 and Figure 2 of the PDF document attached to the global answer. The results for QUAM were obtained using the same hyperparameters as for misclassification detection reported in the appendix of the main paper. Future work should explore methods based upon QUAM that specifically target to improve the calibration.
- **Why do prior methods underestimate epistemic uncertainty:** In line 60 we raise the concern that Deep Ensembles and MC dropout fail to capture important posterior modes and thus underestimate epistemic uncertainty. This is inspired by [1], where the authors raise attention to the fact that “Ultimately, the goal is to accurately compute the predictive distribution in Eq. (1), rather than find a generally accurate representation of the posterior. In particular, we must carefully represent the posterior in regions that will make the greatest contributions to the BMA integral.”. The regions that contribute most to the posterior integrals defining the epistemic uncertainty in equations (1) and (2) in our paper are those where predictions differ a lot to the reference model, thus exhibit high KL-divergence. QUAM explicitly searches for those, while Deep Ensembles and MC dropout do not have an explicit mechanism to enforce variety in the predictive distributions on a new test sample, resulting in low values of KL-divergences. An additional reference in the literature would be [2], which also discusses the problem of functionally similar ensemble members for estimating the BMA predictive distribution. As pointed out also in [3], the issue is that Deep Ensembles maximize the posterior probability of the ensemble models through gradient descent, which is prone to yield functionally similar “easy” solutions. We are thus not concerned that prior methods do not find all modes, but modes that yield functionally different solutions that predict differently, thus underestimate the epistemic uncertainty. The same arguments apply also to the dropout models used in MC dropout.
- **Empirical evidence that prior methods underestimate epistemic uncertainty:** This issue is empirically exemplified for Deep Ensembles and MC dropout on a toy dataset, where results are shown in Figure 2 in the main paper. They underestimate the epistemic uncertainty due to their lack of finding functionally different solutions that yield different predictions, compared to the ground truth (HMC) shown in Figure C.2 in the appendix. Furthermore, we observe the same issue in the experiments on the two-moon dataset (Figure 3) when considering a test point that is for instance in the upper left or lower right corner, compared to the ground truth (HMC). Here we directly observe that the epistemic uncertainty is too low. Similarly, we find that epistemic uncertainty is underestimated for inputs outside the range of data points (Figure C.4 in the appendix) in a regression task.
- **Lack of comparison to other ensemble methods:** Thank you for pointing us to this interesting paper [4]. Unfortunately, the authors did not provide the original code for their proposed method MoLA. We gave our best effort to reproduce their results on the two-moon example using the reported pipeline, but found the results to only marginally differ from the single network Laplace approximation reported in our paper that uses the same implementation for the Laplace approximation as [4]. However, we also evaluated our reimplementation of MoLA on the MNIST OOD detection task. The results are stated in Table 3 in the attached pdf to our global answer. We found that MoLA improves upon the Laplace approximation of a single model, but leads to worse performance than just using the underlying Deep Ensemble and consequently does not outperform QUAM as well. We will release an updated version of our code including this new baseline and add the additional results to Table 1 for the final version of the paper.
Thank you once again for your thoughtful assessment and valuable feedback. We have worked diligently to address your concerns, and we hope that our responses clarify the issues you raised. Please do not hesitate to reach out if you have any further questions or require additional information.
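For context on the discussion above, a minimal sketch of the standard ensemble-based uncertainty decomposition used by baselines such as Deep Ensembles and MC dropout may help: total predictive entropy $H(\mathbb{E}[p])$ splits into an aleatoric part $\mathbb{E}[H(p)]$ and an epistemic part, their difference (the mutual information). This is the common baseline estimator, not the paper's KL-based equations (1) and (2), which are not reproduced in this thread.

```python
# Standard mutual-information decomposition of ensemble predictive entropy.
import numpy as np

def epistemic_mi(probs):
    """probs: array (n_members, n_classes) of predictive distributions."""
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()              # H(E[p])
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(1).mean()    # E[H(p)]
    return total - aleatoric                                      # >= 0

# Members that agree yield (near-)zero epistemic uncertainty; members that
# disagree yield a strictly positive value -- functionally diverse members
# are exactly what an adversarial search is designed to find.
agree = np.array([[0.9, 0.1], [0.9, 0.1]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
assert epistemic_mi(agree) < 1e-6 < epistemic_mi(disagree)
```

The sketch illustrates why functionally similar ensemble members (the concern raised in the rebuttal) drive the epistemic term toward zero regardless of how many members are trained.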
---
[1] Wilson, A. G., & Izmailov, P. (2020). Bayesian deep learning and a probabilistic perspective of generalization. Advances in neural information processing systems, 33, 4697-4708.
[2] D'Angelo, F., & Fortuin, V. (2021). Repulsive deep ensembles are bayesian. Advances in Neural Information Processing Systems, 34, 3451-3465.
[3] Parker-Holder, J., Metz, L., Resnick, C., Hu, H., Lerer, A., Letcher, A., ... & Foerster, J. (2020). Ridge rider: Finding diverse solutions by following eigenvectors of the hessian. Advances in Neural Information Processing Systems, 33, 753-765.
[4] Eschenhagen, R., Daxberger, E., Hennig, P., & Kristiadi, A. (2021). Mixtures of Laplace approximations for improved post-hoc uncertainty in deep learning. arXiv preprint arXiv:2111.03577.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my points. Further comments below:
**Calibration**
The plot 2 e) in the additional PDF looks odd. Why does QUAM almost exclusively predict 100% confidence? How would the method compare on a dataset where calibration is less saturated?
**Empirical evidence that prior methods underestimate epistemic uncertainty**
I don't think that Figures 2/3 are sufficient empirical evidence that ensembles underestimate uncertainty. In particular, other works have found that ensembles are better calibrated than MCD. It seems there is rather something off with the trained ensemble in this work. The fact that all ensemble members reach the same solution indicates that there is not sufficient randomness in the process? Do the authors only vary the model initialization?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for his additional comments and questions.
**Calibration**
We agree with the reviewer that the results depicted in Figure 2 e) in the PDF attached to the global answer look different from those of the other considered methods. Our explanation is as follows. The calibration results for QUAM were obtained using the same hyperparameters as for misclassification detection. Naturally, this is a discriminative task, where low uncertainty (thus high confidence) should be assigned to the correctly classified samples and high uncertainty (thus low confidence) to the misclassified samples. QUAM excels at this, as stated in Table C.2 in the appendix (IN-Misclass.). Since the majority of samples are correctly classified, we expect that QUAM yields high confidence for those samples. Future work could shed more light on the nature of these observations. We note that, while related, quantifying the uncertainty defined by Equations (1) and (2) in our paper is not equivalent to attaining a well-calibrated prediction. *We would like to emphasize that QUAM was not explicitly designed to improve calibration, but does so empirically.*
We acknowledge that future work would need to investigate calibration of those methods for different datasets. However, we argue that ImageNet is a favorable dataset for calibration, as the accuracy of the classifier is at ~84% top-1 accuracy, thus also lower accuracy bins (to calculate the ECE) contain enough samples to get robust statistical estimates.
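A hypothetical sketch of the expected calibration error (ECE) metric discussed above; the number of bins and the equal-width binning scheme are assumptions, as the rebuttal does not specify them.

```python
# Hypothetical ECE sketch: mean |accuracy - confidence| over equal-width
# confidence bins, weighted by the fraction of samples in each bin.
import numpy as np

def ece(conf, correct, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; the first bin also includes conf == 0
        mask = (conf > lo) & (conf <= hi) if lo > 0 else (conf <= hi)
        if mask.any():
            total += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return total

# A classifier that is 80% accurate at 80% confidence is perfectly calibrated:
conf = np.full(10, 0.8)
correct = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
assert ece(conf, correct) < 1e-9
```

The intuition for why a high-accuracy dataset is favorable, as argued above, is that each accuracy bin then contains enough samples for the per-bin accuracy estimates to be statistically robust.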
**Empirical evidence that prior methods underestimate epistemic uncertainty**
To the best of our knowledge, it is not clear cut whether MCD or Deep Ensembles are better calibrated (see e.g. Figure 2 test performance in [1]). The members of the Deep Ensemble in Figures 2 and 3 of our paper are indeed the same architecture trained from different random initializations, exactly as proposed in the original paper [2] and generally used as a baseline in e.g. [1], [3], [4] or [5]. Given that [4] (Figure 1) and [5] (Figure 1) report qualitatively the same results for Deep Ensembles as in Figure 3 of our paper, it suggests that the experiment was implemented correctly. Also, equivalent results for the same experiment with Deep Ensembles have been reported in [3] (Figure 1), which was suggested by the reviewer. Further empirical evidence that MCD and ensembles of MCD models (a combination of Deep Ensembles and MCD thereof) underestimate epistemic uncertainty for regions far from training samples (in the latent space of a variational autoencoder trained on MNIST) is given in Figures 1 and 6 in [6]. Finally, note that we varied the model architecture by using different sizes of externally obtained, pre-trained, and verified EfficientNets [7] to perform the ImageNet experiments, as reported in Table 2 of our paper. QUAM empirically outperforms this ensemble in estimating the uncertainty as well.
We hope that our response clarifies the remaining questions and gives rise to a more positive assessment of our work.
---
[1] Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley, D., Nowozin, S., ... & Snoek, J. (2019). Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems, 32.
[2] Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30.
[3] Eschenhagen, R., Daxberger, E., Hennig, P., & Kristiadi, A. (2021). Mixtures of Laplace approximations for improved post-hoc uncertainty in deep learning. arXiv preprint arXiv:2111.03577.
[4] Liu, J., Lin, Z., Padhy, S., Tran, D., Bedrax Weiss, T., & Lakshminarayanan, B. (2020). Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in Neural Information Processing Systems, 33, 7498-7512.
[5] Van Amersfoort, J., Smith, L., Teh, Y. W., & Gal, Y. (2020, November). Uncertainty estimation using a single deep deterministic neural network. In International conference on machine learning (pp. 9690-9700). PMLR.
[6] Smith, L., & Gal, Y. (2018). Understanding measures of uncertainty for adversarial example detection. The Conference on Uncertainty in Artificial Intelligence (UAI).
[7] Tan, M., & Le, Q. (2019, May). Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). PMLR. | Summary: The paper introduces Quantification of Uncertainty with Adversarial Models (QUAM), a novel approach for epistemic uncertainty estimation in deep learning. Well-known uncertainty quantification approaches, such as Deep Ensembles or variational inference, underestimate the epistemic uncertainty by sampling from posterior modes found by gradient descent, which might prevent some modes to be found due to the shape of the loss landscape. To overcome this limitation, QUAM introduces adversarial models: plausible models with a high posterior that differ in the prediction from a reference one. Mixture importance sampling, with the adversarial models as modes, is then used to estimate the epistemic uncertainty. Experiments on both synthetic and vision data confirmed the validity of the approach in improving epistemic uncertainty estimation.
Strengths: The paper is very well-written, all the design choices are motivated by references to existing work, and the experiments are reproducible and robust. The topic is of great significance since it relates to important areas such as Out-Of-Distribution detection and Responsible AI, as also stated by the authors in the Societal Impact Statement. Overall, I really liked reading this work and I rate it as a strong accept.
Weaknesses: I have no major remarks against this version of the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have just two minor observations:
- The notation used in equation 1 (like the mutual information) was introduced only in the Appendix.
- Figure C.1 (Appendix): maybe wrong colour assignment of the adversarial model (yellow instead of blue).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors addressed potential negative societal impact of their work in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for this very positive assessment of our work.
Indeed, we missed out on formally introducing the symbols for the cross-entropy, the KL-divergence and the mutual information in equation (1) in the main paper. Thank you for pointing this out; we will correct it in the final version of the paper.
Furthermore, thank you for pointing out the wrong color assignment in the caption of Figure C.1 in the appendix. We changed the color of the adversarial model part from yellow to blue for better readability when preparing the final version of the supplementary material, but did not change the caption accordingly. We will correct that in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for having fixed my minor observations.
Since all the reviewers acknowledged the contribution of adversarial models to the field, and the authors addressed most of the reviewers' concerns on clarity, missing related works and missing ablation studies, I confirm my positive assessment of the work. | Summary: This paper introduces Quantification of Uncertainty with Adversarial Models (QUAM). Building on the claim that previous epistemic uncertainty estimation methods (e.g. MC dropout, Deep Ensembles) underestimate the epistemic uncertainty by only considering the posterior distribution when sampling models, the authors introduce an Adversarial Model Search algorithm to identify models located within a posterior mode but with different predictive distributions compared to the reference model.
Strengths: Previous epistemic uncertainty estimation methods (i) require changing the training procedure to account for uncertainty estimation by e.g. adding dropout layers or training multiple models, (ii) underestimate the epistemic uncertainty by only sampling models from the posterior distribution.
The proposed method QUAM addresses (i) by searching for adversarial models at test time for each test sample, thus working with any pre-trained model. Moreover, it solves (ii) by designing an adversarial model search algorithm that covers more posterior modes by identifying models located within a posterior mode but with different predictive distributions compared to the reference model.
Weaknesses: ## Lack of clarity
Although the method is interesting and addresses some important limitations of previous epistemic uncertainty estimation approaches, the paper writing should be revised.
Too often important details are omitted in the main paper or disclosed too late. In particular,
1. **Definition of adversarial models.** The definition of adversarial models introduced in this paper is completely new and has nothing to do with previous definitions of generative adversarial models [a, b] or adversarial attacks [c]. However, the first definition of what the authors mean by adversarial models is provided only on page 6, making it very hard to understand the paper before reading line 226. This paper would **greatly benefit** from earlier definitions of adversarial models in both the abstract and the introduction. Moreover, writing "(not adversarial examples!)" is the least elegant solution.
2. **Missing details.** Implementation details are solely reported in the supplement, but some details are essential in the main paper for better understanding. In particular, the number of adversarial models, number of MC samples, and the number of models in the deep ensemble should be clearly stated in the main paper.
3. **Settings (a) and (b)** It is not clear from the writing how the settings (a) and (b) defined in Section 2 relate to the experiments and whether the proposed QUAM presents any significant advantage on either task in particular compared to previous methods.
## Unsupported claims
Some claims are unsupported. They should either be supported by quantitative or theoretical analysis in the paper or some reference to other papers should be provided. In particular,
1. The claim that previous epistemic uncertainty estimation methods underestimate the uncertainty (line 59, line 143) is not backed up by empirical results or references to other papers.
2. "Adversarial models are characterized by a large value of the integrand of the integral that defines the epistemic uncertainty" (line 65). Without a definition of adversarial models in the introduction, this claim is impossible to understand.
## Missing ablation on number of adversarial models
The main paper does not specify the number of adversarial models learned through Algorithm 1. According to the supplement, it seems that 10 are used. The paper would benefit from an ablation on the number of adversarial models.
## Missing explanation of why the search algorithm does not always converge to the same solution
The search algorithm is not constrained to avoid always finding the same solution. Since it is an optimization problem, it is likely that similar solutions to the same problem are found when starting from the same initialization. The paper would benefit from an explanation of why the search does not always converge to the same adversarial model. Perhaps enforcing that solutions must be different could improve the performance.
## Missing ablation on trade-off inference speed vs. uncertainty estimation performance at different number of samples
As pointed out by the authors, the proposed method sacrifices inference speed for uncertainty estimation performance. Although this undermines the practicality of the proposed approach, the theoretical insights gained from QUAM could outweigh this limitation.
However, an ablation on the trade-off between inference speed and uncertainty estimation performance at different numbers of samples for MC dropout, Deep Ensembles and QUAM is needed. I would expect MC dropout to be significantly more efficient for a low number of samples, still obtaining reasonable performance at low inference time. Moreover, QUAM should have a higher uncertainty estimation performance upper bound than the competitors, if computational resources allow tolerating the high inference time. This comparison would allow practitioners to understand which method to use based on their computation budget and needs.
## References
[a] Wang, Kunfeng, et al. "Generative adversarial networks: introduction and outlook." IEEE/CAA Journal of Automatica Sinica 4.4 (2017): 588-598.
[b] Li, Guofu, et al. "Security matters: A survey on adversarial machine learning." arXiv preprint arXiv:1810.07339 (2018).
[c] Chakraborty, Anirban, et al. "Adversarial attacks and defences: A survey." arXiv preprint arXiv:1810.00069 (2018).
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Overall, I find the paper unclear and lacking some important details and ablations. I believe the contribution to be relevant, but the paper would benefit from rewriting for clarity and from additional experiments based on the comments in the Weaknesses section.
I would change my opinion if the authors convincingly addressed my comments in the Weaknesses section and, most importantly, optimized the paper for clarity. Providing the definition of adversarial models only at page 6 makes it very hard to understand the first 5 pages.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors pointed out the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for this thoughtful feedback and critical assessment of our work, as well as the concrete suggestions to improve clarity. We will change the manuscript according to the reviewer’s suggestions for the final version as follows:
- **Clarity:**
1) *Definition of adversarial models:* It is true that adversarial models are formally defined only on page 6, although they are a novel concept. In the abstract and the introduction, we only loosely define adversarial models as having both a high posterior and a high divergence between their predictions and those of a reference model, consequently being counterexamples to the reference model that predict differently for a new input while explaining the training data equally well (Lines 12-13 and 65-70). This is because we deliberately chose to first focus on presenting the problem, and only then on the solution, namely the search for adversarial models. However, to improve clarity, we will refine and expand these early definitions and add the formal definition to the introduction (after Line 70) in the final version of the paper.
2) *Missing details:* Thank you for pointing this out. We will move the number of adversarial models, number of MC samples, and the number of models in the Deep Ensemble from the appendix to the end of section 3 as a new subsection to further improve clarity.
3) *Settings (a) and (b):* QUAM is a method to estimate the integral defining the epistemic uncertainty in both settings (a) and (b), which are posterior expectations of divergences between predictive distributions. We evaluated the synthetic dataset experiments under both settings (a) and (b). Regarding the large-scale experiments in the vision domain, we focused on our newly introduced setting (b). This is stated in the first sentence introducing this set of experiments (line 278/279) and in the table headers.
- **Unsupported claims:**
1) Thank you for raising this point, we added references to similar claims in the literature as follows: [1] points out that adequate sampling of those posterior regions that make the greatest contribution to the posterior expectation of the predictive distribution is of greater interest for the purpose of estimating this posterior expectation than obtaining a generally good posterior representation. Those regions are the ones with the highest KL divergence in our setting (b), which only QUAM explicitly searches for. Similarly, [2] claims that Deep Ensembles do not provide enough functional diversity; we argue the same applies to MC dropout. [3] claims this is due to gradient descent always finding similar “easy” solutions.
Empirically, we investigated this issue on a toy dataset; the results are shown in Figure 2, and the ground truth (HMC) to compare against is shown in Figure C.2 in the appendix. Furthermore, we observe that epistemic uncertainty is underestimated in the experiments on the two-moons dataset (Figure 3) when considering a test point that is, for instance, in the upper left or lower right corner. Similarly, we find that epistemic uncertainty is underestimated for inputs outside the range of the data points (Figure C.4 in the appendix) in a regression task.
2) We did not perceive this as a claim but solely as a loose version of the definition of adversarial models. Nonetheless, you are right that it is hard to understand at this stage in the paper. We will improve it by including the definition in the introduction as suggested before.
- **Number of adversarial models and convergence:** We use 10 adversarial models for the MNIST experiments and 10 to 1000 for the ImageNet-1k experiments (see the ablation study below). As you pointed out, directly maximizing the KL-divergence to the prediction of the reference model would indeed converge to very similar solutions, which we empirically observed during early experiments. Therefore, we instead minimize the cross-entropy towards one out of all possible classes at a time. This way we attain diverse solutions that, as you pointed out, lead to improved performance. This is discussed in detail in appendix section C.1. We will move these implementation details to the main paper as a new subsection at the end of section 3 to make them more explicit.
- **Ablation on inference speed vs. performance:** QUAM requires updating the last layer for up to 100 steps for ImageNet-1k OOD detection, which is equivalent to 15 forward passes through the full network as required by MC dropout (assuming that the last layer holds 5\% of all parameters and that a backward pass is twice as expensive as a forward pass). In a new ablation study, we searched for adversarial models only on a subset of classes, selected by the highest softmax probabilities assigned by the given, pre-selected model. The results are listed in Table 1 and Figure 1 of the PDF document attached to the global answer. For instance, when searching for adversarial models only for the 10 most probable classes (QUAM$\_{top1\\%}$), the number of full forward passes per sample is reduced from 15k (QUAM$\_{all}$) to 150, a 100-fold reduction in inference cost. Still, QUAM outperforms MC dropout in terms of performance. Also, training a single additional ensemble member requires more computational cost than evaluating all ImageNet-1k OOD samples with QUAM. We will provide more details on inference speed vs. performance in the final version of the paper.
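To make the per-class search concrete, the following is a minimal numpy sketch of an adversarial model search over a softmax last layer. Here an L2 proximity penalty `gamma` serves only as an illustrative stand-in for the constraint that the adversarial model must retain a high posterior (i.e., keep explaining the training data); the paper's actual objective, architecture, and hyperparameters differ:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adversarial_model_search(w_ref, b_ref, x, n_classes, steps=100, lr=0.1, gamma=0.05):
    """For each candidate class c, update the last layer to minimize the
    cross-entropy toward c on input x, while an L2 penalty (stand-in for the
    training-loss / posterior constraint) keeps the weights near the reference
    model. Returns one predictive distribution per adversarial model found."""
    preds = []
    for c in range(n_classes):
        w, b = w_ref.copy(), b_ref.copy()
        for _ in range(steps):
            p = softmax(w @ x + b)
            grad_logits = p.copy()
            grad_logits[c] -= 1.0  # d(cross-entropy toward class c)/d(logits)
            w -= lr * (np.outer(grad_logits, x) + gamma * (w - w_ref))
            b -= lr * (grad_logits + gamma * (b - b_ref))
        preds.append(softmax(w @ x + b))
    return np.stack(preds)
```

The epistemic uncertainty estimate is then driven by the divergence between these predictive distributions and that of the reference model; restricting the loop to the top-k most probable classes gives the cheaper variants discussed above.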
We hope to have addressed your open questions and concerns. Your insightful review has guided us in enhancing the clarity of our paper, and we believe these changes will contribute positively to the overall impact of our work.
---
[1] Wilson, A. G., & Izmailov, P. (2020). Bayesian deep learning and a probabilistic perspective of generalization. NeurIPS.
[2] D'Angelo, F., & Fortuin, V. (2021). Repulsive deep ensembles are bayesian. NeurIPS.
[3] Parker-Holder, J., & ... & Foerster, J. (2020). Ridge rider: Finding diverse solutions by following eigenvectors of the hessian. NeurIPS.
---
Rebuttal Comment 1.1:
Comment: The authors provided a convincing rebuttal, and I appreciate their efforts towards improved clarity of their submission.
- **Clarity:**
1. _Definition of adversarial models:_ I thank the authors for acknowledging the importance of an earlier definition of adversarial models, and for adding it to the introduction.
2. _Missing details:_ This is an important step for clarity and reproducibility.
- **Unsupported Claims:**
The reply provided by the authors and supporting references seem convincing.
- **Number of adversarial models and convergence:**
I thank the authors for pointing out Sec. C.1 in the appendix. Sec. C.1 reports fundamental details that should at least partially be mentioned in the main paper. The observation that "directly maximizing the KL divergence always leads to similar solutions to the optimization problem" was an obvious limitation from just reading the main paper, and it was not addressed there. To ensure that this paper complies with the clarity standards of this conference, I hope that the authors will carefully discuss this in the main paper.
- **Ablation on inference speed vs. performance:** The reply provided by the authors, the new proposal for tuning the computation cost of QUAM and the results provided in Figure 1 are convincing. I thank the authors for providing them and I believe that they will enrich the paper.
Overall, the authors addressed most of my concerns. However, my initially negative opinion was also greatly influenced by the initial poor quality of presentation and clarity. The authors will have to work on a significant rewriting to make the paper suitable for the standards of this conference.
I am leaning towards increasing my rating, the extent of which will also depend on the discussion with other reviewers.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your helpful and constructive suggestions. We rewrote the manuscript accordingly, which resulted in a significant improvement of clarity and quality of presentation. We fully agree with the reviewer that the clarity and good presentation of research results are very important. To summarize, we did the following main improvements of the manuscript:
- **Number of adversarial models and convergence:** We added a new subsection on the practical implementation at the end of section 3 in the main paper. This subsection is based upon section C.1 from the appendix. Most importantly, we now discuss the problem that direct KL optimization always leads to the same solution, as explained in Sec. C.1. We further explain that we want to find different regions with large contributions to the epistemic uncertainty integrals in equations (1) and (2). This is achieved by minimizing the cross-entropy towards one out of all possible classes at a time. Indeed, the more of those regions QUAM identifies, the more effective mixture importance sampling becomes. We agree with the reviewer that it is essential to elaborate in the main paper on this problem and how we tackled it, as it is central to our method.
- **Definition of adversarial models:** We added Definition 1 to the introduction (paragraph starting at line 58). Consequently, we revised the informal definition of adversarial models previously given in this paragraph accordingly. Furthermore, we contrast the definition of adversarial models with other concepts that have ‘adversarial’ in their name, such as “adversarial examples”, “adversarial training”, “generative adversarial networks”, or “adversarial model-based RL”.
- **Unsupported claims:** We added the additional references [1], [2] and [3] and an extended discussion of why prior methods underestimate epistemic uncertainty to the respective claims (line 59, line 143), and we now refer more explicitly to our empirical evidence.
- **Missing details:** We moved information about the most crucial hyperparameters (e.g. ensemble size, # of forward passes for MC dropout, # adversarial model searches, …) from the appendix to the experimental section in the main paper.
- **Ablation on inference speed vs. performance:** We added the new ablation on inference speed vs. performance to the main paper.
- **Additional experiments requested by other reviewers:** We added the new results for MoLA to Table 1 in the main paper. Furthermore, we added the results for calibration to the appendix.
Would the reviewer see the need for any additional significant changes? We would gladly consider and address those.
Finally, we thank the reviewer for acknowledging our convincing results, as well as our efforts towards improving the clarity and quality of presentation of our paper and hope that they adjust their assessment of our work accordingly.
---
[1] Wilson, A. G., & Izmailov, P. (2020). Bayesian deep learning and a probabilistic perspective of generalization. NeurIPS.
[2] D'Angelo, F., & Fortuin, V. (2021). Repulsive deep ensembles are bayesian. NeurIPS.
[3] Parker-Holder, J., & ... & Foerster, J. (2020). Ridge rider: Finding diverse solutions by following eigenvectors of the hessian. NeurIPS. | Summary:
This paper is about uncertainty estimation using adversarial models (not examples!). The authors propose a new uncertainty estimation method, called QUAM, which performs a search for an adversarial model: one that fits the training set but has predictions far away from those of the reference model, with the idea that a point is uncertain if multiple models explain it very differently while still fitting the original training set.
QUAM has the potential to estimate epistemic uncertainty better than other uncertainty models, and experimental results point in this direction.
Contributions are:
- The QUAM framework for uncertainty quantification using adversarial model search.
- The new concept of an adversarial model, different from an adversarial example, which is a model (set of weights) that explains the training set well while having maximum difference with the original model at a new test point.
- The proposed method can estimate epistemic uncertainty for any model, including a pretrained model, which is an advantage over usual uncertainty methods that require modifications to the training process or model retraining.
Strengths: - The paper is very well written and easy to understand.
- The paper touches on an important topic: uncertainty estimation methods fail on samples far from the training set, usually producing overconfident estimates that are not useful for detecting out-of-distribution samples. This is well known from Ovadia et al. and other papers in the literature.
- This paper defines a new kind of meta-model, the adversarial model for uncertainty quantification, where, to make a prediction at a new test point, new model parameters are found that fit the training set while providing alternative explanations for the new test point, producing higher-quality epistemic uncertainty. This is of course very computationally expensive, and it is mentioned as a limitation in the conclusions.
- Experimental results show that QUAM outperforms the baselines on the task of out-of-distribution detection on MNIST vs. FMNIST/KMNIST/EMNIST/Omniglot and ImageNet-1K vs. ImageNet-O/ImageNet-A.
- I believe the proposed method QUAM is a good contribution to the field of uncertainty quantification, as it proposes a new framework for uncertainty estimation that is model-agnostic and provides high-quality epistemic uncertainty estimates.
Weaknesses: - I have some serious doubts about the concept/interpretation of aleatoric uncertainty in this paper. In the literature and by widely agreed convention, aleatoric uncertainty is about the data; it is a property of the data, like stochasticity in measuring processes or noise. But the paper claims that "aleatoric uncertainty is the stochasticity of the model and epistemic uncertainty is the uncertainty about model parameters" (Lines 39/40); here the claim is made that aleatoric uncertainty is about the stochasticity of the model, which I do not believe is correct. Just as an example, there are non-stochastic models (like ensembles) that can estimate aleatoric uncertainty without using stochastic components. And as I mentioned previously, aleatoric uncertainty is mainly a property of the data, not of the model, even though a model can estimate the aleatoric uncertainty of the data. I believe the paper should clarify or simply fix the definition.
~~- Another issue I have is about selection of baselines, in the two moons setting, DUQ [1] is a strong baseline that actually approximates epistemic uncertainty much closer than QUAM in the two moons dataset (see Figure 1 of the DUQ paper), and I believe this also points that an issue in epistemic uncertainty estimation is the model's inductive bias. DUQ uses a RBF layer to predict classes, based on distance to class centroids, unlike a separating hyperplane in a highly dimensional space, so epistemic uncertainty is very different. I believe in this paper DUQ and other single network uncertainty estimation method (like Direct Uncertainty Estimation, DDU), should also be compared.~~
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Questions:
- How were the baselines selected? The baseline UQ methods are not state of the art for epistemic uncertainty estimation. I already mentioned in weaknesses that DUQ has better epistemic uncertainty estimation in the two moons dataset.
- In Figure 3f, QUAM has artifacts or non-smoothness in the uncertainty, in the top-left and bottom-right corners, while other models do not have such artifacts. Is there an explanation or intuition for this?
Post-rebuttal, both of these questions have been answered to my satisfaction by the authors.
Suggestions:
- The DE (LL) setting is the same as the concept of sub-ensembles [2], maybe it could be mentioned or cited.
- In Tables 1 and 2, the aleatoric uncertainty of the model is claimed to be used, but I am not sure what this means, as most models actually output predictive uncertainty, which is the combination of aleatoric and epistemic uncertainty. Even a model without any uncertainty quantification (i.e., without MC-Dropout, Ensembling, BNNs, etc.) still has a non-zero degree of aleatoric and epistemic uncertainty, so I believe this claim should be changed to predictive uncertainty and not aleatoric uncertainty.
[2]: Deep Sub-Ensembles for Fast Uncertainty Estimation in Image Classification
by Matias Valdenegro-Toro. arXiv 1910.08168.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper properly discusses the limitations of the proposed method in the conclusion sections, including the fact that for each new test point, a model search has to be performed which is computationally very expensive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful assessment of our work. Regarding the explicitly stated remarks and questions:
- **Aleatoric uncertainty:** Your remark is correct: aleatoric uncertainty is indeed a property of the data, stemming from the stochasticity / noise in the measurement process, as we point out in line 29/30. Fundamentally, aleatoric uncertainty is due to the stochasticity of the conditional distribution $p(\boldsymbol{y} \mid \boldsymbol{x})$ (often referred to as the predictive distribution) that assigned labels to the samples in the dataset. However, we do not know the underlying conditional distribution, but use a model to approximate it, resulting in an approximation of the conditional distribution $p(\boldsymbol{y} \mid \boldsymbol{x}, \boldsymbol{w}) \approx p(\boldsymbol{y} \mid \boldsymbol{x})$. Therefore, we consider quantifying the aleatoric uncertainty as characterizing this stochasticity of the model, in accordance with e.g. [1] or [2]. Nevertheless, you are right that lines 39/40 should be improved. We suggest “Consequently, we consider uncertainty quantification as characterizing a stochastic model of the world. Here, aleatoric uncertainty stems from the stochasticity of the model and epistemic uncertainty from the variability of plausible model parameters.”, but would appreciate alternative suggestions.
- **Choice of baselines:** We certainly acknowledge the recent advances in single network uncertainty estimation like DDU [3], DUQ [4], DUE [5], SNGP [6], and others, and their ability to provide meaningful epistemic uncertainty estimates based on the location of new samples in the feature space. We also agree that they provide uncertainty estimates akin to those of HMC in the two-moon example. However, we did not compare our approach to these for the following reasons:
1) *Model Constraints:* The aforementioned single forward pass methods require regularization of the feature space to assess uncertainty (either a two-sided gradient penalty or spectral normalization in the examples above), which has to be applied during training of the model. Therefore, they are not applicable under our newly introduced setting (b), where we assume a given, pre-selected model without any constraints on how it was obtained.
2) *Different Uncertainty Perspectives:* Single forward pass methods capture a different notion of epistemic uncertainty than Bayesian methods. They capture epistemic uncertainty through the location of new test samples in the latent feature space, whereas we consider capturing epistemic uncertainty through posterior integrals, thus through sampling different models. We solely claim that QUAM enhances the quantification of epistemic uncertainty for the specific notion given by the respective terms in equations (1) [setting (a)] and (2) [setting (b)] of the main paper. We thus selected the best known baselines that also aim to quantify this notion of uncertainty. A comprehensive exploration of the relation between these two different notions of epistemic uncertainty remains an exciting topic for future research, which hasn’t been deeply explored yet to the best of our knowledge.
- **Artifacts / non-smoothness:** QUAM finds adversarial models for every input $\boldsymbol{x}$. Thus, to create Figure 3, we applied our method to each point of an input mesh used to show the uncertainty over the whole input space. For all other methods, we applied the same sampled models to the whole input mesh at once, hence the higher smoothness. Note that if e.g. MC dropout is applied to each test input individually instead of to all of them at once, the same non-smoothness will be observed due to the different sampled models for different test points.
Thank you for pointing out the connection of the last layer ensemble to [7], we will reference it accordingly in the final version of the paper.
Regarding the aleatoric uncertainty in Tables 1 and 2, we used the estimate of aleatoric uncertainty as in equation (2). This is the entropy of the predictive distribution under the single given, pre-selected model $\mathrm{H}[p(\boldsymbol{y} \mid \boldsymbol{x}, \boldsymbol{w})]$ and therefore solely an estimate of aleatoric uncertainty.
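To make this concrete, here is a small illustrative sketch (ours, not taken from the paper; the logit arrays are made-up examples): the estimate in equation (2) is simply the entropy of the single pre-selected model's predictive distribution.

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy H[p(y | x, w)] of one model's predictive distribution.

    With a single fixed model w there is no sampling over models, so
    this entropy reflects only the aleatoric component of equation (2);
    the epistemic terms require integrating or searching over models.
    """
    z = logits - logits.max()              # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(np.clip(p, 1e-12, None))).sum())

low = predictive_entropy(np.array([8.0, 0.0, 0.0]))    # confident prediction
high = predictive_entropy(np.array([0.1, 0.0, -0.1]))  # near-uniform prediction
assert low < high    # near-uniform predictions carry more (aleatoric) entropy
```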
We greatly appreciate your insightful feedback, which has helped us to refine and clarify our work. Should you have any further questions or need additional explanations, we look forward to providing them. Thank you for your time and consideration.
---
[1] Helton, J. C. (1997). Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. Journal of Statistical Computation and Simulation, 57(1-4), 3-76.
[2] Kendall, A., & Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 30.
[3] Mukhoti, J., Kirsch, A., van Amersfoort, J., Torr, P. H., & Gal, Y. (2021). Deep deterministic uncertainty: A simple baseline. arXiv preprint arXiv:2102.11582.
[4] van Amersfoort, J., Smith, L., Teh, Y. W., & Gal, Y. (2020). Uncertainty estimation using a single deep deterministic neural network. In International Conference on Machine Learning (pp. 9690-9700). PMLR.
[5] van Amersfoort, J., Smith, L., Jesson, A., Key, O., & Gal, Y. (2021). On feature collapse and deep kernel learning for single forward pass uncertainty. arXiv preprint arXiv:2102.11409.
[6] Liu, J., Lin, Z., Padhy, S., Tran, D., Bedrax Weiss, T., & Lakshminarayanan, B. (2020). Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in Neural Information Processing Systems, 33, 7498-7512.
[7] Valdenegro-Toro, M. (2019). Deep sub-ensembles for fast uncertainty estimation in image classification. arXiv preprint arXiv:1910.08168.
---
Rebuttal Comment 1.1:
Title: good rebuttal
Comment: Thank you for the detailed rebuttal. I agree with point 2. About point 3, wouldn't it make more sense to then predict all baselines the same way, even if they have similar artifacts? At least to make a sensible comparison.
About point 1, aleatoric uncertainty, I think here I have the biggest disagreement. I think your description is a bit conflicting, because the conditional distribution has aleatoric and epistemic uncertainty mixed in it (called predictive uncertainty), and obtaining only aleatoric uncertainty from it is difficult without additional methods; that is why model stochasticity does not model aleatoric uncertainty exclusively. Since this point is not really relevant to your paper, I suggest actually removing it, since it could mislead future readers.
I will update my review accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for updating the review and the rating of our work.
Regarding point 3, we agree with the reviewer that predicting all baselines the same way on a per-test-sample basis would make a better comparison to QUAM. However, previous work such as [1], [2] or [3] conducted similar experiments on the two-moon dataset, applying the baselines on all test samples at once. We decided to stay close to the setup of prior work, but we can also change the baselines to operate on a per-test-sample basis. Moreover, for a sufficiently high number of sampled models, the artifacts will smoothen out, so we could also just do more adversarial model searches for QUAM. Either way, we will elaborate on the origin of artifacts being different models sampled for individual test points.
We fully agree with the reviewer: the debated point about aleatoric uncertainty is not really relevant to our paper, and we will change the respective section accordingly.
---
[1] Eschenhagen, R., Daxberger, E., Hennig, P., & Kristiadi, A. (2021). Mixtures of Laplace approximations for improved post-hoc uncertainty in deep learning. arXiv preprint arXiv:2111.03577.
[2] Liu, J., Lin, Z., Padhy, S., Tran, D., Bedrax Weiss, T., & Lakshminarayanan, B. (2020). Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in Neural Information Processing Systems, 33, 7498-7512.
[3] Van Amersfoort, J., Smith, L., Teh, Y. W., & Gal, Y. (2020, November). Uncertainty estimation using a single deep deterministic neural network. In International conference on machine learning (pp. 9690-9700). PMLR. | Rebuttal 1:
Rebuttal: We thank all reviewers again for the time and effort they have invested in providing their high-quality feedback.
All reviewers found our approach novel and relevant, and praised the general applicability of the method as well as its empirical performance. Nevertheless, reviewers were not unanimous about the paper's clarity of writing and requested additional ablations and evaluations, as well as further baselines for our OOD detection experiments.
We are confident that we have addressed all concerns in the individual responses and will incorporate the changes in the final version of the paper. To summarize:
- We have supported the claim that prior methods underestimate uncertainty by citing several papers and further elaborated on the empirical evidence provided in our paper.
- We have improved the clarity of writing based on the reviewers' feedback, e.g. by moving implementation details from the appendix to the main paper and introducing the formal definition of adversarial models earlier.
- We have performed three additional experiments/ablations (see attached PDF document) that further confirm the strong performance of QUAM:
1) Table 1 and Figure 1: We compared QUAM to other baselines in terms of inference speed (*as requested by reviewer EcJb*). We discovered that the inference time of QUAM can be reduced by a factor of 100, while outperforming baselines like MCD in both performance and speed.
2) Table 2 and Figure 2: We evaluated QUAM in terms of expected calibration error (*as requested by reviewer rwwy*). Although QUAM was not directly designed to be well calibrated, we discovered that it outperforms our baseline methods by providing more calibrated output predictions.
3) Table 3: We added a new baseline (MoLA) to our OOD detection experiment (*as requested by reviewer rwwy*). QUAM outperforms this new baseline in OOD detection on the FMNIST, KMNIST, EMNIST, and OMNIGLOT test datasets.
We hope that this clarifies all questions and concerns and that the additional experiments provide convincing evidence of the effectiveness of our method. Thank you for your time and efforts!
Pdf: /pdf/36954408b47132a3ad625413a26d1e8c4aa97120.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Replicability in Reinforcement Learning | Accept (poster) | Summary: This work studies the question of reproducibility in reinforcement learning (RL). They define reproducibility as an algorithm returning the same policy on two different random draws from the environment, with probability at least $\rho$. In the generative model setting, they show that there exists an algorithm which is $\rho$-reproducible, returns an $\epsilon$-optimal policy with probability at least $1-\delta$, and collects at most $O(\frac{N^3 \log 1/\delta}{(1-\gamma)^5 \epsilon^2 \rho^2})$ samples, for $N$ the number of state-actions. They also show a lower bound that this is tight (up to a factor of $1/(1-\gamma)^2$). As the $N^3$ complexity could be prohibitively large in many cases, they show that by relaxing the definition of reproducibility somewhat, they are able to obtain a complexity that scales with $N$ instead.
Strengths: 1. To my knowledge, the setting is novel: I do not think the question of reproducibility has been previously considered in the RL literature. Furthermore, the question of algorithmic reproducibility has in general seen attention recently in the broader machine learning literature, so there is general interest in the setting.
2. The results paint a fairly complete picture of the problem, with nearly matching upper and lower bounds. In addition, they show that the $N^3$ dependence can be reduced by considering a slightly relaxed formulation of the problem.
Weaknesses: 1. The paper does not clearly motivate why we should care about reproducibility in RL. While reproducibility in science in general is an issue, in RL the goal is typically simply to find a policy that (approximately) maximizes the reward, and we do not necessarily care whether two runs of the same algorithm produce identical policies, as long as they have similar performance. The paper does not give clear justification for why this problem is important, but I think this is necessary given its novelty.
2. Definition 2.6 should make clear that the algorithm is able to request which state-actions the $n(\epsilon,\delta)$ are from (if it is indeed able to). As it currently reads, it is ambiguous if the algorithm can request which state-actions they are from, or if it is simply given $n(\epsilon,\delta)$ samples from the generator—the latter option is clearly not possible since if all samples are from the same state-action it is not possible to learn a good policy.
3. Definition 2.9 is also somewhat unclear for a similar reason. My reading is that $\bar{S},\bar{S}’ \sim G$ means that samples from a particular set of state-actions are generated from $G$. Would this exclude the case when all the samples are coming from the same state-action? Also, does it exclude an adaptive algorithm which chooses which state-actions to sample from based on the data it has already seen?
4. Using the notation $\bar{r}$ for the randomness of the algorithm is somewhat confusing as $r$ is already used for the reward.
5. I understand that space is limited, but if possible it would be helpful to give some more explanation for the algorithm, in particular the replicable rounding procedure, since this is non-standard in the RL literature. Some intuition on why the $N^3$ dependence arises would also be helpful.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The primary question I have is on motivating the problem setting, as I stated in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive comments.
> The paper does not clearly motivate why we should care about reproducibility in RL... but I think this is necessary given the novelty of it.
The reviewer raises an important point. In many applications, like mean estimation, correctness of the answer ensures some form of replicability. In the RL setting, we can show that policies that are derived from two sets of i.i.d. data will yield rewards that are $\varepsilon$-close to one another. However, in many RL applications we set up the rewards as a proxy that will lead to a good policy, i.e. the object of interest is the policy itself and not necessarily its reward. Unfortunately, "standard" RL algorithms do not achieve replicability in the space of policies. This is because most of them are based on the Bellman operator, which is essentially a max operator, and it is very brittle.
As a concrete example, Yu et al. [2019] state that RL applied towards “a medical or clinical treatment regime is composed of a sequence of decision[s] to determine the course of [action] such as treatment type, drug dosage, or re-examination timing … with a goal of promoting the patient’s long-term benefits.” In such cases, where the output we mostly care about is the policy and not the numerical value of the reward, replicability in the policy estimation is crucial. Indeed, practitioners would not trust the results of algorithms that vary significantly when executed on (potentially different) datasets that are obtained using the same sampling process from the same underlying population.
Also, note that replicability within the applied RL community is not a novel concept. Numerous studies such as that of Henderson et al. [2019] and Lynnerup et al. [2019] have attempted to tackle this issue from a methodological perspective. These works provide a series of recommended steps in order to perform replicable RL research. From an algorithmic point of view, the previous works have attempted to minimize the differences in the input of the algorithm since RL algorithms are notoriously sensitive to hyperparameters, the environment, etc. We take a different approach and strive to design algorithms that are inherently more stable, therefore attacking the problem at its root.
> Definition 2.6 should make clear that the algorithm is able to request which state-actions the $n(\epsilon,\delta)$ are from (if it is indeed able to). As it currently reads, it is ambiguous if the algorithm can request which state-actions they are from, or if it is simply given $n(\epsilon,\delta)$ samples from the generator—the latter option is clearly not possible since if all samples are from the same state-action it is not possible to learn a good policy.
We would like to thank the reviewer for bringing this up. The algorithm is indeed able to specify the state-action pairs. We will clarify that in the next version of our draft.
> Definition 2.9 is also somewhat unclear for a similar reason. My reading is that $\bar{S},\bar{S}' \sim G$ means that samples from a particular set of state-actions are generated from $G$. Would this exclude the case when all the samples are coming from the same state-action? Also, does it exclude an adaptive algorithm which chooses which state-actions to sample from based on the data it has already seen?
We would like to thank the reviewer for bringing this up. Similarly to the previous point, the algorithm is able to select which state-actions it will get samples from. Our algorithms are not adaptive and, intuitively, we do not see how adaptivity could help with replicability. We will clarify this in the next version of our draft.
> Using the notation $\bar{r}$ for the randomness of the algorithm is somewhat confusing as $r$ is already used for the reward.
We thank the reviewer for the constructive comment and will change the notation to use $\xi$ instead.
> I understand that space is limited, but if possible it would be helpful to give some more explanation for the algorithm, in particular the replicable rounding procedure, since this is non-standard in the RL literature. Some intuition on why the $N^3$ dependence arises would also be helpful.
We agree with the reviewer's comment and will include the following explanation in the next version of our manuscript.
Let $Q^\star$ be the optimal $Q$-function. The replicable rounding procedure is based on a rounding scheme that appeared in [Impagliazzo et al., 2022]. It takes as input two $Q$-vectors $Q_1, Q_2$ with the promise that $||Q_1 - Q_2||_\infty \leq \rho \cdot \epsilon$, where $\rho$ and $\epsilon$ are the replicability and accuracy parameters, respectively. For that, we need $O(N/(\epsilon^2 \rho^2))$ samples. Then, we consider each coordinate $(s,a)$ separately. We discretize the interval $[0,1/(1-\gamma)]$ that $Q_1(s,a), Q_2(s,a)$ belong to by first considering a random offset $r \sim U[0,\epsilon]$ and then increments of length $\epsilon$. Importantly, this random offset is shared across the two executions. The size of the intervals that we choose for the rounding guarantees that $|Q_1(s,a) - Q^\star(s,a)| \leq \epsilon$ and $|Q_2(s,a) - Q^\star(s,a)| \leq \epsilon$, and because of the shared random offset we can show that, with probability $1-\rho$, $Q_1(s,a) = Q_2(s,a)$ after rounding. Because we need to take a union bound over the $N$ state-action pairs, we need to set the replicability parameter to $\rho/N$. Since the dependence on $\rho$ is quadratic, we get the extra $N^2$ factor in the sample complexity.
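As a minimal sketch of this shared-offset rounding (the helper `replicable_round` and the concrete numbers below are ours, purely illustrative; the actual procedure is the one of [Impagliazzo et al., 2022]):

```python
import random

def replicable_round(q, eps, offset):
    """Round q down to a grid of width eps shifted by a shared offset.

    If two executions share `offset` and their estimates are within
    rho * eps of each other, they output the same rounded value with
    probability at least 1 - rho over the draw of the offset.
    """
    return ((q - offset) // eps) * eps + offset

eps, rho = 0.1, 0.05
offset = random.Random(0).uniform(0.0, eps)   # shared across both executions

# Two estimates of the same Q(s, a) entry from independent datasets,
# satisfying the promise |q1 - q2| <= rho * eps.
q1 = 0.7312
q2 = q1 + 0.4 * rho * eps

r1 = replicable_round(q1, eps, offset)
assert abs(r1 - q1) <= eps                      # accuracy is preserved
assert r1 == replicable_round(q2, eps, offset)  # the two outputs coincide exactly
```

The shared offset is the key: a grid boundary falls between the two estimates only with probability $|q_1 - q_2|/\epsilon \leq \rho$, so both executions round to the identical value with probability at least $1-\rho$.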
Yu et al. [2019]: Chao Yu, Jiming Liu, and Shamim Nemati: Reinforcement learning in healthcare: a survey.
Henderson et al. [2019]: Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger: Deep reinforcement learning that matters.
Lynnerup et al. [2019]: Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam: A survey on reproducibility by evaluating deep reinforcement learning algorithms on real-world robots.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I would like to thank the authors for their response. I will keep my score as is. | Summary: The paper studies replicable reinforcement learning algorithms in the tabular MDP setting with an oracle generative model. The paper gives the first lower bound for the sample complexity of a $\rho$-replicable $(\varepsilon, \delta)$-optimal algorithm. To obtain this lower bound, the paper first builds an information-theoretic lower bound for the sample complexity of estimating $N$ independent, possibly biased coins, and then reduces the multiple-coin estimation problem to learning a leveled tabular MDP with an oracle generative model. By proposing an algorithm (which is basically running an existing RL algorithm for tabular MDPs with a generative model with carefully designed precision parameters, and then properly rounding the value function) with $\tilde O(N^3)$ dependency on $N$ in the sample complexity upper bound, the paper shows that the proposed sample complexity lower bound is indeed sharp in $N$.
The paper also studies a weaker replicability formulation called approximate-replicability and gives an algorithm with $\tilde O(N)$ sample complexity.
Strengths: The paper is clearly written and easy to follow for readers with background on recent theoretical RL works. The given sample complexity lower-bound in $N$ is tight. The proposed algorithms in both formulations are built upon existing oracle algorithms and simple post-processing and hence easy to understand.
Weaknesses: In my opinion, the structure of the paper can be slightly improved. The reductions between optimal Q-function estimation and the optimal policy, the multiple-coin estimation problem, and RL on tabular MDPs are at a high level easy to understand. The authors may consider compressing these contents and moving some details of the important technical steps (e.g., the replicable rounding step and how to develop the sample complexity lower bound for the replicable multiple-coin estimation problem) to the main text, in order to bring more insight to readers of the main text.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide some insights and comments on the replicability of RL in the pure exploration setting or episodic regret minimization setting (hence both without discounting factors)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper considers RL on tabular MDPs with a generative model, it would be more exciting to see results on weaker settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive comments.
> In my opinion, the structure of the paper can be slightly improved. The reductions between the optimal Q-function estimation and the optimal policy, the multiple coin estimation problem and RL on tabular MDPs are at a high level easy to understand. The author may consider compress these contents and move some details of the important technical steps (e.g., the replicable rounding step and how to develop the sample complexity lower-bound for the replicable multiple coin estimation problem) to the main text, in order to bring more insight to the readers after reading the main text.
We thank the reviewer for the comment. We plan to make these changes in the next version of our draft. Moreover, if the paper gets accepted, we will utilize the extra space to elaborate more on these points.
> Can you provide some insights and comments on the replicability of RL in the pure exploration setting or episodic regret minimization setting (hence both without discounting factors)?
This is indeed an important area for future research. In these settings, the agent will need to keep track of a potential “model” of the world by performing some Upper Confidence Bound (UCB) type of update. The crucial step to ensure replicability is that, by sharing randomness, the learner will be able to update the “model” of the world in the same way across the two executions (e.g., by doing some variant of randomized rounding in the UCB-type of updates). The main difficulty here is to figure out how to come up with a rounding scheme that on the one hand ensures replicability and on the other hand yields low regret/small number of episodes of exploration.
> The paper considers RL on tabular MDPs with a generative model, it would be more exciting to see results on weaker settings.
While we agree with the point that the reviewer raised, our main goal in this work was to provide a comprehensive treatment of the tabular setting, which is the most fundamental setting in RL. We hope and believe that our work will inspire further research in more realistic settings like the linear function approximation setting and the episodic setting. We think that the ideas we have developed in our work, like the replicable $Q$-function estimation (under the different notions of replicability we consider), will be useful in this line of work.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: The rebuttal looks good to me. I will keep my score. | Summary: The paper makes a significant contribution by introducing a theoretical study of replicability in reinforcement learning. It focuses on discounted tabular Markov Decision Processes (MDPs) with generative models and explores two definitions of replicability: exact and approximate versions. For the exact version of replicability, the authors provide a lower bound and propose an algorithm that achieves near-optimal performance, with the exception of dependence on the discount factor $\gamma$. In the case of the approximate version of replicability, the authors show an improved sample complexity.
Strengths: - The paper provides a comprehensive study on the theory of replicability, offering insights into both the exact and approximate versions of replicability. Notably, for the exact version, the authors present both upper and lower bounds, which align closely.
- The concept of replicability in reinforcement learning is highly intriguing, and this paper serves as a pioneering work in advancing the study of theory of replicability in RL. By introducing and delving into this concept, the authors open up new avenues for exploring replicability and its implications in RL research.
- Technically, the paper exhibits strong foundations and analysis, showcasing a solid understanding of the subject matter. The presentation of the research is clear and well-motivated, ensuring that readers can grasp the significance and implications of the findings effectively.
Weaknesses: - While the topic of studying replicability in RL (Reinforcement Learning) is intriguing, I have reservations about its significance. For instance, if there exists a single optimal policy, it logically follows that all RL algorithms should eventually converge to the same policy. Hence, replicability is inherently implied.
- I understand that this is primarily a theoretical paper; however, it would be advantageous to include some experimental analysis. Specifically, exploring the behavior of classical RL algorithms in the tabular case with regards to replicability would provide valuable insights.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do classic RL algorithms in the tabular case behave concerning the two notions of replicability discussed in the paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive comments.
> While the topic of studying replicability in RL (Reinforcement Learning) is intriguing, I have reservations about its significance. For instance, if there exists a single optimal policy, it logically follows that all RL algorithms should eventually converge to the same policy. Hence, replicability is inherently implied.
While we agree with the reviewer that if there is a single optimal policy, replicability is inherently implied, we believe that only a very small set of RL problems fall into this category. On top of that, if we denote by $\delta$ the gap between the reward of the optimal policy and that of the second-best policy, the number of samples we need to find an exact optimal policy scales as $O(1/\delta^2)$. Moreover, if there are two or more optimal policies, then replicability is not trivially implied. In most applications it is the case that either the set of optimal policies is not a singleton, or the difference in utility between the optimal policy and the second-best policy is so small that we are not willing to pay $O(1/\delta^2)$ in the sample complexity.
Notice that if there are multiple optimal policies, a naive approach is to add random noise to the rewards. This will ensure that, with probability 1, the optimal policy will be unique. Nevertheless, the gap between the best policy and the second best policy will be very small, which will make it prohibitive to pay the required sample complexity in order to be able to detect it.
> I understand that this is primarily a theoretical paper; however, it would be advantageous to include some experimental analysis. Specifically, exploring the behavior of classical RL algorithms in the tabular case with regards to replicability would provide valuable insights.
As the reviewer correctly pointed out, our main focus in this paper is a mathematical treatment of replicability in RL. Nevertheless, we are working on experiments which we will add to the next version of our paper. Let us comment on how classical RL algorithms in the tabular setting behave with regards to the replicability notions we consider. Most of these algorithms, like the ones in [Kearns and Singh, 1999] and [Azar et al., 2013], are based on the Bellman operator, which is inherently non-replicable. This is because the Bellman operator is, essentially, a max operator over some computed quantities, which makes it very sensitive to estimation errors. Thus, these algorithms do not satisfy the replicability definitions we consider in this work. On the other hand, in Section 4 we show that if instead of the max operator we consider a soft-max version of it, we can get results that are "replicable" under our definition of approximate replicability.
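The brittleness of the max operator versus its soft-max relaxation can be seen in a toy example (the numbers and the `softmax_value` helper are ours, purely illustrative; this is not the algorithm from Section 4):

```python
import numpy as np

q = np.array([1.000, 0.999, 0.500])            # two near-tied actions
q_noisy = q + np.array([-0.002, 0.002, 0.0])   # tiny estimation error

# The hard max / argmax is brittle: the greedy action flips under the
# perturbation, so two executions would output different policies.
assert int(np.argmax(q)) != int(np.argmax(q_noisy))

def softmax_value(q, beta=10.0):
    """Soft-max aggregate of Q-values (temperature 1 / beta)."""
    w = np.exp(beta * (q - q.max()))
    return float((w / w.sum()) @ q)

# The soft-max aggregate of the same values barely moves under the noise.
assert abs(softmax_value(q) - softmax_value(q_noisy)) < 1e-2
```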
> How do classic RL algorithms in the tabular case behave concerning the two notions of replicability discussed in the paper?
Please see our response to the previous comment.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions! | Summary: Reproducibility is a big problem in RL. This paper builds on Impagliazzo et al. (2022) on replicability in learning, and develops a replicability framework for RL. It focuses on a discounted tabular MDP setting with a generative model. The replicability problem is: given shared internal randomness, how many samples are needed for a learning algorithm to output the exact same policy with probability $1-\rho$ across two executions? It shows that $\rho$-replicability requires $O(N^2/\rho^2)$ more samples than would otherwise be needed for an $\epsilon$-optimal Q-function, where $N$ is the number of state-action pairs. This matches the lower bound, which they also provide. There are also extensions to approximately replicable policy estimation.
Strengths: 1. The paper presents a nice framework for replicability in RL which is related to the reproducibility problem.
2. The paper is well-written, and it is easy to understand.
3. The results are interesting though not surprising.
Weaknesses: My main issue with the paper is the underlying premise that the reproducibility crisis in ML and RL is due to the difficulty of replication owing to randomness. It is not! As the authors themselves remark, a lot of the reproducibility issues arise due to the need for hyper-parameters, code not being aligned with the algorithms presented in the papers, training details, environments not being set up in the same way, results reproducible only on trajectories with specific random seeds, etc. As we all well know, if we want to estimate the mean of a distribution from $n$ samples, we will not get the same estimate from two different experiments. That is why confidence intervals should always be reported: that is the standard way, in some sense, of ensuring replicability in statistics. And to get, say, 95% confidence intervals, it is fairly easy to calculate the number of samples (or sample trajectories) needed. That calculation, I would expect, would scale as $O(N^2)$ as well. So the question is: what more have we learnt from the results in this paper? I think the results in this paper are essentially confidence-interval-type calculations just dressed up nicely as $\rho$-replicability.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q1. Can the authors through some experimental work justify why their notion of replicability (particularly approximate replicability) is more suitable than reporting say 95% confidence intervals? If not experimental work, could you argue this through a simple example?
Q2. Could you elucidate how the replicability framework you introduce can practically help resolve the reproducibility issues in AI/RL?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please explain how the limitations of your framework can be overcome with further work to make it useful for practically addressing the reproducibility issues in AI/RL?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive comments.
> As we all well know... So the question is what more have we learnt from the results in this paper? I think the results in this paper are essentially confidence interval type calculations just dressed up nicely as $\rho$-replicability.
The reviewer raises an interesting and important point. In many applications, like mean estimation, correctness of the answer ensures some form of replicability. In particular, if we consider the mean estimation problem we know that, with high probability, if we estimate the mean on two sets of $O(1/\epsilon^2)$ i.i.d. data then the two answers $x_1, x_2$ will be $\epsilon/2$-close to the true one so, by the triangle inequality, they will be $\epsilon$-close to each other. Coming back to the RL setting, a similar calculation shows that the two policies that are derived from two sets of i.i.d. data will yield rewards that are $\epsilon$-close to one another. However, this does not mean that the two policies will be close under any reasonable notion of “closeness”. In other words, confidence intervals imply replicability in the value space but not in the policy space. In a lot of RL applications we set up the rewards as a proxy that will lead to a good policy, i.e. the object of interest is the policy itself and not necessarily its reward. In order to achieve replicability in the space of policies, we need to do more work than to just report confidence intervals.
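For completeness, the mean-estimation step above can be written out explicitly (a sketch in the rebuttal's notation, with $\mu$ denoting the true mean):

```latex
% Each estimate is within \epsilon/2 of the true mean with high probability:
|x_1 - \mu| \le \frac{\epsilon}{2}, \qquad |x_2 - \mu| \le \frac{\epsilon}{2}
% so, by the triangle inequality, the two answers are \epsilon-close:
|x_1 - x_2| \le |x_1 - \mu| + |\mu - x_2| \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon
```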
As a concrete example, Yu et al. [2019] state that RL applied towards “a medical or clinical treatment regime is composed of a sequence of decision[s] to determine the course of [action] such as treatment type, drug dosage, or re-examination timing … with a goal of promoting the patient’s long-term benefits.” In such cases, where the output we mostly care about is the policy and not the numerical value of the reward, replicability in the policy estimation is crucial. Indeed, practitioners would not trust the results of algorithms that vary significantly when executed on (potentially different) datasets that are obtained using the same sampling process from the same underlying population.
Also, note that replicability within the applied RL community is not a novel concept. Numerous studies such as that of Henderson et al. [2019] and Lynnerup et al. [2019] have attempted to tackle this issue from a methodological perspective. These works provide a series of recommended steps in order to perform replicable RL research. From an algorithmic point of view, the previous works have attempted to minimize the differences in the input of the algorithm since RL algorithms are notoriously sensitive to hyperparameters, the environment, etc. We take a different approach and strive to design algorithms that are inherently more stable, therefore attacking the problem at its root.
> Can the authors through some experimental work justify why their notion of replicability (particularly approximate replicability) is more suitable than reporting say 95% confidence intervals? If not experimental work, could you argue this through a simple example?
We are working on adding experimental evaluation in the next version of our draft. Let us describe a simple example where our notion of approximate replicability is more suitable than just reporting confidence intervals. Consider the problem of computing a stochastic s-t path in a graph. In this problem, the learner has access to a graph but does not know the cost/weight of each edge. We assume that this cost is associated with some distribution in [0,1]. The agent can get information about the cost by querying the edge. Let us consider a naive solution, where the agent estimates the true mean of every edge with accuracy $\epsilon$ and then returns the shortest path based on the estimated means. We can show that with probability 95% the returned path has a total cost at most $\epsilon$ more than the optimal one. However, it is not hard to see that due to the estimation error, with probability 99.99% each run of the experiment returns a different path. It is reasonable to say that such an experiment is not replicable. Our notion of approximate replicability allows, essentially, the agent to return a distribution over paths, so that i) the two distributions are “close” and ii) if we sample a path from this distribution, then with high probability, its cost will be at most $\epsilon$ more than the optimal one.
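The naive solution described above is easy to simulate. The following is an illustrative sketch only (the two-edge graph, Bernoulli costs, and sample sizes are our own assumptions, not from the rebuttal): independent runs on fresh i.i.d. samples frequently return different paths even though both paths have identical expected cost.

```python
import random

# Hypothetical two-edge s-t graph: both parallel edges have Bernoulli costs
# with the same true mean 0.5. Each run estimates both means from fresh
# samples and returns the seemingly cheaper edge.

def estimate_mean(true_mean, n_samples, rng):
    """Empirical mean of n_samples Bernoulli(true_mean) edge costs."""
    return sum(rng.random() < true_mean for _ in range(n_samples)) / n_samples

def pick_path(seed, n_samples=100):
    rng = random.Random(seed)
    est_a = estimate_mean(0.5, n_samples, rng)
    est_b = estimate_mean(0.5, n_samples, rng)
    return "A" if est_a <= est_b else "B"

# Across many independent runs, both paths typically get returned: the
# *policy* (path) is not replicable, even though its *value* (expected
# cost) is the same either way.
choices = {pick_path(seed) for seed in range(200)}
print(sorted(choices))
```

This mirrors the rebuttal's point: reporting a confidence interval on the path's cost would look fine in every run, while the returned path itself keeps changing.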
> Could you elucidate how the replicability framework you introduce can practically help resolve the reproducibility issues in AI/RL?
One important practical message our work sends regarding the reproducibility issues in AI/RL is that the max operator leads to results that are brittle and are very sensitive to estimation errors and numerical errors, which make the results non-reproducible. On the other hand, soft-max operators like the one we consider to obtain our approximate replicability results, are much more robust to these types of errors.
> Please explain how the limitations of your framework can be overcome with further work to make it useful for practically addressing the reproducibility issues in AI/RL?
Interesting next steps in this line of work are to consider settings beyond tabular RL, like the linear function approximation setting. Another direction is to consider the episodic setting and see how we can balance the tradeoff between achieving low regret and computing policies that are “close” across executions.
Yu et al. [2019]: Chao Yu, Jiming Liu, and Shamim Nemati: Reinforcement learning in healthcare: a survey.
Henderson et al. [2019]: Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger: Deep reinforcement learning that matters.
Lynnerup et al. [2019]: Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam: A survey on reproducibility by evaluating deep reinforcement learning algorithms on real-world robots.
---
Rebuttal Comment 1.1:
Title: Reviewer i28M
Comment: Dear Reviewer i28M,
We would be grateful if you could let us know if our explanation made sense to you. We hope the discussion with other reviewers has also helped address any lingering concerns.
Thanks a lot!
---
Rebuttal Comment 1.2:
Comment: I am not convinced replicability in the policy space is a worthy goal, or even possible. We can have two different policies with the same value function. And if we care only about policy replicability, we should do imitation learning, perhaps even just behavior cloning. And the problem of replicability from different datasets due to the distribution shift problem, not randomness.
Having considered the results in the manuscript, and the authors' response, I find my original score to be too generous. So I am revising it.
---
Reply to Comment 1.2.1:
Comment: We respectfully disagree with the reviewer's comments.
> I am not convinced replicability in the policy space is a worthy goal, or even possible.
The interest in the recent line of work on the formal study of replicability in ML initiated by Impagliazzo et al., which studies a similar problem as we do, shows that replicability in the space of classifiers/models and not just utility, is a topic of interest in the ML community. The comment that this goal might not even be possible is factually incorrect, since our results show that it is, indeed, possible. In fact, in the case of approximate replicability, we show that this property can be achieved with poly-logarithmic sample complexity overhead. This discussion also contradicts the comment that our results are not surprising, which was raised by the same reviewer.
> We can have two different policies with the same value function.
As we explained, in many applications having the same value function is *not* the end goal.
> And if we care only about policy replicability, we should do imitation learning, perhaps even just behavior cloning.
We are a bit confused about this comment. Our work provides a black-box transformation to achieve replicability in the policy space, no matter what the underlying algorithm we begin with is. We don't see why restricting ourselves to imitation learning and behavior cloning would provide any benefit. In the case of exact replicability, we provide lower bounds which illustrate that the approach the reviewer is suggesting will not lead to improved sample complexity compared to our current results. We underline again that in the case of approximate replicability, our transformation comes, essentially, at no cost in the sample complexity of the algorithm. We would be happy to provide further clarifications if the reviewer can explain what they mean by their comment.
> And the problem of replicability from different datasets due to the distribution shift problem, not randomness.
This is factually incorrect. Even if the datasets are coming from the same distribution, as we have argued in our work and in our rebuttal, the policies that the algorithm outputs can be significantly different. Of course, the problem of replicability becomes even more difficult in the setting of distribution shifts, but this is beyond the scope of our work. We hope and believe that our work will provide the foundation to study the problem of replicability in more complex environments, like the ones that have distribution shifts. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Goldilocks of Pragmatic Understanding: Fine-Tuning Strategy Matters for Implicature Resolution by LLMs | Accept (spotlight) | Summary: The paper introduces a task to evaluate the ability of large language models (LLMs) to resolve conversational implicatures and go beyond the literal interpretation of the meaning of the language. The evaluation is based on a dataset of naturally occurring implicatures that are converted into a binary classification task using a set of templates. For instance, the dialogue _“Can you come to my party on Friday?”_, _“I have to work”_ is converted into two sentences: _Esther asked “Can you come to my party on Friday?” and Juan responded “I have to work”, which means no_ and _Esther asked “Can you come to my party on Friday?” and Juan responded “I have to work”, which means yes_. The two sentences are input into the different models, considering the one maximizing the likelihood as the model choice.
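The scoring protocol described in the summary can be sketched in a few lines. This is an illustrative stand-in, not the paper's code: `sequence_log_prob` represents any LM scoring function (e.g. summed token log-probabilities), and the template wording is only one of the paper's several templates.

```python
def classify_implicature(sequence_log_prob, question, response):
    """Score both resolutions of the implicature; return the likelier one."""
    template = ('Esther asked "{q}" and Juan responded "{r}", '
                'which means {label}.')
    yes = template.format(q=question, r=response, label="yes")
    no = template.format(q=question, r=response, label="no")
    return "yes" if sequence_log_prob(yes) > sequence_log_prob(no) else "no"

# Toy scorer for illustration only: assigns a higher score to strings
# containing the substring "no" (a real evaluation would query an LM).
toy_scorer = lambda s: s.count("no")
print(classify_implicature(toy_scorer,
                           "Can you come to my party on Friday?",
                           "I have to work"))  # prints "no"
```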
Four categories of state of the art models are evaluated on the proposed task. The evaluation protocol takes into account potential variance induced by prompt templates. Different methodologies are also benchmarked to try to improve the performance of the models on the task, this includes:
- Fine tuning on the task.
- Instruction tuning: adding k examples from the training set in the prompt;
- Chain of thought prompting
Results demonstrate that the task is challenging for base models (BERT, RoBERTa, GPT2, GPT3, ...) with a performance slightly above random. Without any fine tuning, some models already reach a performance above 70% (text-davinci-00x, ChatGPT), with GPT4 only 5 points behind human performance. The improvement induced by instruction tuning and chain of thought prompting is further demonstrated for some models with GPT4 reaching human performance.
Strengths: The paper is well written with all the material provided to reproduce the results. The paper introduces an original evaluation protocol for the task of resolving implicatures and demonstrates that human performance can be reached by combining state of the art models (GPT-4) and the recently proposed chain-of-thought prompting.
Weaknesses: The task proposed is not new and the dataset proposed is the same as the one introduced in [BigBench](https://arxiv.org/abs/2206.04615). The results, in the mentioned paper, already demonstrated the ability of PaLM with k-shot-prompting to [perform](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/implicatures) above average human performance. The main differences between the two papers include the choice to keep ambiguous cases for the task and adding other models along with a chain of thoughts prompting to the benchmark.
In the benchmark, the variance in the performance of the models, displayed in Table 2, only takes into account the variability induced by different prompt wordings. To compare the models, it would have been helpful to perform a bootstrap on the test set to estimate the variance and support the claims about the relative performance of the different models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why has no fine tuning on the base models been performed?
Do you have the increase in performance induced by chain of thoughts prompting for GPT4 for particularized (context heavy) examples?
l107 _"Our analysis is novel in its approach; using ambiguous data that humans can easily resolve, and scope"_ Maybe reformulate, not very clear
GPT4 doesn't seem to benefit from in context example. How do you explain that?
Did you try to isolate the effect of data contamination? Especially for GPT-4, the model could have been trained on the data used in your evaluation protocol given that the original dataset was published in 2020
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The proposed evaluation framework focuses on conversational implicature. If implicature is key for natural language understanding, evaluating models on conventional implicature is also important. If outside of the scope of this well conducted evaluation, it could be of interest for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to thoroughly review, rating the soundness and presentation as good. We are happy the reviewer believes our evaluation protocol is original and all the material is provided to reproduce it. We justify below why _our contribution is especially important in light of the BIG bench result the reviewer mentions_, which we believe is neither a sound nor a reproducible empirical result.
To summarise our argument, further detailed below;
1. The method of the implicature task contributors in BIG bench raises serious questions as to the validity of their claims
2. The result is based on base LLMs; no models with SotA fine-tuning methods are evaluated
3. It is not discussed in a peer-reviewed (or any) paper
Additionally, we discuss the bootstrap estimates of the std error, why data contamination is unlikely, and answers to other questions. We hope that this will lead to strengthening your recommendation or letting us know what still stands in the way, so that we may further improve our paper.
### Clarifying misunderstandings in the reviewer’s summary of the submission
Before we address the reviewer’s comments in detail, we’d like to clarify what we believe is a misunderstanding in the reviewer's summary. We don't fine-tune models on the task. We instead look at _emergent_ pragmatic understanding from different domain-general SotA LLM training and fine-tuning methods. Further, the reviewer says we perform “_Instruction tuning: adding k examples from the training set in the prompt;”._ We apologise for the confusion, but instruction tuning is a different method from adding k examples to the prompt. The latter is a prompting technique requiring no weight updates to the model, whereas the former is a SotA fine-tuning method. This is mentioned in line 47-49 and line 179-182. We will make sure to update these explanations to prevent future confusion by other readers.
### The BIG bench result is not an empirically sound or reproducible result
We respectfully disagree with the reviewer's assertion that our work lacks contribution due to the existence of the BIG bench task using the same dataset and presenting similar results, although we can understand that at first glance this might seem to be the case. For a detailed argument, please refer to the [global author rebuttal](https://openreview.net/forum?id=5bWW9Eop7l&noteId=RgN3e2U0c5).
### Bootstrap on the test set to compare different models
Bootstrap estimates of the standard error are actually a feature of the evaluation library that we used (see lines 207-233 on GitHub in the file `lm-evaluation-harness/lm_eval/metrics.py`, function `bootstrap_stderr()`), so we do have these numbers. We didn't include them because the variation was always much lower than the variation across prompts, which we felt would be more interesting to most NLP researchers. Once we upload all result files, the bootstrap estimates will also be included.
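For readers unfamiliar with the procedure, here is a minimal sketch of bootstrap standard-error estimation over per-example correctness. It is illustrative only: the function name echoes, but does not reproduce, the harness's `bootstrap_stderr()`, and the example numbers are invented.

```python
import random

def bootstrap_stderr(per_example_correct, n_resamples=1000, seed=0):
    """Bootstrap estimate of the standard error of the mean accuracy."""
    rng = random.Random(seed)
    n = len(per_example_correct)
    means = []
    for _ in range(n_resamples):
        # Resample the test set with replacement, record the resampled accuracy
        resample = [rng.choice(per_example_correct) for _ in range(n)]
        means.append(sum(resample) / n)
    grand = sum(means) / n_resamples
    var = sum((m - grand) ** 2 for m in means) / (n_resamples - 1)
    return var ** 0.5

# e.g. a 600-example test set with 70% answered correctly; the bootstrap
# estimate should land close to the analytic sqrt(0.7 * 0.3 / 600) ~ 0.019
outcomes = [1] * 420 + [0] * 180
print(round(bootstrap_stderr(outcomes), 3))
```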
### Conventional implicature is out of scope for this work
The reviewer says _“evaluating models on conventional implicature is also important. If outside of the scope of this well conducted evaluation, it could be of interest for future work.”_ We agree that studying conventional implicature is interesting, but indeed, it is out of scope of what is being studied here. We look at how models can resolve implicatures that require context to be resolved, whereas conventional implicatures are resolved by the conventional meaning of the word. In fact, linguists sometimes argue conventional implicatures are part of semantics, not pragmatics (see appendix D).
### Answers to questions
_Why no fine tuning on the base models?_
We believe the reviewer suggests an interesting avenue for future work with this question, one that is out of scope for our current work (we do not have the computational resources), and we hope someone with the available compute picks up this question.
_Increase in performance from CoT for GPT4 for particularized examples?_
On particularised examples GPT-4 achieves 81.6% accuracy with CoT prompting (from Table 28) and 71.2% 5-shot (from Table 20), so the improvement is ~10%.
_Why does GPT4 not benefit from in context examples_
It’s a good question but it’s hard to say because we don’t know details of GPT-4. We think this is due to more extensive instruction tuning allowing it to better respond to instructions zero-shot. This is partly corroborated by our results in appendix I.6 that show that the models mostly benefit from in-context examples because of the formatting.
_Effect of data contamination_
We believe data contamination is not an issue for this benchmark for multiple reasons. When we evaluate GPT-4 with only the question, the performance is close to random (e.g. question-only 0-shot is ~49% and 5-shot is ~54%). We can add these results to the appendix to clarify this in the paper. Additionally, when we search for the dataset in the large-scale internet-scraped datasets, we do not find any matches. This can be verified with the Gaia search tool on HuggingFace. Finally, the pattern of performance that GPT-4 achieves maps closely onto human performance. If it had memorised the dataset, its performance should be higher than human performance, as our human evals are not public yet.
### Concluding remarks
We hope this adequately addresses the reviewer’s concerns. We will make the following changes to the manuscript to clarify the arguments laid out above:
1. adjust the related work to summarise the argument about BIG bench.
2. rewrite Appendix H to get across the above arguments in more detail.
We would like to thank the reviewer for raising these issues, as we believe clarifying the presentation issues above will make the paper’s contribution clearer. We hope you consider raising your score, and if there are any outstanding concerns that would inhibit you from doing so, we will gladly address them.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for answering all the concerns mentioned. Once again, I think it's a well conducted paper with a comprehensive evaluation providing interesting insights on the ability of LLMs to resolve conversational implicatures.
- About instruction tuning, you are completely right and sorry for the confusion. And you nicely demonstrate the benefit on this methodology to resolve implicatures.
- About the BIG Bench task, I understand the two limitations you are mentioning: overestimation of the human performance and the other one about the choice made to keep only the subset of unambiguous examples in the BIG Bench benchmark. About the human performance, you performed a thorough evaluation, involving 5 annotators for each subset of 150 examples with metrics on annotator agreement. This is way more detailed than the BIG BENCH paper. About the choice, made in BIG Bench, of keeping the subset of *unambiguous* cases, it seems that the annotators, in your paper, are able to perform significantly above chance level. So, indeed, they might have been too conservative when selecting the subset but I don't think it makes the task more interesting. You also mention that the *BIG bench task uses only base LLMs and no state-of-the-art fine-tuning methods*. It's important to note that they reach above human performance relying on it.
- About the effect of data contamination, thanks a lot for providing the performance with the question only for GPT-4. With those numbers, it seems indeed that data contamination is unlikely. I think adding this in the Appendix could indeed be helpful. Could you also specify how you evaluated GPT-4 with question only?
Finally, I want to thank the authors for addressing all the comments. The paper is well written with a well conducted benchmark, ensuring reproducibility with all the details provided. My main concerns were about the novelty of the work done and the potential moderate impact given the BIG BENCH paper. To be more specific, the task seemed already to be [solved](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/implicatures) with the BIG Bench paper, where they demonstrated that PaLM with k-shot-prompting performed above average human performance. Given that the BIG Bench paper is not peer reviewed and that evaluation is way more detailed in your paper, I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for thoughtfully responding to our rebuttal, and revising their score. We are curious as to what would hold the reviewer back from stronger support for the paper. We believe it might be down to some outstanding misunderstandings, and would appreciate working with you both to ensure that the writing can be amended where needed such that other readers do not make them, and in the hope that you will consider bringing your level of support for the paper in line with the other reviewers.
**BIG bench**
We believe (correct us if we’re wrong) that what is standing between the reviewer and stronger support for the paper is related to a perceived limited impact due to the BIG bench result, as the reviewer also mentions they think the paper is *"well conducted paper with a comprehensive evaluation providing interesting insights on the ability of LLMs"* and say that we are *"ensuring reproducibility with all the details provided"*. Below, we aim to clarify that the difference with the BIG bench is not *just* that our work was peer reviewed and theirs wasn’t, but that—beyond the novel analysis and model comparisons in our paper—**our version of the task is also harder**, with differences in the data distribution and experimental protocol, which we believe makes the evaluation **fairer and more informative**.
We expect PaLM's few-shot score to be lower on our dataset due to the difficult additional examples in our benchmark comprising 30% of the total data. We expect BIG bench human performance to be underestimated due to low-quality human evaluation, and actual human performance to be higher, like on our benchmark. Therefore we cannot be sure that PaLM would reach above human level performance on our benchmark, and with it, solving the task -- in fact it seems quite unlikely.
Taking all this into account, we would argue that you simply cannot say much about implicature performance of contemporary LLMs based on the BIG bench result at all, and in turn our paper is **the first to actually provide this contribution**.
**Spurious correlation experiment**.
We re-ran the benchmark fully but removing the responses from the examples. We did this 0-shot and 5-shot, where we did the latter to give 5 examples of how to potentially utilise spurious correlations in the benchmark. The results are as follows:
| Question-only | 0-shot | 5-shot |
|---------------|----------------|----------------|
| ChatGPT | 54.3% +/- 3.3 | 41.7% +/- 12.4 |
| GPT-4 | 48.9% +/- 10.5 | 53.7% +/- 0.5 |
The results for ChatGPT and GPT-4 are mostly random, meaning that it's unlikely these models will utilise spurious correlations in our benchmark in a few-shot setup.
**Conclusions**
We hope that clarifies the fact that the contribution of this paper is indeed **substantive and novel**. To further clarify the writing regarding this, we will revise the related work section to get the above across more clearly. Specifically, we will focus more on the claim that we still don’t know LLM’s performance compared to humans from the BIG bench result because it likely overestimates LLM implicature understanding and underestimates human performance. We would be thankful if you had any further suggestions regarding where we could make this distinction clearer, and hope you will be in a position to strengthen your support for the paper.
Strengths: * The paper offers a very comprehensive / thorough / careful evaluation of the LLMs being evaluated. Particularly, the try a number of different prompt variations, try different numbers of few-shot examples (as well as having a randomization process for choosing these examples, to avoid confounding results with some fixed set of examples)
* The finding that few-shot examples are useful mostly for conveying format of the response, rather than as a way of defining the task itself, is very interesting. I would be interested to see a more in-depth study of this (perhaps another paper) as it is somewhat surprising (but perhaps less surprising for instruction-tuned models).
Weaknesses: In general, I wanted to see a bit more discussion and analysis, since this paper is focused on analysis.
* Moving the analysis for other types of implicatures into the main paper
* More examples of what the model is generating, particularly for CoT examples
* More discussion / analysis of human "errors" here -- why might there be errors? Genuine noise in the annotation process, or different interpretations of the same example?
* Discussion on why instruction tuning might be influential in improving pragmatic reasoning
* Discussion / analysis on why there is still a gap for particularised implicatures, and what we can do to address it
Nits:
* The first example (GPT's answer to the user's question about the phone) actually doesn't necessarily make sense, as it's definitely not literally true -- GPT can't have seen the user's phone
* "fine-tuning on instructions at the example-level" is vague to me, particularly the "example level" part. What is this supposed to refer to? Is there a distinction between general "instruction-tuning" and "instruction-tuning ad the example level"?
* Also, "context" in Insight 5 is vague -- I wouldn't think of this kind of commonsense/world knowledge as "context" necessarily, so I'd suggest renaming this
* Formatting of Figure 2 is somewhat odd
* I'd suggest adding a line for "human performance" in Figure 2
* I'd suggest adding the few-shot CoT example (or some examples of generated CoT answers) into the main paper
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is there evaluation on this dataset with respect to an upper bound in performance given only the response, without the question? I'm curious whether answers like "I've gotta get up early" or "Some" have some spurious correlations with the labels, e.g., "no", in this dataset.
* Is the accuracy actually decreasing relative to k in Figure 4? I.e. is this difference significant? If so, why might that be?
Minor questions:
* Are there cases where examples are ambiguous, and situational context might affect the label? Did you investigate cases where the human annotators were "wrong" to identify the source of those "errors"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, mostly. More discussion on these non-binary implicatures (at the end of the discussion) would be interesting. I'd also be interested if the authors can perform analysis on the existing benchmark to see whether the labels can be predicted with only one half of the example (i.e., the response); i.e. whether there are spurious correlations within the benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive review, saying the paper offers a _“very comprehensive evaluation”_ and that _“the finding that few-shot examples are useful mostly for conveying format [..] is very interesting”_. Below, we address questions. To summarise, we:
- present new results showing that spurious correlations are likely not an issue
- show the source of human errors
- present CoT generations and how those address the gap for particularised examples
- speculate on why instruction tuning is important
We are confident the reviewer’s suggestions will contribute to a better manuscript and hope that our responses below will lead to strengthening your recommendation or letting us know what still stands in the way, so that we may further improve the submission.
### Potential spurious correlations
This is an interesting suggestion, so we ran versions of the benchmark with only the question or response. Getting the implicature right from the response only does not always indicate spurious correlations, as some examples only need the response (e.g. rhetorical questions like ‘do pigs fly?’). Question-only results do always indicate spurious correlations.
|Response-only|0-shot|5-shot|
|---|---|---|
|ChatGPT|59.2% +- 4.7|58.3% +- 6.6|
|GPT-4|62.6% +- 1.7|65.5% +- 1.1|

|Question-only|0-shot|5-shot|
|---|---|---|
|ChatGPT|54.3% +- 3.3|41.7% +- 12.4|
|GPT-4|48.9% +- 10.5|53.7% +- 0.5|
Models mostly perform at random for question-only, so spurious correlations don't seem to be an issue. For response-only, GPT-4 5-shot gets 65%. Some examples it gets right are: “do fish swim?” and “let's hope so”.
### CoT generations + gap on particularised examples
For this analysis, we propose to add an appendix section that looks at the examples that GPT-4 got right with CoT and wrong with 5-shot. This answers the question about what can fill the gap on particularised examples; CoT closes it for GPT-4 (see Table 28). Explicit reasoning helps, and an example can clarify how (which we will add to the main text). We find models usually get the following wrong:
> A: Is there a bus I can get to the station?
> B: You can’t rely on it
> Implicature: yes
GPT-4 5-shot gets this wrong for all 6 templates. With CoT it gets it right for 5 of 6 templates.
The CoT GPT-4 generates is useful:
> Alice says 'You can't rely on it.' Alice must be implying that there is a bus, but it may not be dependable or timely. This means the response to Bob’s question is yes, but with a caution about reliability. Answer: yes
### Source of human error
We agree this can be added to the main text. We will add that the score we expect a model to achieve is not 100%, but human best. Some examples can be interpreted in different ways, so there is no single right answer, only the answer most people give. We find that part of the errors in our human eval stem from different interpretations of the same example, plus a few annotation errors. One example that people disagree on is:
> A: “Was that easy to negotiate?” B: “That is as easy as shooting fish in a barrel.” Implicature: yes
It depends on whether you think it’s easy to shoot fish in a barrel. We propose to look into the examples that all humans get wrong (likely annotation errors) and examples that humans disagree on (likely multiple interpretations), and summarise the results in the main paper.
### Why instruction tuning (IT) at the example-level is influential
To answer the reviewer’s question on why we distinguish between benchmark-level and example-level IT; the former is where annotators write a single instruction for an entire dataset. The models are then fine-tuned on each example from the dataset with the same instruction. By contrast, in example-level IT each example in a dataset gets a new instruction, resulting in a more diverse dataset. We will clarify the distinction further in the paper.
We think example-level IT is important for pragmatics because it provides examples of pragmatic inferences (if the instruction is ambiguous, the annotators are asked to infer the intent; see sec. 3.6 in [1]). It also provides diversity; each example is a new task with a tailored instruction (see Appendix A.2.1 on p. 26 in [1]). In the discussion we suggest future work should look into the effect of data diversity on pragmatic inference (line 297-298); we will work this out in more detail.
[1] Ouyang et al., 2022
### Analysis for other types of implicatures into the main paper
We appreciate this suggestion, and also find the other types interesting, but the patterns we find are not significant (overlapping confidence intervals), so it’s difficult to say anything about this in the main paper.
### Discussion on non-binary implicatures
Could the reviewer clarify what they would like discussed? Much discussion is out of scope, because we focus on binary implicatures, but we could for example discuss methods for evaluating non-binary implicatures.
### Other questions
*“decreasing accuracy w.r.t. k in Fig 4?”*
This decrease is not significant w.r.t. k=0. Reasons for the absence of an increase with k could be that the examples only clarify the format, which can already be clear at k=1, and that more examples might introduce higher variance.
On using the word “context”: we respectfully disagree that we should rename this; it's meant to encompass things like commonsense and world knowledge. We'll clarify this in the paper.
We will clarify what human performance is in Fig 2.
### Concluding remarks
We thank the reviewer for all suggestions. We propose to add to the paper:
- the spurious correlations experiment
- analysis on the source of human errors
- a CoT completion with a discussion on how this fills the gap for particularised examples
- why example IT might be important
We believe the paper will be stronger after these additions. We hope you feel in a position to support the publication more strongly as a result; if not, we will gladly address additional concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply! In terms of discussion about non-binary implicatures, apologies for a vague suggestion. I do think discussing evaluation would be interesting (both metrics, and how to collect this kind of data), though don't want to draw from the focus of the paper. Thanks also for the additional analysis and examples. Will update my score as this reply has addressed many of my concerns!
---
Reply to Comment 1.1.1:
Title: Thanks for the response!
Comment: We thank the reviewer for the response and the increase in rating. We will allocate some of the additional page in a final draft of this work to discussing non-binary implicatures in more depth. We are confident that with your thoughtful review we have been able to improve the manuscript; thanks for your engagement! | Summary: The authors analyze the behavior of LLM pragmatic understanding (the capability to add and omit information to communicate efficiently in context) using a new evaluation protocol. They compare LLM performance to human performance and argue that instruction fine-tuning with examples may improve pragmatic understanding.
They propose using the assignment of higher likelihood to coherent utterances than to similar but incoherent utterances as "resolving an implicature correctly." This includes describing a situation where implicature understanding is necessary to resolve what is essentially an entailment question: whether an answer that on its face is irrelevant means "yes" or "no," or whether it is a sufficient answer. For example, `Esther asked “Can you come to my party on Friday?” and Juan responded “I have to work”, which means yes.` is a failure case because the answer is wrong; the response means no. A model preferring `...which means no.` resolves the implicature.
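The likelihood comparison described here could be sketched as follows. This is a hypothetical reconstruction, not the paper's code: `loglik` stands in for a real model's log-probability scorer, and the template wording is an assumption.

```python
# Sketch: a model "resolves" an implicature if it assigns higher likelihood
# to the coherent completion than to the incoherent one.

TEMPLATE = 'A asked "{q}" and B responded "{r}", which means {label}.'

def resolves_implicature(loglik, q, r, gold_label):
    """Compare model likelihoods of the coherent vs. incoherent completion."""
    wrong = "no" if gold_label == "yes" else "yes"
    coherent = TEMPLATE.format(q=q, r=r, label=gold_label)
    incoherent = TEMPLATE.format(q=q, r=r, label=wrong)
    return loglik(coherent) > loglik(incoherent)
```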
They assess this problem in few- and zero-shot settings for LLMs. They control for prompt-variation effects with 6 different prompt settings, testing a fairly comprehensive set of both generative proprietary LLMs and standard open pretrained transformers like RoBERTa. They find no model can beat human performance on their resolution task, but GPT-4 comes closest.
They conclude with a detailed analysis, with 5 insights.
Strengths: Well-defined and scoped problem. Out-of-community definitions (eg, explaining pragmatics) well-motivated. Citations are sufficiently comprehensive in my opinion.
Exciting direction to explore to further understanding and engineering of LLMs.
Fully-specified approach and sufficiently broad evaluation.
Clear statement of insights.
Overall, I think this paper has clear value to the community.
Weaknesses: I think it would be possible to use more than 6 "curated prompt templates" (163-172). The "we control for prompt variability" claim is really not central to the story of the paper, but I don't find averaging over 6 prompts sufficient to argue that this has happened. Perhaps the claim of robustness should be weakened.
~They claim task-specific instruction tuning improves implicature resolution, but there's no concrete A/B test for this. They don't instruction fine-tune a model themselves for the task (not that doing so would necessarily be possible given constraints). How do we know that IFT is really driving GPT4's superior performance and not just scale or some other factor? I find the evidence presented here unconvincing~
EDIT: In light of the authors' response, I do find the case for IFT driving the improvement a bit more convincingly made, or at least as convincingly made as it can be given the inherent constraints (OpenAI's closedness, for example). I hope to see the authors' response to this point integrated in the camera-ready, perhaps with some weakening of claims in light of the incomplete knowability of this issue.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Maybe respond to the questions I raised implicitly in the weaknesses? I am open to being convinced.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I find limitations to be sufficiently addressed this time.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time, and for the supportive and positive review. We are happy to see the reviewer believes the soundness and presentation are excellent and the contribution is good, stating the paper *“has clear value to the community”*. Below, we address the questions raised. To summarise:
- we argue that six templates to control for sensitivity to prompt wording is significantly more than the standard in the LLM literature (namely one), and that using more quickly becomes prohibitively expensive.
- we highlight the results in the submission that show that instruction-tuning at the example-level is important for pragmatic understanding; most notably the sharp increase in performance between Cohere-base-52B and Cohere-command-52B, which is only due to instruction tuning at the example level. However, we agree that verifying this in a controlled study is an interesting future work direction (albeit outside our computational budget).
We hope the reviewer will find the answers satisfactory, and will seek to make clarifying tweaks to the writing to anticipate such questions arising for future readers of the paper.
### Only six prompt templates
We appreciate this suggestion, as we agree with the reviewer that in essence we still cannot be sure there is not some prompt that the model will fail on (or perform better on), and that this is an important shortcoming of LLM evaluations more generally. With the below response we’d like to show that we use many more prompt templates than is the standard in LLM literature (namely 6 instead of only 1 template), and hence believe the claims of likely robustness hold up. Additionally, using more templates quickly becomes prohibitively expensive, which is why most papers likely don’t use more than one or two.
While we agree with the reviewer that six prompt templates will not cover the full scope of potential wordings of the problem, we still believe our results show robustness to prompt wording and that our claims are not overstated. This is mainly because:
1. We find in general low variance w.r.t. prompt wording for models that perform well, showing that variability in wording is most likely not going to be an issue.
2. The standard in LLM evaluation papers is still to report results on only a single prompt template (e.g., many of the tasks in BIG-bench; the highly cited zero-shot reasoning paper (Kojima et al., 2022), which uses 2 templates differing only slightly in wording; the recent LLM MPT-7b presented by MosaicML in a blog post; or Hu et al. 2023, Appendix A (”a fine-grained comparison …”), who use one prompt template per task).
3. This common pattern of using relatively few prompt templates is likely because more quickly becomes prohibitively expensive: we multiply all our evaluations by 6 for each model, so instead of 600 evaluations per model with 1 template per example, we run 3600 evaluations per model. Taking into account that we evaluated 17 model classes, each with multiple model sizes, totalling 49 different models, adding more templates runs into high costs pretty quickly. A pragmatic balance between exploring the effect of prompt templates and affording to run experiments on a budget needed to be struck.
Furthermore, because our 600 examples in the dataset are naturally occurring, they already span a broad coverage of different topics and writing styles, showing that a model being able to achieve human-level performance on them is likely to be able to generalise to different examples. A small selection of 3 examples from the test set:
*A: You were a smoker? B: Two packs a day.*
*A: Do you think we were right? B: I think you’d lose.*
*A: Do you have any ketchup left? B: We are swimming in it.*
We hope this adequately addresses the concern of the reviewer regarding claims of robustness, and if not we’d be happy to consider caveating claims of robustness in the paper if the reviewer can elaborate on this still being necessary in light of our arguments above.
### Claiming task-specific instruction-tuning improves implicature resolution
Thanks for bringing up a valid point of discussion here. We agree with the reviewer that A/B testing is important, but as the reviewer notes as well, this is out of scope given our computational constraints. We believe our results do sufficiently convincingly show that instruction-tuning at the example level is important. For example, Figure 3 (right) shows that base models at similar scales as IFT models perform significantly worse. We see that Cohere-command 52B significantly outperforms Cohere-base 52B, and the only difference between those models is instruction-tuning at the example level (Cohere-command is fine-tuned from Cohere-base). In fact, Cohere-command 52B outperforms other base models more than 3 times the size by a large margin (e.g. GPT-3 175B, BLOOM-176B, OPT-175B).
Then the question remains, how do we know example-level IT is the driving factor for GPT-4 and the other OpenAI models? The answer is that we can’t be sure because of OpenAI’s policy of secrecy, but we can be fairly confident: we evaluated 10 models across 6 model classes and two APIs in the example-level instruction-tuned group (of which GPT-4 is one). Within this group, models probably vary significantly in other training and architecture details (especially Cohere-command models versus OpenAI models). This means the most significant difference between the example-IT models and the other model groups is instruction tuning at the example level, making it likely that this is the driving factor in their performance. Nonetheless, we agree with the reviewer: an important future work direction would be to verify our findings with a controlled study of the effect of instruction fine-tuning on implicature resolution.
We thank the reviewer again for taking the time, and hope the questions raised in the weaknesses section have been adequately addressed. If not, please let us know, and we will gladly look into addressing them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful response. On the IFT claim, I am convinced, and have updated my confidence. I would encourage the authors to add this to the discussion section in the camera ready if it isn't already there! :) | Summary: This paper investigates how well recent LLMs preform on resolving conversational implicatures. It presents a task on conversational implicature resolution built on top of the crowdsourced and human-annotated dataset of George and Mamidi (2020). The work presents an evaluation protocol that lays out how LLMs are evaluated on implicature resolution. The work further presents an array of experiments on various LLMs to understand whether they do well on this task and in what settings (e.g. 0-shot vs 5-shot). Results are analyzed in the form of insights and main takeaways that the authors highlight.
Strengths: - Addresses an important problem, namely studying the competence of LLMs at pragmatic inferences—here specifically implicatures
- Extensive experiments on a wide range of recent LLMs to validate how well these LLMs perform pragmatic inferences involving implicatures
Weaknesses: - The work is poorly situated with respect to prior work on computational modeling of pragmatics and the related work section misses an important line of work which I discuss in the comments.
- Further analysis is required in certain cases (see comments below)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Intro:
Line 28: “utterances that convey something other than their literal meaning”
At the first pass of this sentence and reading “other”, I read it as suggesting implicature conveys something else different/unrelated to the literal meaning, and I had to reread it to fit it within the definition of implicature. I suggest replacing “other” with “beyond” as I feel that fits better the description of implicature.
Section 2:
As a work that is, in essence, motivated by investigating how good LLMs are at “interpreting language in context—incorporating its pragmatics”, I feel the related work section leaves out an important line of work on the computational modeling of pragmatic phenomena including implicature and the closely related phenomenon of presupposition and on understanding the limitations of modern NLP models in capturing these phenomena starting with the work of Cianflone et al 2018 on modeling adverbial presupposition, Schuster et al 2020 & Li et al 2021 on scalar implicature, the ImpPress dataset (Jeretic et al 2020) on presupposition and implicature in the context of NLI and related experiments on how good NLI models (including BERT-based models) are at capturing these phenomena, Kim et al 2021 on presupposition verifiability in the context of QA, Parrish et al 2021 with the NOPE presupposition corpus, Kabbara et al 2022 on the limitations of NLI models in resolving presupposition cases. These first come to mind. Despite these efforts focusing on (what is now) smaller LMs (with the exception of the first mentioned work, which precedes the BERT era), I believe they are important efforts that set the stage for the present work and need to be cited and imo are more relevant to the work than some of the information mentioned in the first paragraph of Section 2 (e.g., how some models are toxic, unhelpful, etc). Appendix D has a nice background on implicature from a linguistics/philosophy-of-language perspective but still misses all of the work I mentioned here on modern computational modeling of pragmatic phenomena.
Section 3:
A general concern of mine is the benchmark contribution of the paper given that the same dataset has been introduced in BIG-bench. The authors are clear about how their dataset is different given that it encompasses instances that are discarded in BIG-bench (for being too “ambiguous”) but regardless the overlap between the two is obviously substantial and so I felt this is something that could benefit from a discussion here.
Section 4:
Regarding the random label experiment and looking at the related section in the appendix (I.6), two points/questions come to mind:
1) Given that there are two labels in question, did you verify how often the label was not changed? That could eclipse the results we’re seeing here.
2) This experiment and the close results in some cases (for 5-shot text-davinci-001 and cohere-command-52b) bring up the following question: to what extent are the models learning any notion of pragmatics here? Did you carry out some qualitative analysis in a systematic way to see whether the models are in some cases getting it right for the wrong reasons? For example, Kabbara et al. 2022 showed that for presupposition-based NLI, RoBERTa and BERT were getting high accuracy on certain presupposition types, but upon further investigation and adversarial testing, it turns out the models are often exploiting superficial cues that are not related to notions of pragmatics. I wonder to what extent this could be applicable to the task you’re exploring here.
Regarding the analysis in Appendix I.4 (varying k in-context examples), apart from the conclusion that is in the body of the paper (Section 4, Insight 2), I somewhat feel the results are so noisy for Cohere-52B and OPT-175B that it’s hard to really make sense of the results. InstructGPT3-175B is the only case where the results follow our intuition: the more examples a model sees, the better it gets at understanding the notion of implicature and so the higher the accuracy. For Cohere, we see a very noisy trajectory across the board between k = 1 and 15 (before the results start stabilizing and moving upwards) and for OPT, the results are mostly on a downward trajectory except for a chunk in the middle and end up still going downward for k>15. Would like to see if you have any thoughts on this.
Table 3: Regarding the 5-shot CoT performance
We see variance between the text-davinci models. Generally, although specific details are not public, we can assume models got better as newer versions were released. So it makes sense to see the performance improve for the 5-shot and 5-shot CoT scenarios across the 3 davinci models. However, it’s still unclear why CoT would help marginally in one case (002) and in a rather noticeable way (003) but hurt the performance in the 001 case. Any insight there? This is even more pronounced when the drop is not trivial (7%) and is the only case among 6 (if we were to count the -0.1% as roughly not helping but not hurting).
Appendix G: Human evaluation
I’m somewhat confused by the authors’ choice to run human experiments using one prompt only. They could have simply run multiple experiments with at least a couple of different prompts. The authors justify this by the fact that humans are less likely (than models) to be sensitive to variations in the prompt. I feel this is at best speculative and, if anything, human experiments and pilot studies often show that human subjects can be primed to think or react in a certain way given certain wording or structure, etc.
Minor presentation note:
Line 25: I’m not sure if there’s a writing style guide that allows this form “I have to work.”. [essentially the two dots back-to-back]. Seems rather odd to me. I would think the one within the quotations need to be dropped. See here for more details:
https://www.hamilton.edu/academics/centers/writing/style/essentials/punctuation-of-quotations#:~:text=The%20final%20period%20or%20comma,the%20period%20follows%20the%20citation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review and for the in-depth suggestions. We are glad to read that the reviewer thinks our work _“addresses an important problem”_ and contains _“extensive experiments on a wide range of recent LLMs”_, rating the soundness, presentation, and contribution as good. Below we address the reviewer’s comments. To summarise:
- we discuss the line of work on computational modeling of pragmatics, which we will incorporate in our manuscript
- we further motivate our contribution by arguing that the BIG-bench result cannot be built upon, because its methodology raises serious questions and is not discussed in any paper.
- we discuss that models are likely doing pragmatic inferences, but not learning to do so in-context, and present results motivating that models are not relying on spurious correlations.
We hope that this will lead to strengthening your recommendation or letting us know what still stands in the way, so that we may further improve the submission.
### Situating our work
Thank you for pointing out these works; we are happy to incorporate them. Specifically, we will expand the related work by discussing those that look into the emergence of pragmatic understanding from language modeling (Jeretic et al. (2020) and Parrish et al. (2021)). We will cite the papers that look into explicit computational modeling of pragmatic understanding in the related work, noting that they laid the groundwork for the idea that pragmatics is difficult for computational models, and expand upon each in the background section (Cianflone et al. (2018), Schuster et al. (2020), Kim et al. (2021)). We couldn't find Li et al. (2021); could you share the title?
### Contribution in relation to the BIG-bench
Thanks for raising this point of discussion, which we address in the [global response above](https://openreview.net/forum?id=5bWW9Eop7l&noteId=RgN3e2U0c5), as reviewer VB9K mentioned it as well.
### Random labels
We aren't sure we understand the question about the random label experiment (_”did you verify how often the label was not changed?”_). We take it to mean: how often did the random labelling assign the correct label to the in-context examples? In the 5-shot case the label is wrong 1443 times and right 1557 times; in the 1-shot case, wrong 265 times and right 335 times. We hope this clarifies the point, and if not, please let us know.
### Are models learning pragmatics?
We believe they are, though not from the in-context examples. It's an interesting question what other spurious correlations the models might rely on. Kabbara et al. (2022) point out cues that arise because their data is synthetic, like the presence of certain tokens (“exactly”) in many of the examples or similarity between premise and hypothesis (”Rene might have hidden” and “Rene hid”). Similar lexical cues will not be part of our dataset, as our examples are naturally occurring and have little to no similarity between question and response. This also means the adversarial testing method Kabbara et al. use is less applicable.
Instead, we ran an analysis to probe for potential spurious correlations between question and label. If a model knows what the implicature is without even seeing the response, that indicates spurious correlations. The results are as follows:
|Question-only|0-shot|5-shot|
|---|---|---|
|ChatGPT|54.3% +/- 3.3|41.7% +/- 12.4|
|GPT-4|48.9% +/- 10.5|53.7% +/- 0.5|
From this we conclude that there are probably no spurious correlations in the questions that can be exploited without fine-tuning, and hence it’s unlikely the models in our study use them.
### Noise w.r.t. increasing k (I.4)
We agree with the reviewer that the results for the separate prompt templates w.r.t. k barely show a pattern, apart from what's mentioned in the paper. We know from the random labels experiment that the models that can do pragmatic inferences only use the in-context examples for the format. We think the reason for this noise is that giving the format of a problem doesn't help if the failure wasn't formatting but understanding in the first place. In such a case, a model might give entirely different answers when given different in-context examples.
### 5-shot CoT performance
We have asked ourselves this question as well. What we came up with is the following (albeit speculative):
- CoT doesn’t help (or hurts) for Dav-1, Dav-2, and Cohere-command-52B
- CoT does help for Dav-3, ChatGPT, and GPT-4
What distinguishes the first and the second category is RLHF. Only ChatGPT, Dav-3, and GPT-4 have undergone RLHF fine-tuning. Perhaps this enables CoT reasoning for this task.
### Human evaluation
This is a fair point. Indeed, assuming humans are not sensitive to the prompt template is speculative, but we opted for this to make the full human study _directly_ comparable to the model results on template 2. If we had used a mix of all templates, we would either have had to spend 6x as much on the human evals (not within our budget) or subsample evals, making them less comparable to part of the model study. We hope the reviewer also sees value in this design choice over the alternative. We will clarify in the paper that there is a speculative aspect to this choice.
Thanks for the notes on line 25, and 28; we will update both!
We would like to thank the reviewer for the thoughtful questions, and we are confident that we can improve the manuscript as a result of this discussion. Specifically, we will update the related work and background, add the spurious correlation analysis to the appendix, clarify that the design choice of using only one prompt template for the human eval rests on a speculative assumption, and summarise why this work is a contribution in light of the BIG-bench results. We hope this adequately addresses the mentioned weaknesses. If there are discussion points left that would prevent the reviewer from raising their score, please let us know and we will further address them.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for the detailed response to my review!
1) Regarding the references that I mentioned, this is the complete list of papers (according to the order of appearance in my review):
- Cianflone et al (ACL 2018): Let’s do it “again”: A First Computational Approach to Detecting Adverbial Presupposition Triggers
- Schuster et al (ACL 2020): Harnessing the linguistic signal to predict scalar inferences
- Li et al (SCiL 2021): Predicting scalar inferences from “or” to “not both” using neural sentence encoders
- Jeretic et al (ACL 2020): Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition
- Kim et al (ACL 2021): Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering
- Parrish et al (CoNLL 2021): NOPE: A Corpus of Naturally-Occurring Presuppositions in English
- Kabbara et al (COLING 2022): Investigating the Performance of Transformer-Based NLI Models on Presuppositional Inferences
I believe these should be referenced in the main body of the paper as they all present various research efforts on either modeling the pragmatic phenomena of implicature and/or presupposition or understanding the limitations of LLMs in capturing these phenomena. And so, mentioning them properly situates the work. I would leave it to you to best decide to what level of detail to cover them in the Related Work section and referencing them in some way (e.g. grouping several by some topics) VS expanding on which in the appendix.
2) Are the models learning pragmatics?
Thanks for presenting these results. The answer is convincing to me. I would think highlighting this in your paper (possibly in the body by giving the insight and then further expanding on it in the appendix) will benefit the paper, because this specific point (is the model actually learning pragmatics?) is often raised for papers that attempt to highlight a competence of LLMs, especially when it comes to whether they exhibit some form of “understanding”. Specifically, many in the community these days would reject that LLMs can exhibit any kind of understanding, let alone pragmatic understanding. So such a discussion will strengthen the position of the paper.
I replied above to points that either asked for a clarification or that I wanted to further comment on. The rest of the answers are satisfactory. I will raise my score as the rebuttal addressed most of my concerns (raised both overall score and the soundness score).
---
Reply to Comment 1.1.1:
Title: Thanks for the response!
Comment: Thanks a lot for the additional response and for listing these related works in more detail. We agree they do set the stage for our work, and merit discussion. We also agree with the reviewer that the discussion highlighting why we believe models are likely actually learning something interesting here would be an interesting addition to the paper, further motivating the contribution. We thank you for your engagement and believe we will have improved the manuscript as a result of our discussion. | Rebuttal 1:
Rebuttal: This response is for **reviewer VB9K** and **reviewer SeJL**, who ask about our contribution in light of the BIG bench results. With the below we aim to motivate in further detail why we believe the BIG bench result cannot be built upon in a scientific way. Hence, our work is an important contribution validating the BIG bench result in a reproducible and methodologically sound way, and above that providing insight into what aspects of LLM training are crucial for the ability to do pragmatic inferences.
1. **The methodological approach of the task contributors in BIG bench implicatures raises serious questions as to the validity of their claims.** The reason for this is twofold:
1. The BIG bench result likely overestimates implicature resolution performance. They show that base LLMs at a certain scale achieve above human average performance on a _subset of our dataset: the unambiguous examples_. This likely overestimates performance on our more challenging _ambiguous_ subset, which we show humans in our study resolve at significantly above chance level (72\%). Hence, the question remains: can LLMs resolve _inherently ambiguous implicatures like humans can_? We find the answer to be “no” for some types of models and “yes” for others, and believe this is an important scientific finding to share with the research community.
2. We believe the human evaluation of the BIG bench task is of low quality (noting that this is impossible to fully verify because there is no information available on how the evaluation was done exactly). The average human evaluator on BIG bench implicatures achieves around 82\% performance (where ours achieves on average 86\% on a more challenging dataset), and their human best rater achieves 100\% (where our human best is 92\%). This difference between human average and best hints at poor quality average rating.
2. **The BIG bench task uses only base LLMs and no state-of-the-art fine-tuning methods**, which are standard in almost all recently published LLMs and considered crucial to performance (e.g. [1], [2]). So even if we could take their results at face value, a question remains: _what aspects of LLMs contribute to their performance on implicatures?_ In our work we find that implicature performance emerges at a much smaller scale in models instruction fine-tuned at the example level, and that scale and prompting techniques are important.
3. **The BIG bench implicatures contribution is not a scientifically scrutinised and peer-reviewed result, as it is not described and discussed in a peer-reviewed (or any) paper** (the BIG bench tech report does not mention it other than as an entry in a list of tasks), and therefore our result reproducibly validating theirs with a sound protocol is—on its own—an important contribution.
We thank both reviewers for raising this point of discussion, and we will make the above clearer in the related work section as well as in Appendix Section H.
[1] “Llama 2: Open Foundation and Fine-Tuned Chat Models”, Touvron et al., 2023
[2] “GPT-4 Technical Report”, OpenAI, 2023 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Approximately Equivariant Graph Networks | Accept (poster) | Summary: This work discusses a non-trivial case of equivariant graph networks. In this scenario, the input graph $G$ remains fixed, and therefore, the relevant permutations that act on the graph signals are the automorphisms of $G$. Under this assumption, the authors describe a bias-variance tradeoff with respect to the symmetry group to which the hypothesis class is restricted. Since the automorphism group tends to be trivial in large graphs, the authors introduce the concept of approximated symmetry by considering a symmetry group associated with a coarser version of the graph. The main result of the paper formalizes the bias-variance tradeoff in relation to the approximated symmetries.
Strengths: * The paper introduces a novel perspective and analysis of equivariant graph networks, both in terms of the fixed graph scenario and the approximated equivariance via graph coarsening.
* The use of Graphons to define a metric over graph spaces is interesting and could have many applications.
* The theoretical claims are very clear and explained in a detailed manner.
Weaknesses: * An explanation regarding the clustering methods is missing in the main text. The method section in the paper does not suggest how to coarsen a graph G, therefore it would be helpful to explain how it is done, specifically in the traffic flow experiment.
* Although the appendix provides some explanation, additional information about the architectures employed in the experimental section would be beneficial for a more comprehensive understanding of the empirical results.
* The empirical results presented in Table 2 do not provide strong evidence supporting the necessity of approximated equivariance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Definition 1 - What does the J in summation stand for? should it be N?
* The implementation of this method on an arbitrary graph, which lacks any inductive bias concerning the graph's structure, remains unclear. Are the authors able to offer any insights or intuition regarding a general approach or recipe for graph coarsening in such cases?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: There is no potential negative impact to this work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and appreciation of our work. We provide detailed responses to each point below:
### Responses to Weaknesses:
1. (Graph coarsening details) We thank the reviewer for raising this point and will add more discussion of graph coarsening methods in the camera-ready version. The clustering choices for the traffic flow experiments are included in the supplementary information (Figure 5) due to the page limit. We also give a detailed explanation of the graph clustering choices in our new set of experiments. We note that theoretically, we assume that the coarse-grained graph approximates the fine graph in cut distance. However, in practice **we construct the coarse-grained versions of the graphs not by minimizing the cut-distance, but rather by using off-the-shelf clustering methods**. In this sense, the theory does not justify the implementation completely rigorously, but rather heuristically motivates it. We will clarify this point in the camera-ready version.
2. (Experimental details) We now provide a new set of experiments with detailed information of the architectures and set-up in the common response and in the attached PDF. We will also re-organize the main paper to include additional important information of the previous experiments.
3. (Experiments in Table 2 do not strongly support the theory)
- To further support our theory, we now provide **a new set of experiments (see attached PDF) that clearly illustrates the benefits of approximate equivariance** (induced from graph coarsening). The details are thoroughly described in the common response, and we give a brief summary below.
- We consider a regular 2D grid as our fixed graph domain, and images as our graph signals. The goal is to reconstruct the original images given masked images by learning a mapping $f: \mathbb R^N \to \mathbb R^N$, also known as image inpainting. We make use of standard image datasets (MNIST, FashionMNIST) and perform the symmetry model selection via graph coarsening: We cluster the grid into $d \times d$ patches, where $d$ ranges from $1$ (no clustering, corresponding to the trivial symmetry) to $N$ (one giant cluster, corresponding to the $\mathcal{S}\_N$ symmetry). Figure 6 in PDF shows the empirical risk first decreases and then increases as the group decreases, illustrating the bias-variance trade-off from our theory.
- Note that the 2D grid as a graph has global reflection symmetries (from vertical and horizontal axis), but nodes from a local $d \times d$ patch are not symmetrical in the original grid. Yet enforcing approximate symmetries among nodes in local patches suitably can outperform trivial symmetries, see for example Figure 6 where using $2 \times 2$ patches with the coarsened symmetry ${\cal S}_{4}^{196}$ yields better test error than the trivial symmetry case.
- We agree that for the traffic flow prediction task, the empirical results in Table 2 do not show a strong advantage of approximate equivariance. We want to remark that traffic flow prediction concerns both the spatial graph domain and the temporal dimension, which is likely to downplay the gain in the spatial domain. We nevertheless include the experiment in the paper since it illustrates the potential to enhance standard GNN architectures with approximately equivariant modules.
- In light of the reviewer’s feedback, we plan to replace our original experiments with the new experiments in the main text to clearly illustrate our theory, and defer most of the original experiments to the Appendix.
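To make the coarsening in the new experiments concrete, a minimal sketch of the $d \times d$ patch clustering on a $28 \times 28$ grid (as in the MNIST inpainting set-up above) might look as follows; the function name and row-major layout convention are ours for illustration, not taken from the paper's code.

```python
import numpy as np

def patch_clusters(side, d):
    """Assign each node of a side x side grid graph to a d x d patch cluster.

    Returns a length side*side array of cluster labels, one per grid node.
    Assumes d divides side (e.g. 28x28 MNIST with d in {1, 2, 4, 7, 14, 28}).
    d = 1 gives the trivial symmetry (every node its own cluster);
    d = side gives one giant cluster, i.e. the full S_N symmetry.
    """
    rows, cols = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    labels = (rows // d) * (side // d) + (cols // d)
    return labels.ravel()

# 28x28 grid with 2x2 patches -> 196 clusters of 4 nodes each,
# matching the coarsened symmetry S_4^196 mentioned above.
labels = patch_clusters(28, 2)
assert labels.max() + 1 == 196
assert np.all(np.bincount(labels) == 4)
```

Sweeping `d` over the divisors of the grid side then realizes the symmetry model selection from trivial to full permutation symmetry.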
### Responses to Questions:
1. (Notation) We thank the reviewer for catching this typo; indeed it should be N. We will correct this typo in the camera-ready version.
2. (Graph coarsening method)
- Indeed we do not propose a computational method to derive a ground-truth optimal coarsening for general graphs and general tasks. **We envision our approach being used for model selection. Namely, one should implement different coarsening procedures and find the one that works best for the problem**. Our symmetry model selection perspective suggests that a natural coarsening recipe is to sweep from a few large coarsened nodes to many small coarsened nodes. We will clarify this motivation in the camera-ready version.
- That said, graph clustering is a form of graph coarsening that is very widely studied in the literature and used in practice. For instance, if the graphs are instances of social networks, clustering is a natural approach. Under certain random graph model assumptions, spectral clustering, message-passing algorithms and semidefinite programs enjoy theoretical guarantees, see for example
- Lyzinski, Vince, et al. "Perfect clustering for stochastic blockmodel graphs via adjacency spectral embedding." (2014): 2905-2922.
- Qin, Tai, and Karl Rohe. "Regularized spectral clustering under the degree-corrected stochastic block model." Advances in neural information processing systems 26 (2013).
- Abbe, Emmanuel. "Community detection and stochastic block models: recent developments." The Journal of Machine Learning Research 18.1 (2017): 6446-6531
- To summarize, in practice we construct the coarse-grained versions of the graphs not by minimizing the cut-distance, but rather by using off-the-shelf clustering methods. Hence, the theory heuristically motivates the implementation. | Summary: This paper formalizes the notion of active symmetries and approximate symmetries of GNNs on a fixed graph domain. Furthermore, it theoretically characterizes the statistical risk of linear regression with symmetries and shows a bias-variance tradeoff. For graph tasks, it utilizes coarsened graphs for approximate symmetries. Experimental results on human pose estimation and traffic flow prediction validate the proposed method.
Strengths: 1. Novel theoretical results and methods. To the best of my knowledge, it is the first risk-gap analysis combining symmetries and graphons. Approximating symmetries via a coarsened graph is also appealing.
Weaknesses: 1. The result in Section 3.1 seems trivial and useless.
2. The experiment section is hard to read as all implementations are put in Appendix.
3. Graph coarsening is vital for the proposed method. But a coarsening method for general graph tasks is unknown.
4. The experiments are conducted only on small and specific tasks rather than general graph tasks, like link prediction [1].
[1] https://ogb.stanford.edu/docs/linkprop/
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. As shown in the Weaknesses section, please add a more concrete description of your method in the main text.
2. Please provide experimental results on more general graph tasks, like link prediction.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and appreciation of our theoretical contribution. We provide detailed responses to each point below:
### Responses to Weaknesses:
1. (Result in Section 3.1 seems trivial and useless) We want to remark that considering simplistic models to analyze complex phenomena is very common in machine learning research, and in science in general. **Insights from simplistic models often transfer to the real-life phenomenon. We hence do not see Section 3.1 as useless**. The set-up in Section 3.1 is indeed simple, using a linear regression model with white noise. Yet we hope to take a first step toward analyzing symmetry model selection in a very simple setting, such that the results are clear to state and prove, and verifiable through simulation. Although the linear regression setting is simple and may seem trivial compared to deep neural networks used in practice, **analysis in linear regression allows us to (1) precisely characterize the bias-variance tradeoff; (2) illustrate the phenomenon we observe in practice for non-linear models**. Moreover, we have now added a new set of experiments (see attached PDF), where Figure 6 (left) shows that **the bias-variance tradeoff can be observed both in simple linear models and nonlinear models**.
2. (Experimental details) We thank the reviewer for the suggestions, and we have supplemented **a new set of experiments which solely use our proposed $\mathcal{G}$-Net without any further augmentation**. The details are thoroughly described in the common response, and we give a brief summary below. We consider a regular 2D grid as our fixed graph domain, and images as our graph signals. The goal is to reconstruct the original images given masked images by learning a mapping $f: \mathbb R^N \to \mathbb R^N$, also known as image inpainting. We make use of standard image datasets (MNIST, FashionMNIST) and perform the symmetry model selection via graph coarsening: We cluster the grid into $d \times d$ patches, where $d$ ranges from $1$ (no clustering, corresponding to the trivial symmetry) to $N$ (one giant cluster, corresponding to the $\mathcal{S}\_N$ symmetry). Figure 6 in PDF shows the empirical risk first decreases and then increases as the group decreases, illustrating the bias-variance trade-off from our theory.
3. (Graph coarsening method)
- We agree with the reviewer that there is no ground-truth optimal coarsening for general tasks. Indeed we do not propose a computational method to derive a ground-truth optimal coarsening for general graphs and general tasks. **We envision our approach being used for model selection. Namely, one should implement different coarsening procedures and find the one that works best for the problem**. Our symmetry model selection perspective suggests that a natural coarsening recipe is to sweep from a few large coarsened nodes to many small coarsened nodes. We will clarify this motivation in the camera-ready version.
- That said, graph clustering is a form of graph coarsening that is very widely studied in the literature and used in practice. For instance, if the graphs are instances of social networks, clustering is a natural approach. Under certain random graph model assumptions, spectral clustering, message-passing algorithms and semidefinite programs enjoy theoretical guarantees, see for example
- Lyzinski, Vince, et al. "Perfect clustering for stochastic blockmodel graphs via adjacency spectral embedding." (2014): 2905-2922.
- Qin, Tai, and Karl Rohe. "Regularized spectral clustering under the degree-corrected stochastic block model." Advances in neural information processing systems 26 (2013).
- Abbe, Emmanuel. "Community detection and stochastic block models: recent developments." The Journal of Machine Learning Research 18.1 (2017): 6446-6531
- To summarize, in practice we construct the coarse-grained versions of the graphs not by minimizing the cut-distance, but rather by using off-the-shelf clustering methods. Hence, the theory heuristically motivates the implementation.
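To make the "off-the-shelf clustering" step concrete, here is a minimal numpy-only sketch of spectral bipartition via the sign of the Fiedler vector of the normalized Laplacian; it is an illustrative stand-in for the cited spectral clustering methods (which handle more than two clusters), not an implementation from the paper.

```python
import numpy as np

def spectral_bipartition(A):
    """Split a graph into two clusters by the sign of the Fiedler vector,
    i.e. the eigenvector of the second-smallest eigenvalue of the
    normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - d[:, None] * A * d[None, :]
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Toy graph: two 4-node cliques joined by a single edge.
# The spectral cut recovers the two cliques as clusters.
A = np.zeros((8, 8))
A[:4, :4] = 1 - np.eye(4)
A[4:, 4:] = 1 - np.eye(4)
A[3, 4] = A[4, 3] = 1
labels = spectral_bipartition(A)
assert len(set(labels[:4])) == 1 and len(set(labels[4:])) == 1
```

The resulting cluster labels would then play the role of the coarsened nodes defining the approximate symmetry group.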
4. (More experiments including link prediction) Our experiments are intended to illustrate the learning task on a fixed graph setting. Concretely, we are given a dataset of input/output graph signals supported on a **fixed** graph domain, and aim to learn the best function to map the input signal to output signal. On the other hand, **link prediction** usually aims to predict unobserved edges given a partially observed graph, which **does not fit into our setting in a direct way**. However, we did extend our experiments to more tasks, like image inpainting as explained thoroughly in our common response.
### Responses to Questions:
1. Please refer to the previous Section, Responses 2 (Experimental details)
2. Please refer to the previous Section, Responses 4 (More experiments including link prediction)
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. It resolves my concerns about experimental details and graph coarsening methods. I am willing to raise my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for carefully reading our rebuttal. We are glad that your concerns have been addressed.
We noted that there seemed to be a bug with review editing yesterday, but it has been resolved now.
Strengths: - Very interesting formalization of approximate graph symmetry via graph coarsening with graphons and the group $\mathcal G_{G \to G'}$ .
- The authors develop novel theoretical results on generalization with approximate symmetries.
- The ideas give an interesting theoretical insight in using clustering and permutation equivariant networks on graphs.
- The paper mostly clearly written.
Weaknesses: - The paper should explain better (in the main paper) why the risk gap is an important criterion for model selection. Lines 183-186 are hard to follow.
- There appears to be a substantial mismatch between the theory and the experiments: the experiments rely substantially on combining equivariant nets to $\mathcal G_{G \to G'}$ with the ( $G$ -automorphism equivariant) graph nets on $G$ . The theory does not discuss this combination of symmetries. Also, the graph cut distance doesn't appear to be computed in practice.
- The theoretical results talk about the symmetry group $\mathcal G_{G \to G'}$ being a combination of cluster permutations and coarse graph $G'$ automorphisms. However, in their experiments the authors assume those automorphisms to be trivial, so that they only work with cluster permutations. This is a substantially less interesting class of groups, which should be more fairly stated in the paper.
- The coarsening group $\mathcal G_{G \to G'}$ appears quite similar to the groupoid of nodes used in [1], which contains the automorphism group of node neighbourhood (akin to the cluster permutations in this paper) and maps between similar neighbourhoods (akin to the automorphism group of $G'$ ). The authors should consider comparing to this paper.
- The neural network architecture $\mathcal G$ -Nets used in the experiments should be more fully defined in the main paper.
[1] P de Haan and TS Cohen. 2020. “Natural Graph Networks.”
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Schur's lemma doesn't apply always naively on the real numbers (as they're not algebraically closed.) How do the authors apply the complex results to the real field, as mentioned in line 575 of the appendix? Do they ignore the possible additional intertwiners?
- In section 3, it is not so clear which results rely on the group acting via a permutation of rows (so related to graphs), and which parts could generalize to other groups and representations. Could the authors comment on that?
- In line 168, the authors say that the mismatch term goes below zero, presumably because an inner product goes above zero. However, it is unclear to me why the sign of the inner product is known in general.
- In line 178, the authors make a comment on an inner product of characters increasing when the group increases. Can the authors clarify why this holds in general, or make the comment more specific?
Minor points:
- The authors appear to be confusingly using both the symbols $\Psi$ and $\mathcal Q$ for the symmetrization operation.
- What are the two measures in (3)? Shouldn't these both be the Lebesgue measure? $\mu$ is already used for a measure on $\mathcal X$ .
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors are not quite transparent about the mismatch between their theory and experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the critical assessment and constructive feedback on our work. We provide detailed responses to each point below:
### Responses to Weaknesses:
1. (why risk gap) We will add more motivation in Section 3 for the risk gap. To summarize: The risk quantifies how a given model performs on average on any potential input. The smaller the risk, the better the model performs. The risk gap computes the difference of the risk between two models (satisfying different symmetries), and thus allows us to **perform model selection**. Lines 183-186 intend to explain the significance of the risk gap (i.e., choosing different symmetry affects generalization nontrivially in the small $n$ regime).
2. (mismatch between theory and experiments)
- We thank the reviewer for pointing this out, and we are providing **a new set of experiments which solely use our proposed $\mathcal{G}$-Net without any further augmentation**. The details are thoroughly described in the common response, and we give a summary below. We consider a regular 2D grid as our fixed graph domain, and images as our graph signals. The goal is to reconstruct the original images given masked images. We perform the symmetry model selection via graph coarsening: We cluster the grid into $d \times d$ patches, where $d$ ranges from $1$ (no clustering, corresponding to the trivial symmetry) to $N$ (one giant cluster, corresponding to the $\mathcal{S}\_N$ symmetry). Figure 6 in PDF shows the empirical risk first decreases and then increases as the group decreases, illustrating the bias-variance trade-off from our theory.
- We also want to point out that the graph cut distance serves as a theoretical tool for our construction; in practice, we construct the coarse-grained versions of the graphs not by minimizing the cut-distance, but rather by using off-the-shelf clustering methods. In this sense, the theory motivates the implementation but does not justify it completely rigorously. We will clarify this in the camera-ready version.
3. (experiment assumes trivial coarsened graph symmetry)
We thank the reviewer for raising this point. We added **new experiments with nontrivial $\mathcal{A}\_{G’}$ symmetries**. Concretely, we consider a global reflection symmetry for the coarsened grid graph of FashionMNIST. We compare the non-trivial coarsened graph symmetry with the trivial case in the inpainting problem (c.f. Figure 6 - right in the PDF). Our results show the utility of using the coarsened graph symmetry, which leads to better performance.
4. (comparison to NGN)
We thank the reviewer for these remarks and will include a more detailed comparison to Haan and Cohen’s paper for the camera-ready version. Concretely:
- Our set-up focuses on learning on a fixed graph and thus graph automorphism, whereas their set-up considers different graphs and thus graph isomorphism.
- Our goal is to choose the best symmetry for generalization, whereas their goal is to design maximally expressive graph networks.
5. (experimental details) We now provide a new set of experiments with detailed information on the architectures and set-up in the common response and the attached PDF. We will also re-organize the main paper to include additional important information from the previous experiments.
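For intuition on enforcing a chosen (possibly nontrivial) symmetry such as the reflection discussed above, the group-averaging (symmetrization) of a linear map can be sketched as below; this is a toy sketch in our own notation, not the paper's actual $\mathcal{G}$-Net implementation.

```python
import numpy as np

def symmetrize(W, perms):
    """Project a linear map W onto the G-equivariant subspace by group
    averaging: Psi(W) = (1/|G|) sum_g P_g^T W P_g, with P_g the
    permutation matrix of group element g. `perms` lists every element
    of G as a permutation of node indices."""
    n = W.shape[0]
    out = np.zeros_like(W)
    for p in perms:
        P = np.eye(n)[list(p)]
        out += P.T @ W @ P
    return out / len(perms)

# Order-2 group generated by the horizontal flip of a 1D "grid" of 4
# nodes: identity and the reflection (0 1 2 3) -> (3 2 1 0).
perms = [(0, 1, 2, 3), (3, 2, 1, 0)]
W = np.random.default_rng(0).normal(size=(4, 4))
Ws = symmetrize(W, perms)

# The projected map commutes with every group element.
P = np.eye(4)[[3, 2, 1, 0]]
assert np.allclose(P @ Ws, Ws @ P)
```

Averaging over a larger group constrains the map more (fewer free parameters), which is exactly the variance-reduction side of the bias-variance trade-off in the theory.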
### Responses to Questions:
1. (Construction of equivariant maps on the reals) We thank the reviewer for raising this question. We didn’t explain the general construction in the appendix of the paper because we didn't need it in the experimental setting. We will add the general construction in the appendix of the revised version for completeness, and outline the key ideas here. By Maschke’s theorem we can decompose the representation in irreducibles over $\mathbb R$. Then we can check further how to decompose these irreducibles over $\mathbb C$, and apply Schur's lemma. We have 3 cases for the decomposition:
- The irreducible over $\mathbb R$ is also irreducible over $\mathbb C$: We can directly apply Schur’s lemma.
- The irreducible over $\mathbb R$ decomposes in two different irreducibles over $\mathbb C$: We can send each $\mathbb C$-irreducible to their isomorphic counterpart.
- The irreducibles over $\mathbb R$ decompose in two copies of the same irreducible over $\mathbb C$: We can send each irreducible to any isomorphic copy independently.
2. (Generalization to other groups/representations) Yes, our results are built on the results from Elsedy et al. (2021), which **can be used for general compact groups with orthogonal representations**. We believe it may be possible to extend the results to other representations of compact groups. Extending the results to non-compact groups seems much harder. We will comment on this in the camera-ready version.
3. (Sign of the inner product) We thank the reviewer for raising this point and agree that the original remark in line 168 is incorrect. Indeed, the inner product $\langle f^*, f_G^{\perp} \rangle$ can be negative. We fixed the discussion in line 167-168 to "...when $\cal{G} > \cal{A}_G$, the mismatch term can be positive, negative or zero (depending on $f^*$) whereas the constraint term increases with $\cal{G}$ ".
4. (Inner product of characters) We thank the reviewer for catching this confusing comment. The correct version of the remark holds in general and it reads as follows:
The inner product of characters measures the dimension of the linear equivariant space (this is a standard result, but we will add the proof in the appendix for completeness). Thus it decreases as the group increases. Consequently in the risk gap formula that measures the generalization gain, the variance term increases, whereas the bias term decreases (since the projection to the orthogonal complement of $\mathcal{G}$-equivariant space increases, and negates by the minus sign).
5. Minor points: We apologize for the confusing notations and will fix them in the final version.
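The character inner product in point 4 can be checked numerically: for a permutation representation, $\chi(g)$ is the number of fixed points of $g$, and $\langle \chi, \chi \rangle = \frac{1}{|G|}\sum_g \chi(g)^2$ gives the dimension of the space of equivariant linear maps. A small sketch (our own illustration, not taken from the paper):

```python
import numpy as np
from itertools import permutations

def dim_equivariant(perms, n):
    """Dimension of the space of G-equivariant linear maps R^n -> R^n,
    via the character inner product (1/|G|) sum_g chi(g)^2, where
    chi(g) = number of fixed points of the permutation g."""
    total = sum(sum(1 for i in range(n) if p[i] == i) ** 2 for p in perms)
    return total // len(perms)

n = 4
full = list(permutations(range(n)))    # S_4, |G| = 24
trivial = [tuple(range(n))]            # trivial group

# Larger group -> smaller equivariant space (fewer free parameters):
assert dim_equivariant(full, n) == 2       # span{ I, 11^T }
assert dim_equivariant(trivial, n) == n * n  # all linear maps
```

This matches the remark above: as the group grows the dimension of the equivariant space (and hence the variance term) shrinks, while the bias term grows.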
---
Rebuttal Comment 1.1:
Title: Why only horizontal reflections?
Comment: I thank the authors for their rebuttal. Most of my points have been addressed.
I have one remaining question though, on the new experiment: why isn't the automorphism group of the square graph, the dihedral group, chosen for $\mathcal A_{\mathcal G}$? Instead, the authors pick a subgroup thereof.
---
Reply to Comment 1.1.1:
Title: For simplicity of parameterisation, we consider Abelian groups and full permutation groups in the experiments
Comment: We thank the reviewer for the question. For simplicity of parameterization, we restrict ourselves to Abelian groups or full permutation groups in the experiments. Thus, we choose the horizontal reflection symmetry, a subgroup of the dihedral group which is *not Abelian*. That said, we note that
- The horizontal reflection symmetry is more meaningful for our image signals (than other elements of the dihedral group such as the diagonal reflection).
- The experiment aims to show the utility of considering a nontrivial coarsened graph symmetry (of our choosing; it does not have to be the ground-truth symmetry $\mathcal{A}\_{G'}$)
We will remark on this thoroughly in the camera-ready version. | Summary: The authors observe that – while graphs considered as a class have global permutation symmetry – for specific problems, e.g. when learning on a fixed graph, the graph has a much smaller symmetry. Consequently, they attempt to answer the question of how symmetric the model should be in comparison with global permutation symmetry or the graph’s natural symmetries. They quantify this model selection problem as a bias-variance tradeoff, which they validate with numerical experiments.
Strengths: To my knowledge, this paper contains several new ideas. The concept of choosing a supergroup of an object’s natural symmetry group as a statistical strategy motivated by the bias-variance trade-off is a novel one in equivariant ML as well as in graph-ML. Similarly, the concept of explicitly considering the natural symmetry of a fixed graph in devising the learning algorithm rather than assuming global permutation symmetry is novel as well. Finally, the concept of using graphon distances to define a notion of approximate equivariance on graphs is yet another novelty that addresses the issue of defining approximate equivariance on a space of objects that is fundamentally discrete.
Methodologically, I found the mathematical approaches taken to be appropriate, and the experiments to be clean tests of the ideas presented. In particular, the authors demonstrate the existence of the proposed bias-variance trade-off in multiple settings.
Finally, I found the author’s argument that the use of overly-equivariant models should be viewed as a statistical strategy that purposefully introduces some systematic error to gain regularity to be a good perspective on the current state of graph neural networks, where people have seen good practical success with permutation-equivariant GNNs on problems that have much lower natural symmetry.
Weaknesses: The paper could do with being run through a spell-check: there are a few obvious typos, e.g. “neighrborhood” in line 289. Moreover, I did not feel like the choice of using graphons to construct a distance between graphs was properly motivated: additional discussion on why the authors chose to use graphons to compare graphs would help motivate the direction of the paper. Is it only to embed the graphs in a continuous space?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This question might reflect my unfamiliarity with a subset of the literature, but to what extent is the bias-variance tradeoff that the authors propose a perspective that has been used for more general equivariant neural networks where the symmetry of the model is greater than the symmetry of the data? Am I correct in thinking that this perspective on the use of overly-symmetric models is novel for general equivariant-ML, not just for graph ML? If so, maybe this should be noted in the text, e.g. by noting in the discussion that these results should translate to other groups and analyzing this as a future direction.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations and potential negative societal impacts are appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully examining our work and appreciating the novelty and the impactfulness of our paper. We provide detailed responses to the Weakness and Questions section.
### Responses to Weaknesses:
1. (spell check): We thank the reviewer for pointing this out and will fix all the typos for the camera-ready version.
2. (motivation of graphon): We thank the reviewer for the question. Graphons are generalizations of graphs in the sense that any graph induces a graphon (see Definition 1 of the main text, also Figure 1 for an illustration). Therefore we can use the cut distance (defined on graphons) as a graph metric. **The cut distance is a natural similarity measure for graphs of different sizes**, as we now explain. By the weak regularity lemma [1], the cut distance between a large graph and a small graph can be interpreted as follows. *Seeing the small graph as a stochastic block model (SBM), the distance is small if the large graph “looks like” it was randomly sampled from the SBM*. Here, “looks like” has a precise meaning via the regularity lemma: the number of edges between any two subsets of nodes of the large graph behaves like the expected number of edges between these sets, had the graph been sampled via the SBM. This is exactly what the cut distance measures, which is the reason to consider the cut distance when working with fine graphons and their coarse-grained versions. We will clarify this motivation in the camera-ready version of the paper and add the above reference.
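To make the cut-norm intuition concrete, here is a toy sketch (our own illustration, not part of the paper or rebuttal): it brute-forces the cut norm between two small weighted graphs on the same labelled node set, so the optimisation over node alignments that the full cut distance includes is omitted, and the exponential search is feasible only for tiny graphs.

```python
import itertools
import numpy as np

def cut_norm(D):
    # Brute-force cut norm: max over nonempty node subsets S, T of
    # |sum_{i in S, j in T} D[i, j]|. (Empty subsets contribute 0,
    # which the initial value of `best` already covers.)
    n = len(D)
    best = 0.0
    for s in range(1, n + 1):
        for S in itertools.combinations(range(n), s):
            for t in range(1, n + 1):
                for T in itertools.combinations(range(n), t):
                    best = max(best, abs(D[np.ix_(S, T)].sum()))
    return best

def graphon_cut_distance(A, B):
    # Distance between the graphons induced by two weighted graphs on the
    # same (aligned) node set: cut norm of the normalised difference.
    n = len(A)
    return cut_norm((A - B) / n ** 2)

# Two-block graph vs. its SBM expectation: adding a single symmetric
# cross-block edge moves the distance by exactly 2 / n^2.
B = np.kron(np.eye(2), np.ones((3, 3)))   # 6 nodes, two dense blocks
A = B.copy()
A[0, 3] = A[3, 0] = 1.0                   # one extra cross-block edge
assert graphon_cut_distance(B, B) == 0.0
assert np.isclose(graphon_cut_distance(A, B), 2 / 36)
```

The example mirrors the rebuttal's reading of the distance: a graph that "looks like" a sample from the SBM (here, exactly its expectation) has distance near zero, and deviations show up as edge-count discrepancies over node-subset rectangles.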
### Responses to Questions:
1. (Extension to general equivariant ML): We thank the reviewer for raising this important point. Indeed, **our bias-variance tradeoff results can extend to general equivariant machine learning models**. Our current formulation requires that the group is compact and acts on the input/output spaces via an orthogonal representation. We believe the orthogonality requirement can be lifted straightforwardly, but the compactness requirement seems much harder to lift (see discussion in [2]). We will include this in the discussion section and outline interesting future directions.
References:
1. Lovász, László, and Balázs Szegedy. "Szemerédi’s lemma for the analyst." GAFA Geometric And Functional Analysis 17 (2007): 252-270.
2. Villar, Soledad, et al. "Dimensionless machine learning: Imposing exact units equivariance." arXiv preprint arXiv:2204.00887 (2022). | Rebuttal 1:
Rebuttal: ## Common Response
We thank the reviewers for their detailed assessment of our work, and their appreciation for the novelty and the impactfulness of our paper. We are encouraged that all reviewers found that our theoretical results are novel and interesting, particularly on the symmetry bias-variance tradeoff perspective (YT16) and the formulation of approximate symmetries via graphon analysis (van5, RNWs, KAcz), and that our graphon analysis techniques “could have many applications” (KAcz). We are grateful for all the comments and constructive feedback, which will undoubtedly contribute to increasing the overall quality of the paper.
Along with detailed responses to each reviewer, we synthesize below the common questions and responses to all reviewers:
1. Experiment details (van5, RNWs, KAcz); mismatch between theory and practice (van5):
- **We provided a new set of experiments that more accurately match our theory and illustrate our bias-variance tradeoff perspective**, together with a thorough explanation of the architecture and set-up; see explanations below and PDF attached.
2. Graph coarsening methods and guarantees (RNWs, KAcz):
- We added a more detailed discussion on existing graph coarsening methods based on clustering with proven guarantees (i.e., small cut-distance to the generative graphon)
- We want to highlight that **our symmetry selection perspective** (using symmetry induced from the coarsened graph) **allows us to compare different coarsening procedures, and choose the one that works the best for the target application** (which balances optimally the bias from coarsening error and the variance from constraining the hypothesis class).
3. Extension to general groups (YT16, van5):
- We explained the **applicability of our analysis to general compact groups with orthogonal representation**, and remarked on the possibility of lifting some of the requirements (van5).
- We added a discussion on the implications of our work to general equivariant ML beyond graph ML with pointers to related work (YT16).
### New Experiments
To illustrate our theory, we consider a $28 \times 28$ grid graph as the fixed domain, and grey-scale images as the graph signals. The learning task is to reconstruct the original images given masked images as inputs (a.k.a. *image inpainting*). We investigate the symmetry model selection problem by clustering the grid into $d \times d$ patches, where $d \in \\{28, 14, 7, 4, 2, 1 \\}$. Here $d = 28$ means one cluster (with full permutation symmetry); $d = 1$ gives $784$ singleton clusters with no symmetry (trivial).
**Data.** We consider the datasets MNIST and FashionMNIST. For each dataset, we take $100$ training samples and $1000$ test samples via stratified random sampling. The input and output graph signals are $(m_i \odot x_i, x_i)$ ($\odot$ is entrywise multiplication). Here $x_i \in \mathbb R^{28 \times 28} \equiv \mathbb R^{784}$ denotes the image signals and $m_i$ denotes a random mask (size $14 \times 14$ for MNIST and ${20 \times 20}$ for FashionMNIST). For experiments with *reflection symmetry on the coarsened graph*, we further transform each image in the FashionMNIST subset using horizontal flip with probability $0.5$ (FashionMNIST+hflip).
**Model.** We consider $2$-layer $\mathcal{G}$-equivariant networks, a composition $f_{\text{out}} \circ \texttt{ReLU} \circ f_{\text{in}}$, where $f_{\text{in}}, f_{\text{out}}$ denote the input/output linear equivariant layers. We use a hidden size of $28$ for all models. $f_{\text{in}}, f_{\text{out}}$ are parameterized as in eqn. (1) below. Concretely, any linear equivariant function $f: \mathbb R^N \to \mathbb R^N$ with respect to the symmetry group induced by the coarsening $\mathcal{G} = \mathcal{S}\_{c\_1} \times \\ldots \times \mathcal{S}\_{c\_M} \times \mathcal{A}\_{G'}$ (c.f. Defn 3) admits the following block-matrix form (assuming the nodes are ordered by their cluster assignment), with $f_{kl}$ block matrices and $a_k, b_k, c_{kl}$ scalars:
$$
f = \\begin{bmatrix}
f_{11} & \\cdots & f_{1M} \\\\
\\vdots & \\ddots & \\vdots \\\\
f_{M1} & \\cdots & f_{MM}
\\end{bmatrix}, \\, f_{kk} = a_k \\mathbf{I} + b_k \\mathbf{1} \\mathbf{1}^{\top}, \\, f_{kl} = c_{kl} \\mathbf{1} \\mathbf{1}^{\top} \\text{ for } k \\neq l. \\quad (1)
$$
The coarsened graph symmetry $ \mathcal{A}\_{G'}$ induces constraints on $a_k, b_k, c_{kl}$. If $ \mathcal{A}\_{G'}$ is trivial, then these scalars are unconstrained. In the experiment, we consider a reflection symmetry on the coarsened grid graph, i.e., $ \mathcal{A}\_{G'}= \mathcal{S}\_2$ which acts by reflecting the left (coarsened) patches to the right (coarsened) patches. Suppose the reflected patch pairs are ordered consecutively, then $a_k = a_{k+1}, b_k = b_{k+1}$ for $k \in \\{1, 3, \ldots, M-1 \\}$, and $c_{kl} = c_{k+1, l-1}$ for $k \in \\{1, 3, \ldots, M-1 \\}, l \in \\{2, 4, \ldots, M \\}$ (see Figure 8 in PDF for an illustration).
In practice, we extend the formulation to $f: \mathbb R^{N \times d} \to \mathbb R^{N \times k}$.
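As an unofficial numerical sanity check of eqn. (1) (our own sketch, not the authors' code), the block parameterisation can be assembled explicitly and tested: any permutation acting within a cluster should commute with $f$.

```python
import numpy as np

def equivariant_linear(sizes, a, b, c):
    # Assemble f from eqn. (1): diagonal blocks f_kk = a_k*I + b_k*11^T,
    # off-diagonal blocks f_kl = c_kl*11^T (nodes ordered by cluster).
    offs = np.concatenate([[0], np.cumsum(sizes)])
    N, M = offs[-1], len(sizes)
    f = np.zeros((N, N))
    for k in range(M):
        for l in range(M):
            blk = np.ones((sizes[k], sizes[l]))
            f[offs[k]:offs[k + 1], offs[l]:offs[l + 1]] = (
                a[k] * np.eye(sizes[k]) + b[k] * blk if k == l else c[k, l] * blk
            )
    return f

rng = np.random.default_rng(0)
sizes = [3, 2]                                  # two clusters of sizes 3 and 2
f = equivariant_linear(sizes,
                       rng.normal(size=2),      # a_k
                       rng.normal(size=2),      # b_k
                       rng.normal(size=(2, 2))) # c_kl

# Swapping two nodes inside the first cluster commutes with f.
P = np.eye(5)
P[[0, 1]] = P[[1, 0]]
x = rng.normal(size=5)
assert np.allclose(f @ P @ x, P @ f @ x)
```

With the coarsened-graph symmetry $\mathcal{A}\_{G'}$ nontrivial, one would additionally tie the scalars (e.g. $a_k = a_{k+1}$ for reflected patch pairs) as described after eqn. (1); the sketch above covers only the trivial $\mathcal{A}\_{G'}$ case.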
We train the models with ADAM (learning rate $0.01$, no weight decay, at most $1000$ epochs). We report the best test accuracy at the model checkpoint selected by the best validation accuracy (with an $80/20$ training-validation split).
Figure 6 in PDF shows the **empirical risk first decreases and then increases as the group decreases, illustrating the bias-variance trade-off from our theory**. We perform further ablation studies on the effect of the $\mathcal{G}$-Net architecture and the coarsened graph symmetry $\mathcal{A}\_{G'}$. Figure 6 (left) compares $2$-layer and $1$-layer linear $\mathcal{G}$-Net, **demonstrating that the tradeoff occurs in both linear and nonlinear models**. Figure 6 (right) shows that **using reflection symmetry of the coarsened graph outperforms the trivial symmetry baseline**, highlighting the utility of modeling coarsened graph symmetries.
Pdf: /pdf/cbdbc934e07efe7e0dfe238ff413cdb1dfce0f3c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Self-Adaptive Motion Tracking against On-body Displacement of Flexible Sensors | Accept (poster) | Summary: In the context of ubiquitous sensing, the paper addresses the issue that on-body devices cannot be firmly worn at a fixed position across different sessions by adapting to unknown displacements. The authors propose three main contributions: (1) a transformation layer adapts to unknown displacements, (2) an LSTM network identifies patterns, and (3) a discrepancy loss enables unsupervised learning.
Strengths: - The problem setup is well motivated. The addressed problem exists in many application areas and research fields.
- The paper is well written and easily understandable.
- The soundness of the technical claims is good and the experiments support the proposed method.
- The authors will make the datasets and source code publicly available.
Weaknesses:
- Section 2.3 contains introductory applications that are repeated from Chapter 1 and hence can be integrated into the Introduction.
- Experiments are only performed on this specific dataset. It is not clear, whether the proposed method can be generalized to other applications, e.g., on datasets such as HHAR, UCI HAR, WISDM, and uWave (time series domain adaptation datasets).
- Experiments in Sec. 5.1 should also cover data from more than one participant
- References are missing and state-of-the-art methods on which experiments are performed are limited. The following methods can also be considered for comparison: MMCD [1], MMDA [2], DDC [3], DAN [4], HoMM [5], CDAN [6], DIRT-T [7], CoDATS [8], AdvSKM [9], and Sinkhorn methods [10,11].
In my opinion, the paper needs an extensive re-write if published at an ML venue such as NeurIPS. From a methodological perspective, the main contribution is the novel sequence discrepancy loss (as (1) learnable affine transformations and (2) Fourier-encoded LSTMs or similar ideas already exist [12]). The paper should be re-written around this main contribution, and the on-body placement use-case should be one of many time series datasets on which the method is applied.
**Some minor points:**
- L34ff: But if the features are transformed accurately they still share the same feature space, right?
- L101: it is uncommon to name a specific journal of a citation within the main text
- L115: increase[s]
**Missing references:**
- [1] Naimeh Alipour and Jafar Tahmoresnezhad: Heterogeneous Domain Adaptation with Statistical Distribution Alignment and Progressive Pseudo Label Selection. In Applied Intelligence, volume 52, pp. 8038–8055, October 2021. doi: 10.1007/s10489-021-02756-x.
- [2] Mohammad Mahfujur Rahman et al.: On Minimum Discrepancy Estimation for Deep Domain Adaptation. In Domain Adaptation for Visual Understanding, Springer, Cham., January 2020. doi:10.1007/978-3-030-30671-7_6.
- [3] Eric Tzeng et al.: Deep Domain Confusion: Maximizing for Domain Invariance. In arXiv preprint arXiv:1412.3474v1 [cs.CV], December 2014.
- [4] Mingsheng Long et al.: Learning Transferable Features with Deep Adaptation Networks. In Proc. of the Intl. Conf. on Machine Learning (ICML), volume 37, pp. 97–105, July 2015. doi: 10.5555/3045118.3045130.
- [5] Chao Chen et al.: HoMM: Higher-Order Moment Matching for Unsupervised Domain Adaptation. In Proc. of the AAAI Conf. on Artificial Intelligence (AAAI), volume 34(4), pp. 3422–3429, April 2020. doi: 10.1609/aaai.v34i04.5745.
- [6] Mingsheng Long et al.: Conditional Adversarial Domain Adaptation. In Advances of Neural Information Processing Systems (NIPS), volume 31, pp. 1647–1657, December 2018. doi: 10.5555/3326943.3327094.
- [7] Rui Shu et al.: A DIRT-T Approach to Unsupervised Domain Adaptation. In Intl. Conf. on Learning Representations (ICLR), 2018.
- [8] Garrett Wilson et al.: A Survey of Unsupervised Deep Domain Adaptation. In Proc. of the ACM Trans. on Intelligent Systems and Technology (TIST), volume 11(5), pp. 1–46, October 2020. doi: 10.1145/3400066.
- [9] Qiao Liu and Hui Xe. Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation. In Proc. of the Intl. Joint Conf. on Artificial Intelligence (IJCAI), pp. 2744–2750, August 2021. doi: 10.24963/ijcai.2021/378.
- [10] Felix Ott et al.: Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift. In Proceedings of the ACM International Conference on Multimedia (ACMMM), pages 5934–5943, Lisboa, Portugal, October 2022. doi:10.1145/3503161.3548167.
- [11] Huan He et al.: Domain Adaptation for Time Series Under Feature and Label Shift. arXiv preprint arXiv:2302.03133, February 2023.
- [12] Zhang et al.: Learning Long Term Dependencies via Fourier Recurrent Units. ICML 2018.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: none.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The limitation of this work is that it is not clear whether the proposed approach can be applied to other time series domain adaptation applications or the method is specifically designed for this on-body displacement of flexible sensors and overfitted on these dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Title: Re: Rebuttal
Comment: Supporting what reviewer ncj6 mentioned: please address the rebuttal to each review - this makes it easier for the reviewer to enter the discussion based on the own review.
Now to the point: thank you for the rebuttal and the provided PDF. My main criticism about additional participants is resolved (to some degree).
However, I still have an issue with the experiments and the specificity of the dataset/data/application (which is brought up by reviewer ncj6 also). Imho the experimental results are still a bit weak (#participants, modalities, applications) and to me it is hard to judge the actual impact of the method beyond what is specifically done here. I share the critique of reviewer fsrB, who refers to a "very narrow application of domain adaptation" and asks whether it could be extended "to other settings or problems" - which I cannot judge.
I know it is framed as an "application paper" but I am not sure about the actual impact of this work.
---
Rebuttal Comment 1.1:
Comment: Thank you again for the updated experimental results. But my point is/was not the statistical significance of the improvement over the state of the art.
My criticism is about a small number of modalities and applications, and a limited number of participants. It is very difficult to judge the impact of the work and how it generalizes to other setups given the provided experimental results.
---
Rebuttal Comment 1.2:
Title: Re:Reviewer Bo9S
Comment: *Clarification: We would like to clarify that our previous response was intended to address the issue raised by reviewer ncj6, please see below for our dedicated response to your questions.*
Thanks for your constructive feedback and for acknowledging that the principal concern regarding the inclusion of additional participants has been partially addressed. Regarding the remaining concerns, we address them as follows:
### 1) Dataset Richness
In flexible sensor studies, participants usually range from one to ten, and data samples range from thousands to hundreds of thousands [1][2][3][4][5], based on task complexity and sensor data dimension. Generally, complex tasks with high-dimensional data require larger datasets (refer to the table below).
| Study | Task | Sensor data dimension | Participants included in dataset| Dataset size |
|:------:|:------:|:-----:|:-----:|:------:|
| Bian, 2021 [1] | Hand gesture recognition| 4 | 1 | 3.5k |
| Kim, 2018 [2] | Full-body motion tracking | 10 | 1 | 2.3k |
| Zhang, 2022 [3] | Gait recognition | 2 | 5 | 0.75k |
| Frediani, 2021 [4] | Body flexion and torsion estimation | 2 | 5 | Non-public |
| Glauser, 2019 [5] | Hand pose estimation | 44 | 10 | 105k |
In our application, we predict joint angles using data from two sensors, with a dimension of 2 and low task complexity (only one rotational degree of freedom). Hence, our dataset (5 participants, 80,000+ samples) adheres to research norms.
### 2) The actual impact of this work
In recent years, flexible sensors have been incorporated into various wearable devices like gloves [5][6], jackets [7], wristbands [1], etc., achieving specific functions through machine learning. Their application has significantly impacted diverse fields including materials science, sensor technology, machine learning, AI, computer graphics, and human-computer interaction. For example, a 2019 Nature Biotechnology perspective advocated placing sensors approximately on the body, reducing the need for expert placement [8]. However, collecting datasets covering diverse wear positions for model training increases data costs. Changes in device design also lead to additional data collection, consuming time and resources.
The contribution of our paper lies in proposing an adaptive method for on-body positioning of flexible sensors. Additionally, introducing the concept of data distribution shifts due to varying sensor positions enhances flexible sensor design optimization, further advancing their application in wearables.
### 3) How it generalizes to other setups given the provided experimental results.
We rigorously assessed the generalizability of the proposed method using the DIP-IMU dataset. Although the IMU data in the DIP-IMU dataset [9] are significantly different from flexible strain sensor data (a 3×3 unit orthogonal matrix vs. a 1-d capacitance value), our method maintains its efficacy (for further details, kindly refer to Table 2 in the attached PDF and our comprehensive response to reviewer ncj6).
### 4) Rebuttal Format
We fully endorse the suggestion to "please address the rebuttal to each review." We will address the rebuttal to each review in the upcoming discussions.
[1] Bian, Sizhen, and Paul Lukowicz. "Capacitive sensing based on-board hand gesture recognition with TinyML." Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers. 2021.
[2] Kim, Dooyoung, et al. "Deep full-body motion network for a soft wearable motion sensing suit." IEEE/ASME Transactions on Mechatronics 24.1 (2018): 56-66.
[3] Zhang, Quan, et al. "Wearable triboelectric sensors enabled gait analysis and waist motion capture for IoT-based smart healthcare applications." Advanced Science 9.4 (2022): 2103694.
[4]Frediani, Gabriele, et al. "Monitoring flexions and torsions of the trunk via gyroscope-calibrated capacitive elastomeric wearable sensors." Sensors 21.20 (2021): 6706.
[5] Glauser, Oliver, et al. "Interactive hand pose estimation using a stretch-sensing soft glove." ACM Transactions on Graphics (ToG) 38.4 (2019): 1-15.
[6] Gosala, Nikhil Bharadwaj, et al. "Self-Calibrated Multi-Sensor Wearable for Hand Tracking and Modeling." IEEE Transactions on Visualization and Computer Graphics (2021).
[7] Zhou, Bo, et al. "MoCaPose: Motion Capturing with Textile-integrated Capacitive Sensors in Loose-fitting Smart Garments." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7.1 (2023): 1-40.
[8] Someya, T., Amagai, M. Toward a new generation of smart skins. Nat Biotechnol 37, 382–388 (2019). https://doi.org/10.1038/s41587-019-0079-1
Huang, Yinghao, et al. "Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time." ACM Transactions on Graphics (TOG) 37.6 (2018): 1-15. | Summary: This paper presents an approach for adaptively learning to track motion trajectories from an elbow pad sensor, concentrating especially on modelling sensor displacements in an unsupervised manner during operation. The tracking method is based on a multi-layer neural network architecture with a learnable affine layer, a Fourier-feature-encoded LSTM, and multiple output regression layers. Furthermore, a novel sequence discrepancy loss is introduced to reduce the data shift caused by sensor displacements. The proposed techniques are evaluated on collected real datasets with an ablation study and comparisons to other domain adaptation methods from the literature.
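For intuition on the Fourier-feature encoding mentioned in the summary, here is a generic sketch (the review does not specify the paper's exact encoding, so the dyadic frequency choice below is our assumption, not the authors' design):

```python
import numpy as np

def fourier_features(x, num_bands=4):
    # Map a 1-D sensor signal to sin/cos features at dyadic frequencies,
    # a common way to expose periodic structure to a recurrent model.
    # The specific frequencies here are illustrative only.
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # pi, 2pi, 4pi, 8pi
    proj = np.outer(np.asarray(x), freqs)           # shape (T, num_bands)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

t = np.linspace(0.0, 1.0, 100)                      # toy capacitance trace
feat = fourier_features(t, num_bands=4)
assert feat.shape == (100, 8)
assert np.allclose(feat[0], [0, 0, 0, 0, 1, 1, 1, 1])  # sin(0)=0, cos(0)=1
```

In a pipeline like the one described, such features would be computed after the learnable affine layer and fed to the LSTM; the sketch only shows the encoding step itself.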
Strengths: This is an application-oriented paper with a novel combination of existing techniques (i.e., an affine transformation layer for data shift, a Fourier LSTM for pattern matching, and a sequence discrepancy loss with an auxiliary regressor for unsupervised/self-adaptive learning) for a specific problem, which is shown via SOTA comparisons and an ablation study to work in practice (see also weaknesses). The paper is well-structured and its main processing pipeline is illustrated with the related formulation.
Summary of strengths
- Practical solution for (very specific) problem in domain adaptation
- Good combination of different techniques for new application
- Empirical evaluation with comparison and ablation study
- Clearly illustrated processing pipeline and formulation
Weaknesses: The paper only considers a very specific problem with one type of sensor and a small dataset. It is difficult to assess whether this could generalise to different settings. Although there are comparisons to SOTA models, the paper lacks basic baselines from the signal processing field: e.g., low-dimensional data could be estimated using state-space models such as extended Kalman or particle filters, or at least more justification of the need for learnable methods with adaptive capabilities should be given. Also, most of the SOTA methods presented in the paper are designed for image-domain adaptation, not for low-dimensional time-series data, so there could be some more related approaches. In addition to utilisation of a more comprehensive dataset, the paper lacks a detailed analysis of boundaries in relation to how much noise and sensor displacement it can handle.
Summary of weaknesses
- Very narrow application of domain adaptation (could it be extended to other settings or problems?)
- Small datasets (one person, four in supplement)
- Lack of basic baseline (e.g., state space models (EKF, PF etc.))
- Missing the study of data noise and/or boundaries for amount of sensor displacement handled
- Most of the compared SOTA methods designed for different data (e.g., images)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How well do the prior SOTA methods evaluated in the paper cope with this particular time-series type of domain shift?
- What are the boundaries of sensor displacement (i.e., how much noise or data shift can the method handle)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Some of the limitations of proposed approach are shortly analysed in relation to specific issues such as data quality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Title: Response to rebuttal
Comment: I have read rebuttal and other reviews. I would like to thank authors for additional benchmarking (i.e., with more subjects, additional dataset, and against more related SOTA methods), which definitely improves the original manuscript and results. However, in overall I still find the work quite specific and to show its possibilities, impact, and generalisation capabilities, more work is needed (e.g., experimenting with different modalities, time-series domain adaption problems, and model properties (FFE, Affine Transformation, SD loss) against other time-series SOTA and baseline models). | Summary: Flexible sensors are useful for tracking human status as wearable systems, but they can become displaced when worn, causing challenges for machine learning algorithms. The proposed solution of this paper is a self-adaptive motion tracking network that includes a learnable Affine Transformation layer, a Fourier-encoded LSTM network for pattern identification, and a sequence discrepancy loss with auxiliary regressors for unsupervised tuning of Affine Transformation parameters.
Strengths: 1. The proposed method of this paper is useful for real-world applications.
2. The proposed method is intuitive and easy to follow.
3. The experiment results show the effectiveness of their method.
Weaknesses: 1. This paper mainly works with a real-world sensor. I would like to know how it performs in real-world applications.
2. In the experiments, the authors should compare against more advanced methods; for example, the latest method the authors compared was published in 2020. I would like to know how this framework works together with state-of-the-art modules, for example, transformers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please answer 2 of the weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please answer 2 of the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | null | Summary: This paper presents an approach to self-adaptive motion tracking using on-body, flexible sensors that in real-world applications may be subject to displacements. The authors present a network that contains a component that automatically learns an affine transformation between training data (with annotations in the form of sensor data with their corresponding target joint angles), a Fourier encoded LSTM part for the actual pattern recognition, and a bespoke loss function that uses auxiliary regressors for automated, unsupervised tuning of the parameters of the aforementioned affine transformation. The authors evaluate their method in practical experiments where participants wore elbow sleeves that contain flexible sensors and performed various bend movements. The objective of the evaluation is to measure the accuracy of the predicted joint angles for motion tracking applications. Comparisons to related SOTA models are made and it is shown that the proposed method outperforms all other analyzed models with much better accuracy (distance from ground truth). An ablation study sheds light on the contributions of the individual components of the network and the supplementary material provides details on the implementations, data characteristics etc.
Strengths: This paper is very well written, technically sound, and the experimental results are convincing. I applaud the authors for what seems like rigorous work and great thoroughness in execution. Well done. The paper is very application-driven, which is fine of course. The tackled problem is relevant for the targeted application domain of motion tracking using body-worn sensors. As such, I can see that the paper may have impact in this field. The authors carefully designed a technical solution in the form of a network that specifically, and successfully, addresses domain-specific aspects. The experimental evaluation is relevant at least in parts and comparisons to related work / SOTA are done in a very careful and thorough manner. The reported results look promising.
Weaknesses: The experimental evaluation looks a bit limited. While I appreciate the range of displacement that were tackled, the number of participants is a bit small, and so are their individual recordings. I would imagine that the displacements are dependent on a number of factors including the physique of the participants and the tasks performed. Not much is said about either of these and I fear that the promising results may not (or maybe they will — hard to tell!) generalize too much.
On a higher level note: “application papers” typically have a more challenging time at NeurIPS. I do not want to discount the paper merely by the fact that it is mainly an application paper. However, the presented method is very much tailored to motion tracking and I cannot see how the method itself would generalize beyond this. This is not a criticism per se but rather an assessment of the potential impact that the paper might have for the broader NeurIPS audience, which I would judge as rather limited. I wonder if the authors could branch out their method beyond the specific motion tracking application here? They argue through domain shifts and such, which is a hot topic in the field, but I fail to see how this domain adaptation etc. community would gain much insight from this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See my comment above re limited impact beyond the particular application domain. I wonder if the authors could elaborate on this a bit.
Also, see my concern about the evaluation. Can the authors provide more information about their participants?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Discussion on limitations is there and ok.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | null | Rebuttal 1:
Rebuttal: # Reviewer #ncj6
## Q1. Limited participants / recording length / physique of participants / task.
To further demonstrate the generalization ability of our method, we conducted additional experiments with five new participants of varying body types (see Table 1 in the uploaded pdf file).
Each participant wore the devices in three on-body positions and performed three distinct physical activities sequentially: __ping-pong__, __basketball__, and __boxing__. In total, we collected 15 (5 participants × 3 on-body displacements) unique data segments across all participants, comprising 81,848 total frames.
Table 2 in the uploaded PDF shows the experimental results of our method on the aforementioned dataset, including two additional SOTA methods, HoMM [1] and AdvSKM [2], as recommended by Reviewers f2Et, fsrB, and Bo9S.
It can be observed that the proposed method demonstrates the lowest average error across data from five users, supporting its applicability to varied user profiles and movement patterns.
## Q2. Branch out proposed method beyond the specific motion tracking application?
To demonstrate the generalizability of our method beyond the task in our paper, we conducted further experiments on the DIP-IMU dataset [3], which includes IMU sensor data from 10 subjects executing diverse motions, along with corresponding 3D human body poses. The objective is to estimate 3D rotations of 15 SMPL model joints using only six IMU readings from head, pelvis, hands, and feet. Similar to DIP [3], we employed an LSTM network for this, yielding 15 joint rotations in axis-angle notation as output.
We trained the model on data from 8 of the 10 subjects, reserving the remaining 2 subjects' data as test set.
As shown in Table 3 in the uploaded PDF, the proposed method achieves slightly improved performance compared to the SOTA HoMM and AdvSKM methods. This demonstrates the strong generalizability of our approach across different input modalities and sensing technologies.
# Reviewer #f2Et
## Q1. How it works in real-world applications.
Our method can be applied in real-world scenarios through the following steps:
1) To begin, users wear the device and perform a flexed-arm calibration pose.
2) Using the collected sensor data, our method fine-tunes the model to adapt to the present on-body device displacement.
## Q2. Comparison to more advanced methods.
Thanks for the suggestion and please see Table 2 in the uploaded PDF for the comparison to two more advanced methods, HoMM [1] and AdvSKM [2], which further demonstrates the superiority of the proposed method.
# Reviewer #fsrB
## Q1. Narrow application of domain adaptation.
Please kindly review our reply to Reviewer #ncj6, Q2, and Table 3 in the provided PDF, showcasing our method's enhanced performance on IMU data. This underscores our approach's robust adaptability across diverse input modes and sensing technologies.
## Q2. Small datasets.
We performed extra experiments on five new participants. Please refer to our response to Reviewer #ncj6, Q1, and Tables 1 and 2.
## Q3. Missing the study of data noise and/or boundaries for amount of sensor displacement handled.
Thank you for your suggestion. We computed SNR for the sensor signals and aggregated MAE for each set of data. As shown in Fig 1 (in the uploaded PDF), our method demonstrates significant efficacy for SNR > 10 dB, while achieving satisfactory outcomes becomes challenging for SNR < 10 dB.
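For reference, the SNR figures above are presumably the standard power ratio expressed in decibels; a minimal pure-Python sketch of that computation (the sample values and the gating variable are our own illustration, not the authors' code):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from mean signal and noise power."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)

# Gate evaluation on the 10 dB threshold reported above (sample values are made up):
usable = snr_db([1.0, -1.0, 1.0, -1.0], [0.1, -0.1, 0.1, -0.1]) > 10
```

With unit-amplitude signal and 0.1-amplitude noise the power ratio is 100, i.e. 20 dB, so this segment would pass the 10 dB threshold.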
## Q4. Most of the compared SOTA methods designed for different data.
Thank you for your suggestion! To make a fairer comparison, we included comparison results against AdvSKM [2], which is specifically designed for time series data. Please see Table 2 for more details.
# Reviewer \#Bo9S
## Q1. Experiments are only performed on this specific dataset.
Thank you for your comment. Please see our response to Reviewer #ncj6, Q2 and Table 3 in the uploaded PDF, which demonstrates that our method also achieves improved performance on inertial measurement unit (IMU) data.
We carefully assessed the suggested datasets and found that they focus on classification tasks, which our method does not currently accommodate. Hence, we are unable to gauge our approach's performance on these specific datasets.
## Q2. Experiments in Sec. 5.1 should also cover data from more than one participant.
Thank you for your constructive suggestion. We have conducted preliminary experiments with additional participants, and the results indicate that our conclusions remain valid. We will include the updated results from these expanded experiments in the revision.
## Q3. Missing SOTAs.
Thanks for the suggestion and please see Table 2 for the comparison with two suggested SOTAs, HoMM and AdvSKM, which further demonstrates the superiority of the proposed method.
Please note that HoMM and AdvSKM are the two most suitable benchmark methods for our task, which involves unlabeled target domain data and regression modeling.
## Q4. Paper organization.
To clarify, as Reviewer #ncj6 noted, ours is an ''application paper'' focused on solving an important real-world problem, rather than a ''methodology paper'' aiming to develop a new algorithm applicable to multiple potential applications.
This is within the remit of the NeurIPS 2023 Call for Papers: ''Applications (e.g., vision, language, speech and audio)''.
## Q5. Missing references and minor issues.
Thanks for the suggestions. We will include all references in our revision and correct the minor issues.
# References
[1] Chen, Chao, et al. "HoMM: Higher-order moment matching for unsupervised domain adaptation." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.
[2] Liu, Qiao, and Hui Xue. "Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation." IJCAI. 2021.
[3] Huang, Yinghao, et al. "Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time." ACM Transactions on Graphics (TOG) 37.6 (2018): 1-15.
Pdf: /pdf/967e5f1aead57d157d72970331f329f62f02a33a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding Contrastive Learning via Distributionally Robust Optimization | Accept (poster) | Summary: This paper tackles the sampling bias problem in contrastive learning, where negative samples are usually sampled randomly from the marginal distribution and may contain false positive samples.
The authors provide a connection between the contrastive learning objective and distributionally robust optimization as a first part of the contributions:
By an optimal choice of the temperature parameter, the InfoNCE loss coincides with maximizing the loss for negative samples, leading to the worst-case negative sample distribution.
Consequently, it is revealed that the temperature parameter admits a trade-off between the generalization error and the DRO radius.
Secondly, the authors point out the two issues of standard contrastive learning:
It tends to overweigh negative samples with large distances (called over-conservatism) and be sensitive to outliers.
To mitigate these issues, the authors propose an alternative weight distribution over negative pairs to prevent overweighting of too-far negative samples.
Strengths: - New insights on contrastive learning: One of the main contributions of this paper, the equivalent reformulation of contrastive learning with distributionally robust optimization, provides a new perspective to contrastive learning. In this picture, contrastive learning can be interpreted as optimizing the worst-case distribution over the negative samples. This interpretation can benefit us in developing a new method, as seen in Section 5.
- A new interpretation of the temperature parameter: As a result of the reformulation, we observe that the temperature parameter in contrastive learning serves as the Lagrange multiplier for the DRO constraint, and a lower temperature leads to a stronger DRO constraint. This insight again contributes to the development of algorithms.
Weaknesses: - Meaning of the connection between contrastive learning and DRO is not clear enough: Theorem 3.2 connects the DRO objective and the InfoNCE loss. Although it is very interesting and insightful, and the authors argue that contrastive learning can be regarded as DRO and thus mitigate the sampling bias, we could argue that DRO is merely contrastive learning with the sampling bias in the opposite way. This would undermine the robustness property, as DRO could still suffer from the sampling bias residing in contrastive learning.
- DRO interpretation may not explain hard mining: In Section 3.3.C and Section 5.1, the authors observe that the DRO equivalent formulation explains how contrastive learning puts larger weights to negative samples with larger $f\_\\theta$. Although this seems correct, this phenomenon does not correspond to what is so-called "hard mining." Indeed, Robinson et al. [38] (and many other works) define hard negative samples as "points that are difficult to distinguish from an anchor point," which can be put in the current context as "negative samples with small $f\_\\theta$." This observation may explain why a "strange" trade-off is observed in Section 3.3.A such that stronger DRO constraint leads to looser generalization error, unlike what we expect.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Major comments are listed first.
- In Corollary 3.4 and Theorem 3.5, it is better to explain exactly what the "approximation" indicates.
- With the proposed method ADNCE, can you discuss how we should choose the parameters $\\mu$ and $\\sigma$ in Section 5.2?
Minor comments follow.
- Typo in l.42: "we proof" -> "we prove"
- Typo in l.82: "Donsker-Varadah" -> "Donsker-Varadhan"
- Typo in footnote 1: The expectation looks strange.
- Typo in l.142: $\\mathbb{E}\_Q$ -> $\\mathbb{E}\_{Q\_0}$
- In Theorem 3.2, the definition of $\\mathbb{Q}$ lacks. In Eq. (4), can $\\mathbb{E}\_{Q\_0}[Q/Q\_0]$ be simply written as $\\int Qdx$?
- Typo in l.156: "satisfied" -> "satisfies"
- Typo in l.173: "theorem 3.2" -> "Theorem 3.2"
- Typo in l.177: "subsection 3.4" -> "Subsection 3.4"
- Typo in l.190 and l.191: $E\_{Q\_0}$ -> $\\mathbb{E}\_{Q\_0}$
- In Theorem 4.2, it is better to mention $\\phi^\*$ indicates the convex conjugate because this is the first place where it appears.
- In Corollary 4.3, explaining what two random variables $X$ and $Y$ refer to make the statement complete is better.
- In l.251, what do you compare CL-DRO with by the word "tighter"? By the way, the word "tight" indicates that equality can be attained for a given inequality, so the comparative "tighter" seems strange.
- In l.257, I don't understand how "DRO bridges the gap" in the following paragraph.
- In l.259, it is better to explain what $\\mathcal{I}\_{DV}$ indicates.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately discussed the limitations in the conclusion.
Societal impacts do not apply to this work because this work is mostly theoretical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer C48x:
Dear Reviewer,
Many thanks for your detailed comments. In the revised version, we will meticulously polish the paper in accordance with your feedback, correcting typos and providing clear explanations of notations. We also find that there may be some misunderstandings.
## We would like to make clarifications in the following:
**M1: The meaning of the connection between CL and DRO is not clear enough: ... We could argue that DRO is merely contrastive learning with the sampling bias in the opposite way. This would undermine the robustness property of DRO can still suffer from the sampling bias residing in CL.**
A1: It appears there exists a misunderstanding. It is not accurate to consider DRO as merely contrastive learning with the sampling bias. DRO, in essence, is a broad optimization framework aiming to minimize the worst-case expected loss across a range of potential distributions. Our work (Thm. 3.2) demonstrates that optimizing the InfoNCE loss is equivalent to optimizing a CL-DRO objective, which is a specific loss function utilizing DRO. However, it is not logically sound to extrapolate this specific instance to conclude that DRO in general is equivalent to CL with sampling bias.
Furthermore, the issue of sampling bias arises from the inadequate specification of the negative sampling distribution $Q\_0$. DRO itself does not involve how to specify $Q\_0$. Hence, it is not valid to deduce that sampling bias is intrinsic to DRO. In fact, the adversarial nature of DRO equips CL with robustness to sampling bias.
**M2: DRO interpretation may not explain hard mining: ... Indeed, Robinson et al. [38] (and many other works) define hard negative samples as "points that are difficult to distinguish from an anchor point," which can be put in the current context as "negative samples with small $f\_\theta$.**
A2: There seems to be a misunderstanding that we would like to clarify. Our definition of hard negative samples aligns with the definition provided in [38]. More specifically, $f\_\theta$ in our paper represents the similarity between two points (as stated on line 98) --- a larger $f\_\theta$ value indicates a higher degree of similarity. Therefore, negative samples with large $f\_\theta$ values are considered as hard negative samples, not those with small $f\_\theta$ values.
Regarding the scenario with an overly stringent DRO constraint, the decline in generalization performance can be intuitively understood. In such cases, the optimization process takes into account an excessively broad range of distributions, including irrelevant or harmful ones that significantly deviate from the ideal distribution. This can negatively impact the performance of the model.
## We further provide responses to the questions you have raised:
**Q3: What the "approximation" indicates in Corollary 3.4 and Theorem 3.5?**
A3: The term "approximation" refers to the estimate obtained by applying a second-order Taylor expansion to the objective function $\mathcal{L}\_{\text{CL-DRO}}^{\phi}$. By choosing $\phi$ as the Kullback-Leibler (KL) divergence (for which $\phi^{(2)}(1)=1$), truncating the expansion at second order, and differentiating with respect to $\eta\_1$, we can derive $\mathcal{L}\_{\text{CL-DRO,2}}^{KL}$.
Regarding the approximation in Corollary 3.4, it suffices to differentiate $\mathcal{L}\_{\text{CL-DRO,2}}^{KL}$ with respect to $\alpha$ in order to obtain the final result.
The reviewer could refer to Appendix A.3 and Appendix A.4 for more details.
**Q4: How we should choose the $\mu$ and $\sigma$?**
A4: In most datasets, it is sufficient to set $\sigma=1$ and just needs to tune the hyperparameter $\mu$ via grid search. The reviewer could refer to Tables 6, 7, and 9 in Appendix B for more details.
**Q5: What do you compare CL-DRO with by the word "tighter"?**
A5: The common existing variational approximation of $\phi$-divergences is the Donsker-Varadhan objective ($I_{DV}$): $D\_{\phi}(P||Q) \coloneqq \operatorname*{sup}\_{{f}\in \mathcal{F}} \{ \mathbb{E}\_P [f ] - \mathbb{E}\_Q [\phi^*(f)] \}$
which holds for arbitrary finite measures and is therefore loose when applied to probability measures, as first observed in Ruderman et al. [1]. This expression fails to take into account the fact that divergences are defined between probability distributions. If we additionally use the fact that $P$ is a probability measure, i.e., $\mathbb{E}\_P[1]=1$, then we have the tighter representation: $D\_{\phi}(P||Q) \coloneqq \operatorname*{sup}\_{{f}\in \mathcal{F}} \mathbb{E}\_{P} [f] - \operatorname*{min}\_{\lambda \in \mathbb{R}} \{ \lambda + \mathbb{E}\_{Q} [\phi^*(f-\lambda)] \}$
This result, using the infimum over $\lambda$ in the spirit of Ruderman et al. [1], appears to have been independently proposed by Agrawal et al. [2]. As our proof in Theorem 4.2 revolves around the aforementioned equation, it is appropriate to refer to it as a tighter estimation of mutual information.
[1] Ruderman et al. Tighter variational representations of f-divergences via restriction to probability measures. ICML2012
[2] Agrawal et al. Optimal bounds between f-divergences and integral probability metrics. ICML2020
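The gap between the two representations can be checked on a toy example. For the KL case, $\phi^*(s)=e^{s-1}$, and minimising the inner problem over $\lambda$ analytically yields $\mathbb{E}\_P[f]-\log \mathbb{E}\_Q[e^f]$. The sketch below (our own illustration, not from the paper) shows that, for a fixed suboptimal critic $f$, the restricted representation is larger than the unrestricted one while both remain below the true divergence:

```python
import math

P = [0.7, 0.3]        # toy "positive" distribution
Q = [0.5, 0.5]        # toy reference distribution
f = [0.5, -0.3]       # an arbitrary, deliberately suboptimal critic

kl = sum(p * math.log(p / q) for p, q in zip(P, Q))

e_p_f = sum(p * fi for p, fi in zip(P, f))

# Unrestricted (DV-style) bound: E_P[f] - E_Q[phi*(f)], with phi*(s) = e^{s-1} for KL.
loose = e_p_f - sum(q * math.exp(fi - 1) for q, fi in zip(Q, f))

# Restricted bound after optimising lambda: E_P[f] - log E_Q[e^f].
tight = e_p_f - math.log(sum(q * math.exp(fi) for q, fi in zip(Q, f)))
```

On this example `loose` is clearly negative while `tight` sits just below `kl`, illustrating why restricting to probability measures gives a tighter estimate.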
**Q6: What's the meaning of "DRO bridges the gap"?**
A6: While recent works such as MINE [2] and CPC [36] have demonstrated that InfoNCE is a lower bound on MI, their theoretical analyses still exhibit certain shortcomings [37]. MINE employs a critic in the dual variational form ($I_{DV}$) to establish a bound that is neither an upper nor a lower bound on mutual information, while CPC's proof relies on unnecessary approximations. Consequently, rigorously bridging the gap between contrastive learning (CL) and mutual information estimation remains an open question. Fortunately, the power of DRO enables us to rigorously demonstrate the equivalence between CL and mutual information estimation from a theoretical perspective.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the clarifications. I specifically appreciate the clarification of M2: "a larger $f\_\theta$ value indicates a higher degree of similarity," which I misunderstood at the initial review phase. Given this, I increase the evaluation score from 4 to 5. | Summary: This paper starts from the question why the naive form of CL is robust to the sampling bias issue, resulting in empirical success in various areas. To this end, the authors first present the relationship between CL and DRO theoretically, where the DRO-constrained CL objective is conceptually equivalent to the objective of CL itself. They further show that the temperature in CL acts as (the inverse of) the robust radius in the DRO constraint. Based on these findings, the authors finally propose ADNCE, where the importance weights of negative samples follow a Gaussian distribution. The proposed methods are validated via experiments on various domains.
Strengths: - The paper reveals the relationship between CL and DRO in a more comprehensive way to address the bias-tolerant behavior of CL.
- Theoretical findings regarding \tau are validated empirically.
- The proposed framework and the ablative model based on the approximation of CL-DRO (Eq. 8) achieve meaningful performance improvement on the datasets.
Weaknesses: - The design choice of ADNCE needs to be justified. Why such Gaussian-like weights are the most reasonable choice and can address the weakness mentioned in Sec. 5.1?
- Missing sensitivity analysis of \mu and \sigma in (12), comparing to those of \tau in InfoNCE. How much the results change as the hyperparameters vary?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why controlling the variance of negative samples contributes to the success of CL?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitation has been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer rLbF:
Dear Reviewer,
We appreciate your recognition of our contribution on the connection between CL and DRO. We also express our gratitude for your insightful inquiries regarding ADNCE. Below, we present responses to your comments:
**Q1: Why are such Gaussian-like weights the most reasonable choice, and can they address the weakness mentioned in Sec. 5.1?**
A1: Thanks for your insightful comment. The primary motivation behind ADNCE is to mitigate the issues of over-conservatism and sensitivity to outliers. These limitations stem from the unreasonable worst-case distribution, which assigns excessive weights to the hardest negative samples. Consequently, any weighting strategy capable of modulating the worst-case distribution to focus more on the informative region holds promise. In ADNCE, we opted for Gaussian-like weights due to their flexibility and unimodal shape. However, alternative weighting strategies such as Gamma, Rayleigh, or Chi-squared distributions could also be employed. The following experiment demonstrates that these alternative weighting strategies can yield results comparable to the Gaussian-like weights.
|Weight Strategy| Probability density function|CIFAR10|
|----------|----------|------|
|Gamma-like| $w(x,m,n)=\frac{1}{\Gamma(m)n^m}x^{m-1}e^{-\frac{x}{n}}$ |91.74 |
|Rayleigh-like| $w(x,m)=\frac{x}{m^2}e^{-\frac{x^2}{2m^2}}$ |91.73 |
|Chi-squared-like| $w(x,m)=\frac{1}{2^{m/2}\Gamma{(m/2)}}x^{m/2-1}e^{-{x}/{2}}$ |91.99|
|Gaussian-like (ADNCE)|$w(x,m,n)=\frac{1}{n\sqrt{2\pi}}e^{-\frac{1}{2} (\frac{x-m}{n})^2} $|91.88 |
The table above compares TOP-1 accuracy on the CIFAR10 dataset across different weight strategies. The symbols $m$ and $n$ denote the parameters of the corresponding probability density function, and $x$ represents a random variable. Note that, since the domain of some of these PDFs is $(0,+\infty)$, we set $x=prediction\\_score + 1$.
As such, any weighting strategy that allows adjusting the weight distribution to center on the informative regions of the probability density is a reasonable choice.
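As a concrete sketch of how such Gaussian-like weights could be computed over a batch of negative similarity scores (a minimal pure-Python illustration; the function names and toy scores are our own, not the authors' code):

```python
import math

def gaussian_weight(score, mu=0.5, sigma=1.0):
    """Gaussian-like weight centred at mu, so that negatives near the
    informative region dominate rather than the hardest (largest-score) ones."""
    return math.exp(-0.5 * ((score - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def adnce_style_weights(neg_scores, mu=0.5, sigma=1.0):
    """Normalised weights over a batch of negative similarity scores."""
    raw = [gaussian_weight(s, mu, sigma) for s in neg_scores]
    total = sum(raw)
    return [r / total for r in raw]

scores = [0.1, 0.5, 0.9, 2.5]        # 2.5 mimics an outlier negative
weights = adnce_style_weights(scores)
```

Here the outlier score 2.5 receives a smaller weight than the mid-range scores, whereas a softmax over scores (as implicit in InfoNCE's worst-case distribution) would weight it most heavily.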
**Q2: Missing sensitivity analysis of $\mu$ and $\sigma$?**
A2: Many thanks for underlining this point. Here we present sensitivity analyses of $\mu$ and $\sigma$ on the CIFAR10, CIFAR100, and STL10 datasets. The results are presented as follows:
| $\mu$ | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|----------|------|------|------|------|------|
| CIFAR10 | 91.3 | 91.66| 91.9 |91.77 |92.25 |
| STL10 |87.84 |88.22 |87.56 |88.48 |88.45 |
| CIFAR100 |69.34 |69.31 |68.70 |69.24 |68.95 |
| $\sigma$ | 0.2 | 0.4 | 0.6 | 0.8 | 1 | 1.5 | 2 |
|------------|------|------|------|------|------|------|------|
| CIFAR10 | 90.07| 91.85| 92.02| 91.77| 91.72| 91.69| 91.94|
| STL10 | 86.54| 88.30 | 88.10 | 87.54| 88.95| 88.12| 88.40 |
| CIFAR100 | 67.38| 69.36| 69.52| 69.01| 68.70 | 69.24| 69.42|
The above tables compare TOP-1 accuracy on three datasets (CIFAR10, STL10, and CIFAR100) under different values of $\mu$ and $\sigma$. As can be seen, changing the parameters $\mu$ and $\sigma$ affects model performance, but not as dramatically as tuning the parameter $\tau$. (For example, on STL10 changing $\sigma$ from 1.0 to 0.2 brings only a 2.7% performance drop, while changing $\tau$ brings a 12.6% performance gap.) This outcome indicates that tuning $\mu$ and $\sigma$ is not a significant burden. In most scenarios, it may suffice to set $\sigma=1$, requiring only the tuning of $\mu$ within the range of 0.1 to 0.9 (see Tables 6, 7, and 9 in Appendix B for more details).
**Q3: Why controlling the variance of negative samples contributes to the success of CL?**
A3: Controlling the variance of negative samples is crucial for the success of contrastive learning, for two reasons. On the one hand, by regulating the variance of predicted scores for negative samples, we simultaneously reduce the variance of the model loss. This, in turn, enhances the generalization capability and reliability of the model. Analogously, there are numerous studies [16, 26, 35] on the trade-off between bias and variance in model loss, demonstrating significant gains in performance and robustness. On the other hand, variance control also manifests as rigorous hard mining, as evidenced by the gradient expression presented below.
$$
\frac{d}{d\theta}(\mathbb{E}\_{Q\_0} [f\_\theta] + \frac{1}{2\tau} \mathbb{V}\_{Q\_0} [f\_\theta]) = \mathbb{E}\_{Q\_0} \left[\frac{d}{d\theta}(f_\theta)\right] + \frac{1}{\tau} \mathbb{E}\_{Q\_0} [(f\_\theta - \mathbb{E}\_{Q\_0} [f\_\theta]) \frac{d}{d\theta}(f\_\theta)]
$$
Higher weights are allocated to harder negatives with higher predicted scores, capturing the core idea of hard mining. Without variance control, the weights lack distinction, which necessitates using a substantially larger set of negatives and further complicates training.
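The gradient identity above can be verified numerically. The sketch below (our own illustration, not the authors' code) uses a toy linear scoring function $f_\theta(x)=\theta x$ and compares the closed-form gradient of $\mathbb{E}\_{Q\_0}[f\_\theta] + \frac{1}{2\tau}\mathbb{V}\_{Q\_0}[f\_\theta]$ with a central finite-difference estimate:

```python
import statistics

tau = 0.5
xs = [0.2, 0.5, 1.0, 1.5]                 # toy negative samples drawn from Q_0

def score(theta, x):
    """Toy linear scoring function f_theta(x) = theta * x."""
    return theta * x

def objective(theta):
    """E[f_theta] + (1 / (2 tau)) * Var[f_theta] over the toy sample."""
    s = [score(theta, x) for x in xs]
    return statistics.fmean(s) + statistics.pvariance(s) / (2 * tau)

theta = 0.8
s = [score(theta, x) for x in xs]
mean_s = statistics.fmean(s)
# Closed form from the gradient expression: E[df] + (1/tau) E[(f - E[f]) df],
# where df/dtheta = x for the linear toy model.
analytic = statistics.fmean(xs) + sum((si - mean_s) * x for si, x in zip(s, xs)) / (len(xs) * tau)

eps = 1e-6
numeric = (objective(theta + eps) - objective(theta - eps)) / (2 * eps)
```

The two estimates agree, and the second term of `analytic` shows the hard-mining effect: samples whose score exceeds the mean contribute with positive weight proportional to their deviation.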
Under this framework, the paper derives that the InfoNCE loss can be interpreted as DRO with KL ball around the negative sampling distribution, and the paper leverages DRO insights to derive the optimal temperature parameter. Experiments are conducted which show that a novel modification of InfoNCE can lead to better sample complexities.
Strengths: 1. The framework that the paper proposes is quite elegant mathematically and explains the InfoNCE loss from a rigorous mathematical perspective, which endows us with many insights.
2. Using these insights, the paper proposes a new ADNCE loss that overcomes the issues of conservatism in DRO and shows some possibly promising experiments. (I am not an expert in CL so I cannot gauge how convincing these empirical improvements are).
Weaknesses: 1. The proposed approach introduces more hyperparameters, which define the weighting distribution. These hyperparameters can be a hassle to tune in practice, and it is unclear if the reported gains simply come from better hyperparameter tuning with these newly introduced hyperparameters. It would be informative to show whether the new ADNCE is better for a large number of hyperparameter settings, or only a few.
Please also answer my questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Corollary 3.4, how is optimality measured? In other words, what objective does the setting of Eqn 6 optimize, and which assumptions are needed for it to hold?
2. Besides the math, is there any intuition on why DRO in the negative samples is what CL should strive for? In other words, why do we want to maximize the distance for all possible distributions of negative samples around the truth? (rather than the one we see data from). I think illustrating this intuition would be important to making this paper stronger.
2b. Does this DRO insight apply to other types of CL losses, besides InfoNCE?
3. What is Q^ideal? It is undefined before Line 154.
4. Can you please shed more light on what is hard-mining and how DRO gives us insights that hasn't been uncovered before? The paper seems to assume that readers know what hard-mining is, but I am not aware of this phenomenon for CL.
5. Can ADNCE also be explained with some modification of DRO? It would be nice to see if the new method is theoretically motivated as well, or simply a heuristic to avoid conservatism of DRO.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses/questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer sD47:
Dear Reviewer,
We greatly appreciate your acknowledgement of our contributions and your insightful comments. In what follows, we provide responses to the questions you have raised:
**Q1: Concerning on the burden of hyperparameter tuning in ADNCE**
A1: Considering the two limitations of InfoNCE, namely its over-conservatism and sensitivity to outliers, we introduce ADNCE, which employs Gaussian-like weights with only two additional hyperparameters ($\mu$ and $\sigma$). In most scenarios, it may suffice to set $\sigma=1$, requiring only the tuning of $\mu$ within the range of 0.1 to 0.9 (see Appendix B.1 and Table 6 for more details). To further explore the model's sensitivity to the hyperparameter $\mu$, we have conducted additional experiments and obtained the following results:
| $\mu$ | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|----------|------|------|------|------|------|
| CIFAR10 | 91.3 | 91.66| 91.9 |91.77 |92.25 |
| STL10 |87.84 |88.22 |87.56 |88.48 |88.45 |
| CIFAR100 |69.34 |69.31 |68.70 |69.24 |68.95 |
The table above presents a comparison of TOP-1 accuracy performance across three datasets at different values of $\mu$.
As observed, the performance of our model does not exhibit high sensitivity to $\mu$. Even when $\mu$ is set to a less optimal value (for instance, $\mu=0.1$ in CIFAR10), our ADNCE still outperforms the original InfoNCE. This outcome underscores the efficacy of our ADNCE, and indicates that tuning $\mu$ and $\sigma$ is not a significant burden.
**Q2: In Corollary 3.4, how is optimality measured?**
A2: In this context, $\alpha^*$ denotes the optimal value of $\alpha$ that minimizes the initial formula as presented in equation (4). Specifically, $\alpha^*=\operatorname*{argmin}\_{\alpha\geq 0} \operatorname*{min}\_{\eta\_1} \operatorname*{max}\_{Q\in \mathbb{Q}} \{ \mathbb{E}\_{Q}[f\_\theta] - \alpha [D\_{KL}(Q||Q\_0) -\eta] + \eta\_1 (\mathbb{E}\_{Q\_0} [\frac{Q}{Q\_0}] - 1) \}$. Corollary 3.4 aims to analyze the relationships between $\alpha^*$ and $\eta$. Here, we simply apply a second-order Taylor expansion approximation without imposing any additional assumptions, which renders Corollary 3.4 both general and applicable across various domains. Appendix A.3 provides more comprehensive details.
**Q3: Is there any intuition on why DRO in the negative samples is what CL should strive for?**
A3: In contrastive learning, a uniform distribution is often employed to sample negative instances. However, this distribution is not optimal and may select instances with similar semantics, thereby leading to a potential issue of sampling bias. DRO empowers CL with resilience to such sampling bias.
DRO can be intuitively understood as a specific adversarial method: it introduces adversarial perturbations to the negative distribution, and the model is subsequently optimized to resist these perturbations. Through this mechanism, DRO enables CL to perform well across various potential distributions, thereby endowing it with robustness against sampling bias.
**Q4: What is Q^ideal?**
A4: Thanks for highlighting this issue. We will provide a comprehensive explanation of $Q^{ideal}$ in the next revision. The term $Q^{ideal}$ denotes the ideal negative-sampling distribution, which selects only instances with distinctly dissimilar semantics. It should be noted that in practical contrastive learning, the ideal $Q^{ideal}$ is not available. As a substitute, the uniform distribution $Q\_0$ is utilized, which may sample instances with similar semantics, thereby introducing the so-called sampling bias. Theorem 3.3 aims at establishing theoretical bounds relating the model trained directly on the ideal $Q^{ideal}$ to the model trained on $Q\_0$ using InfoNCE.
**Q5: What is hard-mining and how DRO gives us insights that hasn't been uncovered before?**
A5: Hard-mining refers to the phenomenon whereby CL adaptively places more weight on hard negative samples, which have higher prediction scores and are challenging to distinguish. While this phenomenon has also been illuminated in recent studies, either by examining the gradient magnitude [43] or through coordinate-wise optimization [45], our work presents the hard-mining property from a novel perspective and provides an explicit expression for the weights.
In addition to hard-mining, DRO unveils some attributes that have not been disclosed by recent work, including its robustness to sampling bias and the pivotal role of the temperature. Furthermore, the DRO perspective enhances our understanding of the limitations of InfoNCE, thereby inspiring us to develop a superior method ADNCE.
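The explicit weight expression mentioned above can be illustrated with a small numeric check (notation assumed for illustration, not taken from the paper): writing the negative part of InfoNCE as $L = \tau \log \sum_j \exp(f_j/\tau)$, the gradient with respect to each negative score $f_i$ is the softmax weight $\exp(f_i/\tau)/\sum_j \exp(f_j/\tau)$, so harder negatives (higher $f_i$) receive larger weight:

```python
import math

def neg_logsumexp(scores, tau):
    """Negative-sample term of InfoNCE: tau * log sum_j exp(f_j / tau)."""
    return tau * math.log(sum(math.exp(s / tau) for s in scores))

def softmax_weight(scores, i, tau):
    """Closed-form weight on negative i: exp(f_i/tau) / sum_j exp(f_j/tau)."""
    denom = sum(math.exp(s / tau) for s in scores)
    return math.exp(scores[i] / tau) / denom

def finite_diff(scores, i, tau, eps=1e-6):
    """Numerical dL/df_i via forward difference, to verify the closed form."""
    bumped = list(scores)
    bumped[i] += eps
    return (neg_logsumexp(bumped, tau) - neg_logsumexp(scores, tau)) / eps
```

The finite-difference gradient matches the softmax expression, and the weight on the hardest negative exceeds the weight on the easiest one.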
**Q6: Can ADNCE be considered as a theoretical modification of DRO?**
A6: We are thankful for your insightful query. ADNCE is a heuristic approach designed to directly address the limitations of InfoNCE. While ADNCE may lack a formal theoretical foundation, this straightforward strategy indeed demonstrates superior performance across a variety of tasks. It would indeed be fascinating to explore the theoretical foundation of ADNCE or to devise a new method that incorporates a theoretical modification of DRO. We aspire to undertake such an endeavor in our future work. | Summary: In this work the authors demonstrate a connection between contrastive learning, in particular InfoNCE, and distributionally robust optimization. In contrastive learning algorithms it is typical during training that samples which are similar are treated as being different, i.e. a negative pair, since contrastive learning typically does not include class labels. Intuitively this would hinder the ability of contrastive learning algorithms to learn useful representations. Interestingly, this doesn't seem to happen in practice: choosing the temperature parameter correctly can cause standard CL algorithms to perform as well as, or better than, robust variants in the presence of false negative pairs.
In this work it is theoretically proven that InfoNCE is naturally distributionally robust, which explains the phenomenon from the last paragraph. This is done by proposing a distributionally robust version of InfoNCE and demonstrating that the loss is equivalent to InfoNCE with scaling and shifting. This work goes on to analyze the effect and selection of the temperature parameter, giving further insight into the behavior of InfoNCE. It is demonstrated that the distributionally robust InfoNCE is an estimator of mutual information. This work then proposes ADNCE as a way to overcome shortcomings of InfoNCE, in particular its sensitivity to outliers, i.e. the hardest negative pairs. This is done by weighting samples in the loss so that the most outlying negative pairs have lower weight in the resulting risk. It is experimentally demonstrated that this method brings improvements.
Strengths: - The theoretical result of this work is useful, reasonably nontrivial, and regards a topic that is of very large interest to the ML community. Very good!
- ADNCE is an easily implemented algorithm for a problem of significant interest, and it brings consistent improvements. Also very good!
Weaknesses: I spent a fair amount of time with this paper and I think its contributions are of high significance and importance, however I really feel that the exposition needs some improvement. I spent some time with the proofs and, while I was always able to decipher what was happening and found no errors, they are written in a way that I found very hard to parse. While it's far from the worst writing I have seen, there is quite a lot of the main text that was also difficult to precisely understand. I don't think the material here is so mathematically dense or advanced that it couldn't be written in a much clearer way. If this paper were very well written I would give this an 8, I think. Some example issues are listed below:
- Line 63: What is "the ultimate distribution"?
- Line 101 & Proof of Theorem 3.2: it would be much clearer if $f_\theta$ was instead $f_\theta(x,y)$. This wouldn't take more space, and it would be much easier to parse what's going on in the Proof of Theorem 3.2 if this were the case.
- Theorem 3.2: It should be made clear that $\alpha^*$ is a function of $\eta$, this seems quite important. One could simply write $\alpha^*(\eta)$.
- appx. line 10: I think $P$ in this line is supposed to be $Q$
- appx. (17): I think its worth writing the original constrained optimization before applying strong duality.
- appx. (17): $\max_L$ is again a bit vague. I'm assuming what's actually happening is that we are optimizing over $Q$ in the $L$ definition.
- appx. line 21: "fine definition" is nonstandard, something like "always exists" or "is well-defined"
- appx. line 29: this should be an $\arg \min$. Text like "$\alpha^*$ represents the optimal value of ..." is needlessly confusing, just say "$\alpha^* = \arg \min \cdots$."
- Line 148, 165: Maybe I'm missing something, but I don't quite get what "optimal" means in these lines.
- Theorem 3.3: What is $Q^{ideal}$? In what sense is it "ideal"?
If this work used clearer and more precise notation I think it would be great; right now it looks like a rough draft. As an example improvement, I would rewrite in appx (17) the first term in the last line
$\mathbb{E}_{P_0} [f_\theta]$
as
$\mathbb{E}_{(X,Y)\sim P_0}[ f_\theta(X,Y)]$
Some other errors:
- "and graph" should be "and graphs"
- Give the definitions of acronyms, eg DCL HCL RPL
- There are many missing periods for sentences that end with equations, eg at the end of (1), (16),(17)
- Line 70: "downstrea" [sic]
- Page 3 footnote: sloppy ]
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I don't have any questions really, the paper just needs more polish
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer CyLz:
Dear Reviewer,
We sincerely appreciate your recognition of our work and deeply regret any confusion caused by typographical errors or unclear notations within our paper. Your detailed comments are highly valued. In the revised version, we commit to meticulously refining the paper in accordance with your feedback, rectifying any typographical errors or unsuitable notations, and providing precise explanations for each definition. Moreover, we will incorporate a comprehensive notation table to augment the clarity of our work. In the following, we provide responses to the questions you have raised:
**Q1: What is "the ultimate distribution" in line 63?**
A1: Here the ultimate distribution refers to the worst-case distribution $Q^*$ that the model is optimized on, formally defined as $Q^*=\operatorname*{argmax}\_{Q} \mathbb{E}\_{Q}[f\_\theta] \qquad s.t. D\_{\phi}({Q}||{Q}\_0) \leq \eta $. From Appendix A.6, $Q^*$ in InfoNCE can be written as $Q^* = Q\_0 \frac{\exp[f\_\theta / \tau]}{E\_{Q\_0} \exp[{f\_\theta / \tau}]}$. Our ADNCE reshapes $Q^*$ by incorporating Gaussian-like weights. In the forthcoming revision, we will replace the term "ultimate distribution" with "worst-case distribution" to enhance clarity.
**Q2: What "optimal" means in lines 148, 165?**
A2: In this context, the optimal $\alpha^*$ signifies the optimal value of $\alpha$ that minimizes the initial formula in equation (4). Specifically, $\alpha^*=\operatorname*{argmin}\_{\alpha\geq 0} \operatorname*{min}\_{\eta_1} \operatorname*{max}\_{Q\in \mathbb{Q}} \{ \mathbb{E}\_{Q}[f_\theta] - \alpha [D\_{KL}(Q||Q\_0) -\eta] + \eta_1 (\mathbb{E}\_{Q_0} [\frac{Q}{Q\_0}] - 1) \}$. By comparing equation (4) with InfoNCE, we deduce that the optimal $\alpha^*$ functions as the temperature $\tau$. Additionally, we employ the optimal $\alpha^*$ for analyses in Corollary 3.4 to gain insight into the nature of the temperature. We appreciate your insightful query and will give clearer explanations of the optimal $\alpha^*$ in the next version.
**Q3: What is $Q^{ideal}$ in theorem 3.3?**
A3: The term $Q^{ideal}$ denotes the ideal negative-sampling distribution, which selects only instances with distinctly dissimilar semantics. It should be noted that in practical contrastive learning, the ideal $Q^{ideal}$ is not available. As a substitute, the uniform distribution $Q_0$ is utilized, which may sample instances with similar semantics, thereby introducing the so-called sampling bias. Theorem 3.3 demonstrates that InfoNCE is resilient to this sampling bias, establishing theoretical bounds relating the model trained directly on the ideal $Q^{ideal}$ to the model trained on $Q\_0$ using InfoNCE.
---
Rebuttal Comment 1.1:
Comment: The authors have presented a reasonable response to the questions and issues I mentioned. It's hard to know if the updated paper will overall be more clear without actually seeing it in its totality, but I can bump a point. | Rebuttal 1:
Rebuttal: # Overall Rebuttal:
We thank all reviewers for taking the time to review our paper and for providing valuable and insightful feedback. We are delighted to see that our work has been recognized for its contributions and inspiration to the contrastive learning community, as mentioned by Reviewers $\color{red}{\text{CyLz}}$, $\color{green}{\text{sD47}}$, $\color{blue}{\text{C48x}}$, and $\color{orange}{\text{rLbF}}$.
We would like to express our gratitude to the reviewers for affirming the reliability and nontrivial nature of the theoretical framework we proposed from Distributionally Robust Optimization (DRO) perspective, as highlighted by Reviewers $\color{red}{\text{CyLz}}$, $\color{green}{\text{sD47}}$, and $\color{orange}{\text{rLbF}}$.
Regarding the introduction of the ADNCE loss, we are pleased to see that the reviewers found it both simple and significant, leading to effective results. We appreciate the positive feedback from Reviewers $\color{red}{\text{CyLz}}$, $\color{green}{\text{sD47}}$, and $\color{orange}{\text{rLbF}}$ in this regard.
We carefully considered the comments and suggestions provided by the reviewers, and we have addressed them point by point in our rebuttal. We believe and hope that our response adequately addresses the concerns raised by the reviewers.
Once again, we sincerely thank the reviewers for their valuable feedback, which has helped us improve the quality of our work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Accept (oral) | Summary: The paper introduces an innovative concept of problem-solving that utilizes a tree-like structure of thoughts, constructed and evaluated by the LLM, specifically GPT-4. To illustrate the efficacy of this technique, the authors have incorporated three distinct tasks: the Game of 24, creative writing, and crossword puzzles. These tasks have been chosen as they require different capabilities to be solved. Several iterations of the tree-like structure of thoughts were examined, including those combined with Breadth-First Search (BFS) or Depth-First Search (DFS), along with other variations with different hyperparameters. A comprehensive comparison has been drawn with various other LLM-based baselines.
Strengths: The value of examining the tree-like thinking process performed by LLM is evident to the scientific community. Primarily, the Tree of Thoughts (ToT) could be applicable to other tasks when adequately adapted and it provides an intriguing exploration of GPT-4's capabilities.
The paper intriguingly selects tasks requiring diverse skill sets:
- The game of 24 demands fundamental mathematical capabilities and the ability to assess whether success is achievable from a partial solution.
- Creative writing needs the generation of coherent and sound text under strict constraints.
- Crossword puzzles necessitate linguistic knowledge and the capacity to search across a vast state space (large word sets that meet given constraints).
The authors have made a commendable effort to objectively assess the results, with blind tests of text coherency serving as a prime example. The elucidation of the method is lucid, with Figure 1 being particularly illuminating. Additionally, most experiments are presented in a comprehensible manner. A significant strength lies in ToT's performance, which significantly surpasses that of the baseline.
I perceive this paper to be a successful proof-of-concept for ToT. Notably, its significance chiefly stems from the ongoing massive efforts aimed at leveraging LLM's capabilities. A prominent challenge in applying LLMs for deliberate problem solving
Weaknesses: The biggest weakness of ToT is that the paper lacks the evaluation of some important properties of ToT. Here I put a list:
1. There is no data about the price of ToT compared with baselines (there is a short mention in the Limitations about the price/success rate trade-off). In the best case, I would like to have a graph where there is a success rate on the y-axis and a price on the x-axis. If you think that such a graphical analysis would be too detailed please put some data in a table. The price of GPT-4 may be different in the future, so alternatively you can add information about the number of tokens used by ToT per solution compared with baselines. Probably the best would be to have both information: price and number of tokens.
2. There is no data about the time execution of ToT compared with baselines.
Comment on 1. and 2. : Knowing the average price and time is very important for readers who consider building their own methods on top of ToT or consider applying ToT to their task.
3. Some additional comparisons are missing. We know that a single execution of ToT is significantly more powerful than a single execution of a baseline (for example IO). However, is it possible that running IO or CoT many times (until it finds a solution) is actually cheaper and faster than ToT? ToT will still have scientific value even in the case where the answer to the previous question is positive, but the scientific community will benefit from knowing it. Currently, it is not really known what the maximum capabilities of a properly used LLM are and how to maximize its performance. In a search for the most effective methods, we need to know as many properties of each approach as possible. Knowing more about ToT would add great value to the paper. Here is a list of my suggestions for additional comparisons:
> How many times on average should I run a baseline to find a solution compared with ToT?
> Can baseline be cheaper or faster than ToT even though it is weaker?
If it is impossible to gather detailed data for this comparison, please provide some estimates.
4. While I see value in the creative writing task, I am not very convinced that a task with just one ToT step is meaningful when evaluating tree search algorithms. More precisely: while formally this task falls under the ToT framework, it's hard to think about it in terms of tree search.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Necessary things (either to address here or in the paper):
- Figure 3b: The title of the figure is “Samples failed at each step”, but the last two bars show successes, not failures (the one named “Correct”). It is misleading; I first thought that ToT failed mainly at the last step. Another problem with this comparison is that ToT with b=5 vs CoT seems unfair to me (ToT has many samples at each state while CoT does not, so it is not surprising that CoT stands no chance). A better comparison would be, for example, ToT vs CoT-SC or ToT vs CoT-best of 100.
- In Table 2 and Figure 3 there is no information about the error estimation. This should at least be stated somewhere in the text.
- The total cost of the experiments done during the work on this paper should be stated in the paper or in the supplementary materials.
- line 186. I would change "left" to "remaining" (numbers). For a while, I was wondering if left refers to "left to do" or "left side".
- line 134: dot missing at the end of the sentence
- Algorithm 2 and line 279: how exactly does the pruning work?
- lines 222 to 225: Here you describe an iterative-refine method as a baseline. Would such a method be reasonable for baselines in Game of 24?
Suggestions, ideas, and comments:
- All the tasks presented here could be solved without ToT (and two of them even without an LLM). Maybe you have some intuition for what kind of problems ToT would be necessary for? I know that this may be hard or even impossible to answer.
- It would be very interesting to see a comparison of ToT based on GPT3.5 vs IO GPT4 or CoT GPT4. Can weaker LLM with ToT be stronger than Sota LLM? I think such an experiment would add great value to the paper.
- The comment at the bottom of page 8 is very interesting
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations section is sound.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your detailed and constructive feedback!
### 1. Cost and efficiency
This is a great point. Please see **General Response (3)**.
### 2. Running IO/CoT baselines many times
We showed Game of 24 IO/CoT best-of-k results in Table 2 and Figure 3, where CoT best-of-100 has a game success rate of 49%, which is still much worse than ToT's 74%. We also showed iterative-refinement approaches for Game of 24 (Table 2) and Creative Writing (Figure 5). These findings suggest ToT might be a better way to spend more resources in the hope of getting better results for these tasks, compared with parallel sampling or iterative refinement.
### 3. Creative Writing Steps
We note that the ToT for creative writing has two steps: plans are generated and evaluated, then passages are generated and evaluated based on the best plan.
### 4. What tasks need ToT
This is a great question! Please see **General Response (1)**, where we show how easy it is to adapt ToT to more tasks. We think GPT-4+ToT is more suitable for hard tasks challenging GPT-4+CoT, while weaker LLMs+ToT can be used for simpler tasks.
### 5. ToT with weaker LLMs
Please see **General Response (2)**.
### 6. Other smaller questions
- Figure 3b: thanks for the suggestion, we will remove the last two "success" bars. The figure is not intended to contrast the performance of ToT with CoT, but rather to support the point that CoT mostly fails at the first step of Game of 24 due to the inherent problems of autoregressive decoding; thus exploration around initial decisions is crucial.
- Table 2 and Figure 3: we did not include error estimation as the performance gaps are significant and running GPT-4 experiments is expensive. As detailed in **General Response (3)**, running ToT experiments on Game of 24 and Creative Writing cost around 100 dollars.
- We will fix line 186 and 134, thanks for your careful reading!
- DFS pruning: as stated in line 265-266, a state is pruned if LLM deems any remaining clue as "impossible" to fill in (e.g. To heap: tm_s_).
- Iterative refinement for Game of 24: We tried it, and as shown in Table 2's IO + Refine (k=10, which is already very expensive due to the cumulative context), it does help a bit, but the performance of 27% is still poor compared to ToT's 74%.
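A generic sketch of DFS with this kind of pruning (hedged: the `propose`/`evaluate` callables are hypothetical stand-ins for the LLM proposal and value prompts, and the toy string task below is purely illustrative, not the paper's code):

```python
def dfs(state, propose, evaluate, is_goal, limit=100):
    """Depth-first search with pruning.

    propose(state)  -> list of candidate next states (stand-in for the LLM
                       proposing thoughts)
    evaluate(state) -> "sure" / "maybe" / "impossible" (stand-in for the LLM
                       value prompt; "impossible" states are pruned)
    Returns the first goal state found, or None.
    """
    if is_goal(state):
        return state
    if limit <= 0:
        return None
    for nxt in propose(state):
        if evaluate(nxt) == "impossible":
            continue  # prune: do not expand hopeless states
        found = dfs(nxt, propose, evaluate, is_goal, limit - 1)
        if found is not None:
            return found
    return None
```

For instance, with a proposer that appends one letter at a time and an evaluator that marks any prefix not matching a target word as "impossible", the search recovers the target while skipping all dead-end branches.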
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for providing the additional data. The most interesting and promising is that GPT 3.5 with ToT can outperform GPT4 IO. | Summary: This paper proposes Tree of Thoughts to promote deliberate problem solving with LLMs. By using a tree-based structure and a four-step process towards problem solving, tree of thoughts successfully address many of the challenges with left-to-right decoding such as looking ahead, backtracking, considering multiple reasoning paths at the same time, and more. Extensive experiments on three reasoning tasks demonstrate the superiority of Tree of Thoughts over ICL, CoT, self consistency, and more.
Strengths: + the Tree of Thoughts approach is well motivated by the limitations of autoregressive decoding
+ the four-step formulation is novel and clearly described
+ experiments on three reasoning tasks are extensive and convincing
Weaknesses: Overall I like this work and I only have a few comments.
- I wonder if it might be possible to discuss the efforts required to adapt Tree of Thoughts to other tasks and reasoning formats. To me, it seems like other than the search algorithm (BFS/DFS), the other three parts would require some hand-crafted engineering from scratch for a different task.
- I wonder if least to most [1] or decomposed prompting [2] might be possible baselines for the three tasks: they also discuss some sort of iterative problem solving potential.
- In the "Game of 24" task, does the base LLM make arithmetic mistakes? If that's the case, would using tool learning strategies [3] further improve Tree of Thoughts in numeric reasoning tasks?
- The supplementary material does not seem to contain an appendix, while some details might be helpful. For example, what is the prompt for GPT-4 zero-shot evaluation in line 213? What are the details of human evaluation for "Creative Writing"? (annotator head count, compensation, etc.) Also, it is increasingly common to present all the prompt texts in the appendix to facilitate reproducibility. I wonder if the authors might consider having an appendix to systematically document all details about the approach and experiments.
[1] Zhou, Denny, et al. "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models." The Eleventh International Conference on Learning Representations. 2022.
[2] Khot, Tushar, et al. "Decomposed Prompting: A Modular Approach for Solving Complex Tasks." The Eleventh International Conference on Learning Representations. 2022.
[3] Schick, Timo, et al. "Toolformer: Language models can teach themselves to use tools." arXiv preprint arXiv:2302.04761 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: please see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for finding our work "well-motivated", "novel", "clearly described", and "convincing".
### 1. Adapt ToT to other tasks.
This is a great question. Please see **General Response (1)**, where we show a simple scheme that adapts ToT to StrategyQA and GSM8K with near-zero task-specific hand crafting.
### 2. Least-to-most or decomposed prompting
We think they might not be effective baselines for our studied tasks, where initial decisions (e.g. the first step of Game of 24, or the first filled word in crosswords) are critical and require exploration. Methods like least-to-most or decomposed prompting might help with compositional generalization via task decomposition, but do not have a way to explore and maintain different decomposition plans. So once the first step is generated wrong, they might fail similarly to CoT.
### 3. Arithmetic mistakes and tool use
In Game of 24, numbers are usually within 50, and GPT-4 rarely makes arithmetic mistakes (weaker models like GPT-3.5 make more mistakes).
As hinted in the footnote on page 8, we agree it is a great idea and an important future direction to enhance ToT with external tool use! We probably need some better and harder tasks that require exploration of both reasoning and acting.
### 4. Appendix
Thank you for the reminder! We will include additional experimental details and all prompts used in the appendix.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have increased my rating and wish the authors good luck. | Summary: The paper presents a new framework for language model inference called Tree of Thoughts, which aims to improve the ability of language models to solve complex, multi-step problem-solving tasks.
The framework involves generating a tree of possible plans for solving a given task, with each node in the tree representing a possible step in the plan. The language model then evaluates each node based on its own self-assessment, and the tree is expanded by sampling external paragraphs to generate new nodes. The framework also includes a voting step to determine the best plan to follow.
The paper argues that this approach is more effective than previous methods for prompting language models, such as Chain of Thought, and can be seen as a modern rendition of classical search methods for problem-solving. The contributions of the paper include a detailed description of the Tree of Thoughts framework, an evaluation of its effectiveness on a range of problem-solving tasks, and a discussion of its potential limitations and future directions for research.
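The propose-evaluate-select loop summarized above can be sketched as a breadth-first search with beam width b (hedged: `propose` and `score` are hypothetical stand-ins for the LLM thought-sampling and self-evaluation calls; this is an illustrative sketch, not the paper's implementation):

```python
def tot_bfs(root, propose, score, steps, beam=5):
    """Tree-of-Thoughts style BFS.

    propose(state) -> list of candidate next thoughts (stand-in for LLM sampling)
    score(state)   -> float value estimate (stand-in for LLM self-evaluation)
    Keeps the `beam` best states at each depth and returns the best final state.
    """
    frontier = [root]
    for _ in range(steps):
        candidates = [nxt for s in frontier for nxt in propose(s)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]  # keep the b most promising states
    return max(frontier, key=score)
```

On a toy task (grow a digit string to maximize its digit sum), the beam search greedily keeps the highest-scoring partial strings at each depth.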
Strengths: 1. The idea of upgrading chain-of-thoughts into tree-of-thoughts seems to be an intuitive extension and necessary step. There are many benefits of tree reasoning, including the ability to look ahead and backtrack to search for better traces.
2. The formulation of heuristics leverages the recent self-evaluation ability of GPT-4 to reuse the LLM to evaluate the quality of states via prompting. This relieves the necessity of training accurate state values and enables approximate estimation.
3. The empirical results on three tasks demonstrate the effectiveness of ToT compared with GPT-4 based baselines. The improvement gain is quite large on two tasks, considering GPT-4 based CoT methods don't work well on them.
Weaknesses: 1. Unsurprisingly, the idea of extending linear reasoning to tree reasoning isn't a new thing. Specifically, [1] leverages beam search to guide the chain-of-thought to decode a better reasoning path, which is almost the same as the BFS in this paper. [1] also uses self-evaluation to provide heuristics. [2] also formulates the reasoning as a tree reasoning problem. More importantly, the application of BFS and DFS seems to be ad-hoc, and there could be more principled methods to guide the search. For example, [3, 4] applies MCTS to guide the search, which might have better planning abilities.
2. While ToT improves CoT naturally, it doesn't come without cost. One major concern is the efficiency issue of querying the expensive GPT-4 multiple times. The paper should give the audience a clearer idea of how costly ToT is compared with CoT, probably with an efficiency comparison between them. More interestingly, the paper should propose, or at least discuss, potential means to reduce the inference cost, such as using various smaller models for sampling or state evaluation, etc.
3. The application scope is largely limited. While the authors say they selected the three tasks because they are hard, the selected tasks are arguably narrow, with two being text games. One possible reason is that the selected tasks make the thought and state formulation easier and demonstrate improvement significantly. Nonetheless, it would be necessary to extend the scope to tasks more similar to standard CoT tasks to demonstrate the broader applicability of the proposed method.
[1]. Xie, Yuxi, et al. "Decomposition enhances reasoning via self-evaluation guided decoding." arXiv preprint arXiv:2305.00633 (2023).
[2]. Jung, Jaehun, et al. "Maieutic prompting: Logically consistent reasoning with recursive explanations." arXiv preprint arXiv:2205.11822 (2022).
[3]. Zhu, Xinyu, et al. "Solving math word problem via cooperative reasoning induced language models." arXiv preprint arXiv:2210.16257 (2022).
[4]. Hao, Shibo, et al. "Reasoning with language model is planning with world model." arXiv preprint arXiv:2305.14992 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I raised multiple questions in the weaknesses section, and I hope the authors can further clarify them. Specifically, some can be reformulated as follows:
1. How hard is it to apply the proposed method to a more general NLP reasoning dataset, like GSM8K or StrategyQA, or other well-known datasets? If possible, I hope the authors could demonstrate one case using the proposed method to solve a more general task.
2. How necessary the GPT-4 is in the proposed framework? It seems GPT-4 is very powerful for self-evaluation, and how critical is this component for the success of the tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your thoughtful comments, all of which are very helpful for improving our work!
### 1. Related work and BFS/DFS vs. MCTS
Thank you for pointing out these recent or concurrent papers related to ToT. We will discuss them in our related work section.
We used BFS and DFS as they are the simplest tree search algorithms that turn out general and effective enough for the studied tasks. Due to the modularity of the ToT framework, application of more advanced algorithms such as MCTS or A* for harder tasks is a clear and promising future direction.
### 2. Cost and potential means to more efficiency
These are great points. Please see **General Response (3)**.
### 3. Extend the scope to more CoT tasks
Please see **General Response (1)**, where as per your suggestion, we show a very simple scheme to extend ToT to StrategyQA and GSM8K.
### 4. Importance of GPT-4
Please see **General Response (2)**.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the response and the new experiment results. My score remains the same for the following reasons:
1. Though one pair of experiment comparisons shows that gpt-3.5+ToT outperforms gpt-4+IO, in most cases gpt-3.5+ToT doesn't work, especially when used as the generation model. Therefore, whether the proposed work will generalize to other models, especially open-sourced models, is questionable or unanswered for me. If the proposed method can only work when using GPT-4, its cost will be one of the major obstacles for people to use it, as also demonstrated by the new cost-efficiency result.
2. This paper is a well-polished one with neat ideas implemented very well, and I have no doubt about its influence on the community in the short near future. However, its core novelty of applying (simple) search algorithms to LLM reasoning isn't significant enough.
---
Reply to Comment 1.1.1:
Title: Thanks for reply
Comment: Thanks for your reply!
1. In both tasks, we've shown GPT-3.5+ToT can outperform GPT-4+CoT, and the proposed method is not specific to GPT-4. We believe it is very possible to better prompt LLMs (e.g. GPT-3.5) or finetune LLMs (e.g. Llama) to achieve better ToT performance in the near future (e.g. while a one-shot proposal prompt in Game of 24 was good enough for GPT-4, changing it to three-shot helped a lot for GPT-3.5; more prompt tuning can be done to yield low-hanging fruit).
2. Thanks for endorsing ToT's "influence on the community in the short near future"! Like you said, current (open-source) LLMs might be limited in their capabilities in ToT-style generations and evaluations; also, we don't have enough hard tasks that challenge GPT-4+CoT's deliberate reasoning. Improving new models and proposing new tasks are long-term community efforts beyond a single paper, and that's exactly why we believe ToT's influence could be long-term, by pointing out what generation and evaluation capabilities LLMs should improve on, what kind of tasks should be devised to challenge LLMs, and in general, linking frontier LLM research to classic and everlasting insights at the root of AI and CogSci. | Summary: This paper introduces a new method for prompting large language models (LLMs) for multi-step reasoning tasks. Existing prompting methods are confined to the autoregressive generation scheme, making it difficult for LLMs to finish tasks that require exploration and planning. To alleviate this problem, the authors propose Tree of Thought (ToT), which combines the chain-of-thought (CoT) method with tree-based search. By prompting LLMs to solve the intermediate step and using the self-evaluations as the heuristics, one can effectively leverage LLMs to finish tasks that require exploration and backtracking. Empirical evaluations on three novel tasks demonstrate the effectiveness of the proposed method.
Strengths: - A novel method to combine LLMs with tree-based search. The proposed method alleviates the shortcoming of directly prompting LLMs to generate answers for some tasks that require explorations and backtracking. Compared to CoT prompting which can only sample one solution path, ToT allows LLMs to explore more potential solutions and find the best one.
- Automatic and independent reasoning process. ToT fully depends on the LLMs to generate plans, finish intermediate steps, and generate self-evaluation for the current state as heuristics. Without the dependence on external tools or models, ToT enables LLMs to automatically finish complex tasks.
Weaknesses: More application scenarios. The authors mainly evaluate ToT on the three novel tasks that require exploration and backtracking. It would be better to demonstrate whether ToT can help improve LLMs on commonly-used reasoning tasks.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see the weakness above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discussed the limitations and potential negative social impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for endorsing our work!
### 1. More application scenarios
This is a great point. Please check **General Response (1)**, where we show a very simple scheme to apply ToT to common NLP tasks (StrategyQA, GSM8K). However, such tasks might not need GPT-4 + ToT as GPT-4 + CoT suffices --- weaker LLMs + ToT could potentially outperform weaker LLMs + CoT and could be studied on such tasks.
---
Rebuttal Comment 1.1:
Comment: I have carefully read the authors' responses. I would like to keep my rating. Just a quick comment on a potential further ablation study. One way to adopt LLMs to solve the Game of 24 is to ask LLMs to write a Python program -- in that case, we leave the exploration and search tasks to the program rather than the LLM itself. The Python program will finish the search for the final solution through BFS or DFS. That could be a very strong baseline for ToT. This would make the comparison with CoT more comprehensive and fair -- CoT is weaker at search and exploration, but it can write programs to finish the search process. | Rebuttal 1:
Rebuttal: We appreciate all reviewers' great feedback, which will significantly strengthen our draft!
The motivation of ToT is simple: **to explore and extend the capability frontier of autoregressive LLMs**. More specifically, given the SoTA LLM (GPT-4) can already solve many existing NLP tasks, what new tasks can raise new challenges? How can we augment LLMs to tackle these challenges? To this end, we contribute three new tasks for LLMs that challenge even GPT-4, and ToT as a framework to augment LLM's deliberate reasoning. **Our paper is thus focused on a setup with SoTA LLM (GPT-4) and hard tasks for it.**
Here, we report new experiments with weaker LLM or easier tasks, and discuss cost and efficiency concerns.
### 1. New Experiments on Other (Easier) Tasks (ysyd, fAU5, 3c3c)
While more common NLP tasks might be too easy for GPT-4 and do not require ToT (which is why we considered harder new tasks), we believe **applying ToT to new tasks could be straightforward**.
For example, we implemented a simple and generic zero-shot ToT-BFS similar to creative writing (sample 5 problem solving strategies then vote for the best one; then sample 5 solutions based on the best strategy then vote for the best one) for GSM8K and StrategyQA with few extra lines of code:
```python
gsm8k_format = '"the answer is n" where n is a number'
strategyqa_format = 'either "the answer is yes" or "the answer is no"'
standard_prompt = 'Answer the following question with {format}: {input}'
cot_prompt = '''
Answer the following question: {input}
Make a strategy then write. Your output should be of the following format:
Strategy:
Your strategy about how to answer the question.
Answer:
Your answer to the question. It should end with {format}.
'''
vote_prompt = '''Given an instruction and several choices, decide which choice is most promising. Analyze each choice in detail, then conclude in the last line "The best choice is {s}", where s is the integer id of the choice.
'''
```
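Concretely, the sample-then-vote round driven by such prompts might look like the following sketch, where `generate` is a hypothetical stand-in for any text-in/text-out LLM call (it is not an API from the paper, and the parsing is our own illustration):

```python
import re
from collections import Counter

def sample_then_vote(generate, task_prompt, vote_prompt, n=5):
    """One zero-shot ToT-BFS round: sample n candidates, then vote for the best.

    `generate` is a placeholder for any text-in/text-out LLM call.
    """
    candidates = [generate(task_prompt) for _ in range(n)]
    listing = "\n".join(f"Choice {i + 1}: {c}" for i, c in enumerate(candidates))
    votes = [generate(vote_prompt + "\n" + listing) for _ in range(n)]
    # Parse "The best choice is s" from each vote and take the majority.
    counts = Counter()
    for v in votes:
        m = re.search(r"best choice is (\d+)", v, re.IGNORECASE)
        if m:
            counts[int(m.group(1))] += 1
    best = counts.most_common(1)[0][0]
    return candidates[best - 1]
```

The full zero-shot scheme described above would chain two such rounds: one over problem-solving strategies, then one over solutions conditioned on the winning strategy.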
We evaluated on a subset of 100 random GSM8K test and StrategyQA dev questions. As shown below and as expected, ToT improves over CoT on both tasks (but only slightly, given GPT-4 + CoT is already very good on such tasks, and StrategyQA's bottleneck is external knowledge, not reasoning). Considering computational costs, it is more suitable to try smaller LLMs + ToT for traditional NLP tasks, or GPT-4 + ToT for hard tasks that challenge GPT-4 + CoT's reasoning.
| | GSM8K | StrategyQA |
| -------- | ------- |------- |
| IO | 51 | 73 |
| CoT | 86 | 82 |
| ToT | 90 | 83 |
### 2. New Experiments on Other (Weaker) LLM (fAU5, sEEP)
To understand how ToT works with other LLMs, we also ran GPT-3.5-turbo for creative writing, which was reported in the supplementary material. **We find gpt-3.5+ToT outperforms gpt-4+IO and performs similarly to gpt-4+CoT, which suggests ToT could also work well on weaker LLMs.**
|Creative Writing| gpt-4 (in paper) | gpt-3.5 |
|-----|-----|----|
| IO | 6.19| 4.47|
| CoT | 6.93| 5.16|
| ToT | 7.56| 6.62|
We also ran GPT-3.5 for Game of 24 (we changed 1-shot proposal prompt to 3-shot to make it work). Here, GPT-3.5+ToT's 19% is far worse than GPT-4+ToT's 74%.
| Game of 24 | gpt-4 (in paper) | gpt-3.5 |
| ---- | ---- |---- |
| IO | 7.3 | 6 |
| CoT | 4.0 | 3 |
| ToT | 74 | 19 |
To further understand the importance of generation vs. evaluation, we ran gpt-4 gen + gpt-3.5 eval (64%) and gpt-3.5 gen + gpt-4 eval (31%). This suggests the game's bottleneck is thought generation, and different gen/eval LLMs might attain decent results while reducing costs.
| gen\eval | gpt-4 | gpt-3.5 |
| ---- | ---- |---- |
| gpt-4 | 74 | 64 |
| gpt-3.5 | 31 | 19 |
### 3. Cost and efficiency (fAU5, sEEP)
Running ToT requires significantly more computations than IO or CoT prompting. For example, in game of 24, solving a problem with ToT requires 5.5k completion tokens, close to 100 CoT trials (6.7k tokens). But the performance of ToT is better than best of 100 CoT trials.
| Game of 24 | completion tokens | prompt tokens | cost per case | success rate |
| ---- | ---- |---- | ---- | ---- |
| IO (best of 100) | 1.8k | 1.0k | $0.13 | 33% |
| CoT (best of 100) | 6.7k | 2.2k | $0.47 | 49% |
| ToT | 5.5k | 1.4k | $0.74 | 74% |
On Creative Writing, we found ToT takes around 5x the completion tokens and monetary cost, which is intuitive as $b=5$ and most tokens are generated passages.
| Creative Writing | completion tokens | prompt tokens | cost per case |
| ---- | ---- |---- | ---- |
| IO | 0.9k | 0.4k | $0.06 |
| CoT | 0.9k | 0.4k | $0.07 |
| ToT | 4k | 2.9k | $0.32 |
So completing Game of 24 and Creative Writing's main ToT experiments cost around $0.74 * 100 + 0.32 * 100 = 106$ dollars. Unfortunately we did not record the usage for crosswords' DFS experiments, but it should be within 100 dollars. In general, cost and efficiency of ToT highly depend on the prompts and search algorithms used, and could require 5-100 times more generated tokens than CoT.
Some actionable insights:
- We recommend using ToT on complex tasks requiring deliberate reasoning, on which CoT struggles.
- Flexibility of ToT allows some performance-cost tradeoff, e.g. change beam size or vote number in BFS, few-shot vs. zero-shot prompting, GPT-3.5 vs. GPT-4, etc. One could configure the setup based on some resource constraints or performance goal.
- There is much space for improving efficiency --- e.g. BFS could early stop when solution is found, or trim down beam size to $b<5$ when some thoughts are "impossible".
- We believe that more computation is indeed required for the model to achieve stronger intelligence, and this should not become a blocking issue since, in the long run, (open-source) LLMs will become much cheaper and more efficient. How to better train/finetune LLMs for thought generation and/or evaluation is also a great direction. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Characteristic Circuits | Accept (oral) | Summary: I am not qualified to review this paper.
Strengths: I am not qualified to review this paper.
Weaknesses: I am not qualified to review this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I am not qualified to review this paper.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I am not qualified to review this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
Code Of Conduct: Yes | null | Summary: This work proposes a new tractable probabilistic model called characteristic circuits, or CCs. A CC is defined similarly to probabilistic circuits (PCs) but with leaf nodes defined as characteristic functions instead of distributions as in PCs. The authors further show the computation of marginals and the learning algorithms for CCs. Empirical evaluations on both synthetic datasets and UCI datasets are presented.
Strengths: - This paper is generally well-written, with sufficient backgrounds provided to help readers understand the proposed new model.
- The authors show that CC shares the same efficient marginal computations as in PCs.
- CC provides a unified view for mixed continuous and discrete distributions. Also, the \alpha-stable distributions are less explored in the previous literature of tractable probabilistic models to my knowledge and the use of these seems help deliver good performance in the UCI experiments.
Weaknesses: - From what is presented, it seems that the authors simply rewrite the distribution nodes and computations of PCs into their characteristic function duals. It is unclear what are the key differences between CC and PCs and when would one prefer one over the other. I don't think the unified view of the discrete and continuous distributions serves as a strong motivation since mixed SPN can also handle the mixed distributions.
- To further illustrate the previous point, one would expect to see investigations on expressiveness, such as are there any distributions that can be tractably represented by CCs but not PCs, or investigations on tractable operations, such as what probabilistic queries would be intractable for PC but tractable for CC, while none of these are discussed in this work.
- A proof for the validity of the marginal computations seems to be missing.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can the authors elaborate on the motivation for CCs, that is, when one would prefer CC over PC?
- Can the authors provide some discussions on the expressiveness and tractable queries of CC?
- For the empirical results on UCI datasets, I wonder if the improvements are from CC itself or from the alpha-stable distributions. What would happen if the continuous leaves in MSPN are defined as alpha-stable distributions? Such an ablation study might make the results more convincing.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback and questions.
> Key difference between CCs and PCs:
- PCs do not naturally lend themselves to a unified view over heterogeneous data domains, while CCs more naturally provide a framework to model high-dimensional mixed data distributions. To highlight this, recall that PCs are formulated over density/mass functions, i.e., the Radon–Nikodym derivative w.r.t. some base measure. In heterogeneous domains, the relationship to the base measure becomes involved, as different dimensions might have densities w.r.t. different base measures. This fact is typically hidden in PCs and results in challenges when learning such models over heterogeneous data domains as, for example, gradient-based parameter learning will now depend on different base measures depending on the dimensions considered. This can result in undesirable behaviour; it is alleviated, for example, in MSPNs by discretizing the continuous domain, hence ensuring the same base measure for all dimensions. However, discretization introduces challenges beyond the scope of this rebuttal. CCs, on the other hand, address this problem in a more principled way by instead modelling the Fourier transform of the probability measure directly. This makes the learning and modelling process independent of the base measure and provides a truly unified view compared to PCs. We hope that this exposition better explains the conceptual difference between CCs and PCs.
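To make the base-measure point concrete, the following small sketch (our own illustration, not code from the paper) shows that the characteristic function of a discrete leaf and of a continuous leaf are both plain complex-valued functions on the real line, so a product node combines them uniformly regardless of the base measure of each leaf:

```python
import numpy as np

def cf_bernoulli(t, p):
    # phi(t) = (1 - p) + p * exp(i t) for X ~ Bernoulli(p) (counting measure)
    return (1 - p) + p * np.exp(1j * t)

def cf_gaussian(t, mu, sigma):
    # phi(t) = exp(i mu t - sigma^2 t^2 / 2) for Y ~ N(mu, sigma^2) (Lebesgue measure)
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

def cf_joint(t_x, t_y):
    # A product node over independent scopes multiplies the leaf CFs;
    # no reference to any base measure is needed.
    return cf_bernoulli(t_x, p=0.3) * cf_gaussian(t_y, mu=0.0, sigma=1.0)
```

Both leaves, and hence the joint, satisfy the defining property $\varphi(0) = 1$, which is exactly what the marginalisation argument below exploits.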
> Expressiveness and tractable queries:
- We agree that investigating the expressive efficiency and tractability of CCs is an interesting avenue. Although not discussed in detail in the paper, the moments of any CC can be computed tractably through differentiation given that the leaf nodes allow tractable differentiation. Even though the current work focuses on a unified view by moving to the spectral domain, we believe this to be a particularly useful property and added further details to the updated manuscript. We leave a theoretical analysis of the expressive efficiency of CCs for future work.
> Proof of marginal computation:
- A proof sketch of marginalization is given in lines 221 to 225 and we will add a more detailed proof in the appendix.
- Proof sketch:
Following lines 216 to 225, we have assumed that the circuit decomposes all dimensions into univariate leaves, RVs $Z = X \cup Y$, $t = t_X \cup t_Y$, and we aim to compute $\varphi_X(t_X)$.
Then for any leaf node, we have
$\varphi_L(t_j) = 1$ if $t_j = 0$ (by the definition of CFs), and $\varphi_L(t_j)$ is evaluated as usual otherwise.
Let $P$ be a product node that splits at least one $Y_j$ from its scope into a single child and let this child be denoted as $N_j$, then
$\varphi_P(t \cup 0) = \varphi_{N_j}(0) \prod_{N \in ch(P) \setminus \{N_j\}} \varphi_{N}(t_{\psi(N)}) = \varphi_{P \setminus \{N_j\}}(t)$,
where $\varphi_{N_j}(0)=1$.
By the assumption that sum nodes are convex combinations (weights sum up to one) and recursive application of the above, one can show that any marginal distribution can be obtained tractably in CCs.
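The recursive argument can be checked numerically on a toy circuit. This sketch (our own illustration, not the paper's code) builds a sum node over two product nodes of univariate Gaussian CF leaves and verifies that evaluating at $t_y = 0$ yields exactly the CF of the marginal of $X$:

```python
import numpy as np

def cf_gauss(t, mu, sigma):
    # Characteristic function of N(mu, sigma^2)
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

def cc(tx, ty):
    # Sum node (weights 0.4 / 0.6) over two product nodes,
    # each a product of univariate Gaussian CF leaves on X and Y.
    comp1 = cf_gauss(tx, -1.0, 0.5) * cf_gauss(ty, 2.0, 1.0)
    comp2 = cf_gauss(tx, 3.0, 1.5) * cf_gauss(ty, -2.0, 0.7)
    return 0.4 * comp1 + 0.6 * comp2

tx = 0.9
# Marginalising Y = evaluating the circuit at t_y = 0 (each Y-leaf collapses to 1).
marginal_via_circuit = cc(tx, 0.0)
# Direct CF of the marginal of X: the mixture of the two X-components.
marginal_direct = 0.4 * cf_gauss(tx, -1.0, 0.5) + 0.6 * cf_gauss(tx, 3.0, 1.5)
```

The two quantities agree to machine precision, and $\mathrm{cc}(0, 0) = 1$ as required of any valid CF.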
> Choice of CC over PC:
- There are various tasks in which CCs are preferable over PCs. In particular, as outlined before, CCs provide a more principled and natural representation in the case of heterogeneous data domains. Moreover, CCs enable the modeling and learning of distributions that do not have an analytical density function. Lastly, CCs provide an efficient representation of moments even in case densities are not available in closed form as CCs circumvent the challenge of integration in this case and instead only require differentiation of the model.
> $\alpha$-stable distribution leaves and MSPNs:
- We agree that studying combinations of MSPNs and CCs is an interesting direction. Hence, we will provide additional results using MSPNs as a construction algorithm for the CC structure. However, we want to stress that MSPNs and CCs are conceptually very different, as MSPNs aim to model heterogeneous domains non-parametrically through discretization, while CCs directly model the characteristic function of the mixed distribution. Therefore, CCs provide a more flexible framework and allow for meaningful parameter learning that is better suited to mixed data domains. Moreover, we note that we empirically observed improvements from fitting CCs (working in the spectral domain) even when the $\alpha$-stable distribution is not employed, see results CC-P in Tab. 2, indicating that CCs are a promising model family even if tractable densities exist.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarification. I'm happy to raise my score. | Summary: The paper introduces characteristic circuits (CCs), a new family of tractable probabilistic models (TPMs) that leverages univariate characteristic functions as leaves of probabilistic circuits (PCs) for modelling a tractable joint of heterogeneous data distributions (i.e. with both continuous and discrete variables).
CCs model the characteristic function of the data distribution in the continuous spectral domain (cf. Equation 1), thus providing a unified framework for discrete and continuous random variables.
As a consequence, one of the main advantages of CCs is that they can model distributions that do not have closed-form probability density functions, such as $\alpha$-stable distributions.
Importantly, authors also show that CCs allow exact and efficient computation of joint and marginal probabilistic queries.
CCs are evaluated on two synthetic datasets and 12 heterogeneous real-world tabular datasets.
Strengths: - The research is definitely original as it proposes a new class of TPMs with many (novel) benefits
- The model naturally lends itself to modelling heterogeneous data
- The model allows to use distributions that do not have closed-form expressions, such as $\alpha$-stable distributions, something that is not possible in current PCs
- Despite having input units with no closed-form expressions, CCs can still deliver exact marginalisation
Overall, a solid contribution.
Weaknesses: - From the experiments, it looks like an inappropriate structure can limit the modelling power of CCs. This can prevent using CCs when a good structure is not available.
- It's unclear how reliable/precise numerical integration can be (lines 128-129-130)
- Sampling is an important inference routine of PCs, yet it is not discussed at all, and it's unclear if CCs can provide it
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is sampling possible from CCs? If yes, can we know how CC samples compare with the ones of standard PCs? If not, what are the challenges?
- Can you elaborate a bit more on what precisely you mean by "unified view for discrete and continuous random variables" (line 57)? While, to some extent, I understand what the authors mean, one may think "but even standard PCs can handle heterogeneous data". I think being more precise here can improve this major selling point of the paper.
- Can you elaborate a bit more on lines 230-231-232? Why MLE is not tractable? How does Eq. 14 relate to MLE? (Also lines 248-249-250 are unclear to me)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not explicitly addressed.
It looks like sampling can be one of these.
Minors:
- In Lemma 4.2, I think there's no explanation of what $\tau$ and $E(\mathcal{T}_i)$ represent. I know it's notation related to induced trees, but it is not introduced in the text.
- In line 175, there's no $x$ occurring in the definition of $\phi_{L_\text{Normal}}(t)$, why?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for pointing out both the strength and possible weakness of our work.
> Inappropriate structure can limit the modelling power:
- Similar to related modelling families (e.g., PCs, PGCs), the structure can have a high impact on the performance of the CC. To mitigate this issue, we proposed the first structure learning algorithm adapted from the well-known algorithm by [Gens and Pedro, 2013] for PCs. Note that structure learning of circuits (PCs, CCs alike) is a challenging and open task and further investigation is needed. However, an approach similar to the one taken in Einsum networks [Peharz et al. 2020] combined with minimization of the CFD could be a promising future direction.
> Reliability of numerical integration:
- We ran additional experiments with an increasing number of sample points to verify the reliability of the numerical integration through quadrature. The results indicate that a low number of sample points is sufficient as numerical integration is only required on the real line (1D). We thank the reviewer for pointing this out and will include the results into the Appendix and add further discussion on the reliability of the numerical integration.
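For intuition on why a low number of sample points suffices: recovering a density value from a univariate CF via the inversion theorem only requires a 1-D quadrature over $t$. A minimal sketch of such an inversion (our own illustration; grid size and truncation point are arbitrary choices, not the authors' implementation):

```python
import numpy as np

def density_from_cf(cf, x, t_max=10.0, n=2001):
    # Inversion theorem: f(x) = (1 / 2pi) * integral of exp(-i t x) phi(t) dt,
    # approximated by the trapezoidal rule on the truncated 1-D grid [-t_max, t_max].
    t = np.linspace(-t_max, t_max, n)
    vals = (np.exp(-1j * t * x) * cf(t)).real
    dt = t[1] - t[0]
    integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt
    return integral / (2.0 * np.pi)

# CF of the standard normal: phi(t) = exp(-t^2 / 2)
cf_std_normal = lambda t: np.exp(-0.5 * t ** 2)
```

For a rapidly decaying integrand like this one, the quadrature recovers the standard normal density, e.g. $f(0) = 1/\sqrt{2\pi} \approx 0.3989$, to high accuracy with a modest grid.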
> Sampling:
- Sampling from a characteristic function is generally not straightforward. There has been literature discussing sampling from CFs [Devroye, 1986, Ridout, 2009, Walker, 2017] and we believe these sampling algorithms can be adapted to sampling from CC in future works. Thanks for pointing this out, we will add this to the discussion of interesting future work.
> Unified view:
- PCs do not naturally provide a unified view and treat discrete and continuous RVs differently. For discrete RVs, probabilities or mass values are computed w.r.t. the counting measure, while for continuous RVs the reference measure is the Lebesgue measure. Moreover, RVs distributed according to a singular (continuous) distribution can typically not be represented at all. This dependence on the base measure is hidden in PCs and can result in challenges when it comes to learning these models in heterogeneous domains. For example, a model might focus only on maximizing the likelihood w.r.t. the Lebesgue measure during fitting. Consequently, prior works have suggested discretising the domain of continuous RVs (see MSPNs), which introduces new challenges. Moving away from the dependence on the base measure by representing the distribution through its characteristic function, which is independent of the base measure, alleviates this issue. Hence, CCs provide a truly unified view compared to PCs. We will clarify the “unified view” in the revised paper to better reflect our contribution.
> MLE at the root and at a leaf node:
- In parameter learning, maximizing the likelihood at the root of a CC requires applying the inversion theorem to the CC for each training sample. When leaf nodes do not have a closed-form density function, numerical integration has to be employed to obtain the density value given the data. This means MLE at the root is not guaranteed to be tractable.
- We thank the reviewer for raising this interesting question about the relationship between minimizing the distance and MLE, which is similar to the question from reviewer 1MK1. The connection between maximum likelihood estimation (MLE) and minimizing a distance (e.g., the CFD) to the empirical characteristic function (ECF) is indeed an interesting question. Minimizing the CFD to the ECF can be beneficial if no tractable form of the likelihood exists but the characteristic function can be tractably evaluated. As discussed in prior works (e.g., in [Yu, 2004]), minimizing a distance function to the ECF is most related to moment-matching approaches, but can result in more accurate fitting results. We will add further detail and a discussion on the topic to the revised version.
- Creating leaf nodes in structure learning: leaf nodes are created by fitting the estimated distribution to local data during structure learning. When a closed-form density/mass function is available at a leaf, the leaf parameters can be estimated via MLE. In the case of ECF leaves, the leaf nodes are created from local data following the definition of the ECF (in line 134). When there is no closed-form density, e.g. for $\alpha$-stable distributions, the algorithm in [McCulloch, 1986] is employed to estimate the parameters at the $\alpha$-stable leaves. We apologise that we did not specify this detail in the manuscript and will add the above to lines 248-250 for better clarification.
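The ECF construction and the distance being minimized can be sketched in a few lines (our own illustration; the grid of evaluation points and the use of an unweighted mean as the distance are assumptions, not the paper's exact objective):

```python
import numpy as np

def ecf(t, x):
    # Empirical characteristic function: (1/n) * sum_j exp(i t x_j)
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def cfd(phi_a, phi_b):
    # A simple squared characteristic-function distance on a finite grid
    return float(np.mean(np.abs(phi_a - phi_b) ** 2))

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=0.8, size=5000)
t = np.linspace(-3.0, 3.0, 61)

phi_emp = ecf(t, data)
phi_good = np.exp(1j * 1.5 * t - 0.5 * (0.8 * t) ** 2)  # CF at the true parameters
phi_bad = np.exp(1j * 0.0 * t - 0.5 * (2.0 * t) ** 2)   # CF at wrong parameters
```

Minimizing `cfd` against `phi_emp` over the model parameters then plays the role that likelihood maximization plays for closed-form leaves: the well-specified CF is markedly closer to the ECF than a mis-specified one.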
> Minors:
- Induced trees notation: Thanks for pointing this out, we will add one section in the Appendix to briefly introduce the notation of induced trees.
- Indeed there is no x in the definition of the characteristic function at the leaf, because a characteristic function is a function of t, as illustrated in Eq (1).
***
[Gens and Pedro, 2013] Robert Gens and Pedro Domingos. "*Learning the structure of sum-product networks.*" In ICML, 2013.
[Peharz et al. 2020] Robert Peharz et al. "*Einsum networks: Fast and scalable learning of tractable probabilistic circuits.*" In ICML, 2020.
[Devroye, 1986] Luc Devroye, "*An automatic method for generating random variates with a given characteristic function.*" SIAM journal on applied mathematics, 1986.
[Ridout, 2009] Martin S Ridout. "*Generating random numbers from a distribution specified by its Laplace transform.*" Statistics and Computing, 2009.
[Walker, 2017] Stephen G Walker. "*A Laplace transform inversion method for probability distribution functions.*" Statistics and Computing, 2017.
[Yu, 2004] Jun Yu. "*Empirical characteristic function estimation and its applications.*" Econometric reviews, 2004.
[McCulloch, 1986] J. Huston McCulloch. "*Simple consistent estimators of stable distribution parameters.*" Communications in statistics-simulation and computation, 1986.
---
Rebuttal Comment 1.1:
Comment: Many thanks for your clarifications. I further confirm the positive impression I had, and I'll keep supporting the paper. However, I'd stick to my score: I would have raised my score if sampling had been possible in CCs (at the very least, it's unclear if it is going to be). | Summary: This manuscript proposes a framework for directly representing the characteristic function of random variables by a probabilistic circuit-like structure. Unlike ordinary probabilistic circuits, the proposed framework, characteristic circuits, can treat distributions that do not have closed-form expressions for the density, or even marginals of discrete and continuous random variables. This is because the characteristic function provides a unified view for both discrete and continuous random variables. As a result, characteristic circuits can treat a broader class of distributions than ordinary probabilistic circuits. Despite this, it is proved that characteristic circuits keep the tractability of computing densities and marginals, which are important queries for probabilistic inference. Also, parameter and circuit-structure learning algorithms for characteristic circuits are given. The experiments showed that the proposed characteristic circuits perform well on a density estimation task over heterogeneous data sets, which include both discrete and continuous variables.
Strengths: I think that representing a characteristic function by a circuit is a simple yet strong idea for representing broader class of distributions. Also, proposing an algorithm for computing marginals on characteristic circuits is really good, because it inherits the strengths of the original probabilistic circuits that some rich inference queries are tractable in time proportional to the size of the circuit; although it only proves the marginal query, I think this query is one of the most important one for probabilistic inference. The experimental results truly support the usefulness of characteristic circuits when applied to density estimation task where the evaluation metric is the test log-likelihood. Since the original probabilistic circuits also show their strengths in this task, I think the selection of tasks is appropriate.
Weaknesses: It is not the first attempt to represent a generating function regarding random variables directly; the first one (as far as I know) is:
Probabilistic Generating Functions https://arxiv.org/abs/2102.09768 (published in ICML 2021).
This represents the probability generating function directly; thus, I think that framework is for discrete variables only. However, to clearly establish the position of this paper, a comparison with this work should be made explicitly in the main article.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Within the queries that are tractable for ordinary probabilistic circuits, are there any tractable queries other than marginals for characteristic circuits? Or are there any queries that are proven to be hard (e.g., NP-complete) for characteristic circuits?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I think the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and the suggested related work.
> Comparison to PGCs
- Indeed, PGCs are related to CCs as both can be considered to represent the probability distribution using its generating function rather than its density function. We have added to the updated manuscript a discussion and further details on how the two approaches relate. The key difference between PGCs and CCs is that, while PGCs can only represent discrete probability distributions that admit a probability generating function representation (i.e., with countable support), CCs can represent any probability distribution, as every probability measure has an associated characteristic function. Interestingly, compared to directly modeling the density/mass function of a distribution, we can perform model fitting even in cases where a density w.r.t. the Lebesgue or counting measure does not exist or is not tractable to evaluate, by minimizing the CFD to the empirical characteristic function.
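As a toy illustration of this point (ours, not from the paper or the rebuttal): the characteristic function $\varphi(t) = \mathbb{E}[e^{itX}]$ exists for every distribution and is easy to estimate from samples. The numpy sketch below compares the empirical characteristic function of Gaussian samples against the closed-form CF $\exp(i\mu t - \sigma^2 t^2/2)$; all parameter values are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=200_000)

# Empirical characteristic function (1/n) * sum_j exp(i t x_j)
# evaluated on a small grid of frequencies t.
t = np.linspace(-1.0, 1.0, 5)
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)

# Closed-form CF of N(mu, sigma^2): exp(i mu t - sigma^2 t^2 / 2).
cf = np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)
err = np.abs(ecf - cf).max()
```

With 200k samples the empirical CF matches the analytic one to a few thousandths, which is the concentration that CFD-based fitting relies on.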
> Are there any other tractable queries for characteristic circuits?
- Although not discussed in detail in the paper, the moments of any CCs can be computed tractably through differentiation. Even though the current work focuses on a unified view by moving to the spectral domain, we believe this to be a particularly useful property and added further details to the updated manuscript.
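The moment property referred to above can be checked numerically: since $\varphi'(0) = i\,\mathbb{E}[X]$, the mean is recoverable from a derivative of the CF at zero. The sketch below (our own illustration, not the authors' implementation) does this with a central finite difference of the empirical characteristic function; sample sizes and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(3.0, 1.0, size=100_000)  # samples with (assumed) mean 3

def ecf(t):
    """Empirical characteristic function at a single frequency t."""
    return np.exp(1j * t * x).mean()

# phi'(0) = i * E[X], so E[X] = phi'(0) / i; estimate phi'(0) by a
# central finite difference of the empirical CF.
h = 1e-4
d = (ecf(h) - ecf(-h)) / (2 * h)
mean_est = (d / 1j).real
```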
- Sampling from a characteristic function is generally not straightforward. There has been literature discussing sampling from CFs [Devroye, 1986, Ridout, 2009, Walker, 2017] and we believe these sampling algorithms can be adapted to sampling from CCs in future works. We thank the reviewer for pointing this out and will add a discussion to the manuscript.
***
[Devroye, 1986] Luc Devroye, "*An automatic method for generating random variates with a given characteristic function.*" SIAM journal on applied mathematics, 1986.
[Ridout, 2009] Martin S Ridout. "*Generating random numbers from a distribution specified by its Laplace transform.*" Statistics and Computing, 2009.
[Walker, 2017] Stephen G Walker. "*A Laplace transform inversion method for probability distribution functions.*" Statistics and Computing, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed reply. I think the comparison of probabilistic generating circuits and characteristic circuits is adequately addressed in the rebuttal comment.
Regarding the tractability of queries, I agree with Reviewer HuCd that sampling is one of the most important queries that PCs support. However, I think that even if sampling is generally difficult for CCs, the other parts of the paper (concepts, comparison with PCs, and the empirical results) can constitute significant technical contributions. Thus, I am in favor of accepting this manuscript. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the use of characteristic functions as probabilistic models for heterogeneous data and proposes characteristic circuits (CC) for their representation. The authors propose efficient algorithms for computing (marginal) densities with CCs and show that the parameters of CCs can be learned by minimizing the CFD between the CC and the ECF. The authors also show that CCs achieve strong performance not only on synthetic data but also on some commonly used density estimation benchmarks.
Strengths: To the best of my knowledge, the use of characteristic functions as a new language for probabilistic modeling, especially as a unified framework for heterogeneous data, is very novel. The empirical results are also very strong. I believe this work opens a brand new avenue for density estimation.
Weaknesses: As a non-expert, I spent a lot of time on the background section; despite the uniqueness part in the inversion theorem, it is not completely intuitive how a probability measure is encoded as its characteristic function. It would be helpful if the authors could provide at least one simple example.
The authors use too many acronyms throughout the paper, especially in Section 5; it is not easy for me as a reader to distinguish between CC-N, RS, SL, CFD, etc.
For non-expert readers like me, more details on the empirical evaluations (in the main paper) could be helpful: e.g. how are the likelihoods measured for the heterogeneous datasets? are they computed via numerical integrations? etc.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In Section 4.2, the authors propose to do parameter learning by minimizing the CFD between the CC and the ECF because the likelihood of a CC is not guaranteed to be tractable. Yet I wonder if it is possible to discuss the relationship between the CFD and the likelihood.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The structure learning algorithm seems more of an adaptation of the existing structure learning algorithms for SPNs. This is not a major issue as structure learning of circuits has been a very challenging problem and probably not a main focus of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback and questions.
> How is the probability measure encoded as a CF?
- The characteristic function of a probability measure is its Fourier transform and, hence, can be obtained through the application of the Fourier transform. However, in our work, we do not start from the probability measure but directly model the characteristic function instead. This allows us to implicitly learn any probability measure by instead learning its spectral form. We refer to [Sasvári, 2013] for a more detailed discussion.
> Details on the empirical evaluations:
- The likelihoods in the empirical evaluations are computed based on the inversion theorem. For discrete leaves and Gaussian leaves, the likelihoods can be computed analytically, while for $\alpha$-stable leaves, the likelihoods are computed via numerical integration using quadrature. In general, it depends on the form of the characteristic function that is assumed at the leaf nodes. For example, one might relax the assumption that it is specified by a parametric family and could learn the characteristic functions directly. However, doing so is more involved, as one has to ensure that the properties listed in Section 3.2 are still fulfilled. We believe this to be a promising future avenue. Once likelihoods at the leaves are computed, they are propagated bottom-up following the inversion theorem in Section 4.1. We will improve the description in the updated manuscript.
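For intuition on the quadrature step, here is a minimal sketch (ours, not the authors' implementation) that recovers a density value from a characteristic function via the inversion formula $f(x) = \frac{1}{2\pi}\int e^{-itx}\varphi(t)\,dt$, using the standard normal CF so the answer is known in closed form; the truncation range and grid size are arbitrary choices.

```python
import numpy as np

# CF of the standard normal, phi(t) = exp(-t^2 / 2), on a truncated grid;
# the integrand is negligible outside |t| <= 20.
t = np.linspace(-20.0, 20.0, 8001)
dt = t[1] - t[0]
phi = np.exp(-0.5 * t**2)

# Inversion formula evaluated by a simple uniform-grid quadrature:
# f(x) = (1 / 2 pi) * integral exp(-i t x) phi(t) dt.
x = 0.7
f_x = ((np.exp(-1j * t * x) * phi).sum() * dt).real / (2 * np.pi)
```

The result agrees with the standard normal density at 0.7 to high precision, since the integrand is smooth and decays rapidly.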
> Acronyms:
- We will reduce the use of acronyms (RS, SL, etc.) in the updated manuscript for better readability and thank the reviewer for pointing this out.
> Regarding the question on the relationship between CFD and likelihood:
- The connection between maximum likelihood estimation (MLE) and minimizing a distance (e.g., the CFD) to the empirical characteristic function (ECF) is indeed an interesting question. Minimizing the CFD to the ECF can be beneficial if no tractable form of the likelihood exists but the characteristic function can be tractably evaluated. As discussed in prior works (e.g., [Yu, 2004]), minimizing a distance function to the ECF is most closely related to moment-matching approaches, but can result in more accurate fits. An interesting future direction could be a hybrid objective in which tractability of either the likelihood function or the characteristic function is exploited. We thank the reviewer and will add further detail and a discussion of the topic to the revised version.
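As a concrete (hypothetical) instance of fitting by minimizing a distance to the ECF, the sketch below estimates the mean of a unit-variance Gaussian by a coarse grid search over a CFD-style objective; the frequency grid, search range, and sample size are all arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(1.5, 1.0, size=50_000)  # data with unknown mean, known unit variance

t = np.linspace(-2.0, 2.0, 41)                  # fixed frequency grid
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)  # empirical CF on the grid

def cfd(mu):
    # Squared L2 distance on the grid between the model CF of N(mu, 1)
    # and the empirical characteristic function.
    model = np.exp(1j * mu * t - 0.5 * t**2)
    return np.sum(np.abs(model - ecf) ** 2)

grid = np.linspace(0.0, 3.0, 301)
mu_hat = grid[np.argmin([cfd(m) for m in grid])]
```

The minimizer lands close to the true mean, which is the moment-matching flavor of ECF-distance estimation mentioned above.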
***
[Sasvári, 2013] Zoltán Sasvári. "*Multivariate characteristic and correlation functions.*" volume 50. Walter de Gruyter, 2013.
[Yu, 2004] Jun Yu. "*Empirical characteristic function estimation and its applications.*" Econometric reviews, 2004.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking your time to answer the questions. I look forward to reading more about the connections between MLE and minimizing CFD in an updated version of the paper. I'm in favor of accepting this paper. | null | null | null | null | null | null |
Replicable Clustering | Accept (poster) | Summary: This paper studies the design of clustering approximation algorithms in the context of statistical clustering under the notion of replicability. The replicable clustering problem requires that, with high probability, the algorithm output the exact same partition of the sample space after two executions on different inputs drawn from the same distribution. Given a black-box $\beta$-approximation algorithm for the $k$-clustering problem, this paper gives an approximation framework whose output, with constant probability, achieves an $O(\beta)$-approximation while satisfying the $\rho$-replicability property.
Strengths: This paper is the first to initiate the study of replicable clustering, which is a recent hot topic in the field of machine learning. In this paper, a new framework is introduced for handling replicable clustering requirements. This paper also establishes sample complexities for clustering problems in different norms with different metrics. In order to achieve replicability, a novel tree decomposition method called the Replicable Quad Tree is designed so that replicable estimates can be obtained from the constructed tree decomposition. The proposed Replicable Quad Tree is interesting and quite different from the well-known HST-based tree decomposition method, and it could have great potential for designing approximation algorithms for other replicable problems.
Weaknesses: 1. The sample complexity has exponential dependence on the dimension $d$, which could be the main weakness for this paper when $d$ is large. Although in Euclidean metrics, this can be avoided by dimensionality reduction techniques, for general norms, there is no data-oblivious dimensionality reduction scheme.
2. The main proofs of this paper are hard to follow. It would be better to provide intuitive ideas in each section before the proofs so that one can easily follow the main ideas of the proofs.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can the authors offer some intuitive insights into how the proposed Replicable Quad Tree ensures replicable estimation?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Since this is a theoretical paper, I don't think this paper has any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the extensibility of our work and their time and effort in reading our paper. We address their comments as follows.
> The sample complexity has exponential dependence on the dimension $d$, which could be the main weakness for this paper when $d$ is large. Although in Euclidean metrics, this can be avoided by dimensionality reduction techniques, for general norms, there is no data-oblivious dimensionality reduction scheme.
We thank the reviewer for raising an important point. Although the dimensionality reduction technique only works for the Euclidean distance, we remark that the majority of clustering applications use the Euclidean distance. Therefore, this does not limit the applicability of our algorithms.
> The main proofs of this paper are hard to follow. It is better to provide intuitive ideas in each section before the proofs such that one could easily follow the main ideas of the proofs.
We thank the reviewer for the thoughtful suggestion and will carry it forward in the next version of our manuscript.
> Can the authors offer some intuitive insights into how the proposed Replicable Quad Tree ensures replicable estimation?
We can view the quad tree construction algorithm as a series of decisions based on heavy-hitter estimations. This basic statistical operation can be made replicable. Once every decision is replicable, the algorithm also becomes replicable. We will add this discussion to the next version of our manuscript.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response and clarification. After looking at other reviews and corresponding rebuttals, I have no further questions. | Summary: This manuscript focuses on the concept of replicability in statistical clustering algorithms. It introduces the notion of replicability, which refers to the ability of an algorithm to produce consistent results when executed multiple times on different inputs from the same distribution. Replicability is seen as crucial for ensuring the validity and reliability of scientific findings. The manuscript proposes replicable algorithms for three common clustering problems: k-medians, k-means, and k-centers. These algorithms utilize approximation routines for their combinatorial counterparts. The replicable algorithm for statistical Euclidean k-medians achieves a replicable O(1)-approximation with polynomial complexity. Similarly, the algorithm for statistical Euclidean k-centers achieves replicable O(1)-approximation with an additional O(1)-additive error but with exponential sample complexity. The manuscript also discusses the importance of clustering algorithms in unsupervised learning and the lack of a universally agreed-upon definition for the quality of clustering solutions. It highlights the challenges and trade-offs that algorithm designers face due to factors such as random initialization, similarity measures, noise in measurements, and outliers in the dataset. These issues contribute to non-replicable results in clustering algorithms. Furthermore, the manuscript provides related work on norms and parameters used in clustering and presents experimental results on synthetic distributions to validate the proposed replicable algorithms.
Strengths: The manuscript's innovation lies in proposing replicable clustering algorithms for statistical clustering problems. It introduces the concept of replicability and highlights its significance in ensuring the validity and reliability of scientific findings. The manuscript aims to address concerns about replicability in clustering algorithms, which is a crucial topic in the subfields of machine learning and data science. It presents replicable algorithms for k-medians, k-means, and k-centers problems and outlines their theoretical results. Overall, the manuscript's innovation lies in its contribution to the development of replicable algorithms for statistical clustering.
Weaknesses: Many of the theorems are not well explained, and the experimental section is relatively limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1] How does the proposed replicable algorithm compare to existing clustering algorithms in terms of performance and accuracy?
2] Are there any limitations or assumptions in the proposed replicable algorithms that could affect their applicability in real-world scenarios?
3] How generalizable are the experimental results presented on synthetic distributions in 2D? Can the replicable algorithms be applied to other types of datasets?
4] How sensitive are the replicable algorithms to the choice of parameters such as the choice of exponent in the cost function or the measure of similarity of the data?
5] Does the study discuss any potential implications or future research directions regarding the replicability of clustering algorithms?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In the experiment, real-world data sets are not used, which is not very convincing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the importance of replicability in clustering as well as their time and effort in reading our paper. We address their comments as follows.
> 1] How does the proposed replicable algorithm compare to existing clustering algorithms in terms of performance and accuracy?
Our algorithm relies on a reduction to (weighted) clustering algorithms in the combinatorial setting. For any existing $\beta$-approximation algorithm, our techniques yield a $\beta(1+\varepsilon)$-approximate solution for any $\varepsilon > 0$.
For the case of (non-replicable) statistical k-medians/k-means, the work of Ben-David [2007] implies randomized polynomial-time constant-ratio approximation algorithms with sample complexity $O(k / \varepsilon^2)$. However, these algorithms are non-replicable and we require greater sample complexity to achieve the replicability guarantee.
> 2] Are there any limitations or assumptions in the proposed replicable algorithms that could affect their applicability in real-world scenarios?
For the case of k-medians/k-means, there is a blow-up in the sample complexity for non-Euclidean metrics that scales exponentially with the dimension. However, we remark that the majority of applications use the Euclidean distance where this exponential blow-up can be avoided using our dimensionality reduction method.
> 3] How generalizable are the experimental results presented on synthetic distributions in 2D? Can the replicable algorithms be applied to other types of datasets?
> In the experiment, real-world data sets are not used, which is not very convincing.
The experimental results can be generalized to higher dimensional datasets through our dimensionality reduction technique. We presented the experimental results in 2D for simplicity of visualization and implementation.
> 4] How sensitive are the replicable algorithms to the choice of parameters such as the choice of exponent in the cost function or the measure of similarity of the data?
We thank the reviewer for raising an interesting question. Although we have not explored this explicitly, we expect the algorithm to be quite sensitive to the choice of parameters. For example, with regards to the exponent $p$, there is a non-trivial blow-up in the sample complexity just going from $p=1$ (k-medians) to $p=2$ (k-means). We acknowledge this is a potential direction for future research.
> 5] Does the study discuss any potential implications or future research directions regarding the replicability of clustering algorithms?
We believe many future directions are possible. For example, determining some lower bounds for statistical clustering, even in the non-replicable case, would be very interesting. Also, it may be possible to improve our sample complexity bounds by considering more recent coreset algorithms. Moreover, it may be fruitful to consider connections with differential privacy and other stability measures. One could also consider the problem of hierarchical clustering. In general, we believe that we have only scratched the surface of this problem and hope that it inspires future research. We will add this discussion to the next version of the manuscript.
[Ben-David, 2007]: Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for k-median and k-means clustering. | Summary: This paper studies the concept of replicability in clustering. This topic is important because replicability is what allows other researchers to reproduce the results of a study to verify their correctness. This is a big problem these days, with 50% of scientists saying that there is a replicability crisis. In the interest of improving the scientific verification process, it is worthwhile to design replicable algorithms (i.e., algorithms that function the same when executed twice with the same random seed). This differs from the related notion of clustering stability in that its success is more dependent on the algorithmic design than on structures in the data.
In their setting, points are sampled from a distribution over the d-dimensional unit ball according to some given metric. It models many norms, including \ell_p norms. They use Impagliazzo et al.'s notion of rho-replicability: the probability (over every two sampled sets of points and a single random seed) that the algorithm run on each set with the same random seed yields the same output is at least 1 - rho. Outputs to these clustering problems are functions that, given a point, assign it to a cluster (with no explicit center, just an identifier).
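For readers unfamiliar with the shared-seed formulation, a standard toy primitive from this line of work (a generic illustration, not taken from the paper under review) is replicable rounding of a statistic: the empirical mean is snapped to a grid whose random offset comes from the shared internal randomness, so two runs on fresh samples almost always return the identical value. All numbers below are arbitrary.

```python
import numpy as np

def replicable_mean(sample, alpha, rng):
    # Round the empirical mean down to a grid of width alpha whose offset
    # is drawn from the shared internal randomness; since the two sample
    # means are within << alpha of each other w.h.p., both runs usually
    # fall into the same grid cell and return the identical value.
    offset = rng.uniform(0.0, alpha)
    return alpha * np.floor((sample.mean() - offset) / alpha) + offset

draw = lambda seed: np.random.default_rng(seed).normal(0.7, 1.0, size=1_000_000)
m1 = replicable_mean(draw(1), 0.5, np.random.default_rng(7))  # run 1, shared seed 7
m2 = replicable_mean(draw(2), 0.5, np.random.default_rng(7))  # run 2, same seed 7
```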
The main results are new algorithms for statistical k-means and k-medians, with further improvements when using the Euclidean metric. Their approximation factors are both near the approximation factor of a given black-box vanilla algorithm, and their sample complexity on general metrics has an exponential dependence on the dimension (it is sub-exponential in the Euclidean version). They also give an approximation algorithm for the replicable statistical k-centers problem. Given an approximation of the form a*OPT+b for k-centers, they give an approximation of the form c*OPT+d, where c and d are linear in both a and b. The query complexity is exponential in the dimensionality.
Their algorithm uses a quadtree decomposition, which creates a tree structure over all points. The leaves (which correspond to points) are then mapped to cluster centers. So when you receive a query for a point, you traverse the tree to find the leaf which contains it (as it is a partition of the space), and you map it to the corresponding cluster. However, in order to adequately depict the distribution of the space with their decomposition (which they are not given), they require access to an exponential number of queries in d. In the Euclidean case, they can first reduce the dimensionality via Johnson-Lindenstrauss, and map an epsilon-net from the original space to the low-dimensional space. There still is the difficulty of mapping the clustering between these two spaces, but it seems most of those details are in the appendix.
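The dimensionality-reduction step mentioned above is the classical Johnson-Lindenstrauss transform; a minimal numpy sketch (our illustration, with arbitrary sizes) of its distance-preservation property:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 1_000, 400   # m ~ O(log(n) / eps^2) target dimension (illustrative)
X = rng.normal(size=(n, d))

# A random Gaussian matrix scaled by 1/sqrt(m) preserves pairwise Euclidean
# distances up to a (1 +/- eps) factor with high probability.
P = rng.normal(size=(d, m)) / np.sqrt(m)
Y = X @ P

ratio = np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1])
```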
For replicability, they form a grid, sample points in the space, and map them to the centers of cells to estimate the probability mass of each cell. They keep the high-mass cells according to a random threshold, and then they apply a vanilla algorithm.
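A stripped-down sketch of that grid-plus-random-threshold idea (our reconstruction for intuition, not the authors' code): snap samples to cells, estimate cell masses, and keep cells whose mass clears a cutoff drawn from the shared randomness, so two runs on independent samples from the same distribution retain the same cells. Grid width, cutoff range, and sample size are arbitrary choices.

```python
import numpy as np

def heavy_cells(sample, width, rng):
    # Snap points to grid-cell ids and estimate each cell's probability mass.
    cells, counts = np.unique(np.floor(sample / width), axis=0, return_counts=True)
    mass = counts / len(sample)
    # The cutoff is drawn from the shared internal randomness, so it is the
    # same across runs; the mass estimates concentrate, so the retained set
    # of cells is identical across runs with high probability.
    cutoff = rng.uniform(0.02, 0.06)
    return {tuple(c) for c, m in zip(cells, mass) if m > cutoff}

draw = lambda seed: np.random.default_rng(seed).normal(0.0, 1.0, size=(100_000, 2))
run1 = heavy_cells(draw(1), 1.0, np.random.default_rng(42))  # sample 1, shared seed
run2 = heavy_cells(draw(2), 1.0, np.random.default_rng(42))  # sample 2, same seed
```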
Finally, they compare k-means++ and k-means++ run on the coreset produced by their replicability procedure. It is pretty clear that their approach leads to much greater replicability.
Strengths: This paper is written extremely clearly. I really appreciate how slow and careful the introduction and preliminaries are, which I believe could be followed by someone unfamiliar with the field. For instance, while many of us take for granted that a replicable clustering algorithm optimizes for some utility notion, they take the time to explain this and how it is distinct from their replicability goal (which is a bigger focus of their preliminary section). They also take the time to point out to the reader that their metric function and the ell_p parameter p are different, and give an in-depth but understandable explanation of the intuition behind ell_p for different values of p. The informal presentation at the beginning of results in section 3 is also very nice.
The results are nice and interesting, but I wouldn’t say they are groundbreaking. The techniques have clear foundations in previous methods, but it also seems unique how they combined them together. I wasn’t sure by reading the paper how their replicability results compare to previous literature.
Weaknesses: I found it very odd that the main focus of the paper introduction and title was replicable k-centers, but they spend most of their time on (nonreplicable) k-centers. It seems like if the highlight of the paper was replicability, they should have focused on the methods used to achieve this. I feel the details of this algorithm were insufficient for a reader invested in replicable k-centers. Additionally, I wasn’t sure how their results compared to any baseline results that exist, so it is difficult to judge the novelty.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. I was a bit confused by the sentence “We emphasize that even though we only observe a sample from P we aim to solve the problem on the whole support of the distribution.” Can you clarify exactly what this means? Is it that the algorithm works on an actual given dataset P but needs to be able to work on all possible datasets? I would think that would be assumed.
2. How do your algorithms compare to existing algorithms? Is there any baseline implied by any other work?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There was no limitations section, though it did not seem needed (but would be preferred).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reading our paper and address their comments as follows.
> I wasn’t sure by reading the paper how their replicability results compare to previous literature.
> How do your algorithms compare to existing algorithms? Is there any baseline implied by any other work?
> I wasn’t sure how their results compared to any baseline results that exist, so it is difficult to judge the novelty.
To the best of our knowledge, we are the first to consider replicability as a theoretical property within the clustering setting. Thus it is not clear if there is any baseline against which we can compare our work.
>I found it very odd that the main focus of the paper introduction and title was replicable k-centers, but they spend most of their time on (nonreplicable) k-centers. It seems like if the highlight of the paper was replicability, they should have focused on the methods used to achieve this. I feel the details of this algorithm were insufficient for a reader invested in replicable k-centers.
Although k-centers is a well-studied clustering problem, it is not the only clustering formulation. We also study statistical k-medians/k-means clustering and provide replicable algorithms (Theorem 3.1, Theorem 3.2). Moreover, we derive replicable k-centers algorithms as well (Theorem 3.3). Our title refers to the general clustering problem and not just k-centers. We would appreciate it if the reviewer pointed out the specific parts of the introduction that lead to confusion since we do not explicitly refer to k-centers in the introduction.
> I was a bit confused by the sentence “We emphasize that even though we only observe a sample from P we aim to solve the problem on the whole support of the distribution.” Can you clarify exactly what this means? Is it that the algorithm works on an actual given dataset P but needs to be able to work on all possible datasets? I would think that would be assumed.
We thank the reviewer for identifying a point of potential confusion. We will clarify this in the next version of our manuscript.
Clustering has mainly been studied from the combinatorial point of view, where the distribution is the uniform distribution over some finite points and we are provided the entire distribution. The statistical clustering setting generalizes to arbitrary distributions with only sample access. We wanted to clarify that although we only have access to samples, our output solution should be a good solution for the entire distribution, not just the observed data.
> There was no limitations section, though it did not seem needed (but would be preferred).
We will add a limitations section to summarize the assumptions that we rely on. We remark that these assumptions are explicitly stated in other sections of the manuscript. | Summary: The topic of this paper is replicable clustering, which at a high level asks for the design of an algorithm whose two runs on different samples from the "same" input distribution produce the same output. Replicability is a notion introduced recently in a work by Impagliazzo et al. [2022].
The paper gives algorithms for statistical clustering with centroid-based objectives under $p$-norm costs. For the case of $k$-means and $k$-medians, the authors use corresponding centroid-based $k$-clustering algorithms on "finite"-size inputs (referred to as combinatorial clustering) in a black-box manner and design an $O(1)$-approximation with poly($d$) sample complexity (but exponential in $k$?). However, the case of $k$-center is more complicated, as it depends on the $\ell_\infty$-norm and is very sensitive to the largest distance. The results on $k$-center have a worse approximation guarantee and sample complexity and require stronger assumptions. In particular, for the case of $k$-center, the guarantee is bicriteria.
Strengths: At a high level, the notion is interesting and related to several other important topics. The technical contribution of the paper seems to be satisfactory. However, I'm less familiar with the literature, and the technicality of the paper might not be significant for someone working in this area or on differential privacy.
Weaknesses: The paper follows the notion of [Impagliazzo et al., 20]; however, I am still not quite convinced why the shared randomness assumption is meaningful. A related question is whether an assumption such as Assumption F.2 can be sufficient for achieving replicability?
The result seems to heavily depend on the coreset construction of Frahling and Sohler [2005]. Can you elaborate on why more recent coreset constructions can be applied here?
Theorem 3.1 and Theorem 3.2 provide upper bounds, but there is no lower bound. E.g., is there any hope to show that exponential dependence on $k$ (or on $d$, for general metric spaces) is needed?
How is the algorithm in this paper different from those for differentially private clustering?
Can you elaborate on the proof of Corollary E.9? In particular, how does the dimension show up?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions in the weakness section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no discussion of limitation by the authors. Though, the paper does not seem to have potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reading our paper and address their comments as follows.
> The paper follows the notion of [Impagliazzo et al., 2022]; however, I am still not quite convinced why the shared randomness assumption is meaningful. A related question is whether an assumption such as Assumption F.2 can be sufficient for achieving replicability?
We thank the reviewer for raising an important point.
Sharing the randomness can be thought of as a way to couple the two executions of the algorithm. Kalavasis et al. [2023] consider a notion of “replicability” where there is no need to share internal randomness and the two executions of the algorithm can be coupled in an arbitrary way. They also show that such algorithms can be converted to replicable algorithms by a specific implementation of the internal randomness. However, the conversion still requires sharing of the internal randomness and incurs computation time exponential in the dimension of the data.
For our specific setting, shared randomness is crucial in order to achieve the exact same output in two executions of the algorithm (with high probability). For numerical queries such as mean estimation, this property may not mean much, as the utility of the output (distance to true mean) naturally aligns with replicability (distance between two executions). However, for clustering, the utility (objective function) does not correlate with the replicability (difference between the centers output across the two runs). In this case, sharing randomness seems to be required for replicability.
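To make the role of shared randomness concrete, here is a minimal sketch of the randomized-rounding idea that appears in the replicability literature (this is an illustration, not our clustering algorithm; all names and constants are our own): a statistic is snapped to a grid whose offset is drawn from the shared internal randomness, so two executions on fresh i.i.d. samples produce the exact same output with high probability.

```python
import random

def replicable_mean(samples, eps, shared_rng):
    """Estimate a mean, then round it to a grid whose offset comes from
    the shared internal randomness. Two runs on different i.i.d. samples
    give empirical means within ~eps of each other, so on a grid of width
    4*eps they land in the same cell (identical output) with high probability."""
    grid = 4 * eps
    offset = shared_rng.uniform(0, grid)   # shared across both executions
    mean = sum(samples) / len(samples)
    return round((mean - offset) / grid) * grid + offset

def fresh_sample(seed, n=5000):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Two executions: different data, but the SAME internal randomness.
out1 = replicable_mean(fresh_sample(1), eps=0.05, shared_rng=random.Random(7))
out2 = replicable_mean(fresh_sample(2), eps=0.05, shared_rng=random.Random(7))
print(out1 == out2)  # identical outputs with high probability
```

Without the shared offset, two runs would almost always round to nearby but different values; sharing the randomness couples the two executions so their outputs coincide exactly.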
We show that Assumption F.2 is indeed sufficient for the specific case of k-centers clustering (Theorem F.8).
> The result seems to heavily depend on the coreset construction of Frahling and Sohler [2005]. Can you elaborate on why more recent coreset constructions can be applied here?
The coreset construction of Frahling and Sohler [2005] can be adapted to the statistical setting since the algorithm can be viewed as a series of decisions based on heavy hitters estimation. This elementary statistical operation generalizes from the combinatorial setting to the statistical setting. We conjecture that more recent coreset constructions can also be adapted this way if they can be viewed as a series of elementary statistical operations.
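As a toy illustration of why heavy hitters estimation transfers from the combinatorial to the statistical setting (a sketch under our own assumptions, not the construction of Frahling and Sohler): empirical frequencies computed from a sample concentrate around the true probabilities, so thresholding them recovers the heavy elements of the distribution with high probability.

```python
import random
from collections import Counter

def heavy_hitters(sample, threshold):
    """Return the elements whose empirical frequency exceeds `threshold`.
    With enough samples, empirical frequencies concentrate around true
    probabilities, so this elementary statistical operation needs only
    sample access to the distribution."""
    counts = Counter(sample)
    n = len(sample)
    return {v for v, c in counts.items() if c / n > threshold}

# A distribution dominated by two elements plus a rare tail.
rng = random.Random(0)
population = ["a"] * 50 + ["b"] * 30 + list("cdefghij")
sample = [rng.choice(population) for _ in range(5000)]
print(sorted(heavy_hitters(sample, threshold=0.1)))  # ['a', 'b']
```

Any coreset construction whose decisions reduce to a series of such elementary statistical operations would, we conjecture, admit a similar adaptation.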
> Theorem 3.1 and Theorem 3.2 provide upper bounds, but there is no lower bound. E.g., is there any hope to show that exponential dependence on $k$ (or on $d$, for general metric spaces) is needed?
To the best of our knowledge, there is little known about lower bounds for statistical clustering, even in the non-replicable setting. We believe this would be an interesting line of future work and will mention this in the next version of our manuscript.
> How is the algorithm in this paper different from those for differentially private clustering?
We thank the reviewer for raising an important point.
Differential privacy (DP) provides a worst-case combinatorial guarantee for two runs of the algorithm on neighboring datasets, i.e., two executions when the datasets differ in one element. Moreover, the definition of DP uses the max-divergence as a measure of statistical distance of the two posteriors of the algorithm. On the other hand, the definition of replicability we employ is statistical in the sense that it considers two executions where the inputs are (different) i.i.d. datasets from the same underlying distribution. Notice that the definition of DP does not require that the data come from a distribution, since it considers only one change in the dataset. Another difference is that instead of asking for small max-divergence between the posteriors of the algorithm on the two executions like DP, the definition of replicability requires that the outputs are exactly the same when the internal randomness is shared. Hence, these two definitions are not trivially comparable.
Nevertheless, the works of Bun et al. [2023] and Kalavasis et al. [2023] provide connections between DP and replicability for certain classes of problems. Although it is not immediate from these papers, this supports the plausible conjecture that there may be a way to connect the two lines of work in the clustering setting.
> Can you elaborate on the proof of Corollary E. 9? In particular, how does the dimension show up?
We thank the reviewer for identifying a possible point of confusion. In general, we plan to make the appendix more self-contained in the next version of our manuscript.
The main basis of our dimensionality reduction scheme is a dimensionality reduction result for the combinatorial setting (Theorem E.1). The dimension shows up as part of the statement of this result. Section E.3 extends this result to the bounded distributional setting as follows: We first discretize the bounded domain to reduce it to a weighted version of the combinatorial setting with Proposition E.8 quantifying the estimation error of such a discretization. Then Corollary E.9 follows by applying Theorem E.1 to the weighted combinatorial case.
All in all, the dimension shows up since Corollary E.9 is derived from Theorem E.1 which contains the dimension.
[Kalavasis et al., 2023]: Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas: Statistical indistinguishability of learning algorithms.
[Frahling and Sohler, 2005]: Gereon Frahling and Christian Sohler. Coresets in dynamic geometric data streams.
[Bun et al., 2023]: Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Jessica Sorrell, and Satchit Sivakumar: Stability is stable: Connections between replicability, privacy, and adaptive generalization. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper initiates the study of formal replicability for clustering algorithms. Replicability is defined to be a property of an algorithm for a statistical clustering problem, requiring that fixing the internal randomness of the algorithm while resampling the input data will yield exactly the same (representation of a) mapping from points to clusters. Upper-bounds are given for the statistical k-means, k-medians, and k-centers problems, with poly(d) sample complexity proven via dimensionality reduction in the case of euclidian distance. The theoretical results are also empirically evaluated on synthetic data.
Strengths: This paper addresses an interesting problem by extending recently introduced formal notions of replicability to the task of statistical clustering. Clustering seems like a setting where replicability is particularly well-motivated, as there are many reasonable desirable properties for clustering solutions, and being able to ensure that solutions which demonstrate a particular balance of these properties are in fact a result of the algorithm generating solutions, and not just a fluke of the sample, is a natural objective.
Weaknesses: The writing was very readable, but it would have been nice to have a more thorough comparison of this work to the related work. What is the cost of replicable clustering compared to non-replicable algorithms for the same problems? Does this notion of replicability solve any of the issues that motivated the "stability of centers" condition for selection of k that is mentioned in Section 1.1?
I also found Section 4 a bit difficult to parse. I think some of the notation used in Algorithm 4.1 isn't defined, and though it's reasonably intuitive, using more precise language in the high-level exposition and fully defining all variables/notation would definitely improve the readability of this section.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors address all limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the importance of replicability in clustering algorithms. We appreciate the reviewer for their time and effort in reading our paper and address their constructive comments as follows.
> What is the cost of replicable clustering compared to non-replicable algorithms for the same problems?
For the case of (non-replicable) statistical k-medians/k-means, Ben-David’s work [Ben-David, 2007] implies randomized polynomial-time constant-ratio approximation algorithms with sample complexity $O(k / \varepsilon^2)$.
For the case of (non-replicable) k-centers, ours is the first statistical formulation of the problem. Hence it is not clear how to compare our guarantees.
We will add these remarks to the next version of the manuscript.
> Does this notion of replicability solve any of the issues that motivated the "stability of centers" condition for selection of k that is mentioned in Section 1.1?
We thank the reviewer for raising an interesting point.
The work [Ben-David et al., 2006] shows that the “stability of centers” is determined by the symmetry of data and does NOT reflect anything about the choice of k. Therefore, the stability of centers is perhaps not the right criterion for selecting k. Our work reaffirms the conclusions of [Ben-David et al., 2006] in that “stability” can essentially be achieved for any choice of k.
We believe that exploring more fine-grained notions of replicability, like list replicability and certificate replicability that were proposed recently in [Dixon et al., 2023], and understanding how these quantities vary as a function of k can shed light on this matter.
> I also found Section 4 a bit difficult to parse. I think some of the notation used in Algorithm 4.1 isn't defined, and though it's reasonably intuitive, using more precise language in the high-level exposition and fully defining all variables/notation would definitely improve the readability of this section.
We thank the reviewer for this suggestion. We will fully define all variables/subroutines used in Algorithm 4.1 within the next version of the manuscript.
[Ben-David, 2007]: Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for k-median and k-means clustering.
[Ben-David et al., 2006]: Shai Ben-David, Ulrike Von Luxburg, and Dávid Pál. A sober look at clustering stability.
[Dixon et al., 2023]: Peter Dixon, A. Pavan, Jason Vander Woude, and N. V. Vinodchandran. List and certificate complexities in replicable learning.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I will keep my score. | null | null | null | null | null | null |
Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance | Accept (poster) | Summary: The paper focuses on the robustness of explanations. It begins by defining explanation invariance and equivariance concepts using geometric deep learning formalism and demonstrates that certain popular interpretability methods inherently possess theoretical robustness guarantees. Two metrics, invariance and equivariance scores, are introduced for empirically assessing explanation robustness. These metrics are applied to evaluate various interpretability methods across different modalities. Finally, the paper provides a set of five actionable guidelines to ensure that interpretability methods are employed in a manner that guarantees robustness.
Strengths: 1. The paper presents a high-level framework for evaluating the robustness of explanations and introduces two corresponding metrics. Unlike previous work that mostly focuses on saliency-based explanations for image classification, this framework can be applied to various explanation methods (feature-based, concept-based, and example-based) and modalities (images, graphs, and time series);
2. In addition to offering an evaluation framework, the authors also provide guidelines for generating robust explanations. These insights can assist the community in developing improved explanation methods.
Weaknesses: 1. The paper's organization could be better aligned with the summaries of contributions provided in the abstract and on page 3. The structure in later sections does not closely follow these summaries, which may make it difficult for readers to follow the narrative.
2. The paper appears to cover many points, potentially leading to the omission of important details.
I am interested in understanding how different explanation methods relate to theoretical robustness guarantees (invariant, equivariant), but the paper only presents the results (Table 1) without discussing them. Although the mathematical proofs are available in Appendix D, the main paper lacks an explanation and discussion of these results. For instance, while Table 1 indicates that gradient-based methods have conditional equivariance guarantees, the necessary conditions or assumptions are not explicitly stated. Including explanations or discussions in the main paper would significantly improve clarity.
3. The use of the Dihedral Group for CIFAR10 and STL10 in the experimental section is unclear. It would be helpful if the authors provided an example of the transformations applied to the images in this context.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could the authors provide further discussion and explanations regarding the results in Table 1, especially for those that are conditionally guaranteed?
2. In the experimental section, what does the Dihedral Group represent for CIFAR10 and STL10? Could the authors share an example of the specific transformations applied to images in this scenario?
3. While there are numerous publications on the explanations for NLP, the paper does not mention the robustness of explanations for NLP. Is it possible to apply the proposed framework to evaluate the robustness of these methods within the NLP domain?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See weaknesses.
In general, I think the paper tackles an important research question in XAI and proposes a valuable framework to address the issue. However, the concerns mentioned in the "weaknesses" section, such as missing or unclear information, may lead to confusion and make it difficult for readers to fully understand the paper.
If these related questions can be well answered, I may consider raising my rating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer wVxe
We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:
* Emphasize the theoretical robustness guarantees that we derived in the appendix.
* Clarify the various symmetry groups appearing in the experiments.
* Stretch the applicability of our framework even further with NLP applications.
We believe that all of these points make a great addition to the manuscript. We hope that they will also address any residual doubt the reviewer had about the paper. If that is not the case, we are happy to engage during the discussion phase.
## Theoretical robustness guarantees
As mentioned by the reviewer, theoretical results are indeed detailed in *Appendix D*. This is because a rigorous treatment of these theoretical results requires further definitions and lemmas, which did not fit in the page constraint. We will now give a summary of these results with the right level of mathematical precision. All the details that are not covered here are available in *Appendix D*.
When it comes to feature importance methods, there are mainly two assumptions that are necessary to guarantee $\mathcal{G}$-equivariance. **(1)** The first assumption restricts the type of baseline input $\bar{x} \in \mathcal{X}(\Omega, \mathcal{C})$ on which the feature importance methods rely. Typically, these baseline signals are used to replace ablated features from the original signal $x \in \mathcal{X}(\Omega, \mathcal{C})$ (i.e. remove a feature $x_i$ by replacing it by $\bar{x}_i$). In order to guarantee equivariance, we require this baseline signal to be invariant to the action of each symmetry $g \in \mathcal{G}$: $\rho[g] \bar{x} = \bar{x}$. **(2)** The second assumption restricts the type of representation $\rho$ that can be used to describe the action of the symmetry group $\mathcal{G}$ on the signals $\mathcal{X}(\Omega, \mathcal{C})$. In order to guarantee equivariance, we require this representation to be a *permutation representation*, which means that the action of each symmetry $g \in \mathcal{G}$ is represented by a permutation matrix $\rho[g]$ acting on the signal space $\mathcal{X}(\Omega, \mathcal{C})$.
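As a self-contained toy illustration of how these two assumptions yield equivariance (a sketch with simplified stand-ins, not the exact methods analyzed in Appendix D): an occlusion-style importance score with an all-zeros baseline (which is invariant under any permutation) commutes with a permutation acting on the features of a permutation-invariant model.

```python
def occlusion(f, x, baseline):
    """Feature importance: drop in model output when feature i is ablated,
    i.e. replaced by the corresponding baseline value."""
    out = f(x)
    return [out - f(x[:i] + [baseline[i]] + x[i + 1:]) for i in range(len(x))]

f = lambda v: sum(u * u for u in v)   # invariant to any permutation of features
x = [3.0, -1.0, 2.0]
zeros = [0.0] * 3                     # permutation-invariant baseline

perm = lambda v: [v[2], v[0], v[1]]   # a permutation matrix rho[g] acting on x
lhs = occlusion(f, perm(x), zeros)    # e(rho[g] x)
rhs = perm(occlusion(f, x, zeros))    # rho[g] e(x)
print(lhs == rhs)  # -> True: the explanation commutes with the symmetry
```

If the baseline were not invariant (e.g. a fixed non-constant vector), the ablation performed on the permuted input would differ from the permuted ablation, and the equality above would generally fail.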
When it comes to example importance methods, the assumptions depend on how the importance scores are obtained. If the importance scores are computed from the model's loss $\mathcal{L}$, then the $\mathcal{G}$-invariance of the explanation immediately follows from the model's invariance. If the importance scores are computed from the model's internal representations $h: \mathcal{X}(\Omega, \mathcal{C}) \rightarrow \mathbb{R}^{d_{\mathrm{rep}}}$, then the invariance of the explanation can only be guaranteed if the representation map $h$ is itself invariant to the action of each symmetry: $h(\rho[g]x) = h(x)$.
Finally, concept-based explanations are also computed from the model's representations $h$. Again, $\mathcal{G}$-invariance of the explanations can only be guaranteed if the representation map $h$ is itself $\mathcal{G}$-invariant.
## Dihedral group
Please refer to the section *Clear explanations for symmetry groups* from the global rebuttal for a definition of the dihedral symmetry group.
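For concreteness, the dihedral group $\mathbb{D}_4$ consists of the eight symmetries of the square: the four rotations by multiples of 90 degrees, each composed or not with a reflection. The following sketch (illustrative code, not taken from our implementation) enumerates its action on a small image grid.

```python
def rotate(m):
    """Rotate a grid 90 degrees clockwise."""
    return tuple(zip(*m[::-1]))

def flip(m):
    """Mirror a grid left-right."""
    return tuple(row[::-1] for row in m)

def dihedral_group(m):
    """The 8 elements of D4 applied to a grid: four rotations,
    each with and without a horizontal flip."""
    out, r = [], tuple(tuple(row) for row in m)
    for _ in range(4):
        out.append(r)
        out.append(flip(r))
        r = rotate(r)
    return out

img = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
views = dihedral_group(img)
print(len(views), len(set(views)))  # -> 8 8: all copies distinct for this grid
```

Our invariance/equivariance scores average the similarity between explanations over exactly these eight transformed copies of each image.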
## NLP application
Please refer to the same section in the global rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response.
I have decided to maintain my initial rating of the paper. My primary reservation remains, "The paper appears to cover many points, leading to the omission of important details.", a concern echoed by both Reviewer 7REw and Reviewer LQbZ. The absence of these key details makes the paper difficult for readers to fully grasp, even though some further details are provided in the appendices.
I would recommend the authors reconsider the paper's structure in the next version, whether for NeurIPS or elsewhere.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response.
While we respect the reviewer's decision, we note that the reviewer was willing to update their rating if *related questions can be well answered* in their original review. We believe that our rebuttal provides thorough answers to the reviewer's point about omissions, all of which have been used to update the manuscript. This includes:
* A detailed explanation on the theoretical results from the appendices in Section *Theoretical robustness guarantees* of our rebuttal. This discussion has been added to *Section 2.2* of the manuscript.
* A clarification on the symmetries used in our experiment in Section *Clear explanations for symmetry groups*, hence addressing the concerns both Reviewer 7REw and Reviewer LQbZ had about clarity. These details have been added to *Section 3* of the manuscript.
* An extension of our formalism to NLP applications, verified theoretically and empirically, in Section *NLP applications* of our rebuttal. These additional results have been added to a new appendix.
While all of these changes have been integrated to the manuscript, it is unfortunately impossible for us to upload the updated version on OpenReview. For this reason, we would like to ask the reviewer some additional questions.
Was the reviewer satisfied by the answers we provided in our rebuttal? If that is the case, what changed the reviewer's mind on updating their rating? If that is not the case, which specific points should we clarify?
We would be very happy to take advantage of the discussion phase to address any residual concern the reviewer might have. | Summary: This paper study the robustness of several post-hoc interpretability methods against the transformation of input data. The robustness is measured by invariance and equivariance metrics. Theoretical robustness guarantees and a systematic approach to increase the invariance are derived. Finally, the authors conduct extensive experiments to validate the theoretical analysis of robustness guarantees, using the proposed evaluation metrics. 5 actionable guidelines are derived to improve the robustness.
Strengths: 1. The research problem studied in this paper is interesting and novel. Robustness of interpretability against more general input transformation is few studied before.
2. This paper extends the robustness evaluation to other interpretability methods, like example importance and concept-based explanations. These are missing from the current literature.
3. Based on the evaluation, the theoretical robustness guarantees are derived for the popular interpretability methods.
4. The experiment is comprehensive and insightful that 5 practical guidelines are derived for the robustness improvement.
Weaknesses: 1. The details about the symmetries are vague or even missing. Which kinds of input transformations are considered: rotation, crop, or translation?
2. Since the invariance metric conflicts with the equivariance metric, it may be confusing for users to decide which metric is more suitable for given symmetries. For example, if small perturbations are added to the input data, invariance of the feature explanation is expected.
3. The simple Monte Carlo Sampling is inefficient for the evaluation of two robustness metrics, especially for the rare events.
4. The ImageNet dataset should be considered for the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please provide more details of the symmetry group. How does the change of symmetry group affect the robustness evaluation results?
2. What challenges arise when applying the proposed evaluation method to real-world datasets and interpreters?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer LQbZ
We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:
* Discuss how to choose between equivariance and equivariance when measuring robustness.
* Emphasize our evaluation of the quality of Monte Carlo estimators.
* Extend our evaluation to a subset of ImageNet images.
We believe that all of these points make a great addition to the manuscript. We hope that they will also address any residual doubt the reviewer had about the paper. If that is not the case, we are happy to engage during the discussion phase.
## Details about symmetries
Please refer to the section *Clear explanations for symmetry groups* of the global rebuttal for details on each symmetry group.
By looking at *Figures 2.a,b,c* from the manuscript, we can observe the effect of changing the symmetry group on the robustness evaluation results. Indeed, different datasets are associated with different symmetry groups. We observe small oscillations in the robustness metrics from one dataset to the other. However, our observations apply to all symmetry groups.
## Invariance vs equivariance
Choosing which metric to record, between invariance and equivariance, is context dependent and requires domain knowledge. A good rule of thumb is the following: whenever the explanation space $\mathcal{E}$ is equal to the input space $\mathcal{X}(\Omega, \mathcal{C})$, we choose the equivariance metric. This is naturally the case for feature importance methods, but also for counterfactual explanations. On the other hand, whenever the explanation space $\mathcal{E}$ is a generic vector space without a signal interpretation (i.e. the vectors of $\mathcal{E}$ cannot be interpreted as signals mapping a domain $\Omega$ to a channel space $\mathcal{C}$), we choose invariance. This is the case for concept-based and example-based explanations.
Let us now discuss the small additive perturbations mentioned by the reviewer. Adding a small perturbation to the input data is not a symmetry corresponding to a linear action $\rho$ of a symmetry group $\mathcal{G}$. For this reason, the equivariance/invariance characterization deployed in our work is not the suitable framework to describe this type of transformation. We note that previous works have studied the robustness of explanations with respect to these small perturbation of the input data (see e.g. the sensitivity metric described in *Appendix G*).
The geometric robustness studied in our work has to be considered as orthogonal and complementary with respect to this perturbation robustness. In particular, a quantitative study in *Appendix G* shows that our equivariance metric has only weak correlations with respect to the sensitivity metric. We conclude that the two forms of robustness can exist independently in practice. Any feature importance explanation that is faithful to the model should be robust with respect to these two criteria (i.e. low sensitivity to small additive perturbations, high equivariance to symmetries).
## Monte Carlo sampling
We would like to emphasize that the convergence of Monte Carlo estimators has been studied thoroughly in *Appendix E*. In particular, we show that our estimators have converged with the sample sizes considered in the experiments from *Section 3*.
Furthermore, we are not entirely sure what the reviewer refers to when they mention *rare events*. We believe that the reviewer might refer to sampling from the symmetry group $\mathcal{G}$. If this is indeed the case, this sampling process does not admit rare events by definition. Indeed, as explained in *Section 2.2*, the Monte Carlo estimators are built by sampling *uniformly* elements from the symmetry group $g \sim U(\mathcal{G})$.
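To make the estimators concrete, here is a toy sketch of the Monte Carlo scores (with cosine similarity and the cyclic shift group standing in for the similarity measure and symmetry group; both are illustrative choices, and the function names are our own): each score is a simple average over group elements drawn uniformly.

```python
import math
import random

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def shift(v, g):
    """Linear action of the cyclic group: rotate the vector by g positions."""
    return v[-g:] + v[:-g] if g else list(v)

def inv_score(e, x, n_mc, rng):
    """Monte Carlo estimate of Inv[e, x]: compare e(g.x) to e(x)."""
    base = e(x)
    return sum(cos_sim(e(shift(x, rng.randrange(len(x)))), base)
               for _ in range(n_mc)) / n_mc

def equiv_score(e, x, n_mc, rng):
    """Monte Carlo estimate of Equiv[e, x]: compare e(g.x) to g.e(x)."""
    base = e(x)
    total = 0.0
    for _ in range(n_mc):
        g = rng.randrange(len(x))
        total += cos_sim(e(shift(x, g)), shift(base, g))
    return total / n_mc

x = [0.5, -1.0, 2.0, 0.25]
pointwise = lambda v: [u * u for u in v]   # commutes with shifts -> equivariant
sorted_exp = lambda v: sorted(v)           # ignores shifts entirely -> invariant
print(round(equiv_score(pointwise, x, 100, random.Random(0)), 6))  # -> 1.0
print(round(inv_score(sorted_exp, x, 100, random.Random(0)), 6))   # -> 1.0
```

Since the group elements are sampled uniformly, there are no rare events in this sampling process: each element contributes with equal probability, and the usual Monte Carlo convergence rates apply.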
We hope that the above discussion brings more clarity to the Monte Carlo sampling. That said, it is perfectly possible that we misunderstood the reviewer's remark. In this case, we would be happy to extend the discussion on this subject.
## ImageNet experiment
Given the limited amount of time, retraining a model on ImageNet is infeasible. As a compromise, we have reproduced the experiments from *Section 3.1* with the CINIC-10 dataset.
The CINIC-10 dataset contains ImageNet images that have the same classes as CIFAR-10 classes (and, hence, the same classes as the STL-10 dataset). In this way, we are able to reuse the ResNet trained on STL-10 for evaluation on the CINIC-10 dataset. The only requirement is to resize the CINIC-10 images so that they match the resolution of the STL-10 images ($96 \times 96$ pixels). We first evaluate the equivariance score $\mathrm{Equiv}\_{\mathbb{D}_4}$ with respect to the dihedral group $\mathbb{D}_4$ for a few feature importance methods and find the following averages over the test set $\mathcal{D}\_{\mathrm{CINIC10}}$:
| Method $e$ | $\mathbb{E}\_{x \sim \mathcal{D}\_{\mathrm{CINIC10}}}\mathrm{Equiv}\_{\mathbb{D}_4}[e, x]$ |
|---|---|
| Integrated Gradients | .87 |
| DeepLift | .85 |
| Gradient-Shap | .81 |
This is consistent with the results reported in the main paper for the STL-10 dataset (*Figure 2.a*). We perform a similar analysis for example importance methods:
| Method $e$ | $\mathbb{E}\_{x \sim \mathcal{D}\_{\mathrm{CINIC10}}}\mathrm{Inv}\_{\mathbb{D}_4}[e, x]$ |
|---|---|
| TracIN | .99 |
| Influence Functions | .99 |
| SimplEx Equiv | .83 |
| SimplEx Inv | .87 |
| Representation Similarity Equiv | .64 |
| Representation Similarity Inv | .98 |
Again, this is consistent with the results reported for the STL-10 dataset (*Figure 2.b*).
## Challenges in applying the evaluation
We did not encounter any significant challenge in the evaluation of our metrics with real-world datasets and interpreters. This is because the computation of our metrics is easy to implement once the interpreters and models are accessible.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for the rebuttal. The authors' responses address most of my concerns.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their feedback. We are delighted that our rebuttal addressed the reviewer's concerns. | Summary: This paper proposes the definition of the robustness of explanations with respect to the model symmetry group. For models invariant to some symmetry group, the explanation should also be invariant or equivariant to it. The paper derives two metrics to measure the invariance and equivalence of explanations and analyzes the robustness requirement for three explanation methods (e.g., feature importance needs to be equivariant, and example importance and concept-based explanations need to be invariant). It also theoretically analyzes the guarantees of different methods in robustness. In the end, the paper proposes to improve the robustness by aggregating the explanations over several symmetries. Experiments show that different explanation methods have different invariance/equivariance properties.
Strengths: 1. The paper is very well-written and easy to follow.
2. The relevant literature is summarized comprehensively. The considered problem, evaluating the robustness of explanation from a geometry perspective, is not well-studied before.
3. The concepts of explanation invariance and equivariance are novel. The evaluation metrics are sound.
4. Multiple explanation methods are evaluated in the experiments.
Weaknesses: 1. The biggest weakness is the limited scenarios that can use the proposed evaluation methods. The paper only considers models that are perfectly invariant. Please see Limitations for details.
2. The requirement of the model's invariance is not well defined. Under some group transformations, while the hard-label prediction of the model (i.e. after argmax) is unchanged, the soft-label prediction (i.e. after softmax but before argmax) might be changed. The model is invariant from the first perspective. However, from the second perspective, the model is non-invariant and the explanation should also be non-invariant/non-equivariant. I guess that is why the saliency map is not equivariant while the model is considered invariant in Appendix I.
3. The symmetry groups considered in experiments are limited.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. The paper only considers the post-hoc interpretability. Is it possible to extend the concept of explanation invariance/equivariance to other methods? For example, the attention should be equivariant under input translation?
3. When the explanation space is not identical to the input space, what is group representation $\rho'$?
4. Is there any trade-off between the robustness and the utility of the explanation?
Minor:
1. The colors in Figure 2 are hard to distinguish.
2. What is "Dihedral Group" in Table 2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The proposed definitions and evaluations heavily rely on the assumption that the model is perfectly invariant. I appreciate the experiments and discussions in sec 3.3 which suggest that there's no linear relationship between model invariance and explanation invariance/equivariance. Therefore, models not designed but trained for invariance cannot use the proposed methods. However, this assumption limits the model architectures and group transformations considered and thus limits the application of the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer u2n8
We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:
* Extend the applicability of our framework to models that are not perfectly invariant and to NLP applications.
* Discuss how our formalism could be applied beyond post-hoc interpretability.
* Illustrate how to choose appropriate representations when input and explanation spaces don't match.
* Discuss trade-offs between robustness and usefulness of explanations.
We believe that all of these points make a great addition to the manuscript. We hope that they will also address any residual doubt the reviewer had about the paper. If that is not the case, we are happy to engage during the discussion phase.
## Limited use cases
We address the reviewer's main concern on the limitations of our work in the global rebuttal in two ways.
1. In section *Beyond exact invariance and equivariance*, we explain how to use our metrics for models that are not perfectly invariant.
2. In section *NLP applications*, we show theoretically and empirically how our framework can be used with text data.
## Model invariance requirement
All the models manipulated in *Sections 3.1* and *3.2* are perfectly invariant to their respective symmetry group, not only in the first (hard-label) perspective described by the reviewer but also at the level of the logits. This is guaranteed theoretically by their architecture (e.g. purely convolutional CNN classifiers are translation-invariant, GNN classifiers are permutation-invariant).
Furthermore, we have verified empirically the invariance of each classifier $f$ by measuring the invariance score $\mathrm{Inv}\_{\mathcal{G}}(f, x) = \frac{1}{|\mathcal{G}|} \sum\_{g \in \mathcal{G}} \cos[f(x), f(\rho[g]x)]$ defined in *Appendix F* of our paper. Note that $f(x)$ denotes the logits predicted by the classifier $f$ and *not* the hard-label prediction of $f$ (i.e. the $\mathrm{argmax}$ of $f(x)$). For all models in *Sections 3.1* and *3.2*, we have observed that $\mathrm{Inv}\_{\mathcal{G}}(f, x) = 1$ for all $x$ in the test set.
We conclude that the differences between the original and transformed saliency maps in e.g. *Figure 8* of *Appendix I* can only be explained by the lack of robustness of the underlying feature importance method (GradientShap in this case).
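For concreteness, this empirical check can be sketched in a few lines (a toy translation-invariant model stands in for the actual classifiers; the names and data below are illustrative, not from the paper):

```python
import numpy as np

def invariance_score(f, x, group_actions):
    """Inv_G(f, x): average cosine similarity between f(x) and f(g.x) over the group."""
    fx = f(x)
    cosines = []
    for rho_g in group_actions:
        fgx = f(rho_g(x))
        cosines.append(np.dot(fx, fgx) / (np.linalg.norm(fx) * np.linalg.norm(fgx)))
    return float(np.mean(cosines))

# Toy "classifier": global pooling statistics, invariant to cyclic shifts by construction.
def f(x):
    return np.array([x.mean(), (x ** 2).mean()])

x = np.random.default_rng(0).normal(size=32)
shifts = [lambda x, s=s: np.roll(x, s) for s in range(32)]  # 1D translation group
score = invariance_score(f, x, shifts)  # ~ 1.0 for a perfectly invariant model
```

For a model that is not invariant, the score drops below 1, which is the quantity reported on the x-axis of *Figure 4*.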
## Limited symmetry groups
Our paper includes a total of 4 symmetry groups acting on 6 different datasets. Those are summarized in the section *Clear explanations for symmetry groups* of the global rebuttal.
We believe that this is a reasonable number of groups to demonstrate the wide applicability of our framework. That said, if the reviewer has noticed an important symmetry group that is missing in our analysis, we will happily include it in the manuscript.
## Extension beyond post-hoc interpretability
It is indeed possible to use our framework to characterize the invariance/equivariance of methods beyond post-hoc interpretability.
In particular, everything that is stated in our theoretical formulation in *Section 2.2* remains true if we set the explanation method $e$ to be the attention scores outputted by an attention head $e(x) = \mathrm{softmax}(x^\intercal \cdot W_E^\intercal W_Q^\intercal W_K W_E \cdot x)$, where $W_E, W_Q, W_K$ are respectively the token embedding, query and key matrices. Note that one should *not* expect the attention scores to be permutation-equivariant if the embedding encodes positional information.
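As a small numerical illustration (the dimensions and random weights below are ours, not from the paper), one can check that these attention scores are permutation-equivariant when no positional information is added to the embedding:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
T, V, d = 6, 10, 8                         # sequence length, vocab size, embedding dim
W_E = rng.normal(size=(d, V))              # token embedding matrix
W_Q = rng.normal(size=(d, d))              # query matrix
W_K = rng.normal(size=(d, d))              # key matrix

def attention_scores(tokens):
    X = W_E[:, tokens]                     # (d, T): embedded sequence, no positional info
    return softmax(X.T @ W_Q.T @ W_K @ X)  # (T, T) attention scores

tokens = rng.integers(0, V, size=T)
perm = rng.permutation(T)

# Permuting the input tokens permutes both axes of the attention matrix:
A = attention_scores(tokens)
assert np.allclose(attention_scores(tokens[perm]), A[np.ix_(perm, perm)])
```

Adding a positional encoding to `X` before the bilinear form breaks this equality, consistent with the remark above.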
## Representation for non-identical input spaces
When the explanation space $\mathcal{E}$ is different from the input space $\mathcal{X}(\Omega, \mathcal{C})$, there is no general strategy to pick a group representation $\rho'$ acting on $\mathcal{E}$. This group representation should therefore be selected on a case-by-case basis by leveraging domain knowledge.
Let us give a concrete example of such a scenario to illustrate a typical representation selection process. Let us consider RGB images, where the input space is $\mathcal{X}(\Omega, \mathcal{C})$ with a grid domain $\Omega = \mathbb{Z}_W \times \mathbb{Z}_H$ and the channel space is $\mathcal{C} = \mathbb{R}^3$. We consider feature importance explanations that output a segmentation mask, hence corresponding to an explanation space $\mathcal{E} = \mathcal{X}'(\Omega, \mathcal{C}')$ with $\mathcal{C}' = \mathbb{R}$. As we can see, the input and explanation spaces are distinct in this case since $\mathcal{C} \neq \mathcal{C}'$.
Let us consider the translation group as the symmetry group $\mathcal{G} = \mathcal{G}\_{\mathrm{transl}}$. Typically, the translation group acts independently on each of the 3 channels of the input image. Therefore, the original group representation $\rho$ acting on $\mathcal{X}(\Omega, \mathcal{C})$ contains 3 copies of a representation $\rho\_{\mathrm{transl}}$, each acting on a single channel. This is formally written as $\rho = \rho\_{\mathrm{transl}} \oplus \rho\_{\mathrm{transl}} \oplus \rho\_{\mathrm{transl}}$, where $\oplus$ denotes the direct sum of representations.
Now let us explain how to choose an appropriate representation $\rho'$ for the explanation space $\mathcal{E}$. As explained above, the segmentation mask contains only one channel since $\dim(\mathcal{C}') = 1$. In this case, it is therefore natural to directly use the representation $\rho\_{\mathrm{transl}}$ acting on a single channel. Hence, we simply set $\rho' = \rho\_{\mathrm{transl}}$.
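A minimal numerical sketch of these two representations (cyclic shifts via `np.roll`; the channel-averaging "explanation" is a toy stand-in for a real feature importance method, not from the paper):

```python
import numpy as np

# rho = rho_transl (+) rho_transl (+) rho_transl: the same translation on all 3 channels.
def rho(image, shift):        # image: (H, W, 3)
    return np.roll(image, shift, axis=(0, 1))

# rho' = rho_transl: the same translation acting on the single-channel mask.
def rho_prime(mask, shift):   # mask: (H, W)
    return np.roll(mask, shift, axis=(0, 1))

# Toy "explanation" mapping an RGB image to a one-channel mask (C' = R).
e = lambda img: img.mean(axis=-1)

x = np.random.default_rng(0).random((8, 8, 3))
shift = (2, 3)
# Equivariance with respect to (rho, rho'): e(rho[g] x) = rho'[g] e(x).
assert np.allclose(e(rho(x, shift)), rho_prime(e(x), shift))
```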
## Trade-off between robustness and utility of the explanation
Rather than a trade-off, we believe that robustness is a necessary condition for the utility of an explanation. When using an invariant model, any explanation that is not invariant/equivariant should be treated with skepticism, as it does not faithfully capture the model symmetries. Believing such explanations could lead to the conclusion that the model is affected by applying the symmetry to the image, which is incorrect. | Summary: The core contribution of this work is the direction of measuring explanation robustness through a more broader set of data perturbations/transformations for different data modalities which have not been discussed in literature thus far. For instance shift transformations in images, cyclic translations in time series, etc. The authors define concepts of explanation invariance
and equivariance and expose their theoretical properties. They conduct experiments to measure these types of invariances for 3 different types of explanations and several datasets and also present guidelines on which invariances to measure in which use cases.
Strengths: Overall, making explanations more trustworthy is an important area to investigate. Measuring robustness of explainability methods is valuable for their use in practical deployed systems.
- Authors define several types of perturbations for different data modalities. Authors define the concepts of invariance and equivariance for different data modalities and show their theoretical properties.
- Authors have presented a broad collection of experiments with several data modalities and explainability types to show that many existing explainability methods are not robust as measured via invariance & equivariance.
Weaknesses: There seem to be links between model robustness and explanation robustness which are not explored in the draft.
For example, if the model is not robust to certain types of suggested data perturbations (when we might expect it to), then would it not make sense to measure if the explanations are not in fact the same? Because the same top reason/explanation why a data sample was classified in class A cannot also be the reason why it's not classified in class A.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Is there any reason why text models were not evaluated? It could make sense in tasks for e.g. summarization, etc. if certain facts are shifted from the beginning to the end of a paragraph and if the importance scores on words still come up right.
- In your experiments are you assuming that the models you pick are already robust to the type of perturbations you add to the data to measure explanation robustness(invariance/equivarance)? Or is explanation invariance being measured irrespective of whether the model is robust or not.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer NZiw
We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:
* Show that our metrics can also be used to characterize explanations of models that are not invariant to a group of symmetry.
* Extend the applicability of our framework even further to NLP applications.
* Clarify the robustness assumptions that are made with the models appearing in our experiments.
We believe that all of these points make a great addition to the manuscript. We hope that they will also address any residual doubt the reviewer had about the paper. If that is not the case, we are happy to engage during the discussion phase.
## (Anti)-robustness for non-invariant models
We thank the reviewer for raising this point. It is indeed true that if a model prediction changes substantially when a transformation is applied to the input, one should expect the same for an explanation of this model. While this analysis is not explicitly mentioned in the paper, it can be deduced from *Figure 4*.
By looking at the Standard CNNs (*green icons* in *Figure 4*), we notice that these models have low invariance for both the Electrocardiogram and FashionMNIST datasets. This implies that the model predictions change substantially when we apply a translation to their input data. Therefore, it is legitimate to expect the same for their explanation.
This is indeed what we observe for all the feature importance methods (*left column* of *Figure 4*), whose equivariance is also close to zero.
Some of the example and concept-based methods, on the other hand, keep a high invariance even if the model is not invariant (see *central* and *right columns* of *Figure 4*). This is the case for TracIN, Representation Similarity and CAR, for instance. This implies that the explanations from these methods don't change substantially when we apply a translation to the input data. This is in contradiction with the fact that the model prediction does change substantially when we apply this translation. Therefore, in spite of the apparent robustness of these methods when used with an invariant model, we observe that these methods fail to track the model's behavior when invariance is destroyed.
## NLP applications
Please refer to the same section in the global rebuttal.
## Robustness assumptions
All the models manipulated in *Sections 3.1* and *3.2* are perfectly invariant to their respective symmetry group. This is guaranteed theoretically by their architecture (e.g. purely convolutional CNN classifiers are translation-invariant, GNN classifiers are permutation-invariant).
Furthermore, we have verified empirically the invariance of each classifier $f$ by measuring the invariance score $\mathrm{Inv}\_{\mathcal{G}}(f, x) = \frac{1}{|\mathcal{G}|}\sum\_{g \in \mathcal{G}} \cos[f(x), f(\rho[g]x)]$ defined in *Appendix F* of our paper. Note that $f(x)$ denotes the logits predicted by the classifier $f$ and *not* the hard-label prediction of $f$ (i.e. the $\mathrm{argmax}$ of $f(x)$). For all models in *Sections 3.1* and *3.2*, we have observed that $\mathrm{Inv}\_{\mathcal{G}}(f, x) = 1$ for all $x$ in the test set.
We note that the only non-invariant models manipulated in our paper are described in *Section 3.3*. In each case, the average invariance $\mathbb{E}\_{x \sim \mathcal{D}\_{\mathrm{test}}}\mathrm{Inv}\_{\mathcal{G}}(f, x)$ is reported on the x-axis of *Figure 4*. | Rebuttal 1:
Rebuttal: # Global Rebuttal
The following addresses remarks that are common to most reviewers.
## Clear explanations for symmetry groups
In *Table 3* from *Appendix F*, we have included all the details necessary to understand the action of each symmetry group. Following the reviewer's suggestion, we will add the more intuitive table below to the manuscript:
| **Dataset** | **Symmetry group** | **Acting on** | **Description** |
|---|---|---|---|
| Electrocardiograms | 1D Translations | Time Series | Each translation shifts the signal in time.|
| Mutagenicity | Permutation | Graphs | Each permutation changes the ordering of nodes in the graph's feature matrix and adjacency matrix. |
| ModelNet40 | Permutation | 3D Point Cloud | Each permutation exchanges the positions of several points in the cloud. Note that the cloud itself remains the same. |
| FashionMNIST | 2D Translations | Images | Each translation shifts the image content (i.e. the fashion item) horizontally and vertically. Thanks to the padding, the image content never touches the edges of the image. |
| CIFAR100 | Dihedral | Images | Each element of the dihedral group either rotates the image or takes a reflection of the image. The dihedral group $\mathbb{D}_8$ includes rotations by angles 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°. It also includes reflections with respect to axes tilted by all these angles from the horizontal. |
| STL10 | Dihedral | Images | Same as above. |
In the case of images, we have included visualizations of real transformed images and explanations in *Figures 8* and *9* in *Appendix I*. We will move some of those figures to the main paper in order to facilitate the visualization of these groups (especially the dihedral group).
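The image and graph actions in the table can be sketched numerically as follows (sizes and data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D translation acting on an image (FashionMNIST row): shift the content cyclically.
image = rng.random((28, 28))
shifted = np.roll(image, (3, -5), axis=(0, 1))    # down by 3 pixels, left by 5

# Permutation acting on a graph (Mutagenicity row): reorder the nodes in both
# the node feature matrix X and the adjacency matrix A.
n = 5
X = rng.random((n, 4))                            # node features
A = (rng.random((n, n)) < 0.4).astype(int)
A = np.triu(A, 1) + np.triu(A, 1).T               # symmetric adjacency, no self-loops
perm = rng.permutation(n)
P = np.eye(n, dtype=int)[perm]                    # permutation matrix
X_perm, A_perm = P @ X, P @ A @ P.T               # the relabelled (but identical) graph

# The underlying objects are unchanged: the inverse shift recovers the image,
# and the degree sequence of the graph is preserved by the permutation.
assert np.allclose(np.roll(shifted, (-3, 5), axis=(0, 1)), image)
assert np.array_equal(np.sort(A_perm.sum(axis=0)), np.sort(A.sum(axis=0)))
```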
## Beyond exact invariance and equivariance
We would like to emphasize that our robustness metrics can still be used to characterize the equivariance/invariance of explanations *even* if the model is not perfectly invariant. The metrics would still record the same information, even if the invariance of the model is relaxed.
The main thing to keep in mind when using these metrics with models that are not perfectly invariant is that the explanations themselves should not be expected to be exactly invariant/equivariant. That said, for a model that is approximately invariant, we expect a faithful explanation to keep a high invariance/equivariance score. This is precisely the object of *Section 3.3*.
In *Section 3.3*, we analyze the equivariance / invariance of several interpretability methods on models that are approximately invariant (two approximately $\mathbb{D}_8$-invariant ResNets for the STL10 and CIFAR100 datasets as well as two Augmented CNNs for the Electrocardiograms and FashionMNIST datasets). *Figure 4* shows that some explainability methods fail to keep a high invariance/equivariance when the invariance of the model is slightly relaxed (e.g. DeepLift, CAV). Further, we notice that no feature importance method manages to keep a high equivariance when the model's invariance is slightly relaxed. This observation could be the seed of future developments in feature importance methods.
## NLP applications
Our invariance and equivariance metrics can naturally be used with text models with symmetries. To demonstrate this, we consider a bag-of-word classifier.
Let us first show that bag-of-words classifiers are invariant to the permutation group. A bag-of-words classifier $f$ receives a sequence $x = (x_t)\_{t=1}^T$ of tokens and outputs class probabilities $f(x) \in \Delta^C$, where $C$ is the number of classes and $\Delta^C$ is the $C$-probability simplex. By definition, the bag-of-words classifier can be written as a function $f(x) = g \left( \sum\_{t=1}^T \mathrm{onehot}(x_t) \right)$. In this form, the invariance of the classifier with respect to token permutations is manifest. Let $\pi \in S_T$ be a permutation of the token indices. Applying this permutation to the token sequence does not change the classifier's output: $f(x\_{\pi}) = g\left( \sum\_{t=1}^T \mathrm{onehot}(x\_{\pi(t)}) \right) = g \left( \sum\_{t=1}^T \mathrm{onehot}(x_t) \right) = f(x)$. We conclude that bag-of-words classifiers are $S_T$-invariant.
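This invariance is easy to verify numerically with a toy bag-of-words MLP (random weights, purely illustrative; not the model trained in the experiment below):

```python
import numpy as np

rng = np.random.default_rng(0)
V, T, C, H = 1000, 128, 2, 16            # vocab size, sequence length, classes, hidden dim
W1, b1 = rng.normal(size=(H, V)), rng.normal(size=H)
W2, b2 = rng.normal(size=(C, H)), rng.normal(size=C)

def bow_classifier(tokens):
    """f(x) = g(sum_t onehot(x_t)) with g a small MLP: manifestly S_T-invariant."""
    counts = np.bincount(tokens, minlength=V).astype(float)  # sum of one-hot vectors
    hidden = np.maximum(0.0, W1 @ counts + b1)               # ReLU layer
    return W2 @ hidden + b2                                  # logits

tokens = rng.integers(0, V, size=T)
perm = rng.permutation(T)
# f(x_pi) = f(x): the output is unchanged by any permutation of the tokens.
assert np.allclose(bow_classifier(tokens), bow_classifier(tokens[perm]))
```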
Let us now show empirically that our framework extends to these models with the IMDB movie reviews dataset. This dataset contains 50k text movie reviews. Each review comes with a binary label $y \in \{0,1\}$, hence $C=2$. We represent each review as a sequence of tokens $x=(x_t)\_{t=1}^T$, where we cap the sequence length to $T=128$ and set the vocabulary size to $V = 1,000$. We perform a train-validation-test split of this dataset randomly (90%-5%-5%) and fit a 2-layer bag-of-words MLP on the training dataset for 20 epochs with Adam and a cosine annealing learning rate schedule. The best model (according to validation accuracy) achieves a reasonable 86% accuracy on the test set $\mathcal{D}\_{\mathrm{test}}$.
We use the test set to verify that the resulting model $f$ is *perfectly* invariant to token permutations. We then evaluate the equivariance score $\mathrm{Equiv}$ for a few feature importance methods and find the following averages over the test set:
| Method $e$ | $\mathbb{E}\_{x \sim \mathcal{D}\_{\mathrm{test}}}\mathrm{Equiv}\_{S_T}[e, x]$ |
|---|---|
| Integrated Gradients | 1.0 |
| DeepLift | 1.0 |
| Gradient-Shap | .85 |
This is perfectly consistent with the results reported for other models in the paper. We perform a similar analysis for example importance methods:
| Method $e$ | $\mathbb{E}\_{x \sim \mathcal{D}\_{\mathrm{test}}}\mathrm{Inv}\_{S_T}[e, x]$ |
|---|---|
| TracIN | 1.0 |
| Influence Functions | 1.0 |
| SimplEx | 1.0 |
| Representation Similarity | 1.0 |
We note that even representation-based methods are perfectly invariant in this case. This indeed makes sense, as the representations of bag-of-words models are themselves perfectly invariant (and not equivariant). | Summary: In this paper, the authors propose a set of desiderata for explanation methods for neural networks ranging from CNNs to GNNs. They postulate that any explanation that is able to faithfully explain the model should be in agreement with the invariance properties exhibited by the underlying model. They formalize this idea by introducing explanation invariance and equivariance with reference to specific symmetry groups. Using this formulation, they derive metrics that measure robustness of several interpretability methods and some theoretical guarantees. Their experiments verify explanation robustness for models trained on diverse modalities such as images, time series and tabular data. They provide guidelines for developers to develop robust model explanations using their metrics.
Strengths: Evaluating explanation approaches is an important area of research not only for evaluating existing approaches but also to aid practitioners in developing new explanations that are robust.
The paper is written very clearly albeit a bit dense to parse – this does not take away from the general reader's experience of the paper, but it could use a more example-driven way of introducing concepts (specific comments below)
The explanation methods and the models tested encompass several modalities. This is a real strength of the paper as explanation evaluations are usually limited to analysis of salience maps.
Weaknesses: 1. Dense nature of the writing: While I appreciate the authors’ efforts in introducing the concepts of geometric priors and group symmetry to the readers, the paper could be more clear in explaining what transformations are being considered in different symmetry groups. For instance, what transformations are contained in the D8 symmetry group for the CNN based examples? A table to this effect can help the readers understand what transformations are being evaluated.
2. Example results (positive and negative): The discussion of the results is rather pedantic; examples of why improving invariance and equivariance would make sense given an invariant model would help the reader understand the importance of the metrics. Without this context, it is hard to differentiate, say, an invariance score of 0.9 from one of 0.5.
3. Why only exact invariance and equivariance? I wonder if in its current form the metrics are too rigid. As the authors mention in the paper, most networks for real world problems are only approximately invariant or equivariant. In fact, characterizing such property is a hard problem in itself. In these cases, it’s hard to see how the current approach can help.
4. This approach is limited in that the only transformations that are considered are geometric in nature. While this is alluded to in the appendix, more real world examples of invariance are from non-trivial corruptions such as measurement error or signal degradation.
5. The paper presents no conclusion and more discussion + visualization of the results is needed, especially in cases where the metrics fail to capture the expected behavior.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the metrics be extended to approximately-invariant/equivariant networks? What are the pros and cons of such an extension?
2. Are invariance and equivariance the only properties that need to be considered for explanations from a geometric perspective?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to weaknesses above. There is not much potential for a negative societal impact - evaluating explanations would only serve to impact society positively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer 7REw
We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:
* Clarify the various symmetry groups appearing in the experiments.
* Better contextualize the raw values of our robustness metrics.
* Emphasize that our metrics can be used with models that are not perfectly invariant.
* Extend the applicability of our framework even further to NLP applications.
* Add a conclusion to the manuscript.
We believe that all of these points make a great addition to the manuscript. We hope that they will also address any residual doubt the reviewer had about the paper. If that is not the case, we are happy to engage during the discussion phase.
## Clear explanations for symmetry groups
Please refer to the same section in the global rebuttal.
## More context to the results
We agree with the reviewer that raw equivariance/invariance scores tell a limited part of the story. In order to gain a better intuition of the effect of increasing those scores, we propose to analyze the examples included in *Appendix I*.
By looking at *Figure 8* from *Appendix I*, we observe that the saliency maps between the original and transformed images highlight completely different pixels. For instance, let us look at the last row of *Figure 8*. The saliency map highlights the bottom of the shirt on the original image. As we can see, moving this shirt to the upper-left corner shifts the saliency map, which now focuses the attention on the left part of the shirt. In all the images from *Figure 8*, the equivariance score of the saliency map is particularly low (ranging from .06 to .21).
To contrast with *Figure 8*, let us now consider explanations with higher equivariance scores. In *Figure 9*, the equivariance scores for the saliency map are substantially higher (ranging from .66 to .72). By comparing the saliency maps for the original and transformed images, it is noticeably more difficult to spot their differences. We do see some differences (e.g. the saliency map of row 4 highlights the boat cockpit more after flipping the boat upside down), but those are more subtle than in *Figure 8*.
In conclusion, this qualitative analysis supports the fact that our robustness metrics are a good proxy to measure how the symmetry changes the explanation. In particular, increasing the equivariance/invariance metric makes it more difficult to distinguish between the explanations for the original and transformed inputs.
## Beyond exact invariance and equivariance
Please refer to the same section in the global rebuttal.
## Limited geometric nature of the transformations
While our work indeed focuses on a geometric characterization of interpretability robustness, we do not believe that this limitation is very restrictive. In fact, all the models that fit in the geometric deep learning framework can benefit from our robustness metrics. As suggested in *Appendix H* of our paper, the equivariance property extends well beyond the CNNs, GNNs and Deep Sets described in our experiments. We have collected some examples in the below table. For detailed references, please refer to *Appendix H* from our paper.
| **Architecture** | **Symmetry Group** |
|---|---|
| G-CNN | Any Finite Group |
| Transformer | Permutation S(n) |
| LSTM | Time Warping |
| Spherical CNN | Rotations SO(3) |
| Mesh CNN | Gauge Symmetry SO(2) |
| E(n)-GNN | Euclidean Group E(n) |
All of these examples demonstrate that group invariance/equivariance is a valuable inductive bias that enables state-of-the-art performance in various tasks. Furthermore, this property seems to be the main focus of the geometric deep learning literature. We do not rule out other geometric properties that might be interesting for interpretability; however, none has received as much attention as invariance/equivariance.
To show the wide applicability of our framework, we have conducted additional experiments with models processing text data. Please refer to the *NLP applications* part of the global rebuttal.
## No conclusion
We agree with the reviewer that a conclusion is missing from the main paper. We have summarized the main takeaways of our work in *Appendix A*, where we present all the guidelines that we have distilled from our experiments in a unified flowchart. In addition to this, we have added the following two conclusion paragraphs to the main paper.
Building on recent developments in geometric deep learning, we introduced two metrics (explanation invariance and equivariance) to assess the faithfulness of model explanations with respect to model symmetries. In our experiments, we considered a wide range of models whose predictions are invariant with respect to transformations of their input data. By analyzing feature importance, example importance and concept-based explanations of these models, we observed that many of these explanations are not invariant/equivariant to these transformations when they should be. This led us to establish a set of guidelines to help practitioners choose interpretability methods that are consistent with their model symmetries.
Beyond actionable insights, we believe that our work opens up interesting avenues for future research. An important one emerged by studying the equivariance of saliency maps with respect to models that are approximately invariant. This analysis showed that state-of-the-art saliency methods fail to keep a high equivariance score when the model's invariance is slightly relaxed. This important observation could be the seed of future developments of robust feature importance methods. | null | null | null | null | null | null |
Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing | Accept (poster) | Summary: This paper proposes a dynamic prompt learning approach for image editing that modifies self-attentions to more accurately attend to the correct nouns given a text prompt. The proposed approach is used along with null text inversion, where the dynamic tokens corresponding to the noun words are updated with a background leakage loss, a disjoint object attention loss and an attention balancing loss. Then p2p is used with the modified cross-attention masks to edit the image. The main benefit of this approach is in editing an image which includes two or more semantically related objects like cat and dog. The authors have collected a set of 60 multi-object images from LAION as a test set and a set of 60 images with semantically related objects for their human evaluations, and have shown improvements through qualitative and some quantitative evaluations.
Strengths: 1. This paper is well written and explains the method and related works clearly.
2. A novel method as dynamic prompt learning for image editing has been proposed. The paper is mainly focused on a specific axis of image editing where two or more semantically related objects are present in the image and cross attention maps for each object leak to other objects or the background. This approach modifies the attention masks and thus can be used along with p2p for image editing.
3. A test set containing images of multiple objects from LAION-5B is collected for quantitative and qualitative evaluations and a couple of examples are presented in the paper and supplemental showing the superiority of the proposed method.
Weaknesses: 1. The scope of the image editing problems this paper aims to improve is narrow, mainly useful in cases where there are two or more semantically related objects in the image and for word-swapping editing prompts like cat to dog, cat to tiger, etc.
2. Through only a couple of image editing examples shown in the paper, I am not convinced that this paper is beneficial in general image editing problems where there are not necessarily related objects in the scene or for other types of editing prompts rather than word swapping.
- It is unclear what types of editing prompts are included in the collected test set, but from the discussions, it looks like it mainly focuses on word swapping. Also, the size of the test set is fairly small, and the studies are all done within the narrowed-down scope of image editing described above, and are thus not conclusive.
- The quantitative CLIP scores are only reported in two scenarios: (1) cat → leopard and dog → tiger, and (2) book → box and clock → sunflower, and the improvements are still marginal.
- On the human evaluation study, I'd expect it to be done on other image editing types with single or multiple objects, either semantically related or unrelated, on a larger set, and to compare with other baselines such as InstructPix2Pix and Plug-and-Play.
3. On auto evaluations, one potential metric to measure the correctness/accuracy of this method could be: apply an object segmentation or object detection method to the edited image, predict a label for each edited object, and check whether the predicted label matches the given prompt. Compute the average accuracy score and compare it with the baselines.
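The reviewer's proposed metric could be sketched roughly as follows. This is an illustrative sketch only, not from the paper; `detect_labels` is a hypothetical stand-in for any off-the-shelf detector or segmentation model that returns the set of object labels found in an image.

```python
def edit_accuracy(edited_images, target_labels, detect_labels):
    """Fraction of edits whose detected labels include the prompt's target.

    edited_images: list of edited images (any representation the detector accepts)
    target_labels: list of target object names, one per edited image
    detect_labels: callable returning the set of labels detected in an image
    """
    hits = 0
    for image, target in zip(edited_images, target_labels):
        predicted = detect_labels(image)  # e.g. labels from a detector
        hits += int(target in predicted)
    return hits / len(target_labels)
```

With a detector that recognizes the swapped-in object in half of the edits, this would report an accuracy of 0.5, which could then be compared across baselines.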
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please describe your evaluation datasets and their editing prompts more clearly. Is the URS-Set similar to the MO-Set? What is the difference between these two sets? Since LAION-5B is pretty large, I'd suggest the authors expand their test set to include more varied image editing prompts and examples.
2. I wonder how much gain this method introduces on image editing problems with one object or two unrelated objects in the image (if any).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have neither discussed the limitations of their work, nor its negative societal impact. A broader study on different editing types as mentioned above could clarify the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will use the discussion to improve. Below we use the references in the main paper.
$\textbf{W1}:$ Actually, DPL also works for Word-Swap with less related concepts. As we have shown in Fig.7, Fig.13, and Fig.17, we successfully edit the bird into an airplane/helicopter, the ball/clock into a wheel/balloon, the chair into a book/window, a person into a cat, etc. As to the scope of text-guided image editing, P2P[14] is designed to solve prompt-based editing in the denoising phases of a pretrained T2I diffusion model. Afterwards, NTI[26] generalized this technique to real images. Experimenting with these codebases, we quickly found that they fail for more complex scenes with semantically related objects and background. We consider this one of the main shortcomings of T2I models, and addressing this relevant problem can be important for many realistic image editing applications.
$\textbf{W2.1}:$ It is true that DPL is most beneficial when there are leakage issues in the image. In scenarios where no leakage issues are present, using DPL may not be necessary, as it would incur additional time complexity. DPL can be seen as an augmentation to existing cross-attention based image editing methods[4,14,26]. Its purpose is to improve the accuracy and reliability of cross-attention maps, which is particularly essential for tasks like Word-Swap. For other types of image editing tasks, Fig.6 in the main paper illustrates attention refinement, highlighting the effectiveness of DPL in improving attention maps by filtering the background leakage. Fig.14 in the supplementary material showcases global editing, where DPL may have shown comparatively less superiority. We also include more instances in the rebuttal PDF.
$\textbf{W2.2, Q1}:$ We encountered the challenge of lacking benchmarks tailored for multi-object image editing. We adopted the protocol employed in pix2pix-zero[28] and gathered images from LAION-5B[37] per our dataset protocols (Supp.C). Through search templates and preprocessing steps, we curated a collection of 327 images from 32 distinct prompts. Finally, we retained only the prompts with the most images. This contrasts with the NTI and pix2pix-zero datasets, which encompass 100/250 single-object images respectively. Thus, our dataset's scale is considerable. Moreover, our method's efficacy on generated images is evaluated in Supp.H and Fig.15; here, the number of images can potentially be infinite. This framework permits us to showcase DPL's applicability beyond fixed dataset constraints.
| Prompts | Image Number | Split |
| -------- | -------- | -------- |
| a clock and a book | 24 | MO-Set; URS-Set |
| a dog and a bird | 19 | MO-Set; URS-Set |
| a ball and a cat | 19 | MO-Set |
| a book and a pen | 17 | MO-Set |
| a cat and a dog | 16 | MO-Set; URS-Set |
| a knife and a fork | 13 | MO-Set |
| a cat and a bird | 13 | MO-Set; URS-Set |
| a person on a bike | 13 | MO-Set |
| a horse and a sheep | 11 | MO-Set |
| a cake in a plate | 4 | URS-Set |
| a keyboard and a mouse | 4 | MO-Set |
| a cat and a dog on the grass | 3 | MO-Set |
| a piano and a chair in the room | 2 | URS-Set |
| a pear and an apple | 2 | URS-Set |
| a pizza on a table | 2 | URS-Set |
$\textbf{W2.3,W2.4}:$ To assess the performance of our editing method, we adopt a similar approach to pix2pix-zero[28], creating two scenarios that demand changes to both objects. However, the improvements achieved in these cases are limited due to the narrow room for improvement. Additionally, we observe that even the original multi-object images do not closely match the textual prompts, which further impacts the editing outcomes. To gain a more comprehensive evaluation, we conducted a user study that covered a broad spectrum of image editing tasks, going beyond these two setups. As you requested, we extended the user study with more methods as below. Importantly, DPL received higher satisfaction ratings from the participants.
| method | DPL (ours) | NTI | DiffEdit | InstructP2P | Pix2Pix | PnP |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |:--------: |
| User study (%) | 47.1 | 3.8 | 0.9 | 11.1 | 7.3 | 29.8 |
$\textbf{W3}:$ In our evaluation of editing accuracy, we established ground truths for our dataset images. These ground truths serve to compute IoU with corresponding cross-attention maps from our DPL inversion. This assessment is presented in Fig.5 of the main paper and Fig.8 in the supplementary. Further evaluation encompasses CLIP semantic similarity analysis and a user study, with summarized outcomes in Table 1. As per your request, we incorporated the Segment-Anything model for segmentation map generation based on prompts from edited images. The resultant IoU scores for our DPL method and the baseline NTI stand at 0.48 and 0.37, respectively, assessed over the MO-Set Test-Split. These scores affirm DPL's superior segmentation accuracy compared to the baseline.
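The IoU comparison described above (between a predicted segmentation mask and a ground-truth mask) can be computed on binary masks as in the following illustrative sketch; `mask_iou` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """IoU between two binary masks, e.g. a Segment-Anything mask
    for the edited object vs. a ground-truth annotation."""
    pred = np.asarray(pred_mask).astype(bool)
    gt = np.asarray(gt_mask).astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Define IoU as 0 when both masks are empty to avoid division by zero.
    return float(intersection) / union if union else 0.0
```

Averaging this score over a test split would yield aggregate numbers like the 0.48 vs. 0.37 reported above.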
$\textbf{Q2}:$ In cases where images feature a single main object, our DPL may exhibit limited improvement over other methods, as depicted in Fig.6 of the main paper and Fig.13 in the supplementary. This observation arises from the fact that in such scenarios, the cross-attention mechanism might lack additional objects to attend to, thus offering minimal influence on the editing process. However, DPL shines in scenarios with multiple disparate objects. Here, DPL excels in effectively filtering both background and distractor object leakage, showcased in Fig.12 and Fig.13 of the supplementary. Furthermore, DPL is specifically tailored to address situations where background and distractor object leakage significantly impact editing performance. It is an augmentation to existing editing methods. We do not assert DPL as a comprehensive solution to all existing editing challenges.
$\textbf{Limitations}:$ We have presented our limitations and broader impacts in Supp.A and Supp.B and aim to move these to the main paper in the future version.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response and incorporating my suggestions to have a more solid evaluation. Most of my concerns have been resolved by this rebuttal, and I'd be happy to increase my rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate your valuable feedback and raising the rating score. The discussion surely will help us to improve the paper in future versions. Moreover, we are always available for any further discussions to address all of your remaining concerns. | Summary: The paper proposes a new method to improve the attention masking for attention-based local editing.
Strengths: The proposed loss functions are novel and intuitive, and effectively improve the inversion process of the text-to-image diffusion model.
Weaknesses: 1. Please show a quantitative evaluation of object masking performance against the masks generated by DiffEdit.
2. Quantitative comparisons with baseline models are not enough. Please add more comparisons with other baselines such as Plug-and-Play, Pix2pix-zero, and InstructPix2Pix. At least include a user study.
3. Does it improve other attention-based editing mechanisms?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
$\textbf{W1}:$ As requested, we performed an evaluation of the binary masks obtained from the DiffEdit method on our MO-Set Test-Split. The resulting Intersection over Union (IoU) score for DiffEdit is 0.41, which is similar to NTI but notably lower than our method, DPL (as depicted in Fig. 5-(a)). Moreover, we observed that DiffEdit generated masks with numerous noisy points, as illustrated in Fig. 1. As a consequence, the editing results were not satisfactory according to our user study and qualitative comparisons, as detailed in the author rebuttal PDF. This comparison demonstrates the superior performance of DPL in accurately matching desired editing areas, highlighting its effectiveness in comparison to other methods.
$\textbf{W2}:$ As requested, we incorporated four additional popular text-guided image editing methods into the user study, and the results are presented in the table below. Based on the evaluation, we can infer that our method, DPL, and Plug-and-Play [40] are the two approaches that primarily satisfy the evaluators' subjective preferences. These findings demonstrate the strong performance and user satisfaction of DPL and underscore its competitiveness in comparison to the other methods.
| method | DPL (ours) | Null-Text Inversion [26] | DiffEdit [9] | InstructPix2Pix [4] | Pix2Pix-zero [28] | Plug-and-Play [40] |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |:--------: |
| User study (%) | 47.1 | 3.8 | 0.9 | 11.1 | 7.3 | 29.8 |
$\textbf{W3}:$ Our method DPL regularizes the cross-attention maps during the denoising phases of the DDIM inversion. As a result, it has the potential to enhance the performance of other existing cross-attention based editing mechanisms. As of the NeurIPS 2023 deadline, we found only one available method, Null-Text Inversion [26], that works on real-image inversion and text-guided image editing; its image editing approach builds on P2P [14]. Pix2pix-zero [28] also works on cross-attention injection, but it adopts noise regularization, which can disrupt the reconstruction property of DDIM inversion. Furthermore, Pix2pix-zero is primarily focused on changing the entire image style by discovering edit directions using GPT and CLIP models. In our case, we share more similarities with the P2P stream of work, where we modify textual prompts to guide the editing process.
---
Rebuttal Comment 1.1:
Comment: The author rebuttal addressed my concerns. Therefore I keep my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate all of your efforts to provide your feedback. The discussion will help to improve the paper quality in any future versions. | Summary: The authors first point out that inferior cross-attention maps regarding noun text tokens are the main causes of failures cases in prompt-based editing methods, such as prompt-to-prompt (P2P).
To tackle this, they propose to optimize noun text features with three objectives at each denoising timestep;
i) minimize the overlap between different cross-attention maps, ii) keep the background mask intact, which is automatically inferred from both cross- and self-attention maps, and iii) keep the max value of cross-attention maps related to nouns high.
This way, they achieve better editing results with cross-attention maps that are better aligned with the object regions.
Strengths: - the authors tackle a meaningful task
- it is easy to understand the motivation and method with various visualizations
- the proposed method presents improved performance on a variety of evaluation metrics
Weaknesses: - increased inference time
- only can solve cases related to noun text tokens representing the type of object
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Regarding Algorithm 1, isn't it better to use the updated text features by moving line 9 after line 13?
- The thresholds defined by Equation 10 appear to prevent overfitting of the model. What happens if you don't use these thresholds?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: please refer to the Weaknesses section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper and any references not included in the main paper are provided in a list at the end of this response in alphabetical order.
$\textbf{W1}:$ $\textbf{(1)}$ In the training phase, the DPL method focuses solely on updating the token embeddings that are registered in the language model dictionary, without updating the UNet of the Stable Diffusion models [33]. More specifically, DPL exhibits an extended training duration relative to Null-Text Inversion [26] due to our objective of updating token embeddings to enforce regularization on attention maps for specific objects. In our single-GPU experiments using the A40, NTI requires 130 seconds to process one image, while DPL necessitates approximately 280 seconds. $\textbf{(2)}$ However, during inference, both DPL and NTI do not incur any discernible additional time overhead compared to the fundamental Stable Diffusion model [33].
$\textbf{W2}:$ The reviewer is right to point this out (and we will improve our limitations section):
Indeed, there have already been papers related to image editing that involve verbs. They typically necessitate fine-tuning the T2I models [5,18], which is a direction parallel to our contributions. As a common practice, we freeze the T2I models, similar to previous works [4,9,14,26,28,40], to prevent the models from overfitting to a specific image. However, it is essential to acknowledge that freezing the T2I models comes with some shortcomings, as pointed out by the reviewer. These limitations should be carefully considered when evaluating the overall performance and effectiveness. There are also certain papers [A,B] focusing on removing objects, as well as works on inpainting using user-defined masks [1,27,42] in given images. However, the exploration of adding an object in an appropriate position without relying on a mask is currently a very active research area. This direction typically requires model fine-tuning to achieve desirable results and is a research direction parallel to text-guided image editing methods such as our DPL. We will include a discussion on this in our limitations section.
$\textbf{Q1}:$ Thanks for pointing this out. You are correct. The text conditions are initialized on line 9, and the subsequent updating of the token representations changes the text conditions. To avoid misunderstandings, it would be preferable to add another line describing this process after line 13 in a future version.
$\textbf{Q2}:$ By using Equation 10, we aim to prevent overfitting of the cross-attention maps, which could otherwise lead to erroneous cross attentions and adversely impact the editing process, as can be observed in Fig.18 of our rebuttal PDF. The gradual optimization approach is thus strategically designed to enhance the robustness and accuracy of our method throughout the editing procedure.
[A] Ablating Concepts in Text-to-Image Diffusion Models. ICCV 2023
[B] Erasing Concepts from Diffusion Models. ICCV 2023 | Summary: The paper propose DPL to solve the cross-attention background and distractor object leakage problem in
image editing using text-to-image diffusion models. The presentation is well written and easy to follow. The discussion and analysis is extensive and interesting. But the experiment dataset is too small to clarify the robustness of this method.
Strengths: 1. The presentation is well written and easy to follow.
2. The discussion and analysis are extensive and interesting.
3. The framework to solve the cross-attention background and distractor object leakage problem is novel.
Weaknesses: 1. The experiment dataset is too small to demonstrate the robustness of this method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: null
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: null
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
$\textbf{W1}:$ For the creation of our multi-object real-image dataset, we faced the challenge of the absence of standardized benchmarks specifically tailored to real-image text-guided editing tasks. To overcome this limitation, we adopted the protocol used in the pix2pix-zero [28] framework. In doing so, we retrieved images from the LAION-5B dataset [37] as detailed in our dataset protocols section (Supp.C in the supplementary material). With various predefined search templates followed by necessary preprocessing (including watermark removal and manual selection of images without complete objects, etc.), we collected a total of 327 images from 32 different searches. However, image counts for these 32 prompts range from 1 to 37. Therefore, for a meaningful comparison with other methods, we selected only the prompts with higher image counts for our MO-Set and URS-Set; the others serve as qualitative candidates, which we already included in Fig.5, Fig.7, Fig.12, and Fig.13. The detailed information is listed in the table below.
In comparison, the NTI [26], pix2pix-zero [28] datasets contain 100/250 single-object images respectively. Therefore, our data scale is not small in text-guided image editing problems, and we aim to publish our created dataset as a new benchmark for public usage in the future.
Furthermore, in our evaluations, we also assess the effectiveness of our proposed method, DPL, on generated images. This is demonstrated in Supp.H and Fig.15. In this context, the number of images can potentially be infinite due to the generative nature of the task. This allows us to test and showcase the versatility and applicability of our method beyond the constraints of a fixed dataset.
The comprehensive dataset statistics are presented as follows:
| Prompts | Image Number | Split |
| -------- | -------- | -------- |
| a clock and a book | 24 | MO-Set; URS-Set |
| a dog and a bird | 19 | MO-Set; URS-Set |
| a ball and a cat | 19 | MO-Set |
| a book and a pen | 17 | MO-Set |
| a cat and a dog | 16 | MO-Set; URS-Set |
| a knife and a fork | 13 | MO-Set |
| a cat and a bird | 13 | MO-Set; URS-Set |
| a person on a bike | 13 | MO-Set |
| a horse and a sheep | 11 | MO-Set |
| a cake in a plate | 4 | URS-Set |
| a keyboard and a mouse | 4 | MO-Set |
| a cat and a dog on the grass | 3 | MO-Set |
| a piano and a chair in the room | 2 | URS-Set |
| a pear and an apple | 2 | URS-Set |
| a pizza on a table | 2 | URS-Set |
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Some of my concerns have been addressed. I'll keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate your time and efforts to provide your insightful feedback. And the discussion will help us to improve the paper in any future versions. | Rebuttal 1:
Rebuttal: In the author rebuttal PDF file, we include three additional figures for the reviewer's reference:
$\textbf{(1)}$ Fig.17: extended comparison with DiffEdit and Imagic for image editing;
$\textbf{(2)}$ Fig.18: ablation study of the Gradual Optimization for Token Updates;
$\textbf{(3)}$ Fig.19: progressively infusing the attention maps across diverse diffusion steps.
Pdf: /pdf/4c795ce254a0b9118eb28129a8fa0b99dd82f780.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes Dynamic Prompt Learning (DPL) to address the cross-attention leakage issue for text-based image editing. Based on the observation that inaccurate cross-attention maps cause unintended modifications of regions outside of the targeted area for text-based image editing, the authors propose Dynamic Prompt Learning to force cross-attention maps to focus on correct noun words in the text prompt. By updating the dynamic tokens for nouns in the textual input with the proposed leakage repairment losses, the proposed approach achieves fine-grained image editing over particular objects while preventing undesired changes to other image regions. Experiments of Stable Diffusion models are conducted for word-swap, prompt refinement, and attention re-weighting.
Strengths: 1. This paper proposes an interesting approach to address the attention leakage problem for text-based image editing. By adding constraints on the cross-attention maps, the proposed approach regularizes the attention regions of the diffusion models and thus achieves detail-preserving and high-fidelity image editing.
2. The proposed approach is well-motivated and the discovery of the attention leakage problem is interesting and inspiring.
3. The proposed approach achieves better text-based image editing performance on various editing scenarios, as demonstrated in the extensive experiments.
Weaknesses: 1. The proposed DPL approach seems like a combination of several components (including the Disjoint Object Attention Loss, Background Leakage Loss, Attention Balancing Loss, Gradual Optimization for Token Updates, and the trick from null-text inversion), and the contribution and necessity of each component are unclear. More ablation studies are needed to demonstrate the necessity of combining these methods and loss functions.
2. It is not shown how accurate and robust the attention-based background estimation is. When does the background estimation fail, and how will this affect the model if it fails? The choice of hyperparameters (self-attention with size 32, cross-attention with size 16, TH = 0.2, V = 5) for this subsection is based on empirical study, and no ablation study is presented.
3. The writing can be improved. The structure of section 3 can be organized more logically and more clearly. The paragraph "Gradual Optimization for Token Updates" is unclear to me.
4. Experiments are conducted on a self-built dataset with a small size (only 100 images for ablation study and 60 images for user study).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors addressed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
$\textbf{W1}:$ This study primarily addresses cross-attention leakage issues in the realm of real image editing, encompassing both background and distractor object leakage concerns. These issues have been insufficiently tackled by existing methods. To effectively mitigate leakage, we propose dynamic prompt learning (DPL), aiming to alleviate cross-attention leakage's influence on the editing process. To combat distractor object leakage, we introduce specialized loss functions: the Disjoint Object Attention loss and the Attention Balancing loss. These are meticulously designed to improve object focus and attention distribution balance. Background leakage is addressed through the Background Leakage Loss. Ablation of each loss component is detailed in Section 4.1 (Fig. 5-(a)) of the main paper, revealing that standalone use of the disjoint object attention loss and background leakage loss underperforms compared to NTI [26]; combining them is key for enhanced performance. Moreover, the attention balancing loss enhances cross-attention quality, especially when coupled with the other two proposed losses. Detailed hyperparameter ablations are presented in Supp.D, Fig.8, and Fig.9 of the supplementary material, providing a comprehensive analysis of various experimental setups and corresponding outcomes.
$\textbf{W2}:$ Due to page limits we decided to move this part to the supplementary section (Supp.E); we regret any resulting misunderstandings. To address the critical aspect of background estimation in our method, we have extensively investigated this component in Supp.E of the supplementary material. The success of DPL in filtering background leakage relies heavily on accurate background estimation. Hence, we have thoroughly examined various factors, including attention size, feature components, threshold values, and other relevant parameters, to ensure the robustness and reliability of this essential component of our approach.
$\textbf{W3}:$ Given the multifaceted nature of diffusion-based text-guided image editing, we deeply regret any potential confusion arising from our content organization. Regarding "Gradual Optimization for Token Updates," its central aim is to alleviate update pressures at each step of the process, particularly due to the accumulation of cross-attention leakage during denoising. We've introduced a mechanism ensuring all losses attain predefined thresholds at each step, aiming to prevent overfitting of cross-attention maps. Such overfitting could lead to erroneous cross attentions, detrimentally affecting the editing process, as can be observed in Fig.18 of our rebuttal PDF. Hence, the gradual optimization strategy enhances robustness and accuracy throughout editing.
$\textbf{W4}:$ When constructing our multi-object real-image dataset, we encountered the challenge of lacking standardized benchmarks tailored to real-image text-guided editing. To overcome this limitation, we adopted the protocol employed in the pix2pix-zero framework [28]. We gathered images from the LAION-5B dataset [37] per our dataset protocols (Supp.C in the supplementary material). Through predefined search templates, along with requisite preprocessing steps (like watermark removal and manual image selection without complete objects), we curated a collection of 327 images derived from 32 distinct prompts. However, these 32 prompts involve image numbers spanning from 1 to 37. Thus, for meaningful comparison with other methods, we retained only the prompts with higher image counts for our MO-Set and URS-Set. The remaining ones serve as qualitative candidates, showcased in Fig.5, Fig.7, Fig.12, Fig.13, etc. This contrasts with NTI[26] and pix2pix-zero[28] datasets, which encompass 100/250 single-object images respectively. Our dataset's scale is thus considerable within the context of text-guided image editing problems. Furthermore, our method's efficacy on generated images is evaluated, as demonstrated in Supp.H and Fig.15. Here, the number of images can potentially be infinite due to the generative nature of the T2I models. This framework permits us to test and showcase our method's versatility and applicability beyond fixed dataset constraints.
The comprehensive dataset statistics are presented as follows:
| Prompts | Image Number | Dataset |
| -------- | -------- | -------- |
| a clock and a book | 24 | MO-Set; URS-Set |
| a dog and a bird | 19 | MO-Set; URS-Set |
| a ball and a cat | 19 | MO-Set |
| a book and a pen | 17 | MO-Set |
| a cat and a dog | 16 | MO-Set; URS-Set |
| a knife and a fork | 13 | MO-Set |
| a cat and a bird | 13 | MO-Set; URS-Set |
| a person on a bike | 13 | MO-Set |
| a horse and a sheep | 11 | MO-Set |
| a cake in a plate | 4 | URS-Set |
| a keyboard and a mouse | 4 | MO-Set |
| a cat and a dog on the grass | 3 | MO-Set |
| a piano and a chair in the room | 2 | URS-Set |
| a pear and an apple | 2 | URS-Set |
| a pizza on a table | 2 | URS-Set |
---
Rebuttal 2:
Title: Official comment by reviewer XeHW
Comment: Thank the authors for the rebuttal. The authors addressed some of my questions so I have increased my rating.
---
Rebuttal Comment 2.1:
Title: Thanks for the positive response from the reviewer and we are available for any further discussions
Comment: We sincerely appreciate your valuable feedback and raising the evaluation score. We believe that the discussion will help us to improve the paper quality in any future versions. Furthermore, we will always be available for any further discussions if you have any remaining concerns or further inquiries. Your continued engagement is greatly appreciated and contributes to the ongoing improvement of our research. | Summary: This paper works on the fidelity problem (i.e., 'unintended changes of background and distractor objects') in text-based image editing of text-to-image diffusion models. The authors attribute the fidelity problem to cross-attention leakage and propose dynamic prompt learning to force cross-attention maps to focus on correct noun words in the text prompt. The paper shows a comparison with a couple of approaches, comprehensive image editing evaluations and ablations.
Strengths: 1. The fidelity problem that this work focuses on is important. The overall idea of the paper is really simple and seems effective for this problem.
2. The authors present a good number of experiments validating the effectiveness of their approach. The results from the paper are promising. The editing quality is decent with observable changes.
3. The paper is overall well-written and easy to follow.
Weaknesses:
1. This work has the following potential limitations:
- The attention map limits the size of the generated target, as shown in Figure 7.
- Not applicable to scenarios in which the noun describes the background, or in which changes are reflected in a verb, such as adding an item or deleting objects. Thus, more experiments in the above scenarios should be provided and discussed.
- Not training-free and may have high latency in practical applications.
2. Both qualitative and quantitative results miss comparison with important methods such as DiffEdit [1], Imagic [2], StructureDiffusion [3].
3. The manuscript lacks explanations for some symbols, such as symbols in Equation (6). It would be great if the author could highlight these symbols in the diagram to refine the explanation of the method.
[1] Diffedit: Diffusion based semantic image editing with mask guidance. ICLR 2023.
[2] Imagic: Text-based real image editing with diffusion models. CVPR 2023.
[3] Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis. ICLR 2023.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Please clarify the similarities and differences between DPL and Imagic [2]. In my opinion, both look for optimized text embeddings; Imagic seems to leave the editing objective open, while DPL adds many restrictions through attention maps.
2. How long do training and inference take in the proposed method? Can the authors detail the advantages of DPL compared to the training-free methods (e.g., StructureDiffusion [3])?
3. What are the dimensions of the attention maps, the query/key matrices, and the deep features of the noisy image? It would be great if the authors could indicate those dimensions in the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have provided discussion on the limitations and broader impact in the Supplementary Material. However, as mentioned in the weaknesses, this work has the following potential limitations:
- 1) The attention map limits the size of the generated target.
- 2) Not applicable to scenarios in which the noun describes the background, or in which changes are reflected in a verb, such as adding an item or deleting objects.
- 3) Not training-free and may have high latency in practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will integrate the discussions to improve the paper. We use citations for references in the main paper, and any omitted references are listed below.
$\textbf{W1.1}:$ Imposing strict DPL indeed limits the editing region, which aligns with practical scenarios. Our goal is to position the generated target object at the source object's original spot while maintaining overall stability. This constraint is akin to attention injection seen in cross-attention[4,9,14,26,28] and self-attention methods[5,40]. Adjusting the size of the generated target remains unexplored. Nonetheless, the constraint can be relaxed using partial attention injection as depicted in Fig.6 of P2P[14]. The effect of attention injection across varied steps for DPL is shown in Fig. 19 of the rebuttal PDF.
$\textbf{W1.2}:$ Your observations are correct: $\textbf{(1)}$ In the context of background descriptions, the current research refers to it as “global editing”. While we've included global editing performance in Fig.14 of the supplementary, our main emphasis is on identifying object positions corresponding to noun words. Consequently, we don't exhibit significant advancements in this case. $\textbf{(2)}$ Editing involving verb words often requires T2I model fine-tuning, as seen in [5,18], aligning with our approach as parallel contributions. Like previous works [4,9,14,26,28,40], we freeze T2I models to prevent overfitting to specific images. However, it's important to recognize the limitations associated with model freezing. $\textbf{(3)}$ Some papers [A,B] have concentrated on object removal or inpainting using user-defined masks[1,27,42]. In contrast, the act of adding an object in a suitable position without relying on a mask is an active domain. This direction usually entails fine-tuning models for optimal outcomes and runs parallel to DPL.
$\textbf{W1.3}:$ Training-free methods[D,E] focus on augmenting T2I models to enable compositional generation but lack capabilities for image editing. In contrast, text-guided editing methods (NTI[26], DPL, etc.) drastically reduce editing time from hours of human editing to mere minutes. Consequently, these techniques find suitability in practical domains such as creative arts and publicity editing. However, we agree that these techniques are not applicable for real-time editing.
$\textbf{W2}:$ DiffEdit[9] proves unsatisfactory for the cases under our consideration, as evident in Fig. 1. Thus, a direct comparison is omitted from the main paper. Nonetheless, we append a comprehensive DiffEdit comparison in the rebuttal PDF, along with an extended user study below. Note that DPL is the most preferred method (by 47.1%). Regarding StructureDiffusion[D], it is essential to clarify that this is a generation method and not designed for image editing. Imagic[18] is a closed-source approach focusing on non-rigid object variation generation, necessitating per-image T2I model fine-tuning. Our method aligns with text-guided editing pipelines[9,14,26,28,40] by freezing T2I models. Notably, Imagic takes about 20 minutes per image on an A40 GPU, while our DPL achieves approximately 4.5 minutes. Qualitative Imagic comparisons are included in our rebuttal PDF. These clarifications aim to underscore the differences between our method and the aforementioned approaches.
| method | DPL (ours) | NTI | DiffEdit | InstructP2P | Pix2Pix | PnP |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |:--------: |
| User study (%) | 47.1 | 3.8 | 0.9 | 11.1 | 7.3 | 29.8 |
$\textbf{Q1}:$ It is correct that DPL and Imagic share similarities in updating text embeddings, but it is crucial to emphasize the differences: $\textbf{(1)}$ DPL updates specific tokens in the CLIP dictionary, which grants it better fine-grained control over individual objects. On the other hand, Imagic updates the entire text embedding, which may result in limited control over individual objects, especially in complex scenes with multiple objects. $\textbf{(2)}$ DPL updates only the token embeddings in the CLIP dictionary. In contrast, Imagic requires updating the entire T2I model per image, which is computationally expensive and can cause catastrophic forgetting when the T2I models are fine-tuned[20,35]. $\textbf{(3)}$ DPL is designed to address a more general text-guided editing task. By comparison, Imagic focuses on non-rigid editing while keeping the same object, e.g., it cannot be applied to Word-Swap.
$\textbf{Q2}:$ In training, our DPL exclusively updates token embeddings, leaving the UNet of the LDM untouched. DPL's longer training duration, relative to NTI[26], stems from the need to regulate the attention maps. In our experiments, NTI takes 2 minutes per image, while DPL requires about 4.5 minutes. However, both methods introduce no noticeable inference-time overhead compared to the base LDM. Unlike training-free methods[C,D] built on top of the LDM, which primarily target compositional generation, DPL follows previous pipelines [9,18,26,28,40] and performs tangible image editing driven by textual cues. Notably, DPL's ability to edit specific regions further distinguishes it from existing text-guided editing methods.
$\textbf{Q3}:$ The attention map employed in DPL is 16x16, as indicated in Fig.1, Line 186, and Supp.A. This cross-attention size follows practices from prior works[4,14,26]. Moreover, our investigation into various attention sizes is shown in Fig. 3 and the supplementary. At this 16x16 resolution, the QKV projections and the deep features have shapes of 16x16x160 and 16x16x1280, respectively.
[A] Ablating Concepts in Text-to-Image Diffusion Models. ICCV 2023
[B] Erasing Concepts from Diffusion Models. ICCV 2023
[C] Compositional visual generation with composable diffusion models. ECCV 2022
[D] Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis. ICLR 2023.
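As a sanity check on the dimensions given in Q3, the following numpy sketch reproduces the stated shapes. It is illustrative only: a single attention head with random weights, whereas the LDM UNet uses multi-head attention; the 77-token text length is an assumption based on the standard CLIP/Stable Diffusion setup.

```python
# Shape sketch of a 16x16 cross-attention layer (single head, random
# weights, for illustration). Dimensions follow the rebuttal: deep features
# of 16x16x1280, Q projected to 16x16x160, 77 CLIP text tokens (assumed).
import numpy as np

np.random.seed(0)
h = w = 16               # spatial resolution of the attention layer
d_model, d = 1280, 160   # UNet feature dim and projected attention dim
n_tokens = 77            # CLIP text sequence length (standard assumption)

feats = np.random.randn(h * w, d_model)   # flattened image features
W_q = np.random.randn(d_model, d)
text = np.random.randn(n_tokens, d)       # projected text keys

Q = feats @ W_q                               # (256, 160)
logits = Q @ text.T / np.sqrt(d)              # (256, 77)
attn = np.exp(logits - logits.max(1, keepdims=True))
attn /= attn.sum(1, keepdims=True)            # softmax over text tokens

# One 16x16 map per text token, as visualized in the paper's figures
maps = attn.reshape(h, w, n_tokens)
assert maps.shape == (16, 16, 77)
```

Each of the 77 per-token maps is a 16x16 spatial distribution summing to one, which is the object that DPL regularizes.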
---
Rebuttal Comment 1.1:
Comment: The authors acknowledge the limitations of their work and also explain their strengths. The authors have addressed most of my concerns and I am willing to keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate your thoughtful input and appreciate the time you've taken to share your valuable feedback. We're pleased to have been able to address your concerns. Your insights and the ensuing discussion undoubtedly contribute to enhancing the quality of our work. Moreover, we're committed to integrating your suggestions and incorporating any new results into the upcoming version of our paper. Thank you once again for your contribution to our research. | null | null | null | null |
Uni3DETR: Unified 3D Detection Transformer | Accept (poster) | Summary: The authors explore and analyze existing LiDAR-based 3D object detection frameworks, and propose a transformer-based method to perform detection from different LiDAR inputs. The experimental results on a number of datasets surpass existing methods.
Strengths: 1. The task of 3D object detection is very important in the 3D community. Interestingly, the authors propose to adopt transformers to generalize 3D detection across diverse scene inputs.
2. The paper is easy to follow.
3. The authors conduct experiments on widely-used indoor and outdoor datasets, including SUN RGB-D, ScanNet, S3DIS, nuScenes, and KITTI.
Weaknesses: 1. Missing important SOTA works on KITTI. The manuscript reports validation results on KITTI, but the authors do not compare with the LiDAR-only SOTA 'PV-RCNN++' (PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection). It would be much more convincing if the authors could compare with this SOTA.
2. Results on KITTI and nuScenes test set. In Tables 2 & 3, the proposed Uni3DETR is comparable to or better than the cited methods. However, it would be much more convincing if the authors could report test set results on KITTI and nuScenes.
3. Computation/memory footprint comparison. The authors do not compare their work with existing 3D detection methods in terms of memory/speed. In my view, the time consumption might be large, since memory/time costs are especially heavy for transformer-based methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the questions that I describe in the Weakness part. I would also consider the rebuttal and other reviews.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It would be much more convincing if the authors could compare with this SOTA (PV-RCNN++)**
A1: To compare with PV-RCNN++, we follow the same setting to train our model on the training and validation sets of KITTI, and evaluate on the KITTI test set. The comparison is listed in Tab. 5-1 below. Our method obtains 82.26% AP on the car category of the KITTI test set, surpassing PV-RCNN++ by 0.38 points. This validates the effectiveness of our method.
Table 5-1: The comparison of Uni3DETR against PV-RCNN++ on the KITTI test set with 40 recall positions.
| |AP-car|
| :--: | :--: |
|PV-RCNN|81.43|
|PV-RCNN++|81.88|
|ours|82.26|
**Q2: It would be much more convincing if the authors could report the test set results on KITTI and nuScenes.**
A2:
We conduct the experiment and evaluate our method on the KITTI test set with 40 recall positions. The comparison is listed in Tab. 5-2 below. For the most important KITTI metric, AP on the moderate level of car, we obtain 82.26% AP, which is 0.83 points higher than PV-RCNN and 0.49 points higher than CT3D. The consistent superiority further demonstrates the ability of Uni3DETR on outdoor 3D detection.
Due to time constraints, we have not yet been able to provide results on the NuScenes test set. We will include such experimental comparisons in the final version.
Table 5-2: The performance of Uni3DETR for outdoor 3D object detection on the KITTI test set with 40 recall positions. *: AP on the moderate car is the most important metric for KITTI.
| |easy|moderate*|hard|
| :--: | :--: | :--: | :--: |
|SECOND|88.61|78.62|77.22|
|PointPillar|82.58 |74.31|68.99|
|Part-A2|87.81|78.49 |73.51|
|PV-RCNN|90.25 |81.43 |76.82|
|CT3D|87.83 |81.77 |77.16|
|ours|91.14|82.26|77.58|
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period.
---
Rebuttal Comment 1.2:
Title: Authors have addressed most of my concerns.
Comment: Thanks for the answers and clarification in the rebuttal, which covered most of my concerns.
---
Reply to Comment 1.2.1:
Title: Thanks for your time
Comment: We sincerely thank your feedback and support. We will polish the paper and add the experimental comparisons following your suggestions. Your constructive comments will help improve the quality of this work. | Summary: The authors propose a unified DETR-style detector for both indoor and outdoor 3D object detection. Besides common learnable queries, they also adopt non-learnable queries sampled from the raw point cloud.
Strengths: The proposed method is simple and easy to follow.
Weaknesses: The novelty is limited, and the experimental results are not fair/convincing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Uni3DETR follows the design of DETR, which is a ''sparse'' and end-to-end detector. However, Uni3DETR uses 4 groups of query points for independent prediction and needs an extra post-processing step to filter redundant boxes. The authors claim that the learnable/non-learnable queries capture the local and global information of the scene. If so many different queries are necessary to cover the whole scene, why not use a dense detector like PointPillars or CenterPoint instead?
2. I am surprised by the results in Tab. 6. I hope the authors could provide more explanation about why multiple groups of queries bring no performance gain. Why is this ablation not conducted on other datasets (e.g. KITTI) like Tab. 5 and Tab. 7?
3. Which is better: using 4 groups of query points (learnable & non-learnable) as proposed in the paper, or simply using more learnable queries?
4. The authors are suggested to provide results on the KITTI and nuScenes test splits. Besides, KITTI provides a fairer and more reliable evaluation setting with 40 recall positions.
5. What is the difference between Uni3DETR and DAB-Deform in Tab. 4?
6. With different backbones, model sizes, inference speeds, etc., the comparison with other SOTA methods in Tab. 2 & Tab. 3 is not fair.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The novelty is limited, and more comprehensive analysis is required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Why not use a dense detector like PointPillars or CenterPoint instead?**
A1:
* The main reason is that these dense detectors are usually based on 3D convolution structures and 3D anchors. They generate 3D boxes directly on the extracted features, and are therefore sensitive to distribution differences in the point cloud data. As a result, they cannot perform indoor and outdoor 3D detection with the same structure. In comparison, our 3D detection transformer directly matches 3D predicted boxes with ground-truth boxes, and thus tends to be more robust to such data differences. Therefore, our architecture is more suitable for 3D detection in various environments.
* We further list the indoor performance on SUN RGB-D and outdoor performance on KITTI of these methods in Tab. 4-1 below. As these dense detectors are designed for outdoor 3D detection, their performance deteriorates severely on indoor detection. In comparison, our method obtains satisfactory detection performance for both indoor and outdoor detection.
Table 4-1: The performance comparison on the indoor SUN RGB-D and outdoor KITTI dataset with dense detectors PointPillar and CenterPoint. The metrics are AP25 for SUN RGB-D and AP70 for the KITTI car class.
| |AP-indoor (SUN RGB-D)|AP-outdoor (KITTI)|
| :--: | :--: | :--: |
|PointPillar|N/A|77.6|
|CenterPoint|18.9|74.4|
|ours|67.0|86.7|
**Q2: I hope the authors could provide more explanation about why multiple groups of queries bring no performance gain. Why is this ablation not conducted on other datasets (e.g. KITTI) like Tab. 5 and Tab. 7?**
A2:
* The reason is that multiple groups of queries simply utilize local information multiple times and do not provide complementary information. Therefore, they cannot bring further performance improvement. The reason multiple groups are effective for 2D detection is mainly that they speed up convergence, which is one of the main problems of transformer-based models in 2D detection. However, in 3D detection, since 3D voxel features contain stronger spatial information, convergence has not been a major problem for transformer-based 3D detectors. Therefore, multiple groups of queries are no longer effective for 3D detection.
* To validate this, we further conduct such an ablation study on the KITTI dataset and list the comparison in the Tab. 4-2 below. It can be seen that our mixture of query points helps improve AP-car by 0.67%. However, multiple groups of queries fail to improve the 3D detection performance. Such comparison on the outdoor dataset validates that multiple groups of queries cannot boost the performance of 3D detection.
Table 4-2: Comparison with multiple groups of learnable query points on the KITTI dataset.
|query|AP-car|
| :--: | :--: |
|{$P_l$}|85.59|
|{$P_l$}x2|85.32|
|{$P_l$}x3|85.18|
|{$P_l$}x5|85.43|
|{$P_l$, $P_{nl}$}|85.94|
|{$P_l$, $P_{nl}$, $P_{nlv}$}|86.26|
**Q3: Which is better: using 4 groups of query points (learnable & non-learnable) as proposed in the paper, or simply using more learnable queries?**
A3: We conduct the experiment on the SUN RGB-D dataset by simply using 4 times as many learnable queries, and list the comparison between learnable queries only and our mixture of query points in Tab. 4-3 below. It can be seen that with more learnable queries, both AP25 and AP50 decrease. More learnable queries cannot provide complementary information, and even make Hungarian matching more difficult. More false positives may thus appear, which hurts the performance. In comparison, the mixture of query points comprehensively utilizes both global and local information, and is thus better for 3D object detection.
Table 4-3: Comparison with more learnable queries on the SUN RGB-D dataset.
||AP25|AP50|
| :--: | :--: | :--: |
|learnable only|62.6|46.4|
|query number x 4|61.7|43.6|
|learnable + non-learnable|67.0|50.3|
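The false-positive argument around Hungarian matching can be illustrated with a toy example. This is a generic brute-force set matching (not the authors' code, and the cost values are made up): with more queries than ground-truth boxes, the unmatched queries fall to the "no object" class yet still emit predictions that must be suppressed.

```python
# Toy illustration of DETR-style Hungarian matching: each ground-truth box
# is assigned exactly one query, minimizing total matching cost. Brute
# force over a tiny cost matrix, for illustration only.
from itertools import permutations

def hungarian(cost):
    """Return the minimizing query->gt assignment (brute-force Hungarian)."""
    n_q, n_gt = len(cost), len(cost[0])
    best, best_perm = float("inf"), None
    for perm in permutations(range(n_q), n_gt):  # one query per gt box
        total = sum(cost[q][g] for g, q in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return {q: g for g, q in enumerate(best_perm)}

# 4 queries, 2 ground-truth boxes: queries 1 and 3 match, while queries
# 0 and 2 are left as background ("no object") predictions.
cost = [[0.9, 0.8],
        [0.1, 0.7],
        [0.8, 0.9],
        [0.6, 0.2]]
assignment = hungarian(cost)
assert assignment == {1: 0, 3: 1}
```

Quadrupling the learnable queries quadruples the unmatched set without adding new information, which is consistent with the AP drop in Tab. 4-3.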
**Q4: The authors are suggested to provide results on the KITTI and nuscenes test split. Besides, KITTI provides a more fair and reliable evaluation setting with 40 recall positions.**
A4:
We conduct the experiment and evaluate our method on the KITTI test set with 40 recall positions. The comparison is listed in the below Tab. 4-4. For the most important KITTI metric, AP on the moderate level of car, we obtain the 82.26% AP, which is 0.83 points higher than PV-RCNN and 0.49 points higher than CT3D. The consistent superiority further demonstrates the ability of Uni3DETR on outdoor 3D detection.
Due to time constraints, we have not yet been able to provide results on the NuScenes test set. We will include such experimental comparisons in the final version.
Table 4-4: The performance of Uni3DETR for outdoor 3D object detection on the KITTI test set with 40 recall positions. *: AP on the moderate car is the most important metric for KITTI.
| |easy|moderate*|hard|
| :--: | :--: | :--: | :--: |
|SECOND|88.61|78.62|77.22|
|PointPillar|82.58 |74.31|68.99|
|Part-A2|87.81|78.49 |73.51|
|PV-RCNN|90.25 |81.43 |76.82|
|CT3D|87.83 |81.77 |77.16|
|ours|91.14|82.26|77.58|
**Q5: What is the difference between Uni3DETR and DAB-Deform in Tab. 4.**
A5: The detection transformer in Uni3DETR takes 3D points as the input of the deformable attention, while DAB-Deform takes 3D anchor boxes (7 dims) as the input. The superiority of taking 3D points as input is mainly twofold. First, 3D anchors are negatively affected by missing center points in 3D point clouds, which makes formulating queries as 3D anchors less effective. In comparison, formulating queries as 3D points better fits the structure of 3D data, thus achieving better performance. Second, this accommodates our designed mixture of query points, since FPS points can be fed directly into the detection transformer. The mixture of query points then brings further improvement. Ultimately, our Uni3DETR achieves 66.4% AP (v.s. 62.0% AP from DAB-Deform).
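For reference, a generic farthest point sampling (FPS) routine can show how non-learnable query points drawn from the raw point cloud spread over the whole scene. This is a minimal numpy sketch of the standard algorithm, not the authors' implementation; the example point cloud is made up.

```python
# Minimal farthest point sampling (FPS) in numpy: greedily pick k points,
# each maximizing its distance to the points already chosen, so the
# selected query points cover the scene globally.
import numpy as np

def farthest_point_sampling(points, k):
    """points: (n, 3) array; returns (k, 3) array of sampled points."""
    n = points.shape[0]
    chosen = [0]                      # start from an arbitrary point
    dist = np.full(n, np.inf)         # distance to nearest chosen point
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return points[chosen]

# Points clustered in two corners of the scene: FPS picks one point from
# each cluster, a global coverage that uniform random sampling may miss.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [10.0, 10.0, 0.0], [10.1, 10.0, 0.0]])
queries = farthest_point_sampling(pts, 2)
assert queries.shape == (2, 3)
```

Because each new sample is as far as possible from all previous ones, FPS-based queries complement the learnable queries with scene-level coverage.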
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks, the response addresses my concerns.
---
Reply to Comment 1.1.1:
Title: Thanks for your time
Comment: We sincerely thank your feedback and support. We will polish the paper and rewrite confusing parts following your suggestions. Your constructive comments will help improve the quality of this work. | Summary: Uni3DETR is a unified 3D object detector that is capable of handling both indoor and outdoor scenes within the same framework. This is significant as many existing detectors are specialized for either indoor or outdoor environments, but not both.
The method employs a detection transformer with a point-voxel interaction mechanism and a mixture of query points. This allows it to exploit both global and local information effectively.
Uni3DETR introduces a decoupled Intersection over Union (IoU), which serves as an easy-to-optimize training target for localization. This is aimed at improving the accuracy of the detector.
The paper demonstrates that Uni3DETR exhibits excellent performance consistently on both indoor and outdoor 3D detection tasks. Moreover, it shows strong generalization ability under heterogeneous conditions, which is beneficial for real-world applications.
Strengths: Versatility Across Environments: One of the major strengths of Uni3DETR is its ability to handle both indoor and outdoor environments within the same framework. This is a significant advancement as it eliminates the need for different models for different environments, which is common in existing approaches.
Optimization of Localization: The introduction of a decoupled Intersection over Union (IoU) as a training target for localization is a notable strength. This provides an easy-to-optimize objective that can potentially lead to more accurate localization of objects, which is often a challenging aspect of object detection.
Empirical Validation: The paper provides empirical evidence showing that Uni3DETR consistently performs well on both indoor and outdoor 3D detection tasks. This is important for establishing the credibility and effectiveness of the proposed approach.
Weaknesses: Innovative Element. The core pipeline appears to be a Deformable DETR complemented by non-learnable queries (voxel features derived from FPS). It essentially resembles a simple combination of two model architectures.
Efficiency. The model's efficiency is not entirely at the cutting-edge level. For instance, in nuScenes, several competitive 3D detection techniques are not listed in your tables. (https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Lidar).
Increase in Precision. According to Table 5, the non-learnable query appears to have a marginal impact on outdoor detection. This aligns with our prior understanding that FPS is not well-suited for outdoor detection. The mixed query's design might be superfluous for outdoor scenarios.
Computational Complexity. It is widely recognized that greater computation often results in superior performance. The authors should include a breakdown of the mixed query's complexity, such as inference latency and FLOPS, in comparison to other techniques.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As evidenced in Table 5, the non-learnable query makes a small contribution to outdoor detection. Can you provide a comparable analysis of the impact of the mixed query points on nuScenes? I'm curious to determine if the performance on nuScenes is primarily driven by your learnable query.
Could you supply a comparison of computational complexity and inference latency between your approach and alternative methods?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Integrating two distinct 3D detection algorithms in this method might potentially increase computational complexity. Nonetheless, computation overhead is a key factor for outdoor detection tasks, such as autonomous driving. Given this, the inclusion of a non-learnable query design seems to be superfluous in such outdoor environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It essentially resembles a simple combination of two model architectures.**
A1:
* Our main contribution is a unified architecture that addresses both indoor and outdoor 3D detection within the same framework. Unlike previous methods, which treat indoor and outdoor 3D detection as two separate problems with separate sets of benchmarks, we are the first to build a unified architecture and eliminate the need for different models in different environments. This is the main novelty of our work. Uni3DETR demonstrates that, although significant distinctions exist between indoor and outdoor point clouds, a unified structure is still possible. Our framework can therefore serve as a basic platform or foundation for future works.
* A simple combination of two model architectures cannot address both indoor and outdoor 3D detection. Deformable DETR is built on learned reference points, and thus cannot take non-learnable points as input. We therefore modify its structure to take 3D points directly as input and perform cross-attention with voxel features. In addition, previous indoor detectors usually utilize FPS for learning point-wise features. Different from them, we utilize FPS points to supply global information for learning voxel-wise features, which are more resistant to the data distinction. The learnable and non-learnable query points interact with each other through attention, for a comprehensive understanding of both global and local information. By focusing on information at different levels, the mixture of queries becomes a necessary condition for the success of both indoor and outdoor detection.
* We also propose a decoupled IoU to provide an easy-to-optimize objective for localization. As can be seen from the ablation study, the decoupled IoU improves a lot for both indoor and outdoor detection, which is thus also a necessary element of our framework and a main contribution to the community for unifying indoor and outdoor detection with the same model.
**Q2: In nuScenes, several competitive 3D detection techniques are not listed in your tables.**
A2: Many methods on the nuScenes leaderboard apply additional techniques, such as larger and more complicated backbones or model ensembling. To evaluate different methods, they should be put into the same setting. Under the same setting, with a fair comparison, our method is actually better. Take the Real-Aug++ method (top 1 on the leaderboard) for example. It obtains 81.7% AP on the car category of KITTI validation. In comparison, we obtain 86.6% AP. The effectiveness of our method can thus be validated.
**Q3. The mixed query's design might be superfluous for outdoor scenarios.**
A3:
* Indeed, introducing non-learnable query points is more effective for indoor 3D detection, as global information matters more for dense, small-range indoor scenes. However, our goal is a unified architecture for both indoor and outdoor detection. Without the non-learnable query, indoor 3D detection performance would be limited by insufficient global information. Therefore, although the mixture of query points improves less for outdoor scenarios, it is still necessary.
* In certain outdoor situations, such as small, unclear, or occluded objects, local information alone is insufficient, and the non-learnable query becomes more effective. To demonstrate this, we conduct the 3D detection experiment on the 3 classes of KITTI (Tab. 3-1). The mixture of query points improves more for the pedestrian and cyclist classes: 2.05% for pedestrian and 3.19% for cyclist. Therefore, although the mixed query is less effective for KITTI car, it is still effective for small and occluded objects.
* We also provide one visualized example from KITTI in the rebuttal PDF file. As can be seen from the third example of Fig. 1, the available points for the left car are insufficient, which makes it difficult to detect. In this situation, by leveraging a global understanding of the scene, the non-learnable query helps detect it. Therefore, the non-learnable query is still effective in certain outdoor situations.
Table 3-1: Effect of the mixture of query points on the 3 classes of the KITTI dataset.
| |AP-car|AP-ped|AP-cyc|
| :--: | :--: | :--: | :--: |
|learnable query only|85.24|60.44|69.71|
|mixture of query points|86.57|62.49|72.90|
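For reference, the non-learnable query points discussed above come from farthest point sampling (FPS); a generic greedy FPS sketch (not the paper's exact implementation) is:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: iteratively pick the point farthest from all points
    chosen so far, so the k samples spread over the whole scene.
    A generic sketch of how non-learnable query points can be drawn
    from the input cloud."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)    # chosen[0] = 0: start from the first point
    min_dist = np.full(n, np.inf)      # distance to the nearest chosen point
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        chosen[i] = int(np.argmax(min_dist))
    return chosen

# Tiny illustrative cloud: FPS prefers the three mutually distant corners
# over the interior point.
pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [5, 5, 0]])
query_idx = farthest_point_sampling(pts, 3)
```

Because the sampled points cover the scene rather than cluster on dense foreground regions, they carry the global context that purely learnable queries tend to miss.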
**Q4: The authors should include a breakdown of the mixed query's complexity**
A4: The inference time (latency) remains nearly the same, and the FLOPS increase only marginally. This demonstrates that the mixture of query points adds little complexity and thus brings negligible extra computation overhead.
Table 3-2: The complexity analysis about the mixture of query points.
| |latency (s)|FLOPS (G)|
| :--: | :--: | :--: |
|w/o mix query|0.51|452.23|
|w/ mix query|0.52 (+ 1.9%)|458.74 (+1.4%)|
**Q5: Can you provide a comparable analysis of the impact of the mixed query points on nuScenes?**
A5: As can be seen, the mixture of query points brings a 1.9% mAP improvement. The improvement is slightly larger than on KITTI, because nuScenes contains more small objects. We further select four classes and list their per-category AP. For the car category, the AP improvement is 1%. For categories with usually small or occluded objects, the improvement is more significant: the mixture of query points boosts the pedestrian class by 2.1%, the motorcycle class by 3.6%, and the traffic cone class by 4%. In short, the non-learnable query is equally effective for small or occluded objects in outdoor scenes.
Table 3-3: Effect of the mixture of query points on the nuScenes dataset.
| |mAP|car|pedestrian|motorcycle|traffic cone|
| :--: | :--: | :--: | :--: | :--: | :--: |
|learnable query only|59.8|85.6|84.3|64.0|66.0|
|mixture of query points|61.7|86.6|86.4|67.6|70.0|
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period.
---
Rebuttal Comment 1.2:
Comment: Thanks for the authors' comprehensive response! It addresses my concerns. This work may have practical value for autonomous agents that must handle both indoor and outdoor scenes.
---
Reply to Comment 1.2.1:
Title: Thanks for your time
Comment: We sincerely thank you for your feedback and support. We will polish the paper and add the necessary discussions following your suggestions. Your constructive comments will help improve the quality of this work. | Summary: The paper proposes Uni3DETR, a unified architecture suitable for indoor and outdoor scenes. Uni3DETR employs DETR with point-voxel interaction for object prediction. It uses a mixture of query points to exploit global and local information. Finally, Uni3DETR uses a decoupled IoU loss, disentangling depth from the other dimensions to ease training. Training a per-dataset model and testing on the same dataset improves over the baselines.
Strengths: + The problem of using a unified architecture for indoor and outdoor point clouds is super relevant.
+ The idea of employing DETR, a mixture of query points, and decoupling 3D IoU is nice.
+ The per-dataset results are good.
Weaknesses: - The first claim that the architecture is uniform is only partially correct. L228 says that the authors chose a 0.02m grid size for indoor datasets, while they chose a different voxel size for the KITTI and the nuscenes datasets. The authors should pick a single voxel size and then run it on all the datasets. (It is OK to change the range for nuScenes since it is a multi-camera dataset, but the authors should choose a single voxel size)
- The claim of "strong universal ability of our model (L336)" is an overstatement, in my opinion, since the paper only carries out per-dataset model training and testing. A related paper that trains a single RGB 3D detector is Omni3D + Cube R-CNN [A]. I would urge the authors to follow a similar protocol where the train and test sets are
- Train on KITTI + nuScenes front, then test on indoor/outdoor
- Train on indoor scenes, then test on indoor/outdoor
- Finally, train on all indoor/outdoor scenes, then test on indoor/outdoor scenes.
By training Uni3DETR on KITTI + nuScenes front and testing outdoors, one can compare against the Cube R-CNN model of [A].
- How well does the method generalize in zero-shot settings? E.g., how does the Uni3DETR model trained on outdoor scenes work on an unseen lidar scan from, say, the Waymo dataset?
- A sad part of 3D detection papers is to use a bigger backbone and show improvements on nuScenes. Please quantitatively compare the flops, model size, training time, and inference time for Uni3DETR against the baselines in Table 3.
- The authors say the learnable query points mostly contain local information and fit the outdoor detection well. In contrast, non-learnable query points emphasize global information and are more effective for dense indoor scenes. Another way to enforce local and global information is to use a dual-path architecture where one branch is attention-based for global information while the other is convolution-based for local information. One can later fuse their outputs by concatenation or predicting a coefficient to add them. Why did the authors not consider this design choice?
- An alternative to using the decoupled IoU loss is the disentangled loss [B], where one constructs a 3D box from GT parameters except the one in consideration. That way, one can avoid explicitly weighing the losses for respective parameters (e.g., the authors use weighing xy and z terms by 0.5). The authors should, therefore, quantitatively compare the decoupled IoU loss against the disentangled loss.
- The details of the 3D feature extractor (Section 2.1) need to be included.
- The authors use the mean average precision (mAP) under IoU thresholds of 0.25 and 0.5 for evaluating datasets (L216). Using one or two thresholds is quite sensitive. A better alternative is to use exact IoU3D over a series of thresholds of 0.05:0.5:0.05 and then average them as in [A].
Minor:
- Do the authors plan to release the code?
- Typo: L203: final
References:
A. Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild, Brazil, Kumar, Ravi, et al., CVPR 2023.
B. MonoDIS: Disentangling monocular 3D object detection: From single to multi-class recognition, Simonelli, Bulo, Porzi, et al., PAMI 2020.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do not list out the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The claim that the architecture is uniform is partially correct.**
A1:
* The word “unified” in our paper refers specifically to the architecture. The voxel size is a data-related parameter, not an architecture-related one. Since point clouds are collected with different sensors, their ranges and distributions vary significantly (about 3m for indoor but more than 70m for outdoor datasets). We therefore follow the settings of previous works and utilize the same grid sizes as they do; the resulting 3D voxels can then be processed with a unified architecture. We are the first to build a unified 3D detector that handles both indoor and outdoor point clouds with the same architecture.
* We also conduct the experiment with the same voxel size as KITTI. The resolution of indoor voxels deteriorates, which hurts detection. However, our superiority over other outdoor detectors remains obvious. Therefore, even when standardizing this data-related parameter, our model still achieves a higher AP.
Table 2-1: 3D detection with the same voxel size of (0.05m, 0.05m, 0.1m).
| |AP-indoor (SUN RGB-D)|AP-outdoor (KITTI car)|
| :--: | :--: | :--: |
|3DSSD|9.5|78.6|
|CenterPoint|18.9|74.4|
|UVTR |35.9|72.0|
|ours (a single voxel size)|47.3|86.7|
|ours (different voxel sizes)|67.0|86.7|
**Q2: The claim of "strong universal ability of our model" is an overstatement**
A2:
* The “universal ability” here refers specifically to our construction of a unified architecture for both indoor and outdoor detection. In current point cloud based 3D detection research, indoor and outdoor detectors still use totally distinct structures, and a unified architecture even for per-dataset experiments is still absent. From this perspective, the universality of our detector is at least better than that of existing 3D detectors. We will revise the sentence to “Extensive experiments demonstrate that our model can address both indoor and outdoor 3D detection with a unified structure” in the final version.
* Cube RCNN takes only RGB images for 3D detection. Compared with 2D images, the dataset-interference issue is far more serious for point clouds. Additional cross-dataset discrepancies, including sensor type differences (RGB-D sensor vs. LiDAR), scene changes (small-range, dense indoor vs. large-range, sparse outdoor), and object distributions (sparser and smaller objects outdoors), make the joint combination of multiple point cloud datasets extremely challenging. As a result, there is still no work on joint training over multiple point cloud datasets. If we want a model like Cube RCNN that takes point clouds as input, a unified structure must inevitably serve as its prerequisite and foundation. Here we propose a unified architecture for different point clouds as the foundation towards this goal; bridging the cross-dataset difference for point clouds is not our focus.
* We also follow a similar protocol for comparison with Cube RCNN: training on KITTI and nuScenes, then evaluating on KITTI. Even though we do not explicitly address the cross-dataset difference, the performance is still better.
Table 2-2: Comparison by training on KITTI and nuScenes front, then evaluating on KITTI.
| |AP-car |
| :--: | :--: |
|Cube RCNN|15.0|
|ours|65.3|
**Q3: How well does the method generalize in zero-shot settings**
A3: We adopt two protocols: training on SUN RGB-D then evaluating on ScanNet, and training on KITTI then evaluating on Waymo. The zero-shot AP on ScanNet surpasses GroupFree3D by 6.2%, and the outdoor zero-shot AP is 5.2% better than existing outdoor detectors. This demonstrates the generalization ability of our model.
Table 2-3: 3D detection on ScanNet of SUN RGB-D trained detectors.
| |AP25|AP50|
| :--: | :--: | :--: |
|VoteNet|52.5|31.7|
|GroupFree3D|51.2|23.3|
|ours|57.4|38.6|
Table 2-4: 3D detection on Waymo of KITTI trained detectors.
| |AP-car|
| :--: | :--: |
|SECOND|5.8|
|PointPillar|12.1|
|Part-A2|14.9|
|ours|20.1|
**Q4: Why did the authors not consider the dual path?**
A4:
* Convolution-based structures generate 3D boxes directly from the extracted features and are thus sensitive to data distinctions; such structures cannot extract local information well for both indoor and outdoor scenes. We conduct the experiment, where the convolution branch is taken from Part-A2. Our model outperforms it by more than 10% on both indoor and outdoor datasets. This explains why we do not adopt the dual-path structure.
* A dual-path architecture would also inevitably increase the computation budget.
Table 2-5: Comparison with the dual-path structure.
| |AP-indoor (SUN RGB-D) |AP-outdoor (KITTI car)|
| :--: | :--: | :--: |
|dual-path |54.2 |75.6|
|ours |67.0|86.7|
**Q5: The authors should compare against the disentangled loss**
A5: Our decoupled IoU is 12.5% higher than the disentangled loss. The reason is that the original disentangled loss was designed for RGB image input; applying it to point clouds alone hurts its performance.
Table 2-6: Comparison on KITTI between our decoupled IoU and the disentangled loss.
| |AP-car|
| :--: | :--: |
|disentangled loss |74.24 |
|decoupled IoU loss |86.74|
**Q6: The 3D feature extractor details need to be included.**
A6: We add the visualization of our 3D feature extractor in Fig. 2 of our rebuttal PDF file.
**Q7: A better alternative is to use IoU3D over a series of thresholds of 0.05:0.5:0.05.**
A7:
* Here for indoor 3D detection, most previous detectors adopt the AP25 and AP50 metrics. For a fair comparison, we adopt the same metrics.
* We adopt the same metric as Omni3D for evaluation. We observe that the improvement is consistent. This demonstrates that our model achieves better performance for both recognition and localization.
Table 2-7: Comparison on SUN RGB-D with the Omni3D metric.
| |AP[0.05:0.5:0.05] |
| :--: | :--: |
|VoteNet | 53.4|
|GroupFree3D |56.8|
|FCAF3D |60.6|
|ours|64.3|
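The Omni3D-style metric used in Table 2-7 averages exact IoU3D AP over the thresholds 0.05:0.5:0.05; a minimal sketch of that averaging step, with placeholder AP values (a real evaluator would supply the per-threshold numbers):

```python
import numpy as np

# IoU3D thresholds from 0.05 to 0.5 in steps of 0.05 (10 thresholds),
# as in the Omni3D metric.
thresholds = np.linspace(0.05, 0.5, 10)

# A real evaluator would compute AP at each threshold; the values
# below are placeholders purely to show the final averaging.
ap_per_threshold = np.full(len(thresholds), 0.5)
mean_ap = float(ap_per_threshold.mean())
```

Averaging over a sweep of thresholds rewards tight localization rather than performance at one or two IoU cutoffs, which is the reviewer's point about AP25/AP50 being sensitive.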
**Q8: code release and typos**
A8: We will correct typos in the final version and release the code.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you, authors, for putting together an excellent rebuttal. The benefits of your architecture are not fully visible if the experiments are not cross-dataset/cross-domain.
- I understand that scaling to datasets with different voxel sizes and domains is an issue with lidar-based 3D detection. Why not keep a single voxel size and do a quantitative cross-dataset evaluation with the Omni3D metric over all combinations of datasets **within** the indoor and outdoor domains for Uni3DETR and your baselines (such as KITTI → nuScenes, nuScenes → KITTI, ScanNet → SUN, SUN → ScanNet)? In other words, please compare with Cube R-CNN Table 5. The comparison will let us know how good the within-domain generalization is while keeping the voxel size unchanged.
- If possible, please also quantitatively report the following results with a single voxel size.
Train on indoor and test outdoor for Uni3DETR, your lidar baselines, and Cube R-CNN with the Omni3D metric.
Train on outdoor and test indoor for Uni3DETR, your lidar baselines, and Cube R-CNN with the Omni3D metric.
- Your Omni3D numbers in Table 2-2 of the rebuttal differ from Table 5 of the Omni3D paper. They report AP on KITTI as 42.5, while you report as 15.0. Why is it so?
- It would also be good to quantitatively compare the flops, model size, training time, and inference time for Uni3DETR against the baselines in Table 3.
Please note that it is completely OK if the Uni3DETR numbers on some of the experiments are bad. We are not here to compete on benchmarks and penalize for the bad results. All we want is to clearly understand the limitations of the methods and advance science :)
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: We appreciate your reply. We acknowledge that the cross-dataset ability of 3D detectors is an important problem; as a unified architecture is still absent, this problem is more serious for point cloud based 3D detection. In this paper, we address the absence of a unified architecture: we are the first to propose one, and it can serve as the foundation and prerequisite for cross-dataset generalization. Following your suggestions, we conduct the experiments below. The experimental results (about 10% ~ 30% improvement over Cube RCNN) demonstrate that our unified architecture has potential for future directions such as cross-dataset 3D detection. Through these experiments, the benefits of our architecture can be better illustrated.
**Q1: a quantitative cross-dataset evaluation**
We follow your suggestions to conduct the cross-dataset evaluation with the Omni3D metric. For the outdoor setting, we conduct the experiment of KITTI→nuScenes, nuScenes→KITTI, KITTI+nuScenes (OMNI3D_OUT) → KITTI and nuScenes. For the indoor setting, since Cube RCNN does not train on the ScanNet dataset (the ScanNet dataset provides multi-perspective images for one scene, while Cube RCNN cannot fit such images), we conduct the experiment of ScanNet→SUN RGB-D, and compare it with Cube RCNN trained on OMNI3D_IN.
From the results, it is worth noting that Uni3DETR has good cross-dataset generalization ability. The performance is better than Cube RCNN for both indoor and outdoor evaluation. For the indoor SUN RGB-D dataset, the cross-dataset AP (ScanNet→SUN RGB-D) is 16.2% higher than that of Cube RCNN trained on SUN RGB-D. For outdoor scenes, our method also surpasses Cube RCNN: more than 30% higher for nuScenes→KITTI and 15% higher for KITTI→nuScenes. The reason is that our Uni3DETR takes point clouds as input for 3D detection, while Cube RCNN takes RGB images. By leveraging 3D spatial information from point clouds, the superiority of a unified point cloud architecture over Cube RCNN is demonstrated.
We further emphasize that cross-dataset evaluation is a more difficult problem for point cloud based 3D object detection, as the dataset-interference issue is more serious for point clouds. Cube RCNN takes only RGB images as input, which spares it this issue and makes cross-dataset evaluation easier to conduct. In this work, we build a unified structure for point clouds, which can serve as the prerequisite and foundation of point cloud based cross-dataset experiments. The experimental results below demonstrate its potential for cross-dataset generalization. We believe our Uni3DETR can become a basic platform and facilitate related research.
Table 2-8: Cross-dataset performance on the indoor SUN RGB-D dataset compared with Cube RCNN
|Method|Trained on|AP3D-SUN|
| :--: | :--: | :--: |
|Cube RCNN|SUN RGB-D|34.7|
|Cube RCNN|OMNI3D_IN (containing SUN RGB-D)|35.4|
|Uni3DETR|SUN RGB-D|64.3|
|Uni3DETR|ScanNet|50.9|
Table 2-9: Cross-dataset performance on the outdoor KITTI and nuScenes dataset compared with Cube RCNN.
|Method|Trained on|AP3D-KIT|AP3D-NU|
| :--: | :--: | :--: | :--: |
|Cube RCNN|KITTI|37.1|12.7|
|Cube RCNN|nuScenes|20.2|38.6|
|Cube RCNN|OMNI3D_OUT|42.4|39.0|
|Uni3DETR|KITTI|83.8|19.4|
|Uni3DETR|nuScenes|54.2|57.3|
|Uni3DETR|OMNI3D_OUT|72.3|52.1|
**Q2: Train on indoor and test outdoor, train on outdoor and test indoor**
Since there are no overlapping categories between indoor and outdoor scenes, neither our Uni3DETR nor Cube RCNN can perform cross-dataset experiments from indoor to outdoor or vice versa. To date, no work achieves this goal, whether point cloud based or RGB image based. However, if such datasets were created, with overlapping categories between indoor and outdoor scenes, we expect our Uni3DETR would continue to demonstrate the corresponding potential.
**Q3: Your Omni3D numbers in Table 2-2 of the rebuttal differ from Table 5 of the Omni3D paper**
The reason is that Table 5 of Omni3D reports AP on KITTI with the Omni3D metric, while we adopt AP70 on the car category, the official KITTI evaluation metric. The 15.0% AP70 of Cube RCNN comes from Table 3 of Omni3D. If we use the same Omni3D metric, our performance on KITTI is 88.7%, as shown in the table below. This further demonstrates the effectiveness of our method.
Table 2-10: Comparison with Cube RCNN by training our Uni3DETR on the joint of KITTI and nuScenes front, then evaluating on the KITTI dataset.
| |AP-car|AP3D (Omni3D metric)|
| :--: | :--: | :--: |
|Cube RCNN (omni3D)|15.0|42.4|
|ours|65.3|88.7|
**Q4: It would also be good to quantitatively compare the flops, model size, training time, and inference time for Uni3DETR against the baselines in Table 3.**
Please refer to the above author's rebuttal to all reviewers. | Rebuttal 1:
Rebuttal: Dear all reviewers
We sincerely thank all reviewers for their valuable comments and suggestions. We first address the common concerns, followed by detailed responses to each reviewer separately. We hope our responses clarify the existing concerns and make these points clear.
**Q: Comparison of computational complexity against the baselines in Table 3.**
A:
* In this paper, we mainly target a unified structure. To ensure that the detector can accommodate both indoor and outdoor detection, we have, to a certain extent, sacrificed efficiency in order to prioritize its unified ability. We list the comparison of both performance and efficiency in the table below. The additional computational budget compared with these baselines is not significant: the inference time (latency) is almost the same as UVTR, and the FLOPS are only 1.16% more than UVTR. In addition, we obtain significantly better detection performance on both indoor and outdoor datasets: the indoor AP is 16.8% higher than UVTR, and the average of indoor and outdoor AP is 8.8% higher. Compared with VoxelNeXt, a model that mainly focuses on reducing the FLOPS of 3D detectors, we achieve more than 40% indoor AP improvement and more than 25% average AP improvement. Meanwhile, the increase in training time is also negligible: 2d 10h vs. 2d 7h for UVTR. This demonstrates that our model obtains better detection performance on both indoor and outdoor datasets with a relatively subtle increase in computational complexity.
* As the first work to build a unified architecture for both indoor and outdoor detection, we mainly focus on performance. To date, no other work achieves good performance on both indoor and outdoor datasets. Therefore, our Uni3DETR mainly addresses achieving good detection performance on both indoor and outdoor datasets with a unified architecture. For model efficiency, many general approaches exist; for example, we can adjust the number and ratio of sparse and dense convolution layers. We leave this as future work.
Table: The comparison of both performance and efficiency of our Uni3DETR against previous works on the indoor SUN RGB-D dataset and the outdoor nuScenes dataset. The metrics are AP25 for SUN RGB-D and mAP for nuScenes.
|method |performance | | | efficiency| | |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| |avg. |AP-indoor |AP-outdoor |latency (s) |params (M) |FLOPS (G)
|Centerpoint (CVPR 2021) |37.75 |18.9 |56.6 |0.32 |9.17 |121.10 |
|PillarNet (ECCV 2022) |44.00 |28.2 |59.8 |0.31 |12.55 |100.10 |
|VoxelNeXt (CVPR 2023) |39.30 |18.1 |60.5 |0.29 |7.12 |42.57 |
|UVTR (NeurIPS 2022) |55.55 |50.2 |60.9 |0.51 |26.12 |451.12 |
|ours |64.35 |67.0 |61.7 |0.52 |26.71 |458.74|
Pdf: /pdf/e54b843b73e60fd2557ecb8ed5a59599cc1dd378.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes Uni3DETR, a unified 3D detection transformer that addresses indoor and outdoor 3D detection within the same framework. The paper provides a specific analysis of the inconsistency in the structure of current indoor and outdoor scene detection models. Due to the differences in data distribution between indoor and outdoor environments, indoor datasets have a smaller range of point clouds, with objects being closer together and denser, occupying a larger portion of the point cloud. On the other hand, outdoor scenes have smaller and sparser objects, with background point clouds dominating the overall point cloud. Utilizing the mixture of query points, which consists of learnable and non-learnable queries, the detector is able to exploit global information for dense small-range indoor scenes and local information for large-range sparse outdoor scenes. Besides, a decoupled IoU is proposed, disentangling the xy and z spaces to make the localization target easier and faster to optimize.
Strengths: 1. The paper provides a detailed analysis of the inconsistent models of indoor and outdoor 3D detectors, with the difference in data distribution being the most important reason. The author explains how to design a unified 3D detector and solves the problem of inconsistent architectures between indoor and outdoor scenes.
2. The overall logic of the paper is clear, and it is well written.
3. Uni3DETR demonstrates strong generalization ability under heterogeneous conditions while maintaining comparable performance.
Weaknesses: 1. The experiments of model architectures are a bit insufficient. In line 99, the paper mentions, "Then, we convert the extracted sparse features into dense ones and apply 3D dense convolutions for further feature processing." However, no experiments show the performance gain brought by 3D dense convolution to support the claim. Furthermore, will the model architectures vary from datasets? For example, are the numbers of layers different for models trained on KITTI and S3DIS?
2. The author claims that learnable queries primarily capture the local information of the scene, and non-learnable queries usually concentrate more on global information. Although experiments show that combining different queries could bring better performance, there is no evidence in the paper (e.g., visualization or other analysis) to support the claim.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Decoupled IoU is only used in Uni3DETR, and the ablation study shows that this loss brings a significant performance improvement for the detector. However, the decoupled IoU loss was not applied to the other methods in the comparisons. Can decoupled IoU improve other detectors, such as GroupFree3D, and also bring significant performance improvements?
2. The learnable query mentioned in the paper lacks its initialization process, such as how to obtain $P_q$ and $C_q$ mentioned in line 107.
3. How about the generalization ability of feature extraction based on this structure for indoor scenes? For example, how it performs zero-shot testing by training on SUN RGBD and directly testing the trained model on ScanNet?
4. The author uses a mixture of query points to obtain detection results through different categories and a sufficient number of queries. Will this result in noticeable time consumption during inference?
5. It would be better to illustrate more visualization results to support the claim that the learnable and non-learnable queries have different capabilities.
6. According to the paper ConQueR: Query Contrast Voxel-DETR for 3D Object Detection: DETRs usually adopt a larger number of queries than GTs (e.g., 300 queries vs. ∼40 objects in Waymo) in a scene, which inevitably incurs many false positives during inference. Could this problem occur in Uni3DETR, since many more queries are used in inference?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It seems that the paper now provides a unified paradigm for solving 3D object detection tasks of indoor and out door scenes and do not use one model for testing on all the datasets. Furthermore, the model is separately trained with different hyper-parameters on different datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The experiments of model architectures are a bit insufficient.**
A1:
* We conduct the experiments on the SUN RGB-D dataset in Tab. 1-1 below. Dense 3D convolutions contribute a 6.3% AP25 improvement and a 10.1% AP50 improvement, demonstrating their effectiveness. Since sparse convolutions are applied only at occupied point positions in the 3D space, they lead to missing features at object center points. Dense convolutions help alleviate this problem and thus improve detection accuracy.
* The architecture of our model is totally the same for all different datasets, including indoor datasets SUN RGB-D, ScanNet, S3DIS and outdoor datasets KITTI, nuScenes. We are the first to build a unified 3D detector, which can perform both indoor and outdoor 3D object detection with the same architecture.
Table 1-1: The performance on the SUN RGB-D dataset with or without 3D dense convolution layers.
| | AP25 | AP50 |
| :--: | :--: | :--: |
|w/o dense conv | 60.7 | 40.2|
|w/ dense conv | 67.0 | 50.3 |
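As a toy illustration of the sparse-to-dense conversion discussed above: sparse voxel features live only at occupied coordinates, so scattering them into a dense grid lets standard dense 3D convolutions propagate information into empty locations such as object centers. All shapes and values below are illustrative, not the paper's actual sizes.

```python
import numpy as np

# Illustrative grid: C feature channels over a D x H x W voxel volume.
C, D, H, W = 4, 8, 8, 8

coords = np.array([[1, 2, 3], [5, 5, 5]])   # occupied voxel indices (z, y, x)
feats = np.ones((len(coords), C))           # per-voxel sparse features

# Scatter the sparse features into a dense zero-initialized grid;
# a dense 3D convolution could now run over the full volume.
dense = np.zeros((C, D, H, W))
dense[:, coords[:, 0], coords[:, 1], coords[:, 2]] = feats.T
```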
**Q2: There is no evidence in the paper (e.g., visualization or other analysis) to support the claim (the mixture of query points).**
A2:
* We provide visualized results for the mixture of query points in Section 2 of our supplementary material. With only learnable query points, the detector mainly leverages local information to detect objects in the scene. However, some objects, especially those partly occluded or with insufficient points, are easily ignored and missed: without enough points, the local information is insufficient for these objects to be detected. By providing global information, learning with non-learnable query points better accounts for the background. As a result, the detector obtains an overall understanding of the whole scene, and this global understanding helps the model find and detect these objects. Therefore, with the mixture of query points, more comprehensive detection results can be obtained.
* We also provide three more visualized results in Fig. 1 of the rebuttal PDF file to better illustrate the effectiveness of our mixture of query points. We will put such a visualized analysis in the final version of our paper.
**Q3: Can Decoupled IoU improve other detectors and also bring significant performance improvements?**
A3: We incorporate our decoupled IoU into VoteNet and GroupFree3D and conduct the experiments on the ScanNet dataset. As we can see, the decoupled IoU still improves the detection AP: by 7.2% for VoteNet and 2% for GroupFree3D, demonstrating that its easy-to-optimize property also helps positional information learning in previous detectors. However, the improvement is smaller than in our Uni3DETR. This is because our detector is based on the transformer structure, which is seriously affected by the scale-sensitivity issue of the L1 loss. Therefore, the decoupled IoU is more urgently needed in our transformer-based detector.
Table 1-2: Performance on the ScanNet dataset when applying our decoupled IoU loss to VoteNet and GroupFree3D.
| | Decoupled IoU | AP25 | AP50 |
| :--: | :--: | :--: | :--: |
|VoteNet | | 58.6 | 33.5 |
| | √ | 65.8 | 46.2 |
|GroupFree3D | | 69.1 |52.8|
| | √ |71.1 |53.8|
**Q4. The learnable query mentioned in the paper lacks its initialization process, such as how to obtain $P_q$ and $C_q$ mentioned in line 107.**
A4:
We initialize both the content query $C_q$ and the learnable query points $P_q$ from the standard normal distribution, i.e., we randomly initialize them with a commonly used Gaussian with zero mean and unit standard deviation. As these queries are learned during training, the initialization approach does not affect the final performance much.
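As a toy illustration of this initialization (the query count, feature dimension, and array layout below are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

num_queries, feat_dim = 300, 256  # illustrative sizes, not from the paper

# Content queries C_q and learnable query points P_q, both drawn from a
# standard Gaussian (zero mean, unit standard deviation); during training
# they would then be refined by gradient descent.
C_q = rng.standard_normal((num_queries, feat_dim))
P_q = rng.standard_normal((num_queries, 3))  # 3D coordinates
```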
**Q5. How about the generalization ability for indoor scenes? For example, how it performs zero-shot testing by training on SUN RGBD and directly testing the trained model on ScanNet?**
A5:
We perform inference on the ScanNet dataset with the SUN RGB-D trained model and compare it with two previous methods, VoteNet and GroupFree3D. The results show that Uni3DETR obtains satisfactory results in this zero-shot testing experiment: 57.4% AP25 and 38.6% AP50. Its zero-shot AP25 on ScanNet surpasses VoteNet by 4.9% and GroupFree3D by 6.2%.
Table 1-3: The performance on the ScanNet dataset of a SUN RGB-D trained Uni3DETR.
| | AP25 | AP50 |
| :--: | :--: | :--: |
| VoteNet | 52.5 | 31.7 |
|GroupFree3D|51.2|23.3|
|ours|57.4|38.6|
**Q6. The author uses a mixture of query points. Will this result in noticeable time consumption during inference?**
A6: As the 3D detection transformer operates on the downsampled 3D voxel feature, the computational budget of the whole detection transformer is actually not large. As a result, the mixture of query points barely affects the inference time. We conduct a complexity analysis of the mixture of query points in Tab. 1-4 below. Both the inference latency and the FLOPs remain nearly the same with the mixture of query points, demonstrating that it does not introduce much extra computation.
Table 1-4: Complexity analysis of inference latency and FLOPs for the mixture of query points.
| | latency (s) | FLOPs (G)|
| :--: | :--: | :--: |
|w/o mix query | 0.51 |452.23|
|w/ mix query |0.52 (+ 1.9%) |458.74 (+1.4%)|
**Q7: Could the large number query problem occur in Uni3DETR since many more queries are used in inference?**
A7: During training, queries matched to no object are assigned the background category through Hungarian matching and are trained toward a low objectness score in the classification layer. At inference, the confidence scores of most false positives are therefore low, so using many more queries does not introduce a notable false positive problem.
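The training-time assignment described above is standard DETR-style bipartite matching; below is a minimal brute-force sketch with invented cost values (not the paper's actual matching cost, which would also involve classification and box terms):

```python
from itertools import permutations

# Matching cost between 4 queries (rows) and 2 ground-truth objects (cols);
# lower is better. Values are invented for illustration.
cost = [
    [0.9, 0.2],
    [0.1, 0.8],
    [0.7, 0.6],
    [0.5, 0.9],
]

def hungarian_match(cost, num_gt):
    """Brute-force optimal one-to-one assignment of queries to ground truths.
    Returns a dict {query_index: gt_index}; unmatched queries are background."""
    num_q = len(cost)
    best, best_assign = float("inf"), None
    for perm in permutations(range(num_q), num_gt):
        total = sum(cost[q][g] for g, q in enumerate(perm))
        if total < best:
            best, best_assign = total, {q: g for g, q in enumerate(perm)}
    return best_assign

assign = hungarian_match(cost, num_gt=2)
# Queries not in `assign` are labelled background and trained toward a low
# objectness score, so at inference their confidence stays low.
background = [q for q in range(len(cost)) if q not in assign]
print(assign, background)
```

Real detectors solve this with the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) rather than enumeration, but the assignment result is the same.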
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period. | null | null | null | null | null | null |
Variational Inference with Gaussian Score Matching | Accept (poster) | Summary: The submission proposes a method for black-box variational inference that is based on score matching. The method follows an iterative procedure: at every iteration, the variational approximation is updated by first drawing a single sample and then updating the approximation such that the gradient of its log-density matches the gradient of the log-density of the target distribution at that sample. As, in general, several solutions to this constrained problem might exist, the proposed method additionally aims to minimize the forward KL divergence to the approximation of the previous iteration.
The paper shows that the optimal parameters are a stationary point if the optimal distribution can match the score everywhere. This can easily be seen from the constrained optimization problem, as the previous approximation trivially minimizes the KL divergence to itself while also satisfying the constraints.
The constrained optimization problem (minimizing the forward KL to the previous distribution while matching the gradients of the target) can in general not be solved in closed form, and the submission neither discusses how the algorithm could be implemented in the general setting nor evaluates it there. Instead, the work focuses on Gaussian variational approximations and shows that in this (quite relevant) special case, the constrained optimization problem can be solved in closed form, yielding a hyperparameter-free update.
The method is evaluated on several test problems and compared to a baseline that maximizes the ELBO using the reparameterization trick. The proposed method converges around two orders of magnitude faster than the reparameterization-trick baseline and often learns better or similar approximations. However, in particular for target distributions that are very different from Gaussians, the method suffers from high oscillations and does not seem to converge.
Strengths: Originality
-------------
The proposed method seems to be novel, and performing variational inference without explicitly minimizing a particular divergence could be interesting.
Clarity
--------
The method is described perfectly clearly. The paper was very easy to read and follow. I also like that the limitations (the method may not converge when the target distribution is more expressive than the variational approximation; the method does not minimize the KL or any other divergence) are clearly communicated.
Relevance
--------------
Gaussian variational inference is an important problem setting with various applications.
Weaknesses: Technical Soundness
-----------------------------
While the paper shows that the optimal parameters are a stationary point if the target distribution can be perfectly approximated, it does not provide any analysis for the setting where the approximation cannot perfectly approximate the target distribution. Given that the paper focuses on Gaussian variational approximations, this lack of analysis is a major shortcoming. Of course, without specifying a particular divergence it is not even clear which parameters would be "optimal" in the setting where the target distribution cannot be perfectly matched. However, it would be important to analyze whether there are any mild conditions for convergence, and which criteria the learned approximation fulfills (is there any, potentially obscure, divergence that is minimized?). Further, even in the setting where the target distribution can be perfectly approximated, the paper technically only shows that the optimal approximation is a stationary point; it does not prove that the method actually converges to that point or that there are no other stationary points. Actually, both can be shown straightforwardly from the fact that a Gaussian distribution has full support, but the current submission does not make this point, and, furthermore, the sample complexity might nevertheless be quite bad.
Related Work
------------------
The paper focuses on Gaussian variational inference, but misses the most relevant works in that area. Natural-gradient-based methods are known to significantly outperform methods based on the vanilla gradient (such as the BBVI/reparameterization-trick baseline used in the paper).
For Gaussian variational approximations, the natural gradient can be approximated extremely efficiently (almost as fast as the vanilla gradient) and enjoys faster convergence and, empirically, better exploration. There is a broad literature on natural gradient descent for Gaussian variational approximations based on zero-order, first-order, and second-order information:
- VIPS (Arenz et al., 2020) uses a modified version of a policy search method (called MORE), which is based on compatible function approximation. This approach uses least squares on samples from the current approximation to fit a quadratic model to the target distribution (it therefore only requires function evaluations, not a differentiable target distribution). It can be shown that the parameters of this quadratic model can be used to compute the natural gradient (also in closed form). Their work focuses on GMM approximations, but uses independent updates for the Gaussian components.
- Lin et al. (2019) show how first-order information can be used to approximate the natural gradient. They also consider the GMM setting, with independent updates for the Gaussian components. Their method estimates the natural gradient of a Gaussian variational distribution using Stein's lemma; the estimate is a simple linear function of the gradient of the log-ratio (between target and approximation) evaluated on samples from the approximation.
- VOGN (Khan & Nielsen, 2018) estimates the natural gradient using both gradients and Hessians of the joint distribution.
Furthermore, there are also other methods (not based on natural gradients) that can be used for optimizing Gaussian variational approximations. For example, when the Hessian of the target distribution is available, HFSGVI (Fan et al., 2015) or TRUST-VI (Regier et al., 2017) can be applied.
Evaluation
--------------
The evaluation is insufficient, as it lacks the most important baselines: VIPS and the method by Lin et al. (2019). Both methods are applicable to the problem setting of the submission, and as shown by Arenz et al. (2020, see Appendix K), even the zero-order method significantly outperforms the reparameterization baseline used in the submission.
Furthermore, the performance of the baseline seems to be worse than it should be. In the experiments of Section 3.1, where a Gaussian distribution is used as the target, the paper states that the reparameterization trick converges to suboptimal solutions. However, the ELBO between two Gaussians has a single stationary point, which is the global optimum; hence, the reparameterization trick should not converge to a suboptimal solution. I can imagine that the vanilla gradient can lead to crawling; however, when using Adam this should not happen, as momentum should build up. I suspect either badly chosen hyperparameters (too-large step sizes may lead to oscillations that prevent ELBO improvements) or problems with the implementation (numerical errors when computing the log-density of the Gaussians).
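To make the convergence argument concrete: a minimal reparameterization-trick BBVI sketch on a diagonal-Gaussian target (all settings illustrative; plain SGD rather than Adam) does reach the optimum given enough iterations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian target N(mu_star, diag(sigma_star^2)); the ELBO then has a single
# stationary point, the target itself.
mu_star = np.array([2.0, -1.0])
sigma_star = np.array([1.0, 1.0])

# Variational parameters: mean m and log-std s of a diagonal Gaussian q.
m, s = np.zeros(2), np.zeros(2)
lr, batch = 0.05, 16

for _ in range(2000):
    eps = rng.standard_normal((batch, 2))
    z = m + np.exp(s) * eps                  # reparameterized samples
    grad_z = -(z - mu_star) / sigma_star**2  # d log p(z) / dz
    # Pathwise ELBO gradients; the +1 is the entropy gradient dH(q)/ds.
    grad_m = grad_z.mean(axis=0)
    grad_s = (grad_z * eps * np.exp(s)).mean(axis=0) + 1.0
    m += lr * grad_m                         # gradient ascent on the ELBO
    s += lr * grad_s

print(m, np.exp(s))  # approaches mu_star and sigma_star
```

With a too-large step size the iterates instead oscillate in a noise ball around the optimum, which matches the suspicion about hyperparameter choice above.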
Minor Comment
---------------------
"In the experiments with synthetic models in Sections 3, 3.1, and 3.1 [sic]"
References
---------------
Arenz, Oleg, Mingjun Zhong, and Gerhard Neumann. "Trust-region variational inference with gaussian mixture models." The Journal of Machine Learning Research 21.1 (2020): 6534-6593.
Lin, W., Khan, M. E., & Schmidt, M. (2019). Stein's Lemma for the Reparameterization Trick with Exponential Family Mixtures. arXiv preprint arXiv:1910.13398.
Mohammad Emtiyaz Khan and Didrik Nielsen. Fast yet simple natural-gradient descent for variational inference in complex models. In 2018 International Symposium on Information Theory and Its Applications (ISITA), pp. 31–35. IEEE, 2018.
K. Fan, Z. Wang, J. Beck, J. T. Kwok, and K. Heller. Fast second-order stochastic backpropagation for variational inference. In Advances in Neural Information Processing Systems, NIPS'15, pages 1387–1395, 2015.
J. Regier, M. I. Jordan, and J. McAuliffe. Fast black-box variational inference through stochastic trust-region optimization. In Advances in Neural Information Processing Systems 30, pages 2399–2408, 2017.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Why does the reparameterization trick converge to suboptimal solutions when the target distribution is Gaussian? Can you provide code and additional details on hyperparameter tuning? Can you provide an explanation for this surprising result?
For changing my opinion, the paper also needs to discuss the relevant methods in this problem setting (above references), and ideally add a comparison to the work by Lin et al. (2019) or Arenz et al. (2020).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The approach does have serious limitations, mainly by not proving convergence. However, I already stated the limitations under weaknesses and I also think that the limitations are adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >It would be important to better analyze, if there are any mild conditions for convergence, and which criteria the learned approximation fulfills.
We agree with the referee that this is important but challenging to analyze, given that GSM does not explicitly minimize any divergence. Empirically, we have so far observed that GSM converges to solutions with the best forward KL divergence (and a reverse KL divergence similar to BBVI for non-Gaussian targets). Furthermore, as far as we know, there are no theoretical convergence guarantees for BBVI when the variational family is misspecified either; it is a difficult open question still under active research, years after the algorithm came into wide use.
> In the setting where the target distribution can be perfectly approximated, ..., the method actually converges to that point or that there are no other stationary points. Actually, both can be shown straightforwardly from the fact that a Gaussian distribution has full support, but the current submission does not make this point
We thank the referee for making this point. We have now updated Lemma 1 to clarify this.
> Natural gradient based methods are known to significantly outperform methods based on the vanilla gradient (such as the BBVI). For changing my opinion, the paper also needs to discuss the relevant methods and ideally add a comparison to the work by Lin et al. 2019 or Arenz et al. 2020.
We thank the referee for pointing to natural-gradient-descent-based VI as a baseline. We will certainly discuss these works and add this baseline for comparison in the revision. As requested, we have added a comparison to NGD-VI based on Lin et al. for a Gaussian target in Fig. 2 of the uploaded pdf. However, note that for full-rank Gaussians, NGD has cubic scaling $d^3$ with dimension, while GSM is quadratic $d^2$. We now summarize our implementation and results (also see the general answer above).
*Implementation*
- We use only first-order information for a fair comparison to GSM.
- The updates for the variational parameters $\mu_i$ and $\Sigma_i$ follow Eq. 16 of Lin et al. (arXiv:1906.02914), adapted to a single Gaussian instead of a GMM.
- Eq. 16 requires the Hessian of the ELBO. To approximate it with first-order information, we implement the VOGN update (Eq. 10 of Khan et al., arXiv:1806.04854), which uses a Gauss-Newton approximation for the Hessian of $\log p(\theta | x)$ and $\Sigma_i^{-1}$ for the Hessian of $\log q$.
*Results*
Fig. 2 of the attached pdf shows results for NGD-VI on a D=64 Gaussian target, varying the hyperparameters (learning rate and batch size). We find that
- As the referee pointed out, NGD-VI significantly outperforms BBVI.
- An "optimally tuned" NGD-VI can be competitive with GSM.
- However, the performance of NGD-VI is very sensitive to the hyperparameters. Larger batch sizes allow larger learning rates, so the two need to be tuned together; moreover, a large step size gets stuck oscillating around the minimum, so we additionally need to tune the learning-rate schedule.
- We could not experiment with optimizers like Vprop/Vadam from Khan et al. for the rebuttal, but will investigate those for the updated manuscript.
- For $D\geq256$, NGD-VI was prone to diverging for $lr>0.01$. To ensure convergence, we added an additional regularization parameter to the Hessian estimates in VOGN.
*Compared to GSM*
1) For VI with a full-rank Gaussian, GSM has quadratic complexity $d^2$, while NGD-VI is cubic $d^3$, as it requires estimating both $\Sigma$ and $\Sigma^{-1}$ at each iteration.
2) An optimally tuned NGD-VI can be competitive with GSM, but this requires tuning the learning rate, its schedule, and the batch size. In contrast, GSM has no learning rate to tune and is largely insensitive to batch size.
3) For large problems, NGD-VI needs a regularization hyperparameter for the Hessian in the VOGN updates. GSM has no such scaling issues.
Thus, based on our experiments, we argue that, compared to NGD-VI, GSM is faster in terms of gradient evaluations, has a smaller iteration complexity, and is easier to tune.
However we agree with the referee that it was a relevant baseline missing from the draft and would like to thank them for pointing this.
> BBVI should not converge to a suboptimal solution for Gaussians. Can you provide code and details on hyperparameter tuning?
We agree that BBVI will converge for Gaussian targets, and have now uploaded additional figures to show this. In particular, BBVI with a well-chosen learning rate and batch size will converge given a sufficient number of iterations. For instance, in Fig. 1 of the uploaded pdf, we show the performance of BBVI on a D=10 Gaussian target for different values of the hyperparameters, learning rate $lr$ and batch size $B$. In this case, BBVI with $B\geq4$ and $lr \leq 0.01$ does eventually converge to the same solution as GSM in terms of reverse KL (right panel, blue and orange lines). We hope this resolves the misunderstanding: we are not claiming that BBVI *never* converges to the correct solution for a Gaussian target; rather, it did not converge under our finite budget. With more iterations, BBVI eventually converges.
*Hyperparameters*: We fixed $B=2$ and then tuned $lr$ with a grid search, but we did not tune its schedule. As seen in Fig. 1, while larger step sizes converge faster at the beginning, they only converge to within a small radius around the optimum; thus one also needs to tune the step-size schedule. In Fig. 1 of the attachment, we now also tune the batch size $B$ of BBVI and find that GSM still outperforms BBVI. We will include this discussion and figure in the revision to give more context to the results of BBVI.
We have provided our code with the initial submission in the supplementary. We will make a package public with all 3 algorithms after the review process.
---
Rebuttal Comment 1.1:
Title: Baseline Comparisons still suboptimal
Comment: Thank you very much for the response and the additional experiments. However, the choice of VOGN for estimating the expected Hessian is quite suboptimal, because it only provides a biased estimate. Instead, Stein's lemma (see the citation to Lin et al. above) provides efficient unbiased estimates of the expected Hessian while relying only on first-order information.
I quickly tested a recent implementation of NGVI with Stein's lemma for GMMs (using the special case of a single component), and obtained an ELBO of 0 on the frgaussian.py task after 25 samples.
To reproduce, you can extend runs.py with the code block below. The new method run_gmmvi can be called from frgaussian.py (likely also from the other environments) in the same way as gsm/bbvi. However, the class does not save results and log output in the same way, but just uses the prints from the underlying framework (https://gmmvi.rtfd.io). The code is based on an example script provided at gmmvi.rtfd.io.
It takes only one parameter as input, --batch, which specifies the maximal number of samples drawn per iteration.
The code should run after `pip install gmmvi`. I did not run any further evaluations, so it would be interesting to compare performance on the whole test suite, also with respect to other divergences (e.g., forward KL).
I encourage the authors to improve the experiment section. Personally, I would not oppose publication of the work despite the limitations that I raised in my original review, as long as the limitations are well discussed (so far the submission did make a good job in that respect!). I do think that the submission provides interesting novel ideas. However, I think that currently the empirical performance compared to strong baselines is still overstated.
```
from gmmvi.gmmvi_runner import GmmviRunner
from gmmvi.configs import get_default_algorithm_config, update_config
import tensorflow as tf

def run_gmmvi(args, model, x0, modelpath, callback, samples):
    '''This function is based on the following example https://gmmvi.readthedocs.io/en/latest/get_started.html#using-the-gmmvirunner-with-custom-environments
    '''
    # For creating a custom environment, we need to extend
    # gmmvi.experiments.target_distributions.lnpdf.LNPDF:
    from gmmvi.experiments.target_distributions.lnpdf import LNPDF

    class Target(LNPDF):
        def __init__(self):
            super(Target, self).__init__(safe_for_tf_graph=False)
            self.model = model

        def get_num_dimensions(self) -> int:
            return self.model.d

        def log_density(self, samples: tf.Tensor) -> tf.Tensor:
            return self.model.log_prob(samples)

    # We can also use the GmmviRunner, when using custom environments, but we have
    # to put the LNPDF object into the dict. Furthermore, we need to define the other
    # environment-specific settings that would otherwise be defined in
    # the corresponding config in gmmvi/config/experiment_configs:
    environment_config = {
        "target_fn": None,  ## I somehow couldn't add Target() here, but I had to add it after merge_configs()
        "start_seed": 0,
        "environment_name": "GSMTARGET",
        "model_initialization": {
            "use_diagonal_covs": False,
            "num_initial_components": 1,
            # Does GSM/BBVI use the same initial Gaussian???
            "prior_mean": 0.,
            "prior_scale": 1.,
            "initial_cov": 1.,
        },
        "gmmvi_runner_config": {
            "log_metrics_interval": 1
        },
        "use_sample_database": True,
        "max_database_size": int(1e6),
        "temperature": 1.
    }
    algorithm_config = get_default_algorithm_config("SEMTRON")  # The recommended variant is SAMTRON, SEMTRON does not add additional components
    algorithm_config['sample_selector_config']['desired_samples_per_component'] = args.batch  # Only hyperparameter worth tuning?

    # Now we just need to merge the configs and use GmmviRunner as before:
    merged_config = update_config(algorithm_config, environment_config)
    merged_config['target_fn'] = Target()
    gmmvi_runner = GmmviRunner.build_from_config(merged_config)
    for epoch in range(100):
        gmmvi_runner.iterate_and_log(epoch)
    return None, None, None, None
```
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for providing detailed code, especially code that follows the format of our own submission. We were able to install the `gmmvi` package and use the code block to run the experiments along the lines you suggest.
The only parameter we changed in the code is the batch size, as that is the only free parameter in our GSM method. (Note this means we did not tune the learning rate or its schedule, but used the default settings.)
> I quickly tested a recent implementation of NGVI with Stein's Lemma for GMMs (using the special case of using one component). And I obtained an ELBO of 0 for the frgaussian.py task after 25 samples.
We assume this refers to the experiment with the default parameters in frgaussian.py in the submitted code, which corresponds to a Gaussian target with $D=4$. For this problem size, we are able to exactly replicate the referee's result with `gmmvi`. However, on this example our GSM-VI method also takes only ~20 iterations.
We then increased the problem size to $D=10, 20$ and found that GSM significantly outperforms the NGD implementation in the provided code. Specifically, NGD's performance is sensitive to the batch size and initialization, and some of the NGD runs diverged, mostly because the Hessian approximation of NGD becomes singular as the dimension increases. We have contacted the AC to see if we can share a new figure summarizing these results through them.
> I encourage the authors to improve the experiment section.
We will be happy to include results from NGD-VI, both from our own implementation and gmmvi package, in the revision of our work.
> However, I think that currently the empirical performance compared to strong baselines is still overstated.
In our revision, we agree to make our claims and statements regarding baselines more precise. Note, though, that we still found GSM-VI to consistently outperform NGD-VI (with both the Gauss-Newton and Stein's-lemma Hessian approximations) in our experiments. It is true that, when properly tuned, NGD-VI can be competitive with GSM, but we found it very sensitive to tuning parameters and harder to scale, since it approximates the Hessian with first-order information. Furthermore, the method remains cubic in computational complexity, while GSM is quadratic.
Title: New experiments with GMMVI | Summary: A new variational inference (VI) framework is presented that matches the score of the variational distribution q with that of the target posterior p. Specifically, closed-form score matching equations are derived for the Gaussian variational family, and the resulting method is named Gaussian Score Matching VI (GSM-VI). GSM-VI shows favorable results in multiple instances when its performance is compared to Black Box VI (BBVI) on a multivariate normal target, a sinh-arcsinh normal target, and real data from the posteriordb database.
Strengths: A simple, yet demonstrably fruitful, novel VI framework is presented. The paper is easy to digest; it follows a natural progression of ideas, and theoretical results are complemented with intuitive explanations. GSM-VI converges faster than BBVI, in terms of the number of gradient estimates, for several models and datasets. Furthermore, the paper presents clear scenarios where GSM-VI is favorable to BBVI, e.g., when the covariance matrix is ill-conditioned.
Weaknesses: The paper presents a novel VI approach, but the only instantiation of the framework in the paper is restricted to the Gaussian variational family, without any non-trivial theoretical results beyond this setting. Whereas VI provides techniques to find the optimal q within a variational family in terms of KL divergence to p, it is not clear how well score-matching VI performs in this respect (except when q is in the same family as p).
The paper lacks an explanation or deeper analysis of the large-amplitude oscillations seen in the experiments. The cause of variance in BBVI is well known, and research can therefore focus on taming it; I am not well acquainted with the score-matching literature, and to me these oscillations require an explanation.
Only reporting the forward KL (FKL) could be misleading when comparing GSM-VI to other VI methods whose optimization is based on the reverse KL. Reporting only the FKL could favor diffuse variational posteriors over mode-seeking, peaked variational posteriors (which VI methods notoriously produce); however, it is not clear that a diffuse variational posterior is to be preferred over a peaked one.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I don't concur with the use of the term "closed-form updates" in line 229, as GSM-VI relies on sampling to evaluate those update equations. This introduces variance into the updates, as opposed to CAVI updates for, e.g., the exponential family, where closed-form variational parameter updates can be derived without sampling.
I would be ready to raise my score if the rebuttal addresses the following points:
1. Nuancing the experiments on simulated data with a metric less disadvantageous of VI, e.g. reverse KL, for at least one data set.
2. Addressing the issue raised under weaknesses regarding oscillatory behavior.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I don't concur with the use of the term "closed-form updates" of line 229 as GSM-VI
By closed-form updates, we refer only to the fact that (4) has a closed-form solution for the Gaussian variational family given a generated sample; the explicit update equations are given in equations (5) and (6) (detailed in Theorem 2.2). We do not mean that the updates in (5) and (6) solve the global score matching equations in (3). We will clarify this distinction in the revision.
> I would be ready to raise my score if the rebuttal addresses the following points:
We thank the reviewer for this consideration. We have now addressed both these concerns regarding additional metrics in the experiments, and the presence of oscillations.
> Nuancing the experiments on simulated data with a metric less disadvantageous of VI, e.g. reverse KL, for at least one data set.
For the real-world problems from the posteriordb models, we already show results for the reverse KL divergence.
But we agree with the referee that, even for the synthetic models, showing the evolution of the reverse KL (the objective explicitly optimized by BBVI) in addition to the forward KL can be instructive. We have attached a version of this figure in the uploaded pdf in the general response for a 16-dimensional Gaussian target (see Figure 1). In this figure, we also present results for varying BBVI hyperparameters, i.e., the learning rate and batch size. They clearly show that, for Gaussian targets, BBVI can converge to a solution of the same quality as GSM but requires a much larger batch size and small learning rates (only $B>4$, $lr<0.01$ converged within the 5000 iterations). Hence the conclusions remain unchanged from the forward KL metrics presented in the original manuscript. Please see the attached pdf and the discussion in the "Author Rebuttal" at the top for more details.
We will add this discussion, and results showing the reverse KL for the other examples, to the supplementary material of the revised manuscript.
> Addressing the issue raised under weaknesses regarding oscillatory behavior.
Through the additional experiments in the attached pdf, we believe we now better understand the oscillatory behavior the referee points out, and we summarize it in the following points:
- When the variational family is well specified, i.e., the target distribution is Gaussian or can be fit by one, GSM shows no oscillations for any batch size and converges to an optimal solution.
- When the target is non-Gaussian, the monitored KL divergence oscillates. It turns out this was because we used a batch size of B=2 for GSM in all experiments; with larger batch sizes, these oscillations are easily suppressed. We have attached a figure (see Figure 3) in the pdf. In its right panel, we also show the marginal histograms of a parameter at every 200th iteration over the last 1000 iterations of a GSM run with batch size 8, which show that the distribution has indeed converged. A detailed discussion of this figure is presented in the general answers section above.
We would also like to point out that such oscillations in the KL divergence are also present in BBVI. Any stochastic optimization can dampen them by scheduling the learning rate to decay to 0, as is often done in BBVI. While we have chosen not to do this for GSM, the same procedure can be applied to dampen GSM's oscillations.
---
Rebuttal Comment 1.1:
Title: response rebuttal
Comment: Thank you for your detailed response regarding the concerns in my review. The added experiments and discussion sufficiently address the concerns and provide further empirical evidence for the soundness of the paper, and so I will raise my score to a 7. | Summary: This paper proposes a novel alternative optimization strategy for approximate Bayesian inference in statistical modeling.
The starting point of the proposed score-based VI approach is the realization that two distributions are the same if the derivatives of their log-densities are the same almost everywhere.
This principle is used to derive an objective to match the gradient of the log-density of the posterior with the gradient of the log-density of the approximate variational distribution.
The objective is set up in a way so as to promote a minimal change in the approximate variational distribution such that the scores are matched on a set of samples drawn from it.
Interestingly, in the Gaussian case this minimization has closed form.
This work is inspired by previous literature on Passive-Aggressive learning.
Strengths: I believe that alternative optimization strategies for VI are an important area of research and, being very familiar with VI and unfamiliar with the literature on PA learning, I find the proposed idea quite original and well realized.
Someone may disagree that the idea is novel due to its derivation from PA learning, but in my opinion convincingly showing that this yields an effective strategy to obtain an approximation to the posterior distribution makes a good contribution to the literature.
The results are impressive, showing good performance compared to standard optimization (black-box VI) at a fraction of the number of gradient evaluations.
Weaknesses: One weakness is that the work focuses heavily on the case when the approximation is Gaussian due to the closed-form solution of the proposed optimization strategy.
It would have been nice to get more insights into other approximations, e.g., Normalizing Flows.
Having said that, I believe that the Gaussian case is important and I understand that it was necessary to extensively study it in this version of the paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The paper is very well written. However, I think that a bit more intuition on the algorithm with Eq 4 as objective would be ideal. At the moment, it seems a bit disconnected from Eq 3 - why would you want to minimally adjust q under that score-matching constraint? A more comprehensive explanation could help here.
Also, even though it is written in some places, I think it is important to constantly remind the reader that there is no longer the traditional ELBO in the formulation, and that the optimization problem is really a different thing altogether.
I wonder about possible situations in which the optimization of Eq 4 could reach a bad local optimum. For example, if the initial q has a variance which is too low, is it possible that the samples do not have enough diversity to allow for a proper coverage of the score-matching objective? As a result, the algorithm could stop with scores matched only in a narrow region of the space with low posterior density.
I think that some insights into the behavior of the optimization wrt some of the choices that can be made in the initialization would add a lot to the revised version of the paper.
I was intrigued by the comments on the cases when the number of variational parameters is less than the number of parameters/latent variables. A bit more intuition on what is going on in these cases could also be useful for the revised version. And perhaps ways in which the proposed approach could be extended to handle these cases, which may not be that uncommon (e.g., a small hyper-net to generate large sets of parameters of a neural net).
I think that the experimental campaign is solid. However, it would have been really nice to try the proposed score-matching VI on largely parameterized models where the ELBO is known to be problematic. I wonder whether this could mitigate these effects due to the overparameterization (e.g., [1]).
[1] S. Rossi, P. Michiardi, and M. Filippone. Good Initializations of Variational Bayes for Deep Models. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, USA, 2019.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I haven't seen any specific text on the limitations of the approach and I think this could be included in the revision.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper is very well written. However, I think that a bit more intuition on the algorithm with Eq 4 as objective would be ideal. At the moment, it seems a bit disconnected from Eq 3 - why would you want to minimally adjust q under that score-matching constraint?
We can certainly add some more intuition. We know now from Lemma 1 that if Eq 3 holds for every $\theta$, then our variational family has found a perfect fit. Any practical method based on this observation can however only sample a batch of $\theta$’s at each iteration. When updating our variational parameters to satisfy the score matching equations over a given batch, we need to be careful that in the remainder of the domain (outside the batch), our variational family remains a reasonable fit. One way to ensure this is to always make the smallest possible change in our variational family to fit the score matching constraints over the current batch. This ensures that for the rest of the domain that has been explored before, the variational family will still be an approximately good fit. Thus the need to minimally adjust $q$
is because we only have partial information (a batch) of the constraints in (3). This approach is called the “least change” approach in quasi-Newton methods [Goldfarb] or the “passive-aggressive” approach in PA methods [Crammer].
[Crammer] Online Passive-Aggressive Algorithms (JMLR 2006) \
[Goldfarb] Goldfarb, D. (1970), "A Family of Variable Metric Updates Derived by Variational Means", Mathematics of Computation, 24 (109): 23–26.
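The "least change" idea can be illustrated on the simplest possible case: update a parameter vector to satisfy one linear constraint while moving as little as possible, i.e. a Euclidean projection onto a hyperplane (the classic passive-aggressive step). This is only an analogy, not GSM itself, which instead minimizes a KL divergence between Gaussians under score-matching constraints:

```python
import numpy as np

def least_change_update(w, x, y):
    """Project w onto the hyperplane {v : v @ x == y}: the smallest
    Euclidean change to w that satisfies the linear constraint."""
    residual = y - w @ x
    return w + (residual / (x @ x)) * x

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # current parameters
x = rng.normal(size=5)   # constraint direction (e.g. one observed "score")
y = 2.0                  # value the constraint must attain
w_new = least_change_update(w, x, y)
```

The update satisfies the new constraint exactly while leaving the component of `w` orthogonal to `x` untouched, mirroring the idea that the fit over previously explored regions should be disturbed as little as possible.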
> Though it is written in some places, I think it is important to constantly remind the reader that there is no longer the traditional ELBO in the formulation, and that the optimization problem is really a different thing altogether.
We thank the referee for this suggestion and will put more emphasis on this in the revision.
> If the initial q has variance which is too low, is it possible that samples have not enough diversity to allow for a proper coverage of the score-matching objective - and as a result this could result in the algorithm stopping with scores matched in a narrow region of the space with low posterior density. I think that some insights into the behavior of the optimization wrt some of the choices that can be made in the initialization would add a lot to the revised version of the paper.
We agree with the referee’s intuition that mode collapse is a possibility in our algorithm. However this is not necessarily due to score matching, but rather due to generating samples from the variational distribution itself. This is also an issue with BBVI, but unlike BBVI, sampling in this way is not a requirement for GSM, only a convenience. If we have access to another distribution that covers the domain (e.g., a prior distribution) and allows efficient sampling, we can always generate samples from it.
This being said, we did investigate different initializations for both Gaussian and non-Gaussian targets. These included starting from a standard normal; a random Gaussian with broad or narrow covariance; the mode, with either an identity covariance or an LBFGS approximation for the inverse covariance matrix; and the resulting approximation of the Pathfinder algorithm. \
In all these cases, we always found the same global solution for GSM. The only setting where the GSM solution depended on initialization was when the target distribution is multi-modal with well-separated modes, in which case GSM generally converged to the nearest mode. However this is a known issue with almost all inference algorithms. We can elaborate on this in the revised manuscript.
> Comments on the cases when the number of variational parameters is less than the number of parameters/latent variables. A bit more intuition on what is going on in these cases could also be useful for the revised version. And perhaps ways in which the proposed approach could be extended to handle these cases, which may not be that uncommon (e.g., a small hyper-net to generate large sets of parameters of a neural net).
Our setting is Gaussian families where there are $d^2+d$ variational parameters and $d$ latent variables. Thus the number of variational parameters is always greater than the number of parameters/latent variables. In the general setting, for non-Gaussian variational families that have fewer parameters than latent variables, this score matching approach is not possible. But we are unaware of any practical examples where such a situation arises. Thus we now find that discussing this in our paper was a digression, and as such we will either remove this discussion or expand upon it to make it clear. We also apologize that we are not familiar with this setting of “small hyper-net to generate large sets of parameters of a neural net”; we welcome a reference and are open to considering this during the discussion phase.
> The experimental campaign is solid. However, it would have been really nice to try the proposed score-matching VI on largely parameterized models where the ELBO is known to be problematic. I wonder whether this could mitigate these effects due to the overparameterization.
In this paper, we have focused only on the full-rank Gaussian variational family which cannot be extended to these highly over-parameterized models like deep networks. In the future, we plan on studying applications for mean-field Gaussian families (updates for which can be trivially implemented), and low-rank approximations. Both of these are better suited in the setting that the referee has pointed out. In that regard, we thank the referee for the reference and pointing to an interesting direction of research.
> I haven't seen any specific text on the limitations of the approach and I think this could be included in the revision.
We have discussed some limitations throughout the paper, especially in the conclusion and future work section, but will be happy to collect them in one sub-section in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Many thanks for your response - I don't think I will need any further clarifications.
Having said that, after reading the other reviews, I wonder about the impact of the results with respect to BBVI with natural gradients. I'm looking forward to hearing the opinion of Reviewer CVEd to your response and to the discussion among reviewers.
---
Reply to Comment 1.1.1:
Comment: We thank the referee for their time and comments.
> I wonder about the impact of the results with respect to BBVI with natural gradients. I'm looking forward to hearing the opinion of Reviewer CVEd to your response and to the discussion among reviewers.
Based on the experiments with NGD-VI, both using our implementation with a Gauss-Newton approximation of the Hessian (VOGN updates) and the new package `gmmvi` that the reviewer CVEd pointed us to in the discussion, we find that GSM-VI consistently outperforms NGD-VI. Specifically, we find that while a properly tuned NGD-VI can be competitive with GSM in smaller dimension, it is very sensitive to parameter tuning and it does not scale well with respect to the dimension. This is on account of its Hessian approximation which becomes singular and unstable as the dimension grows. Furthermore, for both variants of NGD-VI, the method remains cubic in computational complexity, while GSM is quadratic. | Summary: The paper proposes score matching as a new approach to (black box) variational inference (BBVI) where the variational family is Gaussian.
The usual way is to minimize the KL divergence (or equivalently to maximize the ELBO) using stochastic gradient descent (SGD) to update the variational parameters.
Instead, score matching imposes the constraint that the gradient of the log joint equal the gradient of the logarithm of the variational distribution for all parameter values.
The update step picks the new variational parameters to minimize the KL divergence between the new variational distribution and the previous variational distribution, under the score matching constraint at parameter values that are sampled from the previous variational distribution. The paper proves that this update step has a closed form solution if the variational family is Gaussian and can be computed in $O(d^2)$ time assuming the gradients can be computed in $O(d)$ time where $d$ is the dimension of the distribution.
The experimental evaluation compares the new Gaussian score matching VI (GSM-VI) with standard BBVI. If the target distribution is Gaussian, it finds that the number of iterations needed for convergence scales linearly with the dimension for GSM-VI, but worse for BBVI. It also investigates the effect of the condition number of the covariance matrix of the Gaussian target distribution and finds that it does not affect GSM-VI, but BBVI does not perform well if the condition number is large. Using a sinh-arcsinh normal distribution, the paper looks into what happens as the target distribution departs from being Gaussian. If the target is close to a Gaussian, then GSM-VI continues to require fewer gradient evaluations than BBVI, converging to a similar solution. If the target is far from being Gaussian, GSM-VI does not converge and can experience larger oscillations than BBVI. Finally, GSM-VI and BBVI are compared on real-world data from the posteriordb repository. In most of them, GSM-VI outperforms BBVI by a factor of 10 to 100 in terms of number of gradient evaluations.
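To make the "closed form in the Gaussian case" concrete: the score of a Gaussian target is affine in $\theta$, so matching scores at a handful of points determines the target exactly. The sketch below is a one-shot least-squares fit under that observation, not the paper's iterative KL-minimal update; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Ground-truth Gaussian target N(mu_true, cov_true)
mu_true = rng.normal(size=d)
L = rng.normal(size=(d, d))
cov_true = L @ L.T + d * np.eye(d)   # symmetric positive definite
prec_true = np.linalg.inv(cov_true)

def target_score(theta):
    """Score of the Gaussian target: grad_theta log p(theta)."""
    return -prec_true @ (theta - mu_true)

# The Gaussian score is affine: g(theta) = b + A @ theta with A = -cov^{-1},
# so scores at d+1 generic points pin down the target. Fit [A | b] by least squares.
n = d + 1
thetas = rng.normal(size=(n, d))
scores = np.stack([target_score(t) for t in thetas])
X = np.hstack([thetas, np.ones((n, 1))])           # (n, d+1) design matrix
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)  # (d+1, d) coefficients
A, b = coef[:d].T, coef[d]
cov_fit = -np.linalg.inv(A)   # recover covariance from A = -cov^{-1}
mu_fit = cov_fit @ b          # recover mean from b = cov^{-1} @ mu
```

The fitted mean and covariance match the target exactly (up to floating point), which is the degenerate one-step version of the behavior the paper reports for Gaussian targets.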
Strengths: The score matching idea is a novel approach to BBVI. It is a simple idea that is communicated clearly in the paper and shown to be very effective. It is clearly interesting and relevant to the variational inference community. The code for the experiments is available, which helps with the reproducibility of the results.
Weaknesses: The main weakness of the paper is that it is almost purely empirical and provides no explanations for its (strong) empirical result: why is GSM-VI so effective? Why does it scale so well with the dimension? Why is a batch size of 2 best? (This seems particularly odd. I would expect batching to either provide no benefits or the benefit to increase with larger batch size.) Without even an attempt at an explanation, the empirical results almost seem too good to be true.
A potential weakness in the experimental evaluation is that the performance is always measured in terms of number of gradient evaluations, not actual running time. Given that the update step in GSM-VI takes $O(d^2)$ time, the paper should report the actual measured running times as well. It is also unclear if the quadratic update step will worsen GSM-VI's performance for very high-dimensional problems.
The experimental evaluation should include more details: how was GSM-VI implemented? Is it built on existing VI implementations? What kind of system was it run on? Not all (hyper-)parameters for the experiments were reported: what is the dimensionality of the real-world benchmarks? What was the batch size for BBVI? How were the posteriordb examples selected? And so on.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: These questions overlap with what I wrote under "Weaknesses". I copy them here.
- Why do you think GSM-VI is so effective and scales so well?
- What are the measured running times for the experiments (since the update step takes $O(d^2)$ time)?
- In a similar vein, does the $O(d^2)$ update step have a negative effect for very high-dimensional problems? Does BBVI overtake GSM-VI again for high dimensions in terms of running time?
- What was the batch size for BBVI in the benchmarks?
- What's the dimensionality of the posteriordb examples? How were they picked?
- Regarding figure 4: it would be interesting to see what happens for settings where both $s \ne 0$ and $t \ne 1$. Did you try this?
I will update my rating if the author's answers address my concerns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The paper mentions some limitations in the text, but they are not collected in one place. It would be helpful if the paper elaborated on the limitations in a separate section/paragraph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I will update my rating if the author's answers address my concerns.
We thank the referee for their thoughtful reviews and kind consideration. Below, we address the weakness and questions raised above and will include this discussion in the revised manuscript.
> Why do you think GSM-VI is so effective and scales so well?
There are 3 intuitions that help explain why GSM-VI is more efficient than BBVI:
1) GSM-VI does not rely on Taylor approximations. BBVI is a stochastic gradient method and hence relies on the 1st order Taylor approximation of the objective function. In contrast, GSM-VI computes the exact projection onto the score matching constraint.
2) BBVI requires tuning a learning rate and its scheduling. In contrast, GSM-VI does not have a learning rate parameter. Instead, it is able to adaptively make large jumps in the initial iterations (see Fig. 1.a in the paper) and make smaller adjustments as the approximation converges to the target.
3) BBVI relies on a scalar signal, that is it attempts to increase the ELBO (scalar function) by using the steepest ascent direction (gradient). GSM-VI instead uses $d$ equations at each iteration to determine the update, one for each element of the score. Since the number of constraints increases linearly with dimensions, we believe it partly explains why GSM-VI scales well with dimension.
> Why is a batch size of 2 best? This seems odd.
The referee's intuition is correct. Our understanding of the impact of batch size has now improved with more experiments:
- For Gaussian targets, GSM performs equally well for all $B\geq1$. There are minor gains for larger batches as the dimensionality increases $d\gtrapprox100$, but these are marginal enough that B=2 is a good conservative default.
- For non-Gaussian targets, we now recommend a larger batch size. We find that these converge to a more stable solution, i.e. smaller oscillations in KL divergence. We show this in Fig. 3 of the above attached pdf. We found that $B \geq 8$ suffices for $d\lessapprox100$.
> Performance is always measured in terms of number of gradient evaluations. The paper should also report actual running times.
There are 2 reasons we measure performance in gradient evaluations:
1. Actual running time is very sensitive to actual implementation details.
2. Often in real-world problems, the computational bottleneck is evaluating the target log density and its gradient (for instance it can involve solving complex ODEs). Hence we focus on minimizing these evaluations.
However we acknowledge that the referee raises a good point. We have now timed GSM and BBVI updates for fitting a full rank Gaussian target with 2048 dimensions using batch size of 1 in Jax after JIT-compilation. GSM update takes 230 ms/iteration while BBVI takes 227 ms/iteration.
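A per-iteration timing like the one quoted could be measured with a harness of the following shape. The update function here is a placeholder with a quadratic cost profile, not the actual GSM or BBVI update; in a JIT-compiled setting the warm-up calls would also absorb compilation time:

```python
import time
import numpy as np

def time_per_iteration(update_fn, state, n_warmup=10, n_timed=100):
    """Average wall-clock seconds per call, measured after warm-up calls
    that absorb one-time costs (e.g. JIT compilation in a real run)."""
    for _ in range(n_warmup):
        state = update_fn(state)
    start = time.perf_counter()
    for _ in range(n_timed):
        state = update_fn(state)
    return (time.perf_counter() - start) / n_timed

# Placeholder update over (mu, cov) with O(d^2) work per call
d = 256
def dummy_update(state):
    mu, cov = state
    return mu * 0.999, cov + 1e-4 * np.eye(d)

avg = time_per_iteration(dummy_update, (np.zeros(d), np.eye(d)))
```

Reporting the average over many post-warm-up iterations, as above, avoids being dominated by one-off setup costs, which matters when comparing two implementations whose per-iteration times differ by only a few percent.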
> Will the quadratic update step worsen GSM-VI's performance for high-dimensional problems? Does BBVI overtake GSM?
In this paper, we only consider the Gaussian variational family with full-rank covariance matrix and hence both BBVI and GSM have the same $O(d^2)$ complexity. This complexity is unavoidable since we need to store and update the $d \times d$ covariance matrix.
Thus based on our timing test for the D=2048 problem above, and the fact that GSM requires $\sim10$x fewer gradient evaluations for almost all examples considered, we are confident that BBVI will not overtake GSM-VI as the dimension increases.
> What was the batch size for BBVI in the benchmarks?
We fixed the batch size of BBVI to 2 for all experiments, same as GSM, to keep the two algorithms on equal footing. Since we did not tune the batch size of GSM for individual experiments, we did not vary the batch size of BBVI either (however we did tune the learning rate of BBVI). In Figure 1 in the uploaded response pdf, we now show results for varying both the batch size and learning rate of BBVI and find that GSM-VI still outperforms BBVI in all cases. Please see the general answer for more details.
> How was GSM-VI implemented? What kind of system was it run on?
Our implementation of GSM-VI was in TensorFlow, but now we have a Jax package that will be released after the review process. We will also implement GSM in the BlackJax package. Also note that if one has access to the score function of the target, GSM updates can be written in native Python without any ML or auto-diff library. All the experiments were done on a single core of a personal desktop CPU.
> How were the posteriordb examples selected? What's their dimensionality?
We compared GSM & BBVI on all 50 models in PosteriorDB and found GSM to outperform on all of them. For brevity, we chose 8 of these models for the paper based on two considerations: (i) each model represents a different class of statistical models, with different complexities, and (ii) we have an equal representation of both Gaussian and non-Gaussian targets.
All the models are low-dimensional which allows us to look at marginal and joint distributions of parameters and thus go beyond only comparing KL divergences in evaluating different algorithms.
A similar choice of models was made by a recent paper on the Pathfinder algorithm for fitting a Gaussian variational distribution (Lu et al., JMLR, 23(306):1–49, 2022).
Classes and dimensionality of the 8 models used are: generalized linear models (d=26), differential equation dynamics (d=8), hierarchical meta-analysis with centered & non-centered parameterization (d=10), hidden Markov models (d=8), a time-series model (d=7), Gaussian processes (d=13), and a Gaussian mixture model (d=5).
> Figure 4: what happens for settings where both $s\neq0$ and $t\neq1$?
We did experiments varying both skewness and tail-weights at the same time for different dimensions and found that in all our experiments, GSM with batch $B=2$ always converged to a similar solution as BBVI, but faster. Hence for the sake of clarity, we separated these two axes and showed them separately. An example of the $s\neq0$ & $t\neq1$ case is in Fig. 3 of the attached pdf.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the thorough response. You have addressed essentially all my concerns, so I'm raising my rating from 5 to 7.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and comments, and kind consideration in raising the score. | Rebuttal 1:
Rebuttal: We thank all the referees for their thoughtful reviews. Based on comments from different reviewers, we have now compiled three new figures, from some old experiments and one entirely new one. Please see the attached pdf. These serve to answer some of the questions raised below and so we begin by discussing these figures here as a preamble. We will add these figures and discussion to the revised manuscript.
**Figure 1. Convergence of BBVI:**
In response to Reviewer **DLrt**'s question we have included an experiment tracking the reverse KL.
In Fig. 1 of the attached pdf, we fit a full-rank Gaussian target distribution of 16 dimensions with BBVI and GSM. Here we monitor the reverse (backward) KL divergence, which is explicitly minimized by BBVI, and show results from a hyper-parameter search over both parameters of BBVI: the learning rate and the batch size. The top row shows reverse KL (y-axis) on a linear scale and the bottom row on a log scale for clarity. We highlight two key takeaways from the figure:
1) Given enough computational budget, BBVI does converge to the same quality of solution as GSM, in the current example for larger batch sizes ($B\geq 4$) and smaller learning rates $lr \leq 0.01$ (see blue and orange lines in last two panels). However its performance is quite sensitive to the hyperparameters. The figure also suggests that performance of BBVI can be improved by scheduling the learning rate, but this will require a greater hyperparameter search.
2) Even for optimally tuned batch size and learning rate, GSM significantly outperforms BBVI and requires orders of magnitude fewer gradient evaluations for convergence, even in terms of the reverse KL, which is the metric explicitly minimized by BBVI.
We will include similar figures for other synthetic experiments in the revised version of the paper.
**Figure 2. Natural Gradient Descent (NGD)-VI**
We have added natural gradient descent VI for maximizing ELBO as a new baseline for comparison with GSM since it has been shown to significantly outperform BBVI. We indeed find this to be the case in our experiments. However as shown in Fig. 2, in our regimes of interest, we find that even compared to NGD-VI, GSM is faster in terms of gradient evaluations, has a smaller iteration complexity, and does not require tuning for variational inference with a full-rank Gaussian distribution.
*Implementation of NGD-VI:* We implemented the updates to variational parameters $\mu$ and $\Sigma$ based on Eq. 16 of Lin et al. (arXiv:1906.02914) (adapted for a single Gaussian instead of a mixture model by setting $\delta_c=1$). These updates however require the Hessian of the objective function (ELBO). To approximate this with only first-order gradient information for comparison with GSM, we have used the VOGN update (Eq. 16 of Khan et al., arXiv:1806.04854). This approximates the Hessian of the likelihood term $\log p(\theta | x)$ with a Gauss-Newton approximation and combines it with the correct Hessian of the entropy term $\log q(\theta)$, namely $-\Sigma^{-1}$.
Figure 2 shows results of fitting a 64-dimensional Gaussian target with GSM and NGD-VI while varying its hyperparameters: the learning rate and the batch size. We monitor the reverse KL, which is explicitly optimized by NGD-VI, on both linear (top) and log-scale (bottom row) y-axes for clarity. We find that:
1) While an optimally tuned NGD-VI can be competitive with GSM, its performance is very sensitive to tuning the hyperparameters. This will require not only tuning the batch size and learning rate, but also the scheduling of learning rate. In comparison, GSM has no learning rate parameter to tune.
2) For large dimensions $d \geq 256$, Gauss-Newton approximation for Hessian in VOGN algorithm starts to diverge. This can possibly be corrected by using a regularization hyper-parameter for Hessian, or a different Hessian approximation, but both these require more tuning.
GSM does not face any such issues in scaling to high dimensions.
3) Finally, we also note that for a full-rank Gaussian variational family, every iteration of GSM has a quadratic complexity $d^2$ as compared to cubic $d^3$ of NGD-VI which requires one to estimate both $\Sigma$ and $\Sigma^{-1}$ for each update.
Based on these results, we conclude that GSM-VI outperforms NGD-VI in our regimes of interest. We will add this baseline to other experiments in our revisions.
**Figure 3. Batch size, GSM-VI and non-Gaussian targets**
We have run more experiments with GSM-VI for different non-Gaussian targets and varied the batch size to better understand its convergence. Figure 3 summarizes our findings. In it, we compare the performance of GSM-VI on a synthetic non-Gaussian target (D=10) for different batch sizes (B=2, 8, 32) with BBVI (B=8, 32). We show the same forward KL divergence as the paper (left panel), marginal histogram for one of the parameters for all algorithms/configurations (middle panel) and additionally show the same marginal histogram for every 200th iteration in the last 1000 iterations of GSM-VI with B=8 to demonstrate that the distribution has indeed converged (right panel).
We find that larger batch sizes of GSM ($B \geq 8$) lead to a more stable solution than both GSM with $B=2$ and BBVI, i.e. they have smaller (or no) oscillations in the forward KL metrics. GSM also converges to a point with a better forward KL divergence. This is also reflected in the marginal histograms which have larger dispersion for GSM than BBVI. Thus GSM does not lead to mode-seeking behavior. This can be desirable, for instance, as it means GSM posteriors can be corrected with importance sampling more easily.
Pdf: /pdf/9479c82ae6a5cd9be0963b436f6f7690b91382c9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Inconsistency, Instability, and Generalization Gap of Deep Neural Network Training | Accept (poster) | Summary: This manuscript proposes new notions of inconsistency, instability, and information-theoretic instability based on the output confidence score to estimate the generalization gap of deep neural networks. Theoretical and empirical results are presented and show that the proposed notions, especially inconsistency, correlate well with the generalization gap of well-trained neural networks.
Strengths: 1. Three novel notions are proposed to measure the generalization gap of neural networks;
2. Extensive experiments have been conducted to verify the good correlation of instability and inconsistency with the generalization gap of neural networks;
3. The manuscript is well organized and the writing is clear;
4. The empirical study shows a better correlation for inconsistency than for disagreement.
Weaknesses: 1. The novelty of the proposed measurements is limited:
- The Inconsistency and Instability notions are very similar to the definition of disagreement; the former replace the output from one-hot predictions with softmax confidence scores.
- The Instability of model parameter distributions ($\mathcal{I}_P$) can be regarded as a kind of algorithmic stability. Therefore, I suggest the authors provide a discussion on the differences or advantages w.r.t. former definitions of algorithmic stability.
2. Marginal contributions on the theoretical results (Theorem 2.1):
- The upper bound in Theorem 2.1 is not a uniform convergence-based generalization bound, does not show the relation to the training sample size, and may be loose, which greatly undermines the significance of the theoretical results;
- The right hand of the given upper bound is hard to estimate due to the existence of the Instabiliyt of model parameter distributions $\mathcal{I}_P$, although In consistency $\mathcal{C}_P$ and Instability $\mathcal{S}_P$ can be convenient to estimate on unlabeled data.
Based on above weaknesses and considering the marginal contributions on the proposed notions and theoretical results, I think this work is slightly below the acceptance bar.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The Inconsistency and Instability are very similar to the definition of disagreement; the former notions replace the one-hot predictions in the output with softmax confidence scores.
We find it interesting that in spite of the similarity in the definitions, they behave quite differently, as shown in the submission as well as the one-page pdf attached to the global response. The theoretical justifications are also quite different -- the disagreement study shows that disagreement=test error (not the loss gap like our theorem) if the training procedure produces well-calibrated ensembles. At a high level, we believe that it would be useful to study the connection between generalization performance and discrepancies of model outputs in general (whether instability, inconsistency, disagreement, or else), and considering the clear observable differences from disagreement, we feel that it is useful to study instability/inconsistency as well as disagreement.
> The Instability of model parameter distributions (${\mathcal I_P}$) can be regarded as a kind of algorithmic stability. Therefore, I suggest the authors provide a discussion on the differences from, or advantages over, earlier definitions of algorithmic stability.
In our view, ${\mathcal I_P}$ is more like the covering number or entropy than algorithmic stability. ${\mathcal I_P}$ appears in recently popularized information-theoretic analyses, which we use to incorporate the randomness effects of learning algorithms into our generalization bound. We will add a discussion in our revision.
> The upper bound in Theorem 2.1 is not a uniform convergence-based generalization bound, does not show the relation to the training sample size, and may be loose, which greatly undermines the significance of the theoretical results
In the theorem, $n$ is the training sample size.
Information theoretical results using mutual information inherently hold in expectation because mutual information is defined with respect to expectation. The results can be easily translated into a uniform convergence bound if we choose a prior and use the KL divergence with respect to the prior (instead of mutual information), and we thought this was well known. We will add a discussion.
> The right-hand side of the given upper bound is hard to estimate due to the Instability of model parameter distributions, although Inconsistency and Instability are convenient to estimate on unlabeled data.
The estimation of ${\mathcal I_P}$ is possible using bootstrap, though not easy as noted, and we are happy to add a discussion in this regard. On the other hand, ${\mathcal I_P}$ is a standard quantity in the mutual information-based analyses, and so all the previous analyses using this quantity share the same trait. While we employ ${\mathcal I_P}$ to deal with the stochastic nature of neural network training, we would like to emphasize that the strength of this analysis is the inclusion of new easily-measurable quantities, as noted, and that the main focus of this paper is the empirical study of these new quantities. | Summary: The paper presents two measures for a stochastic training algorithm: inconsistency and instability. The former measures the inconsistency (or "disagreement") within the random ensemble of models trained from the same training set. The latter measures the inconsistency of two ensembled predictors, each obtained from an ensemble trained from an independent training set. A theorem is presented stating that the sum of inconsistency and instability modulates the mutual information (between the training set and algorithm output) in the generalization bound of Xu & Raginsky (2017). Empirical investigation is performed to assess the predictiveness of inconsistency and instability for the generalization gap. Algorithmic implications are also investigated.
Strengths: To this reviewer, a particularly novel and interesting aspect of this work is bringing the notion of inconsistency, or "within-training-set agreement" into the landscape of generalization bounds. This notion is akin to the notion of "generalization disagreement equality" (GDE) in the work of Jiang et al 2022 (reference [17] of this paper). Notably -- although not adequately discussed by the authors -- a sufficient condition of GDE is a notion of calibration in [17]. A potential impact of this work is extending the development of information-theoretic generalization bounds to include the calibration-alike quantities. This, I found intriguing and inspiring.
The theoretical development is light. Nonetheless interesting.
Among various empirical results, the most interesting and novel aspect to this reviewer is the observation that inconsistency is more predictive for generalization than sharpness. Algorithmic exploitation of this aspect is also interesting.
Weaknesses: Some conclusions from the empirical study appear speculative to this reviewer. For example, the authors hypothesized that a low degree of randomness is required for inconsistency+instability to be predictive, and in other places the authors attributed complex phenomena arising from the experiments to the interaction with the mutual information term. It is desirable that such claims be better corroborated.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Are the notions of inconsistency and instability related to the notion of functional-CMI (or functional MI) in the work of Harutyunyan et al, "Information-theoretic generalization bounds for black-box learning algorithms", NeurIPS 2021? Exploring this connection might enhance this work.
2. What is the loss function used in evaluating generalization gaps in the experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Nothing to add.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Are the notions of inconsistency and instability related to the notion of functional-CMI (or functional MI) in the work of Harutyunyan et al, "Information-theoretic generalization bounds for black-box learning algorithms", NeurIPS 2021? Exploring this connection might enhance this work.
It seems to us that the notions of inconsistency and instability are not the same as the notion of functional-CMI (or functional MI) because the latter does not lead to a Bernstein-style bound and does not use variance information. The latter might lead to the so-called first-order bound, but we need to study it further to understand the relationship. We will be happy to add a discussion.
> What is the loss function used in evaluating generalization gaps in the experiments?
It was the cross-entropy loss.
---
Rebuttal Comment 1.1:
Title: Thank you for the reply.
Comment: I will keep the rating. | Summary: This paper investigates the generalization gap in deep neural networks and proposes that this gap is influenced by the inconsistency and instability of model outputs, two quantities defined by the authors and justified theoretically via a new information-theoretic generalization bound. The authors conduct empirical studies that confirm the predictive power of inconsistency and instability on the generalization gap, and demonstrate that explicitly reducing inconsistency during training improves performance.
Strengths: The paper is generally well-written, and includes several interesting observations. While I'm unsure of the novelty of Theorem 2.1, its form is compelling and I like the fact that $\mathcal{D}_P$ and $\mathcal{C}_P$ can be estimated efficiently (unlike the mutual information term $\mathcal{I}_P$ that also appears in other information theoretic bounds). I also find the results at the end on explicitly encouraging consistency interesting and a strong contribution -- in my opinion it would be good to expand on this aspect of the paper.
Weaknesses: While the form of the bound in Theorem 2.1 is compelling, I have some concerns with its novelty/improvement relative to existing results (see questions below). In particular, it's unclear to me that the bound represents an improvement on existing information theoretic generalization bounds in the literature.
On the empirical side, I think a more rigorous correlation analysis of the $\mathcal{C}_P/\mathcal{D}_P$ metrics along the lines of prior work (e.g. Jiang et al. 2020) would help strengthen the claims of the paper, since the empirical correlation between these measures and the generalization gap seems to be one of its main contributions. I also think it would be helpful to have a more detailed comparison of how the inconsistency/error relationship compares with the disagreement/error relationship observed in prior work, as these seem very closely related.
Overall, while the paper is well-written and contains some interesting insights, at the current stage I think it lacks sufficient novelty/improvement on the theoretical side, and comparison to prior work on the empirical side to recommend acceptance.
**References**
Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, Samy Bengio, Fantastic Generalization Measures and Where to Find Them, 2020.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - How does the bound Theorem 2.1 compare with the large existing literature of information-theoretic generalization bounds? Perhaps the simplest of these from Xu & Raginsky (2017) is of the form $\sqrt{2\sigma^2 I/n}$, upon which your result seems like a small improvement at best, and only when $\mathcal{D}_P$ is very small. More recent bounds (e.g. Steinke and Zakynthinou, 2020 using the conditional mutual information) have made significant improvements on this, and so clarification as to which regimes your result provides an improvement would be helpful.
- Relatedly, is there any evidence that $\mathcal{D}_P$ is the dominant term in Theorem 2.1? Previous work has noted that mutual-information based bounds can be extremely large (though they are difficult to numerically estimate), and seemingly here the term $\mathcal{I}_P$ would dominate.
- Could the authors clarify what is being varied in the plots in, e.g. Figures 1 and 2? Are these plots illustrating the generalization gap/$\mathcal{D}_P$ in time, i.e. as a function of the iteration of optimization? If so, it would be useful if there was some indication of the direction of time in this plot.
- Do the authors have any hypotheses for why the disagreement/error relationship exhibits different behavior than the generalization gap/inconsistency relationship (maybe specifically in the zero training error regime, in which the generalization gap = test error)? It seems like this may have to do with the distribution of the logits in the trained models (which is ignored by the disagreement but not by the inconsistency metric), which would be very interesting to understand.
- Could the authors explicitly state the penalty that is added to the loss function during training to encourage consistency and explain how it is computed?
**References**
Thomas Steinke and Lydia Zakynthinou, Reasoning About Generalization via Conditional Mutual Information, 2020.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding theoretical novelty: in our view, our analysis is quite different from Xu & Raginsky (2017) as their bound is a sub-Gaussian bound which cannot lead to a faster rate than $\sqrt{1/n}$ as long as the noise variance is nonzero. In comparison we have a Bernstein-style bound, which is needed for a faster rate in classical statistics and quite different from a sub-Gaussian bound. Also our bound incorporates new quantities (inconsistency and instability of model outputs) different from the noise variance. Steinke and Zakynthinou (2020), using the conditional mutual information, is orthogonal and complementary to our work. We focus on new easily-measurable quantities and derive a new Bernstein bound to utilize them, but it will be interesting to incorporate conditional mutual information into the analysis in the future.
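To make this contrast concrete in schematic form (our paraphrase for illustration; the symbols and constants below are generic, not the paper's exact Theorem 2.1):

```latex
% Sub-Gaussian MI bound (Xu & Raginsky 2017 form): the rate cannot
% beat sqrt(1/n) whenever the sub-Gaussian parameter sigma^2 > 0.
\[
  \mathbb{E}\bigl[\mathrm{gap}\bigr] \;\le\; \sqrt{\frac{2\sigma^2\, I(S;W)}{n}}.
\]
% Bernstein-style bound (schematic): a variance-like term V multiplies
% the leading square-root term, so the rate can approach 1/n when V is small.
\[
  \mathbb{E}\bigl[\mathrm{gap}\bigr] \;\lesssim\; \sqrt{\frac{V \cdot I(S;W)}{n}} \;+\; \frac{I(S;W)}{n}.
\]
```

In the rebuttal's argument, the role of the variance-like term $V$ is played by measurable quantities (inconsistency and instability) rather than a worst-case noise variance, which is why the Bernstein form can yield a faster rate than $\sqrt{1/n}$.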
> a more rigorous correlation analysis of the metrics along the lines of prior work (e.g. Jiang et al. 2020) would help strengthen the claims of the paper
As suggested, we performed correlation analyses using the metrics from [Jiang et al. 2020] and confirmed that the results are consistent with the submission. We included a part of the results (comparison between inconsistency and disagreement) in the one-page pdf attached to the global response. We will include the complete results in the paper (including other settings and the inconsistency+instability results). It would indeed improve the paper. We appreciate the suggestion.
> I also think it would be helpful to have a more detailed comparison of the inconsistency/error relationship compares with the disagreement/error relationship observed in prior work, as these seem very closely related.
We performed this comparison using the metrics from [Jiang et al. 2020] and included part of the results in the one-page pdf (global response). The results show that essentially inconsistency is correlated with generalization gap while disagreement is correlated with test error. This is consistent with our submission and the previous study -- the inconsistency/gap relation is suggested by our theorem and the disagreement/error relation is suggested by the theorem of the disagreement paper (disagreement = test error if the training procedure produces well-calibrated ensembles).
> what is being varied in the plots in, e.g. Figures 1 and 2?
In Figures 1--4, the learning rate and training length were varied, and the points are connected in the order of training length. This was said somewhere in the text, but now we realize that it should be said also in the captions and there should be arrows in the graphs, as suggested. We will improve it if accepted.
> Do the authors have any hypotheses for why the disagreement/error relationship exhibits different behavior than the generalization gap/inconsistency relationship (maybe specifically in the zero training error regime, in which the generalization gap = test error)? It seems like this may have to do with the distribution of the logits in the trained models (which is ignored by the disagreement but not by the inconsistency metric), which would be very interesting to understand.
As noted, inconsistency takes how strongly the models disagree into account while disagreement ignores it. Suppose that the confidence level of model outputs (on unseen data) is fixed to a high value so that the models always either strongly agree or strongly disagree. In this situation, inconsistency and disagreement should be highly correlated. Further assume that test error and generalization gap are highly correlated; then the disagreement/error relationship and the inconsistency/gap relationship should be similar. However, in reality, even if all the models have zero training error, their confidence level on unseen data could vary, and thus the strength of agreement between models would not be binary, which means that inconsistency could use information ignored by disagreement. This is an interesting point, and we will add a discussion to the revision.
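This distinction can be made concrete with a small numerical sketch (a schematic illustration only: the function names and the symmetric-KL form are our assumptions, not the paper's exact definitions). Disagreement compares only argmax predictions, while an inconsistency-style measure retains the confidence information:

```python
import numpy as np

def disagreement(probs_a, probs_b):
    """Fraction of inputs where the two models' argmax predictions differ.

    Only the one-hot predictions matter; the strength of (dis)agreement
    is ignored.
    """
    return float(np.mean(np.argmax(probs_a, axis=1) != np.argmax(probs_b, axis=1)))

def inconsistency(probs_a, probs_b, eps=1e-12):
    """Mean symmetric KL divergence between the two models' softmax outputs.

    Unlike disagreement, this retains how strongly the models disagree:
    two models that agree on every argmax but with different confidence
    still yield a nonzero value.
    """
    kl_ab = np.sum(probs_a * (np.log(probs_a + eps) - np.log(probs_b + eps)), axis=1)
    kl_ba = np.sum(probs_b * (np.log(probs_b + eps) - np.log(probs_a + eps)), axis=1)
    return float(np.mean(0.5 * (kl_ab + kl_ba)))

# Two models that agree on every argmax but differ in confidence:
p = np.array([[0.9, 0.1], [0.8, 0.2]])
q = np.array([[0.6, 0.4], [0.55, 0.45]])
print(disagreement(p, q))   # 0.0: the one-hot predictions coincide
print(inconsistency(p, q))  # positive: confidence differences are visible
```

On unseen data where two models never disagree on the argmax, disagreement is exactly zero while the inconsistency-style quantity can still vary, which is the information the rebuttal argues disagreement throws away.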
> Could the authors explicitly state the penalty that is added to the loss function during training to encourage consistency and explain how it is computed?
For encouraging consistency, two models are trained simultaneously with two distinct random sequences (for initialization, mini-batch sampling, etc.) with the inconsistency penalty term, which is the KL-divergence of the model output with respect to the model output of the other model. With the absence of unlabeled data, Algorithm 1 (in the Appendix) was used, and with unlabeled data (Table 4), Algorithm 2, which computes the inconsistency penalty term on the unlabeled data, was used. | Summary: In this work, the authors introduce the ideas of “instability” and “inconsistency” of model outputs, and investigate the relationship between these quantities and the generalization gap. In particular, they empirically find a positive correlation between instability + inconsistency and the generalization gap. They further show that inconsistency can be more predictive of the generalization gap than m-flatness.
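The two-model co-training described above can be sketched roughly as follows (schematic only: the toy data, linear-softmax models, penalty weight, and the direction of the KL term are our assumptions, not the paper's Algorithms 1/2):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy labeled data standing in for the training set.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
Y = np.eye(2)[y]

# Two linear models with distinct random initializations
# (playing the role of the two distinct random sequences).
W1 = rng.normal(scale=0.1, size=(5, 2))
W2 = rng.normal(scale=0.1, size=(5, 2))

lam, lr = 0.5, 0.1  # penalty weight and step size (arbitrary choices)
for _ in range(300):
    p1, p2 = softmax(X @ W1), softmax(X @ W2)
    # Cross-entropy gradient plus the inconsistency-penalty gradient.
    # For a penalty KL(p_other || p_self) with the other model's output
    # held fixed, the gradient through the softmax logits is p_self - p_other.
    g1 = X.T @ ((p1 - Y) + lam * (p1 - p2)) / len(X)
    g2 = X.T @ ((p2 - Y) + lam * (p2 - p1)) / len(X)
    W1 -= lr * g1
    W2 -= lr * g2

p1, p2 = softmax(X @ W1), softmax(X @ W2)
train_acc = float(np.mean(np.argmax(p1, axis=1) == y))
gap = float(np.mean(np.abs(p1 - p2)))  # residual output discrepancy
```

In the unlabeled-data variant described in the rebuttal (Algorithm 2), the `lam * (p1 - p2)` term would presumably be evaluated on a separate unlabeled batch rather than on the labeled training set.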
Strengths: 1. The paper is written clearly, well-organized, and well-motivated. The authors do a good job of providing some intuition behind the mathematical definitions of inconsistency and instability.
2. The paper includes extensive experiments to support their hypothesis (e.g., use of various architectures, vision and text datasets, training methods, etc.)
Weaknesses: 1. In Figure 2, the authors observe that $\mathcal{D_P}$'s predictive ability of generalization gap is sensitive to the learning rate. The authors further suggest that when the “final randomness is high,” $\mathcal{D_P}$’s predictive ability of the generalization gap is not as strong. In practice, practitioners may use a large learning rate and small batch size to obtain well-generalizing models (and so the final randomness would be high in this situation). Thus, $\mathcal{D_P}$ may not be useful here. (In some sense, it is not surprising that low final randomness correlates with $\mathcal{D}_{P}$’s predictive ability of generalization gap).
2. The authors choose a constant learning rate schedule for the starting experiments to avoid confounding variables. However, the architectures used in the experiments have normalization layers, which can induce learning rate schedules. Perhaps running the preliminary experiments with at least one architecture w/out any normalization layers would be beneficial.
3. The benefit of $\mathcal{D}_{P}$ over prior metrics such as disagreement is not evident. In [17], test error is calculated at the end of training (when the train error is nearly zero). Since this is not true in figure 10, I would be interested in a plot of the gap between train accuracy and test accuracy vs. training loss instead of figure 10d. (aside: the gap between train accuracy and test accuracy is an alternate definition of generalization gap to the definition used in this work). Perhaps this would lead to a more appropriate comparison of inconsistency vs. disagreement detailed in lines 113-127.
More generally, it would be nice to also include plots for this alternative definition of generalization gap, especially since experiment performance in the paper is often measured via test error.
4. In the appendix (lines 600-604), $\rho$ was set based on reference or the development data. However, the optimal value of $\rho$ can be quite sensitive to the architecture used. Thus, it would be nice to use some type of grid search to choose $\rho$ for fair comparison in e.g., figure 8.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be helpful to include legends in each figure (e.g., figure 4 is missing a legend. I assume the legend is consistent with figure 2. However, it would be convenient to include the legend again.)
What are the training errors for models in each experiment?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have described the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In practice, practitioners may use a large learning rate and small batch size to obtain well-generalizing models (and so the final randomness would be high in this situation). Thus, ${\mathcal D_P}$ may not be useful here.
Even when the initial learning rate is large, the final randomness can be reduced by either decaying the learning rate (e.g., by multiplying 0.1 a few times, using the cosine schedule, etc.) or performing iterate averaging, and both are commonly practiced. Also, Section 3.2 experiments with known high-performing training procedures and shows that inconsistency is predictive of generalization gap. And so we believe that inconsistency can be useful in practical settings.
> The authors choose a constant learning rate schedule for the starting experiments to avoid confounding variables. However, the architectures used in the experiments have normalization layers, which can induce learning rate schedules. Perhaps running the preliminary experiments with at least one architecture w/out any normalization layers would be beneficial.
Thanks for the suggestion. In our preliminary experiments, we tried some normalization-free network architectures and observed that inconsistency is predictive of generalization gap. We designed the main experiments without them as their training was too expensive. However, we will give them another try.
> I would be interested in a plot of the gap between train accuracy and test accuracy vs. training loss instead of figure 10d.
The suggested plot turned out to be similar to 10a (the gap between training loss and test loss vs. training loss) in this particular setting of Fig 10, but it may be useful in some other settings.
> More generally, it would be nice to also include plots for this alternative definition of generalization gap, especially since experiment performance in the paper is often measured via test error.
We conducted additional correlation analyses, which include this alternative definition of gap (test error minus training error), and showed some results in the one-page pdf attached to the global response. We hope that the analyses of this form, when added to the paper, will clarify the interesting difference from (and the merit over) disagreement.
> In the appendix (lines 600-604), $\rho$ was set based on reference or the development data. However, the optimal value of $\rho$ can be quite sensitive to the architecture used. Thus, it would be nice to use some type of grid search to choose for fair comparison in e.g., figure 8.
Lines 600-604 describe the selection of $\rho$ for training SAM models, and yes, $\rho$ was set to the optimal value for each combination of the architecture and data. When the optimal value is known from the literature, we used that value but also made sure it was actually optimal by testing a grid in its neighborhood. When the optimal value was not known for the combination of the architecture and data, we performed a grid search (on a held-out subsample of training data serving as development data).
As for measuring 1-sharpness, we tested values on a grid in the neighborhood of the $\rho$ value used for training SAM (= the optimal $\rho$ for SAM) and found that the prediction power of 1-sharpness is similar in this neighborhood. Therefore, for simplicity, we used this optimal $\rho$ for SAM also for 1-sharpness when producing the results in the paper, as described in Line 624.
> It would be helpful to include legends in each figure (e.g., figure 4 is missing a legend. I assume the legend is consistent with figure 2. However, it would be convenient to include the legend again.)
To save space, the legend is sometimes described in the caption, e.g., "Same procedures and legend as in Fig 2" in the caption of Fig 4, but we will improve it if accepted.
> What are the training errors for models in each experiment?
The training error varies and we cannot describe it for all the experiments here, but we will include it in the paper if accepted. Essentially, it is often near zero for CIFAR-10 (easier data; tiny images with only 10 classes) and for harder data (e.g., ImageNet; much larger images with 1000 classes), it varies more.
---
Rebuttal Comment 1.1:
Comment: I acknowledge and appreciate the authors' responses. I intend to keep my score. | Rebuttal 1:
Rebuttal: Thank you very much for the valuable feedback.
In response to the suggestions, we conducted additional correlation analyses using the metrics from [Jiang et al. 2020], with generalization gap (loss difference), test error, and test error minus training error. One of the metrics from [Jiang et al. 2020] is more rigorous in the sense that it was designed to eliminate spurious correlations. The results are consistent with the submission. We uploaded a one-page pdf that contains a part of the results. We will include the complete results in the paper, if accepted, which would be a great addition. We really appreciate the helpful suggestions.
The rest of our response is to each reviewer individually.
Pdf: /pdf/3c5a14a770d6e283a6523948ba1bb15a149611e2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Smoothed Analysis of Sequential Probability Assignment | Accept (spotlight) | Summary: This paper focuses on the contextual sequential probability assignments, and specifically examines cases where the contexts $x_{1:T}$ are generated by $\sigma$-smooth adversaries as introduced in [Haghtalab et al. 2021], and where the labels $y_{1:T}$ are adversarially generated. The primary findings pertain to the minimax regrets under the logarithmic loss, as compared to a generic set of experts $\mathcal{F}$. More specifically, the paper demonstrates the following:
1. If the class $\mathcal{F}$ possesses a scale-sensitive VC-dimension of order $\epsilon^{-p}$, then the information-theoretical minimax regret grows as $T^{p/(p+1)}\cdot \text{poly}\log(T/\sigma)$, which is tight up to polylogarithmic factors.
2. An efficient algorithm exists, given an MLE oracle, that attains regret of order $T^{2/(2+w)}\sqrt{1/\sigma}$, provided the Rademacher complexity of $\mathcal{F}$ is of order $T^{-w}$. For the VC-class, specifically where $w=1/2$, this algorithm yields a regret bound of order $T^{4/5}$.
3. For the VC-class, it demonstrates that the information-theoretical regret is of the order $d\log(T/\sigma)$ up to a constant factor (the same result was also found in concurrent work [Wu et al. 2023]).
The principal contribution of this paper that distinguishes it from previous work on contextual sequential probability assignments is the introduction of an oracle-efficient algorithm that achieves sublinear regret. Although the primary techniques are rooted in the FTPL (Follow The Perturbed Leader) framework as outlined in [Haghtalab et al. 2022], this unique contribution sets a new precedent in this line of research.
Strengths: The primary strength of this work lies in the oracle-efficient algorithm, which presents the first known instance of computationally efficient sublinear regrets under log-loss for general non-parametric classes, albeit using an MLE oracle. This is likely to inspire further research in this field.
Weaknesses: The main weakness is the novelty of the techniques used in this paper. From a purely technical standpoint, Algorithm 1 and its analysis seem to merely mimic those used in [Haghtalab et al. 2022]. In fact, there are many places in the proof that appear to be verbatim copied!
There are also some presentation and technical issues, which I outline below:
- There are many places where the index of summation is $i$ but the summand is written as $s_t$, for instance, in Lemma 4.2. I suggest the authors conduct a thorough proofreading, including the appendix, to correct issues like this;
- I recommend that the authors include a proof for Corollary 4.1.1. It's unclear to me how Theorem 4.1 implies this bound for certain values of $n$, $\alpha$, and $m$;
- The authors claim that a $T^{3/4}$ bound could be achieved for an alternative oracle. This is not clear to me at all. Are you referring to Theorem 7 of [Block et al., 2022]? If so, it should be properly cited, and you should clearly explain what you mean by "mixed objective function";
- In line 636 (appendix), it appears that Lemma 3.3 from [Haghtalab et al., 2021] requires $\epsilon> \frac{\sigma}{T\log T}$? The choice of $\epsilon=\frac{\sigma}{T^2}$ does not align with the condition stated in that Lemma. I believe a stronger concentration result, such as Lemma 21 from [Wu et al., 2023], would be required in this context;
- In the proof of Lemma 4.4, presented in Appendix G, you assert that the distributions $P, Q$ are solely dependent on the ensembles $\\{n_0(x),n_1(x)\\}$. However, I contend that this is not a self-evident fact. It is essential, at the very least, to posit that your OPT oracle is permutation invariant with respect to the input sample sequence. This assumption, it should be noted, does not necessarily hold for all optimization methods; stochastic gradient descent, for example, may not exhibit this property.
These are the primary reasons I give a "weak accept".
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I have the following questions for the authors:
1. It seems that your FTPL algorithm relies on an exact ERM oracle. I'm curious, how would the performance or operation of the algorithm change if the oracle was only approximately optimal?
2. Is there any computational lower bounds as in [Haghtalab et al. 2022] that can be proved for the log-loss?
3. Your bound in Theorem 4.1 seems to rely solely on the Rademacher complexity. Does this suggest that your analysis isn't capable of achieving a $\log(T)$ dependency for the VC-class? For instance, consider the linear function $|\langle w,x\rangle|$ with $w,x$ in a unit ball (of a Hilbert space). This class admits a $1/\sqrt{T}$ Rademacher complexity, but it's known that the information theoretical lower bound is $T^{2/3}$.
4. Is it possible to achieve better dependency on $T$ with your FTPL algorithm if the contexts are generated from an i.i.d. process (i.e., with $\sigma=1$)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No issue with negative societal impact.
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and the careful reading of our paper. We will fix the typographical errors and omissions.
**Choice of parameters**: Thanks a lot for noticing this. Upon further inspection, our proof gives the desired bound with the choice $\epsilon = \frac{\sigma}{T \log T}$, which matches the condition of the original lemma in Haghtalab et al. We will fix this in the revision.
**Permutation Invariance**: We agree that the claim made in the paper needs permutation invariance of the oracle. But in our notation we consider the input to the oracle as the unordered set of historical points, and the objective (the average loss) is permutation invariant. In particular, we can achieve the desired permutation invariance by randomly shuffling the data. But we agree that explicitly mentioning this would be helpful, and we will elaborate on this in a revision.
**Approximate ERM**: The question regarding the approximate oracle is a good one. Previous works on oracle efficiency have studied the effect of approximation on FTPL-type algorithms (see Section 6 of https://arxiv.org/pdf/1611.01688.pdf). Incorporating (additive) approximations is usually simple, but multiplicative approximations are considered challenging and have been studied even in the binary case; even there, we do not have a fully satisfactory solution beyond the linear case (see for example https://arxiv.org/abs/1804.07837, https://arxiv.org/abs/2102.11050). Studying the appropriate notion of approximate regret in this setting is an interesting avenue for further research.
**Dependence on Rademacher Complexity**: As the reviewer notes, the dependence on the Rademacher complexity is a reason for the lack of sharpness in the oracle-efficient setting. Unfortunately, this dependence (or one on a similar quantity) seems fundamental to FTPL/stability-type analyses due to the need to analyze the “out of sample generalization” (in our setting, this comes up in analyzing the error between the hallucinated sample and hypothetical coupled random variables). We believe circumventing this would require novel ideas, and it is an exciting avenue of research.
**Computational Lower Bounds**: The main challenge in proving computational lower bounds in the log-loss setting is that most natural algorithms here are fundamentally improper. In general, we do not have good tools to prove lower bounds for improper algorithms (even in the worst-case binary setting). Since the lower bounds in Haghtalab et al are primarily for proper algorithms, they do not port naturally to this setting. Either proving lower bounds for improper algorithms or improving oracle-efficient regret for improper algorithms is an excellent area for research, whose relevance is further accentuated by the oracle-efficient log-loss setting introduced in our work.
**IID contexts**: When both the contexts and labels are generated in an iid fashion, the regret analysis of our algorithm reduces to the classical analysis of the error of the MLE. When the contexts are iid and the labels are adversarial, the bound is not immediately clear. Similar setups are studied in online learning (for example in contextual bandits, https://arxiv.org/pdf/1606.00313.pdf, https://arxiv.org/pdf/2003.12699.pdf), but usually even in this setting the analysis is fairly involved. Nonetheless, the iid context setting is a natural first step toward improved regret bounds.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions, I have no additional questions. | Summary: The paper studies the sequential probability assignment problem with a smoothed adversary. The learner sequentially assigns a probability given contexts, which the adversary generates from a distribution that can be far from a known base distribution by a factor of $1/\sigma$. For the problem with the i.i.d. setting, there is an $O(\log T)$ regret method, while we must incur $\Omega(T)$ regret in the pure adversarial setting. Thus, it is well motivated to study what regret is possible in the smoothed setting with respect to $T$ and $1/\sigma$.
The authors provide a general regret bound via reduction to transductive learning. E.g., if the class of functions that map contexts to a probability has the VC dimension of $d$, the regret is $O(d \log (T/\sigma))$, achieving the logarithmic dependence on $T$ and $1/\sigma$. The authors also present an FTPL-style algorithm that is efficient in terms of calls to MLE oracles. It is shown that the algorithm attains $T^{4/5}\sqrt{d/\sigma}$ regret for the VC class.
Strengths: 1. The paper studies an interesting online learning problem under the smoothed setting, which is well motivated by the known results in stochastic and adversarial settings.
2. The results imply some interesting consequences. Although I'm not an expert, I found Section 3.2.2 to be interesting, where the authors discuss the benefit of their covering-argument-free idea.
Weaknesses: 1. Theorem 3.1, the reduction to transductive learning, seems to be relying heavily on the coupling lemma by Haghtalab et al. [2021].
2. The analysis of the oracle-efficient algorithm in Section 4.1 also appears to be relying on Haghtalab et al. [2022].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I would appreciate it if the authors could describe intuitive technical differences from Haghtalab et al. [2021, 2022].
2. The scale-sensitive VC dimension appears to be related to the fat-shattering dimension. Is there some connection between them?
#### Minor comments
- In Algorithm 1's input, capitalization is inconsistent.
- In Algorithm 1, line 2, a period or a line break is missing before "Call the oracle..."
- I prefer references arranged in alphabetical order.
- Hyperlinks do not work.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Although limitations are not explicitly discussed, the range to which their results apply seems clear from the description.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments. We will incorporate corrections to the typographical errors in a revision.
**Differences between Haghtalab et al '21, '22 and our work**: The main focus of the work by Haghtalab et al '21, '22 was the binary loss. Haghtalab et al '21 studied VC classes under the binary loss and presented a statistical rate for the smoothed setting matching the rates for the iid case. In our work, we focus on the log loss, which does not fall into their framework because of its poor Lipschitz constant and unboundedness. Furthermore, their original result can be seen as a covering-based argument, while our general result is a direct reduction from the smoothed setting to the transductive setting. Our work thus goes beyond finite covering numbers, which allows it to extend to non-parametric settings and others where covering numbers do not capture the regret under the log loss. To get these stronger results, our approach differs entirely from that of Haghtalab et al: rather than looking at coverings, our main statistical result shows that the regret is small as long as the transductive regret is small, without the need to restrict to settings with small covering numbers.
Haghtalab et al '22 focuses on oracle efficiency (also for the binary loss or finite regression losses). The main algorithmic difference between our result and theirs is the choice of the oracle. In our paper, we focus on the MLE oracle, which just outputs the best model in the class on the data, while their paper uses a stronger notion commonly referred to as a mixed binary-regression oracle. The difference is important in the case of the log loss because the MLE oracle is a natural subroutine in statistical analysis, while the general mixed binary-regression oracle is unnatural in most settings, including the log-loss setting. Technical challenges arise from the fact that the log loss is unbounded and non-Lipschitz, whereas boundedness and Lipschitzness were necessary requirements in previous work. Furthermore, handling the MLE oracle requires a different way of controlling the perturbation term in the regret.
**On the use of the coupling lemma**: Indeed, the coupling lemma of Haghtalab et al '21 is a fundamental tool in the analysis of smoothed online learning. Since then, every work in this space published at top-tier conferences such as NeurIPS, STOC/FOCS, and COLT (which constitutes tens of papers) has relied on that coupling lemma. This should be taken as evidence of the unusual versatility of the coupling lemma of Haghtalab et al '21, not as a reason to underestimate the contributions of the long line of work on smoothed analysis since then.
Furthermore, while the original result of Haghtalab et al '21 can be seen as a covering-based argument, our general result is a direct reduction from the smoothed setting to a version of the transductive setting: the regret is small as long as the transductive regret is small, without the need to restrict to settings with small covering numbers. This allows our work to extend to non-parametric settings and others where covering numbers do not capture the regret under the log loss. Our reduction to the transductive setting is a novel and promising approach that we expect will play an equally versatile role (compared to the coupling lemma of Haghtalab et al '21) in the study of the statistical and computational aspects of unbounded losses and losses with curvature.
**Scale-sensitive Dimension and Fat-shattering Dimension**: These are essentially the same notion of complexity, with different terminology arising from different communities. For example, compare Definition 3.2 in https://home.ttic.edu/~tewari/lectures/lecture15.pdf and Section 2.4 in https://arxiv.org/pdf/2202.04690.pdf.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thorough response. My concerns have been adequately addressed. While my expertise in this field is limited, I believe that this paper is technically solid and deserves acceptance. | Summary: The paper considers the sequential probability assignment problem, which is the following:
The algorithm (forecaster), based on past context and outcomes, must assign probabilities to 0-1 values for the next outcome given the latest context. The forecaster competes against a reference class of predictor functions (experts) and aims to suffer low regret compared to the best of these functions. For a logarithmic loss incurred for incorrect predictions, regret is defined as the difference in the total log-loss of the forecaster and the best of the experts.
When the contexts are drawn independently from an unknown distribution and the class of predictor functions is special (say, has a bounded VC dimension), sequential probability assignment admits low (i.e., sublinear) regret. However, if the contexts are adversarial, there are impossibility results even for simple classes of predictors.
Therefore, following Haghtalab et al. (2021), the authors study a smoothed version of the problem where contexts can only be chosen adaptively from the set of sigma-smoothed distributions. They study how the properties of the class of the predictor functions like bounded “scale-sensitive” VC dimensions affect the regret.
The main tool used is a powerful coupling lemma from Haghtalab et al. (2021) that converts the problem with an adaptive sequence of t contexts to a problem with t*K uniformly and independently distributed contexts (the length of the sequence increases, but the contexts are no longer adaptive). Thus, results for i.i.d. samples may be applied, where the regret is essentially logarithmic in T (the number of time steps), which is the first result of the paper.
The second question studied is an algorithmic approach to solving the problem using calls to MLE. A follow the perturbed leader type algorithm using ideas generalized from Haghtalab et al. (2022) is used. Here, however, the regret is not logarithmic but T^0.8. This is the second main result in this paper.
Strengths: The paper is well-written and the considered problem is natural. The results are interesting.
Weaknesses: It seems the first main result of this paper is already presented in a paper that has been accepted at COLT 2023 (Wu et al.). Even though this seems to be independent work by a different group of authors, I would still find it strange if the same result were now also accepted at NeurIPS, in particular because the proofs are quite similar. Both rely essentially on the coupling arguments from the work of Haghtalab et al. I also have to say that my impression is that the known coupling argument is really the key ingredient, and the proof in this submission is "only" an adaptation of the arguments by Haghtalab et al. to the log-loss setting. Maybe the authors could point out more clearly whether there were significant challenges in this adaptation.
The second main result is not contained in the COLT paper. However, it unfortunately only shows a regret of $T^{0.8}$ rather than a logarithmic one, and it also follows known algorithmic approaches from the literature such as follow-the-perturbed-leader.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: --
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review.
**Regarding the concurrent work by Wu et al**: As the reviewer pointed out, both papers are concurrent and independent works. We agree that at first sight the ideas in the statistical aspect of our paper are related to the ones in Wu et al '23, and both are inspired by the coupling lemma from Haghtalab et al '21. As we explain below, however, the overlap between the results is minor and limited to one of our statistical rates, our statistical rates are stronger than those of Wu et al in general, and one of our primary contributions (the computational perspective) forms a novel setting that has not been studied before. In short, our work has insights, results, contributions, and open directions independent of Wu et al '23 and will draw independent readership and interest from the field.
The small overlap between the results is only limited to the rates for the case of VC classes (Corollary 3.4.1). Even in the statistical case more broadly, our work goes beyond VC classes as opposed to Wu et al, who only considered the case of finite covering numbers. This allows our work to extend to non-parametric settings where the covering numbers do not capture the regret. To get these stronger results, our approach differs entirely from Wu et al. Our main statistical result establishes a reduction from the smoothed setting to a transductive setting. Interesting consequences of this approach are discussed in Section 3.2.2.
**Algorithmic Challenges**: A major contribution of our work is the study of oracle-efficient online learning for the log loss, which has not been considered by Wu et al. Indeed, oracle efficiency has not been considered at all for the log loss, even in worst-case settings. Considering oracle efficiency for the log loss is an important line of research, as the ERM oracle corresponds to the maximum likelihood estimator (MLE) that is commonly used in practice. Our paper presents sublinear-regret oracle-efficient algorithms (which have no analog in Wu et al '23). In our opinion, this is a significant contribution.
We acknowledge the reviewer's concern regarding regret rates. As we state in our Section 5, we believe that obtaining subpolynomial bounds is an interesting direction for future work. We also provide discussions and some evidence highlighting why any MLE oracle (and FTPL analysis) will face significant technical challenges in obtaining subpolynomial regret bounds, due to these methods' reliance on a $\sqrt{T}$ “out of sample generalization” variance term. We believe that this discussion and our proof approach will be insightful more broadly for online algorithm design. Proving lower bounds is also challenging, as the log loss necessitates considering improper algorithms, which are historically challenging targets for lower bounds. This puts the problem in an intriguing situation that warrants further investigation. We believe that significant progress requires new perspectives, and the publication of our work will bring this to the attention of the community.
**On the algorithmic framework of FTPL**: One of our main technical contributions is an oracle-efficient algorithm that taps into the MLE oracle to achieve sublinear regret for the log-loss. We emphasize that most work on oracle efficiency considers a “mixed loss” oracle that minimizes a signed combination of the historical losses. This is unnatural in the log loss setting and a major technical consideration that sets our work apart from other work on oracle-efficient online learning is to work just with MLE oracles due to the natural connection to statistical estimation.
Further, we believe that describing our algorithm as following "known algorithmic approaches from the literature like follow-the-perturbed-leader" again understates our contributions. In particular, algorithms in the oracle-efficient setting are forced to be similar to FTPL, since the algorithm is only promised a "leader" oracle. The fact that all oracle-efficient algorithms utilize the FTPL framework is a testament to its fundamental nature. The main challenge in designing oracle-efficient algorithms is getting regret and running times roughly logarithmic in the number of experts (note that the basic FTPL analysis gives a polynomial dependence on the number of experts). Since this (provably) cannot be done in full generality, each paper adapts the FTPL framework to its particular setting.
**On the use of coupling lemma**: Indeed, the coupling lemma of Haghtalab et al ‘21 was a fundamental tool in the analysis of smoothed online learning. Since then, every work in this space published at top tier conferences such as NeurIPS, and COLT (which constitutes 10s of papers) has relied on that coupling lemma. This should be taken as the evidence of the unusual versatility of the coupling lemma, but not to underestimate contributions of a long line of work on smoothed analysis. Additionally, see the comments above regarding the reduction between smoothed analysis and transductive learning for further differences between our work and prior work.
**On treatment of concurrent literature**: As demonstrated above, our results are novel and have only minor overlap with Wu et al. Indeed, the work of Wu et al did not consider computational aspects to smoothed analysis. Even for statistical analysis, our approach presents a more general approach that extends to general losses. In short, our work has independent insights, results, contributions, and open directions than Wu et al. 2023 and will draw independent readership and interest from the field.
The publication of the concurrent works (with a minor overlap) in other conferences is only evidence that the community is very interested in this line of work. The first versions of our work were made public within days of each other. We believe that rejecting a work for a minor overlap with concurrent work is a deviation from standard practice in the review process and would be counterproductive for the research community.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I understand now that the overlap with the paper by Wu et al. is not as large as I had originally thought. Hence, I have raised my score. | Summary: The paper discusses smoothed analysis of probability assignments in an online setting. The paper shows how the problem can be reduced to a transductive setting and obtains an upper bound on the regret using covering numbers which is further bounded by the scale sensitive VC dimension. The results are instantiated for various classes of functions, including VC function class and the non-parametric class where the scale sensitive VC dimension grows as \epsilon^{-p} at scale \epsilon.
On the algorithmic side, the authors propose an oracle efficient FTPL style algorithm to predict the probability assignments and show weaker regret bounds for the same (by bounding the Rademacher complexity)
Strengths: - The paper is very well written and is very clear and easy to follow.
- The paper is the first to consider Oracle efficient algorithms for the sequential prob. assignment problem
- The paper creatively combines earlier ideas (Coupling Lemma) to reduce the smoothed adversary setting to a transductive setting.
Weaknesses: Please see questions below
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The authors can elaborate the results of Wu.et al [2023] and highlight the differences and similarities (both in terms of results and techniques).
- Is Algorithm 1 actually implementable in practice - especially without the knowledge of \sigma, the smoothness parameter?
- How do the results change in case the adversary is realizable w.r.t to the class F that the regret is measured against?
- How critical is the assumption on uniform distribution over X. For instance if X does not support a uniform distribution, how does Algorithm 1 and it's guarantees change?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments.
**Regarding the concurrent work by Wu et al**: As the reviewer pointed out, both papers were concurrent and independent works. We agree that the ideas in the statistical aspect of our paper are related to the ones in Wu et al '23, and both are inspired by the coupling lemma from Haghtalab et al '21.
The small overlap between the results of these two papers is only limited to the statistical rates for the case of VC classes (the statement of Corollary 3.4.1). Even in the statistical case, our work goes beyond VC classes as opposed to Wu et al, who only considered the case of finite covering numbers. This allows our work to extend to non-parametric settings and others where the covering numbers do not capture the regret in the log-loss setting. To get these stronger results, our approach differs entirely from Wu et al. Rather than looking at coverings, our main statistical result establishes a reduction from the smoothed online setting to a version of transductive setting. In particular, we show that the regret is small as long as the notion of transductive regret is small without the need to restrict to settings with small covering numbers. Interesting consequences of this covering-number free approach are discussed in Section 3.2.2.
A major contribution of our work is the study of oracle-efficient online learning for the log loss, which has not been considered by Wu et al. Indeed, to the best of our knowledge, oracle efficiency has not been considered at all for losses such as the log loss, even in worst-case settings. Considering oracle efficiency for the log loss is an important and natural line of research, as the ERM oracle corresponds to the maximum likelihood estimator (MLE) that is commonly used in practice. Our paper presents sublinear-regret oracle-efficient algorithms for the log loss (which have no analog in Wu et al '23). In our opinion, the algorithmic questions in our paper are a significant contribution.
**Implementation in practice and Smoothness parameter**:
We believe that the algorithm is very reasonable to implement in practice when given access to a class of models that one can optimize over. Note that the algorithm just draws some samples from the base distribution, adds them to the training set, and calls the optimization oracle. For many classes of interest, such as neural networks, one can implement the optimization oracle (in practice) using (stochastic) gradient descent.
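To make this "perturb-then-optimize" recipe concrete, here is a minimal illustrative sketch (not the paper's exact Algorithm 1; the names `ftpl_round`, `base_sampler`, and `mle_oracle` are our hypothetical stand-ins for the base-distribution sampler and the MLE/ERM oracle):

```python
import random

def ftpl_round(history, base_sampler, mle_oracle, n_hallucinated):
    """One round of the perturb-then-optimize recipe described above:
    draw hallucinated contexts from the base distribution, attach
    uniformly random binary labels, and fit a model with a single
    call to the MLE/ERM oracle on the perturbed training set."""
    hallucinated = [(base_sampler(), random.randint(0, 1))
                    for _ in range(n_hallucinated)]
    # The oracle sees real history plus hallucinated samples.
    return mle_oracle(history + hallucinated)
```

In this sketch the oracle is abstract; in practice it could be instantiated by running (stochastic) gradient descent over the model class on the perturbed data, as the rebuttal suggests.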
Further, we clarify that exact knowledge of $\sigma$ is not needed by our approach. Our algorithms and regret bounds work with any approximation of $\sigma$ that is a lower bound of the true value up to constant multiplicative factors; this corresponds to settings where the world is smoother than we give it credit for. Even when we only have extremely poor upper and lower bounds, we can use hedging to still get non-trivial regret with only a minor blow-up in computation. We provide more details next on how to work with approximate knowledge of $\sigma$.
In general, given (loose) upper and lower bounds on the exact value, we can use a geometric doubling approach to deal with the unknown $\sigma$. To be specific, one could construct experts, where each expert runs a local version of our algorithm with parameter $\sigma_i = \sigma_{low} 2^i$. Here $\sigma_{low}$ is a loose lower bound on $\sigma$. We maintain experts until $\sigma_i = \sigma_{high}$, a loose upper bound on $\sigma$. We then run Hedge on these experts. Note that the parameter of the best expert satisfies $\sigma/2 \leq \sigma_i \leq 2 \sigma$, so its regret matches the regret of the same algorithm run with the true $\sigma$ up to a constant factor. Therefore, the expected regret of this meta-algorithm is comparable to the bound with known $\sigma$, with an additive term of order at most $\sqrt{T \log \log (\sigma_{high} / \sigma_{low})}$. This could potentially be improved using a more aggressive step size for the Hedge meta-algorithm.
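The doubling grid and the Hedge weighting described above can be sketched as follows (an illustrative sketch only; the function names are ours, and the per-expert algorithm itself is abstracted away):

```python
import math

def sigma_grid(sigma_low, sigma_high):
    """Geometric (doubling) grid of candidate smoothness parameters
    sigma_low * 2**i, up to sigma_high; one expert runs per candidate."""
    grid, s = [], sigma_low
    while s <= sigma_high:
        grid.append(s)
        s *= 2
    return grid

def hedge_weights(cumulative_losses, eta):
    """Exponential-weights (Hedge) distribution over the experts,
    given their cumulative losses and learning rate eta."""
    w = [math.exp(-eta * loss) for loss in cumulative_losses]
    total = sum(w)
    return [wi / total for wi in w]
```

By construction, whenever $\sigma_{low} \le \sigma \le \sigma_{high}$, some grid point lies within a factor of 2 of the true $\sigma$, which is exactly the property the regret argument above relies on.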
**Realizability**: For the statistical rates, realizability (appropriately defined) does not affect the rates significantly. But for the oracle-efficient case this is an interesting question. Even for the binary loss, this is not fully understood, in the sense that we don't know an algorithm that achieves oracle-efficient fast rates in the realizable case. Understanding this for the log loss is an excellent avenue for improving the rates.
**Uniformity on $\mathcal{X}$**: The notion of smoothness can be defined with respect to an arbitrary base measure, without requiring the domain to support a uniform distribution. Algorithm 1 generalizes in a natural way to this setting by sampling from the base measure $\mu$ instead of the uniform distribution. The analysis also remains by and large the same, but some care needs to be taken to extend the notions used in the analysis (such as Poisson processes) to the arbitrary domain.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I have read the rebuttal. The clarifications on certain points, such as the practical implementation and the arbitrary-domain case, can be added to the paper/supplementary as appropriate.
---
Rebuttal 2:
Comment: Dear Reviewer oQa8,
Could you please acknowledge the author's rebuttal?
Thank you,
Your AC | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of sequential probability assignment in the smoothed setting. In particular, a learner receives labelled examples sequentially, where the contexts are drawn from some smooth distribution which can otherwise be chosen adversarially (in each step) and the labels can be chosen adversarially. The goal of the learner is to minimize the regret (minimax rate of excess error with respect to the optimum hypothesis on the subsequence seen so far).
The first main result provided is that the optimum regret for the problem is characterized by the optimum regret of a different problem where the adversary chooses a priori the set of contexts that they will have to choose from throughout the interaction with the learner. Using the demonstrated relationship between the two problems, the authors provide an upper bound on the regret in the smoothed setting, which involves the scale sensitive VC-dimension of the hypothesis class and a number of free parameters that can be chosen accordingly to adapt the bound to different regimes for the VC-dimension. For VC-classes the provided adapted bound is essentially tight.
The second main result provided is an oracle-efficient algorithm achieving sublinear regret on VC-classes (and more generally on classes with polynomially decaying Rademacher complexity), provided access to an ERM oracle for the hypothesis class considered. The algorithm follows the approach of Follow-the-Perturbed-Leader, which takes advantage of the trade-off between the stability of the algorithm's intermediate states and the excess error due to perturbing the current set of samples (by adding a number of hallucinated uniformly random data points).
Strengths: The paper demonstrates that the smoothed analysis framework is relevant to providing provable guarantees for the problem of sequential probability assignment, hence initiating (and motivating) the (further) study of the problem through the lens of smoothed analysis. The results provided include essentially tight bounds on the (smoothed) regret for VC-classes and two main conceptual contributions: First, characterizing the minimax regret for the problem considered in terms of the minimax regret of another relevant problem (rather than some combinatorial notion of dimension, which might even be hopeless in general). Second, repurposing ERM as an oracle to be exploited by algorithms with sublinear regret.
Overall, the results are presented with clarity and sufficient detail, and provide bounds for the considered problem that are fairly general and adaptable to hypothesis classes with different properties.
Weaknesses: The first weakness of the paper is that the oracle-efficient algorithm proposed is not necessarily efficient, due to the potentially high complexity of the ERM oracle, especially since the labels are chosen adversarially. In other words, it is not clear whether assuming oracle access to an ERM is reasonable. A discussion on existing or simple positive results regarding the oracle's implementation or even a pointer to empirical results that demonstrate its success on a relevant setting would be appreciated.
Furthermore, many of the technical contributions of the paper are not discussed in the main text. For example, the only hint provided for the proof of Theorem 3.1 in the main text is that it uses Theorem 2.1. In particular, it is not clear what (if any) technical obstacles arise when one tries to apply Thm 2.1 to this setting, or if adapting some approach from prior work is sufficient. Similarly, a proof sketch for Theorem 3.2 would be helpful for the reader.
Overall, while the problem considered is well-motivated and the results are concrete, it is not clear whether there are strong conceptual and technical contributions relative to prior work (smoothed analysis has already been applied to online learning and most of the tools used to demonstrate the results existed in the literature, at least in some less general form -- e.g., Theorem 2.1 and the idea of decomposing the stability term in Haghtalab et al. [2022]).
-- Most of my main concerns were addressed in the rebuttal, and for this reason I increased my score from 6 to 7.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: My main questions are related to the two main points raised in the weaknesses section. Given the concreteness of the provided results, I would be happy to increase my score if the issues I pointed to are sufficiently addressed by the authors' answer.
1. Are there some simple function classes for which the ERM oracle can be efficiently implemented under the marginal assumptions that correspond to the considered setting? Alternatively, is there a(n empirical, conceptual or theoretical) reason for which one might expect to heuristically obtain an approximate ERM oracle?
2. What are the main technical hurdles arising when one tries to instantiate prior techniques in the considered setting? In which ways does the technical work differ from prior work?
I have also found a small number of typos:
- line 294: likliehood $\to$ likelihood
- lines 306-307: i=1 $\to$ t=1 (3 times, one for each summation)
- line 318: at in $\to$ in
- line 321: refers $\to$ refer
- line 344: achieve $\to$ achieved
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thoughtful review. We will incorporate the suggested typographical and expository corrections.
**Applicability of oracle-efficiency**: The oracle-efficient framework is important because it allows us to directly tap into existing deployed algorithms, without having to design and implement an algorithm from scratch. These sub-routine algorithms can be heuristics and do not have to be provably efficient. Modern computer science is full of such heuristics that perform exceedingly well in practice even when hardness barriers exist in theory; a great example of this is deep learning. The oracle-efficient method for designing online algorithms has been extremely popular recently and has seen a lot of use in varied contexts such as contextual bandits and reinforcement learning (see https://vowpalwabbit.org/) and is even used in production. We see our work as following this line of work to design online algorithms. In particular, our paper elucidates the relative complexity of maximum likelihood estimation and sequential conditional density estimation.
Note that the oracle in our algorithm is called on either “smoothed” instances given by the adversary, or random instances sampled from the uniform distribution. In such settings, hardness results usually do not hold since they are proven mostly for worst case instances. Therefore, when implementing our algorithms in practice, instead of using an oracle that is provably efficient for all worst-case inputs, it suffices to have a weaker oracle that performs reasonably well on “average” case instances.
Further, for many practical classes of conditional probability densities, heuristic algorithms (for example, based on deep learning) are often used in practice. One perspective on this line of work is developing machinery to convert algorithms for heuristic optimization into provable algorithms for sequential decision making.
**On technical hurdles of our work and contributions relative to prior work**:
Prior work focused on the case of the binary loss and VC classes and presented a statistical rate for the smoothed setting matching the rates for the iid case. In our work, we focus on the log loss, which does not fall into their framework because of the poor Lipschitz constant and unboundedness. Furthermore, their original result can be seen as a covering-based argument, while our general result is a direct reduction from the smoothed setting to the transductive setting. Our work goes beyond finite covering numbers. This allows our work to extend to non-parametric settings and others where the covering numbers do not capture the regret in the log-loss setting. To get these stronger results, our approach differs entirely from Haghtalab et al. ‘22. Rather than looking at coverings, our main statistical result establishes a reduction from the smoothed online setting to a version of the transductive setting. In particular, we show that the regret is small as long as the notion of transductive regret is small, without the need to restrict to settings with small covering numbers. Interesting consequences of this covering-number-free approach are discussed in Section 3.2.2.
The algorithmic questions in our paper require new technical innovations. To the best of our knowledge, oracle efficiency has not been considered at all for losses such as the log loss, even though we believe this is a natural setting for oracle efficiency since the ERM oracle corresponds to the maximum likelihood estimator (MLE). Technical challenges arise from the fact that the log loss is unbounded and non-Lipschitz, whereas boundedness and Lipschitzness were necessary requirements in previous work. Furthermore, a main technical contribution is a set of techniques to handle the MLE oracle (unlike previous work, which uses a more general regression oracle). Technically, this corresponds to a different way of controlling the perturbation term in the regret.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have increased my score in light of your clarifications. | null | null | null | null | null | null |
Debias Coarsely, Sample Conditionally: Statistical Downscaling through Optimal Transport and Probabilistic Diffusion Models | Accept (spotlight) | Summary: This paper proposed a new two-stage method for statistical downscaling by combining a coarse de-biasing step based on optimal transport and a conditional up-sampling step based on a diffusion model.
Strengths: Overall, the paper is very well written and clearly explains the proposed method. The proposed method is divided into two components in a straightforward manner and is justified clearly. Empirical results confirm the applicability of the proposed method and its superior performance compared to several baselines such as cycle-GAN and ViT based super-resolution.
Weaknesses: The main novelty of this work is the introduction of a debiasing step for downscaling applications due to the biased nature of the problem. But Figure 3 shows the debiased result can be quite different from LR data. Because the diffusion model is very dependent on the debiased result, this debiasing step is crucial for the success of accurate downscaling in my opinion. Comparatively, cycle-GAN seems to output a more similar result to the LR data. So it seems necessary to discuss further if this debiasing step is indeed correcting the bias in LR data or introducing additional error/dissimilarity.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. Using diffusion models for super-resolution has been widely studied, so it would be helpful to clarify if Section 3.2 is related to any relevant works such as SNIPS and DDRM? If so then please cite them in the paper.
2. What is Constraint RMSE in Table 2? And why is cycle GAN not included in this comparison? From my above comment, such a comparison could be helpful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have addressed limitations of the work adequately. Broader impacts are not applicable for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review. Please find our response below.
> Because the diffusion model is very dependent on the debiased result, this debiasing step is crucial for the success of accurate downscaling in my opinion.
Thank you for your sharp observation that the debiased results from OT are less close to the low resolution inputs than the output of cycle-GAN. We believe this is a feature, not a bug. While appearing counterintuitive, there are several reasons that this type of **“visual/pixel-based similarity” might not be preferred**.
First, the goal of debiasing is to transform the low-resolution data to a high-resolution one such that the latter lies close to the manifold of the high-resolution data (so that diffusion models can be sampled through posterior constraints), rather than just super-resolving/upsampling the low-resolution sample. In our response to Review wBNS, we used the analogy to restore a distorted image due to an aberrative lens or distorting mirror. There, the desirable outcome would be an image that is close to the original (high-res, undistorted) image but less so to the distorted version. Note that, since our data is unpaired, there is no point-wise measurement of similarity to a single high-res image - instead, we have to resort to measuring distribution differences. Our results do show that while visually the debiased images look different from the LR ones, **in distribution, they are close to the high resolution ones**.
Secondly, cycle-GAN’s results might indicate a weakness of the method in solving the problem considered in this paper. The debiased outputs stay close to the low-resolution images, so they do not explore enough to match the samples from the high-res manifold. We do not think this is the design goal of cycle-GAN; the method’s assumptions do not take into consideration that the bias can be large enough for the low-resolution and high-resolution manifolds to be significantly apart. Also, we point out that as the downscaling factor becomes higher, the downscaling becomes more delocalized and therefore carries a bigger bias, and cycle-GAN struggles to produce realistic images, as seen in Fig. 3 c) in the manuscript.
To have a more thorough comparison, we added two new baselines and metrics to the suite of experiments, as shown in the Table found in the general response. We use one of the new baselines, Bias Correction and Spatial Downscaling (BCSD), to showcase the need for non-local debiasing that is able to nudge the samples from one manifold to another. BCSD is a popular technique in statistical downscaling. In a nutshell, it performs a cubic interpolation and then a debiasing step based on pixel-wise quantile matching. The latter ensures the correct pixel-wise statistics of the downscaled output, as shown by the very low pixel-wise Wasserstein-1 error in the Table. We can observe in Figure 1c) of the included PDF that the resulting downscaled image preserves most of the geometrical information of the low-resolution input. However, all the spatially-dependent metrics are much worse for this method. This indicates that this marginal de-biasing method fails to "nudge" the low-resolution input towards the correct high-resolution manifold.
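For readers unfamiliar with the pixel-wise quantile matching step of BCSD-style debiasing, a minimal sketch (our own toy illustration with made-up array shapes, not the paper's or BCSD's implementation): each pixel value is sent through the empirical CDF of the biased distribution and then through the inverse empirical CDF of the reference distribution, independently per pixel, which is why spatial correlations are not respected.

```python
import numpy as np

def quantile_map(biased, reference, x):
    """Pixel-wise quantile matching: biased and reference have shape
    (n_samples, n_pixels); x is one biased sample of shape (n_pixels,).
    Each pixel is debiased independently of all others."""
    out = np.empty_like(x, dtype=float)
    for j in range(x.shape[0]):
        # Empirical quantile of x[j] under the biased marginal at pixel j.
        q = (biased[:, j] <= x[j]).mean()
        # Value at the same quantile of the reference marginal at pixel j.
        out[j] = np.quantile(reference[:, j], q)
    return out

# Toy usage: a per-pixel mean shift is corrected almost exactly.
rng = np.random.default_rng(0)
biased = rng.normal(0.0, 1.0, (2000, 3))
reference = rng.normal(2.0, 1.0, (2000, 3))
mapped = quantile_map(biased, reference, np.zeros(3))  # each entry near 2.0
```

Because each pixel's marginal is matched in isolation, the output has the right pixel-wise statistics but can miss the joint (spatially-dependent) structure, consistent with the observation above.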
> Using diffusion models for super-resolution has been widely studied, so it would be helpful to clarify if Section 3.2 is related to any relevant works such as SNIPS and DDRM? If so then please cite them in the paper.
Thank you for the references. We were not aware of them. Our approach is indeed related, although less intrusive in the unconditional score function. Also, as shown in [1] our approach can be also used for non-linear constraints, even though such property was not used in the current manuscript. We will add the references to the manuscript.
> What is Constraint RMSE in Table 2? And why is cycle GAN not included in this comparison? From my above comment, such a comparison could be helpful.
The constraint RMSE was not reported for cycle-GAN because cycle-GAN does not hinge on conditional sampling with respect to the low-resolution debiased snapshot ($y’$ in Fig. 1) using a user-defined downsampling map. We will add a comparison with respect to the original low-fidelity low-resolution snapshot.
References:
[1]: User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems
M. A. Finzi, A. Boral, A. G. Wilson, F. Sha, and L. Zepeda-Núñez, International Conference on Machine Learning, 10136-10152
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response. Like Reviewer wBNS, my main concern is still the optimal transport step of the proposed method. While I understand in theory the points raised in the response ('feature, not a bug'; 'visual/pixel-based similarity' might not be preferred), it seems that these arguments are weak and not supported by concrete evidence. Indeed, while I agree the optimal transport map does transport between the biased and debiased manifolds, it is not clear to me if the learned map is indeed close to 'optimal'. It appears that the added metrics do not reflect this aspect. However, please feel free to reply and correct any points I made.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
First, we would like to clarify the nomenclature “optimal” in this case. The map is optimal in the sense that it satisfies optimality conditions of a minimization problem averaged over the full distribution, subject to the hard constraints that the marginals are respected. In particular, preserving these constraints is fundamental to correcting statistical biases, but it makes the problem much harder. Furthermore, the map presented here is an approximation via an entropic regularization, which renders the optimization problem tractable. In our formulation, OT allows us to find maps directly between the Y and Y’ spaces, when the biased and unbiased distributions are only prescribed using samples. On the other hand, other transport maps (which are not based on optimality conditions) often need to specify an intermediate distribution (e.g., a standard Gaussian latent space), which results in complex maps that need to be approximated when the distributions on Y and Y’ are highly non-Gaussian, even if they are similar.
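As a concrete illustration of the entropic regularization mentioned above, here is a minimal Sinkhorn sketch for discrete marginals (a generic textbook routine, not the authors' actual solver; the name `sinkhorn_plan` and all parameters are illustrative). The alternating scaling updates enforce exactly the hard marginal constraints discussed in the reply.

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.1, n_iters=500):
    """Entropic-regularized OT between discrete marginals a and b with
    cost matrix C. Alternating (Sinkhorn) scalings keep both marginal
    constraints satisfied; eps controls the regularization strength."""
    K = np.exp(-C / eps)                 # Gibbs kernel of the cost
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale to match column marginal b
        u = a / (K @ v)                  # rescale to match row marginal a
    return u[:, None] * K * v[None, :]   # approximate transport plan

# Toy usage: uniform marginals on 4 points with |i - j| cost.
n = 4
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)
C = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
P = sinkhorn_plan(a, b, C, eps=0.5, n_iters=1000)  # rows sum to a, cols to b
```

As eps shrinks, the plan approaches the unregularized OT plan, at the cost of slower, less stable iterations; this is the tractability trade-off referenced above.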
Second, we want to point out that Table 1 in our main text shows that OT effectively corrects statistical biases in the low-resolution data with respect to various metrics. As additional evidence that the OT-based debiasing map outperforms the others, we have downsampled the BCSD, cycGAN and ClimAlign baselines to the low-resolution space so they can be directly compared to the output of the OT component in our method. We have computed additional distribution metrics (covariance error, uMELR, wMELR, MMD) for these baselines and attached the resulting metrics in the table below. It is clear that OT yields superior performance in all of these metrics. As mentioned in our first response, this comes at the “cost” of pixel-wise similarity to some degree, which we quantified through the pointwise symmetric mean absolute percentage error (sMAPE) metric in the attached table, which measures how much the explicit/implicit debiasing moves the low-resolution samples on average by computing the relative $\ell^1$ distance between the input low-resolution sample $y$ and its debiased output for each method. We observe that OT does not move significantly more than other baselines (except for BCSD, which in fact solves the OT problem independently at each pixel, but does not respect pixel correlations). We hope this addresses your concern sufficiently well.
| | OT | BCSD | cycGAN | ClimAlign |
|--------------|------|------|--------|-----------|
| 8xdownscale | | | | |
| cov | 0.08 | 0.31 | 0.16 | 2.21 |
| uMELR | 0.01 | 0.95 | 0.08 | 0.53 |
| wMELR | 0.03 | 0.13 | 0.04 | 0.54 |
| MMD | 0.04 | 0.06 | 0.06 | 0.61 |
| sMAPE | 0.53 | 0.25 | 0.41 | 0.74 |
| 16xdownscale | | | | |
| cov | 0.08 | 0.35 | 0.33 | 2.50 |
| uMELR | 0.02 | 0.63 | 0.34 | 0.67 |
| wMELR | 0.03 | 0.16 | 0.15 | 0.58 |
| MMD | 0.03 | 0.34 | 0.09 | 0.55 |
| sMAPE | 0.54 | 0.36 | 0.63 | 0.76 | | Summary: The authors proposed a two-stage probabilistic framework for unpaired data. The problem is factorized into two steps, an optimal transport (OT) based mapping for debiasing and a diffusion-based model for up-sampling. The problem is demonstrated on fluid mechanics datasets representing difficult fluid and weather problems. The predicted results matched the statistics of the physical properties well.
Strengths: The current paper developed a statistical downscaling framework for unpaired data. Tackling this problem is very crucial in learning from multi-scale, multi-fidelity models, especially for large-scale applications like weather/climate modeling.
Originality: The idea of correcting the low-frequency bias by an OT map is novel and interesting. Moreover, the OT map is also integrated into the SOTA diffusion model framework. Due to the gradient information in diffusion modeling, a posterior conditioning sampling can further improve the performance at inference time and satisfy the given constraints well. The two-stage factorization is novel since it doesn't require a cycle-consistency type of loss and allows computing the debiasing map in a lower dimensional space.
Quality: The numerical results show the developed model has a superior performance compared to the baseline models. The developed model also has the ability to provide reasonable uncertainty estimation. Moreover, comprehensive ablation studies and training details are provided to help evaluate the model.
Clarity: The paper is well-written and clearly guides the reviewer to understand it. Math is accurate, and adequate proof and derivation are provided.
Significance: The current work tackles the statistical downscaling problem in the weather/climate model. The ability to improve the accuracy of high-resolution forecasts from low-fidelity data has a broader impact in practical applications, like real-time weather forecasting.
Weaknesses: Some additional details and explanations are needed. See the question part.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What is the Reynolds number of the NS equations.
2. Are there any difficulties applying it to real-world turbulence data set?
3. In Figure 2 (a), what does the true trajectory look like?
What does "corrected" in Figure 2(b) mean? Does it mean OT correction only? Moreover, it is hard to distinguish the OT+cDfn, corrected, true, and UncondDfn. Therefore, it is hard to see the advantage of OT+cDfn from the figure.
4. What is the training cost?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The computational cost of OT mapping can be further reduced, which leads to a future research direction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your positive feedback. Please find our response below.
> What is the Reynolds number of the NS equations?
The Reynolds number is 1000. The (high-fidelity) simulation setup is identical to [1].
> Are there any difficulties applying it to real-world turbulence dataset?
Conceptually, we don’t expect any difficulty. However, in practice, we envision that the biggest hurdle would be the amount of data necessary to obtain an accurate debiasing step, unless some symmetries are exploited. In our example we used the Kolmogorov flow, which is known to be ergodic with an unknown but relatively low-dimensional manifold, which we can cover with a relatively small number of samples. For a truly turbulent flow this condition may not hold, and therefore a large amount of data may be required.
> In Figure 2 (a), what does the true trajectory look like? What does "corrected" in Figure 2(b) mean? Does it mean OT correction only? Moreover, it is hard to distinguish the OT+cDfn, corrected, true, and UncondDfn. Therefore, it is hard to see the advantage of OT+cDfn from the figure.
We would like to point out that only snapshots are considered, not trajectories. Assuming that the former is what is asked, we want to note that even at the dimensionality of the KS system, it is prohibitive to draw “ground truth” conditional samples (e.g. via rejection sampling) that can be used to compare against the ones shown in Figure 2(a).
“Corrected” does mean OT correction only. The figure is meant to provide a sense of the qualitative nature of the methods. It tends to emphasize the method’s ability to capture large-scale features. To further distinguish them, the energy and covariance metrics are especially informative as they give quantitative measures for the small-scale features not easily visible from the figure.
> What is the training cost?
Training the unconditional network required about a day on a V100 GPU. The tuning of the sampling parameters took roughly one day. The debiasing, which is the most time-consuming part of the algorithm due to the large memory footprint that makes GPU acceleration hard using an off-the-shelf method, took roughly three days. Using GPU acceleration and on-the-fly matrix-vector products, we believe it should be possible to reduce the training time of the debiasing map significantly. One could also take advantage of several advances in computational optimal transport seeking to reduce the memory footprint of this computation. These include low-rank and sparse approximations to the optimal transport plan/map, as in [2, 3, 4].
References:
[1] Kochkov, Dmitrii, et al. "Machine learning–accelerated computational fluid dynamics." Proceedings of the National Academy of Sciences 118.21 (2021): e2101784118.
[2] Low-rank Optimal Transport: Approximation, Statistics and Debiasing, Meyer Scetbon, Marco Cuturi, NeurIPS 2022.
[3] Approximating Optimal Transport via Low-rank and Sparse Factorization, Weijie Liu, Chao Zhang, Nenggan Zheng, Hui Qian, 2021.
[4] Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps, Marco Cuturi, Michal Klein, Pierre Ablin, ICML 2023.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thanks for the author’s detailed response and additional baselines and metrics results. After reading other reviews, I find the concerns are addressed satisfactorily by the elaboration and the empirical evaluation. The motivation is elaborated, and I was convinced that dealing with unpaired data is practical and challenging in climate systems. And the OT is a good approach to achieve debiasing and map data between different manifolds. The OT+diffusion can improve the sample quality compared with diffusion only. The strong empirical evidence showed that the proposed method outperforms all the baselines for versatile metrics. I believe it is a solid work and can bring insights into the ML+climate science community. Overall, I’d be happy to recommend a strong acceptance. | Summary: The authors suggest a simple approach for the problem of statistical downscaling, which is the super-resolution of low-resolution weather grids. The approach involves first "debiasing" the low-resolution grid via solving an optimal transport problem, then obtaining a high resolution image by solving an image super-resolution problem with a score-based diffusion model.
Strengths: **Novel application of diffusion models.** The authors appear to apply diffusion modeling to a novel problem, even though the application appears to be straightforward.
**Promising results.** According to the metrics provided by the authors, the results are promising. (However, I am concerned about the validity of the metrics.)
Weaknesses: **Unclear motivations.** Since this is predominantly a climate modeling paper submitted to a machine learning conference, the posed problem is certainly unfamiliar to me, and probably unfamiliar to most readers. The authors need to clearly describe the motivation of the problem. How can statistical downscaling ever succeed in reproducing the high fidelity, high resolution outputs if the model is so chaotic that the initial conditions do not even matter (Lines 32-34, and Footnote 1, Page 6)?
**Validity of "debiasing".** The authors propose to reduce the discrepancy between low-resolution and high-resolution weather grids via optimal transport. This assumes some kind of continuity of the grids, i.e. similar weather systems in the lower dimensional weather grids produce similar simulated trajectories, that also correspond to similar weather systems in the higher dimensional weather grids. But a central assumption that necessitates the practice of using "unpaired" data is the "discontinuity" of the problem. Similar weather systems do not produce similar trajectories (Lines 32-34, and Footnote 1, Page 6). Therefore, is debiasing even possible? Moreover, is optimal transport the correct approach?
**Concerns with empirical evaluations.** This paper suggests that the proposed method obtains better performance according to their metrics, which appear to measure the deviation of the modeled statistics from the ground truth statistics. This comparison seems somewhat unfair, since the proposed super-resolution model is trained solely by modeling the statistics of the ground truth data. No other method except for the CycleGAN implementation even attempts to similarly model the ground truth distribution $\mathcal{X}$, and even CycleGAN has auxiliary (i.e., cycle consistency) losses. This is corroborated by Table 1: only CycleGAN approaches the performance of the proposed method. No competing deep learning-based approaches to statistical downscaling are used in the comparison. Moreover, the metrics used for evaluation differ greatly from [1], which appears to be a well-cited paper in deep learning-based statistical downscaling.
**Clarity in writing and formatting.** There are multiple places where the writing quality affects the readability of the text.
5: "tandeming" Tandem is not a verb and feels awkward here, consider using a different word?
14-16: "Moreover, our procedure correctly matches the statistics of physical quantities, even when the low-frequency content of the inputs and outputs do not match, a crucial but difficult-to-satisfy assumption needed by current state-of-the-art alternatives." What does this sentence mean?
21-22: "Consequentially, accurate predictions ... need to be *downscaled* from *coarser lower-resolution* models' outputs." (*Emphasis* mine.) Since this paper is submitted to a machine learning conference, where topics in computer vision are much more common than those in climate modeling, the authors need to clearly delineate where *downscaling* corresponds to statistical downscaling in the climate modeling sense, versus downscaling in the computer vision sense, especially when their meanings are completely reversed. I spent a lot of time trying to parse this sentence.
Figure 1 and its caption is unclear. Y' is mentioned here but it is not explained until Section 3.
Citation links do not work.
[1] DeepSD: Generating High Resolution Climate Change Projections through Single Image Super-Resolution. https://arxiv.org/pdf/1703.03126.pdf
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See "Weaknesses" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See "Weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review, especially your comments on places we could have explained better.
**Problem Setting, Motivation and Validity**
The problem we study in this paper is analogous to image or video super-resolution on a high level, but it has several important distinctions. We appreciate the opportunity to clarify this point.
Weather and climate are examples of an inherently chaotic dynamical system with an (approximately) stationary distribution. All snapshots of the system state may be considered samples drawn from the system’s stationary distribution. The proverbial [Lorenz butterfly](https://en.wikipedia.org/wiki/Lorenz_system) is such an example of the (low-dimensional) attractor of a (simplified) dynamical system used to describe the weather and climate. While two trajectories can be close at one time, they could be quite far from each other at later times while still belonging to the same stationary distribution over the attractor. In this sense, the initial condition does not matter, as the stationary distribution “forgets” the initial condition. Thus, the objective of statistical downscaling is to recover the stationary distribution, embodied by its samples.
The bias is introduced because different numerical schemes (e.g., with different integration order, step sizes, etc.) yield different perturbed versions of the stationary distribution on a (possibly different) attractor. When this happens, samples from two different distributions do not have a correspondence and hence can only be seen as unpaired samples from different attractors.
The central problem we hope to resolve is: given one sample X from a stationary distribution over an attractor A, can we obtain a set of representative samples from the stationary distribution over the attractor B, where A and B stem from the underlying dynamical system?
You are correct that it is impossible to find the sample that corresponds to X, as there is no such correspondence to begin with (unless we assume infinite precision). Instead, we identify the conditional distribution of samples from B that correspond to X – in other words, if we sample X from the distribution over A, and we collect all the samples from the conditional distribution, then we can recover the samples of the distribution over B that correspond to X. In this sense, debiasing is possible.
OT is the approach we consider to debias. A key contribution of our approach is to recognize the need to debias and propose a two-stage factorization for downscaling. Since we need to compare distributions (so as to debias), directly incorporating the debiasing map into the diffusion model is challenging as one would need to sample first and compute the distribution discrepancy measure (say, a kernel maximum mean discrepancy) and then differentiate through the score network to learn the debiasing map. We attempted such an approach, but found the computational cost to be prohibitive. Using OT we decouple the two steps by learning the debiasing map without the need to sample from the diffusion model.
To use image super-resolution as an analogy, the bias could be seen as a distortion of the original high-resolution sample (say, using an aberrating lens or a distorting mirror) applied while downsampling to a lower resolution. The super-resolution method would thus need to unwarp the distortion in the low-resolution image first and then upsample.
**Metrics and Baselines**
Thanks for the suggestions. Please see our general response and response to reviewer wBNS.
**Other Questions**
> 5: "tandeming" Tandem is not a verb and feels awkward here, consider using a different word?
Acknowledged. We will modify this sentence.
> 14-16: "Moreover, our procedure correctly matches the statistics ..." What does this sentence mean?
We agree that this sentence is hard to read. We wanted to contrast our methodology to other approaches such as [1], in which the method needs the low-frequency component of the input and output to match (i.e., the large scale features in both low and high resolution distributions match). Thus most of the debiasing is performed in the medium- to high-frequency regime (i.e., only the medium and small features), which renders the problem much easier. Our methodology does not enforce this constraint. The lack of this constraint makes the problem more challenging to solve, but it broadens the applicability of the methodology. We will clarify the context of this sentence, and add the reference.
> 21-22: "Consequently, accurate predictions ... " ... I spent a lot of time trying to parse this sentence.
To clarify, we use downsampling as the opposite of upsampling (or super-resolution), following typical ML jargon. By downscaling, however, we refer not only to upsampling/super-resolution but also to debiasing - this is a standard term used in the weather/climate community.
We hope that the explanation above has made this difference clear. This is in contrast with super-resolution in which only upsampling is considered, which we explain in Fig. 1. A typical upsampling (or super-resolution) method will seek to extrapolate in frequency the red line, which would provide results that are not in the correct distribution. Thus, we are required to debias the input (go from the red to the blue line) and then upsample it. This process is what we refer to as downscaling.
> Figure 1 and its caption is unclear. Y' is mentioned here but it is not explained until Section 3.
Thank you for the observation. We will add a definition in the caption for $y’ \in \mathcal{Y}’$.
> Citation links do not work.
We will make sure this is fixed.
References:
[1] DeepSD: Generating High Resolution Climate Change Projections through Single Image Super-Resolution. https://arxiv.org/pdf/1703.03126.pdf
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. I appreciate the work the authors put into elucidating their points and addressing my comments. However, I still lack some clarity on two of the main concerns I initially raised in the review.
1) **Validity of debiasing**: In the rebuttal, the authors justify debiasing via a discussion on the stationary systems of weather and climate systems. I have trouble understanding this framework involving stationary distributions, as it seems ill-fit for describing weather and climate systems. Stationary systems are time-invariant. Aren't weather systems inherently variant over time (which is why predicting them at future times are of interest)? At any rate, optimal transport appears to me to still pose a strong assumption on the structure of the weather systems, which have highly "discontinuous" dynamics with respect to their initial state: similar weather systems do not produce similar trajectories (Lines 32-34, and Footnote 1, Page 6). Theoretically, why should optimal transport provide the correct "debiasing" solution?
2) **Metrics**: I could not find any discussion on most of the concerns I raised, namely this part:
> This comparison seems somewhat unfair, since the proposed super-resolution model is trained solely by modeling the statistics of the ground truth data. No other method except for the CycleGAN implementation even attempts to similarly model the ground truth distribution
, and even CycleGAN has auxiliary (i.e., cycle consistency) losses. This is corroborated by Table 1: only CycleGAN approaches the performance of the proposed method. No competing deep learning-based approaches to statistical downscaling are used in the comparison.
Additionally, the authors mention that further discussion is provided here:
> Please see our general response and response to reviewer wBNS.
I am reviewer wBNS. I assume the authors mean reviewer zUt7?
I still could not find the discussion on the above statement.
For these reasons, I am still hesitant to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reiterating your concerns and giving us the opportunity to clarify further. Please find our point-by-point response below.
> Aren't weather systems inherently variant over time (which is why predicting them at future times are of interest)?
The systems considered in the current work are regarded as being close to an [ergodic dynamical system](https://en.wikipedia.org/wiki/Ergodicity), which admits a stationary distribution that can be sampled by evolving the system in time (a close analogy would be some Markov chains having an invariant distribution). While over a few steps the system is time-variant as its state changes, the set of all states visited by the system over a long horizon is not.
Perhaps the simplest example to showcase these properties (as mentioned in our previous response) is the Lorenz system, which is a simplified mathematical model for atmospheric convection. The system is non-stationary in time (it relies on solving a time-dependent ODE), but if you sample snapshots (without the time stamp) of a trajectory (or set of trajectories) for a long enough time, the samples will follow a given invariant distribution, which is often referred to as the Lorenz butterfly.
Climate systems (not weather, which is intrinsically transient) fall into this category approximately. One needs to sample from really long time spans (at least 10s of years in real time) to have a workable coverage of the underlying distribution.
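As a minimal illustration of this point (our own sketch, not code from the paper), the following integrates the Lorenz system with RK4 and collects time-stamp-free snapshots from two nearly identical initial conditions; the trajectories diverge, but the snapshot statistics agree:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz ODE (classical chaotic parameters)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def sample_attractor(s0, n_snapshots=5000, burn_in=1000, thin=10):
    """Evolve the system and keep thinned snapshots (no time stamps);
    over a long horizon these approximate the invariant distribution."""
    s = np.asarray(s0, dtype=float)
    out = []
    for i in range(burn_in + n_snapshots * thin):
        s = rk4_step(s)
        if i >= burn_in and (i - burn_in) % thin == 0:
            out.append(s.copy())
    return np.array(out)

# Two nearly identical initial conditions: the trajectories diverge
# (chaos), but the snapshot statistics agree - the invariant
# distribution "forgets" the initial condition.
samples_a = sample_attractor([1.0, 1.0, 1.0])
samples_b = sample_attractor([1.0, 1.0, 1.0 + 1e-6])
print(samples_a.mean(axis=0), samples_b.mean(axis=0))
```

The specific integrator, step size, and snapshot counts are illustrative choices; any sufficiently long, well-resolved simulation exhibits the same behavior.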
In our work, we are concerned with two such stationary distributions - one ground truth with samples from the true system evolutions and the other consisting of samples from an approximation model (i.e. low-order PDE solver) with errors.
We are not sure if we were able to answer your question. If you feel that our response was not clear or if we missed the point of your question, would you be able to rephrase your question? We would be glad to answer it as soon as we can.
> At any rate, optimal transport appears to me to still pose a strong assumption on the structure of the weather systems, which have highly "discontinuous" dynamics with respect to their initial state: similar weather systems do not produce similar trajectories (Lines 32-34, and Footnote 1, Page 6).
The "discontinuous"/chaotic property mentioned highlights the difficulty of the “paired data” setup - even if there is a correspondence in the initial conditions (from the two distributions considered), this correspondence will eventually get lost over time. This is why in our view learning a sample-to-sample mapping **paired via time** is not feasible. We adopt the “unpaired” data setup and instead attempt to learn the many-sample-to-many-sample (i.e. distributional) mapping. As stressed above, a critical assumption is that the system admits a stationary distribution and is sufficiently sampled. We are exploiting this particular property to re-establish the “pairedness” in the data via OT.
> Theoretically, why should optimal transport provide the correct "debiasing" solution?
The solution to the distribution mapping problem is not unique. OT treats the matching of the distributions as a constraint and additionally imposes a “cost” associated with the map and explicitly minimizes it. This “cost” is a measure of the deviation of $T(x)$, the map applied to $x$, from its input $x$. Intuitively, this means that we want the transport map to move the states **as little as possible** - in other words, the mapped state should still “look like” the original in some sense (here by minimizing the $L^2$ norm). We choose this cost to re-establish the paired relationship in the data. It is by no means the only constraint one can impose (e.g. cycGAN enforces cycle consistency between the transports, while ClimAlign imposes the invertibility of the transport), but we believe it is the most intuitively sensible one.
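To make the $L^2$-cost intuition concrete, here is a hedged sketch (our own toy example, not the paper's implementation) of entropy-regularized OT via Sinkhorn iterations between two small Gaussian sample clouds; the barycentric projection of the resulting plan plays the role of the debiasing map $T$:

```python
import numpy as np

def sinkhorn_plan(x, y, eps=1.0, n_iter=200):
    """Entropy-regularized OT between empirical clouds x, y
    (uniform weights, squared-L2 cost); returns the coupling matrix."""
    n, m = len(x), len(y)
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-cost / eps)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def barycentric_map(y, plan):
    """Debiasing map T: each input point is sent to the plan-weighted
    average of target points - moving mass as little as possible."""
    return (plan @ y) / plan.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(256, 2))  # "biased" samples
y = rng.normal(1.5, 1.0, size=(256, 2))  # "reference" samples
plan = sinkhorn_plan(x, y)
Tx = barycentric_map(y, plan)
print(Tx.mean(axis=0))  # close to the reference (empirical) mean
```

The regularization strength `eps` and iteration count are illustrative; the paper's Sinkhorn setup may differ.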
In order to showcase this issue we have added a table in the [response](https://openreview.net/forum?id=5NxJuc0T1P&noteId=vufkqwLpfq) to reviewer zUt7. The table provides a comparison of the deviation and matching of the distributions for the distribution mapping methods benchmarked, under as many applicable metrics as we can think of. Compared to the other baselines considered, OT is able to best match the marginals, i.e., mapping one distribution to another, while having a relatively small deviation from the input. We would be happy to add to the final manuscript any other applicable metric that you would like us to include.
Title: first part of the comment | Summary: This work introduces a new framework to tackle statistical downscaling, a climate science equivalent of super-resolution, in two steps: The first step removes the bias while staying at low resolution with an optimal transport method and the second step increases the spatial resolution with a diffusion-based model. The performance of the method is shown on two fluid-dynamics datasets: 2d Navier-Stokes and 1d Kuramoto-Sivashinsky equation. The proposed method outperforms baselines such as CycleGAN and ViT on all suggested metrics.
Strengths: Originality: This work develops a new method to tackle statistical downscaling
Quality: Well written and well setup experiments.
Clarity: Good motivation and explanation for the two-step approach tackling this problem.
Significance: This work addresses an important and impactful problem and has high practical societal relevance. Especially, large upsampling factors like used here make the problem hard to solve and motivate the need for advanced methods like the one proposed here. The work addresses a probabilistic formulation of statistical downscaling, which is still under-researched.
Weaknesses: This is a great paper, but some weaknesses are:
- The metrics used are not common in statistical downscaling. A very common metric that could be included is e.g. the Continuous Ranked Probability Score; see e.g. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2022MS003120 for more commonly used climate downscaling metrics
- No comparison with existing work like ClimAlign (https://arxiv.org/pdf/2008.04679.pdf), BCSD
- References to some relevant works are missing: https://arxiv.org/pdf/2211.16116.pdf (stat. downscaling using diffusion models)
- The work is motivated by climate/weather modeling but doesn’t include real climate/weather datasets. There are datasets available that include two different simulations at lower and higher resolution. It would be great to see how this method performs on real-world data. Datasets that could be used are NorESM data at different resolutions or WRF simulations. Or learning the mapping from ERA-interim (https://climatedataguide.ucar.edu/climate-data/era-interim) to WRF (https://rda.ucar.edu/datasets/ds612.0/#!) data, as done in the ClimAlign work.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - It would be great to motivate the selection of metrics more and explain why other, more common metrics like CRPS are not used here
- A discussion at the beginning of the paper talking about different kinds of statistical downscaling setups, such as perfect prognosis (see e.g. https://www.cambridge.org/core/books/statistical-downscaling-and-bias-correction-for-climate-research/4ED479BAA8309C7ECBE6136236E3960F), would be helpful
- To be applied in climate modeling it is also relevant to know the inference runtime (not only training time).
References:
I would suggest including more of the existing climate super-res/statistical downscaling work. An extensive collection of DL for statistical downscaling literature can be found here: https://github.com/paulaharder/deep-downscaling-overview
Minor:
Why don't the first two rows in Table 2 have the best scores in bold?
Typos:
Figure 1 caption: "an invertible" instead of "a invertible"
Line 52: "an unknown" instead of "a unknown"
Line 129: "a structured" instead of "an structured"
Line 141: "an unconditional" instead of "a unconditional"
Figure 2 caption: "and and" typo
Line 309: "s" missing of "sample"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I think the limitations of this work could be made clearer. Either in an additional limitations section or in the conclusion there should be a mention that the method has only been applied to idealized datasets and not a real climate model dataset. Also, it should be mentioned that this work tackles a specific subfield of statistical downscaling, and that other subfields exist, e.g. perfect prognosis, where predictors differ
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your detailed review. Please see below the responses to the issues raised.
> Use common metrics in statistical downscaling such as CRPS, Motivate the use of the metrics listed in the paper
Thank you for the comment. We will provide a more thorough explanation for the evaluation metrics considered in this work. Our main consideration was to assess the effectiveness of debiasing by measuring differences in statistics between the generated and true samples.
CRPS is designed to assess an ensemble forecast with respect to a ground-truth deterministic observation, which does not quite fit our setup with unpaired data. Nonetheless, we have considered a somewhat clumsy application of the CRPS metric: treating each true sample as the deterministic observation and all generated samples as the ensemble forecast; the reverse is also performed, treating each generated sample as the observation and all true samples as the ensemble. The resulting metrics are averaged and shown in the general response above. The results indicate that this symmetrized CRPS metric has little discrimination power across all elements. This is unsurprising considering that CRPS may be decomposed into a bias and a spread component, and the latter completely dominates the metric - even comparing the ground truth distribution with itself leads to a large reference value for the metric.
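For concreteness, here is a small 1-D sketch of the symmetrized CRPS described above (our own illustration, using the standard energy form of CRPS); note that even a "perfect" generator scores a large value because the spread term dominates:

```python
import numpy as np

def mean_crps(obs_set, ens):
    """Mean CRPS of scalar observations against an ensemble, via the
    energy form CRPS(obs) = E|X - obs| - 0.5 * E|X - X'|."""
    term1 = np.abs(ens[None, :] - obs_set[:, None]).mean(axis=1)
    term2 = 0.5 * np.abs(ens[:, None] - ens[None, :]).mean()
    return (term1 - term2).mean()

def symmetrized_crps(true_samples, gen_samples):
    """Average of CRPS(true | generated ensemble) and the reverse."""
    return 0.5 * (mean_crps(true_samples, gen_samples)
                  + mean_crps(gen_samples, true_samples))

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 2.0, 500)
perfect = rng.normal(0.0, 2.0, 500)  # generator matching the truth exactly
# Even truth-vs-truth gives a large reference value: the spread
# component dominates, so the metric discriminates poorly here.
print(symmetrized_crps(truth, perfect), symmetrized_crps(truth, truth))
```

The Gaussian toy data and sample sizes are illustrative assumptions; the point carried over from the discussion is only that the reference value stays large.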
That said, we are adding two new metrics, namely the maximum mean discrepancy (MMD) between the real and generated high-resolution distributions and the pixel-wise Wasserstein-1 metric. These are common distribution metrics and we hope that they will help to provide a more comprehensive evaluation of our proposed method.
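A sketch of what these two additions measure (our own minimal versions, not the paper's code): a biased V-statistic estimate of squared MMD with an RBF kernel, and a 1-D (pixel-wise) Wasserstein-1 distance between empirical samples:

```python
import numpy as np

def mmd2_rbf(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def wasserstein1_1d(u, v):
    """Wasserstein-1 between 1-D empirical samples of equal size:
    mean absolute difference of the sorted values."""
    return np.abs(np.sort(u) - np.sort(v)).mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(300, 2))
debiased = rng.normal(0.0, 1.0, size=(300, 2))  # well-matched generator
biased = rng.normal(1.0, 1.0, size=(300, 2))    # mean-shifted generator
print(mmd2_rbf(real, debiased), mmd2_rbf(real, biased))  # small vs large
print(wasserstein1_1d(real[:, 0], biased[:, 0]))         # ~ the shift (1.0)
```

The kernel bandwidth and synthetic data are illustrative assumptions; the actual evaluation in the paper operates on high-resolution fields.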
> Comparison to existing work like ClimAlign, BCSD
We thank the reviewer for the suggestion. We had tried to run ClimAlign using the code provided in the original GitHub repository, but were unable to adapt it to our setup directly due to code version and numerical issues. In particular, we found that the code was prone to produce singular matrices inside the GlowFlow module. We were not able to solve the problem and instead reimplemented the original AlignFlow code following [this Github repository](https://github.com/ermongroup/alignflow). Metrics are included in the general response.
BCSD was implemented and added as a baseline (see general response). The preliminary results suggest that it is not competitive in general due to the lack of spatial correlations beyond the cubic interpolation result. However, the table above shows that the Wasserstein-1 metric is very small. This is expected because the algorithm uses a pixel-wise quantile matching procedure, which is the solution to the $L^1$ OT problem and gives rise to the Wasserstein-1 metric.
Finally, we would like to comment that our methodology can be thought of as a generalization of BCSD using modern ML-tools.
Our method considers covariance structure through the solution of a global OT problem using entropy-regularized Sinkhorn iterations, instead of solving an OT problem marginally for each pixel by matching pixel-wise quantiles. Our method leverages diffusion models to model high-dimensional joint distributions that maintain spatial coherence, instead of using polynomial interpolation. We point out that, even though this connection can be made for the resulting algorithm, our formulation hinges on a two-step factorization of the probabilistic description of statistical downscaling recast as a sampling problem, which is not present in BCSD.
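The pixel-wise quantile-matching step at the heart of BCSD (the 1-D OT map mentioned above) can be sketched as follows - this is our own illustration with synthetic data, assuming equal sample counts per pixel:

```python
import numpy as np

def quantile_match(biased, reference):
    """Map each value in `biased` to the reference value of the same
    rank - the pixel-marginal (1-D) optimal transport map. Assumes
    both arrays have the same length."""
    ranks = np.argsort(np.argsort(biased))
    return np.sort(reference)[ranks]

rng = np.random.default_rng(0)
biased_pixel = rng.normal(2.0, 0.5, 1000)  # one pixel of the biased model
ref_pixel = rng.normal(0.0, 1.0, 1000)     # same pixel, ground-truth data
corrected = quantile_match(biased_pixel, ref_pixel)
# The corrected marginal now matches the reference exactly (hence a
# near-zero pixel-wise Wasserstein-1), but the map acts on each pixel
# independently, so spatial correlations are not modeled.
print(corrected.mean(), corrected.std())
```

This makes the trade-off discussed above visible: marginals match almost perfectly while cross-pixel structure is untouched.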
> Missing References, including more existing work,
Thank you for the pointer to the deep-downscaling-overview repo - it is a wonderfully curated source. We will add selected references.
Regarding https://arxiv.org/pdf/2211.16116.pdf: we will add it to the bibliography. The authors tackle the problem in the setting of paired data. Thus, this is more analogous to super-resolution.
Regarding perfect prognosis: perfect prognosis is out of the scope of this paper (e.g., dynamical downscaling was also out of scope), as it still requires paired data between different predictors to learn empirical relationships. We will add a few sentences in the introduction providing some background and rendering the scope of this contribution more transparent vis-à-vis other statistical downscaling setups.
> Inference runtime
The inference runtime is around 1700 samples/min (in batches of 128) for KS and 20 samples/min (in batches of 16) for NS. We will add this information to the text.
> I would suggest including more of existing climate super-res/statistical downscaling work.
Thank you for the references! We will go over the list and add relevant ones to the final version of the paper.
> Clearer discussion on the limitations of this work
We agree and will remind readers that the current paper mainly demonstrates the plausibility of the proposed methodology (i.e., the factorization of the downscaling process) for a particular instance of the statistical downscaling problem. Subfields of statistical downscaling are certainly very important and we will discuss those in the final version of the paper. In particular, we will discuss how to extend the algorithmic framework to tackle those problems.
While turbulence data provides a strong proof of concept, we are actively working on real climate/weather datasets. Preliminary results suggest the algorithmic framework is robust. We look forward to hearing your suggestions and comments. | Rebuttal 1:
Rebuttal: **General Rebuttal Response**
We thank all reviewers for providing such detailed reviews. We are encouraged by the comments that the idea is novel and clearly presented, and that the results have potentially high (societal) impact and relevance.
To address the weaknesses and questions raised in the comments, we want to highlight the main changes below:
* **Added baselines.** We have added two more baselines - *Bias Correction and Spatial Downscaling (BCSD)* and *ClimAlign*. The former is a popular method for statistical downscaling consisting of an upsampling step using cubic interpolation followed by a pixel-wise bias correction using quantile matching. The latter is a neural network approach based on normalizing flows coupled with a cyclic regularization step.
* **Added metrics.** We have also added two more metrics: the *maximum mean discrepancy (MMD)* and a mean pixel-wise *Wasserstein-1* distance. We show that our methodology outperforms, or at least remains competitive with, the old and new baselines on these new metrics.
* **Writing.** We thank all reviewers again for catching typos and phrasing issues. Most of these are acknowledged in our response to individual reviewers. We will make sure to fix them in the next version.
* **Figure and Table updates.** The updated metric table (replacing Table 2) is attached below. In the attached PDF, we show the updated sample comparison (Figure 3) and energy plots (Figure 2c and 4c) that include the newly added baselines.
Furthermore, we would like to re-emphasize the fact that our methodology **targets unpaired data**. Consequently, we do not have access to the ground truth conditional (i.e., posterior) samples, which are computationally prohibitive to obtain using methods like rejection sampling. This is the primary reason that the metrics we used may differ from metrics in similar works (e.g. RMSE, CRPS) that assume access to paired data during training and/or evaluation. To this end, we focus on metrics based on distributional differences, as a way to measure the fidelity of the generated samples. This is analogous to the way in which the fidelity of class- or text-conditioned image generation methods is evaluated using unconditional metrics like FID.
**Updated metric table**
| Model | Var | covRMSE↓ | MELRu↓ | MELRw↓ | KLD↓ | Wass1↓ | MMD↓ |
|-------------------|------|----------|--------|--------|-------|--------|------|
| **8x downscale** | | | | | | | |
| BCSD | 0 | 0.31 | 0.67 | 0.25 | 2.19 | 0.23 | 0.10 |
| cycGAN | 0 | 0.15 | 0.08 | 0.05 | 1.62 | 0.32 | 0.08 |
| ClimAlign | 0 | 2.19 | 0.64 | 0.45 | 64.37 | 2.77 | 0.53 |
| Raw+cDfn | 0.27 | 0.46 | 0.79 | 0.37 | 73.16 | 1.04 | 0.42 |
| OT+Cubic | 0 | 0.12 | 0.52 | 0.06 | 1.46 | 0.42 | 0.10 |
| OT+ViT | 0 | 0.43 | 0.38 | 0.18 | 1.72 | 1.11 | 0.31 |
| (ours) OT+cDfn | 0.36 | 0.12 | 0.06 | 0.02 | 1.40 | 0.26 | 0.07 |
| **16x downscale** | | | | | | | |
| BCSD | 0 | 0.34 | 0.67 | 0.25 | 2.17 | 0.21 | 0.11 |
| cycGAN | 0 | 0.32 | 1.14 | 0.28 | 2.05 | 0.48 | 0.13 |
| ClimAlign | 0 | 2.53 | 0.81 | 0.50 | 77.51 | 3.15 | 0.55 |
| Raw+cDfn | 1.07 | 0.46 | 0.54 | 0.30 | 93.87 | 0.99 | 0.39 |
| OT+Cubic | 0 | 0.25 | 0.55 | 0.13 | 7.30 | 0.85 | 0.20 |
| OT+ViT | 0 | 0.14 | 1.38 | 0.09 | 1.67 | 0.32 | 0.07 |
| (ours) OT+cDfn | 1.56 | 0.12 | 0.05 | 0.02 | 0.83 | 0.29 | 0.07 |
**Results for symmetric CRPS**
(To show that CRPS does not properly discriminate - see response to reviewer fyEE for context)
| | Reference | BCSD | cycGAN | Raw+cDfn | OT+Cubic | OT+ViT | OT+cDfn |
|------|-----------|------|--------|----------|----------|--------|---------|
| CRPS | 2.50 | 2.25 | 2.42 | 1.87 | 2.47 | 2.18 | 2.53 |
Pdf: /pdf/0101bb46e4e404a1730170a1b902f7827acdbe20.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Global Optimality in Bivariate Gradient-based DAG Learning | Accept (poster) | Summary: The authors give a simple optimization algorithm for DAG-learning-inspired optimization problems that avoids the limitations of known techniques.
Strengths: Originality:
Work is original.
I particularly liked the reduction from a combinatorial problem to a non-convex optimization one.
Quality:
Simple and strong paper.
Clarity:
Clear writing, which can be improved; see questions.
Significance:
Significant topic/contributions.
Weaknesses: No significant weaknesses found.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Discussion around Equation (1) is very clean.
Line 49:
Can you please explain Equation (2) a bit more?
Line 64:
Is it really easy to see? :)
Line 90:
Please explain "homotopy."
Remark 2:
Can you please further explain the difficulties that you mention here?
Can you please elaborate on Equation (7)?
Line 217:
Can you please elaborate on bounding $a$?
Please add more details in the caption of Figure 2.
Line 235:
Why is the interesting regime the one where $\mu < \tau$?
Line 258:
Not clear!
Line 261:
"we know" instead of "we known."
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for acknowledging the value of our contributions and our clear presentation.
> Line 49: Can you please explain Equation (2) a bit more?
>
Equation (2) is a penalized version of Equation (1), where $h(W(\Theta))$ acts as a penalty. This change turns the initial constrained problem into a set of simpler unconstrained problems. The role of $\mu_k$ in Equation (2) is to control the magnitude of constraint violation by penalty. When $\mu_k$ is large, $f(\Theta)$ takes priority, so the solution minimizes this but may not meet the constraint $h(W(\Theta)) = 0$. By decreasing $\mu_k$, we give more weight to the penalty. As we approach $\mu_k = 0$, the solution focuses more on satisfying $h(W(\Theta))=0$.
> Line 64: Is it really easy to see? :)
>
We will be sure to expand on this in the final version. Intuitively, let's recall that $f(\Theta)$ is presumed to be a convex function in our study. When $\mu_k$ is large, $f(\Theta)$ dominates in Equation (2). Consequently, $g_{\mu_k}(\Theta)$ behaves similarly to $f(\Theta)$ as a "convex" function, implying a benign loss landscape for $g_{\mu_k}(\Theta)$ when $\mu_k$ is large. This can be formally established as long as $f(\Theta)$ is a strongly convex function, which our population LS loss is.
> Line 90: Please explain "homotopy."
>
“Homotopy optimization", also known as "continuation optimization" [1, 2], is a strategy for finding solutions to complex optimization problems. The term "homotopy" is borrowed from topology (a branch of mathematics), where it denotes a continuous transformation from one function to another. This concept is applied to optimization to form a bridge from a problem with a known or easy-to-find solution to a more complex problem. In our context, resolving $g_{\mu_k}(\Theta)$ becomes straightforward with larger values of $\mu_k$, while it becomes challenging when $\mu_k$ is small.
> Remark 2: Can you please further explain the difficulties that you mention here?
>
The main challenge lies in the increasing complexity of the loss landscape of $g_{\mu}(\Theta)$ as the model dimensions increase. As a result, analyzing the basin of attraction for the global minimum of $g_{\mu}(\Theta)$ becomes increasingly difficult. Moreover, the penalty is an order-$d$ matrix polynomial, which becomes more complex as $d$ increases.
> Can you please elaborate on Equation (7)?
>
Equation (7) is a detailed version of Equation (2). In the context of our study, we focus on the bivariate case, where the loss function is represented by the expected least square, given by $f(x,y) = \frac{1}{2}((1-ay)^2+y^2+(a-x)^2+1)$. The constraint function is denoted by $h(x,y) = \frac{x^2y^2}{2}.$
Therefore, we can express $g_{\mu}(x,y) = \mu f(x,y)+h(x,y)=\frac{\mu}{2}((1-ay)^2+y^2+(a-x)^2+1)+\frac{x^2y^2}{2}$. The function $g_{\mu}(x,y)$ is used recurrently in our Algorithm 2 or 3, so our principal interest lies in understanding its properties.
> Line 217: Can you please elaborate on bounding $a$?
>
Thanks for bringing it up! The lower bound on $a$ is indeed a standard and sensible assumption. One can view the magnitude of $a$ as a measure of problem difficulty, with $a$ essentially behaving like the “signal strength" of the underlying structure: larger $a$ indicates the structure is easier to learn. For a more direct insight, consider Equation (6)'s local optimal solution, namely $(x_1^*,y_1^*) = (0,\frac{a}{a^2+1})$. The corresponding loss is $\frac{1}{2}(a^2+1+\frac{1}{a^2+1})$. Now, the global optimal solution is $(x_0^*,y_0^*) = (a,0)$, with loss $f(x^*_0,y^*_0) = 1$. The loss difference between the local and global optima is therefore $\frac{1}{2}(a^2+1+\frac{1}{a^2+1}) - 1$, an increasing function of $a$. As $a$ enlarges, both the loss difference and the basin of attraction for the global optimum increase in general, making the optimization easier.
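These loss values can be checked numerically; a tiny sketch (our own, with an illustrative choice $a = 2$):

```python
a = 2.0  # illustrative signal strength

def f(x, y):
    """Population least-squares loss from Equation (7)."""
    return 0.5 * ((1 - a * y) ** 2 + y ** 2 + (a - x) ** 2 + 1)

loss_local = f(0.0, a / (a ** 2 + 1))  # local optimum (x1*, y1*)
loss_global = f(a, 0.0)                # global optimum (x0*, y0*)
gap = loss_local - loss_global
print(loss_local, loss_global, gap)  # ~ 2.6, 1.0, 1.6 for a = 2
```

The gap matches the closed form $\frac{1}{2}(a^2+1+\frac{1}{a^2+1}) - 1$ and grows with $a$, as stated above.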
> Please add more details in the caption of Figure 2.
>
Thanks for the suggestion. Here, for $\mu>\tau,$ there exists a single solution to $r(y;\mu) = 0$, which implies there is one stationary point in Equation (7). When $\mu=\tau,$ two solutions are found for $r(y;\mu) = 0$, suggesting that there are two stationary points in Equation (7). Conversely, when $\mu<\tau,$ we observe three solutions for $r(y;\mu) = 0$, indicating that there are three stationary points in Equation (7): a local optimum, a saddle point, and a global optimum.
> Line 235: Why is the interesting regime the one where $\mu<\tau$?
>
Thanks for bringing up this point! In this regime, where multiple stationary points of $g_{\mu}(\Theta)$ exist - a local optimum, a saddle point, and a global optimum, as described in Lemma 1 - gradient descent on $g_{\mu}(\Theta)$ may get trapped at the local optimum. This scenario is precisely what our methodology aims to avoid; the updating scheme for $\mu_k$ in Algorithm 2 or 3 does exactly this. Conversely, when $\mu>\tau,$ only a single stationary point is observed. As demonstrated in Theorem 3 (Appendix), gradient descent invariably converges to this point, which consistently represents the global optimum. Hence, in such conditions, there are no concerns to be addressed.
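To illustrate the role of the $\mu_k$ schedule, here is a self-contained toy run (our own sketch with a simple geometric decay and $a = 2$; the paper's Algorithms 2/3 specify the schedule precisely, which this stand-in does not reproduce). Warm-started gradient descent tracks the global branch toward $(a, 0)$ as $\mu$ shrinks, instead of falling into the local optimum:

```python
import numpy as np

a = 2.0  # illustrative signal strength; constrained global optimum is (a, 0)

def grad_g(x, y, mu):
    """Gradient of g_mu(x, y) = mu * f(x, y) + x^2 y^2 / 2."""
    gx = mu * (x - a) + x * y ** 2
    gy = mu * (-a * (1 - a * y) + y) + x ** 2 * y
    return np.array([gx, gy])

def homotopy_descent(mu0=10.0, decay=0.5, n_outer=30, n_inner=2000, lr=0.01):
    """Gradient descent on g_mu, warm-starting each stage at the
    previous stage's solution while mu decays geometrically."""
    xy = np.zeros(2)
    mu = mu0
    for _ in range(n_outer):
        for _ in range(n_inner):
            xy = xy - lr * grad_g(xy[0], xy[1], mu)
        mu *= decay
    return xy

solution = homotopy_descent()
print(solution)  # approaches the global optimum (2, 0), not (0, a/(a^2+1))
```

The decay factor, step size, and iteration counts are illustrative assumptions; the paper's analysis shows how the schedule must be chosen for this tracking to be provable.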
> Line 258: Not clear!
>
We are happy to explain this in a more accessible way. The fundamental idea is to identify the basin of attraction of the global optimum $(x^*_{\mu_{k+1}},y^*_{\mu_{k+1}})$ of $g_{\mu_{k+1}}(\Theta)$, and to make sure the previous solution $(x^*_{\mu_{k}},y^*_{\mu_{k}})$, used as the initialization, falls into this region. We then use gradient descent or gradient flow to converge towards $(x^*_{\mu_{k+1}},y^*_{\mu_{k+1}})$. The proof follows this reasoning and affirms that our updating scheme for $\mu_k$ achieves exactly this.
[1] Lin, Yang, Zhang, and Zhang. "Continuation Path Learning for Homotopy Optimization." 2023.
[2] Hazan, Levy, and Shalev-Shwartz. "On graduated optimization for stochastic non-convex problems." 2016. | Summary: This paper studies the problem of learning the correct Directed Acyclic Graph (DAG) that describes the data using continuous optimization. The connection of this problem with continuous methods comes from a prior work of Zheng et al., where they introduce a differentiable function $h$ whose level set at 0 exactly characterizes DAGs. The problem then becomes to optimize a score function of the data subject to the constraint that h is 0. This is a non-convex problem, since h is non-convex. The standard way of solving it is by converting it to an unconstrained problem and penalizing the solutions with high values of h. This paper provides a way of solving a sequence of these problems with varying penalty parameters, so that the final output provably converges to the true solution, when we have two nodes in the DAG. The model is analyzed in the population case, i.e. when we have access to infinite data. The authors first show that for sufficiently small values of the penalty parameter, the loss landscape is not benign, as there exists a saddle point and a spurious local minimum in addition to the global minimum. Using this observation, they show a way to pick a sequence of penalty parameters, which provides a solution path that avoids the local minima and converges provably to the true model asymptotically. Their main result is phrased in terms of gradient flow, but they also provide a finite step analysis for gradient descent.
Strengths: Consider the optimization problem
\begin{equation}\label{eq1}
\min_\Theta g_\mu(\Theta) := \mu f(\Theta) + h(W(\Theta))
\end{equation}
where $f$ is the least squares loss, $\mu$ is the penalty parameter, $\Theta$ represents the model parameters, and $W(\Theta)$ represents the adjacency matrix of the DAG. $h$ is a function that was introduced in prior work, which is $0$ if and only if $W(\Theta)$ corresponds to a DAG.
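As a concrete sketch of such an acyclicity function $h$, here is the trace-of-matrix-exponential penalty of Zheng et al., evaluated on toy two-node graphs (the example matrices are ours; the trace of a matrix exponential is computed via eigenvalues to keep the sketch NumPy-only):

```python
import numpy as np

def h_expm(W):
    """NOTEARS-style acyclicity value tr(exp(W ∘ W)) - d, computed via
    eigenvalues: tr(exp(M)) equals the sum of exp over M's eigenvalues.
    The value is 0 iff the weighted graph W is a DAG."""
    d = W.shape[0]
    eig = np.linalg.eigvals(W * W)
    return float(np.sum(np.exp(eig)).real) - d

dag   = np.array([[0.0, 1.5], [0.0, 0.0]])  # X1 -> X2, acyclic: h == 0
cycle = np.array([[0.0, 1.0], [1.0, 0.0]])  # X1 <-> X2, cyclic: h > 0
```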
This paper is the first that provides theoretical guarantees about the optimization landscape of this unconstrained problem for various values of $\mu$, in the case of $2$ nodes. It is worth noting that this is actually the method that is used in practice to learn DAGs, since enumeration methods quickly become intractable when the number of nodes in the DAG increases.
The authors establish interesting properties of the solutions space, namely that if $\mu$ is sufficiently small, the loss is not benign.
Also, the idea of successively solving different optimization problems and using the solution of the last as a starting point for the next is an intriguing one and could have applications beyond this work. The proof strategy is also very intuitive and elegant, by choosing $\mu$ so that the optimizer of $g_\mu$ lies in the basin of attraction of the global minimum of the next problem.
All arguments are very clearly explained, which makes this paper enjoyable to read.
Weaknesses: -The result is only proven for $d=2$ nodes in the DAG. This makes most calculations tractable, since there is a closed form for the loss $g_\mu$ (a quadratic polynomial in two variables). Hence, while the proof strategy is elegant and novel, the technical arguments required to establish the claims do not have significant innovation.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: -It would be interesting to see whether similar results hold for higher values of $d$. Have the authors tried to run simulations for $d=3$ or $4$ and observe similar solution landscapes? What about other choices of loss function or penalty?
-Is it always true that there is a unique DAG that describes the data? For example, for two nodes, suppose we have $X_2 = X_1 + \epsilon$, where $\epsilon\sim \mathcal{N}(0,1)$. Then, clearly it is also true that $X_1 = X_2 + \epsilon'$, where $\epsilon' \sim \mathcal{N}(0,1)$. In that case, it seems that we have two solutions $x=1,y=0$ and $x=0,y=1$. I'm curious where would this come up in the analysis.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort put into carefully reviewing our paper and for providing such valuable feedback.
> The result is only proven for $d = 2$ nodes in the DAG. This makes most calculations tractable, since there is a closed form for the loss (a quadratic polynomial in two variables). Hence, while the proof strategy is elegant and novel, the technical arguments required to establish the claims do not have significant innovation.
>
We thank the reviewer for acknowledging that the analysis is elegant and novel! See the discussion in **Common Concern II**.
> It would be interesting to see whether similar results hold for higher values of $d$. Have the authors tried to run simulations for $d = 3$ or $d = 4$ and observe similar solution landscapes?
>
Thanks for the keen question! Please refer to **Common Concern I** for more details.
> What about other choices of loss function or penalty?
>
We appreciate this interesting question!
Crucially, the proof techniques in our work are notably general. Specifically, the implicit function theorem offers a means to trace the trajectory of stationary points, and the Lyapunov asymptotic stability theorem can consistently be used to identify the basin of attraction. These techniques clearly generalize to more general loss functions, which is an intriguing feature of our results. Extending our analysis in this way is an important future direction. Furthermore, it is worth emphasizing that under our model, the least squares loss is the most appropriate loss function: this is because its global minimizer is unique and equals the underlying matrix $W_*$.
In relation to alternative penalties, we explored the matrix exponential $h_{\text{expm}}(W) = \operatorname{tr}(e^{W\circ W}) - d$ as described in [3] and the log-det formulation $h_{\text{ldet}}(W) = -\operatorname{logdet}(sI-W\circ W) + d\log s$, where $\rho(W\circ W)<s$ (with $\rho$ representing the spectral radius), from [1]. Firstly, all these penalties are designed to enforce the DAG constraint on $W$. They are largely equivalent in application, implying that understanding one offers insights into the others. This equivalence is evident in experimental results; for instance, the loss landscapes are similar to each other. When $\mu$ falls below a certain threshold, the presence of local optima and saddle points for $g_{\mu}(\Theta)$ remains consistent irrespective of whether $h_{\text{expm}}(W)$ or $h_{\text{ldet}}(W)$ is employed. However, incorporating $h_{\text{expm}}(W)$ or $h_{\text{ldet}}(W)$ introduces exponential or logarithmic terms into $g_{\mu}(\Theta)$, adding layers of complexity to the analysis, as well as an additional constraint in the case of $h_{\text{ldet}}$, since its validity rests on the condition $\rho(W\circ W)< s$.
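A minimal sketch of the log-det penalty described above (the example matrices and the choice $s = 1$ are ours, chosen so that $\rho(W\circ W) < s$ holds):

```python
import numpy as np

def h_ldet(W, s=1.0):
    """Log-det acyclicity value -logdet(s*I - W∘W) + d*log(s);
    valid only when the spectral radius of W∘W is below s.
    The value is 0 iff the weighted graph W is a DAG."""
    d = W.shape[0]
    M = s * np.eye(d) - W * W
    sign, logdet = np.linalg.slogdet(M)
    assert sign > 0, "outside the feasible region rho(W∘W) < s"
    return -logdet + d * np.log(s)

dag   = np.array([[0.0, 0.5], [0.0, 0.0]])  # acyclic: penalty is 0
cycle = np.array([[0.0, 0.5], [0.5, 0.0]])  # cyclic: penalty is positive
```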
> Is it always true that there is a unique DAG that describes the data?
>
We appreciate your important question. This is the identification problem in the DAG learning literature. In essence, the model is identifiable if different DAGs cannot generate the same distribution. In our setting, a linear model with equal (or known) error variances is identifiable (this is well-known, e.g. [5,6]). Therefore, solving Equation (6) will yield a unique DAG that accurately generates the data distribution. Although identifiability is not universal for all models, many have been confirmed as identifiable, e.g. [4,5,6,7,8]. Extending our techniques to these models is an important direction for future work. We hope this clarifies your question.
> For example, for two nodes, suppose we have $X_2 = X_1 +\epsilon$, where $\epsilon \sim N(0,1)$. Then, clearly it is also true that $X_1 = X_2 +\epsilon'$ , where $\epsilon'\sim N(0,1)$. In that case, it seems that we have two solutions $x = 1,y = 0$ and $x = 0,y = 1$. I'm curious where would this come up in the analysis.
>
This is a great question! The issue boils down to independence of the noise terms (see L154), which is violated by your example (see below for details). Another way to interpret this assumption is that $N_2$ is independent of $X_1$, or more generally, that each noise term is independent of the parents in the structural equation model. See [5] or [6] for more discussion.
Details: Your two models are (a) $X_2 = X_1 +\epsilon$ and (b) $X_1 = X_2 +\epsilon'$ with $\epsilon'=-\epsilon$. Recalling our model assumptions that $(N_1,N_2)$ are independent and after transcribing the notation, in (a) this is equivalent to $X_1$ independent of $\epsilon$. Clearly, $X_2$ is not independent of $\epsilon$. But then in (b), $\epsilon'$ cannot be independent of $X_2$, which violates our model assumptions. So, there is only one model with independent noise terms. In fact, this argument generalizes quite substantially to LinGAM and additive noise models. In the Gaussian case there are some subtleties, but see [5-7] for details on this case.
This fact does not explicitly come up in our analysis, but is implicit in the fact that only (a) will minimize the LS loss, a fact which follows from known results such as [5] or [6].
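The dependence in the reversed model can be checked numerically; below is a quick simulation sketch (variable names and the random seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)           # X1 = N1
eps = rng.normal(size=n)          # N2, drawn independently of X1
x2 = x1 + eps                     # forward model (a): X2 = X1 + eps
eps_rev = x1 - x2                 # noise of the reversed model (b); equals -eps

c_fwd = np.corrcoef(x1, eps)[0, 1]      # near 0: noise independent of parent
c_rev = np.corrcoef(x2, eps_rev)[0, 1]  # near -1/sqrt(2): clear dependence
```

Only the forward direction satisfies the independent-noise assumption; the reversed "noise" is strongly correlated with its would-be parent $X_2$.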
[4] Peters, Jonas, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. "Causal discovery with continuous additive noise models." (2014).
[5] Loh, Po-Ling, and Peter Bühlmann. "High-dimensional learning of linear causal networks via inverse covariance estimation." *The Journal of Machine Learning Research* 15, no. 1 (2014): 3065-3105.
[6] Peters, Jonas, and Peter Bühlmann. "Identifiability of Gaussian structural equation models with equal error variances." *Biometrika* 101, no. 1 (2014): 219-228.
[7] Aragam, Bryon, and Qing Zhou. "Concave penalized estimation of sparse Gaussian Bayesian networks." *The Journal of Machine Learning Research* 16, no. 1 (2015): 2273-2328.
[8] Shimizu, Shohei, Patrik O. Hoyer, Aapo Hyvärinen, Antti Kerminen, and Michael Jordan. "A linear non-Gaussian acyclic model for causal discovery." *Journal of Machine Learning Research* 7, no. 10 (2006). | Summary: This paper presents a novel approach to the non-convex optimization problems associated with learning the structure of a structural equation model (SEM) or Bayesian network. Considering the equivalent penalty form, the authors propose a homotopy-based optimization scheme that finds global minimizers of the problem by iteratively decreasing the penalty coefficient according to a given schedule. They prove that this algorithm converges globally to the global minimum, regardless of the initialization for W. The authors also demonstrate that the non-convex program is non-benign, meaning that naïve implementation of black-box solvers are likely to get trapped in a bad local minimum.
Strengths:
I am not an expert in this field, but the paper's findings seem to have significant implications in the context of learning the structure of SEMs or Bayesian networks. The paper is well-structured and the authors clearly explain their methodology and findings. They also provide a clear visualization of the non-convex landscape and the solution trajectory.
Weaknesses: The authors' approach is primarily focused on the bivariate case, which may limit its applicability in more complex settings. Some numerical study on large-scale problems is desirable.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors:
How does the landscape look like in high dimensional setting? Can you provide any experiments to more than two variables?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Does not apply
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful critiques and comprehensive understanding of our work, and for providing such useful feedback. We will try our best to address the reviewer’s concern.
> The authors' approach is primarily focused on the bivariate case, which may limit its applicability in more complex settings. Some numerical study on large-scale problems is desirable.
>
We appreciate the reviewer's insightful question about the scalability of the Homotopy method on large-scale problems. Indeed, this is exactly what is done in practice on large-scale problems (see discussion in Line 84-87). Since this point is closely related to your follow-up questions, please see below for more details.
> Can you provide any experiments to more than two variables?
>
Many existing papers show that the Homotopy method can perform well empirically, even with hundreds or thousands of nodes. For example, the study presented in [1] employs the Homotopy method to solve Equation (1) and has achieved state-of-the-art results, as demonstrated in Figures 4, 5, and 6 in [1], and further elaborated in their appendix. The experiments, encompassing both linear and nonlinear models, offer compelling evidence in support of the homotopy method's effectiveness in practice. Furthermore, while [2] and [3] do not adopt the Homotopy algorithm in its exact form, their approaches share a similar spirit, solving Equation (2) repeatedly warm-started from previous solutions. For more empirical results, see Figure 1 and Appendix H in [2], and Figures 3, 7, and 8 in [3]. More discussion can be found in **Common Concern II**.
> How does the landscape look like in high dimensional setting?
>
Thanks for this insightful question. In higher-dimensional settings the landscape remains non-benign (e.g. even for $d = 3$, experiments indicate that multiple local optima and saddle points persist); however, existing work [1,3] has shown that the homotopy method is still very effective at finding good (in some cases, near-optimal) solutions. This is in fact part of the motivation for our study. Extending our results to higher dimensions is an important direction for future work, and it is worth pointing out that our techniques are indeed generalizable in principle: the main tools we use (the implicit function theorem and the Lyapunov asymptotic stability theorem) possess broad applicability, extending beyond two dimensions. While the direct translation of our findings to other contexts remains challenging, they undeniably pave the way for future explorations. More discussion can be found in **Common Concern I**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. There could be some interesting future directions to pursue, but my concern about the limitation of this current paper is not fully addressed. I will keep my score. | Summary: the paper provides a theoretical study on the loss landscape and convergence in gradient-based DAG learning framework. By focusing on linear functions with the number of variable d = 2, They provide a homotopy-based optimization scheme to guarantee the global optimality. Some numerical validations are provided.
Strengths: the paper focuses on the theoretical study of global optimality and convergence. This is an important but difficult question. The theoretical results mainly show the initialization regime required for convergence, which makes sense given the nonconvex nature, although I did not check the proof in detail (given its length).
Illustrative examples and results are provided.
Weaknesses: Unfortunately, the current results are only applicable to d=2 and linear functions (understandably, of course). No discussions on how nonlinear or a large number of variables would affect the loss landscape, and/or how the homotopy algorithm would be affected.
Would be more interesting if the insights of the theoretical results can be used in practical algorithms.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - "...bears a resemblance to the success of training deep models, which started with AlexNet for image classification": First of all, modern deep models had success before AlexNet on ImageNet, so this is not accurate. Second, such a statement is over the top and should be removed.
- L82: indeed only two DAGs are in the entire search space yet the analysis is quite complex. There is no indication on how such analysis and approach would scale or behave with an increasing number of variables, hence it is hard to judge the usefulness of these analysis yet. In comparison, although the combinatorial search is not used, its expected performance is the same with larger d.
- Eq 4: x is already used to represent data, and should avoid using it as parameter (lack of notation clarity).
- Eq 5: while re-ordering variable to obtain such a upper triangular structure is not an issue here yet, in datasets W* would generally not be so. How does a non-upper triangular structure impact analysis, for example a's usage afterward?
- Some derivations are not given in full. For example, the derivation involving f(W) and h(W) skips many steps in between.
- L199: what is k here?
- the regime of initialization for \mu: very interesting observation.
- optimization landscape with nonlinear functions: current analysis focuses on linear function. How would it change for nonlinear functions (such neural networks), even for d=2? Given the nonlinear nature, one can imagine it would make the analysis even hard, given two layers of neural networks are the most complex cases that has some theoretical understanding.
- can the homotopy-based optimization scheme be used to improve convergence of NOTEARS related algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: not discussed but not needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for their time, effort, and valuable suggestions.
> No discussions on how nonlinear or a large number of variables would affect the loss landscape, and/or how the homotopy algorithm would be affected.
>
Thanks for this insightful question! See **Common Concern I**.
> Would be more interesting … be used in practical algorithms.
>
That is an excellent point! We go into this in more depth in the discussion below.
> can the homotopy-based optimization scheme be used to improve convergence of NOTEARS related algorithms?
>
Indeed, our homotopy-based scheme is proposed precisely because it is what is used in practice [1][4]! Thus, our theory provides a guarantee that homotopy methods are indeed a viable approach in practice. Concretely, our results show that tuning the penalty schedule is important, and that global initialization is possible if the signal strength is sufficiently large. When done properly, convergence is very fast (exponential).
In more detail: when considering both Algorithm 2 and Algorithm 3, any given initialization converges towards the global optimum; consequently, there is no need to prioritize specific initializations. Furthermore, note that any initial penalty parameter $\mu_0$ falling into a specific range achieves this, negating the need to set $\mu_0$ to an exceedingly large value to ensure a favorable landscape for $g_{\mu}(\Theta)$ (i.e., one that possesses only a single stationary point); as a result, the number of outer iterations in Algorithms 2 and 3 is reduced. As for the decay rate of $\mu_k$, it can be quite rapid; in specific terms, an exponential decay. Our analysis offers clarity on the explicit dependence between convergence rates and various parameters; see Theorem 2 in the paper and Theorem 4 in the appendix for details. In principle, this knowledge can be used to accelerate NOTEARS-related algorithms, which is an important direction for future work. To illustrate, a greater weight in $W_*$ allows for a more progressive decay of $\mu_k$ and accommodates a smaller initialization for $\mu_0$, and as a result, faster convergence. Empirically, these theoretical conclusions align with our experimental findings (arbitrary initialization, and convergence in a few iterations). The experiments detailed in [1] confirm this: they begin with a zero initialization point and achieve their objectives in a mere four iterations, utilizing $\mu$ values set at $\{1,0.1,0.001,0\}$, even for hundreds and thousands of nodes.
> "...bears a resemblance … over the top and should be removed.
>
Thanks for flagging this. In hindsight we see how this is a bit over the top. We will remove this comment in the final version.
> L82: indeed only two DAGs … the same with larger d.
>
We appreciate the reviewer's insightful question. See **Common Concern II** for more details.
> Eq 5: while re-ordering … for example a's usage afterward?
>
Thank you for the insightful observation! Absolutely, in a broader context, $W_*$ could be neither an upper nor a lower triangular matrix, posing a considerable challenge for analysis. Determining how this affects our analysis is an interesting future direction. Nonetheless, there is another finding that could be useful. The underlying structure decides the nature of the analysis. For instance, for a three-node system with variables $X_1, X_2, X_3$, the analysis for a chain structure like $X_1\rightarrow X_2\rightarrow X_3$ should mirror that of $X_2\rightarrow X_1\rightarrow X_3$, given their similar structure in essence. Conversely, the analysis would be different if we compare the former chain with a collider structure like $X_1\rightarrow X_2\leftarrow X_3$. Yet, when we constrain the dimensions to $d=2$, the two choices are $X_1\rightarrow X_2$ or $X_2\rightarrow X_1$; thus, $W_*$ will either be an upper or lower triangular matrix. Consequently, the scenarios become equivalent, suggesting that our analysis for one suffices for both.
> the regime of initialization for $\mu$: very interesting observation.
>
Thanks for acknowledging this interesting observation. It is based on our analysis of $g_{\mu}(\Theta)$, and this regime yields several compelling implications. First, as $a$ increases, the regime typically expands, suggesting the problem gets easier when the underlying structure has a stronger signal (e.g. $a$ in Equation (5)). Second, any initial penalty parameter $\mu_0$ falling into a specific range achieves this, negating the need to set $\mu_0$ to an exceedingly large value to ensure a favorable landscape for $g_{\mu}(\Theta)$. Finally, it is essential to note that this regime is the key to figuring out a universal initialization $\mu_0 = \frac{1}{27}$ for Algorithm 3, since $\frac{a^2}{4(a^2+1)^3}$ is upper bounded by $\frac{1}{27}$.
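The upper bound $\frac{a^2}{4(a^2+1)^3} \le \frac{1}{27}$ quoted above can be sanity-checked numerically (a throwaway grid search of ours, not part of the paper; calculus places the maximum at $a^2 = 1/2$):

```python
import numpy as np

# Evaluate a^2 / (4 (a^2 + 1)^3) on a fine grid and take the maximum.
a = np.linspace(0.0, 10.0, 1_000_001)   # step 1e-5, covers the peak region
vals = a**2 / (4 * (a**2 + 1)**3)
peak = float(vals.max())                # should be very close to 1/27
```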
> optimization landscape with nonlinear functions… theoretical understanding.
>
This is a very good question! When neural networks are introduced into the mix, the loss function $g_{\mu}(\Theta) = \mu f(\Theta)+h(W(\Theta))$ inherits a dual-layer complexity. The first layer arises from the inherent nonconvexity of the neural network as seen in the loss $f(\Theta)$, while the second stems from the nonconvexity of the DAG constraint present in $h(W(\Theta))$. Such compounded intricacies can be expected to substantially alter the landscape of $g_{\mu}(\Theta)$, potentially introducing numerous local optima and saddle points. Although we are currently uncertain on this matter, it is an interesting direction to explore how advancements in the study of two-layer neural networks can be integrated with the techniques we employed, such as the Lyapunov asymptotic stability theorem and the implicit function theorem, to better understand the theoretical facets of DAG learning using neural networks.
[4] Ng, Lachapelle, Ke, Julien, and Zhang. "On the convergence of continuous constrained optimization for structure learning." 2022. | Rebuttal 1:
Rebuttal: To all reviewers,
We thank all reviewers for the time put into reading our work and for their valuable comments. We appreciate the consensus that our paper is well written and that its theoretical contributions are delivered clearly. Finally, we appreciate Reviewer Y11K's acknowledgment that our analysis is elegant and novel. We next respond to the common concerns.
*Due to character limits, any minor concerns omitted will certainly be addressed based on the reviewers' suggestions in the final version.*
*Regarding the other major concerns raised, we will respond to each reviewer individually.*
**Common Concern I**
> How landscape is affected for high dimensional setting and how the homotopy algorithm would be affected.
>
This is a very insightful question. In higher-dimensional settings the landscape remains challenging; however, previous work has convincingly demonstrated the utility and applicability of homotopy-based approaches in practice. Indeed, the original NOTEARS implementation as well as more recent advances use some form of homotopy algorithm; see [1,2,3] for details. Although homotopy methods generally face challenges with getting trapped in local optima, there are certain scenarios where this can potentially be circumvented. Based on our own experiments, global convergence remains feasible in these instances:
- Increasing the signal strength (e.g. $a$ in Equation (5)) appears to prevent the Homotopy method from being trapped into local optima. This weight can be seen as the signal strength of the underlying structure, and enhancing it could also alleviate the "trapping into local optimum" issue. This aligns with our findings in the case of $d = 2$; when $a > \sqrt{5/27}$, there exists a universal updating scheme for $\mu_k$ that functions effectively, and global convergence is guaranteed as stated in Corollary 1.
- Enhancing the sparsity of the underlying structure indeed seems to mitigate the risk of the Homotopy method getting trapped into local optima. This intriguing observation coincides with prior experimental results in [1][2][3], which indicate that sparse graphs are generally easier to learn. As such, this important discovery further reinforces the value of investigating sparsity as a potential direction for future work.
In higher-dimensional settings, the landscape remains non-benign. For instance, even when $d = 3$, experiments indicate the presence of multiple local optima and saddle points. Determining how to incorporate sparsity structure and signal strength into the analysis to ensure global convergence of the homotopy method is an important direction for future work.
**Common Concern II**
> How such analysis and approach would scale or behave with an increasing number of variables?
>
This is a really insightful point!
- Firstly, many existing papers suggest that the Homotopy method can perform well empirically, even with hundreds or thousands of nodes. This is described in our paper (specifically, Remark 1, L84-87). The study presented in [1] employs the Homotopy method to solve Equation (1) and has achieved state-of-the-art results, as demonstrated in Figures 4, 5, and 6 in [1], and further elaborated in their appendix. Furthermore, while [2] and [3] do not adopt the Homotopy algorithm in its exact form, their approaches share a similar spirit, solving Equation (2) repeatedly warm-started from previous solutions. It is essential to note, however, that a theoretical understanding of the convergence dynamics and the loss landscape of $g_{\mu}(\Theta)$ remains largely unexplored. Our investigation into the bivariate scenario aspires to address this gap, offering a theoretical framework to explain and improve these known empirical findings.
- Secondly, we would like to emphasize that the current analysis is intuitive but far from trivial and has the potential to inform more complex settings. Initially, we delineate the conditions leading to multiple stationary points of $g_{\mu}(\Theta)$ as evidenced by Theorems 5, 6(v) – a process that relies heavily on a clever analysis of two related objectives $r(y_{\text{ub}};\mu)$ and $t(x_{\text{lb}};\mu)$ , especially given the absence of explicit formula for the solutions to $r(y;\mu) = 0$ and $t(x;\mu) = 0$ (see proof of Theorems 5,6 in appendix). Subsequently, we leverage the implicit function theorem to identify the evolving trajectories of these stationary points as a function of $\mu$ (see Theorem 5, 6(vi)), pinpointing the precise trajectory to pursue. Finally, our use of the Lyapunov asymptotic stability theorem allows us to define the basin of attraction for the correct stationary point, detailed in the proof of Lemma 7. Each of these components not only presents an intuitive understanding but can also serve as foundational blocks for more intricate cases. It's noteworthy that tools like the implicit function theorem and the Lyapunov asymptotic stability theorem possess broad applicability, extending beyond the confines of linear models. While the direct translation of our findings to other contexts remains challenging, they undeniably pave the way for future explorations.
[1] Bello, Kevin, Bryon Aragam, and Pradeep Ravikumar. "Dagma: Learning dags via m-matrices and a log-determinant acyclicity characterization." *Advances in Neural Information Processing Systems* 35 (2022): 8226-8239.
[2] Ng, Ignavier, AmirEmad Ghassami, and Kun Zhang. "On the role of sparsity and dag constraints for learning linear dags." *Advances in Neural Information Processing Systems* 33 (2020): 17943-17954.
[3] Zheng, Xun, Bryon Aragam, Pradeep K. Ravikumar, and Eric P. Xing. "Dags with no tears: Continuous optimization for structure learning." *Advances in neural information processing systems* 31 (2018). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fast Bellman Updates for Wasserstein Distributionally Robust MDPs | Accept (poster) | Summary: * tailored algorithms for solving Wasserstein distributionally robust MDPs and fast implementations with $L_1$, $L_2$, $L_\infty$-based Wasserstein distance.
Strengths: * Fastest algorithms (in terms of dependency on N, S, and A - the number of kernels/states/actions) for solving Wasserstein distributionally robust MDPs.
* Paper is easy to follow, short and straight-to-the-point.
Weaknesses: * This is a rather niche application of principles that have been developed in several other papers for robust MDPs [1,2,3].
[1] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Fast Bellman updates for robust MDPs. In International Conference on Machine Learning, pages 1979-1988. PMLR, 2018.
[2] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Partial policy iteration for L1-robust Markov decision processes. The Journal of Machine Learning Research, 22(1):12612-12657, 2021.
[3] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Robust phi-divergence MDPs. arXiv preprint arXiv:2205.14202, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Model : can your algorithms be extended to handle the case of both transitions and reward uncertainty?
Theorem 4.4: how do the $\epsilon_1$ and $\epsilon_2$ error propagate into the overall error of $v_t$, the t-th iterate of value iteration?
Wasserstein balls: how is the radius $\theta$ chosen in practice, given some dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I do not see any serious limitations in this work. The only downside is the somewhat narrow scope in terms of outreach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive comments and for taking the time to read our manuscript!
**Weaknesses**
1. This is a rather niche application of principles that have been developed in several other papers for robust MDPs.
In the same spirit of using first-order methods for solving both robust MDPs (RMDPs) and distributionally robust MDPs (DRMDPs) [iv-v], while we agree that the proposed algorithm belongs to the class of bisection methods that solve RMDPs [i-iii], we would like to highlight the differences between our work and [i-iii]. (a) Firstly, we consider DRMDPs, which are a generalization of RMDPs, so we differ from [i-iii] in terms of problem setting. (b) Secondly, the resemblance between our bisection step and [i-iii] only occurs in lines 200-213, which is less than half a page of this paper. In particular, the second bisection step in the proposed nested bisection method and Proposition 4.3 are novel to this paper; without them, it is not easy to design algorithms whose time complexity is almost linear in the number of expected kernels, which do not exist in RMDPs [i-iii]. In our opinion, this is one of the key factors by which our proposed algorithm outperforms state-of-the-art algorithms such as [v]. (c) Finally, the subproblems, such as (11), are different from the RMDP case, and so fast computation requires new theoretical results and algorithms. Thank you for your question; we will clarify this point in the next version of our manuscript.
**Questions**
1. Model : can your algorithms be extended to handle the case of both transitions and reward uncertainty?
Thank you for your great question. Yes, our algorithm can be extended to reward uncertainty, because the decomposition technique can be seamlessly applied to this extension. We will clarify this point in our next version of the manuscript.
2. Theorem 4.4: how do the $\epsilon_1$ and $\epsilon_2$ error propagate into the overall error of $v_t$, the t-th iterate of value iteration?
Thanks again for your insightful question. We have proved a new result on the error bound for the Bellman update, which is $\mathcal{O}(\epsilon_1+\epsilon_2)$. Since the complexities of the proposed algorithms are in $\mathcal{O}(\log\epsilon_1^{-1}\log\epsilon_2^{-1})$, one can compute a highly accurate solution with very small $\epsilon_1$ and $\epsilon_2$. We will provide this result in our next version of the manuscript.
3. Wasserstein balls: how is the radius $\theta$ chosen in practice, given some dataset?
In practice, $\theta$ is often chosen via cross-validation, e.g. [vi]. However, one could also derive theoretical bounds such that the ambiguity set contains the unknown true distribution with high confidence [vii].
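As an illustration of the cross-validation route (our sketch, not from the paper or [vi]; `evaluate`, `folds`, and the candidate grid are all placeholders), the selection loop could look like:

```python
def choose_radius(candidates, folds, evaluate):
    """Pick the Wasserstein radius theta by cross-validation.
    `evaluate(theta, train, val)` is a user-supplied placeholder that fits a
    robust policy on `train` and returns its validation score (higher is better)."""
    def avg_score(theta):
        return sum(evaluate(theta, tr, va) for tr, va in folds) / len(folds)
    return max(candidates, key=avg_score)

# Dummy usage: a score curve peaked at theta = 0.3 selects 0.3 from the grid.
best = choose_radius([0.1, 0.3, 0.5, 1.0],
                     folds=[(None, None)] * 3,
                     evaluate=lambda th, tr, va: -(th - 0.3) ** 2)
```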
[i] Fast Bellman updates for robust MDPs. 2018.
[ii] Partial policy iteration for l1-robust Markov decision processes. 2021.
[iii] Robust phi-divergence MDPs. 2022.
[iv] Scalable first-order methods for robust MDPs. 2021.
[v] First-order methods for Wasserstein distributionally robust MDP. 2021.
[vi] Distributionally robust inverse covariance estimation: the Wasserstein shrinkage estimator, 2020.
[vii] Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. In light of the responses of the authors I am more convinced of the importance of the results in the paper and I have increased my score.
---
Reply to Comment 1.1.1:
Title: Thank you!!
Comment: Thank you very much for your time reading our rebuttal! Thanks a lot for being very supportive and recognizing our work! | Summary: - The paper proposes a computationally efficient solution framework to solve the distributionally robust Bellman operator induced by Wasserstein ambiguity sets, which is critical in performing distributionally robust value iteration algorithms.
- The proposed framework features a novel decomposition of the optimization problem involved in DR Bellman updates, based on which the updates are reduced to solving small subproblems.
- The paper shows the overall complexity of the proposed framework is quasi-linear in $S$ and $A$ when considering Wasserstein distance with $L_1$, $L_2$, or $L_\infty$ norm, proving the advantage of the proposed framework. The theory is further supported by numerical experiments.
Strengths: - The problem of solving the distributionally robust Bellman operator in a computationally efficient manner is a critical one in the DRMDP literature. The presentation of the paper is also clear.
Weaknesses: - The framework seems to rely on the specific structure of the Wasserstein ambiguity set with specific reference distribution of the Wasserstein ball, which makes the application of the framework limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Regarding the formulation of distributionally robust MDPs, the reference distribution of the ambiguity set $\nu_s$ is defined as $\mu_s(\cdot) = \frac{1}{N}\sum_{i=1}^N\delta_{\hat{\boldsymbol{p}}_s^i}(\cdot)$ with $\hat{\boldsymbol{p}}^i$ being $N$ empirical transition kernels. What do these empirical distributions correspond to in real-world applications of reinforcement learning algorithms? Typically, we have a single empirical transition kernel estimated from historical data. How can we interpret these $N$ empirical transition kernels?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is stated in previous parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive comments and your time to review our paper!
**Weaknesses**
1. The framework seems to rely on the specific structure of the Wasserstein ambiguity set with the specific reference distribution of the Wasserstein ball, which makes the application of the framework limited.
We would like to clarify that Wasserstein ambiguity sets have many nice properties, such as consistency in optimality and finite-sample bounds [i], and thus provide a general and powerful metric for distributionally robust MDPs. In recent years, Wasserstein ambiguity sets have been widely used in stochastic optimization [iv] and multistage stochastic optimization [v]. Compared to the $\phi$-divergence, the Wasserstein structure does not suffer from the absolute continuity restriction and accounts for how close two points in the support are to each other. More advantages of the Wasserstein structure are provided in [iii].
**Questions**
1. Regarding the formulation of distributionally robust MDPs, the reference distribution of the ambiguity set is defined with $N$ empirical transition kernels. What do these empirical distributions correspond to in the real-world application of reinforcement learning algorithms? Typically, we have a single empirical transition kernel estimated from historical data. How can we interpret these $N$ empirical transition kernels?
Thank you for your very insightful question! You are absolutely correct; the common practice is to estimate just one empirical transition kernel. However, in our context, one could/should adopt the Bayesian point of view (instead of the frequentist one), see e.g. [vi]; that is, one could use historical data to estimate the posterior distribution of the transition kernel. Then the $N$ empirical transition kernels could be sampled from the posterior distribution.
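For instance (our illustration, not from the paper or [vi]), one could place independent Dirichlet posteriors on the rows of the transition kernel given observed transition counts, and draw the $N$ kernels from them; the state count, prior, and count ranges below are arbitrary choices:

```python
import random

random.seed(0)
S, N = 4, 50  # number of states, number of sampled kernels (illustrative sizes)
# Observed transition counts from state s to state s' (synthetic data here).
counts = [[random.randint(0, 20) for _ in range(S)] for _ in range(S)]

def sample_dirichlet(alpha):
    """One Dirichlet(alpha) draw via normalized Gamma variates."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [x / total for x in g]

# Under a uniform prior, the posterior over row s is Dirichlet(1 + counts[s]);
# the N draws play the role of the empirical kernels p_hat^1, ..., p_hat^N.
kernels = [[sample_dirichlet([1.0 + c for c in counts[s]]) for s in range(S)]
           for _ in range(N)]
```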
[i] PM Esfahani and D Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. 2018.
[ii] I. Yang. A convex optimization approach to distributionally robust Markov decision processes with Wasserstein distance. 2017.
[iii] R. Gao, A. Kleywegt. Distributionally robust stochastic optimization with Wasserstein distance. 2022.
[iv] D. Wozabal. Robustifying convex risk measures for linear portfolios: A nonparametric approach. 2014.
[v] G.C. Pflug and A. Pichler. Multistage stochastic optimization. 2014.
[vi] R. H. Russel, M Petrik. Beyond confidence regions: tight Bayesian ambiguity sets for robust MDPs. 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. I have read the rebuttals and the comments from other reviewers. I have no further concerns and I would raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you!!
Comment: Thank you very much for your time reading our rebuttal and supporting our work!! | Summary: The paper studies the Wasserstein distributionally robust MDP (WDRMDP) problem, in which the ambiguity set is rectangular and defined via the Wasserstein distance. It is then well known that the optimal policy can be computed by solving Bellman equations, which take the form of distributionally robust linear programs. The authors then focus on solving the Bellman updates. The main idea is to transform the robust linear program into parameterized problems so that the Bellman updates can be done via bisection. The paper then provides new algorithms with polynomial time complexity and claims that their complexities are smaller than those of prior works. Experiments show that their algorithm performs better than some baselines.
Strengths: The problem under investigation holds significant importance. Over the past few decades, robust and distributionally robust Markov Decision Processes (MDPs) have garnered considerable attention. Resolving robust MDPs typically entails addressing computationally expensive minimax problems, necessitating the development of efficient algorithms. In this regard, the paper appears to succeed by presenting novel and efficient algorithms for solving the problem. The proposed algorithms exhibit sound and good time complexities. The experimental results seem convincing, effectively showcasing the efficiency of the new algorithm in comparison to prior algorithms.
Weaknesses: The WDRMDP problem itself is not novel, and the main focus of the paper centers around solving the Bellman equation, which is a well-studied distributionally robust linear program extensively explored in the fields of optimization and mathematical programming. As a result, certain results claimed in the paper, such as Propositions 4.1 and 4.3, appear rather obvious. Additionally, it seems that some of the problems, such as those presented in Equations (11) and (13), bear a strong resemblance to those examined in a previous work cited as [1]. It is possible that similar techniques have been employed to derive the bisection algorithm and determine the running time complexities. In this regard, the technical contributions seem quite incremental.
Some theorems and corollaries are not properly stated. Please see the Question section.
The experiments conducted in the paper do not achieve a satisfactory level of validation. The comparisons made are limited to just two basic baselines, which fail to provide sufficient justification for the introduction of the bisection algorithm. For example, the paper suggests that (5) and (8) can be solved directly using convex optimization, rather than employing bisection. Therefore, it would be necessary to include a comparison between solving (5) and (8) through convex optimization and the proposed approach.
[1] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Fast bellman updates for robust mdps. In International Conference on Machine Learning, pages 1979–1988. PMLR, 2018.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What would happen if you solve (5) and (8) by convex optimization? How does this affect the performance, in terms of both theoretical complexities and numerical experiments?
- In Theorem 4.4 and Corollary 5.2.1, what is the quality of the returned solutions? Does the bisection return an optimal solution or just a near-optimal one? Does the quality of the returned solution depend on $\epsilon_1$ and $\epsilon_2$?
- What are the technical distinctions between the proposed algorithm and the bisection method presented in [1]?
- Can the results be extended to the robust MDP with (s)-rectangular ambiguity sets?
[1] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Fast bellman updates for robust mdps. In International Conference on Machine Learning, pages 1979–1988. PMLR, 2018.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I do not see any negative societal impact from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your encouraging comments and your time to review our paper!
**Weaknesses**
1. The WDRMDP problem itself is not novel...well-studied distributionally robust linear program extensively explored in the fields of optimization and mathematical programming...Propositions 4.1 and 4.3, appear rather obvious...Equations (11) and (13), bear a strong resemblance to those examined in a previous work cited as [i]. It is possible that similar techniques have been employed...the technical contributions seem quite incremental.
We would like to clarify that, although distributionally robust linear programs are extensively explored in the fields of optimization and mathematical programming, our contributions focus on the exploitation of the specific problem structure of distributionally robust Bellman updates. In particular, the proposed decomposition scheme does not work for generic distributionally robust linear programs.
In the same spirit as using first-order methods for solving both robust MDPs (RMDPs) and distributionally robust MDPs (DRMDPs) [ii-iii], while we agree that the proposed algorithm belongs to the class of bisection methods that solve RMDPs (such as [i]), we would like to highlight the differences between our work and [i]. (a) Firstly, we consider DRMDPs, a generalization of RMDPs, so our problem setting differs from that of [i]. (b) Secondly, the resemblance between our bisection step and [i] only occurs in lines 200 - 213, which is less than half a page of this paper. In particular, the second bisection step in the proposed nested bisection method and Proposition 4.3 are novel in this paper; without them, it is not easy to design algorithms whose time complexity is almost linear in the number of empirical kernels, a quantity that does not exist in the RMDPs of [i]. In our opinion, this is one of the key reasons why our proposed algorithm outperforms state-of-the-art algorithms such as [iii]. (c) Finally, the subproblems, such as (11), differ from those arising in RMDPs, and so they require new theoretical results and algorithms for fast computation. Thank you for your question; we will clarify this point in the next version of the manuscript.
2. Some theorems and corollaries are not properly stated. Please see the Question section. Also, the experiments conducted in the paper do not achieve a satisfactory level of validation.
Please find our answers to your questions below.
**Questions**
1. What would happen if you solve (5) and (8) by convex optimization? How does this affect the performance, in terms of both theoretical complexities and numerical experiments?
Sorry for the confusion caused. In our experiments, one of our baselines is Gurobi, which is a state-of-the-art commercial solver for solving convex programs, such as (5) and (8). In Section 6, we provide the comparisons between our proposed algorithms and Gurobi (i.e. convex optimization solver) on the runtimes of solving (5), and our algorithm outperforms Gurobi. In terms of solving (8), below please find our additional results for this rebuttal. As we can see, the results are consistent with our conclusion in Section 6.
- $S=A=N=50$: Gurobi ($q=1$) 722 ms, fast ($q=1$) 6.10 ms; Gurobi ($q=2$) 2959 ms, fast ($q=2$) 32.61 ms
- $S=A=N=70$: Gurobi ($q=1$) 998 ms, fast ($q=1$) 6.26 ms; Gurobi ($q=2$) 5372 ms, fast ($q=2$) 35.21 ms
- $S=A=N=90$: Gurobi ($q=1$) 1292 ms, fast ($q=1$) 6.91 ms; Gurobi ($q=2$) 8861 ms, fast ($q=2$) 35.82 ms
In terms of theoretical complexities, from [iv-v], the time complexities for solving (5) using a convex optimization solver are $\mathcal{O}(N^{4.5}S^{4.5}A^{4.5})$ and $\mathcal{O}(N^{3.5}S^{3.5}A^{3.5}\log \epsilon^{-1})$ for $q=1$ and $q=2$, respectively. The time complexities for solving (8) using a convex optimization solver are $\mathcal{O}(N^{4.5}S^{4.5})$ and $\mathcal{O}(N^{3.5}S^{3.5}\log \epsilon^{-1})$ for $q=1$ and $q=2$, respectively. Therefore, the theoretical complexities of the proposed algorithms, as stated in Section 5, are orders of magnitude lower than those of generic convex optimization solvers.
2. In Theorem 4.4 and Corollary 5.2.1, what is the quality of the returned solutions? Does the bisection return an optimal solution or just a near-optimal one? Does the quality of the returned solution depend on $\epsilon_1$ and $\epsilon_2$?
The proposed algorithm is exact (up to the tolerances $\epsilon_1$ and $\epsilon_2$). In particular, we have proved a new result on the error bound for the Bellman update, which is $\mathcal{O}(\epsilon_1+\epsilon_2)$. Since the complexities of the proposed algorithms are in $\mathcal{O}(\log\epsilon_1^{-1}\log\epsilon_2^{-1})$, one can compute a highly accurate solution with very small $\epsilon_1$ and $\epsilon_2$. We will provide this result in the next version of the manuscript.
3. What are the technical distinctions between the proposed algorithm and the bisection method presented in [i].
Please refer to our response to point 1 under **Weaknesses** above.
4. Can the results be extended to the robust MDP with (s)-rectangular ambiguity sets?
Thank you for your insightful question. The problem of interest, s-rectangular distributionally robust MDPs, is in fact a generalization of s-rectangular robust MDPs. Thus, distributionally robust MDPs can recover (rather than merely extend to) s-rectangular robust MDPs. By setting the radius of the Wasserstein ball to $\infty$ (or a large enough number, such as the diameter of $(\Delta_S)^A$), the ambiguity set contains all possible probability distributions. It is known that the worst-case distribution is a Dirac distribution, and hence the distributionally robust MDP degenerates into a robust MDP.
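As a sketch of this degeneration (our notation, with $f$ denoting the adversary's objective over kernels, $\mathcal{B} _\theta(\nu _s)$ the Wasserstein ball of radius $\theta$, and $\mathcal{P}((\Delta _S)^A)$ the set of all distributions on $(\Delta _S)^A$): letting $\theta\to\infty$,

$$
\inf _{\mu\in\mathcal{B} _\theta(\nu _s)}\mathbb{E} _{\boldsymbol{p}\sim\mu}\big[f(\boldsymbol{p})\big]\;\longrightarrow\;\inf _{\mu\in\mathcal{P}((\Delta _S)^A)}\mathbb{E} _{\boldsymbol{p}\sim\mu}\big[f(\boldsymbol{p})\big]\;=\;\inf _{\boldsymbol{p}\in(\Delta _S)^A}f(\boldsymbol{p}),
$$

since the expectation, being linear in $\mu$, is minimized by a Dirac measure at a minimizer of $f$; the right-hand side is exactly the adversary's problem in an $s$-rectangular robust MDP.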
[i] Fast Bellman updates for robust MDPs. 2018.
[ii] Scalable first-order methods for robust MDPs. 2021.
[iii] First-order methods for Wasserstein distributionally robust MDP. 2021.
[iv] A new polynomial-time algorithm for linear programming. 1984.
[v] Applications of second-order cone programming. 1998. | Summary: The paper focuses on the computational complexity of Wasserstein Distributionally Robust MDPs with Lp norm. By decomposing the calculation of Bellman updates to smaller subproblems, the algorithm can achieve linear complexity in the number of actions and kernels, and quasi-linear complexity in the number of states.
Strengths: The proposed method shows state-of-the-art complexity results.
**DISCLAIMER:**
I have not checked the proof thoroughly and cannot verify the correctness of the theorems.
Weaknesses: * The experiments are carried out on very simple, randomly generated distributionally robust MDP. While this approach provides a controlled setting for their work, it risks oversimplifying the problem and limiting the potential real-world applicability of the proposed methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The paper follows the common rectangularity assumption. However, it is known that the rectangularity assumption would lead to overly conservative policies. Can the proposed method easily extend to non-rectangular settings?
* How will the problem decomposition in section 4 affect the resulting value of the Bellman update? In other words, will there be tradeoffs between the computational complexity and the accuracy of the optimal value introduced by the proposed decomposition?
* Considering that the following paper also focuses on the model-based distributionally robust RL setting, it would be interesting to also compare with it.
* Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022).
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation of the proposed method is not fully discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comments and your time to review our paper!
**Weaknesses**
1. The experiments are carried out on very simple, randomly generated distributionally robust MDP. While this approach provides a controlled setting for their work, it risks oversimplifying the problem and limiting the potential real-world applicability of the proposed methods.
We are sorry for being unclear on the purpose of our experimental setup. This paper focuses on algorithmic development and computational complexity for solving distributionally robust MDPs. Therefore, randomly generated distributionally robust MDPs are well suited to our purpose, as additional problem-specific structures should not be considered for the sake of generality. Moreover, these randomly generated distributionally robust MDPs are more computationally challenging than stylized problems because their transition kernels are dense, and so they are used to test the performance of the various algorithms in our experiments.
**Questions**
1. The paper follows the common rectangularity assumption. However, it is known that the rectangularity assumption would lead to overly conservative policies. Can the proposed method easily extend to non-rectangular settings?
Thank you for raising this interesting question. Solving the general robust MDP problem with nonrectangular ambiguity sets is strongly NP-hard and intractable [ii]. Moreover, Bellman equations for nonrectangular robust MDPs do not exist in general, so our approach could not be extended to non-rectangular settings. However, the rectangularity assumption can always be satisfied via outer approximation [i]. Therefore, most research focuses on rectangular robust MDPs [i-iii], such as $s$-rectangular and $s,a$-rectangular robust MDPs, and so does the research in distributionally robust MDPs [iv-v]. In this paper, we adopt the $s$-rectangular setting, which is less conservative than the $s,a$-rectangular case.
2. How will the problem decomposition in section 4 affect the resulting value of the Bellman update? In other words, will there be tradeoffs between the computational complexity and the accuracy of the optimal value introduced by the proposed decomposition?
We are sorry for the confusion caused. By exploiting the specific problem structure of the optimization problem for the distributionally robust Bellman update, the proposed algorithm is exact (up to user-specified tolerances on the accuracy). Therefore, we can compute the Bellman update exactly up to an arbitrarily small tolerance. We will clarify this point in the next version of the manuscript.
3. Considering that the following paper also focuses on the model-based distributionally robust RL setting. It would be interesting to also compare with it.
Shi, Laixi, and Yuejie Chi. "Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity." arXiv preprint arXiv:2208.05767 (2022).
Thank you for bringing up this interesting reference paper. We would like to clarify that our paper focuses on the computation and time complexity of distributionally robust MDPs, whereas the reference paper focuses on learning and sample complexity. Therefore, the purposes of the two papers are complementary. Moreover, by our definition in Section 1 (which is also adopted in [i-v]), this reference paper considers the setting of robust MDPs rather than distributionally robust MDPs; thus, the problem settings of the two papers are different. We are sorry for the confusion caused, and we will cite this paper and clarify this point in the next version of the manuscript.
**Limitations**
1. The limitation of the proposed method is not fully discussed in the paper.
We apologize for missing this discussion. Our method does not consider the settings of continuous state and action spaces. Besides, we focus on discounted MDPs, and it is not obvious that our proposed algorithms could be extended to the case of optimizing the average expected return. We will provide this discussion in the next version of the manuscript.
[i] A. Nilim and L. El Ghaoui, Robust control of Markov decision processes with uncertain transition matrices, 2005.
[ii] W. Wiesemann, D. Kuhn, and B. Rustem, Robust Markov decision processes, 2013.
[iii] G. N. Iyengar, Robust dynamic programming, 2005.
[iv] I. Yang, A convex optimization approach to distributionally robust Markov decision processes with Wasserstein distance, 2017.
[v] J. Grand-Clement and C. Kroer, First-order methods for Wasserstein distributionally robust MDP, 2021.
[vi] L. Shi and Y. Chi, Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity, 2022.
---
Rebuttal Comment 1.1:
Title: Re
Comment: Thanks for the clarification. I raised my rating to 5.
---
Reply to Comment 1.1.1:
Title: Thank you!!
Comment: Thank you very much for reading our rebuttal, recognizing our work, and updating our score! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
GeoTMI: Predicting Quantum Chemical Property with Easy-to-Obtain Geometry via Positional Denoising | Accept (poster) | Summary: Edit: Updating score from 5 to 6 based on the discussions.
This work presents a variation of denoising autoencoder type model that uses an easy-to-obtain (corrupted) input geometry to predict properties of molecules. The assumption is that the corrupted geometry can be denoised to the correct geometry, and then used for predicting the properties of interest during training. At inference only the corrupted geometry is used, thus alleviating the need for more expensive, correct geometries. Experiments on three datasets with different choices of corrupt/correct geometries show that this approach has benefits over methods that only use the corrupt geometry for predicting properties.
Strengths: * Formulating the denoising autoencoder with property prediction using a mutual information paradigm between $\tilde Z, Z, Y$ provides structure to models that use denoising AE for property prediction.
* The final objective converges to standard objectives that are commonly used in GNN literature; the method itself is easy to integrate into other GNN frameworks and as the authors claim it is _model agnostic_.
* Experiments are comprehensive on three datasets. Results show improvements compared to methods that only use the correct geometry.
Weaknesses: * **Relation between $X$ and $\tilde X$**: The main assumption about the relation between $X$ and $\tilde X$ in this work is that $\tilde X$ is a _corrupted_ version of $X$, which I understand to be that $\tilde X$ contains less information than $X$. However, this empirical distinction is one of the main weaknesses of this work. What is the exact nature of relation between the two variables? Is the corruption an information destroying process?
Without a clear relation between these two variables, the complexity of predicting $Y$ from $X$ or $\tilde X$ is difficult to appreciate. The studies in Appendix B.1, where interpolated geometries $(X+\tilde X)/2$ are used to predict $Y$, assume that there is a linear relation between $X$ and $\tilde X$ in the data space. So training a neural network to predict $Y$ from $\tilde X$ via $X$ may be a simpler task than is claimed in this work. Does this not simply become a supervised task of predicting $Y$ from $\tilde X$ with an auxiliary task of denoising to $X$ during training?
* **Model converges to denoising AE cascaded with property predictor**: Continuing with the point above, obtaining $X$ from $\tilde X$ is the classical denoising AE, and then cascading a property predictor at the output of the decoder. The authors seem to make distinctions about this setting which are unclear. The objective function in Sec. 3.3 basically converges to two property decoders (one for $\tilde Z$, one for $Z$) and a reconstruction term.
* **Selecting $\tilde X$**: As a general method, what types of $\tilde X$ and $Y$ can be used in this setting? In each of the three tasks, the selection of $\tilde X$ is based on assumptions about the complexity of these properties that are specific to each dataset. How would this generalize to other datasets/tasks?
* **Rotation and translation equivariance**: The authors state:
>> GeoTMI ensures the equivariance because it directly updates the position vector
There are no further elaborations about these claims. And how does directly updating the position vector ensure rotation and translation equivariances? Is the coordinate system also being updated? How are pairwise relations maintained when position vector of individual atoms are being updated?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See Weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors have discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1 and 3] “Relation between $X$, $\tilde{X}$, and $Y$” and “Selecting $\tilde{X}$”**
Please refer to the second response in “Response to all reviewers”.
---
**[Weakness 1 and 2] Distinction of GeoTMI from "the supervised task of predicting $Y$ from $\tilde{X}$ with an auxiliary task of denoising to $X$ during training" and "denoising AE cascaded with property predictor"**
We genuinely appreciate your insightful observation and question regarding the comparison between GeoTMI and the suggested method in the context of supervised prediction from $\tilde{X}$ with an auxiliary task involving denoising to $X$.
Indeed, the distinction between GeoTMI and the suggested method lies in their fundamental objectives. As a reviewer rightly pointed out, while the supervised prediction from $\tilde{X}$ through $X$ can be related to an auxiliary task of denoising, it's important to recognize that GeoTMI's core objective diverges from this.
GeoTMI's objective is intuitively designed to address the specific inductive bias arising from the inherent Markov chain relationship among $\tilde{X}$, $X$, and $Y$, as highlighted in lines 127-128. This bias asserts that the information in $X$ is indispensable for predicting $Y$ when utilizing $\tilde{X}$ for predictions ($I(\tilde{X};Y\vert{X})=0$).
To better predict $Y$ from $\tilde{Z}$ while preserving the corresponding Markov property in the representation space, we introduced the term $I(\tilde{Z} _\theta;Z _\theta;Y)$ as our objective (see section 3.2).
The maximization of $I(\tilde{Z} _\theta;Z _\theta;Y)$ incorporates the implicit consideration of $I(\tilde{Z} _\theta;Y\vert{Z} _\theta)$, as elaborated below:
$$
\begin{aligned}
\arg\max _\theta{I(\tilde{Z} _\theta;Z _\theta;Y)}&=\arg\max _\theta\big(I(\tilde{Z} _\theta;Y)-I(\tilde{Z} _\theta;Y\vert{Z} _\theta)\big)\newline
&=\arg\max _\theta\big(H(Y)-H(Y\vert\tilde{Z} _\theta)-I(\tilde{Z} _\theta;Y\vert{Z} _\theta)\big) \newline
&=\arg\max _\theta\big(-H(Y\vert\tilde{Z} _\theta)-I(\tilde{Z} _\theta;Y\vert{Z} _\theta)\big) \newline
&=\arg\min _\theta\big(H(Y\vert\tilde{Z} _\theta)+I(\tilde{Z} _\theta;Y\vert{Z} _\theta)\big)
\end{aligned}
$$
On the contrary, the objective of the suggested method is defined as follows, and it does not guarantee the minimization of $I(\tilde{Z} _{\theta'};Y\vert{Z} _{\theta'})$:
$$
\arg\min _{\theta'}\big(H(X\vert\tilde{Z} _{\theta'})+H(Y\vert\tilde{Z} _{\theta'})\big)
$$
Similarly, the other suggested method, a denoising AE cascaded with a property predictor, also does not align with our objectives, for the same reason.
It's important to note that although computing the conditional mutual information $I(\tilde{Z} _\theta;Y\vert{Z} _\theta)$ is generally intractable, recent research has explored the use of neural estimators to approximate it [1]. To the best of our knowledge, these approaches maximize a lower bound $\hat{I} _\psi$ of the CMI, which translates our problem setting into a min-max problem, as depicted below. However, this approach might not be easily applicable to various tasks and models, which is why we chose the alternative objective demonstrated in section 3.3. The fact that our alternative objective might provide a looser estimation is acknowledged as one of our limitations.
$$
\begin{aligned}
\min _{\theta}\left(H(Y\vert \tilde{Z} _\theta) + I(\tilde{Z} _\theta;Y\vert Z _\theta)\right)\ge\min _{\theta}\left(H(Y\vert \tilde{Z} _\theta) + \max _{\psi} \hat{I} _\psi\right)
\end{aligned}
$$
We hope this provides a clearer understanding of how GeoTMI's objective addresses the specific challenges in capturing the shared information among $\tilde{Z} _\theta$, $Z _\theta$, and $Y$, distinct from the suggested methods.
[1] S. Molavipour, G. Bassi, and M. Skoglund, in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2020).
---
**[Weakness 4] Rotation and translation equivariance**
First, we apologize for any confusion caused by the statement "GeoTMI ensures the equivariance because it directly updates the position vector". Thanks to your valuable feedback, we became aware that the description lacked detail. The intended meaning of the statement is that GeoTMI employs well-established position vector update methods that ensure both rotation and translation equivariance.
In this study, we used the position vector update method suggested by Satorras et al. [1]. The position vector update methods used for each task are described on lines 218, 271, and 295. The rotation and translation equivariance of each position update method is proved in "Appendix A. Equivariance Proof" of the previous work [1]. In brief, the method updates the 3D Cartesian coordinates of each atom using an update vector, which is decomposed into a magnitude and a direction. The former is invariant, while the latter is equivariant with respect to rotation and translation of the molecular geometry. Thus, the method guarantees the equivariance of the updated molecular geometry.
[1] V. G. Satorras, E. Hoogeboom, and M. Welling, in *International Conference on Machine Learning* (PMLR, 2021), pp. 9323-9332.
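As an illustrative sketch (not the authors' implementation), the equivariance of this kind of update rule can be verified on a toy version, where the invariant weight `phi` is a hypothetical stand-in for the learned message network:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_positions(x):
    """Toy version of the equivariant position update of Satorras et al.:
    x_i' = x_i + sum_{j != i} (x_i - x_j) * phi(d_ij^2), where phi is an
    invariant scalar function of the squared distance (a stand-in for
    the learned message network)."""
    diff = x[:, None, :] - x[None, :, :]       # pairwise direction terms, (N, N, 3)
    d2 = (diff ** 2).sum(-1)                   # invariant squared distances, (N, N)
    phi = np.exp(-d2)                          # hypothetical invariant weights
    np.fill_diagonal(phi, 0.0)                 # no self-interaction
    return x + (diff * phi[..., None]).sum(axis=1)

# random rotation (QR of a Gaussian matrix) and translation
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)
x = rng.normal(size=(5, 3))

# equivariance: updating the transformed geometry equals transforming the update
lhs = update_positions(x @ Q.T + t)
rhs = update_positions(x) @ Q.T + t
assert np.allclose(lhs, rhs)
```

The translation cancels inside the pairwise differences and the orthogonal map commutes with the direction terms, which is exactly the magnitude/direction decomposition described above.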
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for clarifications in their rebuttal. One minor concern/clarification still remains:
While I now see that the Markov assumption made in this work is different from a denoising AE + predictor, I would like the authors to speculate about the generalization of this assumption to other tasks. Are there scenarios/applications where the relation between $\tilde{X},X,Y$ cannot be reduced to a Markov assumption? In those scenarios, how would the GeoTMI model work?
---
Reply to Comment 1.1.1:
Title: Response to reviewer's comment
Comment: We sincerely appreciate the thoughtful question offered by the reviewer. Before addressing the additional question, we would first like to reemphasize that our primary goal was to improve the prediction of high-level quantum chemical properties from easy-to-obtain geometries. To the best of our knowledge, in the field of quantum chemistry, the calculation process for certain properties typically involves geometry optimization. Consequently, the Markovian assumption can be generally consistent with the inherent nature of the problem.
In scenarios where the relationship between data is non-Markovian, we could hypothetically consider both $\tilde{X}$ and $X$ as independent data $X_1$ and $X_2$ associated with property $Y$. Non-Markovianity implies that, given their respective counterparts, $X_1$ or $X_2$ is conditionally dependent on $Y$. This violates our Proposition A.1, leading to $I(X_1;Y\vert X_2)\ne0$, and also prohibits meaningful comparison between conditional entropies $H(Y\vert X_1)$ and $H(Y\vert X_2)$. Essentially, this means that predicting $Y$ necessitates information from both $X_1$ and $X_2$, not just one.
Although we have not considered non-Markovian cases within the processes of obtaining quantum chemical properties in our manuscript, we have additionally thought about several scenarios that cannot be reduced to a Markov assumption, as suggested by the reviewer. For example, in reaction barrier height problems, we can redefine $X_1$ and $X_2$ as $X^R$ and $X^{TS}$ instead of $(X^R,X^P)$ and $(X^R,X^{TS})$, respectively. The reaction barrier height fundamentally depends on both $X^R$ and $X^{TS}$, which leads to a non-Markovian relationship $\left(p(\mathrm{BH}\vert{X^R})\ne p(\mathrm{BH}\vert{X^R},X^{TS})\right)$. Likewise, in protein-ligand systems, predicting $\Delta\Delta{G}$ by perturbing the ligand type while keeping the same protein is another example [1]. If we denote by $X_1$ and $X_2$ complexes with different ligands interacting with the same protein, then the $\Delta\Delta{G}$ between these complexes also depends on both $X_1$ and $X_2$, reflecting non-Markovianity.
In contrast to the core goal of GeoTMI (implicitly modeling the latent space to satisfy $I(X_1;Y\vert{X}_2)=0$), the aforementioned scenarios involve situations where $X_1$ and $X_2$ each possess unique information that is necessary for predicting $Y$. Consequently, while it may be possible to apply GeoTMI in non-Markovian settings, it does not theoretically guarantee optimal performance, because the relationship between $X_1$ and $X_2$ is distinct from that of our original problem setting. It is also noteworthy that the scenario of predicting $\Delta\Delta{G}$ does not involve easy-to-obtain geometry, implying that, unlike quantum chemical property prediction, scenarios with non-Markovianity do not have to involve easy-to-obtain geometry.
[1] Wang, L., Wu, Y., Deng, Y., Kim, B., Pierce, L., Krilov, G., ... & Abel, R. (2015). Accurate and reliable prediction of relative ligand binding potency in prospective drug discovery by way of a modern free-energy calculation protocol and force field. Journal of the American Chemical Society, 137(7), 2695-2703. | Summary: This paper proposes a novel training framework called GeoTMI. This framework uses a denoising process to accurately predict quantum chemical properties for molecules using MMFF geometries that are much easier to obtain than DFT-optimized geometries.
Strengths: 1. The proposed method is interesting, and the derivation well supports the training objective.
2. The experimental results show that GeoTMI achieves very good performance over multiple molecular tasks. Although expensive DFT-optimized geometries always produce the best property prediction as expected, GeoTMI can use easy-to-obtain geometries to improve the prediction when DFT-geometries are lacking. It’s practically meaningful.
Weaknesses: Although Table 3 has shown that “Equiformer + Noisy Nodes + GeoTMI” achieves better performance than “Equiformer + Noisy Nodes”, the direct comparison between Noisy Nodes and GeoTMI is missing. It would be better if a direct comparison with other denoising-based methods is included.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: PCQM4Mv2 [1] is a quantum chemistry dataset in which DFT geometries are provided for the training set but not for the validation and testing sets. This is a dataset/task very suitable for applying the proposed GeoTMI. It would be good if experiments on this dataset are also included.
[1]. Hu, Weihua, et al. "Ogb-lsc: A large-scale challenge for machine learning on graphs." arXiv preprint arXiv:2103.09430 (2021).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1] Although Table 3 has shown that “Equiformer + Noisy Nodes + GeoTMI” achieves better performance than “Equiformer + Noisy Nodes”, the direct comparison between Noisy Nodes and GeoTMI is missing. It would be better if a direct comparison with other denoising-based methods is included.**
First, we would like to inform you that we have revised Table 3 by replacing “Equiformer + Noisy Nodes + GeoTMI” with “Equiformer + GeoTMI”. Please refer to the first response in “Response to all reviewers”.
We sincerely appreciate the suggestion to include a comparison with another denoising model to strengthen the evaluation of GeoTMI's performance. Unfortunately, to the best of our knowledge, all denoising works on the OC20 task have used the same node-level auxiliary loss as proposed by Noisy Nodes. We are open to any further questions or suggestions you may have.
---
**[Question 1]** **PCQM4Mv2 is a quantum chemistry dataset in which DFT geometries are provided for the training set but not for the validation and testing sets. This is a dataset/task very suitable for applying the proposed GeoTMI. It would be good if experiments on this dataset are also included.**
Thank you for the suggestion regarding the PCQM4Mv2 dataset. We agree that including experiments on this dataset would provide valuable insight into the performance and effectiveness of the proposed GeoTMI method. However, the main goal of the PCQM4Mv2 dataset is to achieve accurate predictions in situations where the 3D DFT geometry is not available. To solve this problem, many studies have designed various model architectures that use 2D graphs as input and reported the optimal hyperparameters. Since GeoTMI is designed for 3D GNNs, we could not find a suitable model and hyperparameters for the PCQM4Mv2 dataset, unlike in our previously reported experiments (QM9, Barrier Heights, OC20). In this light, we conducted additional experiments using SchNet with its reported hyperparameters [1] to verify the effectiveness of GeoTMI on the PCQM4Mv2 dataset.
The validation set provided by PCQM4Mv2 was used as the test set in our experiment. The training set provided by PCQM4Mv2 was randomly split 6:1 to build our training and validation sets. We used the MMFF-optimized geometries as $\tilde{X}$. As a result, we achieved a performance improvement of 7.0973% on our test set (no DFT geometry) when using GeoTMI. We expect even better improvements on the PCQM4Mv2 dataset if the model and hyperparameters are chosen for optimal performance.
| Methods | Input type (Train / Infer.) | GAP (eV) |
| --- | --- | --- |
| SchNet | $\tilde{X}$/$\tilde{X}$ | 0.1254 |
| SchNet + GeoTMI | $X$,$\tilde{X}$/$\tilde{X}$ | 0.1165 |
| Improvements by GeoTMI (%) | | 7.0973 |
[1] K. T. Schütt, P. Kessel, M. Gastegger, K. A. Nicoli, A. Tkatchenko, and K. R. Muller, *J. Chem. Theory Comput.* **15**(1), 448-455 (2018).
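For transparency, the 7.0973% figure follows directly from the two MAEs in the table above (a trivial arithmetic check, shown only to make the metric explicit):

```python
schnet_mae = 0.1254  # SchNet, GAP MAE (eV), from the table
geotmi_mae = 0.1165  # SchNet + GeoTMI, GAP MAE (eV), from the table

# relative improvement over the SchNet baseline, in percent
improvement = (schnet_mae - geotmi_mae) / schnet_mae * 100
assert abs(improvement - 7.0973) < 1e-3
```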
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response; I'd like to maintain my score. | Summary: The authors propose a novel method to help solve the problem of 3D positional noise in quantum chemical properties. The proposed method is like a plug-in for other 3D GNN methods to improve their performance on defective 3D positional data. The numerical results show that the model can help the GNN models to perform better on corrupted data.
Strengths: 1. The authors propose a novel training framework based on maximizing the mutual information between correct and corrupted geometries to make accurate predictions on noisy positional information. Involving mutual information in this problem is novel and interesting.
2. This paper has rich experimental results. From the numerical experiments, we can see that the proposed model can improve the basic GNN models on the property prediction tasks.
Weaknesses: 1. The main concern is that the proposed GeoTMI will be less effective with more powerful base models. On QM9, which I'm more familiar with, GeoTMI performs best on SchNet, then EGNN, and worst on DimeNet++ (an average of less than 10% improvement). Since there are more powerful molecular property prediction models such as [1], [2], [3], etc., I wonder whether GeoTMI will still be useful on these models.
2. The motivation of this paper is a little strange. Molecules are not like other graph problems, which can easily encounter OOD issues. Using DFT and MMFF to obtain the geometry information might lead to different molecular configurations and different energies. I'm not sure whether using MMFF to predict the DFT properties is admissible. Moreover, the DFT geometry information is not that hard to obtain, so I'm worried that the proposed problem might be a problem carefully tailored by computer scientists with no meaning to the quantum chemistry community.
[1] Spherical message passing for 3D graph networks
[2] ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs
[3] ViSNet: an equivariant geometry-enhanced graph neural network with vector-scalar interactive message passing for molecules
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I wonder what the performance will be if the authors give the base prediction models the same input type as GeoTMI, i.e., $X$ and $\tilde{X}$ for training and $\tilde{X}$ for inference. By giving the base models more input data, even if it might be conflicting, I think this will help their performance on the corrupted data.
I might be misunderstanding something about the motivation. So further discussion is welcomed and I'm positive about changing the scores if I'm proved to be wrong.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do address three limitations in the paper. However, I think the most important limitation is the motivation and usefulness of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1] From QM9 results, I wonder whether the GeoTMI will be still useful in more powerful molecular property prediction models.**
We appreciate your concern about its effectiveness on more powerful models. It is important to assess GeoTMI's performance on state-of-the-art models to better understand its capabilities.
Before exploring GeoTMI's performance on other GNNs, we would like to highlight two important points. First, while this evaluation was conducted on the OC20 task, we tested GeoTMI on the Equiformer, which is known as a more powerful molecular property prediction model than SchNet, EGNN, and DimeNet++ for the QM9 task. We found a significant improvement when GeoTMI was used in conjunction with Equiformer.
Second, we acknowledge the relatively lower improvement observed in DimeNet++. However, this may be due to the limited hyperparameter tuning. As shown in Table 10 in the supplementary material, the search space for the hyperparameter λ was smaller in DimeNet++ compared to SchNet and EGNN. We additionally tested DimeNet++'s performance on $R^2$ and GAP with λ=1.0, which resulted in improvements of 20.4% and 5.68%, with MAEs of 4.63 $\mathrm{Bohr}^2$ and 53.1 meV, respectively. We believe these results point to potential improvements through optimal λ search.
---
**[Weakness 2] The motivation is a little strange.**
We would like to emphasize that our work is motivated by the need to address the high computational cost of obtaining geometry in quantum calculations. For example, in previous approaches to the QM9 task, 3D GNNs are trained to predict DFT properties from DFT geometry. However, in real-world applications, the DFT geometry is not readily available as input to 3D GNNs. If the input geometry has been obtained by DFT-based geometry optimization, the DFT properties usually already exist or can be obtained very easily. In this respect, the goal of this study is to point out the lack of practicality of 3D GNNs and to improve them. Our solution is to use geometries that can be obtained relatively cheaply, e.g. MMFF-based geometries in the QM9 task. It is important to note, however, that it is not limited to the relationship between DFT and MMFF, and is broadly applicable to similar tasks given $\tilde{X}$, $X$, and $Y$ (see the second response in "Response to all reviewers").
Regarding the comment “DFT geometry information is not that hard to obtain”, we would like to note the following. Within DFT, there is a wide range of theory levels with different computational costs. For example, widely used hybrid functionals like B3LYP have a time complexity of $\mathcal{O}(n^4)$. Optimization using hybrid functionals can be time-consuming, e.g., taking over 150 days to obtain optimized geometries on the Molecule3D test set [1]. Furthermore, the cost increases dramatically with the size of the system.
Regarding the comment “whether using MMFF to predict the DFT properties is admissible”, we would like to note the following. DFT-based geometry optimization is followed by the determination of DFT properties. As a practical approach, MMFF-based optimization is often used as a preliminary step to obtain a good starting point and thus mitigate the high cost of DFT-based optimization [2-4]. Given this process, it seems admissible to use ML to directly predict DFT properties from MMFF geometries. It is also worth noting that there is extensive ML research in the quantum chemistry community on predicting high-level quantum chemical properties from low-level computational results [5-9]. Moreover, as mentioned in the "Related Works" section, Lu et al. [5] made the same attempt as us to predict QM9 properties from MMFF geometries.
Our goal is to contribute to the advancement of both the computational chemistry and ML communities. We hope our explanation clarifies the motivation and significance of our work. We appreciate your thoughtful review and are open to any further questions or suggestions you may have.
[1] Z. Xu, et al., arXiv preprint arXiv:2110.01717 (2021).
[2] M. Nakata and T. Shimazaki, *J. Chem. Inf. Model.* **57**(6), 1300-1308 (2017).
[3] C. A. Grambow, L. Pattanaik, and W. H. Green, *Sci. Data* **7**(1), 137 (2020).
[4] S. Axelrod and R. Gomez-Bombarelli, *Sci. Data* **9**(1), 185 (2022).
[5] J. Lu, C. Wang, and Y. Zhang, *J. Chem. Theory Comput.* **15**(7), 4113-4121 (2019).
[6] X. García-Andrade, P. García Tahoces, J. Pérez-Ríos, and E. Martínez Núñez, *J. Phys. Chem. A* **127**(10), 2274-2283 (2023).
[7] B. Savoie, Q. Zhao, D. Anstine, and O. Isayev, *Chem. Sci.* (2023).
[8] R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. Von Lilienfeld, *J. Chem. Theory Comput.* **11**(5), 2087-2096 (2015).
[9] K. Atz, C. Isert, M. N. Böcker, J. Jiménez-Luna, and G. Schneider, *Phys. Chem. Chem. Phys.* **24**(18), 10775-10783 (2022).
---
**[Question 1] I wonder what the performance will be if the authors give the basic prediction models the same input type as GeoTMI, which is $X$ and $\tilde{X}$ as training and $\tilde{X}$ as infer.**
Thanks for the suggestion to validate the effectiveness of GeoTMI. We tried this approach during development. However, the accuracy improvement it yielded with EGNN on $U_0$ was -1.7% (vs. a 16.7% improvement with GeoTMI). This can be explained by the fact that multi-task learning tends to be less effective when certain tasks hold greater importance or relevance than others (predicting $Y$ from $X$ is much easier than predicting $Y$ from $\tilde{X}$). Also, the training objective is no longer maximizing the lower bound of the three-term mutual information $I(\tilde{Z}; Z; Y)$. Thus, a model trained this way cannot learn a proper representation $\tilde{Z}$ for predicting $Y$ by aligning it with $Z$, which contains richer information about $Y$.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying the concerns and it seems that I have misunderstood the motivation of this paper. I will change the score accordingly. | Summary: The paper proposes an effective framework, GeoTMI, to train 3D GNNs for quantum property prediction. Specifically, GeoTMI involves the denoising process during the learning of property prediction tasks by maximizing a three-term mutual information among the noisy representation, original representation, and prediction target. The framework effectively improves the inference performance of 3D GNNs when only easy-to-obtain (corrupted) geometry is provided during inference.
Strengths: + Comprehensive discussion with related work.
+ The idea is interesting. The major bottleneck of quantum-related tasks is the gap between the low-cost geometry and precise geometry computed by DFT algorithms. It seems that this work is pretty effective in bridging the gap.
+ The proposed approach is theoretically grounded and can provide insight for general geometry representation learning.
+ Multiple GNN models suggest generalizable effectiveness.
Weaknesses: - Statement that denoising works focusing on prediction from X is not very true. It only applies to augmentation-based, but not denoising auto-encoders. The approaches also aim to perform downstream tasks given noisy/corrupted data.
- Although the work eliminates the requirement of X during inference, the precise geometry from the same dataset is still needed during training. This somewhat limits the applicable scenarios and hence the impact. The authors may want to also validate the denoising effect across datasets and tasks, since the geometry is common. For example, is it possible to use a more general task (i.e., Y) and (X, \tilde{X}) pairs to pre-train the encoder, and fine-tune it on a different task but without X? If that is the case, the pre-trained encoder can be used as a foundation model with a much higher impact.
- The experimental results suggest the effectiveness of GeoTMI. However, since you mention noisy node, it may be more convincing to also compare “model+GeoTMI” with “model+noisy node”. To my knowledge, noisy node is also model agnostic.
- Could the author include some theoretical comparisons on the connection/differences between GeoTMI and related approaches to further demonstrate its supremacy? For example, would GeoTMI be any tighter bound to the three-term MI than other frameworks?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The two assumptions in the problem setup intuitively make sense to me, but could you clarify the term “higher quality of information”? Does it suggest I(X, Y)>=I(\tilde{X}, Y)?
* Could you clarify how you incorporate Noisy Nodes with GeoTMI? Is it simply an additive term in the loss, or is there additional insight?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1] Statement that denoising works focusing on prediction from X is not very true.**
We acknowledge the point that not all denoising approaches may exclusively focus on predicting $X$. However, in the context of predicting quantum chemical properties, we intended to highlight the prevalent use of denoising approaches to enhance prediction accuracy from $X$. If the reviewer could provide examples of the related studies, it would greatly assist us in refining our manuscript for better clarity.
---
**[Weakness 2] Necessity of the precise geometry in training**
From the perspective of computational chemistry, the quantum chemical property $Y$ is solely obtained from the molecular geometry $X$ ($Y$ cannot be obtained without $X$). Thus, our goal is to develop a training framework that makes accurate $Y$ predictions from a relatively easy-to-obtain molecular geometry, $\tilde{X}$, using $X$ in the training phase but not in the inference phase.
We appreciate the reviewer's concern regarding the requirement of precise geometry during training and its potential impact on applicability. Exploring ways to mitigate this limitation would be highly beneficial. Thus, we verified the usefulness of an encoder pre-trained by GeoTMI. To investigate the feasibility of GeoTMI as a pre-training method, we conducted experiments on the QM9 dataset by splitting it into two halves.
For the pre-training task, the EGNN model was trained by GeoTMI and auxiliary losses for all properties of Table 1 except $\alpha$. In this case, the training objective is $I(Z;\tilde{Z}; Y)$ where $Y$ is a vector of the properties. For the fine-tuning task, the last layer of the pre-trained model was modified to predict the $\alpha$ property, and the task was performed without $X$. The same hyperparameters introduced in Appendix C.1 were used for pre-training. For comparison, we also trained the model to predict only $\alpha$ from $\tilde{X}$ without any pre-training task, and the accuracy of the model in terms of MAE was 0.178 $\mathrm{Bohr}^3$. The accuracy of the model with pre-training was 0.141 $\mathrm{Bohr}^3$. Consequently, the accuracy improvement by the pre-trained encoder was 20.7%, showing the potential of GeoTMI on a different task but without $X$.
---
**[Weakness 3], [Question 2] Relationship between Noisy Nodes and GeoTMI**
We apologize for any confusion caused by the phrase "Equiformer + Noisy Nodes + GeoTMI". Please refer to the first response in “Response to all reviewers”.
---
**[Weakness 4] Theoretical comparisons between GeoTMI and related approaches**
To the best of our knowledge, no existing works have focused on maximizing three-term mutual information (MI).
In our problem setting, we need to reduce the conditional mutual information (CMI) $I(\tilde{Z};Y\vert{Z})$ to satisfy the Markov property, which originates from the data relationship. Recent research has concentrated on estimating two-term MI, utilizing neural-network-parameterized lower bounds for estimation [1]. In particular, Molavipour et al. [2] focused on estimating CMI through tight lower bounds based on the Donsker-Varadhan theorem and the universal approximation theorem. We could have employed a similar CMI estimator to fulfill our primary goal as outlined in Equation 1 of our manuscript, namely precise estimation of the CMI. However, this approach may be less practical for CMI minimization due to the intricacies of solving a minimax problem. Alternatively, our proposed objective provides a lower bound for the three-term MI. Although strict tightness is not guaranteed, it offers the advantage of circumventing the need for a complex minimax training process.
[1] M.I. Belghazi et al., in International Conference on Machine Learning (PMLR, 2018).
[2] S. Molavipour, G. Bassi, and M. Skoglund, in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2020).
---
**[Question 1] The meaning of “higher quality of information”**
As the reviewer suggests, the term *"higher quality of information"* implies that $I(X;Y) \ge I(\tilde{X};Y)$. To see detailed descriptions, please refer to the second response in “Response to all reviewers”. | Rebuttal 1:
Rebuttal: $\Large{\text{Response to all reviewers}}$
We extend our sincere appreciation to the reviewers for your invaluable insights and constructive feedback, which have significantly enhanced the quality and rigor of our manuscript. Your feedback will undoubtedly contribute to the refinement of our research. Here we reply to two questions shared by several reviewers.
**1. Response to questions about comparison with Noisy Nodes in Table 3**
First of all, we would like to apologize for any confusion caused by the phrase "Equiformer + Noisy Nodes + GeoTMI". We note that Noisy Nodes performs data augmentation in the OC20 task by using multiple geometries, interpolated between the initial structure (IS) and the relaxed structure (RS), as $\tilde{X}$. To ensure a fair comparison with Noisy Nodes, we also used the same data augmentation technique. In this context, we expressed our results as "Equiformer + Noisy Nodes + GeoTMI" in the previous manuscript.
To implement Equiformer with GeoTMI, we borrowed heavily from the original Equiformer paper [1]. First, we used the same Noisy Nodes data augmentation. Second, we used a similar node-level auxiliary loss for the IS2RS task. The auxiliary loss predicts the node-level difference between the target positions and the noisy inputs, which corresponds to the denoising loss $\mathcal{L}_d$ in our notation.
The differences between Equiformer with GeoTMI and Equiformer with Noisy Nodes are as follows. The noisy positions are explicitly updated by passing through the GNN layers. The objective here is to compute the difference between the updated noisy positions and linearly interpolated target positions at each GNN layer, which we refer to as the gradual denoising loss in our paper. In addition, we incorporated an auxiliary task that predicts the relaxed energy (RE) from the relaxed structure (RS), denoted as $\mathcal{L}_{\text{y, correct}}$, which ultimately facilitates the training process of maximizing the three-term mutual information (TMI).
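As a hypothetical sketch of such a gradual denoising loss (the exact interpolation schedule, layer weighting, and loss form here are our illustrative assumptions, not the paper's code), per-layer position outputs can be regressed onto targets linearly interpolated between the noisy input and the relaxed structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def gradual_denoising_loss(layer_positions, x_noisy, x_target):
    """Illustrative gradual denoising loss: the positions produced after
    each GNN layer are regressed onto targets linearly interpolated
    between the noisy input and the relaxed (target) structure."""
    L = len(layer_positions)
    loss = 0.0
    for l, x_l in enumerate(layer_positions, start=1):
        alpha = l / L                                    # assumed interpolation weight
        target_l = (1 - alpha) * x_noisy + alpha * x_target
        loss += ((x_l - target_l) ** 2).mean()           # per-layer MSE
    return loss / L

x_noisy = rng.normal(size=(6, 3))
x_target = rng.normal(size=(6, 3))

# if every layer output exactly matched its interpolated target, the loss is zero
L = 4
perfect = [(1 - l / L) * x_noisy + (l / L) * x_target for l in range(1, L + 1)]
assert gradual_denoising_loss(perfect, x_noisy, x_target) < 1e-12
```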
In this regard, after careful consideration, we realize that the phrase "Equiformer + Noisy Nodes + GeoTMI" may be misleading. While we adopted the data augmentation technique from Noisy Nodes, it is important to note that the key feature of GeoTMI is the use of a TMI approach. For clarity, we will revise Table 3 and the descriptions to make the following correction: the correct presentation of our results should be "Equiformer + GeoTMI", with the explicit clarification that we used the same data augmentation as in Noisy Nodes. We hope the revision will provide readers with more convincing and clear insights.
[1] Y. L. Liao and T. Smidt, arXiv preprint arXiv:2206.11990 (2022)
---
**2. Response to questions about the theoretical background of GeoTMI**
As stated in lines 127-128, we consider the relation of $\tilde{X}\to{X}\to{Y}$ as a Markov chain based on physical relationships. By utilizing the property of conditional independence between two non-adjacent states in a Markov chain, we derive the relationship between conditional entropy as follows:
$$
\begin{aligned}
H(Y\vert\tilde{X})&=-\mathbb{E} _{p(\tilde{X},Y)}\big[\log{p(Y\vert\tilde{X})}\big]\newline
&=-\mathbb{E} _{p(\tilde{X},X,Y)}\big[\log{p(Y\vert\tilde{X})}\big]\newline
&=-\mathbb{E} _{p(\tilde{X},X,Y)}\big[\log{p(Y\vert{X})}\big]-\mathbb{E} _{p(\tilde{X},X,Y)}\big[\log{p(X\vert\tilde{X})}\big]\newline
&=-\mathbb{E} _{p(X,Y)}\big[\log{p(Y\vert{X})}\big]-\mathbb{E} _{p(\tilde{X},X)}\big[\log{p(X\vert\tilde{X})}\big]\newline
&=H(Y\vert{X})+H(X\vert\tilde{X})\newline
&\ge H(Y\vert X)
\end{aligned}
$$
Intuitively, $H(Y\vert{X})$ is smaller than $H(Y\vert\tilde{X})$ since $Y$ is a property calculated from $X$. Thus, the term *"higher quality of information"* means that $I(X;Y) \ge I(\tilde{X};Y)$, which is equivalent to $H(Y\vert\tilde{X})\ge H(Y\vert X)$. In the same spirit, we can view $\tilde{X}\to{\frac{X+\tilde{X}}{2}}\to{X}$, i.e., the geometry optimization process, as a Markov chain. Appendix B.1 compares the uncertainty in predicting $Y$ from the intermediate state $\frac{X+\tilde{X}}{2}$ with that from the earlier state $\tilde{X}$: the later state exhibits reduced uncertainty in predicting $Y$, which is consistent with the argument above. Therefore, based on this formulation and these results, $\tilde{X}$ should be selected as the state preceding $X$ in the Markov chain $\tilde{X}\to{X}\to{Y}$. In all of our experiments, the initial geometries $\tilde{X}$ are used for geometry optimization to obtain the optimized geometries $X$, and the properties $Y$ are computed directly from $X$.
| EXP | $\tilde{X}$ | $X$ |
| --- | --- | --- |
| $\mathrm{QM9}_M$ | MMFF-optimized geometries | DFT-optimized geometries |
| Reaction | Reactant and product geometries | Reactant and transition state geometries |
| OC20 | Initial structures (IS) | Relaxed structures (RS) | | NeurIPS_2023_submissions_huggingface | 2023 |
Computational Guarantees for Doubly Entropic Wasserstein Barycenters | Accept (poster) | Summary: The paper presents an algorithm (damped Sinkhorn) and theoretical convergence guarantees for computing doubly regularized Wasserstein barycenters. The concept of doubly entropic Wasserstein barycenters extends the single entropic regularized barycenters by introducing an additional level of regularization. This addition allows for the two regularization terms to counterbalance each other, leading to a debiasing effect under the right conditions. The authors demonstrate that the damped Sinkhorn algorithm can be implemented using an approximate Sinkhorn oracle. The latter can be evaluated using Monte Carlo sampling, providing a complete computational pipeline with theoretical guarantees.
Strengths: The paper tackles a complex and highly relevant problem in optimal transport and computational geometry: computing Wasserstein barycenters.
The convergence guarantees are rigorous and well-explained, enhancing the value of the proposed algorithm.
The paper connects the algorithmic problem to practical computation, demonstrating how the proposed methods can be implemented using Monte Carlo sampling. This gives a complete end-to-end view of the problem, which is valuable.
Weaknesses: The paper has some minor errors and inconsistencies, such as missing weights in Equation (2) and inconsistent notations (lack of boldface in $\psi^*$) in Lemma 1 and Theorem 1. There is a typo in line 203 (“is can”). In Definition 1, both $\widehat\nu$ and $\widetilde\nu$ are used.
In the discussion above Lemma 2, the claim about achieving accuracy close to $\sqrt{\varepsilon_\mu}$ is misleading as the bound seems to be larger than $(m_j \varepsilon_\mu)^{1/4}$ based on the first two terms in the bound.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The LSI constant derived using Holley--Stroock perturbation seems impractical due to its large value. Are there any more practical situations where this constant could be more manageable?
What can be said about the convergence of the algorithm for an unbounded cost, e.g., the quadratic cost over $\mathbb R^d$?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and for spotting some typos.
- The LSI constant is indeed large, which renders the computational speed exponential in $R/\tau$, where $R$ is the radius of the domain. This is, however, unavoidable because Wasserstein barycenters are NP-hard to compute for discrete point clouds (https://arxiv.org/abs/2101.01100). This means we must pay an exponential cost if we let $\lambda$ and $\tau$ go to zero.
Concerning interesting practical situations where the cost could be manageable, we may consider the following:
1. Use $\lambda,\tau = \Theta(1)$; this way, we lose approximation properties but gain in statistical properties and in computational speed;
2. Work with specific classes of marginals, e.g., it is possible to carry out exact computations for Gaussian marginals, where we can actually implement the exact scheme (Algorithm 1);
3. There is some hope that some efficiency could be gained, for example, for log-concave measures, again relying on sampling approaches. However, we are not aware of any specific results in this direction yet.
- Regarding convergence in the unbounded case, we believe this should be possible. It is known how to analyze Sinkhorn's algorithm in the unbounded case (see https://arxiv.org/abs/2212.06000), and because our scheme essentially combines Sinkhorn updates with damping, we believe it should not be too difficult to extend our results to quadratic cost in $\mathbb{R}^{d}$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I am satisfied and I will keep my score. Unlike the other reviewers, I did not find the lack of experiments to be a detriment to the paper. Given the general lack of theory for computing barycenters (as mentioned, this seems to be the first convergence guarantee of its kind for the setting under consideration), it is clear that the theoretical result is already considerably interesting. Therefore, I strongly advocate for acceptance. | Summary: -----
EDIT : 6 --> 8
-----
This paper builds on [15] and considers the recently introduced model of _doubly regularized entropic Wasserstein Barycenters_, which, given a set of measures $\nu^1,\dots, \nu^k$, weights $(w_j)_j$ (non-negative and summing to $1$), a reference measure $\pi$, and two smoothing parameters $\lambda,\tau > 0$, considers the problem
$$ \text{ minimize } \mu \mapsto \tau \mathrm{KL}(\mu,\pi) + \sum_{j=1}^k w_j T_\lambda(\mu, \nu_j), $$
where
$$ T_\lambda(\mu,\nu) = \inf_\gamma \langle c,\gamma\rangle + \lambda \mathrm{KL}(\gamma,\mu\otimes\nu). $$
Depending on the choice of the parameters $\lambda,\tau$, this problem encompasses many standard formulations for Wasserstein barycenters and their variations (Schrödinger barycenters, etc.).
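As a point of reference for the inner problem $T_\lambda(\mu,\nu)$ above, here is a minimal Sinkhorn sketch evaluating it for discrete measures. This illustrates only the standard entropic OT solver with KL regularization against the product reference, not the damped scheme analyzed in the paper; the grid size, cost, and iteration count are arbitrary choices:

```python
import numpy as np

def entropic_ot(mu, nu, C, lam, iters=2000):
    """Sinkhorn for T_lam(mu, nu) = min_gamma <C, gamma> + lam * KL(gamma || mu x nu),
    over couplings gamma with marginals mu and nu (discrete measures)."""
    K = np.outer(mu, nu) * np.exp(-C / lam)   # Gibbs kernel w.r.t. the product reference
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):                    # alternating scaling updates
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    gamma = u[:, None] * K * v[None, :]
    kl = np.sum(gamma * np.log(gamma / np.outer(mu, nu)))
    return np.sum(gamma * C) + lam * kl, gamma

# Two uniform measures on a 1-D grid with quadratic cost.
x = np.linspace(0.0, 1.0, 20)
mu = np.full(20, 1 / 20)
nu = np.full(20, 1 / 20)
C = (x[:, None] - x[None, :]) ** 2
val, gamma = entropic_ot(mu, nu, C, lam=0.1)
assert np.allclose(gamma.sum(axis=1), mu) and np.allclose(gamma.sum(axis=0), nu)
assert val >= 0.0  # cost and KL are both non-negative
```

Both terms of the objective are non-negative, so the returned value is as well; the marginal checks confirm the scaling iterations have converged.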
This problem is deeply analyzed in [15] from a theoretical perspective (existence, uniqueness, smoothness of solutions, etc.), which concludes with the need to elaborate on the computational complexity of these $(\lambda,\tau)$-barycenters. This is the purpose of the current paper, which makes two (three) main contributions in that respect:
1. They provide a theoretical _damped Sinkhorn algorithm_ (which can be seen as an adaptation of the _Iterative Bregman Projections_ of [6]) for which they prove a $O(t^{-1})$ convergence rate toward the global minimizer of the above functional ($t$ being the number of steps). In a nutshell, the trick consists of replacing the "learning rate" $\lambda$ in the standard Sinkhorn algorithm by the quantity $\min(\lambda,\tau)$.
2. They provide an _approximated_ algorithm for which an implementation is possible, provided one has access to an "ApproximatedSinkhornOracle", and for which convergence rates are accessible,
3. (2bis) They provide an example of such a practical Approximated Sinkhorn Oracle based on random sampling techniques.
### References
[15] : (Doubly regularized Entropic Wasserstein Barycenters, Chizat, 2023)
Strengths: First, the paper is overall very well written. It manages to remain concise without sacrificing the mathematical rigor, and basically every line/equation is interesting, which is highly appreciated.
I also believe that its theoretical contributions are fairly significant and quite insightful and clearly speak in favor of adding more _outer regularization_ in OT-related problems.
The proofs have been investigated (with the exception of those of Lemmas 2 and 3) and no major flaw was detected; the clarity of the presentation makes them easy to parse while not being trivial.
Weaknesses: # Major issue
**The paper feels somewhat incomplete.** This is my main (and almost only) issue with the work (which undoubtedly has great potential). In a nutshell, this work is dedicated to the elaboration of provably converging **practical** algorithms to compute $(\lambda,\tau)$-barycenters; hence having absolutely no numerical illustration of such algorithms is somewhat surprising---even more given that the implementation seems to exist ($\ell$199, "we have observed empirically"; why not showcase it?).
Typically, the divergence of the standard Sinkhorn algorithm (when $\tau < \lambda/2$) and the convergence of the damped one would have been nice to observe (especially given that the former is only reported empirically). Similarly, the paper mentions both in the abstract and in the conclusion that the approach works for _both fixed and free-support_ models (which boils down to the choice of the reference measure $\pi$), but this is (almost) not developed in the work. In my opinion, it is fascinating to have a globally converging way to approximate Wasserstein barycenters, and showcasing empirical situations where the proposed approach provides much better results than the "naive" one (say, IBP) would make for a compelling addition.
Similarly, the work mentions on several occasions the edge case $\tau = \lambda/2$ (because of the powerful statistical properties showcased in [15]), but never leverages this setting as far as I can tell. I do not know if there is something specific to say from the algorithmic perspective, but here as well, I feel like this could have been a nice occasion to numerically showcase the debiasing effect (for instance).
Do not get me wrong: I believe that the current contributions of the paper are of interest. But in my opinion, there are a few elements missing to make the paper complete.
# Minor issues and other remark/suggestions
Note: these are not actual criticisms that I expect to be addressed immediately during the discussion period, but rather comments that may hopefully be useful to the authors.
- The hyperlinks are broken (probably due to the split of the pdf between the main body and the supplementary material). It may be nice to fix that for the camera-ready version (which implies, I guess, compiling the supplementary material independently. Note that you can include the main body in the supplementary, which makes it "self-contained" and quite handy for reviewers/readers).
- [typo] In Eq. (2), I think that the $(w_j)_j$ are missing. This occurs in other places as well (Eq 8, def of $\mu_\psi, Z_\phi$).
- [typo] The parameters $\nu,w, \lambda,\tau$ for the objective function $E$ are sometime missing (e.g. in Eq 10). This can be slightly confusing given the proximity with the expectation operator $\mathbf{E}$.
- [typo] $\ell$193, $\nu_j^t$ should be $\nu_t^j$ I guess.
- [typo] In definition 1, the notation $\hat{\nu}$ and $\tilde{\nu}$ are both used (in my understanding, they denote the same quantity).
- [typo] $\ell$232, I think it should be "the proof _of its convergence_ is deferred...".
- [suggestion] That's trivial but it may be handy to write in $\ell$215 that $\delta_{t}-\delta_{t+1} = E_{\lambda,\tau}^{\nu,w} (\phi_{t+1}) - E_{\lambda,\tau}^{\nu,w} (\phi_{t}) \geq 0$ by Prop 1, making the sequence of errors non-increasing (instead of saying it in $\ell$222).
- [suggestion] Lemma 1 is never referenced after being stated (only once in the contributions section). While it is of course obvious, it may be worth adding a short sentence after Theorem 1 saying something like "Theorem 1 together with Lemma 1 implies the convergence of $\mu_t$ toward $\mu_{\lambda,\tau}$."
- [typo] $\ell$430, I think that $\Delta_t^j$ should be defined without the $\log$.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: # Main questions
These are questions/suggestion I would like to see addressed during the discussion period. Given what I have written above, they naturally rely on (synthetic) numerical experiments that may hopefully shed some light on the strengths and weaknesses of the proposed approach.
1. Is there a computational price to pay for the _damped_ Sinkhorn? Namely, from what is written in the paper, I assume that the "naive" Sinkhorn algorithm (with $\eta = 1$ no matter $\tau$) does converge when $\tau \in [\lambda/2 , \lambda]$ (at least empirically). If so, (i) does it provide (empirically) the same output (asymptotically in $t$) as the output of the damped algorithm? (ii) if yes, is it faster? (I would expect that it is, since the damped algorithm somewhat slows the gradient descent).
2. Can you elaborate on how letting $\pi_{\mathrm{ref}}$ be the Lebesgue measure encodes "the free-support case"? If my understanding is correct, this forces $\mu$ to have a density, which is not exactly what "free-support" refers to in the work of Cuturi and Doucet (2014) (but I agree that their terminology can be debated as well, since they fix the cardinality of the support). But typically, if the $\nu^j$ are discrete, their $(0,0)$-barycenter is discrete as well (Carlier & Ekeland, 2015) and in that case, it may be hard for $\mu_t$ to concentrate. From [15, Thm 3.2] we may expect a $\lambda^2$ proximity between the exact Wasserstein barycenter and the $(\lambda,\lambda/2)$ one, but as far as I can tell this theorem is proved only in the case where the measures have densities. So I would be interested to see if/how much things fail in the discrete setting (now that it is possible to implement it!).
# Minor questions
These are minor questions that I'm asking because I am interested in the work, but I would not be offended if the authors do not address them during the discussion period.
- From my reading of the proofs, I missed the point where using the damped algorithm is crucial. I think that I have a correct line-by-line understanding of the proof, but not a global vision; could you tell me what would fail if one tries to run the same proof but with $\eta = 1$? I guess it appears in the proof of Lemma 4 (case $\tau < \lambda$), which I did not investigate; is it possible to have a global picture of why it is required?
- In the way Theorem 2 is currently stated, nothing is said once the condition on $T$ is reached (assuming $T < \infty$). Is it clear that once the condition is reached, the gap remains below $2\epsilon$? As far as I can tell by checking the proof, it is not clear that the sequence $\tilde{\delta}(t) - \tilde{\delta}(t+1)$ is non-increasing (once the criterion is reached); could it happen that the gap increases again in an uncontrolled way? (i.e. at time $T$ we are below $2\epsilon$, but at $T+1$ we suddenly get something much worse and we have to keep running the gradient descent again) I think that (if such a thing can happen) this can be avoided by simply checking the variations of the objective value.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I do not identify specific limitation of potential negative societal impact specific to this work.
For the former point, I guess that additional numerical experiments may showcase some limitations of the work, that would be worth discussing then (without diminishing the contributions of the work).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for such a thorough review and for your very helpful suggestions.
## Answers to Main Questions
### Question 1: Computational Price of Damping
*Is there a computational price to pay for the damped Sinkhorn?*
**Answer:** Actually, in the numerical experiments attached to the main response, we have observed that the correct amount of damping helps to speed up convergence. Indeed, Figure 1 (c) shows that $\eta = 0.5$ gives the fastest convergence (which, in this setup, is the choice suggested in the paper). Also, you may inspect Figures 2 and 3. We have three rows: no damping, theoretically suggested damping, and overdamping. In all three columns, we see that the middle row (theoretical choice) gives the fastest convergence.
However, there is a second side to this. In particular, Figure 4 suggests that damping more is worse in noisy setups. In addition, by overdamping beyond the theoretical choice specified in Algorithm 1, we do see a slowdown as expected (see Figure 3, compare the top row with the bottom one).
*Namely, from what is written in the paper, I assume that the "naive" Sinkhorn algorithm does converge when $\tau \in [\lambda/2, \lambda]$ (at least empirically).*
**Answer:** This is true, which can now be seen in Figure 1 (c). Indeed, it also answers your question (i) that we asymptotically converge to the same value regardless of the choice of $\eta \in (0,1]$, and we have already discussed (ii) above.
### Question 2: On the Reference Measure
You are absolutely right that whenever $\pi\_{\mathrm{ref}}$ has a density, it forces the optimal $(\lambda, \tau)$-barycenter also to have a density. And for arbitrary discrete marginals, with $\pi\_{\mathrm{ref}}$ being the Lebesgue measure, the convergence via our Algorithm 2 would indeed be slow as $\tau$ goes to $0$ (our convergence guarantee is exponential in $R/\tau$ where $R$ is the radius of the domain). However, this is unavoidable due to the NP-hardness results for approximating $(0,0)$ barycenters due to https://arxiv.org/abs/2101.01100.
It is true that the $\lambda^2$ proximity to the true $(0,0)$-barycenter only holds in the continuous and smooth case. However, we can benefit from the debiasing effect of the $(\lambda,\lambda/2)$ barycenters -- even when we are doing computations with discrete measures -- when there is an underlying continuous structure. Consider for instance the common setting where the discrete marginals are obtained by discretizing (say, via $n$ iid samples) some continuous distributions and we would like to compute/estimate the unregularized barycenter of the continuous distributions directly. Combining the approximation ($\lambda^2$) and estimation ($\lambda^{-1-d/2}n^{-1/2}$) errors leads to better estimation bounds using $(\lambda,\lambda/2)$-barycenters compared to other entropy regularized barycenters (see [15]), even though the computation is done with discrete marginals.
## Answers to Minor Questions
- Here, I hope the performed numerical simulations will be helpful. Consider the $\tau = \lambda/2$ cases shown in Figure 1 (a) and Figure 2 (left-most plot). Both plots display a zig-zag-type line for the undamped algorithm ($\eta = 1$), which essentially shows a behavior analogous to the one encountered in quadratic optimization for gradient descent with too large a step size, which is when gradient descent iterates bounce back and forth between different sides of a valley. In the gradient descent case, this zig-zag behavior arises for smooth problems with too large a step size, which is also what is happening in our simulations here. Indeed, there is a precise sense in which smoothness of the dual maximization objective for doubly entropic barycenters can be characterized [Proposition 2.6., https://arxiv.org/pdf/2303.11844.pdf], which shows that decreasing $\tau$ (unsurprisingly) makes the dual objective less smooth, which is detrimental to the $\eta=1$ algorithm as its convergence is essentially smoothness based. What matters is the relation of $\tau$ to $\lambda$, and as $\tau$ passes below $\lambda/2$, we move beyond a critical threshold at which the exact alternating maximization/minimization stops working (see Figure 1 (b) for a display of this phase transition).
- You are absolutely right. We do not say anything about what happens after the condition is reached. Indeed, numerical simulations (right-most column in Figures 2,3,4) demonstrate a curious behavior, where Algorithm 2 first reaches a good value and later gets stuck into a slightly worse one, where it remains indefinitely.
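The step-size intuition discussed above can be caricatured on a one-dimensional quadratic, damped the same way ($x \leftarrow x - \eta\, s\, a\, x$). This toy is our own illustration of why damping tames an otherwise diverging alternating scheme; the constants are arbitrary and it is unrelated to the paper's actual updates:

```python
def iterate(eta, a=3.0, s=0.8, x=1.0, steps=60):
    """Damped gradient step on f(x) = a*x^2/2 with base step s.

    Each update multiplies x by (1 - eta*s*a); the iteration converges
    iff |1 - eta*s*a| < 1, and a negative multiplier produces the
    zig-zag (sign-alternating) behavior.
    """
    for _ in range(steps):
        x = x - eta * s * a * x
    return abs(x)

assert iterate(eta=1.0) > 1e6     # multiplier 1 - 2.4 = -1.4: iterates bounce and blow up
assert iterate(eta=0.5) < 1e-12   # multiplier 1 - 1.2 = -0.2: fast convergence
```

The undamped run diverges while bouncing between the two sides of the parabola, exactly the zig-zag pattern described for $\eta = 1$; halving the step restores a contraction.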
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Thank you for your detailed answer and the additional pdf including few experiments (that are worth including in the main paper if eventually accepted).
I am quite convinced by the depth of the work and will increase my grade. | Summary: The paper proposes a computational algorithm for computing the newly developed regularized Wasserstein barycenters of [Chizat, 2023] by optimizing the dual of the primal problem. The paper also characterizes the convergence of both the exact and approximate algorithms, which are the main contributions of the paper.
Strengths: The algorithms and their convergence guarantees for computing doubly entropic Wasserstein barycenters are significant technical contributions. The paper is easy to follow, as the related results are well presented before being used in the derivations of the proposed algorithms.
Weaknesses: There are some aspects the paper can be improved:
- A section of numerical experiments/illustrations on both synthetic and real-world data would help to demonstrate the results of the two main algorithms and their convergence guarantees.
- The algorithms should be linked to the update equations in the text. For instance, in Alg. 1, step 2(a) computes the left part of equation (11) while steps 2(b-d) compute the right part of that equation; the authors break these down into multiple lines of pseudocode without stating their purpose, which makes them harder for readers to follow.
- In line 199, "We have observed empirically that the iterates of the iterative Bregman projections (i.e., the scheme of updates (12), (11)) diverge whenever $\tau < \lambda/2$", it is not clear where the results with the iterative Bregman projections are presented.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: As the doubly entropic Wasserstein barycenter may have interesting properties for some specific types of data, it would be great if the authors could discuss this further in terms of both theoretical and experimental aspects.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: No empirical result provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review.
The theoretical aspects of doubly entropic barycenters have been thoroughly investigated in https://arxiv.org/pdf/2303.11844.pdf. Our primary goal was, instead, to provide new numerical schemes for their computation and to establish their convergence, particularly covering the case $\tau < \lambda$ for which we obtained the first provably convergent method.
In addition, some numerical examples concerning a comparison of $(\lambda, \tau)$-barycenters with different parameter values are available in Section 6 of the referenced paper. | Summary: This paper proposes an algorithm for solving the doubly regularized Wasserstein barycenter problem for probability measures that corresponds to adding an inner regularization based on the entropy penalty appearing in the Wasserstein distance term, and an outer regularization appearing at the level of the Wasserstein barycenter problem, based on the KL divergence. The algorithm comes with convergence guarantees, depending on inner and outer regularization parameters.The algorithm is then modified to approximately solve the doubly regularized Wasserstein barycenter problem, but with non-asymptotic convergence guarantees.
Strengths: This article is very well written, proposing an algorithm for a recently introduced notion of regularization for the Wasserstein barycenter, unifying several proposals in the literature. The proposed algorithm does not rely on space discretization (prohibitive in high dimensions). The proofs are accurate, well documented and complete. The most important contribution lies in the approximate damped Sinkhorn scheme, which allows for non-asymptotic convergence guarantees on the value of the dual objective.
Weaknesses: The idea of damping an algorithm is not really innovative, but it stabilizes the algorithm and provides convergence guarantees for any positive values of the parameters $(\lambda,\tau)$.
A result on the distance between barycenters obtained via the exact and approximate damped Sinkhorn scheme could have been interesting. In particular, the addition of experiments on the barycenters obtained via both algorithms would strengthen the paper.
Finally, an intuition on the approximate Sinkhorn oracle (Definition 1) is important in my opinion, as the properties listed in Definition 1 are quite precise. An example of a Radon-Nikodym derivative that does not satisfy these properties could be added.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Probability measures are supported on a compact convex subset of $\mathbb{R}^d$, could this be relaxed?
In practice, how is the number of support points of the discrete barycenter chosen? More generally, how do you choose $\pi_{ref}$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: An experimental (and if possible theoretical) comparison of the barycenters obtained by the exact and approximate proposed algorithms, as well as a comparison with Bregman iterative projections algorithm for particular choices of regularization parameters would, in my opinion, improve this article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions.
- Regarding the compactness assumption, it is really only needed in our case because the prior work that introduced doubly entropic barycenters derived many theoretical results in the compact case. In particular, compactness was used in that context to justify a certain interchange of inf-sup [Section 2.3, https://arxiv.org/pdf/2303.11844.pdf]. Because these types of results can usually be obtained via other means (e.g., by approximation), and because it is known how to analyze Sinkhorn's algorithm for unbounded measures, we do not see any major obstacles in extending our results beyond the compact case.
- Regarding the choice of $\pi\_{\mathrm{ref}}$, there is, in general, no way to know where the optimal barycenter is supported or in which part of the region it is concentrated without computing it first. Thus, you would typically choose $\pi\_{\mathrm{ref}}$ by gridding the space. Of course, in high dimensions, this approach is not feasible. This motivates looking into grid-free/support-free methods such as Algorithm 2 with the implementation suggested following Lemma 2.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thank you for your response and for producing an additional section of convincing experiments. I have also read the responses of the other reviewers, who have shed light on the contribution of your work, and have decided to increase my score from 5 to 7. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their feedback. We will answer minor questions raised by the reviewers individually; in this shared response to all reviewers, we will focus on the numerical simulations aspect.
Most of the reviewers pointed out the absence of numerical simulations as a significant shortcoming of our paper. We have performed numerical simulations that cover a significant fraction of the concerns raised.
## Note: there is currently no performance benchmark for our algorithm.
Several reviewers suggested benchmarking our algorithm against others. We would like to clarify that:
- We have provided the first provably convergent method for the case $\tau < \lambda$ (both exact and inexact versions).
- Our focus on $\tau < \lambda$ is deliberate. Specifically, the recently discovered case $(\lambda, \lambda/2)$ enjoys very strong theoretical properties, and we have provided the first algorithm covering this case with convergence rates.
- When $\tau \geq \lambda$ (no damping), our algorithm reduces to the classical alternate maximization/minimization scheme, which is well understood from both practical and theoretical perspectives. However, note that we provided a unified analysis that covers all choices of $(\lambda, \tau)$.
## Simulation setup: isotropic Gaussian marginals.
However, even without a benchmark, we can perform insightful simulations to address several issues highlighted by the reviewers. These include:
- demonstrating the need for damping when $\tau < \lambda/2$;
- examining empirical convergence rates with and without damping;
- testing the inexact damped Sinkhorn scheme (Algorithm 2).
To provide meaningful computations, we need a setup for which we know what the true $(\lambda, \tau)$ barycenter is. In this response and in the attached simulations, we denote the optimal barycenter for a given problem by $\mu^{*}\_{\lambda, \tau}$, while the iterates of our algorithms (Algorithm 1 and Algorithm 2) are denoted by $\mu\_{t}$.
The only known case with closed-form $\mu^{*}\_{\lambda, \tau}$ is that of isotropic Gaussian marginals with the same variance (Proposition 3.4 in https://arxiv.org/pdf/2303.11844.pdf). Let $w \in \mathbb{R}^{k}$ denote the non-negative weights vector that sums to one, let $\sigma^{2} > 0$ be arbitrary, and for $j=1,\dots,k$ let $m\_{j} \in \mathbb{R}^{d}$ be arbitrary. If $\nu^{j} = N(m\_{j}, \sigma^{2}I\_{d})$ for $j=1,\dots,k$, then
$$
\mu^{*}\_{\lambda, \tau} = N(\sum\_{j=1}^{k} w\_{j}m\_{j}, \xi^{2} I\_{d})
\text{ with } \xi^{2} = \frac{(\sigma^{2} + \sqrt{(\sigma^{2} - \lambda)^{2} + 4\sigma^{2}\tau})^{2} - \lambda^{2}}{4\sigma^{2}}.
$$
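As a sanity check of this closed form (our own illustration, taking Proposition 3.4 as stated above), one can verify numerically that $\xi^2 \to \sigma^2$ without regularization, and that the bias at the debiased choice $\tau=\lambda/2$ is of higher order in $\lambda$ than at $\tau=\lambda$; the numerical values below are arbitrary:

```python
import math

def xi2(sigma2, lam, tau):
    # Closed-form variance of the (lam, tau)-barycenter of isotropic Gaussians
    # N(m_j, sigma2 * I), per the formula stated above (Prop. 3.4, arXiv:2303.11844).
    root = math.sqrt((sigma2 - lam) ** 2 + 4 * sigma2 * tau)
    return ((sigma2 + root) ** 2 - lam ** 2) / (4 * sigma2)

sigma2 = 1.0
# No regularization: recover the exact barycenter variance sigma2.
assert abs(xi2(sigma2, 0.0, 0.0) - sigma2) < 1e-12

lam = 1e-3
bias_debiased = abs(xi2(sigma2, lam, lam / 2) - sigma2)  # tau = lam/2: O(lam^2) bias
bias_plain = abs(xi2(sigma2, lam, lam) - sigma2)         # tau = lam:   O(lam) bias
assert bias_debiased < 1e-2 * bias_plain
```

A quick expansion confirms this: at $\tau=\lambda/2$ one gets $\xi^2 \approx \sigma^2 + \lambda^2/(4\sigma^2)$, whereas at $\tau=\lambda$ one gets $\xi^2 = \sigma^2 + \lambda$, matching the $\lambda^2$-proximity claims discussed in the reviews.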
## Implementing Algorithm 1.
When the marginals are Gaussian (not necessarily isotropic) and we initialize $\psi^{j} = 0$, there exist $d\times d$ matrices $A_{t}^{j}, C_{t}^{j}, \Sigma_{t}$ and $d$-dimensional vectors $b_{t}^{j}, d_{t}^{j}, e_{t}$ such that for all $t$ the iterates of Algorithm 1 satisfy:
$$
\psi^{j}\_{t}(y) = \frac{1}{2}y^{\top}A\_{t}^{j}y - y^{\top}b\_{t}^{j} + \mathrm{const},\quad
\phi^{j}\_{t}(x) = \frac{1}{2}x^{\top}C\_{t}^{j}x - x^{\top}d\_{t}^{j} + \mathrm{const},\quad
\mu_{t} = N(e_{t}, \Sigma_{t}).
$$
That is, the functions $\psi$ and $\phi$ are always quadratic, and $\mu_{t}$ is always Gaussian. In this case, all the integrals involved in Algorithm 1 have closed-form solutions, and we can implement Algorithm 1. Remark: for work in this spirit, see https://arxiv.org/abs/2006.02572.
## Implementing Algorithm 2.
We implement Algorithm 2 via the transform $A_{t}^{j} \mapsto A\_{t}^{j} - N\_{t}^{j}$, where $N\_{t}^{j}$ is a random positive semi-definite matrix with trace equal to $\varepsilon$ (see attached simulations). This serves as a toy model for an inexact Sinkhorn oracle for Gaussian simulations (but note that it is different from the Monte Carlo scheme that we proposed for discrete measures).
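A random positive semi-definite matrix with prescribed trace, as used in this toy perturbation, can be generated as follows (a sketch under our notation; the function name is illustrative):

```python
import numpy as np

def random_psd_with_trace(d, eps, seed=None):
    """Random positive semi-definite d x d matrix whose trace equals eps,
    serving as the toy perturbation N_t^j described above."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((d, d))
    s = g @ g.T                      # PSD by construction
    return eps * s / np.trace(s)     # rescale the trace to eps
```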
## Limitations of our experimental design.
Let us briefly summarize some limitations of our simulation setup:
- The results in our paper are proved under boundedness assumptions, while Gaussians are unbounded. While it requires justification, extending our results to unbounded cases should not cause any major problems.
- Our simulations do not cover the free-support discrete point clouds setup.
## Comments on the uploaded simulation results.
We now comment on our simulations, referring to the figures in the attached pdf:
- Figure 1 considers the toy setup $k=1$, $\nu^{1} = N(0,1)$. We observe:
1. figure (a) shows that undamped iterates explode for $\tau < \lambda/2$ (note the absence of a blue line, because its iterates exploded);
2. figure (b) zooms in on the case $\tau \approx \lambda/2$, revealing a sharp phase transition in the convergence behavior at the critical value $\tau = \lambda/2$;
3. figure (c) demonstrates that damping removes the bad behavior. Moreover, the damping factor suggested in our work yields the fastest convergence among the five tested choices of the damping factor $\eta$.
- Figure 2 investigates an undamped inexact algorithm. As suggested by Theorem 2, the iterates converge up to some level governed by the accuracy parameter of the inexact Sinkhorn oracle.
- Figure 3 performs the same simulations, but this time with damping. We observe again that damping helps significantly in the critical case $\lambda/\tau = 2$, yielding a much faster convergence. The second row in Figure 3 considers the overdamped case, where we observe slightly slower convergence than that with the optimal choice of damping parameter.
- Figure 4 investigates the effect of damping in the noisy setup (Algorithm 2). First, observe that the undamped algorithm ($\eta = 1$) explodes in the noisy case with $\tau = \lambda/2$, while this was not the case in the zero-noise setup (Figure 1 (a)). Moreover, we find that overdamping might have a negative effect on attained accuracy: larger damping factors lead to lower accuracy at convergence in noisy setups.
Pdf: /pdf/0185dd1ce3cb1b614aeea1195285e501678e0c6a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a study on the computation of doubly regularized Wasserstein barycenters, a recently introduced family of entropic barycenters with inner and outer regularization strengths. The authors build upon previous research, which has shown that different choices of regularization parameters unify various entropy-penalized barycenter concepts, including debiased barycenters.
The proposed algorithm for computing doubly regularized Wasserstein barycenters combines damped Sinkhorn iterations with exact maximization/minimization steps, ensuring convergence for any choice of regularization parameters. Additionally, the authors introduce an inexact variant of the algorithm that utilizes a Sinkhorn Oracle definition, providing non-asymptotic convergence guarantees for approximating Wasserstein barycenters between discrete point clouds in the free-support/grid-free setting. The authors highlight that while a straightforward adaptation of the alternate maximization scheme leads to diverging iterates for small values of $\tau$, their analysis demonstrates that damping these iterations is sufficient to achieve convergence.
Strengths: 1. Clear structure, good mathematical exposition, and solid theoretical results.
2. Detailed proofs to support their claims, demonstrating a strong theoretical foundation
Weaknesses: 1. Lack of numerical experiments and empirical evaluations.
2. While the paper discusses computational complexity, further analysis and discussion on the scalability and efficiency of the algorithm could provide a more comprehensive understanding of its practical implications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have the following questions/comments for the authors:
1. Can the authors provide more motivation for using the definition of Sinkhorn Oracle in Section 4, Approximate Damped Sinkhorn Scheme? It would be helpful to understand the rationale behind this choice and how it contributes to the proposed algorithm.
2. In Section 2.2, Doubly Regularized Entropic Barycenters, could the optimization part (Line 172 - 184) be made more clear and detailed? This section seems to be the foundation for the updates in your main contribution algorithm. Providing additional explanations and elaboration would enhance the understanding of the optimization process.
3. Algorithm 1 is presented without a thorough mathematical derivation. It would be beneficial to include a more detailed derivation or explanation of the algorithm to aid readers in understanding the underlying principles and steps involved.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: While the paper provides theoretical guarantees and analyses, incorporating visualizations and benchmarking against alternative approaches would enhance the practical significance of the proposed algorithm
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions. We respond to your questions below:
1. Let us explain why the Approximate Sinkhorn Oracle (Definition 1) is defined the way it is. First, why do we need an approximate algorithm at all? We need it because, for continuous measures, we typically cannot implement the exact algorithm (Algorithm 1) due to the high-dimensional integrals that need to be computed in lines (a) and (e) of Algorithm 1. Of course, when the marginals are discrete, for any given x we can actually evaluate the line (a), as the integral inside is just a sum. Thus, the only issue is with the computation of line (e), repeated below for convenience:
$$
(e) \quad \frac{d\nu\_{t}^{j}}{d\nu^{j}}(y) \leftarrow \int \exp\left(\frac{\phi_{t}^{j}(x) + \psi_{t}^{j}(y) - c(x,y)}{\lambda}\right)\mu\_{t}(dx).
$$
While we cannot compute the integral above exactly, if we can generate samples from $\mu_{t}$, then for any given $y$ we could estimate the desired quantity $\frac{d\nu\_{t}^{j}}{d\nu^{j}}(y)$ up to some error.
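Such a sample-based estimate could be sketched as follows (all callables here are hypothetical stand-ins, not the notation of a specific implementation):

```python
import numpy as np

def density_ratio_mc(y, phi, psi, cost, lam, x_samples):
    """Monte Carlo estimate of line (e): the empirical average of
    exp((phi(x) + psi(y) - c(x, y)) / lam) over samples x ~ mu_t."""
    vals = np.exp((phi(x_samples) + psi(y) - cost(x_samples, y)) / lam)
    return vals.mean()
```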
This leads to the question: suppose, instead of $\frac{d\nu_{t}^{j}}{d\nu^{j}}(y)$, we have some other function $g^{j}\_{t}(y)$. How close does it need to be to $\frac{d\nu_{t}^{j}}{d\nu^{j}}(y)$ for us to run Algorithm 1 with $g^{j}\_{t}$ instead of $\frac{d\nu_{t}^{j}}{d\nu^{j}}(y)$? Well, the answer to this is laid out in Definition 1. To get these properties, you just need to follow the proof of Theorem 1 and put $g^{j}_{t}$ in place of $\frac{d\nu\_{t}^{j}}{d\nu^{j}}(y)$, observing what properties need to hold for you to preserve the convergence analysis up to some tolerance. You may then see how the proof of Theorem 2 follows the lines of Theorem 1 when the properties of Definition 1 are plugged into the proof.
We hope this helps!
2. and 3. Thank you for the suggestions. We will make sure to clarify these parts when we update the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's detailed response. It addresses most of my concerns. I'll raise my score accordingly. | null | null | null | null | null | null |
PAC Learning Linear Thresholds from Label Proportions | Accept (spotlight) | Summary: Learning from label proportions (LLP) allows training data to be aggregated into sets of feature vectors, with the sum or average of their labels as the label for each set. As in supervised learning, the goal is to classify a test set of instances and minimize the error of the classifier. This work focuses on the learnability of linear threshold functions (LTFs) in the LLP setting over Gaussian distributions. Although PAC learnability of LTFs has been studied in the LLP setting, it is intractable in PAC-LLP to learn LTFs using LTFs. This work restricts the problem to Gaussian distributions, which makes LTFs PAC-learnable. The approach is to directly maximize the instance-level accuracy on the distribution.
Strengths: 1. PAC-learnability of LTFs in the LLP setting (which is a very interesting topic) is well-defined as Definition 1.2 and well-addressed by Theorem 1.3, 1.4 and 1.5, especially considering the case for k = q/2.
2. The proof of Theorem 1.4 is sound and solid coming with an algorithm and lemmas (even proofs for lemmas).
3. As a theoretical work, it is nice to have a section for experimental results, although the datasets look relatively simple.
Weaknesses: 1. For Theorem 1.3 and Theorem 1.5, it would be better to provide one or two sentences each about the proofs instead of putting everything in the appendix. You don't have to provide a separate section like for Theorem 1.4. Just a few sentences, please.
2. The writing looks less readable for some sentences. Please refer to limitations section.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. In Theorem 1.5, what is \lambda? What are \lambda_max and \lambda_min? Please clarify notations like what you did on Line 126.
2. What are relationships for r* and r head in Theorem 1.3, 1.4 and 1.5? I know you explained this on Line 145. Please put it ahead.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: 1. If possible, please don’t include citations in the abstract.
2. On Line 59, I guess you typed one more “level”.
3. Please make “Theorem” (Like on Line 247 or 257) and “Thms.” (Like on Line 128) consistent (Use either one).
4. Need a space between “independence” and “i.e.,” on Line 132.
5. Similarly, for Line 148, need a space between “directions” and “We”.
6. You should read over your manuscript carefully and try to rephrase some sentences to make them more readable (I wont correct them here verbatim).
- Some sentences have no subject.
- Some sentences have no periods or spaces like limitations 4 and 5.
- Some pronouns refer ambiguously.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q. *For Theorem 1.3 and Theorem 1.5, it is better to provide one or two sentences for each about proofs instead of putting everything in the appendix. You dont have to provide a separate section like Theorem 1.4. Just few sentences please.*
*Authors*: While we have included in Section 1.4 an overview of our techniques for Theorems 1.3 and 1.5, we accept the suggestion and will also add a paragraph after Section 4 in the main paper with an informal description of the proofs of Theorems 1.3 and 1.5 along with references to the respective appendices with the formal proofs.
Q. *In Theorem 1.5, what is \lambda? What are \lambda_max and \lambda_min? Please clarify notations like what you did on Line 126.*
*Authors*: $\\lambda_{\\max}$ and $\\lambda_{\\min}$ are the maximum and minimum eigenvalues respectively of $\\mathbf{\\Sigma}$, and we will explicitly state this before the statements of Theorems 1.4 and 1.5.
Q. *What are relationships for r\* and r head in Theorem 1.3, 1.4 and 1.5? I know you explained this on Line 145. Please put it ahead.*
*Authors*: The value of $\\hat{\\mathbf{r}}$ output by our algorithms is a close estimate of $\\mathbf{r}^*$ (or possibly $-\\mathbf{r}^*$ in the case of balanced bags) obtained via the mean-estimation (Theorem 1.3) or the PCA based method (Theorems 1.4 and 1.5). We will add this in Section 1.3 as well.
We are grateful to the Reviewer for pointing out inadvertent typos and some editorial feedback. We shall of course correct all the typos as well as improve the phrasing of sentences wherever required.
---
Rebuttal Comment 1.1:
Title: Thanks for rebuttal
Comment: I thank the authors for their rebuttal; it clears all my concerns. Thanks!
Strengths: The learnability of LTFs in the LLP setting is an important issue, this work provides some novel and significant theoretical results.
Weaknesses: $\bullet$ From Theorem 1.3 to 1.5, as the distribution becomes more general, the sample complexity required to efficiently properly learn LTFs also increases, and the difference between these sample complexities seems to be obvious. Numerical results on the difference between these sample complexities and associated discussion and analysis are necessary to more intuitively elucidate these theoretical results.
$\bullet$ Intuitively, the theoretical results in this paper will depend on the label proportion $k/q$, but in the end, it is mainly implemented on the bag size $q$, i.e., the order of the sample complexity does not explicitly reflect the label proportion $k/q$. While this is not mathematically problematic, it would be helpful to add some remarks to illustrate this point.
$\bullet$ The readability of the paper needs to be improved. More introductions to the problem setting and related definitions of LLP, as well as further illustrations and explanations about the existing theoretical work will help readers better understand the theoretical results in this paper. In addition, some typos also harm the readability.
Typos:
$\bullet$ Line 27: they're labels The goal -> their labels. The goal
$\bullet$ Line 132: independencei.e., -> independence, i.e.,
$\bullet$ Line 146: $r = q/2$ -> $k = q/2$
$\bullet$ Line 148: directionsWe -> directions. We
$\bullet$ Line 350: bag satisfaction -> bag classification
$\bullet$ Some typos in the Supplementary Material also need to be checked and corrected.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses for details.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This work does not seem to have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q. *From Theorem 1.3 to 1.5, as the distribution becomes more general, the sample complexity required to efficiently properly learn LTFs also increases, and the difference between these sample complexities seems to be obvious. Numerical results on the difference between these sample complexities and associated discussion and analysis are necessary to more intuitively elucidate these theoretical results.*
*Authors*: We do include experiments in Appendix F on the setting of Theorems 1.3, 1.4 and 1.5, focusing on the performance improvement against existing baselines of [Saket 21, 22] and random LTF. For example, from Tables 3 and 4 we observe that Algorithm 2 (without LTF offset) has better accuracy than Algorithm 4 (with offset) for the same sample complexity, which is consistent with our theoretical results. We will include in Section 5, a discussion of the performance and sample complexity for the various settings.
Q. *Intuitively, the theoretical results in this paper will depend on the label proportion $k/q$, but in the end, it is mainly implemented on the bag size $q$, i.e., the order of the sample complexity does not explicitly reflect the label proportion $k/q$. While this is not mathematically problematic, it would be helpful to add some remarks to illustrate this point.*
*Authors*: The sample complexity depends on both $q$ and $k$; however, for ease of notation we have used the upper bound $q$ for $k$ as well.
Appendix A gives the proof of Theorem 1.3 where the sample complexity solely depends on $k/q$ (refer to the expression for $\\eta(k, q)$). The algorithm (Algorithm 3) samples a single feature-vector from each bag to compute a mean estimate. Thus, the probability of sampling it from the positive half should only depend on $k/q$ and the sample complexity therefore only depends on $k/q$.
However, in Algorithm 2, we sample pairs of feature-vectors with and without replacement. When sampling without replacement, the probabilities of the label configurations of the two feature-vectors depend on the size of the bag $q$, and not only on $k/q$. In particular, the probability of sampling a pair of differently labeled feature-vectors without replacement is $2(k/q)(1-k/q)/(1-1/q)$. Keeping $k/q$ the same, this probability decreases with increasing bag size which increases the sample complexity for larger bags. We shall add this explanation in Section 1.4.
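The pair-sampling probability above simplifies to $2k(q-k)/(q(q-1))$, which is easy to check numerically (an illustrative sketch, not from the paper's code):

```python
from fractions import Fraction

def p_diff_pair(q, k):
    """Probability that two feature-vectors sampled without replacement
    from a bag of size q with k positive labels carry different labels:
    2(k/q)(1 - k/q)/(1 - 1/q) = 2k(q - k)/(q(q - 1))."""
    return Fraction(2 * k * (q - k), q * (q - 1))
```

Keeping $k/q = 1/2$ fixed, this probability decreases from $2/3$ at $q = 4$ to $4/7$ at $q = 8$, consistent with the claim that larger bags increase the sample complexity.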
Q. *The readability of the paper needs to be improved. More introductions to the problem setting and related definitions of LLP, as well as further illustrations and explanations about the existing theoretical work will help readers better understand the theoretical results in this paper. In addition, some typos also harm the readability.*
*Authors*: We will add more discussion in Section 1 to describe the LLP setting, and also further explain the existing body of theoretical work in LLP.
We thank the Reviewer for pointing out the inadvertent typos and we will fix them along with any others in the paper as well as the supplementary material.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their rebuttal; their responses have addressed my concerns. | Summary: This work studies the problem of learning linear threshold functions (LTFs, aka linear classifiers) under the setting of learning from label proportions (LLP), where the training data are "bags" (aka sets) of instances, and the training labels are the label proportions of the instances in each bag. Under the assumptions that the instances are Gaussian distributed, bags are instances sampled iid, and the sizes of the bags are upper bounded, the authors show that it is possible to efficiently (properly) learn LTFs.
Strengths: The problem setting of PAC-LLP deserves more attention.
The authors give good justification for why LLP matters, e.g., privacy and legal reasons.
In light of the previous theory works on LLP that show NP-hardness of PAC-LLP learning of LTFs,
this current work is surprising, relevant, and interesting to the ML community.
Theorem 1.4 gives an algorithm that highlights the interesting geometry and the clever exploitation of the subtle difference between sampling with and without replacement.
Weaknesses: After Section 1.1, it will be useful for the reader if there is an overview of the paper. Something along the lines of
"Sec 1.3 are the main results. Sec 1.4 gives proof sketch of the main results and high level description of the algorithms. Section 3 state these algorithms precisely..."
For Theorem 1.3, it will be useful to interpret the sample complexity. Is the sample complexity essentially the difficulty of estimating the mean of the bag vectors? If so, the authors should make this explicit.
For Theorem 1.4 and 1.5, are the lambdas eigenvalues of the covariance matrices? Please explicitly say so.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Why is Definition 1.2 called the bag oracle? Isn't it just the distribution over the training data (of bags)?
Does the authors believe that the results are tight? In other words, is LLP under the considered data assumption essentially equivalent to mean/covariance estimation?
It will be good if there is a discussion regarding the sample-complexities in LLP versus ordinary classification.
From this point of view, what is the "cost" of only having access to bagged data?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q. *After Section 1.1, it will be useful for the reader if there is an overview of the paper. Something along the lines of "Sec 1.3 are the main results. Sec 1.4 gives proof sketch of the main results and high level description of the algorithms. Section 3 state these algorithms precisely..."*
*Authors*: We accept this suggestion and will add a paragraph after Section 1.1 providing the organization of the paper.
Q. *For Theorem 1.3, it will be useful to interpret the sample complexity. Is the sample complexity essentially the difficulty of estimating the mean of the bag vectors? If so, the authors should make this explicit.*
*Authors*: Yes, the sample complexity in Theorem 1.3 is essentially the same as that of mean estimation of the bag-vectors up to a desired error. We note however, that this distribution is not a Gaussian due to having unbalanced bags. We will add this to the overview in Section 1.4 and also to the proof of Theorem 1.3 in Appendix A.
Q. *For Theorem 1.4 and 1.5, are the lambdas eigenvalues of the covariance matrices? Please explicitly say so.*
*Authors*: Yes, $\\lambda_{\\min}$ and $\\lambda_{\\max}$ denote the minimum and maximum eigenvalues of the covariance matrix $\\mathbf{\\Sigma}$ of the distribution from which the feature-vectors are sampled. We will state this before Theorems 1.4 and 1.5.
Q. *Why is Definition 1.2 called the bag oracle? Isn't it just the distribution over the training data (of bags)?*
*Authors*: Typically in PAC learning literature, the underlying distribution is defined over the feature-vectors. Given an (unknown) classifier, the corresponding *example oracle* samples a feature-vector according to the distribution and outputs it along with the label assigned by the classifier. We extend this to the case of bags in which the bag oracle – additionally parameterized with a size $q$ and label sum $k$ – samples $k$ iid $1$-labeled feature-vectors and $(q-k)$ iid $0$-labeled feature-vectors and outputs a bag consisting of them.
Q. *Does the authors believe that the results are tight? In other words, is LLP under the considered data assumption essentially equivalent to mean/covariance estimation?*
*Authors*: The sample complexity lower bound for PAC-LLP learning LTFs is a very relevant problem; however, it is out of the scope of this work. While there is a blowup in the sample complexity incurred by the generalization and the stability bounds (Theorem 2.2, Lemma 2.3), a major component is indeed the mean/covariance estimation (Algorithm 1). Nevertheless, one cannot rule out other algorithmic techniques for this problem which bypass such estimation.
Q. *It will be good if there is a discussion regarding the sample-complexities in LLP versus ordinary classification. From this point of view, what is the "cost" of only having access to bagged data?*
*Authors*: For ordinary classification in $d$-dimensional space, the sample complexity is essentially $O(d\\log d)$ as one can solve a linear program to obtain an LTF and then use uniform convergence to bound the generalization error. While our algorithms have the same dependence on $d$, we shall add a discussion in Section 1.3 on the blowup incurred in the LLP setting due to the bag size, the condition number of the covariance matrix and the other geometric and error parameters.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough reply. Based on the excellent submission and rebuttal, I have raised my score. | Summary: This paper studies PAC learning when the training data is aggregated into sets or bags of feature vectors. For each bag, we observe the feature vectors of these bags and only the average of the labels in the bag. They focus on the case when the feature vectors are distributed according to a Gaussian distribution, and the hypothesis class is the set of Linear Threshold Functions (LTFs). Their main results include polynomial time algorithms for learning the correct halfspace when the Gaussian is either standard or skewed and the correct LTF is either homogeneous or not. Finally, they compare their algorithm experimentally with various procedures from prior work.
Strengths: This paper proposes a very interesting direction for learning under label aggregation, which bypasses the lower bounds that existed for this setting in prior work. I think it opens the way for more interesting problems to be considered in this setting, by possibly changing the assumptions on the distribution or the hypothesis class. All the claims are sufficiently explained and the presentation is generally clear.
The authors introduce some interesting technical novelties to extract information about the true vector of the LTF using the aggregated labels. In particular, the idea of comparing the variance of a random pair with and without replacement and finding that it is maximized in the direction of the true vector $\mathbf{r^\star}$ is very elegant and leads to some interesting algorithms, which can be implemented in polynomial time using PCA. Moreover, the techniques for handling non-centered and skewed Gaussians with unknown parameters are non-trivial and interesting.
Weaknesses: This is not a significant weakness, but the technical presentation in some parts of the paper seems quite dense and does not necessarily add to the understanding of this work. For example, in the proof of Theorem 1.4 or Lemma 4.2, too much emphasis is given to the calculation of the optimal setting of parameters and sample complexity. I would prefer a more intuitive explanation at times, although I'm not sure if that's possible.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How tight are the sample complexities provided in Theorem 1.4 and 1.5 in terms of their dependence on the minimum and maximum eigenvalues of the covariance matrix and the size of the bag? In Theorem 1.5, we also see a term involving $l$, which depends on the parameters of the true LTF. Is such a dependence necessary?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q. *For example, in the proof of Theorem 1.4 or Lemma 4.2, too much emphasis is given in the calculation of the optimal setting of parameters and sample complexity. I would prefer a more intuitive explanation at times..*
*Authors*: We state our results formally in Section 1.3 and therefore include the parametric dependencies explicitly. However, to aid a more intuitive understanding, we will include an informal description of how the parameters in Theorems 1.4 and 1.5 are obtained and what they convey qualitatively in Section 1.4. Also, in the proofs of Lemmas 4.1 and 4.2 we shall add more explanations along the way for ease of understanding.
Q. *How tight are the sample complexities provided in Theorem 1.4 and 1.5 in terms of their dependence on the minimum and maximum eigenvalues of the covariance matrix and the size of the bag?*
*Authors*: We have optimized the dependencies on the eigenvalues and the bag size in our application of the algorithmic techniques used in our results. However, sample complexity lower bounds for PAC-LLP learning LTFs – while a very relevant problem – is out of the scope of this work.
Q. *In Theorem 1.5, we also see a term involving l, which depends on the parameters of the true LTF. Is such a dependence necessary?*
*Authors*: The term $l$ tells us the perpendicular distance from the center of the Gaussian to the unknown LTF's hyperplane, normalized by the stretch induced by $\\mathbf{\\Sigma}$ in the direction of $\\mathbf{r}^*$. This is required to estimate the density of the Gaussian distribution near the unknown LTF's hyperplane which directly affects the sample complexity – the less the density, the more the sample complexity. We will add this explanation in the relevant portion of Section 1.4. | Rebuttal 1:
Rebuttal: We thank the Reviewers for their encouraging and helpful feedback. We have addressed their questions and comments in the respective author rebuttals to the reviews. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Strong and Precise Modulation of Human Percepts via Robustified ANNs | Accept (poster) | Summary: The paper presents a novel approach for finding categorical perceptual changes in humans using artificial neural networks (ANNs). Notably, the paper presents compelling evidence that an adversarially trained resnet50 is better at generating these adversarial attacks on humans in a low pixel-budget regime. To test this, the authors measured in human subjects how the categorical percept changes in relation to the pixel budget assigned to a PGD attack on the network. At low pixel budgets, adversarially robust resnet50 models are able to generate stimuli that produce more successful targeted attacks affecting humans.
Strengths: Overall the paper is well-written and well-executed.
The question asked is interesting and very relevant to the community. Even though it’s not completely solved, the authors propose a first set of answers that could lead to interesting future work.
Weaknesses: Minor points:
I think having the methods presented after the results was a bit unexpected and perhaps confusing. A reorganization may help the reader understand the results a bit better, by putting the foundations first.
Be careful with Figure 2: the caption doesn’t exactly correspond to the figure (3 panels in the figure but only two in the caption). Also in panel c, second row and “frog” column, some writing appears which is probably not supposed to be part of the image.
The authors sometimes mention a pixel budget of 30 (or 30.0) and sometimes of 3.0. A revision to make sure the units are coherent throughout the paper would help.
The term “robustified models” can be a bit misleading and is not commonly used. It would be better to refer to “adversarially trained models”.
Major points:
Some of the contributions seem a bit over-claimed considering that only one type of attack is tested and also only one type of model is analyzed.
Although the pixel budget is well defined, there seems to be a strong semantic distance even in the low-pixel-budget regime. For instance, in Figure 1, the primate example, it seems like in the 10^1 budget regime there is crucial semantic information lost and added that would explain the impact on human performance. There seems to be potential in this direction to understand image classification, but the authors may need to clarify exactly why they take this route.
It seems what the paper is proposing is a categorical morpher; why take this route, and what does it tell us about human judgements? Also, why use an adversarial approach and not a Stable Diffusion approach?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why are humans so sensitive to image perturbations at an l2 budget of 3.0 and 10.0? What happens with a pixel budget of 5, for example? And more than 10? How can you explain that by changing more content in the image, the images get less semantically different for humans?
How do you explain that DM images have more effects on human errors than TM perturbations?
It was not completely clear what the performance of both models is in the 9 categories described.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No explicit limitation section is written.
It would be nice to mention that this work is restricted to only one norm of adversarial attacks and one type of architecture. Those results show an increased alignment between humans and models’ robustness to adversarial attacks but a lot remains to be analyzed to completely claim of closing the gap.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Errors/typos & paper organization**
We thank the reviewer for spotting those important errors/typos, now all corrected. As far as organization is concerned, and in accord with the reviewer’s suggestion, we designed the “Overview of approach and experiments” section to provide a succinct and high-level description of our methodology, while leaving mostly the more technical details for Methods. If there are additional elements, which if moved from Methods to Overview may help ease the reading further, we are happy to do so.
**Budgets 30.0 and 3.0**
Indeed, the 30.0 and 3.0 budgets are two distinct budgets, corresponding to the perturbation budget in our experiments when attacking *pre-trained models*, and that allowed *during model (adversarial) training*, respectively. We also add a visual to explain these regimes to the global response figure and to Figs 2,3.
We now further clarify this point by adding:
In Overview:
*“Notably, we denote this \textit{adversarial training budget} by $\varepsilon$, to be distinguished from the perturbation budget, $\epsilon$, which applies to any image-perturbation method.”*
Kindly refer also to our global comment about this distinction.
**The term “robustified models”**
As correctly pointed out, our “robustified models” are colloquially known as “adversarially trained models” or “robust models”. However, adversarial training does not guarantee a robust model – neither to adversarial attacks nor in broader senses of robustness. We thus believe that coining the term “robustified models” has merit in referring to just the process performed on the model towards becoming more robust. We further use it interchangeably with “adversarially trained models”.
**Limited attack/model types**
We agree that analyzing other attacks and/or model types can better support our contributions. We thus include in our Supplementary Material additional analyses titled “Different attack strategy: linf attacks and linf-robust models”. We describe and refer to this section from the main text in line 228 under “Other pixel budget constraints”.
We refactored our conclusion section to more explicitly call out the limitations (specifically, we focused on one architecture and mainly l2).
**Inducing interpretable semantic shifts**
As human observers ourselves, we agree with the reviewer (another human observer) that we also perceive a *strong semantic shift* even in the low-pixel budget regime. Systematically testing and quantifying such measures of human perception relative to model “perception” is a key contribution of our work. The ability to induce those shifts in human reports in such a pixel budget regime – which is otherwise a "perceptually stable regime” as we demonstrated via random attacks experiments or a contrast blend approach – was not known to be possible before this study.
Notably, we have in fact conducted experiments with a Stable Diffusion modulation approach as well. While those are outside the scope of this paper, they showed that, unlike robustified-models’ guided perturbations, compelling modulations by LDMs require *larger* budgets than our defined low-norm budget (i.e., < 30).
**Why are humans as sensitive to image perturbation at an l2 budget of 3.0 and 10.0**
Figs 2b and 3b show human reports for Disruption and Targeted Modulations respectively, for perturbation/attack budgets in the range < 30, performed on an array of models, including vanilla and a few robustified models. Those robustified models were trained to various robustification levels: 1, 3, and 10. The curves in both Figs 2b and 3b for all robustified models (3, 10, and mostly 1) are monotonically increasing, indicating that humans are more strongly modulated by higher allowable budget perturbations. Kindly refer to our previous clarification (and the global one) regarding the distinction between the training and the experiment perturbation budget.
**How do you explain that DM images have more effects on human errors than TM perturbations?**
There are 8/9 (n categories=9) ways to disrupt a category but only 1/9 to induce a specific one, making the former an easier objective compared with the latter under a given pixel budget constraint.
**It was not completely clear what the performance of both models is in the 9 categories described.**
Please refer to our global response on the topic.
**It would be nice to mention that this work is restricted to only one norm of adversarial attacks and one type of architecture. Those results show an increased alignment between humans and models’ robustness to adversarial attacks but a lot remains to be analyzed to completely claim of closing the gap.**
As noted previously, we have shown that these results extend also to linf attacks and have analyzed linf models. Previous studies on model adversarial training suggest that the results obtainable by other architectures (e.g., ResNet101) may be comparable. We have further mentioned in Conclusion ln 339-343 that our findings are in a lower bound sense, and that there may be other models that are more behaviorally aligned:
*“We emphasize that these are lower bound estimates because we are probing human vision with models that are imperfect approximations of the underlying neurobiology. Indeed, we do not claim that robustified ANN models discover all low-norm wormholes or that they never point to wormholes that do not actually exist. Notably, because we still see a residual gap in behavioral alignment (Figs 1c and 2b), other models, which are even more behaviorally aligned, must still exist.”*
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for the rebuttal and for the experiments adding other strategies. I have decided to keep my current score. | Summary: I have read the rebuttal and will keep my (relatively high!) score as is.
It is a folk theorem that human categorization behavior is robust to adversarial perturbations that are under an L2 norm of 30 or less. This paper shows that networks that have received adversarial training can be used to generate relatively low pixel-budget “adversarial” examples that are reliably misclassified by humans under an L2 norm of 30 or less. This closes one gap between human behavior and network behavior, in that humans are disrupted to nearly the same extent that networks are.
Strengths: + the paper shows a novel result: that adversarially trained networks, which are a better match to primate neural representations than “vanilla” networks, generate adversarial examples that can disrupt human categorization with a much lower budget than is seen with vanilla networks.
+ the paper shows that these “robustified” networks can modulate relatively arbitrary images to relatively arbitrary targets with low budgets.
+ The paper is well-written and the experiments are thorough.
Weaknesses: - Just a bit of terminology complaint: Generally, an adversarial example is by definition one that fools networks but not humans. So once humans perceive these as the target category, I would no longer call them adversarial.
- A bit of a philosophical complaint: The paper focuses on the fact that these low-budget examples fool humans and networks to conclude that now these networks are better models of the human visual system. I find that a bit distracting from the goal of discovering network properties that cause the network to be a better model of the human visual system. That is, humans don’t *need* adversarial training to not be fooled by adversarial examples. A more biologically plausible model would be one that shares this property with humans. I find adversarial training to be a kind of dumb response to the susceptibility of networks to adversarial examples - again, we should be looking for properties that make them not susceptible to these images in the first place - e.g., perhaps the right kind of recurrence, feedback, a capsule architecture (see below), or some other property will achieve this.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: lines 23-25: “However, individual ANNs are also notoriously susceptible to adversarial attack: the addition of tiny (e.g., small l2-norm) pixel perturbations to the model’s input, optimized to disrupt the model’s categorization of the original image [11–13].”
This sentence is awkward: one expects a verb in the clause after the colon, but it is just a noun phrase describing what an attack is. So, it’s ok, it’s just hard to parse - almost a garden path.
lines 36-39: for an example of a robust model that doesn’t require adversarial training, and causes computed adversarial examples to appear as the category they are trying to spoof, see https://arxiv.org/abs/2002.07405.
line 126: Fig 2a -> Fig. 2
line 160: Fig 2a -> Fig. 2b
Figure 4: I’m not clear whether these are confusion matrices or error matrices? I.e., are the entries the percent of times the network returns that answer, or are they the times that it doesn’t. The caption seems to mean the latter (1-hot). A confusion matrix usually shows the number or percent of times that the answer goes in that box, so if the network is doing well, the diagonal has high values.
line 184: do you mean precision in the technical sense (i.e., as in precision and recall)? If not, better to choose a different word.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately discuss the possibly misuse of this technique.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for supporting our work.
**Terminology complaint**
We agree that “adversarial image” is not well defined in the field and that the work we presented exposes the need for a clear definition of this phrase. Our working definition – which is inline with the original methodological approach – has been that an adversarial perturbation for a model is an image perturbation that intends to cause the model to make a mistake (relative to a ground truth label). Strictly speaking, this definition is *not dependent* on the effect of that perturbation on human perception. However, because the difference between model perception and human perception is what made adversarial examples most interesting to many, the reviewer’s assumed definition (different than ours) is understandable. To avoid this confusion, we propose to revise the manuscript accordingly:
ln48: “adversarial sensitivity” -> “low-norm perturbation sensitivity”
ln75: “low-norm adversarial sensitivity” -> “low-norm perturbation sensitivity”
ln133: “adversarial sensitivity” -> “perturbation sensitivity”
ln240: “adversarial perturbations” -> “model-guided perturbations”
**Philosophical complaint**
We agree that finding a model that is both predictive and biologically plausible in its developmental/learning trajectory is a major goal in the field. Indeed we are not claiming that adversarial training (AT) is the mechanism by which robustness in humans emerges. We rather consider AT as a method by which we arrive at an improved estimate of the adult state ventral stream and supported behavior, and a contribution of our paper is to demonstrate that. Future work may consider alternative, more biologically plausible mechanisms that may give rise to a comparably aligned model in terms of predictivity (e.g., Topographic ANNs, Margalit et al. 2023). We will add a line in the new “Limitations” section to make this clear.
**Phrasing in lines 23-25**
We see the point. Now changed to:
“However, individual ANNs are also notoriously susceptible to \emph{adversarial attack}: Adding of a tiny amplitude (e.g., ultra-low $\ell_{2}$-norm) pixel perturbation to the model's input, which is optimized to disrupt the model's categorization of the original image.”
**An example of a robust model that doesn’t require adversarial training and causes computed adversarial examples to appear as the category they are trying to spoof**
We thank the reviewer for this reference. As mentioned in our comment about biological plausibility, we do not claim nor highlight adversarial training or the specifics of our algorithm for image generation given a robustified model, as key to behavioral modulation. For this we dedicated a conclusion paragraph in lines 339-343 clarifying:
“We emphasize that these are lower bound estimates because we are probing human vision with models that are imperfect approximations of the underlying neurobiology. Indeed, we do not claim that robustified ANN models discover all low-norm wormholes or that they never point to wormholes that do not actually exist. Notably, because we still see a residual gap in behavioral alignment (Figs 1c and 2b), other models, which are even more behaviorally aligned, must still exist.”
**Fig 2 referencing typos (line 126: Fig 2a -> Fig. 2; line 160: Fig 2a -> Fig. 2b)**
Thanks! Corrected.
**Figure 4 matrices**
Indeed, these are confusion matrices, with high values on the diagonal for robustified models, but not for the control methods (image interpolation; see Supplementary Material for the vanilla model). The values shown are normalized rates for choosing the target class, the same as in panels (a) and (c), and as denoted on the color bar.
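A generic sketch of how such a normalized choice-rate (confusion) matrix could be computed from per-trial data; the exact normalization used in the paper is an assumption, and the function name is illustrative:

```python
import numpy as np

def choice_rate_matrix(target_labels, chosen_labels, n_classes):
    """Rows: intended target class; columns: class chosen by subjects.

    Entries are normalized choice rates, so each non-empty row sums to 1
    and a well-modulated condition shows high values on the diagonal.
    """
    counts = np.zeros((n_classes, n_classes))
    for t, c in zip(target_labels, chosen_labels):
        counts[t, c] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for empty rows
    return counts / row_sums
```

Under this convention the diagonal entry of row *k* is the rate at which subjects chose class *k* when *k* was the attack target, matching the color-bar description above.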
**line 184: do you mean precision in the technical sense (i.e., as in precision and recall)? If not, better to choose a different word.**
We thank the reviewer for this suggestion. We now change: “precision” -> “specificity” in ln184, ln202.
---
Rebuttal Comment 1.1:
Title: quick note about just one point
Comment:
I still don't like your revised sentence: “However, individual ANNs are also notoriously susceptible to \emph{adversarial attack}: Adding of a tiny amplitude (e.g., ultra-low-norm) pixel perturbation to the model's input, which is optimized to disrupt the model's categorization of the original image.”
here's a suggested revision:
However, individual ANNs are also notoriously susceptible to \emph{adversarial attack}: The addition of a tiny amplitude (e.g., ultra-low-norm) pixel perturbation to the model's input that causes the model to mis-categorize the image.
---
Rebuttal Comment 1.2:
Title: definition of adversarial attack
Comment: Hi - Again, we disagree about what an adversarial attack is, and I think your new terminology just obscures things. I asked a few of my friends, and they generally agree with my definition, that an adversarial attack is one that deceives the human. Geoff Hinton said, with respect to your wormholed-examples, where a human can see what it is: "I call that a deflected adversarial attack. In order to get the neural net to get it wrong, you had to change the image so much that a person also saw the image as something else." I.e., defining an adversarial attack as just making a perturbation that makes the model get it wrong is too broad a definition, and your examples are no longer adversarial - they are deflected adversarial attacks.
---
Reply to Comment 1.2.1:
Title: Definition of Adversarial Attacks
Comment: Thank you for the suggested definition of "adversarial attack", and the suggestion of "deflected adversarial attacks".
The precise definition of "adversarial attack" is not central to the contributions presented in the paper, and based on your comments we are now unsure of the consensus view of this term.
As such, we are happy to drop the term completely or change it to align with the consensus terminology that may be reached among the reviewers and the area chairs.
---
Rebuttal 2:
Title: Why my score is high compared to the other reviewers'
Comment: First, let me say that I find the results reported in this paper to be (nearly) completely novel. The fact that in "robustified" networks, small budget changes make **semantically obvious** changes in the images is something I've never seen before. Hence I disagree completely that these results are not novel and to be expected (Reviewer keVx). This paper will attract a lot of attention because of this. (I say "nearly" because in my review I did cite a paper where a special capsule network that is attacked changed the appearance semantically (deflected attacks) - but this was for MNIST digits - not nearly as interesting as this paper's results).
Their first listed contribution: " We provide evidence for the existence of low-norm image perturbations of arbitrary natural images that strongly disrupt human categorization behavior." was clearly fulfilled - and I would argue the other contributions listed are also fulfilled.
**Reviewer Xb7f (6: weak accept):** Their first weakness was "My main concern with the paper has to do with the interpretation. A perturbation size of 30 (for normalized images) seems quite large, and the example images do not (in my estimation) seem "close" to the original image."
To my mind, this is one of the main points of the paper! A perturbation of size 30 for a vanilla network (that has not been adversarially trained) would **not** be noticeable. Here, it is - there is a semantic change that is striking in how it focuses on features that would change the classification, such as essentially adding a monkey face to the frog.
And I completely agree with their interpretation that "Perturbations from adversarially trained networks are semantically meaningful,” which, if adopted by the authors, would be a nice way of putting their result.
I don’t really see how this critique - how the results are interpreted - makes the results less important.
**Reviewer keVx (4: borderline reject):** Main complaint: lacks novelty. As noted above, I completely disagree.
**Reviewer LMUo (5: borderline accept):** again, this reviewer says the result “doesn’t seem surprising. Humans often misjudge many images”. See above.
**Reviewer ibQ6 (5: borderline accept):** (small point: doesn’t like the term “robustified” - seems fine to me)
Major complaints:
- only one type of attack and one type of network tested. Response: you only need one flying pig to make a point. More seriously, the authors show the result also holds for linf models.
- Remarks that there is strong semantic distance, even in a low-budget regime. Again, to me, that’s the point of the paper - these are “wormholes” in representation space: a shortcut to another category. | Summary: The paper systematically challenges the common assumption that human categorization of images remains highly robust to small-scale image perturbations (low pixel budget). The authors find that small-scale image perturbations, guided by adversarially trained artificial neural networks (robustified ANNs), can significantly and precisely alter human perception of object categories. Not only does this finding challenge the above assumption, but it also reveals the existence of "wormholes" in the image space that can lead to a significant change in human object category perception. Moreover, the study demonstrates that contemporary models of visual processing are precise enough to locate these "wormholes".
Strengths: 1) The authors discuss from an intriguing perspective how deep neural network models (robustified ANNs) can induce anticipated human behavior (i.e., misjudgments) with generated images (slight changes on the original image).
2) The authors reveal that only models generated through adversarial training have this ability to induce.
3) The authors seek to explore human perceptual wormholes from modern visual models (robustified ANNs).
Weaknesses: 1) The paper revolves around the premise that images generated by robustified ANNs (under a low pixel budget) can interfere with human classification judgments. This conclusion doesn't seem surprising. Humans often misjudge many images; it's just that robustified ANNs can generate highly deceptive images.
2) The authors believe that robustified ANNs and human perception have similar responses to image perturbations. The only evaluation criterion seems to be the final classification error rate. Having similar error rates doesn't equate to having similar responses.
3) The idea of using robustified ANNs to search for human perceptual wormholes seems impractical, since even from the perspective of robustified ANNs, the authors do not provide any patterns of wormholes in robustified ANNs.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1) Have the authors tried more models besides ResNet?
2) Could the connection between deep models and human perception be explored from more angles, such as the similarity of intermediate features in the models?
3) Wormholes, a core part of the paper, are not prominently discussed. The authors should seek and describe the regularities of wormholes in robustified ANNs, instead of simply saying that they can generate images that interfere with human judgment from any source image distribution.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The paper revolves around the premise that images generated by robustified ANNs (under a low pixel budget) can interfere with human classification judgments. This conclusion doesn't seem surprising. Humans often misjudge many images; it's just that robustified ANNs can generate highly deceptive images**
We thank the reviewer for this comment.
We agree that attacks on robustified models were previously shown to engage human perception (as addressed in Introduction, ln38). However, to the best of our knowledge, no prior work has systematically quantified the model-human gap in adversarial sensitivity. Further, it was unknown whether in a low-budget regime – in which, as we show, human perception is stable under baseline attacks (image interpolation or random attacks) – it is possible to disrupt or induce specific category percepts, and to what degree. Furthermore, and even more surprising, the strong modulations by robustified guide models are prominently evident when perturbing at budgets well beyond the budget range used for model robustification – namely, beyond the “robustification perimeter”.
For perspective, consider some of the following live hypotheses prior to this study: image perturbations discovered by robustified ANNs (1) have no effect on human category reports, regardless of the perturbation budget, (2) have some effect on human category reports, but only in the typical-norm budget regime (i.e., typical of pairwise distances, ~130), (3) have perceptually interpretable effects in the low-norm regime, but no effect on human categorization behavior.
To better clarify our results and their novelty we now revise in Results:
*“Notably, we report these strong disruptions in human category percepts when perturbing at budgets well above the budget used for model robustification, yet still well-below the typical pairwise natural image distance regime. This result was not a priori guaranteed.”*
**Have the authors tried more models besides ResNet?**
While generalizing to other architectures is not foundational to our claims in this paper, we agree that exploring other architectures is an interesting question.
Because model adversarial training takes ~8 days to complete (on 4 A100 GPUs), and two such models must be trained per architecture and robustification level for white-box/gray-box analysis, we focused our analysis in this study on ResNet50 only.
Previous studies on model adversarial training suggest that the results obtainable by other architectures (e.g., ResNet101) may be comparable (Singh, Croce & Hein 2023). We have further mentioned in Conclusion ln 339-343 that our findings are in a lower bound sense, and that there may be other models that are more behaviorally aligned:
*“We emphasize that these are lower bound estimates because we are probing human vision with models that are imperfect approximations of the underlying neurobiology. Indeed, we do not claim that robustified ANN models discover all low-norm wormholes or that they never point to wormholes that do not actually exist. Notably, because we still see a residual gap in behavioral alignment (Figs 1c and 2b), other models, which are even more behaviorally aligned, must still exist.”*
**Model-human alignment evaluation criterion focusing on final classification**
We agree with the reviewer that finding a good model of biological vision has multiple facets to it, including internal representation. We note that while our disruption modulation (DM) experiments were indeed simply measures of error rate (accuracy), our Targeted Modulation (TM) tests were tests of specific types of errors, which is a more stringent test in the spirit of the reviewer’s question. Internal (feature) similarities of these models and primate internal neural representations have been reported prior to this study, and were key motivation points for us to focus the present paper on testing the alignment in the downstream behavioral output, which is conceived as supported by those internal responses in both the brain and the models. We refer to those prior findings about the similarities in representation between the biological visual system and the artificial counterpart in the Introduction in lines 39-46.
**Using robustified ANNs to search for human perceptual wormholes**
We thank the reviewer for raising this concern, which understandably can be elusive. If correctly parsed, this comment has to do with better explaining what we mean by “wormholes”. The “wormholes” is a metaphor for our empirical finding that, from every tested starting “location” (i.e., starting image), there exists a “nearby” (i.e., low-norm) perturbation step that will induce drastic changes in human behavioral categorization report. The metaphor is intended to conceptually express the idea that when a human subject is in one perceptual “universe”, s/he can “walk” a very short distance in pixel space to then be in another perceptual “universe.” We acknowledge that this metaphor may not be perfect, and may be missed, but we are inclined to believe this helps convey the conceptual essence of our findings in a faithful way. We will add a sentence in Introduction to clarify this.
We agree with the reviewer’s related comment that we have not yet fully characterized the statistical distribution of such human perceptual wormholes at any given pixel distance, but we believe the work we presented here – demonstrating that some computational models can reliably point to and reveal those human perceptual phenomena – is a necessary precursor for that follow on work. We have added a sentence in the revised manuscript (under “Limitations”) to express this. | Summary: The paper under review brings to light a crucial and fascinating issue in deep learning: adversarial attacks. The authors contend that a neural network trained adversarially, which is logically more robust against attacks, produces perturbed images that humans perceive as differing from the original when it is finally attacked. They present evidence that humans perceive images generated for both non-targeted and targeted attacks similar to how the neural network does. The concept of comparing human perception with the performance of artificial neural networks holds merit. However, it's necessary to critique the exaggeration of novelty in the authors' findings. They find that an adversarially trained network, which necessitates higher pixel perturbations for successful attacks due to its robustness, aligns well with human perception. Further, the authors' claim of their 'novel finding' that "human percepts are massively disrupted by low-norm image perturbations discovered by robustified ANNs”. The purported surprise and novelty is questionable as it simply reiterates the foundational goal of adversarial training and warrants scrutiny. Firstly, the paper does not offer a clear foundation as to who assumes that humans are necessarily resilient all perturbation with low norm, casting doubt on the originality of the claim. 
Secondly, the authors themselves acknowledge (Figure 1) that object classes are intricately entangled in pixel space, hinting that in high dimensions small pixel-space deviations can be found that cause perceptual shifts to other classes. Therefore, it's counter-intuitive to assert that such a phenomenon is a surprising novelty. Moreover, the paper employs a pixel budget of 30 to define 'small perturbations', but it falls short of explaining why this particular value is low (average and maximum distances in the ImageNet dataset's high-dimensional space are 130 and 338 pixels, if these distances are to be compared to the pixel budget for adversarial generation). To conclude, despite reservations regarding the claimed novelty, the paper does contribute valuable insights. Its methodology is solid, and its attention to human perceptual measurements is commendable. The authors carry out a diligent study quantifying human perception with respect to adversarial attacks, the results of which could spark important discussions in the field of adversarial robustness. However, the paper lacks its claimed novelty.
Strengths: Quantification of human perception relating it to adversarially trained neural networks.
Weaknesses: - Lacks novelty, and the main results are expected, given that the whole purpose of training neural networks with adversarial attacks is to make them robust and more aligned with human perception
- Not clear why in Figure 1c1 there is no shift of the red curve to the left compared to the blue one, which is what one would expect from a robustified neural net. The authors show this expected shift in Figure 2b, where the blue curve is clearly shifted to the left.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Explain better the value of a pixel budget perturbation (i.e. 30). For example you can compare to contrast of the image to provide a more intuitive meaning of the level of perturbation needed to change classes for different networks.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reviewing our work, for their positive comments about our behavioral experimental methods, and for their critical analysis of our empirical findings. We agree that the novelty and significance of our findings are dependent on 1) the prior beliefs in the field on perceptual sensitivity in humans, 2) the extent to which our results are straightforward consequences or restatements of adversarial training, and 3) whether any natural image may, with high probability, be at a distance of < 30 from other images of other categories in pixel space (which would make it wholly unsurprising that perceptual shifts via perturbations of norm < 30 are possible).
We provide our clarifications and justification on each of these points below.
**Who assumes that humans are necessarily resilient to all perturbations of low norm?**
We contend that human vision is widely believed to be robust to “ultra low norm” perturbations (l2 norm < 3); e.g. [Wichman and Geirhos, 2023](https://arxiv.org/pdf/2305.17023.pdf) and [Dujmovic et al. 2020](https://elifesciences.org/articles/55978#s3).
However, we should have been careful to not make the logical leap that this implies that the field also widely believes humans are invariant to merely “low norm” perturbations (which we define as norms < 30; see our global response for justification), where we found evidence of human perceptual sensitivity. We implicitly make this leap throughout our paper (e.g. language in line 30; line 323), and intend to be more precise in a future revision. Indeed, it would be more accurate for us to describe our findings not as challenging a “prevailing assumption”, but rather as resolving wide uncertainty and an absence of evidence around the question of whether and to what extent humans are sensitive in this < 30 “low norm” regime. We are unaware of any other prior work systematically establishing the perturbative sensitivity of human perception in this norm regime (and would welcome any key references in this regard).
We thank the reviewer for challenging us on this point, which will help us more accurately contextualize our findings relative to existing beliefs.
**Is alignment with human perceptual sensitivity to “small” pixel perturbations a restatement of adversarial training?**
The adversarial training (AT) objective is to make a system’s behavior robust to pixel perturbations less than or equal to a certain norm (which is determined by a hyperparameter). This objective could be aligned with the objective of driving similarity to human perceptual sensitivity, to the extent that humans are **also** robust to perturbations below that hyperparameter-specified norm.
However, our results focus on the fact that we found correspondences in sensitivity between AT-trained guide models and human perception at perturbation budgets **outside** of the budget range used for AT. This result could not be a straightforward consequence of the AT objective, which provides no explicit constraints on perturbations greater than 3 (for the primary, epsilon=3 robust model we considered). Indeed, the degree of alignment between humans and models had not been shown or systematically tested and analyzed prior to this study.
**To what extent are images from different object classes “close” to each other in pixel space?**
As shown in Rebuttal Panel A, and as the reviewer mentioned in their answer, natural images are estimated to be typically farther than what we termed “low norm” (median pairwise l2 distance ~130; 99.9% quantile range [69, 268], min ~45; max ~306; from Imagenet Restricted images).
As the reviewer correctly points out, if we had found that humans were perceptually sensitive from perturbations with norms of “typical” size, this would have been a trivial and unsurprising result (in that humans are obviously perceptually sensitive to different natural images).
However, we found humans were perceptually sensitive to perturbations with norms as low as ~10. This is roughly one order of magnitude lower than the trivial result.
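For intuition, the scale of these pairwise distances can be reproduced with a toy computation; the 224x224x3 resolution and [0, 1] pixel range below are illustrative assumptions, not the paper's exact preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for natural images: 224x224x3 arrays with pixel values in [0, 1].
images = rng.random((10, 224, 224, 3))

def l2_distance(a, b):
    """Euclidean (l2) distance between two images, flattened to vectors."""
    return float(np.linalg.norm((a - b).ravel()))

# All pairwise distances among the toy images.
dists = [l2_distance(images[i], images[j])
         for i in range(len(images)) for j in range(i + 1, len(images))]

print(f"median pairwise distance: {np.median(dists):.1f}")         # ~158 for uniform noise
print(f"largest possible distance: {np.sqrt(224 * 224 * 3):.1f}")  # ~388 for [0, 1] pixels
```

Uniform-noise "images" concentrate near a single distance in high dimensions; real natural images spread over the much wider range reported above, with perturbation norms of ~10-30 still far below even the closest observed natural pair.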
**Our response to: “Not clear why in Figure 1c1 there is not shift of the red curve to the left compared to the blue one which is what one would expect from robustified neural net. The authors show this expected shift in Figure 2b where the blue is clearly shift to the left.”**
Thank you for noticing this. Some context: you are absolutely correct that a robustified version of a model should be more invariant to perturbations, namely the red curve should be to the right of the blue curve (i.e. requires a higher pixel budget to produce the same level of disruption). We indeed replicate that previously reported result in Fig 2a (left panel). However, please note that this rightward shift is only guaranteed in the situation in which the model is used to design attacks against itself (aka “white box”). The colored curves in Fig 1c1 are the results of “gray box” attacks. Specifically, these are tests of attacks designed by one member of the model family and tested on another member of that same model family. We plot these against the human data in Figure 1 as they are the proper way to make a quantitative comparison to the behavior of human visual neural networks (as we do not have white box access to those networks), and we explain this in the Overview. For gray box comparisons, the rightward shift of the red curve relative to the blue curve is, unlike the white-box situation, not guaranteed. Indeed, while not the primary focus of our work, it is interesting that we found that robust models and vanilla models had similar gray box sensitivities (e.g. as shown in Figure 1 and Fig 2b; left panel). We will add a clarification to the caption of Fig 1 to indicate that these are gray-box results.
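To make the white-box vs. gray-box distinction concrete, here is a toy sketch (not the paper's attack pipeline) in which an L2-bounded perturbation is designed against one linear "guide" classifier and then evaluated both on its designer (white box) and on a related but distinct sibling model (gray box); the models and budget are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(W, x):
    """Class prediction of a linear classifier with weight matrix W."""
    return int(np.argmax(W @ x))

def l2_attack(W, x, y_true, eps):
    """One-step L2-bounded attack on a linear classifier: step along the
    direction that raises the runner-up class score relative to the true
    class, scaled to the pixel budget eps (a minimal stand-in for
    iterative attacks such as PGD)."""
    scores = W @ x
    if predict(W, x) == y_true:
        runner_up = int(np.argsort(scores)[-2])   # best competing class
    else:
        runner_up = int(np.argmax(scores))
    grad = W[runner_up] - W[y_true]
    delta = eps * grad / (np.linalg.norm(grad) + 1e-12)
    return x + delta

d, k = 50, 3
W_guide = rng.normal(size=(k, d))                  # model used to DESIGN the attack
W_other = W_guide + 0.3 * rng.normal(size=(k, d))  # related but distinct model

x = rng.normal(size=d)
y = predict(W_guide, x)
x_adv = l2_attack(W_guide, x, y, eps=3.0)

print("white-box flip:", predict(W_guide, x_adv) != y)  # attack vs. its own designer
print("gray-box flip: ", predict(W_other, x_adv) != y)  # attack transferred to a sibling
```

The white-box flip is favored by construction, while the gray-box flip depends on how much the sibling model shares with the guide, which is exactly why the gray-box curves need not shift the way the white-box ones do.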
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' attempt to refine and temper their conclusions. Nevertheless, I continue to assert the significance of discussing the distances between natural images more extensively in the paper. Consider this: if a random search across millions of natural images reveals even one image whose distance from another image of a different class falls within tens of pixels (in Euclidean distance), then even if the authors' method can synthesize such images, the outcome is not particularly surprising. The authors indicate (median pairwise l2 distance ~130; 99.9% quantile range [69, 268], min ~45; max ~306; from ImageNet Restricted images), with a minimum of ~45, which closely aligns with their values. This essentially reinforces my initial argument. Therefore, if one can identify even a single image from millions that has a tens-of-pixels distance, and their method yields a result within the same magnitude, then this should be clarified and quantified. The paper should be written with this emphasis in mind.
---
Reply to Comment 1.1.1:
Comment: We agree with the reviewer's intuition that wormholes may be a naturally occurring phenomenon, even amongst natural images. Finding a natural image that falls within the 30 pixel budget and disrupts the category might be possible, especially for carefully chosen start images.
However, the number of those candidate images is unfathomably large (well beyond millions).
Our point is that there exist models that allow us to find perceptual wormholes very efficiently near arbitrary start images. This is not to claim that "natural wormholes", as given by a search over an infinite natural image database, are impossible.
However, we don't believe such an efficient search has been demonstrated before, nor has one been shown to approach budget-limited disruption rates like those shown in this paper. We are happy to properly address such relevant past literature if it exists.
Rebuttal: We thank the reviewers for finding our work interesting and for their support. We appreciate the constructive feedback, and we agree that those suggestions would clarify the contributions of this work.
**Interpreting the low-pixel budget regime (< 30)**
All reviewers asked for clarification about the “low-norm” pixel budget regime that we focused on in the paper. We agree that the original version of the paper did not provide full clarity on this issue.
Our definition of the “low-norm” perturbation pixel budget regime (l2 norm of < 30) was/is primarily based on empirical evidence that we provided in Figure 1 in our Supplementary Material. We show this empirical evidence in Rebuttal Panel A, and it illustrates: 1) that human reports are virtually unaffected by random noise image perturbations or by image perturbations guided by a vanilla ANN, when those perturbations are restricted to pixel budgets less than 30, and 2) that the typical (median) pairwise distance in natural images is ~130, with the *closest observed* pair of natural images (out of n=101,025 pairs) being ~45 apart. These empirical observations support the notion that perturbations of norm less than 30 can be reasonably referred to as “low-norm.”
As shown in Rebuttal Panel A, we define our perturbation budget regimes as:
- __“Ultra low-norm”__: < 3, a typical range studied in adversarial literature (e.g., for performing adversarial training)
- __“Low-norm”__: < 30, a range in which human categorization reports are insensitive to attacks by Gaussian noise perturbations or vanilla ANN attacks.
- __“Typical-norm”__ : 69-268, the range that we estimated contains 99.9% of pairwise distances for two independently sampled natural images.
In our revision, we will strictly adhere to these terms, and as suggested, we will refrain from calling the perturbations in our experiments “tiny” and simply call them “low-norm”.
To better illustrate the state of affairs prior to this study, we now also include in Rebuttal Panel B examples of pixel-norm 30 perturbations (alongside others) for three baseline image perturbation generation approaches.
As correctly pointed out by several reviewers, the perturbations discovered by robustified models – despite being low-norm – are indeed semantically meaningful, which is another way of saying that they are likely to produce specific changes in human perceptual report, and quantifying that was the focus of our study. The primary novelty is the observation that, from apparently any starting image, robust guide models can be used to induce strong shifts in human perceptual reports in such a low-norm budget regime – otherwise a perceptually stable regime under naive baseline approaches. This finding is novel in that it was not a priori guaranteed or otherwise known (see below).
**Distinguishing between pixel budget for adversarial training and pixel budget for designing perturbations experiments to test on humans and (fixed) models**
To further clarify the distinction between these two pixel budget regimes and their relationship to our results we now add in the revised manuscript:
In Overview:
*“Notably, we denote this \textit{adversarial training budget} by $\varepsilon$, to be distinguished from the perturbation budget, $\epsilon$, which applies to any image-perturbation method.”*
Please note that the adversarial training procedure used to train the robust guide models does not a priori guarantee that the resultant representations of robust models will be aligned with human perceptual semantics – rather, that procedure attempts to establish *complete insensitivity* to all pixel perturbations below a certain (ultra low) norm (determined by the epsilon-training hyperparameter), without any constraint attempting to explicitly match measurements of human perception. Thus, the fact that these guide models automatically discover human-aligned perturbations in the (untrained) low-norm regime was not guaranteed and is, in our view, surprising and novel.
To clarify this, we now add in Results:
*“Notably, we report these strong disruptions in human category percepts when perturbing at budgets well above the budget used for model robustification, yet still well-below the typical pairwise natural image distance regime (see Supplementary Material, Fig~1). This result was not a priori guaranteed.”*
We also add a visual to explain these regimes to the global response figure and to Figs 2,3.
**Details on model performance**
We thank the reviewers for this feedback.
Given that these are ImageNet-trained models, they were evaluated on the 1000-way task on both natural and adversarial images (and at various evaluation budgets for the adversarial attacks). Their Top-1 accuracy on natural images ranged from 45-76%, with the vanilla (least robustified) model highest; on adversarial images it ranged from 10-53% (when evaluating at the same budget used for training), with the least robustified model (excluding the vanilla one) highest. As shown in prior work, robustification tends to reduce natural image accuracy (Tsipras et al. 2019, Zhang et al. 2019). Further note that robustifying at a higher training budget entails evaluating at the same (higher) budget, resulting in lower overall validation accuracy on adversarial images for models trained at higher robustification levels.
For reference we include in our response figures summarizing their performance and learning curves.
**Fig2 typos**
We thank the reviewers (StMT, ibQ6) for spotting Fig 2 typos, which are now all fixed.
**Public availability of behavioral data**:
If this work is accepted, we will release an anonymized copy of the behavioral data used in this study, following the regulations for privacy, anonymity, and storage in our protocol, which was approved by our institutional board. We will update the Methods section with a reference to this protocol, and a link to these data.
Pdf: /pdf/e0d8572a29c403561007178a11bd5808b8d3f8b3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of adversarial images in artificial and biological visual systems. On standard ("vanilla") image models (e.g. ResNet-50), adversarial perturbations that successfully disrupt a vanilla model are quite small and do not significantly alter human perception. This paper generates adversarial perturbations on adversarially-trained ("robustified") image models, tests whether those perturbations more effectively disrupt human perception, and finds that they do (especially as the perturbation norm grows larger). The paper finds that it is possible to perturb either in an arbitrary direction or in a targeted direction towards another class. It appears that you need to adversarially train the network used to generate the perturbation with a strong perturbation strength in order to see these effects (networks trained with no or weak adversarial perturbations did not yield perturbations that generalized to humans).
Strengths: - Experiments are clear and straight-forward
- Effect size is significant, although the perturbation size required is large.
- Experiments contain nice controls and baselines (comparing perturbations from four different adversarially trained networks trained with different perturbation strengths; comparing targeted perturbations to interpolation; hybrid-targeted modulation, etc.)
Weaknesses: - My main concern with the paper has to do with the interpretation. A perturbation size of 30 (for normalized images) seems quite large, and the example images do not (in my estimation) seem "close" to the original image. The original definition of adversarial images was images that were quite close in pixel space (imperceptibly so); by contrast, the perturbations needed to modulate human perception are quite large. I don't think that in and of itself is an issue; it's still interesting that the large perturbations from an adversarially-trained network are sufficient to modulate human perception (especially in a targeted way). What I disagree with is the claim that these perturbations are "tiny" (e.g. first sentence of the abstract, Fig 1 caption, etc). Throughout the text, the paper suggests that these are really small perturbations, which doesn't feel correct to me at a norm of 30.
- Instead, my interpretation of the paper would be something like: "Perturbations from adversarially trained networks are semantically meaningful". If you look at some of the example perturbations, they are intuitive (unlike adversarial perturbations on vanilla networks, which look like noise). For example, in Fig 1c2, to perturb the image of a frog, the perturbation simply replaces the frog's head with that of a primate, and replaces the frog's skin texture with fur. By keeping the same background color (black) and pose of the animal, the perturbation minimizes l2 norm in pixel space while still being semantically meaningful. In some sense, the perturbation has become more _efficient_. Rather than a random noise pattern fooling the network, the perturbation needs to be more careful in how it modifies the target image. It is no surprise then that these more effective perturbations also modify human perception. I don't think this interpretation is any less interesting, but to me it feels like a more accurate description of the results in the paper.
- I am curious if the authors looked at similarities or differences in the representations across the four trained networks. If one expects the robustified networks to be more similar to biological representations, this could make predictions for experimental visual neuroscientists to test.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I would appreciate more details on the four adversarially trained networks. What is their performance on restricted imagenet? What do the training curves look like?
- I would also like to get some better intuition for what a pixel norm of 30 means in this context.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Interpretation of the low budget regime**
We thank the reviewer for this feedback. Please refer to our global response on this point.
**I am curious if the authors looked at similarities or differences in the representations across the four trained networks. If one expects the robustified networks to be more similar to biological representations, this could make predictions for experimental visual neuroscientists to test.**
We thank the reviewer for this suggestion. Some connections between adversarial robustness and increased representation alignment between models and primate neural activity have been established in previous works ([Dapello et al. NeurIPS’20](https://proceedings.neurips.cc/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html), [Guo et al. ICLR’22](https://proceedings.mlr.press/v162/guo22d/guo22d.pdf), [Dapello et al. ICLR’23](https://openreview.net/pdf?id=SMYdcXjJh1q)), as we discuss in lines 36-46. Further evaluating the correspondences between the model(s) we study in this work and biological representations is an interesting line of future work.
**I would appreciate more details on the four adversarially trained networks. What is their performance on restricted imagenet? What do the training curves look like?**
Please refer to our global response on the topic.
**I would also like to get some better intuition for what a pixel norm of 30 means in this context.**
Please refer to our global response on the pixel norm value. Notably, most of the studied low pixel budget range (i.e., < 30) is beyond the robustification budget range (1.0, 3.0, or 10.0). In this context, please also refer to our global response on distinguishing these two budgets (the adversarial training pixel budget vs. the budget used in our behavior modulation experiments on the pretrained models).
To better illustrate the meaning of a 30 pixel budget, we show examples of pixel-norm 30 perturbations (alongside others) in Rebuttal Panel B for a variety of guide approaches.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I am still curious what the authors' thoughts are on my interpretation of the paper ("perturbations from adversarially trained networks are semantically meaningful"). I'm not suggesting you make drastic changes to the paper, but curious if you agree with my interpretation.
After reading the other reviews and the rebuttal, I have decided to keep my original score.
---
Reply to Comment 1.1.1:
Comment: We tried to adhere to the signal measured, in this case the category report. But if we assume that "semantically meaningful" is defined as that which causes a category report change, then yes, we agree with this interpretation. | null | null | null | null | null | null |
Diffusion Model for Graph Inverse Problems: Towards Effective Source Localization on Complex Networks | Accept (poster) | Summary: This work discusses the challenges associated with tracing the origin and path of information diffusion in complex networks, such as those involved in epidemics or rumors. To address these, the authors propose a probabilistic model, DDMSL (Discrete Diffusion Model for Source Localization), which utilizes Markov chains to model information diffusion and a reversible residual network to localize the source and reconstruct the paths of information diffusion, with its efficacy supported by extensive tests on five real-world datasets.
Strengths: The general topic is interesting, and the paper studies the problem of source localization under two information propagation patterns: SI and SIR. The proposed methods are sound with theoretical guarantees. The overall writing is easy to follow, but the use of notation is sometimes less formal. The idea of simultaneously handling the problems of source localization and reconstructing diffusion paths seems to be interesting, but the motivation is less convincing. The experiments are strong, and the proposed method is better than most existing baselines.
Weaknesses: There are three main questions of this work. Please refer to the questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The motivation for recovering the diffusion path is weak. I feel like source localization only needs to predict the correct set of seed nodes in order to identify the culprit, so why would it be necessary to recover the diffusion paths from the already given diffusion status/observation? I am still having a hard time convincing myself with the provided motivation.
2. The authors claim "existing inference results are often deterministic and fail to quantify the uncertainty of source localization", but it seems like the proposed method's results are also deterministic from Figure 2. Does the proposed method output a range of possible source nodes with [S, I, R] labels?
3. In terms of the experiment section, the biggest problem is the efficiency issue of the proposed framework. With the step-wise diffusion model to recover the overall diffusion path, the runtime of the proposed method and the complexity analysis should be demonstrated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I am not seeing any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your valuable suggestion. We have incorporated the time complexity analysis of the DDMSL algorithm and will provide comprehensive answers to all your inquiries.
(1)"The motivation for recovering the diffusion path"
Thank you for your question. In our perspective, reconstructing communication paths plays a pivotal role in controlling information dissemination. This process involves speculating on the distribution of node states throughout the diffusion process.
First, the reconstruction of the diffusion path offers valuable support in managing information spread. Given the observed infection status graph $X_T$, when $T$ is small, source identification is crucial: the diffusion scale is still limited, so timely identification allows effective control. However, as $T$ grows larger, information has already propagated to various parts of the network, making it challenging to effectively control its dissemination even with source identification. In such cases, the focus shifts to previously infected nodes or those with a high infection probability. For instance, in the context of the global Covid-19 pandemic, a common control strategy following a major outbreak involves identifying infected individuals and their close contacts at $T-1$, $T-2$, ..., $T-s$ based on observed infections in $X_T$, and subsequently isolating them. This strategy has proven highly effective in controlling epidemic outbreaks, despite being time-consuming and labor-intensive. Our reconstruction of node states at $T-1$, $T-2$, ..., $T-s$ can provide direct assistance in this setting.
Additionally, reconstructing the diffusion path can deepen our understanding of information diffusion processes in complex networks and offer insights applicable to other fields. DDMSL generates possible diffusion paths from observed graphs, guiding network control for specific information. It has applications in disease and rumor control, as well as influence maximization. The research on reconstructing diffusion pathways holds immense practical value, and we are optimistic about its future prospects.
(2)"Does DDMSL output a range of possible source nodes with [S,I,R] labels?"
The answer to this question is affirmative. DDMSL can generate a series of possible source nodes, each accompanied by probability labels representing the three states: S, I, and R.
In practice, we can only observe $X_t$, which corresponds to the infection data. Therefore, we must infer $X_{t-1}$ from $X_t$ iteratively until we obtain the inferred result of $X_0$.
Various methods exist for inferring $X_{t-1}$ from $X_t$. For example, DDIM predicts the noise between $X_t$ and $X_{t-1}$ and subtracts it from $X_t$ to obtain $X_{t-1}$. In our case, we use a discrete denoising diffusion model (D3PM), which utilizes a neural network model to predict an intermediate $X_0^{'}$ (different from $X_0$) based on $X_t$. Then, $X_0^{'}$ is substituted into equation 8 to derive $X_{t-1}$. Appendix D details the DDMSL model's process.
It is important to note that during the training and testing of DDMSL, Gumbel Softmax is utilized to sample $X_{t-1}$ after obtaining its distribution (shown in lines 10 and 7 of Algorithm1 and Algorithm2 in Appendix D, respectively). Therefore, the inferred results of DDMSL at each time step are sampled from a probability distribution, and the state distribution of the next time step depends on the inferred results from the previous time step. As a result, DDMSL can generate a series of diverse source nodes and diffusion paths that conform to different diffusion patterns.
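To make this sampling step concrete, here is a toy sketch of Gumbel-max sampling of per-node [S, I, R] states from categorical distributions (the hard-sample limit of Gumbel softmax); the distributions and node count below are hypothetical, not DDMSL outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
STATES = ["S", "I", "R"]

def gumbel_sample(probs, rng):
    """Draw one categorical sample per row of `probs` via the Gumbel-max
    trick: adding i.i.d. Gumbel noise to log-probabilities and taking the
    argmax is equivalent to sampling from the categorical distribution."""
    gumbel = -np.log(-np.log(rng.random(probs.shape)))
    return np.argmax(np.log(probs + 1e-12) + gumbel, axis=-1)

# Hypothetical per-node distributions over [S, I, R] at some reverse step.
node_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.20, 0.20, 0.60],
    [0.90, 0.05, 0.05],
    [0.30, 0.40, 0.30],
])

sampled = gumbel_sample(node_probs, rng)
print([STATES[s] for s in sampled])   # one discrete state per node
```

Because each reverse step samples from a distribution rather than taking a deterministic argmax, repeated runs yield diverse source sets and diffusion paths, as described above.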
(3)"the efficiency issue of the proposed framework."
DDMSL uses a residual graph convolutional network as its backbone. For a graph with N nodes and E edges, the complexity of the network during inference from time t=T to t=0 is roughly $O(KT|E|)$(K is the number of GCN layers).
During experiments, we found that the generation of $Q_t$ has the most significant impact on time complexity. $Q_t$ is derived by performing $m$ SIR Monte Carlo simulations, resulting in a time complexity of approximately $O(mTN^2)$ ($m$ is the number of Monte Carlo simulations). This calculation can be time-consuming, especially for large-scale graphs.
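As a concrete illustration of this Monte Carlo estimate, the following toy sketch averages $m$ independent SIR simulations on a small ring graph (the graph, infection/recovery rates, and horizon are illustrative assumptions, not the paper's settings):

```python
import random

def sir_monte_carlo(adj, sources, beta, gamma, steps, m, seed=0):
    """Estimate the expected fraction of infected nodes at each time step
    by averaging m independent discrete-time SIR simulations."""
    rng = random.Random(seed)
    n = len(adj)
    infected_frac = [0.0] * (steps + 1)
    for _ in range(m):
        state = ["S"] * n
        for s in sources:
            state[s] = "I"
        for t in range(steps + 1):
            infected_frac[t] += state.count("I") / n
            new_state = state[:]
            for u in range(n):
                if state[u] == "I":
                    for v in adj[u]:                  # try to infect neighbors
                        if state[v] == "S" and rng.random() < beta:
                            new_state[v] = "I"
                    if rng.random() < gamma:          # recover
                        new_state[u] = "R"
            state = new_state
    return [f / m for f in infected_frac]

# Toy ring graph of 20 nodes, one source node.
n = 20
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
curve = sir_monte_carlo(adj, sources=[0], beta=0.5, gamma=0.1, steps=10, m=500)
print([round(f, 3) for f in curve])
```

Each run touches every infected node's neighbor list at every step, which is the source of the $O(mTN^2)$-style cost on dense graphs and motivates the parallelization strategies below.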
To reduce inference time in DDMSL, we employ the following strategies:
1) To optimize the Monte Carlo computation of $Q_t$, our code supports parallel execution through multi-threading. By utilizing CUDA operators, we can reduce inference time by a factor of 20-30 (the GPU version is still undergoing testing). We also provide, in Table 3, a comparison of the training and testing times of DDMSL using 12 threads alongside other benchmark algorithms (due to time constraints, we used default settings for the baselines), where DDMSL demonstrates acceptable inference speed. Please refer to the attached PDF.
2) An alternative approach involves leveraging existing deep learning models (see Appendix D) to fit $Q_t$, requiring only a portion of the generated data for accurate predictions. This substitution can reduce testing and training times by roughly one-third.
(4)"Limitations"
We believe that the negative impact of our research on society is limited. One potential concern regarding DDMSL is its impact on privacy when reasoning about source nodes or diffusion paths. This issue may lead to the unintended exposure of sensitive information associated with certain nodes. For instance, in the context of an AIDS transmission network, patients who wish to keep their AIDS diagnosis confidential could potentially be identified. We acknowledge this concern and will address it in the upcoming revised version of our paper.
Once again, we express gratitude for your thorough and meticulous review, as well as for the valuable suggestions you have provided for our paper. We hope that our explanation adequately addresses your inquiries. | Summary: This paper proposes a discrete denoising diffusion model for source localization called DDMSL. DDMSL can simultaneously locate information sources and restore information propagation paths. Experiments results on real-world datasets demonstrate DDMSL’s effectiveness.
Strengths: 1. This is the first study to solve information spread problems with a denoising diffusion model.
2. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The motivation for employing a denoising diffusion model is not clear. Why is this generative model used for the source localization problem? Compared to other source localization models, what are the advantages of the denoising diffusion model?
2. I do not agree that the proposed method is a denoising diffusion model. The final distribution of the forward process and the initial distribution of the reverse process are both random because they are simulated by the SIR model. It is more like a sequence model trained by the SIR data.
3. The motivation behind the model design is not clear. For example, why are SN and BN employed in Eq. 16?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The phrase “graph inverse problem” or “inverse problem” sounds peculiar, perhaps the diffusion inverse problem or the spread inverse problem would be more comprehensible.
2. The formulation of the proposed denoising diffusion model is not clear. In Eqs. (4) and (5), why is $x^i_t$ included in $q(x^i_t | x^i_{t-1})$ and $q(x^i_t | x^i_0)$? If we already know $x^i_t$, is it necessary to have $x^i_{t-1}$ or $x^i_0$ to calculate $x^i_t$?
3. How to obtain the distribution $Cat(x^i_t; p)$?
4. What are reaction diffusion models? Maybe they should be included in the Section Related Work?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. The proposed model requires simulation of SIR from t_0 to t_T to obtain the training data, whereas diffusion models can be trained at any step by sampling Gaussian noise at an indicated time step. It seems that the model proposed in this paper needs more training time and is difficult to apply to large-scale graphs, such as OGB datasets. It is also necessary to compare time complexity with baseline methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your valuable suggestion. Taking into account your feedback, we have incorporated a comprehensive time complexity evaluation for DDMSL and will ensure that all your questions are answered thoroughly.
(1) "What are the motivations and advantages of using diffusion models to handle inverse problems?"
Please refer to our global response.
(2) "The relationship between DDMSL and denoising diffusion model."
As explained in the global response, we view the SIR model's role in the information diffusion process as noise that progressively obscures the information contained in $X_0$: it becomes hard to disentangle the original $X_0$ from the final data observed by DDMSL. In contrast to conventional denoising diffusion models such as DDIM, which operate on a sequence of artificially added Gaussian noise, our model processes a sequence of SIR data in which the noise arises naturally. This fundamental distinction separates the noise in SIR sequence data from the noise artificially introduced in DDIM.
(3) "The motivation behind the model design is not clear. "
The design of $nn_{\theta}$ stems from our observation that the SIR propagation process resembles the message-passing process depicted in Equation 14, enabling simplification to the form presented in Equation 15. Equation 16 gives a specific design case based on Equation 15. We employ SN(·) and BN(·) to constrain the Lipschitz coefficient of the model, as discussed in the proof of Lemma 3.1 (page 6, line 209), with a detailed proof provided in the appendix. We apologize for any inconvenience caused to readers and will add more comprehensive explanations of this section in the next revised version.
(4) The use of the phrase “graph inverse problem”
Diffusion occurs across various spatial and entity relationships; our research focuses on graph-based analysis, so we chose the name to align with that setting. We highly appreciate your input and will take your suggestion into consideration.
(5) "The formulation of the proposed denoising diffusion model is not clear. "
Equations 4 and 5 are crucial for modeling the information diffusion process in DDMSL. Eq.4 captures the relationship between node states at adjacent time steps, while Eq. 5 establishes the connection between $X_0$ and node states at any given time using a Markov chain derived from Eq. 4.
To calculate our target $q(X_0 | X_t)$, we need to obtain $q(X_t | X_0)$ and establish the relationship between $X_t$ and $X_0$. However, we can only observe $X_t$ in reality, which corresponds to the infection data. Therefore, we must infer $X_{t-1}$ from $X_t$ iteratively until we obtain the inferred result of $X_0$.
Various methods exist for inferring $X_{t-1}$ from $X_t$. For example, DDIM predicts the noise between $X_t$ and $X_{t-1}$ and subtracts it from $X_t$ to obtain $X_{t-1}$. In our case, we use a discrete denoising diffusion model (D3PM), which utilizes a neural network model to predict an intermediate $X_0^{'}$ (different from $X_0$) based on $X_t$. Then, $X_0^{'}$ is substituted into equation 8 to derive $X_{t-1}$. Appendix D details the DDMSL model's process.
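For concreteness, here is a minimal sketch of one such reverse step, assuming a D3PM-style posterior of the form $q(x_{t-1}\mid x_t, x_0) \propto (Q_t\, x_t) \odot (x_0^{\top} \bar{Q}_{t-1})$, marginalized over the network's predicted $X_0^{'}$. The matrices and the predicted distribution below are toy values, not those of DDMSL.

```python
import numpy as np

def reverse_step(x_t_onehot, x0_pred, Q_t, Qbar_tm1):
    """Return a categorical distribution over x_{t-1} from a D3PM-style posterior.

    left[k]  = Q_t[k, j]   : likelihood of observing x_t = j given x_{t-1} = k
    right[k] = sum_i x0_pred[i] * Qbar_{t-1}[i, k] : prior on x_{t-1} from x_0
    """
    left = Q_t @ x_t_onehot
    right = x0_pred @ Qbar_tm1
    unnorm = left * right
    return unnorm / unnorm.sum()

# Toy 3-state (S, I, R) transition matrices; rows index the current state.
Q_t = np.array([[0.4, 0.6, 0.0],
                [0.0, 0.7, 0.3],
                [0.0, 0.0, 1.0]])
Qbar_tm1 = Q_t @ Q_t                  # toy cumulative product up to t-1
x_t = np.array([0.0, 1.0, 0.0])       # observed: node is infected at time t
x0_pred = np.array([0.8, 0.2, 0.0])   # network's predicted distribution of X_0'
post = reverse_step(x_t, x0_pred, Q_t, Qbar_tm1)
print(post)                           # ≈ [0.149, 0.851, 0.]
```

Iterating this step from $t=T$ down to $t=1$ mirrors the inference loop sketched in the rebuttal: predict $X_0^{'}$, form the posterior, sample or take the mode for $X_{t-1}$, and repeat.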
(6)"How to obtain the distribution $Cat(x_t^{i};p)$?"
Let's illustrate the calculation process of Equation (4) with an example. At time $t=s$, node A has a state represented by $x_s=[1,0,0]$. The state transition matrix $Q_{s+1}$ at time $t=s+1$, computed using equations 2 and 3, is assumed to be $Q_{s+1}=[0.4,0.6,0; 0,0.7,0.3; 0,0,1]$. Thus, the state distribution of node A at time $t=s+1$ is $x_{s+1} = x_sQ_{s+1} = [0.4,0.6,0]$.
This implies that node A is in state S at time $t=s$, and the probability of it being in state S at time $t=s+1$ is:
$q(x_{s+1}=S|x_s=S)=x_sQ_{s+1}x_{s+1}^T=[0.4,0.6,0][1,0,0]^T=0.4$.
Similarly, the probability of node A being in state I is 0.6.
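The worked example above can be checked numerically; a minimal sketch (states ordered S, I, R, with the same toy matrix as in the rebuttal):

```python
import numpy as np

# State transition matrix at t = s+1; rows index the state at time s.
Q_s1 = np.array([
    [0.4, 0.6, 0.0],   # row S: stay S w.p. 0.4, become I w.p. 0.6
    [0.0, 0.7, 0.3],   # row I: stay I w.p. 0.7, recover w.p. 0.3
    [0.0, 0.0, 1.0],   # row R: absorbing
])

x_s = np.array([1.0, 0.0, 0.0])   # node A is in state S at t = s
x_s1 = x_s @ Q_s1                 # categorical distribution at t = s+1
print(x_s1)                       # ≈ [0.4, 0.6, 0.0]

e_S = np.array([1.0, 0.0, 0.0])   # one-hot vector for state S
p_stay_S = x_s1 @ e_S             # q(x_{s+1} = S | x_s = S)
print(p_stay_S)                   # ≈ 0.4
```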
(7)"What are reaction diffusion models?"
Our sincere apologies for any lack of clarity in explaining the reaction diffusion models. In the paper, "reaction diffusion models" refer to models like SIR, SI, etc., where nodes undergo reactions with neighboring nodes to update their states and influence others. We will enhance the introduction and related work sections to clarify these concepts.
(8)"Limitations"
DDMSL uses a residual graph convolutional network as its backbone. For a graph with $N$ nodes and $|E|$ edges, the complexity of the network during inference from time $t=T$ to $t=0$ is roughly $O(KT|E|)$, where $K$ is the number of GCN layers.
During experiments, we found that the generation of $Q_t$ has the most significant impact on time complexity. $Q_t$ is estimated by performing $m$ SIR Monte Carlo simulations, resulting in a time complexity of approximately $O(mTN^2)$, where $m$ is the number of Monte Carlo simulations. This calculation can be time-consuming, especially for large-scale graphs.
To reduce inference time in DDMSL, we employ the following strategies:
1) To speed up the Monte Carlo computation of $Q_t$, we optimized the code for parallel execution: it supports multi-threaded computation, and by utilizing CUDA operators we can reduce inference time by a factor of 20-30 (the GPU version is still undergoing testing). We also provide a comparison of the training and testing times of DDMSL using 12 threads alongside other benchmark algorithms (due to time constraints, we used default settings for the baseline algorithms) in Table 3, where DDMSL demonstrates acceptable inference speed. Please refer to the attached PDF.
2) An alternative approach involves leveraging existing deep learning models (see Appendix D) to fit $Q_t$, requiring only a portion of the generated data for accurate predictions. This substitution can roughly reduce testing and training times by one-third. | Summary: Information diffusion is common in various domains, such as social networks, the internet, and disease propagation. Obtaining the diffusion paths and localizing the source based on the final diffusion node states is beneficial for researchers to identify key transmission pathways during information dissemination and facilitate control over the propagation. This article proposes a framework for information diffusion called Discrete Diffusion Model for Source Localization (DDMSL) using the Susceptible-Infected-Recovered (SIR) model. The framework utilizes a discrete Markov Chain to characterize information diffusion and recover diffusion paths based on observed results while achieving source localization. For the neural network module in the framework, the authors design an inference model based on a residual network and rigorously prove its reversibility to justify the rationality of the model design. The experiments are performed on five real-world networks, where propagation data is generated using SIR and SI models. The experiment results demonstrate that, compared to the baseline, DDMSL performs better in source localization and diffusion path recovery across all datasets.
Strengths: - Interesting idea. This paper proposes a method to establish a correlation between the information propagation process and the denoising diffusion model, which is worth further research.
- Rigorous theoretical guarantees. This paper presents reliable theoretical proofs to explain the designs of the model, which are also proven effective in the experiment stage.
- Good performance on different networks (datasets). The experiment results demonstrate consistent performance across different networks and improvement over the baselines.
Weaknesses: - This paper needs to clarify why the diffusion model is being used for this task. Only the achievements of the diffusion model in the image inverse problems are mentioned. Personally, it seems that there is a significant difference between the inverse problems of images and the task addressed in this paper.
- In my view, the method proposed is not highly correlated to the so-called denoising diffusion model, as the initial input is not a particular noise, and the diffusion steps $T$ are set equal to the steps of information propagation.
- The paper does not show the model's generalization ability on real-world propagation datasets (only SIR or SI model as presented in the paper, no real-world datasets like *Digg* and *Memetracker*).
- Compared to previous works, the proposed method requires fine-grained diffusion history information, which may largely limit its application scenarios.
- Compared to previous works that do not explicitly simulate every diffusion step, such as SL-VAE, this method seems much less efficient, and no efficiency study is provided.
## After Rebuttal
The authors' response has addressed most of my concerns. I suggest authors discuss the following points in the revised versions.
1. How to decide the number of diffusion/denoising steps in the diffusion model, the same as the real-world forward diffusion process on networks? What if this number is extremely large?
2. Please provide generalization performance on other real-world large networks, and comparison with baseline methods.
3. Requiring fine-grained diffusion history information is still a large limitation. Can this forward diffusion history be inferred with parameter estimation?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My most concerning questions are mostly mentioned in the *Weaknesses* part, which are:
1. The reason why the proposed method utilizes the denoising diffusion model for this task.
2. The performance of the proposed method on real-world propagation datasets is not provided.
3. In IVGD (Wang et al.), the invertibility of the network is proven so that fixed-point iterations can be performed. I am concerned about how the strictly proven reversibility of the residual blocks contributes to the prediction of $X_0$. (In Figure 2, the input is $Y_T$, while in the text of Sec. 3.4, the input is $X_0$.)
4. Moreover, I would like to know whether the model has been tested on different graph topologies under a single training, because in most application scenarios the trained model must generalize to changing graphs. I wonder whether the results in the paper are from a model trained and tested on the same static network topology (dataset).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have illustrated the main limitation that the model requires prior knowledge about propagation models.
The provided source code cannot be run directly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. In response to your input, we have conducted experiments utilizing real-world diffusion datasets and DDMSL generalization experiments. The forthcoming section will provide comprehensive responses to each of your inquiries.
(1)"Why the proposed method utilizes the denoising diffusion model for this task?"
Response: Please refer to our global response.
(2)"Relationship between the proposed method and the denoising diffusion model."
Response: As mentioned in the global response, our approach uses a discrete denoising diffusion model of the kind widely employed in graph generation tasks [2,3]. Unlike models driven by Gaussian noise, ours handles noisy data whose stable limiting distribution results from applying the state transition matrices to the initial data. As illustrated in the paper, the SIR, SI, and other models can be effectively represented by state transition matrices, and since the noisy diffusion sequence arises naturally, there is no need to introduce additional noise to construct the noise sequence.
Furthermore, Theorem 3.1 guarantees the convergence of node states at every step, ensuring that the model works at any given time. Consequently, we set the number of time steps to $T$ in our experiments.
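As a toy numerical illustration of this convergence property: with R as an absorbing state, repeatedly applying a transition matrix drives any initial state distribution toward a stable limit. The fixed matrix below is an assumption for illustration; DDMSL uses time-varying matrices $Q_t$.

```python
import numpy as np

# Toy SIR transition matrix (states S, I, R); R is absorbing.
Q = np.array([[0.4, 0.6, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

x = np.array([1.0, 0.0, 0.0])   # start fully in state S
for _ in range(200):
    x = x @ Q                   # one forward diffusion step
print(x)                        # ≈ [0, 0, 1]: mass converges to the absorbing state
```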
(3) "Showing the model's generalization ability on real-world propagation datasets."
Response: Thank you for your suggestion. We tested DDMSL on real-world information dissemination datasets. However, the Digg dataset lacked propagation information flows due to recent updates, and the Memetracker dataset was too large to process in the limited time available. We therefore used alternative datasets: Twitter [4] (12,627 nodes, 309,631 edges) and Douban [4] (23,123 nodes, 348,280 edges). These datasets were processed following the SLVAE benchmark pipeline, after which we ran the source localization tests. Results can be found in Table 4 in the attached PDF.
(4) "How the reversibility of residual blocks, strictly proven, contributes to the prediction of $X_0$ ?"
Response: We will first explain the relationship between Section 3.4 and Figure 2, and then briefly introduce the process of DDSML and the roles of each module.
a. The relationship between Section 3.4 and Figure 2.
Figure 2 illustrates the reversible residual network used in DDMSL. This network plays the same role as the U-Net or Transformer backbone in an image diffusion model such as DDIM. Please refer to Appendix D for the detailed training and inference processes. The network takes the observed infection information ($Y_T$) as input and outputs the potential source nodes ($X'_0$) that might have caused $Y_T$.
Moreover, Section 3.4 focuses on constructing a neural network model capable of fitting the SIR inverse diffusion process. Equation 13 shows that, assuming $X_0$ represents the source nodes, we write $Y_T=P(X_0)$ for the generation of $Y_T$ from $X_0$ via the SIR or SI process. Naturally, $X_0$ is unknown while $Y_T$ is known, and inferring $X_0$ from $Y_T$ can be denoted $X_0=P^{-1}(Y_T)$.
Lastly, we devised a GCN-based residual block, built upon the equivalence between SIR/SI diffusion and message-passing processes, and assembled these blocks into a reversible network. We then verified that the network is indeed reversible. Our ultimate objective is for the residual network depicted in Figure 2, which fits $P^{-1}$, to output $X_0$ after training, given $Y_T$ as input.
b. How the reversibility of residual blocks, strictly proven, contributes to the prediction of $X_0$?
Now let's delve into this further. The group of residual blocks forms a network that enlarges the receptive field of the GCN and mitigates issues such as node over-smoothing; this network corresponds to fitting $P^{-1}$. We feed the observed infection graph $Y_T$ into the residual network, compute the loss between the output and the true $X_0$, and update the network's parameters. Over iterations, the residual network gradually approximates the true $P^{-1}$, and the rigorous reversibility proof guarantees that the residual network can indeed produce $X_0$ as output.
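A hedged sketch of the underlying principle: if the residual function $f$ has Lipschitz constant below 1 (which the SN/BN constraints enforce in DDMSL), the block $y = x + f(x)$ can be inverted by Banach fixed-point iteration. The linear-plus-tanh $f$ below is a toy stand-in for the GCN residual blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)   # rescale spectral norm so Lip(f) <= 0.5 < 1

def f(x):
    return np.tanh(W @ x)         # tanh is 1-Lipschitz, so Lip(f) <= 0.5

def forward(x):
    return x + f(x)               # invertible residual block y = x + f(x)

def inverse(y, iters=50):
    x = y.copy()
    for _ in range(iters):
        x = y - f(x)              # contraction: converges to the unique x
    return x

x = rng.standard_normal(4)
y = forward(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))  # ≈ 0 (machine precision)
```

This is why constraining the Lipschitz coefficient via SN(·)/BN(·) matters: it is exactly the condition that makes the fixed-point inversion, and hence the recovery of $X_0$, well-defined.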
(5) "Whether the model has been tested on different graph topologies under a single training?"
Response: Our experiments entail separate training and testing processes for each graph structure, following the prevailing methodology in deep learning algorithms for source localization.
We wholeheartedly concur with your viewpoint regarding the significance of evaluating model generalization ability. To evaluate generalization, we conducted additional experiments using three diverse network datasets of varying scales. Each trained model was assessed on different network structures, revealing the DDMSL model's remarkable generalization performance. Detailed outcomes are provided in Table 2 of the attached PDF.
(6) "Limitations"
Response: As elucidated in our paper, DDMSL does have the limitation of requiring prior knowledge of the diffusion model. However, this limitation is not insurmountable, given the availability of alternative approaches for estimating the parameters of information diffusion models (see Sec. 5). In addition, we have updated the code.
[1] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." arXiv preprint arXiv:2010.02502 (2020).
[2] Vignac, Clement, et al. "DiGress: Discrete Denoising Diffusion for Graph Generation." arXiv preprint arXiv:2209.14734 (2022).
[3] Haefeli, Kilian Konstantin, et al. "Diffusion Models for Graphs Benefit from Discrete State Spaces." arXiv preprint arXiv:2210.01549 (2022).
[4] Cao, Z., K. Han, and J. Zhu. "Information Diffusion Prediction via Dynamic Graph Neural Networks." 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Dalian, China, 2021, pp. 1099-1104. doi: 10.1109/CSCWD49262.2021.9437653.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your detailed response. However, I still think the following weaknesses hinder the applicability of DDMSL.
1. The number of diffusion steps is fixed as the real-world forward diffusion process on networks. What if this number is extremely large?
2. Authors did not provide generalization performance on other real-world large networks (attached PDF, Table 2), and comparison with baseline methods.
3. Requiring fine-grained diffusion history information is still a large limitation. Can this forward diffusion history be inferred with parameter estimation?
---
Reply to Comment 1.1.1:
Title: Some clarifications on DDMSL generalization performance and limitations.
Comment: Thank you for raising these important questions. We have carefully considered these issues and would like to share our thoughts on them.
1. The number of diffusion steps is fixed as the real-world forward diffusion process on networks. What if this number is extremely large?
Response: First, we argue that the scale of information diffusion, rather than the number of diffusion steps, is the primary factor influencing source localization performance. Taking the SI model as an example, when the number of diffusion steps is sufficiently large, almost all nodes will be infected, making it impossible for any algorithm to identify the source node. With the initial parameters of the diffusion system fixed, the diffusion scale grows as the number of steps increases. The impact of diffusion scale on source localization performance has been discussed in previous research [1].
Secondly, extremely large numbers of diffusion steps are rarely observed in real-world scenarios, since they imply complete diffusion throughout the network, a situation no current algorithm can handle. Hence, the majority of source localization methods, including DDMSL, restrict the diffusion steps to the initial and intermediate stages of information dissemination.
Therefore, in DDMSL, we locate the source at a specific diffusion scale (e.g., when 50% of nodes are infected). Reaching this scale requires different numbers of diffusion steps depending on the initial diffusion parameters. Our experiments align with the statement above: source detection performance is mainly influenced by the diffusion scale, and DDMSL exhibits similar performance when detecting infection patterns with different numbers of diffusion steps but the same diffusion scale.
[1] Shah, Chintan , et al. "Finding Patient Zero: Learning Contagion Source with Graph Neural Networks." (2020).
2. Authors did not provide generalization performance on other real-world large networks (attached PDF, Table 2), and comparison with baseline methods.
Response: We appreciate your insightful comments. Indeed, the generalization performance of the source location algorithm is of great importance. However, it is worth noting that, to the best of our knowledge, previous studies have not specifically examined this aspect, which is why in our original submission we did not include a comparison with the generalization performance of the baseline algorithm in our evaluation of DDMSL.
The experimental results in this paper are based on synthetic datasets over real networks of different sizes and structures. The results presented in Table 2 of the attachment demonstrate that DDMSL performs strongly on both small-world and BA network structures; although there is a notable decline in extreme cases such as ER networks, the results remain acceptable. These findings highlight the robust generalization capability of DDMSL. We believe DDMSL should also perform well on the two large real-world diffusion datasets; however, due to time constraints during the discussion phase, we may not be able to provide these supplementary generalization results in time. If necessary, we will include them in our next revision to further support our findings.
3. Requiring fine-grained diffusion history information is still a large limitation. Can this forward diffusion history be inferred with parameter estimation?
Response: In fact, taking the main comparison algorithm SLVAE as an example, DDMSL only requires additional diffusion information during the inference process, specifically the initial time diffusion parameters (such as infection rate and recovery rate).
During the training process, obtaining diffusion history data is straightforward in most real-world scenarios. Even if such data is not available, it is still possible to synthetically generate data of the same diffusion scale for training after estimating the initial diffusion parameters. This is exactly how DDMSL was trained and tested on the real diffusion datasets: the training set was synthesized after parameter estimation, and the model was then tested on the real data. Table 4 in the attachment shows the feasibility of this method.
Consequently, this limitation does not pose a significant concern. | Summary: This paper addresses the problem of source identification in a stochastic network diffusion process, having observed the set of nodes that are infected at time T. It considers SIR and SI models. The authors propose a neural-network-based solution called DDMSL. The authors' approach is based on a reaction-diffusion process, which they formulate using a message-passing function, and they use a reversible residual network for source identification and reconstruction of the cascade. DDMSL is compared with several baselines, and an ablation study is conducted to evaluate the significance of the reversible residual network and propagation rule supervision module.
Strengths: The authors use a novel reversible residual network construction for the source identification problem.
The paper has a strong experiments component, considering multiple networks and several baselines.
DDMSL shows much superior performance compared to the baselines.
Weaknesses: The presentation could be greatly improved. Much notation and many crucial concepts are missing, and the derivations are very dense and lack intuitive explanations, so it is hard to understand the results. Details follow.
Diffusion models are not clearly defined: there are several types of SIR and SI models in the literature, and the authors fail to describe their model clearly. Is it a discrete-time SIR process, i.e., an infected node infects its neighbors with probability equal to the edge weight at each time step until it recovers, which again is determined by some probability? A second type of process is a Gillespie process, a discrete-event process that involves rates, as the authors have used. Also, the authors seem to use a single transmission rate per node. This is important to clarify, as the baselines considered in the paper use their own definitions of SIR models. Are those consistent with the definition of the SIR model in this paper? If not, then the comparison wouldn't be fair.
The reaction-diffusion process is not defined. In Section 3.4.1, the authors seem to relate GCN with reaction-diffusion models; this is not clear at all, as the concept is not introduced.
Theorem 3.1: What is the purpose of this theorem? Further, its proof is not presented, even in the appendix, and it is not referred to anywhere. Equation (3) is not used anywhere either. It is not clear why the rates can be expressed in that manner. Also, even though these are described as rates, beta and gamma are used as probabilities.
Several symbols are not defined: $nn_\theta$, KL, $D_{KL}$, Cat(), $Q_k^i$. It is very difficult to follow the derivations; they are dense and lack explanations.
Ablation: In DDMSL(a), the reverse residual block is removed but replaced by a GCN network. Typically, in an ablation study, the component is removed and the performance evaluated. It is possible that the GCN network itself deteriorates the performance.
Figures 6 to 11 in the Appendix are very difficult to read and interpret. Besides, these are just a few instances where DDMSL does well; Table 2 already quantifies the performance.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Several definitions of notation are missing, and the diffusion model is not defined; these are mentioned in the Weaknesses section.
In Algorithm 1, line 4, t is sampled from a distribution in which the probability of t increases with its value. This is not explained anywhere. Why is this sampling necessary?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comments and suggestions. We have provided some explanations and clarification regarding your concerns as follows:
(1) “Many notation and crucial concepts are missing”.
Response: The neural network model used in the reverse inference process, $nn_{\theta}$, is described in Section 3.3.2 on page 4.
We have only used $Q_t^i$ (not $Q_k^i$) in our manuscript, and a specific explanation of $Q_t^i$ is provided on page 4, line 121. As for $KL$, $D_{KL}$, and $Cat(\cdot)$, these symbols come from the loss function of the diffusion model and are widely used in many diffusion models, such as DDIM [3]. We apologize for any confusion caused to readers and will introduce these symbols in more detail in the revised version.
As for the missing concept of the reaction-diffusion process: in our paper, it refers to individual-mediated diffusion processes such as SIR (see Equation 1), SI, and SIS. We will add this definition in the revised version. Thank you for your suggestion.
(2) “Diffusion models are not clearly defined”.
Response: We use discrete SI and SIR models, please refer to equation 1 on page 3 for detailed definitions. During the experiment, all baseline algorithms used consistent SIR and SI models and ensured that the experimental results were generated based on the same datasets.
In fact, the transition probability of each node in the SIR process is necessarily different, just as Lokhov et al. [1] analyzed the SIR process using message-passing algorithms; we infer the state distribution of any node at any time step. Each node follows the predefined process throughout the diffusion: an infected node infects its neighbors with probability equal to the edge weight at each time step until it recovers. However, the infection and recovery probabilities of each node during this period depend on the state of the node and its neighbors, since SIR is a Markov process.
(3)“The proof of Theorem 3.1 and the role of Equation 3”.
Response: The purpose of Theorem 3.1 is to ensure that the state distribution of nodes at each time step is convergent, which is a necessary condition for the discrete diffusion model to work [2]. Due to space limitations, we provided a detailed explanation of its proof and function in the appendix; please refer to Section C.1 on page 16. The proof of Theorem 3.1 is therefore not missing, and we will add a description of the theorem in the main text.
Equation (3) is in fact used: as explained in lines 121-124 on page 4, it provides the parameters used to compute the state transition matrix in Equation (2). Its derivation is also intuitive. For instance, the probability that node $i$ is infected at time $t+1$ is the sum of two terms: the probability that $i$ is uninfected at time $t$ and subsequently becomes infected, and the probability that $i$ is infected at time $t$ and does not transition to the recovered state. We will add a detailed explanation of this in the appendix.
(4) “Ablation Study”.
Response: Thanks for the great suggestion. We have added ablation experiments on the GCN module, and the experimental results are shown in Table 1 and Figure 1 in the attached PDF, where DDMSL (c) represents the model after removing the GCN module.
(5) “The Figures 5 to 11 in the Appendix”.
Response: Figures 5 to 9 are visualizations of DDMSL reconstructing information diffusion, while Figures 10 and 11 are visualizations of the source localization task. Each graph has thousands or even tens of thousands of nodes, so it is impossible to display the reconstruction process for each dataset in such a small space. We therefore use vector graphics so that the state of each node, and even each edge, can be seen clearly when zoomed in, although this results in a significant file size.
We believe the visualization of reconstructed diffusion is necessary, while the source localization results are already clearly presented in Table 2. We have therefore reduced the number of source-localization visualizations to prevent the file size from growing further.
(6) “Reason for sampling t in Algorithm 1 line 4”.
Response: The probability of $t$ being sampled is indeed positively correlated with $t$. Generally, $t$ is sampled uniformly in diffusion models from other fields, but we found experimentally that, compared to uniform sampling, our setting significantly reduces training time and brings some performance improvement. We apologize for omitting the explanation of $t$ sampling and will add it in a subsequent revision.
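A minimal sketch of one such non-uniform sampler (purely illustrative; the paper's exact weighting is not specified here), where the probability of drawing $t$ grows linearly with $t$ instead of being uniform:

```python
import numpy as np

T = 100                              # total diffusion steps (illustrative)
steps = np.arange(1, T + 1)
probs = steps / steps.sum()          # P(t) proportional to t, vs. uniform 1/T
t = np.random.default_rng(0).choice(steps, p=probs)
```

Later (noisier) time steps are drawn more often, which is one plausible reading of "positively correlated with $t$".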
(7) “Limitations”.
Response: We believe the negative societal impact of our research is limited. In some use scenarios, certain individuals' privacy may be exposed, e.g., AIDS patients who do not wish to disclose their illness. We will add a discussion of this limitation in the appendix.
[1] Lokhov, Andrey Y., Marc Mézard, and Lenka Zdeborová. "Dynamic message-passing equations for models with unidirectional dynamics." (2014).
[2] Austin, Jacob, et al. "Structured Denoising Diffusion Models in Discrete State-Spaces." (2021).
[3] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." (2020).
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the answers. I do not have any further queries. | Rebuttal 1:
Rebuttal: I would like to extend my sincere appreciation to the esteemed reviewers for their invaluable suggestions on our paper. It has come to our attention that several reviewers have expressed interest in understanding the rationale behind our adoption of the denoising diffusion model. We will address this and share additional experiments conducted based on reviewer feedback, including testing DDMSL on real-world diffusion data, generalization experiments, time complexity analysis, and supplementary ablation experiments. Please refer to the attached PDF file for the experimental results.
We will introduce our motivation from the following aspects:
a. "Differences between discrete diffusion denoising models and classical diffusion models (such as DDIM)"
Firstly, it is important to clarify that we are employing the D3PM [2] diffusion denoising model in a discrete space, which distinguishes it from classical diffusion models like DDIM [1]. D3PM leverages Markov chains to model discrete data, where the data between two consecutive time steps are interconnected through the state transition matrix $Q_t$. In contrast to DDIM, which introduces Gaussian noise directly to the data to establish a forward process, D3PM incorporates noise into $Q_t$ to account for the stochastic nature of the diffusion process.
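As a toy sketch of this mechanism (the matrix values below are invented for illustration), a node's categorical state distribution is pushed forward one step by multiplying with the state transition matrix $Q_t$:

```python
import numpy as np

# Toy 3-state chain (S, I, R); Q_t[a, b] = P(state b at t | state a at t-1).
Q_t = np.array([
    [0.9, 0.1, 0.0],   # S stays S or becomes I
    [0.0, 0.8, 0.2],   # I stays I or recovers
    [0.0, 0.0, 1.0],   # R is absorbing
])
x_prev = np.array([1.0, 0.0, 0.0])   # node is susceptible at t-1
x_t = x_prev @ Q_t                   # categorical distribution at t
```

Unlike DDIM's additive Gaussian noise, the "noising" here is entirely encoded in the rows of $Q_t$, each of which must sum to 1.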
b. "Comparison of SIR diffusion and noise diffusion on images"
The application process of diffusion models in image analysis can be summarized as follows: starting with an original image, noise is continuously introduced to approximate a Gaussian distribution. DDIM, in particular, performs continuous denoising on the Gaussian noise to generate a new image. In information dissemination models such as SIR, the process of information diffusion bears similarities to the noise introduction process in image analysis. Given a set of initial seed nodes $x_0$, their states gradually become perturbed under the influence of SIR rules. Subsequently, the noise imposed by SIR rules is eliminated from the final noisy infection graph. We hypothesize that the state of $x_0$ evolves over time based on the impact of SIR rules, consequently causing changes in the states of $x_0$'s neighbors and even higher-order neighbors. This progressive alteration obscures the original information encoded in $x_0$. From this perspective, the node state modification at each time step induced by SIR can be interpreted as noise introduced during the evolution of information diffusion.
Models like SIR, SIS, and SI epitomize the most natural diffusion processes. For instance, in the SIS model, which can also be handled by DDMSL, when specific network topology conditions are met and a set of seed nodes is given, each node eventually converges, as propagation time increases, to a stable distribution where the probabilities of being in the susceptible (S) and infected (I) states approach 0.5. In fact, Theorem 3.1 demonstrates that the node state distribution of both the SI and SIR models at each time step during the diffusion process converges.
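A toy numerical illustration of this convergence claim (the transition probabilities are invented): a symmetric two-state (S/I) chain whose stationary distribution is exactly (0.5, 0.5), regardless of the starting state.

```python
import numpy as np

P = np.array([[0.7, 0.3],    # S -> S / I
              [0.3, 0.7]])   # I -> S / I (symmetric transition matrix)
dist = np.array([0.0, 1.0])  # a seed node starts infected
for _ in range(50):
    dist = dist @ P          # marginal distribution after each step
```

After 50 steps the distribution is numerically indistinguishable from (0.5, 0.5).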
Hence, it is our contention that the dissemination of information on a graph and the propagation of noise in an image are two distinct occurrences transpiring in discrete and continuous spaces, respectively. This observation serves as one of the motivations behind employing diffusion models for source localization. As previously discussed, we leverage the D3PM to address this task. The SIR, SI, and SIS models align seamlessly with the utilization requirements of D3PM, and the formulation of the state transition matrix (as specified in Equation 2) for each node is straightforward.
c. "Why use generative models for source localization?"
In the context of an observed infection graph ($Y_T$), numerous sets of source nodes ($x_0$) can exist, as depicted in Fig. 1 of the paper. Inferring the distribution of source nodes from posterior diffusion data is therefore a critical task. However, existing source localization algorithms such as LPSI, OJC, and the other baseline methods produce deterministic and imprecise outcomes. This limitation motivated the use of a VAE in SLVAE and, in fact, forms the foundation of both DDMSL's and SLVAE's approach to source localization.
d. "Why use diffusion models for source localization? What are their advantages?"
Firstly, the diffusion model demonstrates superior performance compared to traditional generative models like VAE in terms of task generation accuracy. In our experimental evaluation, DDMSL outperformed generative models such as SLVAE and DDMIX for source localization purposes.
Secondly, the unique advantage of the diffusion model lies in its ability to leverage the inference results from previous time steps to inform the inference process at each subsequent time step. As a result, it can construct comprehensive node information at each time step, aligning perfectly with our objective of reconstructing the communication path. This aspect serves as an additional motivation for adopting the discrete diffusion model for our task.
In summary, the reasons for using the discrete diffusion model are as follows:
1) The phenomenon of information diffusion is a pervasive occurrence in various domains, making the adoption of D3PM suitable for modeling forward processes and learning reverse processes.
2) DDMSL exhibits remarkable performance in reconstructing the states of individual nodes at each time step throughout the propagation process, enabling successful accomplishment of the two primary tasks outlined in the paper: source localization and path reconstruction.
3) When compared to alternative algorithms like SLVAE and DDMIX, which also employ generative approaches for source localization, DDMSL demonstrates superior performance.
[1] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." (2020).
[2] Austin, Jacob, et al. "Structured Denoising Diffusion Models in Discrete State-Spaces." (2021).
Pdf: /pdf/4d2cff1437ff6e4ae670cf94c7e7bf7b28c3efe2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective | Accept (spotlight) | Summary: The paper addresses the dataset condensation task and proposes a new framework termed Squeeze, Recover and Relabel. In this three step approach, the authors first train a model from scratch to accommodate most of the crucial information from the original dataset. In the second stage, target data is synthesized from Gaussian noise. And in the final stage, the generated synthetic data is relabelled using a crop-level scheme to align with the true label of the data. Extensive and controlled experimentation showed significant performance improvement compared to previous state-of-the-art methods.
Strengths: This publication has several strengths including:
1) The writing is clear and easy to understand.
2) Generalizability of the framework to scale of datasets, input resolution, and the size of network architectures.
3) Good experimental methodology with carefully designed ablations that justifies architectural design decisions especially impacts of squeezing budget, diverse data augmentations for original data compression, recovery budget and regularization terms for data recovery and insights on model choice and training for relabeling process.
4) Very exhaustive in-depth empirical comparison with state-of-the-art methods demonstrating strong performance as well as incur reduced compute and memory consumption.
Weaknesses: I am confused by the claims in the section “Cross-Architecture Generalization”. The authors attribute the suboptimal performance of DeiT-Tiny on the condensed datasets to the model’s inherent need for substantial training data.
However, Table 5 suggests a “cross-architecture gap”: ResNet-based evaluation models perform poorly on condensed data derived from a ViT-based squeezed model compared to condensed data derived from a ResNet-based squeezed model. A similar observation holds for DeiT-Tiny evaluated on condensed data based on the DeiT-Tiny-BN and ResNet-18 squeezed models. This suggests that the condensed data does not generalize well across network architectures.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper in its current form needs clarification on Cross-Architecture Generalization. Please refer to the “Weakness” section for details.
The authors acknowledge the performance disparity between the condensed and original datasets in the limitation section. I feel that this might limit the adaptability of the proposed approach.
Minor typo:
1) Line 277 refers to Figure 3, it should refer to Table 5.
2) “Deit” -> “DeiT”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors discuss the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We are encouraged that you find our work clear and easy to understand, and provide an exhaustive in-depth empirical comparison with state-of-the-art methods. We would like to address the comments and questions below.
>Q1. Clarification on Cross-Architecture Generalization.
Thanks for the insightful suggestion. It is a well-known observation in prior dataset condensation/distillation works that performance degradation often results from a mismatch between the synthesis and final training architectures. Yet, as indicated in the following results, the gap between ViT and ResNet-18 in our case is significantly narrower compared to the previous method TESLA. This suggests that our approach will be better at managing this mismatch with stronger *Cross-Architecture Generalization* ability.
| | ViT | ResNet-18 | Gap |
|:----- |:----:|:---------:|:-------------|
| TESLA | 11.0 | 7.7 | $\downarrow$ 30.0% |
| Ours | 25.4 | 24.7 | $\downarrow$ 2.8% |
Moreover, Figure 3 clearly exhibits the proficiency of cross-model generalization across ResNet-{18, 50, 101}, and Table 5 presents cross-architecture generalization between ViT and ResNet. We will make this clearer in our revision.
>Q2. The authors acknowledge the performance disparity between condensed dataset in the limitation section. I feel that this might limit the adaptability of the proposed approach.
Thanks for your comments. We clarify that the *adaptability* of our method manifests in at least two aspects:
1. The proposed method is the only approach capable of distilling the entire ImageNet-1K at 224 $\times$ 224 resolution while still achieving commendable performance (60.8%) on the original ImageNet-1K validation set, which is 32.9% higher than the previous SOTA method TESLA (ICML 2023) using the same IPC of training samples. This demonstrates the strongest *adaptability* in this task.
2. Moreover, our compression rate is 25$\times$, meaning we train our model on just 1/25 of the samples used in conventional model training. This makes the approach a good fit for various resource-constrained training scenarios. Given this drastic reduction in training samples, the performance gap relative to full-data model training is surprisingly narrow.
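As a quick sanity check on the 25$\times$ figure (assuming the standard ImageNet-1K training-set size of roughly 1.28M images and IPC=50 over 1,000 classes):

```python
full = 1_281_167          # standard ImageNet-1K training-image count
condensed = 50 * 1000     # IPC=50 images per class, 1,000 classes
ratio = full / condensed  # compression factor, roughly 25.6x
```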
>Minor typo: 1. Line 277 refers to Figure 3, it should refer to Table 5. 2. “Deit” -> “DeiT”
Thanks for pointing them out. Line 277 refers to both Figure 3 and Table 5 in our paper. We have corrected the typos in our revision and will polish the whole paper thoroughly.
---
Rebuttal Comment 1.1:
Comment: I went over the rebuttal and the other reviews. I appreciate the authors addressing my raised concerns about the "Cross-Architecture Generalization” and providing clarification on the adaptability ability. I suggest the authors add the above results and discussion to the revised paper. I am happy to increase my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We will incorporate the results and discussions from the rebuttal into our revised paper. Best wishes to you. | Summary: This paper proposes a 3-step dataset condensation approach. Instead of applying bilevel optimization based approach in the previous work, the proposed method break down 3 decoupled steps: squeeze, recover and relabel. The key idea is decoupling the modeling training on real data and the generation of the synthetic data. At the first squeeze step, the model is trained on the original full dataset. At the second recovery step, synthetic data is generated using pretrained model and class prior with additional regularization (TV loss and BN consistency). At the 3rd stage, soft labels are generated using the pretrained model. And finally, a model on synthetic data is trained using the images from step 2 and labels from step 3.
Strengths: Compared with previous method on Tiny-IN & IN-1K, the proposed approach achieves superior performance.
The proposed method generates more visually appealing images from the example images in Fig 4.
The first work condenses the full IN-1K, based on the claim from the paper.
Efficient way of using decoupled steps during the generation.
Weaknesses: 1. Lack of strong technical novelty. The proposed method combines a few prior work on image synthesis, such as deepDream & Inverting Image, and applies it directly into a new problem.
2. Lack of comprehensive study with baseline approaches, and demonstration of the improvement of the key novelty.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am not familiar with the field, but I am not convinced about the overall problem being solved.
If we would like to condense the original dataset for efficient training, we should at least achieve comparable accuracy on retraining. This is not the case from the IN-1K result.
If we would like to obtain a way of retraining a new model without requiring original labeled dataset, semi-supervised learning on a unlabeled data would be a more promising direction instead of image synthesis.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see my concerns in the Questions section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We appreciate that you find our work achieves superior performance with more visually appealing images. We would like to address the comments and questions below.
>W1. Strong technical novelty. The proposed method combines a few prior work on image synthesis, such as deepDream & Inverting Image, and applies it directly into a new problem.
Thanks for the valuable feedback. We highlight our novelty in each stage of *squeeze*, *recover* and *relabel* comparing to prior methods such as DeepDream, Inverting Image, etc.
1. In the squeeze phase, we have crafted a new BN-based ViT architecture specifically for the task of data condensation. Interestingly, we've also observed that a higher-performing model in the squeeze phase does not necessarily translate to superior knowledge for the subsequent processes of recover and relabel. This is discovered by our conducted extensive experiments with various data augmentation techniques, which help to identify the most useful strategies that enhance the squeeze procedure.
2. In the recover phase, we performed systematic ablation studies on various regularizers to understand their impact on relabeling and final training. Additionally, we introduced a simple yet highly effective multi-crop optimization technique, which significantly elevated the performance levels.
3. In the relabel phase, we go beyond the straightforward application of FKD. We enhance it by developing a novel soft label storage mechanism. This innovative solution ensures FKD's compatibility with mixture-based data augmentation techniques like Mixup and CutMix, which is not supported in FKD's vanilla design.
>W2. Comprehensive study with baseline approaches, and demonstration of the improvement of the key novelty.
Thanks for the suggestion. In response, we have expanded our analysis to include additional comparisons with baseline approaches as detailed below:
We recover images from a ConvNet in $\texttt{SRe}^2\texttt{L}$ with IPC=10 in post-training for comparison with the baseline method TESLA. We conducted two configurations: relabeling with the same squeezed model, and relabeling with the large pretrained ResNet-18. The results are shown in the following table, and our approach demonstrates superior performance over the baseline. The results also reflect the observation in our paper that a correct relabeling model is necessary for the final accuracy. We will include these comparisons in our revised paper.
| Squeezed Model | ResNet-18 | ResNet-50 | ResNet-101 |
| ----------------------------------- |:---------:|:---------:|:----------:|
| TESLA (IPC=10) | 7.7 | -- | -- |
| Our ConvNet (relabel w/ itself, IPC=10) | 17.0 |20.5 | 21.2 |
| Our ConvNet (relabel w/ ResNet-18, IPC=10) | 12.8 | 15.7 | 17.1 |
| Our ResNet-18 (IPC=10) | 21.3 | 28.4 | 30.9 |
>Q1. I am not familiar with the field, but I am not convinced about the overall problem being solved. If we would like to condense the original dataset for efficient training, we should at least achieve comparable accuracy on retraining. This is not the case from the IN-1K result. If we would like to obtain a way of retraining a new model without requiring original labeled dataset, semi-supervised learning on a unlabeled data would be a more promising direction instead of image synthesis.
Thank you for your insights. While our proposed method may not completely resolve the dataset condensation problem, it significantly advances the ability to address this challenge within this domain:
1. It is worth noting that ours is the only approach capable of distilling the entire ImageNet-1K at 224 $\times$ 224 resolution while still achieving commendable performance (60.8%) on the original validation set, an absolute 32.9% higher than the previous SOTA method TESLA (ICML 2023) using the same IPC of training samples.
2. Moreover, our compression rate is 25$\times$. This implies that we are training our model on just 1/25 of the samples compared to conventional model training. Given this drastic reduction in training samples, it's reasonable that the performance may not fully reach that of standard model training.
We also clarify that our objective is not to pursue a method that eliminates the need for the original labeled dataset. Instead, our focus is on dataset condensation/distillation, whose goal is to achieve a level of accuracy similar to that of the original full dataset but with **much fewer samples**. This objective sets dataset distillation apart from semi-supervised learning, which has a different set of goals.
We appreciate the reviewer's valuable feedback. We will persist in our efforts to enhance and optimize the performance of data distillation within this area. | Summary: This paper proposes a new dataset condensation termed Squeeze, Recover, and Relabel that decouples the bilevel optimization of model and synthetic data during training. Extensive experiments show the effectiveness and efficiency of the proposed method in several IPC settings.
Strengths: 1, The paper is well-written and easy to understand.
2, The proposed method becomes efficient due to the decoupled stages.
3, The extensive experiments show the effectiveness of the method.
Weaknesses: 1, Despite the computation and memory efficiency of the proposed method, the whole training time might be comparable to that of other methods like DM and MTT.
2, Unclear illustration of the BN layers in ViT. Does it mean that the proposed method uses BN layers to replace the LN?
3, The lack of comparison between the IPC and directly sampling the same scale images.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1, One naive question: why can the condensed images be used for training? I am not familiar with this topic.
2, The stage-3 Relabel seems like the process of knowledge distillation. Could the author try another setting, like using R50 to teach R18?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We are encouraged that you find our work well-written and easy to understand, the proposed method is efficient with extensive experiments. We would like to address the comments and questions below.
>W1. Albeit the computation and memory efficiency, the whole training time might be comparable to DM and MTT.
Thanks for pointing out this. We've broken down the whole training time consumption at each stage of our training process in the table below. The timing is evaluated on Tiny-ImageNet using the *ConvNet/ResNet-18* architectures with one RTX 4090 GPU.
| | Squeeze/Pretrain (h)| Recover/Synthesis (h)| Relabel/Distilling (h)| Total (h)|
|:----- |:-------:|:------:|:-----:|:-----:|
| DM| - | 23.78/99.88 |-| 23.78/99.88 |
| MTT| 0.15/0.63 $\times$ 100 |16.60/69.72| - | 31.60/132.72 |
| TESLA | 0.15/0.63 $\times$ 100 |16.93/71.11| 0.02/0.08| 31.95/134.19 |
| Ours|0.37/1.54|1.98/8.33 |0.02/0.08|2.37/9.95|
From this, it becomes evident that, even when considering the time spent on the squeeze/pre-training stage, our proposed framework still significantly surpasses the efficiency of other methods, such as DM and MTT.
In the squeeze phase, MTT requires pretraining multiple squeezed models in order to sample and match multiple trajectories, whereas our method requires only one squeezed model to match Batch Norm statistics. Consequently, the squeeze phase of our method is considerably more time-efficient than that of MTT.
In the synthesis phase, we have included a time consumption comparison in Table 1 of our paper, which illustrates the time taken to generate one Tiny-ImageNet image with a single iteration update. Regarding iteration settings for synthesis, both DM and MTT train for over 10K iterations, while our model requires only 4K. Combined with our shorter per-iteration synthesis time, this results in a shorter total synthesis time.
>W2. Unclear illustration about BN layers in VIT.
The vanilla ViT can be formulated as:
$$\mathbf z_{\ell}^{\prime}=\operatorname{MHSA}\left(\mathrm{LN}\left(\mathbf z_{\ell-1}\right)\right)+\mathbf z_{\ell-1}$$
$$\mathbf z_{\ell}=\operatorname{FFN}\left(\mathrm{LN}\left(\mathbf z_{\ell}^{\prime}\right)\right)+\mathbf z_{\ell}^{\prime}$$
where $\mathbf z_{\ell}^{\prime}$ is the intermediate representation before Feed-forward Network ($\operatorname{FFN}$), and $\mathbf z_{\ell}$ is that after $\operatorname{FFN}$ and residual connection. $\operatorname{FFN}$ contains two linear layers with a GELU non-linearity in between them, i.e.,
$$\operatorname{FFN}(\mathbf z_{\ell}^{\prime})=\left(\operatorname{GELU}\left(\mathbf z_{\ell}^{\prime} W^1_\ell+b^1_\ell\right)\right) W_\ell^2+b_\ell^2$$
The newly constructed BN-ViT is:
$$\mathbf z_{\ell}^{\prime}=\operatorname{MHSA}\left(\mathrm{BN}\left(\mathbf z_{\ell-1}\right)\right)+\mathbf z_{\ell-1}$$
$$ \mathbf z_{\ell}=\operatorname{FFN_{BN}}\left(\mathrm{BN}\left(\mathbf z_{\ell}^{\prime}\right)\right)+\mathbf z_{\ell}^{\prime}$$
where we add one additional BN layer in-between two linear layers of $\operatorname{FFN}$, i.e.,
$$
\operatorname{FFN_{BN}}(\mathbf z_{\ell}^{\prime})=\left(\operatorname{GELU}\left(\operatorname{BN}\left(\mathbf z_{\ell}^{\prime} W^1_\ell+b^1_\ell\right)\right)\right) W_\ell^2+b_\ell^2
$$
We will include these details in our revision.
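For concreteness, a minimal PyTorch sketch of the $\operatorname{FFN_{BN}}$ block described above (module and variable names are ours, not the paper's code):

```python
import torch
import torch.nn as nn

class FFNBN(nn.Module):
    """FFN with an extra BatchNorm inserted between its two linear layers."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.bn = nn.BatchNorm1d(hidden_dim)   # the added BN layer
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, z):
        h = self.fc1(z)                                 # (batch, tokens, hidden)
        # BatchNorm1d expects channels in dim 1, so transpose around it
        h = self.bn(h.transpose(1, 2)).transpose(1, 2)
        return self.fc2(self.act(h))
```

Note the BN is applied before the GELU, matching the $\operatorname{GELU}(\operatorname{BN}(\cdot))$ ordering in the formula above.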
>W3. Comparison between the IPC and directly sampling the same scale images.
The results of directly sampling the same scale images are shown in the following table. As suggested by the reviewer, we randomly sampled 50 images per class as the pruned dataset. It can be observed that our dataset condensation results outperform the dataset pruning by large margins across various architectures.
| IPC=50 | ResNet-18 | ResNet-50 | ResNet-101 |
|:--------|:--------:|:--------:|:---------:|
| Random Dataset Pruning|27.47| 26.05 | 26.96 |
| Dataset Condensation (Ours)|46.75| 55.62| 60.81|
>Q1. One naive question: why the condensed images could be used for training? I am not familiar with this topic.
We understand this concern stems from the distinctive visual appearance of synthetic data. Despite the condensed images appearing quite different from the natural ones, they are engineered to encapsulate the essence or core characteristics of the original large dataset, particularly:
1. Information Retention: These distilled images retain the important features and patterns required for effective training. In other words, despite their smaller size, distilled images retain crucial information from the original dataset. This information is what the model learns to recognize and apply when it sees new, similar data.
2. Noise Reduction: Dataset condensation/distillation can also serve as a form of noise reduction, filtering out unnecessary or irrelevant information in the synthetic data and allowing the model to focus on the most salient features.
Furthermore, the visualization of distilled examples in Figure 1 of our supplementary material offers a straightforward interpretation: it reveals that numerous small areas saturated with categorical features are scattered throughout the image. This distribution significantly augments the image's expressiveness, enriching its visual representation during model training.
>Q2. The stage-3 Relabel seems like the process of knowledge distillation. Could the author try another setting, like using R50 to teach R18?
Thanks for the suggestion. Indeed, we have accommodated a variety of results with diverse relabeling settings, including the suggested *using R50 to teach R18*, in Figure 3 of the paper. We have also included detailed configurations within the legends of each subfigure. For example, the notation $\mathrm{T_{R50}S_{R18}}$ indicates the use of a ResNet-50 model for relabeling synthetic data, which subsequently teaches a ResNet-18 student.
---
Rebuttal Comment 1.1:
Comment: The authors's rebutal well addressed my concerns. Thanks for the authors' efforts. I would like to raise my rate. | Summary: This paper proposes a dataset distillation or dataset condensation method that can support ImageNet-scale compression. The main idea is inspired by some data-free knowledge distillation techniques to optimize the cross-entropy error, BN statistic distance, and some other prior terms for the distilled data. The complexity is simple compared with recent mainstream methods of dataset distillation. And the proposed method achieves impressive accuracy for models trained with distilled data on large-scale datasets.
Strengths: 1. The proposed method is simple yet effective, which enjoys satisfactory scalability.
2. It is good to know for the community that dataset distillation could achieve promising results on large-scale datasets like ImageNet1k-224 resolution.
3. The writing is coherent and it's easy for readers to follow the proposals.
Weaknesses: 1. I am a little bit worried about the technical novelty. Since the main idea is largely inspired by data-free knowledge distillation techniques [a], I list this as a weakness for NeurIPS, the top-tier machine learning conference.
2. Some detailed ablation studies are expected:
* I notice that the setting of this paper differs from previous works. For example, previous works typically use ConvNet for dataset distillation while this work mainly considers ResNet. I am not arguing that the setting must be the same. However, I do think an ablation study is necessary to show how much of the improvement comes from the architecture.
* A sensitivity analysis with respect to $\alpha_{BN}$ is necessary.
* I do understand that the proposed method is mainly for large-scale datasets. But it is also interesting to dynamically increase the size of datasets and compare the performance with the existing methods, to help readers better understand when the proposed method yields advantages.
[a] Dreaming to distill: Data-free knowledge transfer via deepinversion. Yin et al., CVPR 2020.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: My major questions focus on the detailed ablation studies. Please refer to the 2nd point of the above weaknesses part for details.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors should clarify more limitations of the specific solution they proposed instead of those general issues of dataset distillation, e.g., larger datasets and performance gap with models trained on full data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We are encouraged that you find our work simple yet effective with satisfactory scalability, and that it helps the community see that dataset distillation can achieve promising results on large-scale datasets. We would like to address the comments and questions below.
> W1. The technical novelty.
Thanks for your kind comments. We highlight our novel contributions in this work beyond data-free knowledge distillation and other related works such as DeepDream, Inverting Image, etc., in the following aspects:
1. In the squeeze phase, we have crafted a new BN-based ViT architecture specifically for the task of data condensation. Interestingly, we have also observed that a higher-performing model in the squeeze phase does not necessarily translate to superior knowledge for the subsequent recover and relabel processes. We discovered this through extensive experiments with various data augmentation techniques, which helped identify the strategies that most enhance the squeeze procedure.
2. In the recover phase, we propose the new perspective that not all regularizers used in image synthesis are useful for dataset condensation. We performed systematic ablation studies on various regularizers to understand their impact on relabeling and final training. Additionally, we introduced a simple yet highly effective multi-crop optimization technique, which significantly elevated performance.
3. In the relabel phase, we go beyond the straightforward application of FKD. We enhance it by developing a novel soft label storage mechanism. This innovative solution ensures FKD's compatibility with mixture-based data augmentation techniques like Mixup and CutMix, which is not supported in FKD's vanilla design.
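One plausible way to make stored soft labels compatible with mixture-based augmentation, sketched below, is to mix the two source images' soft-label distributions with the same coefficient $\lambda$ used to mix the pixels. This is a hypothetical simplification for illustration, not the paper's actual storage/loading mechanism (which is not described in code here):

```python
import random

def mixup_pair(x1, x2, soft1, soft2, alpha_lam=None):
    """Mix two images and their stored soft labels with the same coefficient.

    x1, x2: flat lists of pixel values; soft1, soft2: teacher soft-label
    distributions for the two images. lam is drawn uniformly here for
    simplicity (Mixup normally samples it from a Beta distribution).
    """
    lam = random.random() if alpha_lam is None else alpha_lam
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(soft1, soft2)]
    return x, y
```

Because both distributions are convex-combined with the same weight, the mixed soft label remains a valid probability distribution.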
> W2 (1). Previous works typically use ConvNet for dataset distillation while this work mainly considers ResNet. I am not arguing that the setting must be the same. However, I do think that an ablation study is necessary to show the improvement coming from the architectures.
Thanks for the valuable comments. As suggested, we've incorporated additional ablation experiments to reconstruct images using ConvNet, as shown in the table below. To integrate BN-matching within $\texttt{SRe2L}$, we've added BatchNorm operations after each convolutional layer. After training the ConvNet-BN model for 90 epochs, we obtained a well-optimized squeezed model with a Top-1 accuracy of 58.62%. The results under IPC=10 across various post-training architectures still outperform TESLA by large margins, highlighting $\texttt{SRe2L}$'s proficiency even with a non-residual model.
| Squeezed Model (IPC=10) | ResNet-18 | ResNet-50 | ResNet-101 |
| ------- |:---------:|:---------:|:----------:|
| TESLA | 7.7 | - | - |
| ResNet-18 (paper) | 21.3 | 28.4 | 30.9 |
| Our ConvNet (relabel w/ itself) | 17.0 | 20.5 | 21.2 |
| Our ConvNet (relabel w/ ResNet-18) | 12.8 | 15.7 | 17.1 |
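The BN-matching objective referred to here (matching the statistics of a synthetic batch to a layer's stored running statistics, in the style of DeepInversion-based recovery) can be sketched as follows. This is a simplified stand-alone version operating on plain per-channel activation lists; the actual implementation works on convolutional feature maps inside the network, and any weighting (e.g. by $\alpha_{BN}$) is omitted:

```python
def bn_matching_loss(feats, running_mean, running_var):
    """Squared distance between batch statistics of synthetic activations
    and the BatchNorm running statistics stored in the squeezed model.

    feats: list of N activation rows, each with C channel values.
    running_mean, running_var: length-C running statistics of one BN layer.
    """
    n = len(feats)
    loss = 0.0
    for j in range(len(feats[0])):
        col = [row[j] for row in feats]
        mu = sum(col) / n
        var = sum((v - mu) ** 2 for v in col) / n  # biased variance, as BN uses
        loss += (mu - running_mean[j]) ** 2 + (var - running_var[j]) ** 2
    return loss
```

The loss is zero exactly when the synthetic batch reproduces the stored per-channel mean and variance.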
> W2 (2). A sensitivity analysis with respect to $\alpha_{BN}$ is necessary.
The experimental ablation results for $\alpha_{BN}$ are presented in the table below. $\alpha_{BN}=0.01$ achieves the highest accuracy on ResNet-101 and is adopted in our paper, while $\alpha_{BN}=0.1$ obtains slightly better performance on the ResNet-18 and ResNet-50 architectures. We will include these ablation results in our revised paper.
| $\alpha_{BN}$ | ResNet-18 | ResNet-50 | ResNet-101 |
| ------------- |:---------:|:---------:|:----------:|
| 0.001 | 45.87 | 54.92 | 56.95 |
| 0.01 | 46.75 | 55.62 | **60.81** |
| 0.1 | **47.83** | **56.19** | 58.36 |
| 1.0 | 46.87 | 55.24 | 57.70 |
> W2 (3). I do understand that the proposed method is mainly for large-scale datasets. But it is also interesting to dynamically increase the size of datasets and compare the performance with the existing methods.
Thank you for the insightful suggestion. As recommended, we undertook further experiments from two dimensions: increasing resolution and increasing the number of classes. The details are presented in the table below.
For the distillation of the *IN-1K-64x64* dataset, we adhered to the ResNet architecture previously employed for Tiny-ImageNet, adjusting the output dimension to accommodate 1K classes. When distilling the *IN-1K-112x112* dataset, we utilized the aforementioned adapted model and activated the max-pool layer to suit the 112 $\times$ 112 resolution.
For the *IN-100-224x224* and *IN-10-224x224* datasets, we curated corresponding sub-datasets and aligned the standard ResNet's output dimension to match the respective class numbers. A trend emerged from our findings. As the number of classes decreased, accuracy exhibited an upward trajectory. This trend aligns seamlessly with established learning principles. The results from our experiments demonstrate that our approach is adaptable, catering not just to large datasets but also to datasets of diverse scales.
| Dataset (IPC=50)| ResNet-18 | ResNet-50 | ResNet-101 |
| ------- |:---------:|:---------:|:---------:|
| IN-1K-224x224 (paper) | 46.75 | 55.62 | 60.81 |
| IN-1K-112x112 | 34.15 | 42.76 | 45.25 |
| IN-1K-64x64| 35.27 | 42.26 | 44.37 |
| | | | |
| IN-1K-224x224 (paper) | 46.75 | - | - |
| IN-100-224x224 | 52.70 | - | - |
| IN-10-224x224 | 73.00 | - | - |
> L1. The authors should clarify more limitations of the specific solution they proposed instead of those general issues of dataset distillation.
Thanks for the suggestion. Beyond the usual constraints associated with dataset distillation, we have noticed an additional limitation of our approach: the proposed framework requires storing extra soft labels for the synthetic dataset, leading to increased disk storage consumption. In other aspects, our approach shows significant advantages over other methods.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: Thanks the author for the informative rebuttal! It clears most of my concerns. Here are some remaining ones:
1. The authors have provided some results on down-sampled ImageNet datasets to illustrate the performance on small datasets, which conducts the ablation of dataset sizes via changing resolutions. I am curious about the studies with respect to changing the number of images and the number of classes. For example, how about the performance on small-scale datasets like CIFAR? The authors have mentioned in the supplement that the method is for large-scale datasets and the results on small datasets are not competitive. I definitely understand this. Nevertheless, I think it is important to report these results or some other forms of ablation studies on the number of images to help readers understand what sizes of datasets can benefit from this method.
2. I notice that the authors use a trick of "Multi-crop Optimization", which is related to some recent works on synthetic data parameterization [a, b, c], and the performance is indeed sensitive to this operation. The authors are encouraged to have a discussion with these related works.
[a] Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022)
[b] Dataset Distillation via Factorization (Songhua Liu et al., NeurIPS 2022)
[c] Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (Zhiwei Deng et al., NeurIPS 2022)
---
Reply to Comment 1.1.1:
Title: Further response to the comments
Comment: Thanks very much for the additional insightful comments and suggestions.
>1. The authors have provided some results on down-sampled ImageNet datasets to illustrate the performance on small datasets, which conducts the ablation of dataset sizes via changing resolutions. I am curious about the studies with respect to changing the number of images and the number of classes. For example, how about the performance on small-scale datasets like CIFAR? The authors have mentioned in the supplement that the method is for large-scale datasets and the results on small datasets are not competitive. I definitely understand this. Nevertheless, I think it is important to report these results or some other forms of ablation studies on the number of images to help readers understand what sizes of datasets can benefit from this method.
Adjusting the number of images can have a notable impact. Generally, having more images for each class in the original dataset tends to result in better-trained models. These improved models can often reproduce higher-quality images. As for altering the number of classes, we have presented the ImageNet results corresponding to varying class numbers, specifically, 1K, 100, and 10 classes, in the last three rows of the table in rebuttal of **W2 (3)**.
Here, we incorporate additional ablation experiments on the CIFAR-10/100 datasets and the results are shown in the table below. The adapted ResNet-18 is utilized as a backbone model throughout our SRe2L's three phases. Our prior ablation findings showed that the proposed approach excels particularly with ImageNet scale, especially with resolutions exceeding 64 $\times$ 64 and classes numbering more than 100. On the relatively large CIFAR-100 dataset with more classes, our results parallel those of leading-edge methods, such as DM, CAFE, and MTT, among others. However, on the small CIFAR-10, the gap is clearly observed. Overall, the new CIFAR experiments suggest that our approach might not offer significant benefits for lower-resolution 32 $\times$ 32 datasets, such as CIFAR-10/100. We will include these results with discussions in our revised paper.
| IPC | CIFAR-100 | CIFAR-10 |
| --- |:---------:|:--------:|
| 50 | 49.37 | -- |
| 100 | 54.08 | 60.97 |
| 200 | 57.86 | 71.33 |
>2. I notice that the authors use a trick of "Multi-crop Optimization", which is related to some recent works on synthetic data parameterization [a, b, c], and the performance is indeed sensitive to this operation. The authors are encouraged to have a discussion with these related works.
[a] Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022)
[b] Dataset Distillation via Factorization (Songhua Liu et al., NeurIPS 2022)
[c] Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (Zhiwei Deng et al., NeurIPS 2022)
Thanks for introducing these related papers. IDC [a] integrates multi-formation, updating various cropped regions in every iteration. When examined in our framework, the results indicated only a slight increase in accuracy, but this was offset by a considerably longer recovery time. Conversely, our approach utilizes a single-formation, updating just one cropped area at a time. This strategy is consistent with the RandomResizedCrop operation used during the post-training phase.
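The single-formation strategy described above (updating one randomly sampled crop region per iteration, mirroring the RandomResizedCrop used at post-training time) can be sketched as follows. This is a hypothetical simplification: square crops, images as plain nested lists, and a placeholder gradient function standing in for the actual recover loss:

```python
import random

def sample_crop(h, w, min_frac=0.3):
    # Hypothetical square-crop sampler standing in for RandomResizedCrop
    # (no aspect-ratio jitter, uniform side length for simplicity).
    side = random.randint(max(1, int(min_frac * min(h, w))), min(h, w))
    top = random.randint(0, h - side)
    left = random.randint(0, w - side)
    return top, left, side

def update_one_crop(image, grad_fn, lr=0.1):
    """Gradient-update only one randomly sampled crop of the synthetic image.

    image: 2D list of floats; grad_fn maps a crop (2D list) to a gradient
    of the same shape (an assumed placeholder for the recover objective).
    Pixels outside the sampled crop are left untouched this iteration.
    """
    h, w = len(image), len(image[0])
    t, l, s = sample_crop(h, w)
    crop = [row[l:l + s] for row in image[t:t + s]]
    grad = grad_fn(crop)
    for i in range(s):
        for j in range(s):
            image[t + i][l + j] -= lr * grad[i][j]
    return image
```

Compared with a multi-formation scheme that updates several regions per iteration, only one region is touched per step here, which keeps the per-iteration cost low.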
[b] proposes a hallucinator-basis factorization method for the dataset distillation task. It leverages hallucinators to encode inner relations between different samples in original datasets, also introduces a pair of adversarial contrastive constraints to diversify the knowledge captured by different hallucinators. [b] argued that the data augmentations, e.g., multi-crop, cannot encode any information about the target datasets, and further proposes their approach to enhance the informativeness gained in distilled data.
[c] proposes to learn a set of bases/memories which are shared between classes and combined through learned flexible addressing functions to generate a diverse set of training examples.
We will definitely add the discussions with these related works in our revision. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our gratitude for your insightful feedback and comments, which have been helpful in updating and enhancing our submission. We kindly invite you to review our author rebuttal so that we may address any further questions you may have or clarify any points that remain unclear. In summary, our rebuttal mainly includes the following:
* We carry out theoretical analysis of generalization error bound through Input Compression Bound (ICB) and Mutual Information (MI) between the original/condensed input and the final layer representations. (Reviewer 4huB)
* We present results on ConvNet for dataset distillation, a sensitivity analysis with respect to $\alpha_{BN}$, and additional experiments on increasing resolution and increasing the number of classes. (Reviewer QGHe)
* We break down the whole training time consumption at each stage to demonstrate the superior efficiency of our proposed approach. (Reviewer RJ8s)
* We provide a detailed explanation of BN-ViT architecture. (Reviewer RJ8s)
* We provide the comparison between our dataset condensation and directly sampling the same scale of images. (Reviewer RJ8s)
* We expand our analysis to include additional comparisons with the baseline approach. (Reviewer MBn1)
* We present the comparison and discussion of the cross-architecture generalization ability. (Reviewer 7q3V)
We hope our responses can adequately address your concerns. We will integrate all the comments presented in the rebuttal into the revised paper, and we sincerely appreciate your valuable feedback.
Best,
Authors | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a new dataset condensation framework, namely Squeeze, Recover, and Relabel (SRe2L).
SRe2L decouples the optimization of model and synthetic data during training, enabling effective condensation across varying dataset scales, model architectures, and image resolutions.
The authors highlight advantages such as arbitrary resolution synthesis, low training cost and memory consumption, and scalability to different evaluation network architectures.
Extensive experiments on Tiny-ImageNet and full ImageNet-1K datasets demonstrate its improved performance compared to state-of-the-art methods.
SRe2L also outperforms the MTT approach in terms of speed and memory consumption during data synthesis.
Overall, SRe2L presents a powerful solution for dataset condensation with improved performance and efficiency.
Strengths: 1. Clear Paper Organization: The paper exhibits a well-structured organization that aids in comprehending the presented concepts, methodologies, and experimental results. The logical flow of information allows readers to follow the paper's contributions easily.
2. Novel Framework: The authors propose a new dataset condensation framework, Squeeze, Recover, and Relabel (SRe2L), which offers a fresh perspective on addressing the data condensation problem. While some technical details may not be entirely novel, the paper presents an alternative solution to data condensation, introducing new ideas and approaches.
3. Surprising Performance: The scalability of data condensation to large datasets and deep networks poses significant challenges. The paper's performance on ImageNet-level datasets is particularly impressive, demonstrating the effectiveness and robustness of the proposed framework in handling complex datasets and achieving high validation accuracy.
Weaknesses: 1. Limited Theoretical Analysis: While the paper presents impressive empirical results and demonstrates the effectiveness of the proposed framework, it lacks a comprehensive theoretical analysis. More theoretical analysis, such as error bounds or performance upper bounds, would provide a deeper understanding of the underlying principles and limitations of the proposed approach. Incorporating theoretical analysis could further strengthen the paper's contributions and provide insights into the algorithm's behavior and performance guarantees.
Overall, the paper is well-structured, introduces a novel framework, and achieves remarkable performance on challenging datasets. However, enhancing the theoretical analysis would add a valuable dimension to the paper and provide a more comprehensive evaluation of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We are encouraged that you find our work novel with surprising performance on scalability. We would like to address the comments and questions below.
>W1. Limited Theoretical Analysis: While the paper presents impressive empirical results and demonstrates the effectiveness of the proposed framework, it lacks a comprehensive theoretical analysis. More theoretical analysis, such as error bounds or performance upper bounds, would provide a deeper understanding of the underlying principles and limitations of the proposed approach. Incorporating theoretical analysis could further strengthen the paper's contributions and provide insights into the algorithm's behavior and performance guarantees.
Thanks for the insightful suggestion of incorporating a more theoretical perspective into our current analysis. Estimating the generalization error (GE) of deep neural networks is a standard way to evaluate their ability to generalize, and the data condensation task aims to train on the condensed data while achieving good performance on the original validation data. We have therefore adopted this approach to assess the generalization capability of our condensed dataset, analyzing the generalization/error bounds for models trained on the original data versus the condensed data.
More specifically, we employ the Mutual Information (MI) between the original/condensed input and the final layer representations to carry out this analysis, using the same network architecture limit to bound MI, in line with the methodology outlined in [1]. To elaborate further:
The MI between two variables $X$ and $D$ is:
$$I(X ; D) \equiv \sum_{x, d} p(x, d) \log \frac{p(x, d)}{p(x) p(d)}=\mathbb E_{p(x, d)}\left[\log \frac{p(d \mid x)}{p(d)}\right] \tag{1}$$
where $X$ is the input sample and $D$ is its representation, i.e., the model's output. The *leave-one-out* upper bound (UB) [2] can be utilized to conservatively bound MI:
$$
I(X ; D) \leq \mathbb{E}\left[\frac{1}{N} \sum_{i=1}^{N} \log \frac{p\left(d_{i} \mid x_{i}\right)}{\frac{1}{N-1} \sum_{j \neq i} p\left(d_{i} \mid x_{j}\right)}\right]=I_{\mathrm{UB}} \tag{2}
$$
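Equations (1)–(2) can be checked on a tiny discrete toy example with hand-picked conditional probabilities (this is only an illustrative sketch, not the estimator used in the analysis):

```python
import math

def loo_mi_upper_bound(cond_p, pairs):
    """Leave-one-out MI upper bound (Eq. 2) for one batch of (x, d) pairs.

    cond_p[x][d] gives p(d | x); pairs is a list of sampled (x_i, d_i).
    Each term compares p(d_i | x_i) against the leave-one-out average of
    p(d_i | x_j) over the other samples in the batch.
    """
    n = len(pairs)
    total = 0.0
    for i, (xi, di) in enumerate(pairs):
        denom = sum(cond_p[xj][di] for j, (xj, _) in enumerate(pairs) if j != i) / (n - 1)
        total += math.log(cond_p[xi][di] / denom)
    return total / n
```

When $D$ is independent of $X$ every term vanishes (the bound is 0, matching $I(X;D)=0$), while a near-deterministic $p(d \mid x)$ yields a strictly positive bound.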
Following information theory fundamentals and applying the conventional Probably Approximately Correct (PAC) generalization bound, we obtain the bound on GE as:
$$
\mathrm{GE}<\sqrt{\frac{\log (|\mathcal{H}|)+\log (1 / \delta)}{2 N_{\mathrm{trn}}}} \tag{3}
$$
where $|\mathcal H|$ is the hypothesis-class cardinality and $N_\mathrm{trn}$ is the number of training examples. For the synthetic data, $N_\mathrm{trn}=|\mathcal C_\mathrm{syn}|$, while for the full data, $N_\mathrm{trn}=N_\mathrm{full}$. The confidence parameter $\delta \in (0, 1)$ specifies the failure probability: the bound holds with probability at least $1-\delta$ over the choice of the $N_{\mathrm{trn}}$ training samples.
According to the properties of deep neural networks [1], the cardinality of the hypothesis space reduces to $|\mathcal{H}| \approx 2^{|\mathcal T|}$, where $|\mathcal T|$ is the number of class-homogeneous clusters that the backbone network distinguishes. An estimate of the number of clusters can then be obtained by $|\mathcal T| \approx 2^{H(X)} / 2^{H(X \mid D)}=2^{I(X ; D)}$.
The Input Compression Bound (ICB) [3, 4] predicts changes in GE under different dataset interventions; it can be formulated as:
$$
\mathrm{GE}_{\mathrm{ICB}} < \sqrt{\frac{2^{I(X ; D)} + \log (1 / \delta)}{2 N_{\mathrm{trn}}}} \tag{4}
$$
Thus, we can have the generalization error bound for the condensed data as:
$$
\mathrm{GE}^{\mathrm{syn}}_{\mathrm{ICB}}<\sqrt{\frac{2^{I(X ; D)}+\log (1 / \delta_\mathrm{syn})}{2 |\mathcal C_{\mathrm{syn}}|}} \tag{5}
$$
where the generalization error bound for the full dataset is $\mathrm{GE}^{\mathrm{full}}_{\mathrm{ICB}}<\sqrt{\frac{2^{I(X ; D)}+\log (1 / \delta_\mathrm{full})}{2 N_{\mathrm{full}}}}$.
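For illustration, the ICB bound of Eq. (4) can be evaluated numerically. The MI value and dataset sizes below are hypothetical placeholders, chosen only to show how a smaller condensed set loosens the bound relative to the full dataset:

```python
import math

def ge_icb(mi_bits, n_train, delta=0.05):
    """ICB generalization bound (Eq. 4):
    GE < sqrt((2^I(X;D) + log(1/delta)) / (2 * N_trn))."""
    return math.sqrt((2.0 ** mi_bits + math.log(1.0 / delta)) / (2.0 * n_train))

# Hypothetical numbers for illustration only: the same MI estimate, a
# condensed set of 10k images (e.g. IPC=10 over 1K classes) vs. the
# ~1.28M-image full ImageNet-1K training set.
bound_syn = ge_icb(mi_bits=12.0, n_train=10_000)
bound_full = ge_icb(mi_bits=12.0, n_train=1_281_167)
```

With identical MI, the condensed-data bound is larger simply because $N_\mathrm{trn}$ is smaller, matching the relation between Eq. (5) and the full-data bound.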
We will include a more detailed and well-organized theoretical analysis into the revised version of our paper.
### References
[1] Angus Galloway, Anna Golubeva, Mahmoud Salem, Mihai Nica, Yani Ioannou, and Graham W. Taylor. "Bounding generalization error with input compression: An empirical study with infinite-width networks." Transactions on Machine Learning Research (TMLR) 2022.
[2] Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On Variational Bounds of Mutual Information. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5171–5180. 2019.
[3] Naftali Tishby. Information Theory of Deep Learning, 2017.
[4] Ravid Shwartz-Ziv, Amichai Painsky, and Naftali Tishby. Representation Compression and Generalization in Deep Neural Networks. OpenReview, 2019.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I would like to express my appreciation to the authors for their detailed and insightful response to my review. Their effort in providing additional theoretical support has indeed addressed my initial concerns and has significantly enriched my understanding of the paper's contributions.
The authors' response has provided a clear and compelling explanation of how their approach fits into the broader landscape of related methods. The clarification regarding the theoretical underpinnings of their method, along with the comparisons to existing techniques, has solidified my confidence in the novelty and importance of their work.
I commend the authors for their diligence and thoughtful responses, and I look forward to the continued refinement and impact of their work in the field.
---
Reply to Comment 1.1.1:
Title: Thanks for your comments
Comment: We sincerely thank you for your acknowledgment and uplifting comments. Wishing you a wonderful day. | null | null | null | null | null | null |
Provable Guarantees for Neural Networks via Gradient Feature Learning | Accept (poster) | Summary: This paper proposes a general framework for analyzing feature learning in two-layer ReLU neural networks. The idea is to consider the class of two-layer ReLU networks with “gradient features”, i.e. features aligned with the gradients of the loss induced by the distributions of the data and initial model parameters. The main result (Theorem 3.12) is that gradient descent on the two-layer ReLU network achieves generalization error close to the that of the optimal model in this class under weak assumptions. Instantiations of this result are provided for the special cases of the data being generated by a mixture of gaussians and parity functions. In these settings, the optimal loss among models in the gradient feature class is computed, with final generalization rates strictly improving over kernel methods and matching or surpassing the best known results in the feature learning literature.
Strengths: 1. The paper analyzes a highly relevant topic.
2. The idea is simple, yet powerful and novel, to my knowledge. The proposed notion of gradient features can likely be used to show feature learning guarantees for models beyond two-layer ReLUs. At present, it allows for competitive feature learning guarantees with very general boundedness assumptions on the data distribution and smoothness of the loss function.
3. The two instantiations of the main result are very helpful to concretize the significance of the main result. Both are very well-studied settings, and the provided guarantees strictly improve upon kernel methods in both cases, and match the rates for learning parities albeit with more general assumptions.
4. The presentation is clear and rigorous.
5. The related works are well-covered.
Weaknesses: 1. On its own, it is difficult to gauge the significance of the main result, since in general we do not know whether gradient features can lead to a good model. Indeed, it would be helpful to have more discussion of when gradient features can lead to a good model beyond the two special cases.
2. Related to above, a result showing that if the optimal model in the gradient feature class has large risk, then the ground-truth mapping from inputs to labels is not learnable by gradient descent would strengthen the paper.
3. Like other feature learning studies, the analysis leverages feature learning happening in the early steps of gradient descent, while subsequent updates are shown only to not corrupt the learned features, although it is unclear how well this aligns with practice.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Minor note: the restrictions on $\tau$ should appear in the statements of Theorems 4.4 and 4.8.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing thorough suggestions! For feature learning happening in the early steps of gradient descent, please refer to the global response above. Below we address the other comments. In short, we provide some failure cases that our framework cannot cover.
### More discussion
In the Appendix, we also provide a linear data model and multiple-index data models. We admit that for general data distributions without detailed information, we may not know whether the gradient feature is good. In our case study, we can build a “ground-truth” network on gradient features. However, for arbitrary data distribution or labeling functions, the “ground-truth” networks may not exist. See examples in the Failure case below.
On the other hand, one cannot hope for non-vacuous bounds for general problems, given the various hardness results of network learning on general problems, e.g., [1,2]. We agree that given a general problem, we may not have an easy way to compute the “complexity’’ quantity of the problem to get guarantees. This by itself is an interesting and challenging question: given a general problem, is it possible to determine if network learning can have non-vacuous guarantees? We conjecture a negative answer, but this is beyond the scope of the current work and left for future study. See more discussion in the global response above.
### Failure case
There are two failure cases we can think of currently:
- In [3], a function is constructed that is easy to approximate with a 3-layer network but not approximable by any 2-layer network. Since the function is not approximable by any 2-layer network, it cannot be approximated by gradient-induced networks either, so OPT will be large. As a result, the final error will be large.
- In the uniform parity data distribution, considering an odd number of features rather than an even one, i.e., $k$ is an odd number in Assumption E.30, we can show that our gradient feature set is empty even when $p$ in Equation (5) is exponentially small. Thus OPT is a positive constant, since a gradient-induced network can only output constants. Meanwhile, the neural network cannot learn this data distribution because its gradient is always 0 throughout training, and the final error equals OPT.
- The above two cases give examples that if the optimal model in the gradient feature class has a large risk, then the ground-truth mapping from inputs to labels is not learnable by gradient descent.
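The parity failure case can be illustrated by brute force: over uniform inputs in $\{-1,+1\}^d$, a parity label over a subset $S$ with $|S| \ge 2$ is uncorrelated with every individual coordinate, so early population gradients carry no directional signal. This is only a toy check, not the formal argument behind Assumption E.30:

```python
from itertools import product

def parity_coordinate_correlation(d, subset, coord):
    """E[chi_S(x) * x_coord] over the uniform distribution on {-1, +1}^d,
    where chi_S(x) is the parity (product) of the coordinates in `subset`."""
    total = 0
    for x in product((-1, 1), repeat=d):
        label = 1
        for i in subset:
            label *= x[i]
        total += label * x[coord]
    return total / 2 ** d
```

For $|S| \ge 2$ the correlation is exactly zero for every coordinate (inside or outside $S$), whereas for $S = \{i\}$ the correlation with $x_i$ is 1, recovering the easy single-coordinate case.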
[1] Daniely, A., & Vardi, G. (2020). Hardness of learning neural networks with natural weights.
[2] Daniely, A., Srebro, N., & Vardi, G. (2023). Efficiently Learning Neural Networks: What Assumptions May Suffice?
[3] Safran, I., Eldan, R., & Shamir, O. (2019). Depth separations in neural networks: what is actually being separated?
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for your response. I have decided to maintain my score, and encourage the authors to add the failure cases to the final version, perhaps in the appendix.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your comments and suggestions! We will add the failure cases to our revision. Thank you for your time! | Summary: This paper proposes a general framework of feature learning for two-layer neural networks trained with gradient descent. This framework covers a variety of classification losses and data distributions. Specifically, the authors establish that the loss of neural networks trained with gradient descent is comparable to that of networks with first layers in the direction of a simplified gradient and optimal second layers. The framework is then specialized to multiple examples, including Gaussian mixture classification and learning parity functions.
Strengths: * The presented framework is rather general and unifies the approaches of several recent works, while covering distributions that were not previously covered in the literature.
* The studied problem is exciting and significant for the community as the existence of such a framework is necessary given the ad-hoc nature of current feature learning analyses.
* The related literature is covered extensively.
Weaknesses: * The general guarantees provided by the framework (Theorem 3.12) are limited to the performance of the optimal network with first-layer directions approximately aligned with the gradient. Such a statement does not directly imply feature learning for general problems/distributions, as a key part of feature learning seems to be showing that the gradient directions are indeed useful. To that end, the guarantees are more or less similar to prior work and suboptimal in some cases (e.g. Gaussian mixtures).
* The sample complexities of Theorems 4.4 and 4.8 are only stated as polynomials in problem parameters, thus while outperforming kernel methods, provide limited insight in comparison with prior works. Furthermore, the $\tilde{O}(d^{1.5})$ sample complexity to learn an XOR label from a mixture of 4 Gaussians in Section 4.1.1 seems to be suboptimal, as [92] shows a sample complexity of $O(d)$ for learning the same problem.
* The presentation of the paper can be improved, with specific examples given below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Is it possible to characterize the reason behind the suboptimality in the $\tilde{\mathcal{O}}(d^{1.5})$ sample complexity of this work in comparison with the $O(d)$ sample complexity of [92] to learn a mixture of 4 Gaussians with XOR-like structure?
* A number of figures can be added to improve the presentation of the paper, e.g. for the definition of gradient features and introducing the Gaussian mixtures and the input distribution for learning parity functions.
* Including the choices of step size in Theorem 3.12 can provide additional insights. Specifically, it seems like the theorem is still operating in the regime of a few gradient steps for the first layer, similar to prior work. A more explicit discussion of the training of the first layer would clarify the similarities and distinctions with prior work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have mostly discussed the limitations of their work. It would help the readers better understand the limitations if there is also a discussion on how realistic the training assumption for the first layer is (i.e. only a few steps or many steps), and also the potential suboptimality in the rate of learning mixtures of Gaussians in comparison with prior work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing thorough suggestions! For the theorem still operating in the regime of a few gradient steps for the first layer, please refer to the global response above. Below we address the other comments. In short, we can **improve** our results from $\tilde{O}(d^{1.5})$ to be $\tilde{O}(d)$ in sample size under the setting of a mixture of 4 Gaussians with an XOR-like structure.
### Theorem 3.12 is limited to the performance of the optimal network with first-layer directions approximately aligned with the gradient
We agree that our framework cannot directly imply guarantees for general problems/distributions. We want to emphasize that our key contributions are the concept of gradient features and the idea of using networks with gradient features as baselines to quantify the learning errors. We view the current work as a first step in exploiting the full power of these ideas. Even when we use the framework with the current analysis, we can already unify the prototypical questions/analyses in existing work and also obtain interesting insights. This shows the great potential of the framework. See more discussion in the global response above.
### Suboptimal results in the mixture of Gaussians
[92] use an ODE to simulate the optimization process for the 2NN learning the XOR-shaped Gaussian mixture and give convincing evidence that $O(d)$ samples are enough to learn the XOR-shaped Gaussian mixture, yet they did not give a rigorous convergence guarantee for this problem. We successfully derived such a guarantee, at the cost of a slightly larger sample size $\tilde{O}(d^{1.5})$.
Moreover, we can improve our results from $\tilde{O}(d^{1.5})$ to $\tilde{O}(d)$ in sample size. Note that Lemma D.17 Equations (153) and (154) still hold if we choose $z = {\log n \over n^{1/2}}$. Then, every $n^{-{1\over 3}}$ term in the sample size changes to ${\log n \over n^{1/2}}$, and the probability term changes from $O(\exp(- n^{1\over 3}))$ to $O({1 \over n^2})$. We can then still get the final guarantee with $\tilde{O}(d)$ samples for a mixture of 4 Gaussians with an XOR-like structure. Our original submission did not provide the tightest bound; we will update it in our paper.
On the other hand, as we mentioned in the limitation section, our framework may or may not recover the width or sample complexity bounds in existing work. This is because our work analyzes general cases, and thus may not match or improve the bounds for special cases, since special cases have more properties that can be exploited to obtain potentially tighter bounds.
We proposed the two key ideas of gradient features and gradient feature-induced neural networks not only to show their ability to unify several current works but also to open a new direction of thinking about the learning process. These notions have the potential to be extended to multi-layer gradient features and multi-step learning, and this work is only our first step.
### Figures
Great suggestion! We made a plot about Gradient Feature under Mixture of Gaussians data in the **[additional rebuttal pdf](https://openreview.net/attachment?id=BMPAso6Sns&name=pdf)**. We will plot more figures in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and for the nice figure! I believe that this work presents an effective framework to unify recent approaches, however, I'm keeping my original score as I'm not entirely certain how much of the high-level intuitions obtained from this work are significantly novel in comparison to the existing literature.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We appreciate that the reviewer believes our work presents an effective framework to unify recent approaches. We will update our work with the new bound for the 4-mixture-of-Gaussians, more figures, and more insights. Thank you for your time and valuable suggestions! | Summary: This paper introduces a general framework for studying feature learning in two-layer NNs. This framework covers feature learning in different examples such as linear classification, mixture of Gaussians and parity functions (it also gives some intuition about the learned features). The neural network under study has one hidden layer and is initialized in a symmetric manner. The training algorithm used is mini-batch GD with fresh samples. In the first iteration, features are learned in some hidden neurons. During the next iterations both layers get updated; however, the change in the hidden layer is controlled.
Strengths: - The paper does not freeze the first layer's weights, instead it trains all parameters together and controls the change in the first layer's weights with small learning rate.
- The literature review is quite extensive; the paper can generally help people interested in feature learning to become familiar with this direction of research. The paper's framework has also been applied to several different learning problems.
- I personally liked the connection to the lottery ticket hypothesis. I'd suggest moving (more of) it to the main.
Weaknesses: - There are a few limitations which are quite common in the current literature of deep learning theory and have been discussed in the paper: e.g., the feature learning is actually done in the first iteration of gradient descent. See below for some questions regarding the limitations of the framework.
- Personally I wonder if there are new insights given by this framework or not? (Also see below)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Q1. Is it possible to extend the results to the continuous setting and $\ell_2$ loss function (at the beginning $\ell_2$ also acts like $-y\hat{y}$)
- Q2. What are the general insights given by the framework (other than the unification)?
- Q3. Is it possible to use other activation functions?
- Q4. Why in Theorems 33 and 46 the algorithm that trains both layers is not used?
Suggestions:
- It would be really great if some intuition about the gradient feature learning framework can be provided (maybe even in the abstract).
- The parity formulation in section 4.2 was difficult to understand for me; maybe some changes would help.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical work and there is not negative societal impact. I think the limitations have been discussed adequately (nonetheless see above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing thorough suggestions! For the limitation of the first gradient descent learning and Q2 General insights, please refer to the global response above. Below we address the other comments.
### Q1 Continuous setting and square loss
It is possible to extend to a continuous setting, and this is ongoing work. We can define a gradient feature distribution rather than a gradient feature set. However, we find the technical tools used in the continuous setting are quite different from the discrete version.
For the $\ell\_2$ loss, it works well in the early stage where feature learning happens. However, in the later-stage analysis (online convex learning part), we need to show that the hidden layer weights stay in a small neighborhood while the top layer weights get updated to a good solution. For the squared loss, it may still be possible to do this control. The current argument easily bounds the change for logistic/hinge losses, but it cannot be directly adopted in the case of squared loss (due to its faster growth). We conjecture a more careful step-by-step inductive argument is needed for that case.
### Q3 Activation functions
Yes, we can replace the ReLU activation function with a sub-linear activation function, e.g., leaky ReLU or sigmoid, and obtain a similar conclusion. First, we need to introduce a corresponding gradient feature set, and then we can follow the same analysis pipeline. For simplicity, we present ReLU only.
### Q4 Theorems 33 and 46
For Theorem 33, we provided Theorem 42 as an alternative version that trains both layers. We provide Theorem 33 because (1) it serves as a warm-up and (2) it follows the original analysis in [1], giving a comparison.
For Theorem 46: this is because we would like to **unify** previous work. [2] are very closely related to our framework: their analysis for multiple index data follows the same principle and analysis approach as our general framework, although it does not completely fit into our Theorem 3.12 due to some technical differences. We can cover it with our Theorem 3.4.
- The same principle and analysis approach: [2] shows that the first layer learns good features by one gradient step update, which can approximate the true labels by a low-degree polynomial function. Then, a classifier (the second layer) is trained on top of the learned first layer which leads to the final guarantees. This is consistent with our framework: we first show that the first layer learns good features by one gradient step update, which can approximate the true labels, and then show a good classifier can be learned on the first layer.
- Technical differences: First, in the second stage, [2] fix the first layer and only update the top layer, which is a convex optimization. Our framework allows updates in the first layer and uses online convex learning techniques for the analysis. Second, they consider the square loss (this is used to calculate Hermite coefficients explicitly for gradients, which are useful in the low-degree polynomial function approximation), whereas our online convex learning analysis needs boundedness of the derivative of the loss to show that the first layer weights’ changes are bounded in the second stage. Given the above two technicalities, we analyze their training algorithm (Algorithm 2), which fixes the first layer weights; it currently does not fit directly into our Theorem 3.12 but can fit into Theorem 3.4.
### Parity formulation
We will polish the formulation to make the setting clearer. We provide a high-level intuition here. We have $r$ parity functions, each corresponding to a block of $k$ dimensions; $\mathcal{X}\_{j,+}$ and $\mathcal{X}\_{j,-}$ stand for the components providing a strong signal for the $j$-th parity; $\mathcal{X}\_U$ corresponds to a uniform distribution unrelated to any parity and providing a weak learning signal; $A^\perp$ is the noise part. The label depends on the sum of the $r$ parity functions.
[1] Barak, B., Edelman, B., Goel, S., Kakade, S., Malach, E., & Zhang, C. (2022). Hidden progress in deep learning: Sgd learns parities near the computational limit.
[2] Damian, A., Lee, J., & Soltanolkotabi, M. (2022). Neural networks can learn representations with gradient descent.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I think the paper can benefit from including some of these discussions. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your positive and helpful feedback! We will make sure to include these discussions in the revision. | Summary: The paper defines the concept of "gradient features" which capture the features that the network can learn after one step of gradient descent. The paper then instantiates this framework to prove optimization and generalization guarantees for various statistical learning problems.
Strengths: - The paper develops a general framework which formalizes the "one step" feature learning trick that has recently become popular. This provides an easy-to-use framework for deriving sharp sample complexity guarantees for a number of well-defined statistical learning problems.
- The paper instantiates this framework in a number of settings (mixtures of Gaussians, parity, multi-index models) and re-derives a number of sample complexity results.
Weaknesses: - It appears that this framework can only handle features that can be learned directly at initialization (i.e. through the "one-step trick"). In particular, it seems unable to handle multi-step feature learning (e.g. the merged staircase property in [1]).
[1] Abbe et al. (2022) "The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks"
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: - What is the role of the $s$ in the gradient feature? It seems to encode the sign of $b$ but I don't immediately see why this is important.
- For simple $k$-parity, my understanding is that the learned gradient feature $D_1$ is $1_{A}$, the indicator function for the subset $A$. This would therefore compute parity by computing the parity of $\sum_{i \in A} x_i$. Is this correct?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing thorough suggestions! For the "one-step trick" question, please refer to the global response above. Below we address the other comments.
### More about multi-step feature learning
We would like to mention that the early-stage analysis (“one-step trick”) is an important and necessary foundation for multi-step analysis. In our analysis, we show that when training time is at a certain range (denoted as $T$), during online convex optimization, the first layer weights will stay in a small regime while the second layer weights will converge to a good classifier based on these gradient features. However, the NN may improve its performance if we continue training NN beyond $T$. Our gradient features are defined based on initialization, i.e., $f^{(0)}$. It is natural to think about whether we can define a new gradient feature set upon $f^{(T)}$ and whether we can get a better guarantee based on the new gradient feature set. If the answer is yes, then we could provide a framework with multi-stage/step feature learning.
In [1], they introduce a very sophisticated data distribution whose properties are exploited to address the above challenges smartly. The analysis of multi-step feature learning in a general framework is still open and will be our future work, as we mentioned in our Conclusion section “While the current framework focuses on the gradient features in the early gradient steps, whether feature learning also happens in later steps and if so how to formalize that?”.
### Role of $s$
Yes, the $s$ encodes the sign of the bias term, which is important. Recall that we do not update the bias term for simplicity. Let’s consider a simple toy example. Assume we have $f_1(x) = a_1 ReLU(w_1^\top x + 1)$, $f_2(x) = a_2 ReLU(w_2^\top x - 1)$ and $f_3(x) = a_3 ReLU(w_3^\top x + 2)$.
- The sign of the bias term is important. We can see that we always have $a_1 ReLU(w_1^\top x + 1) \neq a_2 ReLU(w_2^\top x - 1)$ for any $a_1, w_1, a_2, w_2$. It means that $f_1(x)$ and $f_2(x)$ are intrinsically different and have different active patterns. Thus, we need to handle the sign of the bias term carefully.
- The scaling of the bias is absorbed. On the other hand, we can see that $a_1 ReLU(w_1^\top x + 1) = a_3 ReLU(w_3^\top x + 2)$ when $a_1 = 2 a_3, 2w_1 = w_3$. It means the scale of the bias term is less important, which can be absorbed into other terms.
Thus, we only need to handle bias with different signs carefully.
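As a quick numeric sanity check of the absorption claim (a toy illustration of ours with arbitrary example values, not part of the paper's analysis):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(0)
w1 = rng.normal(size=5)      # arbitrary example weights
a3 = 1.7                     # arbitrary example top-layer weight
a1, w3 = 2 * a3, 2 * w1      # the absorption choice: a_1 = 2 a_3, w_3 = 2 w_1

# f_1(x) = a_1 ReLU(w_1^T x + 1) agrees with f_3(x) = a_3 ReLU(w_3^T x + 2) everywhere
for _ in range(1000):
    x = rng.normal(size=5)
    assert np.isclose(a1 * relu(w1 @ x + 1), a3 * relu(w3 @ x + 2))
```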
### $k$-parity
Yes, it is correct that $D_1$ is $1_A$, the indicator function for the subset $A$ and we build the optimal neural network based on such directions.
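As a small sketch of this point (our illustration with a hypothetical subset $A$, not the paper's construction): for $x \in \\{-1,+1\\}^d$, the $k$-parity over $A$ is a function of the scalar projection $\sum_{i \in A} x_i = 1_A^\top x$ alone, so first-layer directions aligned with $1_A$ suffice to compute it:

```python
import itertools

import numpy as np

d, A = 6, [0, 2, 4]   # hypothetical: d inputs, relevant subset A of size k = 3
k = len(A)
for bits in itertools.product([-1, 1], repeat=d):
    x = np.array(bits)
    parity = np.prod(x[A])                    # the k-parity label
    s = x[A].sum()                            # projection of x onto the direction 1_A
    assert parity == (-1) ** ((k - s) // 2)   # parity is determined by the sum alone
```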
[1] Abbe, E., Adsera, E. B., & Misiakiewicz, T. (2022). The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I have decided to keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for reading our response! We are pleased that our response addressed your questions. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive and valuable feedback.
We are glad that all reviewers unanimously agree that our theoretical analysis is novel, exciting, powerful, and significant. Reviewers find that our paper provides an easy-to-use, general, and unified framework (zxbj, H9VP, Sj27) that makes it easy to derive sharp sample complexity guarantees that strictly improve upon kernel methods (zxbj, 3J3m, Sj27). Reviewers believe that our framework is necessary given the ad-hoc nature of current feature learning analyses (3J3m, H9VP). We are encouraged that reviewers agree that the paper can generally help people interested in feature learning become familiar with this direction of research (3J3m, H9VP, Sj27).
Here we address the early-stage feature learning problem raised by all reviewers and provide more insights about our framework. We will address the other comments in individual responses to each reviewer.
We agree that early-stage feature learning is a limitation of many latest feature learning analysis papers (zxbj), and we emphasize that our framework provides new insights including some for potentially going beyond the early-stage feature learning. The **challenges in the later-stage analysis** are: (1) the weights in the later stage will not be as normal as the initialization, and we need new tools to analyze their properties; (2) to show that the later-stage features eventually lead to a good solution, we may need new analysis tools for the nonconvex optimization due to the changes in the first layer weights.
On the other hand, when building the general framework, we indeed (1) pin down the key principle behind learning over different data distributions and (2) get new insights.
1. Our framework articulates the following key principles (pointed out for specific problems in existing work but not articulated more generally):
- **Role of gradient**: The gradient leads to the emergence of good features, which is useful for the learning of upper layers in later stages.
- **From features to solutions**: Features learned in the early steps are not distorted, and may even be improved, in later stages. The training dynamics for the upper layers will eventually learn a good combination of hidden neurons based on gradient features, giving a good solution.
2. Some other interesting insights are obtained from the generality of the framework:
- To build a general framework, meaningful error guarantees should be data-dependent, since NN learning on general data distributions is hard and data-independent guarantees would be vacuous. Comparing against the optimal in a family of “ground-truth” functions (inspired by agnostic learning in learning theory) is a useful method to obtain data-dependent bounds. We further construct the “ground-truth” functions using properties of the training dynamics, i.e., gradient features. This greatly facilitates the analysis of the training dynamics and is the key to obtaining the final guarantees.
- The framework can also be viewed as **using the optimum achieved by gradient-induced NNs to measure the “complexity”** of the problem. For easier problems, this quantity is smaller, and our framework can give a better error bound. So this provides a unified way to derive guarantees for specific problems.
- It is important to validate the effectiveness of a general framework by applying it to prototypical problems. Such applications can also help clarify what’s crucial or less crucial in existing analyses for specific problems.
- For an SGD-optimized NN, its **actual representation power** is from the subset of NN based on gradient features, instead of the whole set of NN. This view helps explain the simplicity bias/implicit regularization phenomenon of NN learning in practice.
- Our framework goes **beyond NTK** as we use features from gradients rather than features from random initialization. It means features from gradients are more powerful.
3. More broadly, our framework may give new perspectives about roadmaps forward.
- We argue a new perspective about the connection between the strong representation power and the successful learning of NN. **Traditionally, the strong representation power of NN is the key reason for hardness results of NN learning**: NN has strong representation power and can encode hard learning questions, so they are hard to learn. See the proof in SQ bound from [1] or NP-hardness from [2]. The strong representation power also causes trouble for the statistical aspect: it leads to vacuous generalization bounds when traditional uniform convergence tools are used.
- **Our framework suggests a new perspective in sharp contrast: the strong representation power of NN with gradient features is actually the key to successful learning**. More concretely, the optimal error of the gradient feature-induced NN being small (i.e., strong representation power for a given data distribution) can lead to a small guarantee, which is the key to successful learning.
- The above new perspective suggests a different analysis road than traditional ones. Traditional analysis typically first reasons about the optimum over the whole function class, i.e., the ground truth, and then analyzes how the NN learns proper features and reaches that optimum. In contrast, our framework defines the feature family first, and then reasons about the optimum based on it.
- Our framework provides the foundation for future work on analyzing gradient-based NN learning, which may inspire future directions including but not limited to (1) defining a new feature family for 2-layer NN rather than gradient feature, (2) considering deep NN and introducing new gradient features (e.g., gradient feature notion for upper layers), (3) defining different gradient feature family at different training stages (e.g., gradient feature notion for later stages).
[1] Daniely, A., & Malach, E. (2020). Learning parities with neural networks.
[2] Blum, A., & Rivest, R. (1988). Training a 3-node neural network is NP-complete.
Pdf: /pdf/652d9a7f69e7a78a27f9f1b5044e46dfbf9129d0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive Normalization for Non-stationary Time Series Forecasting: A Temporal Slice Perspective | Accept (poster) | Summary: The paper proposes a normalization technique that works on sliced time series, with the main goal of removing non-stationary behavior of the inputs (and outputs). The paper computes the mean and standard deviation of the sliced inputs and then normalizes the inputs by them. Additionally, the paper proposes to estimate the output normalization for y from the input mean and standard deviation.
Strengths: Problem statement is well presented.
Weaknesses: The paper proposes to normalize the slice of a time series by its mean and standard deviation of its inputs.
There is a critical issue with that: this is a **non-causal** normalization, i.e., inputs at the start of the time series are normalized by values observed later in the slice. This makes the proposed approach **unsuitable for forecasting**. It is very likely that the good empirical results presented in the paper are due to this leakage of future values of the inputs **x**.
Secondly, assuming that the first problem can be fixed, there are already very similar works presented in the literature. The authors do mention them but do not compare. Given the practical nature of the paper, these other approaches should be compared as the difference seems quite minor from a general theoretical view point. For example, DAIN (https://arxiv.org/pdf/1902.07892.pdf) is very similar and uses normalization by mean and standard deviations, mixed with neural networks. Please explain why the proposed method is novel compared to DAIN and provide experimental support that it actually improves upon DAIN.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Why should the proposed method perform better than DAIN, which also uses similar normalization ideas (mean and stdev).
* What is the variable that equation 1 is summing over?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations were mentioned in the paper. But clearly the choice of the length of the slicing window is a problem for the proposed method, compared to other methods. The authors should either clearly state that this is a limitation or make experiments showing that the method performance is not affected by the slice length.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgement and valuable feedback on our work; we would like to address your concerns as follows.
- In practice, **it is commonly assumed that time series data within a slice have the same distribution**. Existing normalization methods (such as DAIN and RevIN) also assume that the entire input/output series follows the same distribution. Therefore, conducting non-causal normalization is reasonable. Additionally, our statistics prediction module and backbone model are **trained solely on the training set**. In the testing phase, the mean and standard deviations for output series are **predicted based on the input series**, ensuring no issue of information leakage.
- DAIN is a pioneering work that introduces normalization into forecasting tasks. However, there are two main drawbacks of DAIN compared to our proposed SAN:
- Firstly, DAIN does not **restore the non-stationary information** to the output of forecasting models. For example, given two simulated series $x_1 = 0.1\sin(t)$, $x_2 = 100\sin(t)$, DAIN may normalize them into similar input series $\bar{x_1}\approx\bar{x_2}\approx \sin(t)$ for later processing. This can lead to the forecasting model predicting similar outputs that greatly violate the original scale. On the other hand, SAN effectively addresses this challenge by **learning to predict the future distribution and performing de-normalization operations with the predicted statistics**.
- Secondly, DAIN roughly assumes that the **entire input series follows the same distribution**. As illustrated in Figure 1 in our paper, it is clear that distribution changes occur within small slices and there exists a discrepancy in distribution throughout the whole series. To tackle this issue well, SAN proposes slice-level adaptive normalization.
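To make the normalize/de-normalize pipeline concrete, here is a minimal sketch of slice-level normalization and restoration (our illustration, not the paper's code; in SAN the output-slice statistics come from the learned statistics prediction module, whereas the round trip below simply reuses the true statistics):

```python
import numpy as np

def normalize_slices(x, slice_len):
    """Split a series into non-overlapping slices; normalize each by its own stats."""
    s = x.reshape(-1, slice_len)
    mu = s.mean(axis=1, keepdims=True)
    sigma = s.std(axis=1, keepdims=True) + 1e-8
    return (s - mu) / sigma, mu, sigma

def denormalize(y_norm, mu_hat, sigma_hat):
    """Restore forecasted slices to the original scale with (predicted) statistics."""
    return y_norm * sigma_hat + mu_hat

# Non-stationary toy series: the per-slice scale drifts over time.
t = np.linspace(0, 8 * np.pi, 96)
x = np.sin(t) * np.linspace(1, 10, 96)
xn, mu, sigma = normalize_slices(x, slice_len=12)
assert np.allclose(xn.mean(axis=1), 0, atol=1e-6)          # each slice is re-centered
assert np.allclose(denormalize(xn, mu, sigma).ravel(), x)  # de-normalization restores scale
```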
In summary, SAN is theoretically superior to DAIN. To further validate our claim, we conducted additional experiments comparing SAN and DAIN using DLinear on 4 datasets. The detailed results are provided in the following table which experimentally confirms the effectiveness of our method compared to DAIN.
| **Dataset** | **Pred_length** | **DLinear_SAN MSE** | **DLinear_SAN MAE** | **DLinear_DAIN MSE** | **DLinear_DAIN MAE** |
| ----------- | --------------- | ------------------- | ------------------- | -------------------- | -------------------- |
| **Electricity** | 96 | **0.137** | **0.234** | 0.203 | 0.314 |
| | 192 | **0.151** | **0.247** | 0.217 | 0.328 |
| | 336 | **0.166** | **0.264** | 0.229 | 0.339 |
| | 720 | **0.201** | **0.295** | 0.248 | 0.352 |
| **Exchange** | 96 | **0.085** | **0.214** | 1.289 | 0.916 |
| | 192 | **0.177** | **0.317** | 1.503 | 0.998 |
| | 336 | **0.294** | **0.407** | 1.857 | 1.106 |
| | 720 | **0.726** | **0.649** | 2.379 | 1.236 |
| **Weather** | 96 | **0.152** | **0.210** | 0.248 | 0.331 |
| | 192 | **0.196** | **0.254** | 0.286 | 0.357 |
| | 336 | **0.246** | **0.294** | 0.330 | 0.382 |
| | 720 | **0.315** | **0.346** | 0.398 | 0.425 |
| **ETTh1** | 96 | **0.383** | **0.399** | 0.701 | 0.647 |
| | 192 | **0.419** | **0.419** | 0.725 | 0.653 |
| | 336 | **0.437** | **0.432** | 0.743 | 0.661 |
| | 720 | **0.446** | **0.459** | 0.742 | 0.670 |
- Thank you for bringing the writing flaw in Equation 1 to our attention: the summation there is over time points. We will carefully review and rectify any similar errors to enhance the clarity of our presentation.
- Due to page limitations, we discussed the limitations in Section 4 of our supplementary material. We will provide a more detailed discussion of the limitations of our method in a later version of the paper to better illustrate potential drawbacks. Additionally, we studied the effect of slicing length in our supplementary material. Using SCINet as the backbone model, here are the corresponding results. It is evident that our proposed **SAN remains resilient to changes in slicing length**.
| Dataset | MSE (len 6) | MAE (len 6) | MSE (len 12) | MAE (len 12) | MSE (len 24) | MAE (len 24) | MSE (len 48) | MAE (len 48) |
| ------- | ----------- | ----------- | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| Electricity | 0.210 | **0.305** | 0.207 | **0.305** | **0.206** | 0.307 | 0.208 | 0.307 |
| Exchange | **0.892** | **0.712** | 0.895 | **0.712** | 0.901 | 0.715 | 0.898 | 0.714 |
| Traffic | 0.612 | **0.376** | 0.608 | **0.373** | **0.607** | 0.381 | 0.611 | 0.382 |
| Weather | **0.338** | 0.366 | **0.338** | **0.365** | 0.340 | 0.367 | 0.339 | 0.366 |
| ILI | **2.487** | **1.063** | 2.680 | 1.118 | n/a | n/a | n/a | n/a |
| ETTh1 | 0.491 | 0.475 | **0.488** | 0.474 | 0.489 | **0.473** | 0.492 | 0.474 |
| ETTh2 | 0.440 | 0.465 | **0.435** | 0.460 | 0.437 | **0.459** | 0.443 | 0.462 |
| ETTm1 | 0.495 | 0.469 | **0.450** | **0.441** | 0.611 | 0.503 | 0.463 | 0.448 |
| ETTm2 | **0.391** | 0.406 | **0.391** | **0.405** | 0.392 | **0.405** | 0.403 | 0.415 |
We hope that our responses have adequately addressed your concerns, and we appreciate the opportunity to clarify our work.
---
Rebuttal Comment 1.1:
Title: I keep my original scores unchanged
Comment: I would like to thank authors for their comments.
* Unfortunately, the key issue of **non-causality** of the method, is still not addressed. The non-causality here means that the method leaks future values into the final forecasts, both during training and during **validation** (**testing**).
* The authors claim that it is not an issue because the method uses forecasting during inference. However, the **Equation 3**, which specifies how $\hat\mu^i$ and $\hat\sigma^i$ are estimated, is also **non-causal**, i.e., the MLPs take as inputs the means and stdevs of **all slices** of the i-th input series ($\mu^i$ and $\sigma^i$). For example, to calculate the mean of the first slice ($\hat\mu^1$) one needs to have access to the means ($\mu^i_1$, $\mu^i_2$, ...) of **every future slice in i-th the series**.
Therefore, I will keep my original score, as the method as currently presented clearly leaks future values into the forecasts of $\hat\mu^i$ and $\hat\sigma^i$ and consequently into the final outputs of the models.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 3LTc
Comment: Thank you for your response. We would like to further elaborate on the functionality of Equation 3 and address any concerns regarding potential information leakage:
- Firstly, in the forecasting tasks, **the input series and the target series do not overlap**.
- Secondly, the future statistics $\hat{\mu}, \hat{\sigma}$ are **estimated only from the input/observed series' statistics** $\mu, \sigma$.
- As pointed out by the reviewer, the calculation of the mean $\hat{\mu}_1$ for the first target slice indeed requires accessing the means $\mu_1, \mu_2, \ldots$ of various slices. However, **it is crucial to note that these $\mu_i$ values are input/observed means, rather than future means.**
For example, under a setting of input-96-predict-192 with a slicing length of 12, we have a non-overlapping input series $x_i$ and the corresponding target series $y_i$, where $|x_i| = 96, |y_i| = 192$. To estimate the means and stdevs of $y_i$'s 16 slices, our model utilizes the statistics of $x_i$'s 8 slices via Equation 3. This explicit demonstration underscores that our modeling approach does not inadvertently expose any future information.
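To make this arithmetic concrete, here is a minimal pure-Python sketch of the slice-statistics computation (toy data; `slice_stats` is an illustrative helper, not the authors' code):

```python
from statistics import mean, pstdev

def slice_stats(series, slice_len):
    """Per-slice (mean, std) over non-overlapping slices of a series."""
    assert len(series) % slice_len == 0, "length must be divisible by slice_len"
    return [(mean(series[i:i + slice_len]), pstdev(series[i:i + slice_len]))
            for i in range(0, len(series), slice_len)]

# input-96-predict-192 with slicing length 12: the statistics prediction
# module maps the 96 // 12 = 8 observed (mean, std) pairs to estimates for
# the 192 // 12 = 16 future slices -- it never reads future values.
x = [float(t % 24) for t in range(96)]  # toy observed series
obs = slice_stats(x, 12)
print(len(obs))  # 8
```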
We sincerely hope that this clarification addresses your concerns. Should you have any further inquiries or reservations, please do not hesitate to share them with us.
---
Rebuttal 2:
Title: Request for Reviewer 3LTc to respond to the rebuttal
Comment: Reviewer 3LTc, as there are only 2 days left in the author discussion period, would you please read the authors' response, explain the extent to which their answers address your concerns, and whether you will adjust your rating.
If you decide to keep your score, please justify this decision, specifying which aspects of the paper or response have been the deciding factors in you keeping your score. | Summary: The paper introduces a novel approach called Slicing Adaptive Normalization (SAN) for non-stationary time series forecasting. The proposed method addresses the challenge of accurate predictions in the presence of non-stationarity in real-world data. It overcomes limitations in existing normalization techniques by considering the distribution discrepancy between input and horizon series and by modeling the evolving trends of statistical properties at a fine-grained temporal slice level. SAN is a model-agnostic framework that can be applied to various forecasting models, and experiments on benchmark datasets demonstrate its effectiveness.
Strengths: Addressing non-stationarity: The paper tackles the problem of non-stationarity in time series forecasting, which is a significant challenge in real-world scenarios. By considering the distribution discrepancy between input and horizon series, SAN provides a mechanism to mitigate the impact of non-stationary nature on predictions.
Fine-grained normalization: SAN introduces a slice-level adaptive normalization approach, which operates on local temporal slices (sub-series) rather than the entire instance. This fine-grained normalization allows for a more accurate representation of the statistical properties within each slice, preserving distinct patterns and avoiding suboptimal improvements.
Evolving trends modeling: SAN incorporates a statistics prediction module to independently model the evolving trends of statistical properties in raw time series. This module improves the estimation of future distributions, enabling adaptive denormalization and enhancing the accuracy of predictions.
Model-agnostic framework: SAN is designed to be a general framework that can be applied to arbitrary forecasting models. It can serve as a plugin to existing models, making it flexible and adaptable to different forecasting scenarios.
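As a toy illustration of the slice-level normalize/denormalize scheme described above (pure Python; not the authors' implementation, which additionally predicts the future slices' statistics with a learned module):

```python
from statistics import mean, pstdev

EPS = 1e-8  # guards against division by zero for constant slices

def normalize_slices(series, slice_len):
    """Normalize each non-overlapping slice by its own mean/std; also
    return the per-slice statistics needed for denormalization."""
    out, stats = [], []
    for i in range(0, len(series), slice_len):
        s = series[i:i + slice_len]
        m, sd = mean(s), pstdev(s)
        stats.append((m, sd))
        out.extend((v - m) / (sd + EPS) for v in s)
    return out, stats

def denormalize_slices(series, stats, slice_len):
    """Invert the normalization with (possibly predicted) statistics."""
    out = []
    for k, (m, sd) in enumerate(stats):
        out.extend(v * (sd + EPS) + m
                   for v in series[k * slice_len:(k + 1) * slice_len])
    return out

x = [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]   # scale shifts between slices
z, st = normalize_slices(x, 3)
x_back = denormalize_slices(z, st, 3)
assert all(abs(a - b) < 1e-6 for a, b in zip(x, x_back))  # round-trip holds
```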
Weaknesses: The overall method is simple and easy to understand, however, I have the following concerns or questions.
[Major] Experiments. Most of my concerns are from the experimental section. (1) The baselines are not sufficient. The advanced models such as PatchTST [1], and crossformer[2] are not compared. If the method can also improve the advanced backbones, I will increase my rating.
[1] Nie, Yuqi, et al. "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers." arXiv preprint arXiv:2211.14730 (2022).
[2] Zhang, Yunhao, and Junchi Yan. "Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting." The Eleventh International Conference on Learning Representations. 2023.
Limited discussion of limitations: The paper does not extensively discuss the limitations or potential challenges of the proposed method. Providing a more thorough analysis of the limitations and potential drawbacks would give a clearer understanding of the scope and applicability of SAN.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your recognition of our proposal's novelty and effectiveness. We would like to address your concerns as follows:
- Firstly, we conducted additional experiments using PatchTST and CrossFormer on 5 datasets. We built forecasting models using their official codes and hyper-parameter settings (if available). For PatchTST, we replaced the RevIN layer with our SAN. The detailed results and conclusions are provided in the table below.
| | | PatchTST | | PatchTST_SAN | | CrossFormer | | CrossFormer_SAN | |
| --------------- | --------------- | --------- | --------- | ------------ | --------- | ----------- | ------- | --------------- | --------- |
| **Dataset** | **Pred_length** | **MSE** | **MAE** | **MSE** | **MAE** | **MSE** | **MAE** | **MSE** | **MAE** |
| **Electricity** | 96 | 0.138 | **0.233** | **0.136** | 0.234 | 0.150 | 0.258 | **0.143** | **0.246** |
| | 192 | 0.153 | **0.247** | **0.150** | **0.247** | 0.175 | 0.284 | **0.162** | **0.265** |
| | 336 | 0.169 | **0.263** | **0.165** | 0.264 | 0.218 | 0.325 | **0.177** | **0.280** |
| | 720 | 0.208 | **0.296** | **0.200** | **0.296** | 0.226 | 0.324 | **0.221** | **0.318** |
| **Exchange** | 96 | 0.089 | **0.210** | **0.088** | 0.221 | 0.283 | 0.393 | **0.087** | **0.219** |
| | 192 | 0.195 | 0.315 | **0.174** | **0.314** | 1.087 | 0.804 | **0.171** | **0.313** |
| | 336 | 0.354 | 0.434 | **0.310** | **0.421** | 1.367 | 0.905 | **0.286** | **0.401** |
| | 720 | 0.869 | 0.698 | **0.705** | **0.635** | 1.546 | 0.987 | **0.749** | **0.653** |
| **Weather** | 96 | 0.155 | **0.205** | **0.150** | **0.205** | **0.148** | 0.214 | 0.151 | **0.210** |
| | 192 | 0.200 | **0.245** | **0.195** | 0.250 | 0.201 | 0.270 | **0.198** | **0.253** |
| | 336 | 0.251 | **0.286** | **0.245** | 0.290 | **0.248** | 0.311 | **0.248** | **0.294** |
| | 720 | 0.320 | **0.336** | **0.313** | 0.340 | 0.366 | 0.395 | **0.322** | **0.350** |
| **ETTh1** | 96 | 0.408 | 0.425 | **0.378** | **0.401** | 0.390 | 0.417 | **0.387** | **0.402** |
| | 192 | 0.438 | 0.441 | **0.416** | **0.424** | 0.424 | 0.448 | **0.413** | **0.425** |
| | 336 | 0.442 | 0.446 | **0.428** | **0.434** | 0.486 | 0.492 | **0.436** | **0.431** |
| | 720 | 0.453 | 0.470 | **0.445** | **0.461** | 0.507 | 0.519 | **0.467** | **0.474** |
| **ETTm2** | 96 | 0.168 | **0.257** | **0.166** | 0.258 | 0.330 | 0.401 | **0.170** | **0.262** |
| | 192 | 0.223 | **0.295** | **0.222** | 0.302 | 0.623 | 0.543 | **0.224** | **0.301** |
| | 336 | **0.280** | **0.335** | 0.302 | 0.353 | 0.887 | 0.637 | **0.274** | **0.333** |
| | 720 | **0.371** | **0.391** | 0.402 | 0.418 | 0.844 | 0.640 | **0.366** | **0.390** |
- SAN can improve the forecasting performance of both PatchTST and CrossFormer to some extent in most cases.
- The improvement for PatchTST is not significant due to two main reasons: 1) RevIN has already been introduced in the model to mitigate the impact of non-stationary time series; 2) we have not tuned the hyper-parameters of SAN combined with PatchTST, so these results may not reflect its near-optimal performance. Since both are patch-based methods that split series into segments (non-overlapping slices for SAN and overlapping patches for PatchTST), parameter settings may have a greater impact on performance compared to non-patch-based models. These preliminary untuned experiments demonstrate the potential application of SAN in advanced methods.
- The official code of CrossFormer provides parameter settings for the Electricity, Weather, and ETTh1 datasets. For the Exchange and ETTm2 datasets, we extracted common and reasonable settings. Without SAN, CrossFormer performs poorly compared to PatchTST on these latter two datasets due to unsuitable parameters. However, when enhanced with SAN under the same settings, CrossFormer achieves competitive or even superior performance. This phenomenon reveals that SAN can potentially **reduce reliance on parameter settings for backbone models while also reducing the cost of parameter adjustment in real-world forecasting applications**.
- Secondly, we will provide a more detailed discussion of the limitations of our method in a later version of our paper. This will help to better illustrate potential drawbacks, building on the current discussion in Section 4 of our supplementary material. In short, SAN has three main limitations: first, the current non-overlapping isometric slicing scheme is not flexible enough; second, SAN may cause an over-stationary issue that negatively impacts performance; lastly, as mentioned earlier, determining how to adjust SAN's parameters for patch-based models is not a straightforward task.
We hope that our responses have adequately addressed your concerns, and we appreciate the opportunity to clarify our work.
---
Rebuttal Comment 1.1:
Title: Official Comment by reviewer jRLw
Comment: I appreciate your prompt response and the additional experiments you have conducted. While the new experiments provide valuable insights into the performance of your proposed method, I will be retaining my original score for your paper.
I believe that further improvements can be made to enhance the clarity and depth of your article from two specific aspects:
1. Consider conducting additional analysis, such as delving into data patterns or engaging in statistical analysis, to elucidate the reasons behind the observed performance disparities between SAN and RevIN when integrated with PatchTST. This could provide valuable insights into the specific scenarios where each method excels and help to uncover potential nuances that contribute to the contrasting results.
2. Address the variance in improvement magnitudes across different cases. Offering comprehensive explanations for why improvements are marginal in certain instances and significant in others would provide a more nuanced understanding for readers. By delineating the specific conditions or characteristics under which your proposed normalization method proves superior, you can offer valuable guidance to practitioners and researchers alike.
These suggested enhancements would not only enrich the overall quality of your article but also contribute to a more comprehensive understanding of the strengths and limitations of your approach. | Summary: Non-stationary time series forecasting is a challenging problem, and recent research has focused on using normalization techniques to address non-stationarity. However, these methods have limitations when it comes to handling the distribution discrepancy between the input and the forecasted horizon. This discrepancy arises from the assumption that all time points within the same instance share the same statistical properties. To overcome this limitation, the authors propose a method called SAN (sliced-level adaptive normalization) that empowers time series forecasting with more flexible normalization and denormalization.
SAN addresses this issue in several ways. First, it utilizes local temporal slices instead of considering the entire global instance, allowing for more localized analysis and adaptation. Second, SAN incorporates a lightweight network module that independently models the evolving trends of the statistical properties of the raw time series. This approach enables the model to adapt to changing statistical characteristics over time. Finally, SAN is a general model-agnostic plugin, which means it can be integrated into various existing forecasting models to enhance their performance.
Strengths: - The authors emphasize the importance of adopting a local perspective for adaptive normalization. By considering local temporal slices instead of the entire global instance, the model gains a more granular understanding of the data, allowing for better adaptation to each slice's specific characteristics and trends. This localized approach enables more accurate and flexible normalization, leading to improved forecasting performance.
- The experimental results presented in the paper demonstrate promising performance in time series forecasting. By incorporating the SAN (sliced-level adaptive normalization) technique into existing forecasting models, the authors achieve notable improvements in mse/mae and predictive capabilities.
Weaknesses: A. Insufficient justification for the problem definition.
i. The main contribution of this paper is to perform normalization based on locally slicing the time series data, instead of assuming the entire time series follows the same distribution.
ii. However, there is a lack of explanation regarding why global normalization should not be used. It would be beneficial to provide a detailed explanation, for example, by referencing papers on CPD (change point detection) or OOD (Out-of-Distribution) analysis within the same time series data, to clearly define the reasons for adopting a local perspective.
B. Citation in Section 2.1. – Detailed weakness and gentle recommendation
i. In section 2.1, discussing the RNN sentence, it would be better to cite papers on RNN, LSTM, and GRU to provide more comprehensive coverage (96-97).
ii. Furthermore, in the following sentence, while Informer is cited for pointing out issues with RNN, it might be more suitable in the context to cite papers addressing RNN’s gradient exploding or vanishing problems.
iii. In the subsequent sentence, when discussing self-attention and convolution for time series forecasting, it would be more appropriate to cite Scinet alongside Informer.
iv. Following that, the mention of Fedformer and Autoformer, which incorporate decomposition characteristics from Transformer-based architecture, could be supplemented by referencing Dlinear.
C. Section 2.2 contents
i. Section 2.2 introduces existing methodologies that employ adaptive normalization for non-stationary time series forecasting.
ii. However, there is a lack of discussion on the limitations of previous research and the rationale behind the need for this study.
D. Section 3 – the methodology for slicing
i. The methodology of slicing the data, which is the most crucial aspect of this study, is not well understood.
ii. If equally sized segments were used, how were the discrepancies in length handled at the end of the slices?
iii. Additionally, it seems fitting to explore the application of methods such as DTW or CPD for segmentation based on similarity or probability in this study. Were they considered and applied?
E. The data example of Figure 2
i. The illustration in Figure 2 appears to represent variations in the magnitude of a sine wave, which does not seem suitable as an example of non-stationary data.
ii. It is suggested to replace the illustration in Figure 2 with a more non-stationary dataset and provide a detailed diagram depicting the slicing process and the resulting changes after normalization.
F. Experiments
i. How do authors perceive the increase in learning complexity and inefficiency due to SAN being trained separately from the forecasting model as a module?
ii. The backbone models used in the experiments leverage decomposition and excel in handling stationary data. Is there a specific reason for applying SAN, designed for non-stationary learning, to these models?
iii. PatchTST also involves patching time series sub-sequences and applying them to a transformer-based architecture. Was SAN also applied to PatchTST?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please address my concerns in the above weakness
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Adaptive normalization is one of the commonly used methodologies for handling non-stationary time series data. The application of slicing in this context is a contribution of this paper. However, the motivation behind this approach is not clearly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your acknowledgement of our proposal. We would like to address your concerns as follows:
- Weakness A: Thank you for your valuable advice on defining the problem. We agree that referencing papers on CPD or OOD would better illustrate the limitations of existing normalization methods and our motivation for proposing the slicing approach, and we will add such references.
- Weakness B&C: We appreciate your suggestions on describing limitations of existing methods. We will provide the corresponding discussions to better illustrate our motivation and contribution in this section. Additionally, we will improve our citations to provide a clearer background on forecasting.
- Weakness D:
- i. The idea of slicing time series is based on the observation that many real-world time series data exhibit sliced distribution shift. For example, in Figure 1 of our paper, the energy consumption data's scale changes daily while the approximate curve without absolute scale remains near constant throughout the day. Therefore, we aim to model this nature through the slicing operation.
- ii. The current design of SAN cannot handle series lengths that are not divisible by the slicing length, so in our experiments we choose slicing lengths that satisfy this condition. However, this non-overlapping isometric slicing scheme is not flexible and is one of the main limitations of our method, as described in Section 4 of the supplementary material.
- iii. This is an excellent question. We are currently exploring incorporating CPD methods to achieve a non-isometric slicing schema for increased flexibility and better performance.
- Weakness E: Thank you for your advice regarding the framework diagram. We will consider your suggestion and redraw a more suitable diagram by replacing the simulated data.
- Weakness F:
- i. Our statistics prediction module is a lightweight network consisting of only 2 MLPs, resulting in minimal training overhead in terms of time and memory. For instance, when training Autoformer enhanced with SAN on the ETTh1 dataset (input length 96, output length 720), the statistics prediction module takes only about 5 seconds to train, while Autoformer consumes approximately 35 seconds per epoch. Besides, thanks to the two-stage training approach, SAN introduces almost no additional delay in training backbone models.
- ii. Although many backbones used in our experiments employ decomposition techniques, these methods do not claim to be specifically designed for forecasting stationary data. The purpose of using these models as backbones is to test the generalization ability of SAN when applied to different architectures.
- iii. SAN can also be applied to PatchTST by replacing its RevIN layer. We provide comparison experiments as follows:
| | | PatchTST | | PatchTST_SAN | |
| --------------- | --------------- | --------- | --------- | ------------ | --------- |
| **Dataset** | **Pred_length** | **MSE** | **MAE** | **MSE** | **MAE** |
| **Electricity** | 96 | 0.138 | **0.233** | **0.136** | 0.234 |
| | 192 | 0.153 | **0.247** | **0.150** | **0.247** |
| | 336 | 0.169 | **0.263** | **0.165** | 0.264 |
| | 720 | 0.208 | **0.296** | **0.200** | **0.296** |
| **Exchange** | 96 | 0.089 | **0.210** | **0.088** | 0.221 |
| | 192 | 0.195 | 0.315 | **0.174** | **0.314** |
| | 336 | 0.354 | 0.434 | **0.310** | **0.421** |
| | 720 | 0.869 | 0.698 | **0.705** | **0.635** |
| **Weather** | 96 | 0.155 | **0.205** | **0.150** | **0.205** |
| | 192 | 0.200 | **0.245** | **0.195** | 0.250 |
| | 336 | 0.251 | **0.286** | **0.245** | 0.290 |
| | 720 | 0.320 | **0.336** | **0.313** | 0.340 |
| **ETTh1** | 96 | 0.408 | 0.425 | **0.378** | **0.401** |
| | 192 | 0.438 | 0.441 | **0.416** | **0.424** |
| | 336 | 0.442 | 0.446 | **0.428** | **0.434** |
| | 720 | 0.453 | 0.470 | **0.445** | **0.461** |
| **ETTm2** | 96 | 0.168 | **0.257** | **0.166** | 0.258 |
| | 192 | 0.223 | **0.295** | **0.222** | 0.302 |
| | 336 | **0.280** | **0.335** | 0.302 | 0.353 |
| | 720 | **0.371** | **0.391** | 0.402 | 0.418 |
It reveals that SAN can also boost the performance of PatchTST in most cases. While the improvement is not significant, we attribute this to two reasons: firstly, PatchTST already incorporates RevIN to mitigate non-stationarity and performs close to the state of the art compared with other forecasting approaches in most scenarios; secondly, we have not yet fine-tuned the hyperparameters for combining SAN with PatchTST, so the current results may not reflect optimal performance. As both SAN and PatchTST are patch-based methods that divide time series into segments (non-overlapping slices for SAN and overlapping patches for PatchTST), parameter settings may have a greater impact on their performance compared to non-patch-based models. These preliminary untuned experiments confirm the potential applicability of SAN in advanced methods.
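For intuition on why the overhead of the statistics prediction module is small: as described in this rebuttal, it is just a pair of small MLPs mapping observed per-slice statistics to future ones. A shape-level sketch (pure Python, random untrained weights, purely illustrative):

```python
import random

def mlp(x, w1, b1, w2, b2):
    """One-hidden-layer MLP with ReLU activation."""
    h = [max(0.0, sum(w * v for w, v in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [sum(w * v for w, v in zip(row, h)) + b
            for row, b in zip(w2, b2)]

def init(n_out, n_in):
    """Random weight matrix and zero bias for a linear layer."""
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
M, K, H = 8, 16, 32   # observed slices, future slices, hidden width
w1, b1 = init(H, M)
w2, b2 = init(K, H)

mu_obs = [random.random() for _ in range(M)]  # per-slice means of the input
mu_hat = mlp(mu_obs, w1, b1, w2, b2)          # estimates for the future slices
print(len(mu_hat))  # 16
```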
We hope that our responses have adequately addressed your concerns, and we appreciate the opportunity to clarify our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your insightful rebuttal, and it addresses many of my concerns. However, there are still a few remaining concerns left and I prefer to stick to my previous score (Borderline accept).
There are many things to be modified in the original manuscript. For example, the experiment section is not yet well organized (e.g., ablation studies of SAN and comparisons with, or application to, state-of-the-art models).
Strengths: - The proposed approach, SAN, is model-agnostic and can be applied to any timeseries regression model.
- SAN is simple but effective. The improvement of SAN is consistent in the reported benchmark.
Weaknesses: The proposed model generalizes existing methods [14, 22], but it reverts to previous approaches when slice number = 1 and an identity statistics prediction model are chosen. While the method performs well empirically, conducting ablation experiments can enhance its validation. For example, testing with a single slice and solely learning statistics prediction would confirm the necessity of slicing.
Moreover, the slicing process requires tuning several important hyperparameters, such as the number of input slices (M) and the number of output slices (K). It would be beneficial if the authors could address the outcome's sensitivity to these hyperparameters.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The SAN results were slightly worse than the normalization alternatives on the weather dataset. In the paper, the authors attributed this to an over-stationarization issue. Can we detect this problem during training without knowing the generalization performance?
Additionally, does SAN enhance baseline results with a single slice and a learnable statistics prediction module? (See the suggested ablation experiment in the "weakness" section).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I recommend conducting ablation studies on slicing and hyperparameter choices (e.g., $M$ and $K$) as mentioned in the "Weakness" and "Questions" sections. Indeed, these are the two new adjustments in the paper compared to earlier work. While these adjustments seem to be useful, the approach's effectiveness will depend on how easily appropriate hyperparameters can be found. The authors could also comment on empirical guidelines for finding new hyperparameters for an unseen task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgement and valuable feedback on our work. We would like to address your concerns as follows.
- Firstly, the suggestion of conducting an ablation study with the slice number set to 1 is of great value. Considering both training efficiency and the rebuttal space limitation, we provide additional experiments under this setting with DLinear on 4 datasets in the following table:
| | | DLinear_SAN | | DLinear_1_slice | | DLinear_wo_SAN | |
| --------------- | --------------- | ----------- | --------- | --------------- | ------- | -------------- | --------- |
| **Dataset** | **Pred_length** | **MSE** | **MAE** | **MSE** | **MAE** | **MSE** | **MAE** |
| **Electricity** | 96 | **0.137** | **0.234** | 0.143 | 0.242 | 0.140 | 0.237 |
| | 192 | **0.151** | **0.247** | 0.157 | 0.255 | 0.153 | 0.250 |
| | 336 | **0.166** | **0.264** | 0.842 | 0.760 | 0.168 | 0.267 |
| | 720 | **0.201** | **0.295** | 0.868 | 0.770 | 0.203 | 0.301 |
| **Exchange** | 96 | **0.085** | 0.214 | **0.085** | 0.220 | 0.086 | **0.213** |
| | 192 | 0.177 | 0.317 | 0.163 | 0.309 | **0.161** | **0.297** |
| | 336 | **0.294** | **0.407** | 0.303 | 0.422 | 0.338 | 0.437 |
| | 720 | **0.726** | **0.649** | 0.757 | 0.656 | 0.999 | 0.755 |
| **Weather** | 96 | **0.152** | **0.210** | 0.158 | 0.221 | 0.175 | 0.237 |
| | 192 | **0.196** | **0.254** | 0.203 | 0.264 | 0.217 | 0.275 |
| | 336 | **0.246** | **0.294** | 0.265 | 0.323 | 0.263 | 0.314 |
| | 720 | **0.315** | **0.346** | 0.319 | 0.353 | 0.325 | 0.366 |
| **ETTh1** | 96 | 0.383 | **0.399** | 0.689 | 0.552 | **0.377** | **0.399** |
| | 192 | 0.419 | **0.419** | 0.705 | 0.566 | **0.417** | 0.426 |
| | 336 | **0.437** | **0.432** | 0.709 | 0.581 | 0.464 | 0.461 |
| | 720 | **0.446** | **0.459** | 0.708 | 0.599 | 0.493 | 0.505 |
The results demonstrate that although SAN can enhance baseline results with a single slice under certain settings, the multi-slice approach consistently performs better across the benchmark. This validates our hypothesis that distribution shift in time series data happens rapidly, so slice-level normalization is required to better remove the non-stationarity.
- Secondly, we believe that there is an invariant statistic for a dataset (its period, for example), so we share the slicing length for both the input series and the output series. Consequently, **the number of input slices (M) and the number of output slices (K) are determined by the slicing length**. In our initial submission, we already conducted an ablation study on this parameter; the results are provided in Table 4 of the supplementary material. Specifically, we utilized SCINet as the backbone under the multi-variate forecasting setting to test the effect of different slicing lengths on our method. We also attach the results below for better illustration.
| slicing length | 6 | | 12 | | 24 | | 48 | |
| -------------- | --------- | --------- | --------- | --------- | --------- | --------- | ----- | ----- |
| | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| Electricity | 0.210 | **0.305** | 0.207 | **0.305** | **0.206** | 0.307 | 0.208 | 0.307 |
| Exchange | **0.892** | **0.712** | 0.895 | **0.712** | 0.901 | 0.715 | 0.898 | 0.714 |
| Traffic | 0.612 | **0.376** | 0.608 | **0.373** | **0.607** | 0.381 | 0.611 | 0.382 |
| Weather | **0.338** | 0.366 | **0.338** | **0.365** | 0.340 | 0.367 | 0.339 | 0.366 |
| ILI | **2.487** | **1.063** | 2.680 | 1.118 | n/a | n/a | n/a | n/a |
| ETTh1 | 0.491 | 0.475 | **0.488** | 0.474 | 0.489 | **0.473** | 0.492 | 0.474 |
| ETTh2 | 0.440 | 0.465 | **0.435** | 0.460 | 0.437 | **0.459** | 0.443 | 0.462 |
| ETTm1 | 0.495 | 0.469 | **0.450** | **0.441** | 0.611 | 0.503 | 0.463 | 0.448 |
| ETTm2 | **0.391** | 0.406 | **0.391** | **0.405** | 0.392 | **0.405** | 0.403 | 0.415 |
The results suggest that our proposed SAN method is resilient to changes in slicing length. When it comes to selecting new hyperparameters (here, the slicing length) for unseen tasks, we recommend a heuristic approach based on physical properties such as cyclical or seasonal patterns. While this heuristic may not yield optimal parameters every time, SAN has shown its ability to achieve near-optimal performance even with sub-optimal slicing lengths.
- Finally, by conducting statistical analysis, such as using the ADF test, we can assess the stationarity of a given time series. Based on these results, it becomes possible to adjust the normalization strength of SAN in order to prevent over-stationarity.
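To make the slicing idea above concrete, here is a minimal, hypothetical sketch of per-slice z-normalization in plain Python. It is a simplified stand-in for SAN (which additionally learns to predict the statistics of future slices), and the function names are illustrative, not from our implementation:

```python
from statistics import mean, stdev

def slice_normalize(series, slice_len, eps=1e-5):
    # Split a 1-D series into non-overlapping slices and z-normalize each
    # slice with its own mean/std, removing local non-stationarity.
    # SAN itself also models how these statistics evolve into the forecast
    # horizon; that learned component is omitted in this sketch.
    normed, stats = [], []
    for start in range(0, len(series), slice_len):
        chunk = series[start:start + slice_len]
        mu = mean(chunk)
        sigma = stdev(chunk) if len(chunk) > 1 else 0.0
        stats.append((mu, sigma))
        normed.extend((x - mu) / (sigma + eps) for x in chunk)
    return normed, stats

def slice_denormalize(normed, stats, slice_len, eps=1e-5):
    # Invert the normalization using the stored per-slice statistics.
    return [x * (stats[i // slice_len][1] + eps) + stats[i // slice_len][0]
            for i, x in enumerate(normed)]
```

With a slicing length matched to a dominant period (e.g. 24 for hourly data with daily cycles), each slice is normalized against local rather than global statistics, which is the intuition behind the robustness to slicing length shown in the table above.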
We hope that our responses have adequately addressed your concerns, and we appreciate the opportunity to clarify our work.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your response. The two sets of experiments above addressed my questions. I have increased my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Spectral Theory of Neural Prediction and Alignment | Accept (spotlight) | Summary: This submission uses spectral theory and simulation experiments to try to compare and assess the representations of DNNs vs those of biological neural networks.
Strengths: The paper comes with a fairly extensive review of the literature.
The paper uses both theoretical ideas and simulations.
Weaknesses: The paper is difficult to read and understand. The primary reason is that the problem being tackled is not stated in any clear way. The boundary between artificial NNs and biological NNs is very fuzzy throughout the paper, and it is often unclear whether the authors are discussing biological NNs or artificial NNs. The results also are not very clear. This is already obvious from the abstract, which does not contain a single quantitative result. Furthermore, there are too many concepts piled up in the paper in a messy way (spectral, geometry, alignment, adversarial vs standard training, error mode weights, etc.).
The authors cite 43 papers. If the goal is to provide a fairly complete overview of the relevant work, then it seems that some key papers are missing (e.g., Andersen and Zipser in the 1980s, and more recent work by DiCarlo et al.).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: There may be good ideas behind this paper and in time this direction of research may prove useful. But at this stage things look premature and very foggy. In future versions, it will be essential to improve the clarity and keep very clear distinctions between artificial and biological NNs. A term like "neural predictivity" is extremely ambiguous in your context as it could have several meanings.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: In addition to the problems mentioned above, there are significant limitations posed by the size of the data sets. The authors are aware of this problem and mention it in Section 5 together with other limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for raising these issues and your comments. Please see below for the responses.
1. **Clarity of the results**
We have made several changes to the paper which we believe substantially address these issues. While we refer the reviewer to the global response for details, we note that we have substantially revised the figures of the paper to give clear schematics of the modeling problem that we consider and describe our results (Rebuttal Figures 1&2), streamlined the presentation of the theory in Sec. 2, and added a concise list of our main contributions to the first section.
2. **Additional Citations**
Thank you for suggesting that we provide additional context for how our results relate to previous literature. Within this work, we are primarily discussing taking the activations from computational models (pre-trained neural network models or other hand-engineered stimulus computable models) and learning a regression mapping between the activations of the computational model and those measured in the brain for the same set of stimuli. This is directly related to the lines of work from DiCarlo et al. We already cite 6 papers from the group, but have also added the following citation to our section on Adversarial vs. Standard Training:
* Guo, C., Lee, M., Leclerc, G., Dapello, J., Rao, Y., Madry, A., & Dicarlo, J. (2022, June). Adversarially trained neural representations are already as robust as biological neural representations. In International Conference on Machine Learning (pp. 8072-8081). PMLR.
Regarding the Andersen and Zipser work, this is more along the lines of building non-linear models that predict neural responses. There are many recent works along these lines; see, e.g., [1,2]. It is beyond the scope of this paper to apply our theory to this framework, but we have added the following to the discussion for future work along these lines:
“Finally, we note that while we focused on DNN models trained on image data sets, a closely related line of work has investigated the properties of DNN models directly trained to predict neural responses [1-3]. Future work can compare the error mode geometries from end-to-end networks to those obtained from networks trained on image data. Such an analysis could elucidate how these two vastly different training schemes are both able to predict neural data well. Furthermore, the spectral theory of kernel regression we used here has previously been applied to analyze kernels corresponding to different layers of DNNs at different stages of the training process [4]. Future work could take a similar approach to analyzing how representational geometry evolves throughout the training process of end-to-end DNN models by extracting layer-wise kernels at different training checkpoints.”
## Questions
1. **Ambiguity in the term, "neural predictivity"**
We believe that our revisions make the manuscript substantially clearer. However, we decided to continue to use the term "neural predictivity," as this is a standard term in this field. See for example [5,6] below.
## Limitations
1. **Data set size.**
In response to reviewer concerns regarding the data set size for V1, we reran these experiments on a significantly larger data set from Ref. [7] with $P=1250$ images (see response to Reviewer t8VM and Rebuttal Figure 6). Note that the V4 and IT data sets are significantly larger and are standard data sets for benchmarking models of visual cortex [8]. Additionally, note that the V1/V2 data sets that we used were a small, publicly available subset of the full data presented in [9]. We are in the process of obtaining the full data.
References
[1] Cadena et al., PLoS Comp. Bio., 2019
[2] Klindt et al., NeurIPS, 2017
[3] Zipser & Andersen, Nature, 1988
[4] Canatar & Pehlevan, IEEE, 2022
[5] Zhuang et al., PNAS, 2021
[6] Yamins et al., PNAS, 2014
[7] Cadena et al., PLoS Comp. Bio., 2019
[8] Schrimpf et al., bioRxiv, 2018
[9] Freeman et al., Nature Neuro, 2013
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments and clarification. I will increase my score by 1. | Summary: The paper uses (but doesn't introduce) a theoretical framework that relates generalization error to spectral bias of network activations to geometrically/spectrally analyze what aspects of pretrained deep networks contribute to the predictive performance of neural activity in layers V1, V2, V4 and IT. The authors verify that the theory matches empirical values, and analyze different aspects of the generalization, such as the influence of the training data size and adversarial pretraining.
Strengths: It is inherently difficult to analyze what aspects of deep network representations contribute to the predictive performance on neural data. The authors make an important contribution that disentangle different factors. This will help to understand why and when deep representations predict neural data well.
Weaknesses: My main two criticisms are:
- The clarity/accessibility could be improved at times.
- There are a few more references that could be considered.
- Some of the assumptions/limitations could be discussed more clearly.
While I like the general approach, I think it could be clearer at some points. You, as the authors, have thought a long time about what the different $W_i$, $R$, $D$ and other values mean, whereas the reader sees them for the first time. So I think it's important to give a good intuition about what they mean. Figure 1 didn't really help in that respect; particularly, panels D and E need a bit more effort to be helpful. The reason this is so crucial is that the authors mainly discuss the observations in terms of these values. So if the reader doesn't have a clear intuition of what they mean, the observations/interpretations become meaningless. I therefore urge the authors to improve the introduction part and the observation part (--> Q1).
I think the following papers would be worth considering to cite (no, I am not S. Cadena ;) ):
* Santiago A. Cadena, Konstantin F. Willeke, Kelli Restivo, George Denfield, Fabian H. Sinz, Matthias Bethge, Andreas S. Tolias, Alexander S. Ecker Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks
* Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker How well do deep neural networks trained on object recognition characterize the mouse visual system?
* S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, and A. S. Ecker Deep convolutional models improve predictions of macaque V1 responses to natural images
The first, because it discusses different prediction performances depending on the pretrained representation. The second because it finds that random representations do almost equally well as pretrained (albeit for mouse). And the third, because it compared task driven and data driven prediction performance for monkey V1. As far as I know, the authors also provide the data. So you could even repeat some of the analysis with more data for V1 (135 texture stimuli are not a lot).
Re Limitations/Assumptions: Some of the assumptions seem unrealistic to me. I would ask the authors to discuss that in more detail. Specifically
- Q2: You write that model and brain responses need to be deterministic functions of the stimuli. For brain responses this is certainly not true since there is noise and also latent brain states. Can you discuss how the deviation from this assumption affects your results.
- Q3: I am unsure about the $M,P\rightarrow \infty$ and $M/P \in O(1)$ assumption. This means that the model features need to go to infinity as the data grows. Doesn't that mean that the model has to be non-parametric, which is not true for deep network. I would ask the authors to discuss that in more detail.
Discussing the limitations in two very short paragraphs in section 5 is not enough, especially since the authors have two pages left.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1-Q3: See above.
Q4: I couldn't find how you extracted the 516 model activations (neither in the paper nor in the supp material). Please briefly describe this procedure in the main paper (i.e. did you use PCA? on how many activations/stimuli? did you only use one layer or many?).
Minor:
- l 97: "self-consistently" did you mean "self-consistency"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thorough review. Please see our responses below.
**Q1:**
Thank you for the suggestions regarding the clarity and presentation of the manuscript. We have made a number of changes that we hope address these concerns. These are detailed in the global response. We have also updated Fig. 1 (see RFig 1) and have expanded on Fig. 1E into its own Figure which we hope will guide interpretation of our results (RFig 2).
**Re Cadena References:** Thank you for the references. All of these are very related to our work. We have added the following references:
1. *Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks:* We reference this work while discussing how training typically benefits models in later visual regions more than earlier ones (see global response).
2. *How well do deep neural networks trained on object recognition characterize the mouse visual system?:* We now reference this work in the revised section presenting our results regarding trained vs. random networks (see global response).
For the third reference, we analyzed the trained vs. random network error mode geometry using this V1 dataset (focusing on the $P=1250$ natural images, RFig 6), as suggested. This analysis reproduced the overall trends where trained networks do slightly better than random networks at predicting V1 neural data, but not by much. This finding was consistent when using 135 images (matched to the previous V1 dataset) or when using all 1250 images. Additionally, the same trends held when using neural recordings from Cadena et al. texture stimuli rather than the natural images (not shown for space reasons). Even though the overall trends replicated, we found that the generalization error for the Cadena et al. dataset was higher (corresponding to worse predictions) than observed in the Freeman et al. dataset. We are further investigating this difference, but it may stem from a lower number of trial presentations per stimulus in the Cadena et al. dataset (2 or 4 trials per stimulus, in comparison to 20 presentations for the Freeman et al. dataset) leading to larger noise levels. We will include RFig 6 in the SI of the paper.
**Q2:**
Thank you for this comment. Previous works on the spectral theory of ridge regression have investigated the role that noise plays in this setting [1]. In the kernel regression setting, noise leads to an additional term in the generalization error formula which depends on the spectrum of the model. Explicitly modeling neuronal noise is an important direction for future work, and we expanded the following paragraph in the limitations section:
“*...Moreover, this theory assumed that the brain responses are deterministic functions of the stimuli. As shown in [1], this assumption may be relaxed, and future work can examine the role of neuronal noise in these contexts.*” (Quote follows line 259 in the original manuscript.)
**Q3:**
This limit is often employed to obtain asymptotic expressions whenever both the number of units and the number of data points is large–see for example [1-3]. Our framework does not require that the feature dimension $M$ grows with $P$. Here, we focus on this large $M$ and $P$ limit since the resulting theory accurately predicts the performance of kernel ridge regression when applied to real data. However, since our formula for $E_g$ is only exactly correct in the large $M,P$ limit, we explicitly confirmed that this theory predicted empirical generalization errors in Fig. 2A.
**Q4:**
Thank you for highlighting this confusion. We do not apply any dimensionality reduction to the model activations and flatten activations from convolutional layers. As it was shared by other reviewers we have included more details of the architecture, layer choices, and extraction on lines 76-80 of the main text:
“*The neural responses and model activations were extracted for each stimulus (Fig. 1B), and each layer of each investigated neural network was treated as a separate encoding model. We examined 32 deep neural networks trained on the ImageNet classification task. Model architectures included convolutional neural networks, Vision Transformers, and “biologically inspired” architectures with recurrent connections, and model types spanned a variety of supervised and self-supervised training objectives (see SI.2 for full list of models). We extracted model activations from several stages of each model. For ViTs we extracted activations from all intermediate encoder layers, while in all other models we extracted activations from the ReLU non-linearity after each intermediate convolutional activation. This resulted in a total number of $516$ analyzed model stages. In each case we flattened the model activations for the layer with no subsampling.*”
**Minor: l 97 "self-consistently" did you mean "self-consistency"?**:
This terminology refers to the fact that there is no closed-form expression for calculating $\kappa$ as a function of the model spectrum [1-3]. Instead, given fixed $\{\lambda_i\}$, we have to numerically solve for the value of $\kappa$ that satisfies the equation
$$
\kappa - \alpha_{reg} - \kappa \sum_i \frac{\lambda_i}{p\lambda_i + \kappa}=0.
$$
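As a minimal sketch of this numerical step (assuming $\alpha_{reg} > 0$, for which the equation has a unique positive root; this is an illustrative solver, not necessarily the scheme used in [1-3]), $\kappa$ can be found by bisection in plain Python:

```python
def solve_kappa(eigs, p, alpha_reg, iters=200):
    # Solve  kappa - alpha_reg - kappa * sum_i lam_i / (p*lam_i + kappa) = 0
    # for kappa > 0 by bisection. Each term kappa*lam/(p*lam + kappa) < lam,
    # so the left-hand side is positive at hi = alpha_reg + sum(eigs) + 1,
    # while it tends to -alpha_reg < 0 as kappa -> 0+.
    def f(k):
        return k - alpha_reg - k * sum(l / (p * l + k) for l in eigs)
    lo, hi = 1e-12, alpha_reg + sum(eigs) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with a single eigenvalue $\lambda = 2$, $p = 1$, and $\alpha_{reg} = 1$, the equation reduces to $\kappa^2 - \kappa - 2 = 0$, giving $\kappa = 2$.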
[1] Canatar et al., Nature Communications, 2021.
[2] Loureiro et al., NeurIPS, 2021.
[3] Seung et al., Phys. Rev. A, 1992.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: Dear authors, thanks for the clarifications. I have read the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for the suggestions to improve and strengthen our paper! | Summary: This study asks a crucial question: What is it about representations in ImageNet-trained neural networks that allow the successful regression of responses in mammalian visual cortex? The authors answer this question (in part) via an ingenious method combining learning theory and empirical analyses.
Even after a decade of study (or more), it is unclear why the responses of neurons in the visual cortex can be so well predicted – better than any other approach – by linearly regressing them from the representations of pretrained DNNs. Here, the authors open the door to a potentially very fruitful method of answering why. Beginning from an analytical expression for the generalization error of ridge regression, the authors notice that it can be decomposed into a product of two terms describing the geometry of the error in the space of examples (the outer product a.k.a. Gram matrix space): one term which carries the interpretation of an average radius, and the other which carries the interpretation of the effective dimensionality of the error. A low generalization error can be achieved by decreasing this radius and dimension. Then, the authors empirically (and thoroughly) analyze how the generalization error decomposes into radius and dimension when predicting V1, V2, V4, and IT responses from a very wide range of pretrained networks, both supervised and unsupervised. The results are striking. For example, networks appear to be surprisingly constrained in their “error mode geometry”, apparently forced to trade between low radius and high dimension. There are many implications in this paper for how we go about understanding neural responses.
Strengths: A pleasure to review. This paper tackles a question of high significance using a thorough and deep analysis, one that is both highly original and leverages recent advances. The empirical analyses are complete and leave nearly nothing wanting.
Weaknesses: The only weakness I might comment on, in the category of clarity, is the absolute density of results, many of which are not followed up or explored more deeply. To help the reader internalize these interesting results, it would be better to provide a writing structure with more summaries of the outcomes of interest both at the paragraph level and at the paper level. (I personally prefer the CCC style argued for by Mensh and Kording (2017)). At the moment the reader is left to determine for oneself which results to find interesting – meaning that to many quick readers some of these important lessons will be overlooked.
I have a number of requests for expanding the exposition. Apologies, as not all of these will fit in the page limit!
First, it would be nice to have a little more discussion and intuition of the meaning of the key error mode geometry terms, radius and dimension.
Then, there were a number of interesting findings that were only given a sentence or not mentioned in the main text.
- SI.5.4 is very interesting but not much mentioned in the main text. Consider moving to the main text; by way of contrast this might help distinguish your own definition of dimension.
- Line 443 in the supplement, which mentions that the eigenspectra of the stimulus set in part controls generalization error, which means that it could be a tool for better selecting stimulus sets for neural predictions. What would this look like?
- It would be good to mention in the main text (briefly) that you analyze both supervised and unsupervised networks, and of many architectures.
- That “the improved predictivity of trained networks in regions V2, V4, and IT is primarily driven by changes in their intrinsic expressivity as summarized by their eigenspectrum decay, rather than significant changes in their alignment with neural data.” (160-162). This would have large implications for those believing the brain==ImageNet DNNs. Could this be unpacked?
These are just a few; it would be nice to have summaries of all main takeaways.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could a plot be created in which the error mode geometry is shown split by which objective function (supervised, Barlow Twins, etc) as well as each network? Even a null result (indistinguishable) would be interesting.
Does the theory require an alignment between the eigenmodes of $G_R$ and $G_X$? If so, how is this achieved in theory?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations were thoroughly mentioned and addressed. I greatly appreciated the exploration (in the supplement) of the difference between the generalization error of ridge regression and the Pearson’s R2 measure for partial least squares regression commonly used elsewhere. The additional supplement plots for other small details (outliers, train/test split, $l_\infty$ adversarial) were also appreciated. In general, I appreciate how the analyses and plots were chosen not to puff up results or push them into any particular narrative, but rather were an honest and complete presentation of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply thank you for reading our paper thoroughly, your positive thoughts and detailed comments/suggestions!
## Weaknesses:
1. **Clarity, density of results and writing:**
We thank the reviewer for the suggestion. To make it easier for the reader to get a high level summary of our results, we’ve added a list of our main contributions to the end of the introduction (see global response).
2. **Intuition for Error Geometry: Radius and Dimensionality:**
We appreciate this suggestion and have simplified and improved the presentation of the theory (§2.2; see global response). Furthermore, we substantially revised Fig. 1 (see RFig 1) and have added a helpful schematic (RFig 2) to make our results on the error mode geometry easier to understand.
3. **SI.5.4:**
Thank you for this suggestion, and we are glad that you enjoyed this section. We will consider moving some of this to the main text if space allows. We also added the following sentence to the end of Sec. 3.1:
“*These results highlight the need to study the spectral properties of both the model and the brain responses as summarized by the $\tilde W_i$: Simply studying the properties of the eigenspectra of the model is insufficient to characterize neural predictivity (see Sec. SI.5.4 for more details).*”
4. **Line 443 in SI, tool for selecting stimulus sets:**
Thank you for this comment. In response to this, we expanded this line as:
“*These differences are not due to the recorded neurons, as the model eigenspectra is only a function of the stimulus set, pointing out the importance of considering the stimuli used for the neural predictions. Future work can consider choosing stimulus sets to maximize the difference between model eigenspectra. This would likely lead to greater differences between models in their predictions of neural activity patterns.*”
5. **Mentioning the analysis of various networks and training procedures in the main text:**
Thank you for the suggestion. We have included more details of the architecture and layer choices on lines 76-80 of the main text, and explicitly noted the use of different training objectives:
“*The neural responses and model activations were extracted for each stimulus (Fig. 1B), and each layer of each investigated neural network was treated as a separate encoding model. We examined 32 deep neural networks trained on the ImageNet classification task. Model architectures included convolutional neural networks, Vision Transformers, and “biologically inspired” architectures with recurrent connections, and model types spanned a variety of supervised and self-supervised training objectives (see SI.2 for full list of models). We extracted model activations from several stages of each model. For ViTs we extracted activations from all intermediate encoder layers, while in all other models we extracted activations from the ReLU non-linearity after each intermediate convolutional activation. This resulted in a total number of $516$ analyzed model stages. In each case we flattened the model activations for the layer with no subsampling.*”
6. **Lines (160-162) Implications for brain==ImageNet DNNs:**
Thank you very much for this suggestion! The comments around lines 160-162 were based primarily on visual inspection of differences in the respective $\{\lambda_i\}$ and $\{W_i\}$. However, in response to this comment, we ran an explicit experiment to test this claim (see RFig 4 and Global Response). To our surprise, these results indicate that training primarily drives increased predictivity through changes in alignment with the neural data, rather than changes in the eigenspectra of the model. This highlights that even if model eigenspectra seem quite different (RFig 4D), the spectral differences may not be driving the underlying differences in predictivity. We hope this serves as the groundwork for identifying the contribution of these different properties.
## Questions:
7. **Plotting error mode geometry by objective functions and networks:**
We have added this as RFig 5 and will include in the SI for V1, V2, V4, and IT. On first pass it seems like there is little difference in the error mode geometry between the supervised and self-supervised networks, which is in line with other works that suggest these networks have similar representations [1-3]. We have also added a plot grouping supervised models based on architecture (RFig 3)
8. **Alignment between $G_R$ and $G_X$ eigenmodes:**
Our theory for the generalization error depends on the alignment between the entire matrix of biological neuron responses, which is now denoted $\mathbf{G}_Y$, and the eigenvectors of the model Gram matrix $\mathbf{G}_X$, denoted by $\mathbf{v}_i$. This alignment is quantified by the weights $W_i = ||\mathbf{Y}\mathbf{v}_i||^2/||\mathbf{Y}||^2_F$, which quantifies the fraction of the variance in $\mathbf{Y}$ that the eigenvector $\mathbf{v}_i$ accounts for. As such, the eigenmodes of $\mathbf{G}_Y$ do not *explicitly* enter into the formula for the generalization error but rather $W_i = \frac{\mathbf{v}_i^\top \mathbf{G}_Y \mathbf{v}_i}{\text{Tr}\, \mathbf{G}_Y}$, i.e. its projection (see Eq. 2 in main text).
[1] Geirhos et al. 2020 (NeurIPS SVRHM Workshop)
[2] Zhuang et al. 2021 (PNAS)
[3] Konkle & Alvarez 2022 (Nature Communications) | Summary: Previous works have demonstrated that many different state-of-the-art deep neural networks (DNNs) perform similarly at neural responses prediction. But a complete understanding of which aspects of these DNNs lead to the similarity in predicting neural responses remains unknown. The authors proposes a spectral theoretical framework to explore how the geometrical properties of DNNs affect the performance of neural responses prediction. More specifically, the authors introduce a generalization error to serve as a measure of fit, which brings a geometrical interpretation of the model activations (representations) in DNNs. This geometrical interpretation could further give additional insights into how DNNs are achieving different performance of neural responses prediction. The authors also design several experiments to investigate the roles of layer depth, dataset size, and different training approaches of DNNs in neural responses prediction.
Strengths: * The authors establish a novel link between the predictivity of neural responses and the geometry of DNN representations.
* The authors provide a solid analysis and several comprehensive experiments to support their theoretical framework.
Weaknesses: * The authors should improve the clarity of their conclusions from their results. Section 3 provides many details about their analysis and results, but it's hard for the readers to extract the essential arguments the authors want to convey. It would be better to summarize your conclusions at the beginning or end of each paragraph.
* The authors should improve their exposition of the spectral theoretical framework. I recommend the authors include more background information and provide some examples to show the role of each equation in the relationship between the generalization error and the geometry of model activations.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I have highlighted weaknesses above. I have nothing further to add here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your review and suggestions. Please see our responses below.
1. **Clarity issues and summary of results:**
We thank you for these suggestions and agree that the clarity should be improved. In response to this comment and others, we added a concise list outlining the principal contributions of the paper to the end of the introduction. We have also internally updated our document to increase the clarity of the results section, focusing on the main conclusions of each subsection. Furthermore, we uploaded Rebuttal Figures (RFig) further explaining the main concepts.
2. **Improvement for spectral theory framework:**
We significantly simplified the exposition of our theory in response to this and other comments; please see RFig 1 and RFig 2. Furthermore, please see the updated version of Sec. 2 below:
> In response to a total of $P$ stimuli, we denote model activations with $M$ features (e.g. responses from one stage of a DNN) and neural responses with $N$ neurons (e.g. firing rates) by $\mathbf{X}\in\mathbb{R}^{P\times M}$ and $\mathbf{Y} \in \mathbb{R}^{P\times N}$, respectively. Sampling a training set $(\mathbf{X_{1:p}}, \mathbf{Y_{1:p}})$ of size $p < P$, ridge regression solves: $$\hat{\beta}(p) = \arg\min_{\beta \in \mathbb{R}^{M\times N}} ||\mathbf{X_{1:p}} \beta - \mathbf{Y_{1:p}}||_F^2 + \alpha_{\text{reg}} ||\beta||_F^2, \quad \hat{\mathbf{Y}}(p) = \mathbf{X} \hat{\beta}(p)$$
> We analyze the neural predictivity based on the generalization error $E_g(p) = \frac{||\hat{\mathbf{Y}}(p) - \mathbf{Y}||_F^2}{||\mathbf{Y}||_F^2}$, for which we utilize theoretical tools from learning theory, random matrix theory, and statistical physics to extract geometrical properties of representations from spectral properties of the data. In particular, the theory introduced in [1,2] relies on the orthogonal mode decomposition (PCA) of the Gram matrix $\mathbf{X}\mathbf{X}^\top$ of the model activations, and the projection of the target neural responses onto its eigenvectors:
>
> $$\mathbf{X}\mathbf{X}^\top = \sum_{i=1}^P \lambda_i \mathbf{v}_i \mathbf{v}_i^\top,\quad W_i := \frac{||\mathbf{Y}^\top\mathbf{v}_i||_2^2}{||\mathbf{Y}||^2_F}, \quad \langle\mathbf{v}_i, \mathbf{v}_j\rangle = \delta_{ij} .$$
>
> Here, associated with each mode $i$, $W_i$ denotes the variance of neural responses $\mathbf{Y}$ in the direction $\mathbf{v}_i$, and $\lambda_i$ denotes the $i^{th}$ eigenvalue. Then, the predicted generalization error is given by:
> $$E_g(p) = \sum_{i=1}^P W_i E_i(p), \quad E_i(p) := \frac{\kappa^2}{1-\gamma} \frac{1}{(p\lambda_i + \kappa)^2},$$
> where $\kappa = \alpha_{\text{reg}} + \kappa \sum_{i=1}^P \frac{\lambda_i}{p\lambda_i + \kappa}$ must be solved self-consistently, and $\gamma = \sum_{i=1}^P\frac{p \lambda_i^2}{(p \lambda_i + \kappa)^2}$. Note that the theory depends not only on the model eigenvalues $\lambda_i$, but also on the model eigenvectors $\mathbf{v}_i$ along with the responses $\mathbf{Y}$, which together determine how the variance in neural responses is distributed among the model eigenmodes.
>
> Although the equations are complex, the interpretations of $W_i$ and $E_i(p)$ are simple: $W_i$ quantifies the projected variance in neural responses on model eigenvectors (alignment between neural data and model eigenvectors, i.e., *task-model alignment* [1]). Meanwhile, $E_i(p)$ determines the reduction in the error contributed from each $W_i$ when the training set size is $p$, and depends only on the eigenvalues, $\lambda_i$ (i.e., *spectral bias* [1]).
>
> In this work, we combine both and introduce *error modes* $\tilde W_i(p) := W_i E_i (p)$:
> $$\tilde W_i(p) := \frac{\kappa^2}{1-\gamma}\frac{W_i}{(p\lambda_i + \kappa)^2}, \quad E_g = \sum_i \tilde W_i(p)$$
> As shown in RFig 1C, error modes $\tilde W_i$ associated with larger eigenvalues $\lambda_i$ decay faster with increasing $p$ than those associated with smaller eigenvalues.
>
>While the error modes $\tilde W_i$ completely characterize the generalization performance of a given model, it is difficult to use them for direct model comparison due to their high dimensionality. Our main goal here is to derive a geometric measure that analytically relates to the generalization error. This is in contrast to previous such measures, such as the effective dimensionality that only depends on model eigenvalues.
>
>Here, we define a set of geometric measures that characterize the distribution of a model's $\tilde W_i$ via the ***error mode geometry*** (RFig 1D). Specifically, we rewrite the generalization error $E_g(p)$ as:
> $$ R_{em}(p) := \sqrt{\sum_i \tilde{W}_i(p)^2}, \quad D_{em}(p) := \frac{\big(\sum_i \tilde W_i(p)\big)^2}{\sum_i \tilde W_i(p)^2}, \quad E_g(p) = R_{em}(p) \sqrt{D_{em}(p)}.$$
>
> The error mode radius $R_{em}$ denotes the overall size of the error terms, while the error mode dimension $D_{em}$ represents how dispersed the total generalization error is across the different eigenvectors (RFig 1D). Note that the generalization error $E_g(p)$ above has a degeneracy of error geometries: many different combinations of $R_{em}$ and $D_{em}$ may result in the same $E_g(p)$ (RFig 2).
[1] Canatar et al., Nature Communications, 2021.
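To make the quoted theory concrete, here is a minimal numerical sketch in Python. This is our illustrative reconstruction from the equations above, not the authors' released code; the fixed-point iteration for $\kappa$ and its initialization are simplifying assumptions.

```python
import numpy as np

def spectral_error_geometry(X, Y, p, alpha_reg=1e-3, n_iter=500):
    """Illustrative sketch of the spectral theory: eigendecompose the Gram
    matrix of model activations X (P x M), project neural responses
    Y (P x N) onto its eigenvectors, solve the self-consistent equation
    for kappa, and return the predicted error modes and their geometry."""
    lam, V = np.linalg.eigh(X @ X.T)                     # Gram matrix spectrum
    lam, V = np.clip(lam[::-1], 0.0, None), V[:, ::-1]   # descending, >= 0
    W = np.sum((Y.T @ V) ** 2, axis=0) / np.sum(Y ** 2)  # alignment W_i

    kappa = alpha_reg + lam.sum()      # start above the fixed point
    for _ in range(n_iter):            # monotone fixed-point iteration
        kappa = alpha_reg + kappa * np.sum(lam / (p * lam + kappa))
    gamma = np.sum(p * lam ** 2 / (p * lam + kappa) ** 2)

    W_tilde = kappa ** 2 / (1.0 - gamma) * W / (p * lam + kappa) ** 2
    E_g = W_tilde.sum()                          # generalization error
    R_em = np.sqrt(np.sum(W_tilde ** 2))         # error mode radius
    D_em = E_g ** 2 / np.sum(W_tilde ** 2)       # error mode dimension
    return E_g, R_em, D_em, W_tilde
```

By construction the outputs satisfy $E_g(p) = R_{em}(p)\sqrt{D_{em}(p)}$, which serves as a quick sanity check when reimplementing the theory.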
---
Rebuttal Comment 1.1:
Comment: Thank you for your efforts to address my concerns. The presentation of this paper is overall improved. I will upgrade my score to 7. | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their helpful comments and for highlighting the significance of our work. As noted by one reviewer, “Even after a decade of study...it is unclear why the responses of neurons in the visual cortex can be so well predicted...by linearly regressing them from the representations of pretrained DNNs.” Our work uses a recent theoretical framework that directly relates the generalization error from regression to the spectral bias of the model activations and the alignment of the neural responses onto the learnable subspace of the model. As noted by another reviewer, "Analyzing the eigenspectrum of a model’s feature space...goes beyond the standard measures of representational similarity analysis or linear regression experiments that one usually comes across for this line of research."
Here, we address a few concerns shared by multiple reviewers. We also upload a pdf with Rebuttal Figures (RFig).
### 1. We clarified key contributions
We added the following to the end of the introduction:
1. We analytically decompose the generalization error of ridge regression from a model to a set of brain data in terms of the model eigenspectra, the alignment between the eigenvectors of the model and brain data, and the training set size.
2. We introduce two geometric measures that summarize these spectral properties and directly relate to the neural predictivity. We show that these measures distinguish between different models with similar predictivities using a wide variety of network architectures, learning rules, and firing rate datasets from visual cortex.
3. Using spectral theory, we demonstrate that for networks effective in predicting neural data, we can ascertain if their superior performance stems from the model's spectra or alignment with the neural data. Our findings indicate:
(a) Trained neural networks predict neural data better than untrained ones due to better alignment with brain responses.
(b) The effect of adversarial training on these geometric measures interacts with both the cortical area and the neural network layer being analyzed.
### 2. We revised the presentation of the theory
Section 2.2 has been rewritten to improve clarity and provide intuitions. We have also updated Figure 1 and added a helpful schematic (see RFig 1 & 2). We include part of the section here (see 3d9A for the full section):
"...Although the equations are complex, the interpretations of $W_i$ and $E_i(p)$ are simple: $W_i$ quantifies the projected variance in neural responses on model eigenvectors (alignment between neural data and model eigenvectors, i.e., *task-model alignment* [1]). Meanwhile, $E_i(p)$ determines the reduction in the error contributed from each $W_i$ when the training set size is $p$, and depends only on the eigenvalues, $\lambda_i$ (i.e., *spectral bias* [1]).
In this work, we combine both and introduce *error modes* $\tilde W_i(p) := W_i E_i (p)$:
$$\tilde W_i(p) := \frac{\kappa^2}{1-\gamma} \frac{W_i}{(p\lambda_i + \kappa)^2}, \quad E_g = \sum_i \tilde W_i(p)$$
(see Eq. SI.5 for details). As shown in RFig 1C, $\tilde W_i$ associated with large eigenvalues $\lambda_i$ decay faster with increasing $p$ than those associated with small eigenvalues."
The generalization performance of a model is fully characterized by its error modes, $\tilde W_i$. However, due to its vector nature, $\tilde W_i$ is not ideally suited for model comparisons. To address this limitation, we condense the overall shape of $\tilde W_i$ into two geometric measures, while preserving their direct relationship to the generalization error.
### 3. New experiment and clarification of training vs. random model results
One common concern was the clarity of results. We rewrote our results sections to more clearly connect the spectral properties of the model and its alignment to the brain data to the error mode geometry. Additionally, we ran a new experiment to directly test whether training neural networks improves predictivity through improved alignment with neural data as opposed to changes in model eigenspectra. To demonstrate these points, we give the revised section on trained vs. untrained networks below.
"*We analyzed how the error mode geometry for neural predictions differed between trained and randomly initialized DNNs (Fig. 4A). In line with previous results [2,3,4,5], we found that training yielded an improvement in neural predictivity as measured via smaller $E_g(p)$ (RFig 4B). This improvement was most notable in regions V2, V4, and IT, where there was also a characteristic change in the error mode geometry. In these regions, while $R_{em}$ decreased with training, $D_{em}$ surprisingly **increased**.*
...*To gain insight into this, we further investigated how the eigenspectra ($\lambda_i$) and the alignment coefficients ($W_i$) individually contributed to the observed error mode geometry. These two spectral properties can be varied independently. We performed an experiment on the trained and random ResNet50 model activations where we measured $\lambda_i$ from one model and paired it with the $W_i$ for IT data measured from the other model (RFig 4C). When using the eigenspectra of the random model, the $D_{em}$ was lower than when using the eigenspectra of the trained model. In contrast, when using the $W_i$ terms of the random model, the $R_{em}$ was much larger than that of the trained model. This decrease in $R_{em}$ when using the $W_i$ terms from the trained model is the main driver of the improvement in $E_g(p)$ when we use the trained model to predict neural data...*
*...In other words, the eigenvectors of the trained model were overall better aligned with the neural data compared to the random model.*"
[1] Canatar et al., Nature Communications, 2021
[2] Schrimpf et al., bioRxiv, 2018
[3] Saxe et al., Nature Reviews Neuroscience, 2021
[4] Cadena et al., bioRxiv, 2022
[5] Cadena et al., NeurIPS, 2019
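The swap experiment quoted above pairs the eigenspectrum of one model with the alignment coefficients of another. A self-contained sketch of that procedure is given below; the "trained" and "random" activation matrices and the surrogate IT responses are synthetic stand-ins, and the $\kappa$ solver is a simple fixed-point iteration reconstructed from the theory, so only the mechanics (not the numbers) mirror the paper.

```python
import numpy as np

def error_geometry(lam, W, p, alpha=1e-3, n_iter=500):
    # Predicted (E_g, R_em, D_em) from eigenvalues lam and alignments W.
    kappa = alpha + lam.sum()
    for _ in range(n_iter):  # self-consistent equation for kappa
        kappa = alpha + kappa * np.sum(lam / (p * lam + kappa))
    gamma = np.sum(p * lam ** 2 / (p * lam + kappa) ** 2)
    Wt = kappa ** 2 / (1.0 - gamma) * W / (p * lam + kappa) ** 2
    E_g = Wt.sum()
    return E_g, np.sqrt(np.sum(Wt ** 2)), E_g ** 2 / np.sum(Wt ** 2)

def spectral_props(X, Y):
    # Gram-matrix eigenvalues of X and alignment coefficients W_i of Y.
    lam, V = np.linalg.eigh(X @ X.T)
    lam, V = np.clip(lam[::-1], 0.0, None), V[:, ::-1]
    return lam, np.sum((Y.T @ V) ** 2, axis=0) / np.sum(Y ** 2)

# Illustrative stand-ins: a "trained" model with a fast-decaying spectrum,
# a "random" model with a flat spectrum, and surrogate IT responses.
rng = np.random.default_rng(0)
X_trained = rng.standard_normal((100, 50)) * (1.0 / np.arange(1, 51))
X_random = rng.standard_normal((100, 50))
Y_it = rng.standard_normal((100, 30))

lam_t, W_t = spectral_props(X_trained, Y_it)
lam_r, W_r = spectral_props(X_random, Y_it)

# Four pairings: the first two are the actual models, the last two
# isolate the contributions of spectra (lam) vs. alignment (W).
for name, lam, W in [("trained", lam_t, W_t),
                     ("random", lam_r, W_r),
                     ("lam random + W trained", lam_r, W_t),
                     ("lam trained + W random", lam_t, W_r)]:
    E_g, R_em, D_em = error_geometry(lam, W, p=50)
    print(f"{name:24s} E_g={E_g:.3f}  R_em={R_em:.4f}  D_em={D_em:.1f}")
```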
Pdf: /pdf/1ac78da2505e5d4e2d956d568fd650690189ce04.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors studied linear regression-based DNN encoding models of macaque visual cortical areas by extending a recent generalization error theory. They provided a theoretical link between the predictivity and geometry of representations and showed that models with similar generalization errors may have quite different geometrical properties. They compared a randomly initialized ResNet50 with standard and adversarially trained ResNet50 models. Evaluations and analyses are a bit limited.
Strengths: - Analyzing the eigenspectrum of a model's feature space appears to be an interesting analysis that goes beyond the standard measures of representational similarity analysis or linear regression experiments that one usually comes across for this line of research at the intersection of cognitive neuroscience and machine learning. So, I appreciate that effort.
Weaknesses: **Major**
- **Tense**: There's a mix between present tense and past perfect in $\S2$ which I think is not great. Choose one tense and use it consistently. I'd use the present tense to describe the problem settings and methods and either use present tense or past perfect for the results section. Also, use active rather than passive voice which I think is better readable (and more scientific).
- **$\S3.4$ + main results**: I find it very strange to denote the training set size with small $p$. As I mention below, $p$ is usually used to refer to the number of features in some representation space and not to the size of the (training) data. If I were just reading the title of the subsection, I'd think that the generalization error depends on the size of the feature space (although I would not know on which of the feature spaces). This is not implausible, but very different from what you mean/find. Actually, it's pretty trivial that the generalization error depends on the size of the training data for fitting the regression model. When would the generalization error not depend on the size of the training data? This section is in general a bit confusing. In lines 205-207 you write: "[...] it may be that trained networks do better than their random counterparts at predicting V1 data for large, but not small, training data set sizes." Do you refer to the size of the (training) data with which the regression model is fit? If that's the case, I am a bit puzzled because from Bayesian statistics (and statistics in general) we know that the prior is most important for small data regimes. In the infinite data limit, the prior and, therefore, any regularization parameter stops mattering. A pretrained neural network has a much better initialization compared to a randomly initialized network. It has found a good local minimum over the course of pretraining which gives it a big head start. The pretraining is an implicit regularization for any downstream task. Here, we can treat the regression task as a downstream task. So, the prior should matter most when the training data to fit your regression model is small. You seem to find the opposite, or at least that's what you claim.
The only possibility where I can see this being the case is when the training data size for the downstream task (e.g., for performing linear regression) is so small that the parameters of your (regression) model can basically explain no variance at all. But then the findings are pretty useless, aren't they? Actually, I am confident that $N=135$ is too small to find any statistically meaningful effects.
- **Model choice**: It would have been interesting to see at least one, if not more, other architecture(s) being analyzed. I doubt that you can generalize any findings from a single ResNet50 to the whole space of DNNs. Even within the subclass of CNNs, there exist countless models. I find it more interesting to compare CNNs (pick two or three) to ViTs (pick two or three) but comparing a few CNNs (each with a different inductive bias --- e.g., AlexNet, VGG16, ResNet50, ResNext, ConvNeXt, EfficientNetV2), ideally trained with different recipes would have probably been sufficient.
**Minor**
- Lines 26-27: "[...] many different architecture and training procedures lead to similar predictions of neural responses." While the former is true and has been demonstrated multiple times in various ways, it has recently been shown in a comprehensive study that both training data and objective function have a major impact on the degree of alignment with human behavior; much more than architecture (see [Muttenthaler, L., Dippel, J., Linhardt, L., Vandermeulen, R. A., & Kornblith, S. (2022)](https://openreview.net/pdf?id=ReDQ1OUQR0X)). A longer while ago it has also been shown that different data augmentation strategies lead to different biases and as such to different degrees of consistency with human errors. See [Hermann, K., Chen, T., & Kornblith, S. (2020)](https://proceedings.neurips.cc/paper_files/paper/2020/file/db5f9f42a7157abe65bb145000b5871a-Paper.pdf); [Hermann, K., & Lampinen, A. (2020)](https://proceedings.neurips.cc/paper/2020/file/71e9c6620d381d60196ebe694840aaaa-Paper.pdf); [Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., & Brendel, W. (2018)](https://arxiv.org/pdf/1811.12231.pdf); and [Geirhos, R., Narayanappa, K., Mitzkus, B., Thieringer, T., Bethge, M., Wichmann, F. A., & Brendel, W. (2021)](https://proceedings.neurips.cc/paper_files/paper/2021/file/c8877cff22082a16395a57e97232bb6f-Paper.pdf).
- In line 58 is a small grammar mistake: "[...] characterizing how fast the eigenvalues of data Gram matrix fall [...]". There is an article missing before **the** data Gram matrix.
- **Notation**: The notation in $\S2.2$ is weird. Why is the matrix of neural activations $\mathbf{R} \in \mathbb{R}^{P \times N}$? First, $\bf{R}$ can sometimes have a special meaning. So, I'd be careful with that. Second, although not bolded, you use $R$ for the generalization error terms. Third, generally $N$ is used to denote the number of stimuli/examples in your training data and not to refer to the number of features. My suggestion is to denote the matrix of model features as $\mathbf{X} \in \mathbb{R}^{N \times D}$ and the matrix of neural activations $\mathbf{Y} \in \mathbb{R}^{N \times P}$. As such, the regression problem $\mathbf{Y} = \mathbf{X}\mathbf{A}^{\top} + b$ becomes more intuitive. I'd replace $\mathbf{R}^{\ast}(\alpha_{reg}, p)$ with the full equation. $\mathbf{R}^{\ast}(\alpha_{reg}, p)$ is not easy to parse. That being said, note that this is just my personal taste which is why it's a minor weakness. There may be people who don't care about that. I think that maths should be easy to follow, even if a reader skims the paper. I dislike decorative maths. If it's just decorative, it can as well be omitted.
- In Equation 2 is an "is equivalent to" symbol. It is probably not clear to everyone why the two terms are equivalent but not equal. I'd like to see a (short) derivation of it. You can derive it in the Supplementary Material. Similarly, there are two "$\equiv$" symbols in Equation 5. Why did you choose "$\equiv$"? How are the terms "equivalent" but not "equal"? Do you perhaps mean "$\coloneqq$" for an assignment of variables? That would make more sense to me. "$\equiv$" seems to be an abuse of notation but maybe I am missing something here.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How do you explain a decrease in $R_{em}$ and an increase in $D_{em}$ that is caused by (standard) training?
- Why did you choose a (wide) ResNet50 and not any other model? Is there a particular reason for that choice?
- Why do you think "[...] that trained networks do better than their random counterparts at predicting V1 data for large, but not small, training data set sizes"? Is it possible that the size of the (training) data to fit the regression model is just too small to explain any variance at all or do you have another explanation that rules out that possibility? I think that $N = 135$ is too small to see any effects at all which is why I think there's no notable difference between pretrained and randomly initialized networks in your analyses. However, $N = 3600$ is large enough to observe statistically significant effects. So, I doubt that you can draw any conclusions at all about the smaller of the two datasets. Note that $N = 3600$ is still not a particularly large dataset.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes. The authors discussed limitations of their work in a dedicated section at the end of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful, extensive and constructive assessment of our work which led us to improve the clarity of our paper! Please also see global response and Rebuttal Figures (RFig).
**Tense**: Thank you for pointing this out. We edited our draft so that the tense is consistent in each section, and we now use active voice when possible. Please see global response updates to §2.2 and §3.2 for examples.
**§3.4 + main results**:
1. **Notation:** Thank you for pointing out this discrepancy in the literature. We use the convention where $p$ is the training set size and $P$ is the full dataset size to be consistent with previous theoretical work [1,2]. But if you feel that it is absolutely necessary, we are willing to change the notation.
2. **Text and Title:** We fully agree that the title and text of this section are confusing. The main purpose of this section is to show that metrics for "ranking" models depend on dataset size, and that the ordering of models can change based on this choice. This will be clarified in the main text. We also changed the section title to “*Characterizing the Dependence of Generalization Error on Training Set Size*", and updated the first sentence to “*Our spectral theory of neural predictivity characterizes the relationship between the training set size $p$, the generalization error, and the spectral properties of the model and brain responses.*”.
Note that this section mainly explains potential consequences from our theoretical results, and as such could be moved to the Discussion if you think it's appropriate.
3. **Regarding model priors and dataset size:** We apologize for the confusion and thank you for bringing up this excellent example!
While we agree with your intuition, the theory is precisely useful for such examples. In our framework, having a good prior corresponds to the variance in neural responses $\mathbf{Y}$ being fully characterized by the first few model eigenvectors $\mathbf{v}_i$ with large eigenvalues $\lambda_i$. Since the theory posits that the variance corresponding to larger eigenvalues is learned at a faster rate, the theory confirms your example when few samples are sufficient to describe the majority of the variance.
However, we observe that the distribution of the data variance among model eigenvectors on V1 data is quite similar between trained and untrained models, and widely spread. This means that in the small data ($p$) regime, the explained variance remains roughly the same for both models. We therefore concluded that the trained models *may* distinguish themselves in the large $p$ regime, where the observed high dimensionality (hence expressivity) of the feature space ensures learning the remaining portion of the variance in the data.
4. **Model Choice:** Thank you for expressing concerns regarding the experiments, which we believe stem from a lack of experimental details in the main text. The models investigated in Fig. 2-4 spanned many different CNN and ViT architectures. This is now clear in main text lines 76-80 (see response #5 to PvxB). We also included RFig 3 of the error mode geometry for supervised neural networks grouped into different architecture types. For Fig. 5 we used publicly available checkpoints for ResNet50 so that we had different training types with the same architecture.
**Minor Concerns:**
1. **Lines 26-27:** Thank you for highlighting this point and providing these references. We agree that the line of work focusing on human vs. model behavior is particularly interesting, especially given that many models have similar neural predictions. We tried to convey this in the original text Lines 27-33 (which already include some of the referenced papers), but agree that this could be further clarified. We modified the paragraph starting at line 27 and added in these citations. The final sentence of that paragraph now reads:
“*Given the increasing number of findings that demonstrate large variability among candidate computational models, it is an open question why current neural prediction benchmarks are less sensitive to model modification, and how to design future experiments and stimulus sets to better test our models.*”
2. **Grammar Mistake in Line 58:** We updated this.
3. **Notation in §2.2:** We have changed our notation for the neural data matrix from $\mathbf{R}$ to $\mathbf{Y}$ as suggested.
4. **The $\equiv$ symbol:** Indeed, the “equivalent” symbols meant assignment of variables. We switched to the “:=” symbol as suggested.
**Questions:**
1. **$R_{em}$ and $D_{em}$ after training:** These points are now explicitly described in the revised §3.2 (see the global response), and the corresponding RFig 4.
2. **Choice of ResNet50:** Please see the response above regarding other models.
3. **Size of Training Set and V1:** We thank you for this question, which led to experiments presented in RFig 4 and RFig 6. A few things to note:
1. First, it has been reported before that random networks do fairly well for early visual areas [3,4,5], and our observation that the differences are more pronounced in later regions is in line with these previous works. That said, the V1 data is still slightly better predicted by trained networks, which is now explicitly shown in RFig 4B.
2. Note that the V2 dataset has the same number of stimuli presented as the V1 dataset, and we see more difference between trained and random networks in this region, thus the difference cannot be only due to stimulus set size.
3. We have run an additional suite of experiments on V1 region data with larger data sizes ($P=1250$ stimuli) and find that our observations are consistent. Please see our response to reviewer t8VM and RFig 6 for more details.
[1] Bordelon et al., ICML, 2020
[2] Canatar et al., Nature Communications, 2021
[3] Schrimpf et al., bioRxiv, 2018
[4] Saxe et al., Nature Reviews Neuroscience, 2021
[5] Elmoznino et al., bioRxiv, 2022
---
Rebuttal Comment 1.1:
Title: Thanks for the thorough response
Comment: I thank the authors for taking the time to read my review and respond to it thoroughly. I am glad to see that my feedback was useful. Because I appreciate the effort that the authors made to improve their submission and I feel the responsibility to acknowledge that, I will raise my score from 3 to 5.
---
Reply to Comment 1.1.1:
Comment: We thank you for your thoughtful review and raising our score. We are happy to address any further concerns or questions. | Summary: In recent years, DNNs trained on image recognition tasks have emerged as strong predictive models of neural activity in the visual cortex. Typically, a regression model is trained to map DNN responses onto biological neural activity in response to the same inputs. The standard method for evaluating the predictive capacity of given DNN is to quantify the variance explained by the model on held-out images. This compresses the structure of each model into a single scalar value. While this can be used to rank models, it provides little insight into the structure of the model's representations and how they align with the structure of the biological neural representations. Another approach analyzes the similarities in representational geometry between models and neural data. However, the authors, state, these approaches do not relate the geometry of the DNN to the regression model used for the neural predictions.
In this paper, the authors leverage a recently proposed method for analyzing the generalization error of a model in terms of the radius and dimensionality of the error along each eigenvector of the input data (images viewed by both model and brain), to analyze the geometry of the generalization error of the regression model that maps a given DNN to neural data.
The authors use this method to analyze layer-wise differences in prediction error in a DNN, in terms of the error geometry. They find that the contributions of the error radius and error dimension change across layers.
They also analyze differences between trained and untrained DNNs at predicting neural responses. For V1 data, they find that the difference in predictivity and error geometry for trained vs. untrained networks is small. However, for V2 - IT data, they find interesting differences. In particular, they find that training reduces error radius but increases error dimensionality, i.e. spreads errors more evenly across error modes. They suggest that the improved predictivity of trained networks for V2 - IT data is primarily driven by changes in their intrinsic expressivity rather than significant changes in their representational alignment with neural data.
In addition, they examine differences between adversarially vs. standardly trained models. Replicating previous findings, they find that adversarial training overall improves model predictivity. They observe differences in how it affects error mode geometry across layers. In earlier layers, error radius is decreased with little change in error dimensionality. In later layers, error dimensionality is the main distinction between robust and standard models.
Lastly, they examine the dependence of the generalization error on the size of the training set of images. They find that for small datasets, the directions with small eigenvalues yield high error. For larger datasets, the additional samples permit learning along small eigenvalue directions.
Strengths: The paper applies a recently proposed methodology for analyzing the generalization error of a model to the problem of understanding DNN-neural data representational alignment. It is a solid idea, and the authors conduct a suite of well-posed experiments that analyze existing results from the literature in this frame.
Weaknesses: The paper would benefit from several changes.
First, the motivation of the methodology was unclear. Intuitively, why are we interested in understanding the radius and dimensionality of the error? What insight can this give us into understanding representational alignment? What's the motivation for projecting along the eigenvectors of the data's Gram matrix? I think strong answers to these questions can be formulated. They should appear front and center in the paper.
Second, the conclusions obtained by the experiments were difficult to tease out. These should be clearly and simply stated, and presented in a list as an overview of the results. The way the results are written and presented visually, the reader needs to do a lot of digging to parse the meaning and significance. Consider making more intuitive graphical figures to emphasize the main points / contributions / conclusions, and/or condensing the results figures into simpler plots that highlight the important trends.
Lastly, the standard approach for interpreting DNN-neural data alignment could be more clearly explained and illustrated, so the reader understands how your approach differs. Consider making a schematic plot that highlights this distinction.
Overall, I felt the paper required considerable effort to understand. Given that this work draws from several disciplines and relies on a very recently proposed methodology, I believe the authors should do more work to help the reader understand the motivation and significance of this work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I'd like to get a more intuitive handle on the significance of the error modes. I expect the eigenvectors of the gram matrix of the image data would be something like a Fourier basis. In this case, we'd expect large eigenvalues for low frequency bases and small eigenvalues for high frequency. Does this analysis essentially explain how error is distributed across frequency content in the images? If so, how does this give us insight into representational alignment?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The significance of the conclusions drawn from the experiments is not clear. The purpose of this tool is to provide insight into neural representations in DNNs and the brain. Greater effort should thus be made to make the meaning of the quantified values (error radius / dimension along eigenvalues of the data's Gram matrix) interpretable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review of our work. Please see the responses below.
## Weaknesses:
1. **Motivation for methodology:**
We are thankful for the suggestions and made changes to improve the clarity of our work.
We found that the radius and dimensionality of the error modes are a convenient way to summarize spectral properties of the generalization error. Specifically, the two measures jointly summarize whether the distribution of the error modes $\tilde W_i$ is relatively spread out (higher $D_{em}$, lower $R_{em}$) or tightly concentrated (lower $D_{em}$, higher $R_{em}$) at a fixed generalization error. Because the total error ($E_g(p)$) is the product of these two properties, we can easily visualize how the error mode geometry of many models differs. We include RFig 2 (which we will add to the main text) to give readers additional intuition for these quantities and how to interpret our plots.
The motivation for projecting along the eigenvectors of the Gram matrix is that it allows us to write the generalization error in terms of the alignment coefficients $W_i$ and the spectrum $\{\lambda_i\}$. We have highlighted these interpretations in the revised figures (RFig 1C-D) and the revised Section 2.2; see the global response for details.
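To make the radius/dimensionality decomposition concrete, here is a minimal numerical sketch. The functional forms below are participation-ratio-style definitions that we use purely for illustration (the weights and the exact formulas are our illustrative assumptions, not taken verbatim from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-negative error-mode weights (illustrative values only)
w = rng.uniform(0.0, 1.0, size=100)

E_g = w.sum()                          # total generalization error (sum over modes)
D_em = w.sum() ** 2 / (w ** 2).sum()   # dimensionality: a participation ratio
R_em = (w ** 2).sum() / w.sum()        # radius: a typical mode magnitude

# Under these definitions the two geometric quantities multiply
# back to the total error exactly: D_em * R_em == sum(w)
print(np.isclose(D_em * R_em, E_g))  # True
```

With these definitions, spreading the same total error over more modes raises $D_{em}$ and lowers $R_{em}$, while concentrating it does the opposite, which is the trade-off the contour plots visualize.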
2. **Conclusions for experiments and clarity:**
Thank you for this suggestion. We have added a list of key contributions to the end of the introduction (see the global response). We have significantly revised Figure 1 (RFig 1), and have expanded Figure 1E (RFig 2) to help readers interpret the contour plots. We have also replaced Figure 4B-D with RFig 4, where we performed an explicit experiment analyzing the contribution of $\lambda_i$ and $W_i$ to the generalization error to emphasize the main conclusions of this section (see global response for full details).
3. **Comparison to standard approaches:**
Our approach decomposes one commonly used measure of DNN-brain comparison (neural predictivity via regression) in terms of the spectral properties of the model and its alignment with the neural data. As such, our analyses primarily concern how and why models achieve the predictivities that they do, rather than offering an alternative scalar metric for predictivity. Indeed, as one reviewer noted, "Analyzing the eigenspectrum of a model's feature space [....] goes beyond the standard measures of representational similarity analysis or linear regression experiments that one usually comes across for this line of research..." To make this more explicit we have added a schematic figure highlighting this aspect of our method (RFig 2).
4. **Overall:**
Thank you for this suggestion. We believe that the changes detailed in the global response have made the manuscript significantly easier to understand.
## Questions:
1. **Intuitive picture for error modes and analogy to Fourier basis:**
Please note that the Gram matrices that we refer to in the paper are defined on the artificial and biological network representations, rather than the images themselves. For example, the Gram matrix $\mathbf{G}_X$ is defined as $\mathbf{G}_X = \mathbf{X}\mathbf{X}^T$, where $\mathbf{X}$ is a $P\times M$ matrix containing the activations of $M$ artificial network units responding to $P$ images. Given that the network responses have no direct connection to the frequency content of the images, neither do the Gram matrices.
Our work focuses on the alignment between the Gram matrices of the artificial and biological network representations, denoted $\mathbf{G}_X$ and $\mathbf{G}_Y$, respectively. The alignment between these two matrices, as expressed by the coefficients $W_i$, therefore quantifies the similarity between the artificial and biological network representations.
Nevertheless, we agree that it would be of interest to gain insight into how these error modes map onto the image space. One future direction we are interested in pursuing is visualizing error modes for a given network that are contributing the most and least to the error. It is possible that these could somewhat correspond to specific frequencies (especially when investigating early layers of the neural network). However, this type of analysis is beyond the scope of this paper.
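To make the objects above concrete, here is a minimal numerical sketch of the two Gram matrices (the sizes and the random responses are arbitrary; this is an illustration, not our analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: P images, M model units, N recorded neurons
P, M, N = 50, 30, 40

X = rng.standard_normal((P, M))  # artificial network responses to P images
Y = rng.standard_normal((P, N))  # biological responses to the same images

# The Gram matrices are defined on the representations, not the images:
G_X = X @ X.T  # shape (P, P)
G_Y = Y @ Y.T  # shape (P, P)

# Both matrices are indexed by the same P images, so G_Y can be expanded
# in the eigenbasis of G_X to quantify representational alignment.
eigvals, eigvecs = np.linalg.eigh(G_X)
alignment = eigvecs.T @ G_Y @ eigvecs

print(G_X.shape, G_Y.shape)
```

Because both Gram matrices live in the shared $P \times P$ image-index space, they can be compared directly even though the model and the brain have different numbers of units.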
## Limitations:
1. **Main conclusions from experiments and their significance:**
Thank you for this suggestion. To make the interpretation of $R_{em}$ and $D_{em}$ clearer to the reader, we have significantly revised the figures in the paper with helpful schematics. Furthermore, we greatly simplified the presentation of the theory in section 2.2 (see global response), and we revised section 3.2 to more clearly explain our results involving $R_{em}$ and $D_{em}$ (see global response).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the thorough rebuttal. The rebuttal addressed my concerns, which primarily regard the clarity of the results and presentation of the main ideas. With these clarifications incorporated into the paper, I think it will be a strong and valuable piece of work. I have adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and suggestions for how to clarify the presentation and clarity of the paper, and for thinking that this will be a strong piece of work. If you have any further questions we are happy to address them. | null | null | null | null |
Common Ground in Cooperative Communication | Accept (spotlight) | Summary: The authors identify the problem of common ground as the core challenge in cooperative communication, where common ground means having enough shared knowledge and understanding to successfully communicate. They argue that prior models of cooperative communication uniformly assume the strongest form of common ground, perfect and complete knowledge sharing, and fail to capture the core challenge of cooperative communication.
The authors propose a general theory of cooperative communication that is mathematically principled and explicitly defines a spectrum of common ground possibilities, going well beyond that of perfect and complete knowledge sharing, on spaces that permit arbitrary representations of data and hypotheses. Also, the authors argue that their framework is a strict generalization of prior models of cooperative communication. The authors consider a parametric form of common ground and view the data selection and hypothesis inference processes of communication as encoding and decoding, and thus establish a connection to variational autoencoding. The empirical simulations support and elaborate on the theoretical results.
Strengths: 1. Common ground is a very important concept in communication. Previous research on teacher-student communicative learning ignores this concept. I am glad to see the studies on common ground.
2. The proposed theory is quite mathematically principled. Though not very easy to follow for people without strong math background, I appreciate the authors' efforts in proposing such a theoretical framework in precise mathematical language.
3. The authors bridge the common ground and the VAE method, which is novel.
4. Results support the arguments.
Weaknesses: 1. Common ground is a concept from cognitive science, yet I cannot see any paragraphs analyzing this connection. Is the concept used in this paper the same as the one in psychology and cognitive science? What is the same and what is different? Can the proposed theory be well grounded in the cognitive theories on this concept?
2. A figure is worth a thousand words. It would help to understand the theory a lot if you could add an illustrative figure for the proposed theory. Otherwise, it's a bit hard to follow.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you elaborate more on the major differences between your theory and the related works?
2. What are the potential applications of the theory?
3. What's the biggest limitation of the proposed theory?
4. Can the theory explain the underlying mechanisms for humans to form common ground?
5. When will people form common ground, and when will people recursively estimate each other's minds and cannot reach common ground? Is there a criterion to define the key moment of the emergence of common ground?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As I just mentioned, please analyze the biggest limitation of the proposed theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and comprehensive review. We have addressed questions and considerations common to all the reviewers in our global response. Here we will address the remainder of your review and considerations unique to it. If you feel something in your review has not been addressed either in our global response or below, please let us know. We apologize in advance for anything missed.
**Comment:** Common ground is a concept from cognitive science. I cannot see any paragraphs to do some analysis on this. Is the concept used in this paper the same as the one in psychology and cognitive science? What's the same part and what's the difference? Can the proposed theory be well grounded in the cognitive theories on this concept?
**Response:** The idea of common ground originates in cognitive science, commonly traced to the paper by Herb Clark and Susan Brennan (cited in the intro). Establishment and maintenance of common ground are viewed as a process in this literature, wherein speaker and listener exchange utterances in order to build common ground incrementally. In contrast with the experimental literature on perfect common ground inference, manipulations have been sequential in nature rather than confined to single trials. Moreover, the lack of a formalization of common ground in pragmatic models means that there not only have not been, but could not be, systematic tests even in simple, non-scalable models. One of our contributions is, therefore, to open the door to this possibility by formalizing, for the first time, imperfect common ground and connecting pragmatic inference with imperfect common ground to a powerful family of highly scalable models in machine learning, variational autoencoders. With this contribution, it becomes possible to formalize the sequential reasoning process that underlies the building and maintenance of common ground, which is an important direction for future work. Indeed, prior work developing the antecedent theory of cooperative inference separated the single-interaction problem (Wang, Wang, Paranamana, Shafto, 2020) from the sequential problem (Wang, Wang, Shafto, 2020) because the theoretical considerations for each are quite substantial (see also Machine Teaching and Iterative Machine Teaching).
**Comment:** Can you elaborate more on the major differences between your theory and the related works?
**Response:** Previous models only consider situations when both parties have complete and perfect knowledge sharing abilities, effectively, a person communicating with themself. Furthermore, previous models, apart from one as far as we know (Wang, Wang, Paranamana, Shafto, 2020), provided algorithmic solutions to the problem of cooperative communication (with perfect knowledge sharing) without explicitly defining the problem of cooperative communication.
We define the problem of cooperative communication and permit, for example, any solution technique that might be used to find an optimal VAE to be potentially applied to the problem of cooperative communication.
**Comment:** Is there a criterion to define the key moment of the emergence of common ground?
**Response:** From the point of view of our framework, yes: when the pair of sets that define common ground is fixed. | Summary: This paper models cooperative communication, particularly under imperfect knowledge sharing. The authors generalize the model of cooperative communication giving it a principled mathematical footing and elucidating the dynamics of communication by introducing insightful concepts like conditional teaching and learning plans, parametric common ground, data and hypothesis spaces, and communication cost.
The paper delves into the concept of conditional probabilities and their application in communication and learning plans. It introduces the concept of a conditional teaching plan, which is the family of induced conditional probabilities determined from a teacher's communication plan. Similarly, a conditional learning plan is the family of induced conditional probabilities determined from a learner's communication plan.
The authors discuss the concept of communication cost as a loss function, which increases as a function of either ϵ or δ when all other terms are fixed. They explain that the communication cost is non-negative and vanishes if and only if the teacher's induced posterior on data is equal to the learner's prior on data. This suggests that the communication cost is a measure of the difference between the teacher's and learner's understanding of the data. The paper also shows that their formulation of the cooperative communication problem as an encoder-decoder problem has an intuitive connection with Variational Autoencoders.
The paper also references the work of Wang et al. (2020), who formalized the cooperative communication problem as a pair of Boltzmann–Shannon entropy regularized Optimal Transport problems on a discrete data-hypothesis space. This suggests that the authors are building on previous work in the field and applying it to their own research.
The authors also present experimental results, plotting reconstructions and data activation. They explain that the coefficients ϵ and δ are picked based on the best reconstruction effectiveness, and an initialization is sampled at random with p = 0.0625. The size of the sample set I is 6000. They plot samples and their reconstructions, as well as the means of $f_\lambda( \cdot | d)$s for activated and non-activated data.
Strengths: Contributions and Originality: This paper offers a significant contribution to cooperative communication, particularly under imperfect knowledge sharing. The authors generalize the model of cooperative communication giving it a principled mathematical footing and elucidating the dynamics of communication by introducing insightful concepts like conditional teaching and learning plans, parametric common ground, data and hypothesis spaces, and communication cost. The paper's formulation of the cooperative communication problem as an encoder-decoder problem, and its intuitive connection with Variational Autoencoding, is a novel contribution.
Technical Soundness: The methodology is robust within the domain chosen, with a strong emphasis on thorough theoretical analysis complemented by limited practical applications. The results of the paper are convincing and well-presented. The authors provide a detailed analysis of the communication cost and demonstrate how it is influenced by various factors.
Building on Previous Work: The authors reference the work of Wang et al. and are building on previous work in the field.
Experimental Results: The authors present experimental results, plotting reconstructions and data activation. This provides empirical support to their theoretical claims.
Weaknesses: Clarity and Complexity: The paper's technical complexity may limit its accessibility. More detailed justifications for the underlying assumptions may be beneficial. Although intuitive explanations are provided, perhaps a more direct exposition of the complex mathematical ideas could improve its reach.
Technical Soundness: The connection between the theoretical model and actual examples of communication is not clear. Communication doesn't happen in a vacuum, but between actual entities (humans, birds, whales, even networked computers).
Impact and Importance: While the paper presents novel concepts, it is not entirely clear how these concepts advance the state of the art or how they could be used by other researchers or practitioners. The authors could potentially improve the paper by discussing the potential applications or implications of their work in more detail.
Evaluation of Strengths and Weaknesses: The authors do not seem to provide a thorough dispassionate evaluation of the strengths and weaknesses of their work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Could you provide more details on the experimental setup, specifically on how the coefficients ϵ and δ were determined? How did you ensure that these coefficients led to the best reconstruction effectiveness? (The coefficients ϵ and δ are picked based on the best reconstruction effectiveness, and an initialization is sampled at random. However, I didn’t understand how these coefficients were determined.)
Could you elaborate on the interpretation of the experimental results? Specifically, how do the reconstructions and data activations relate to the theoretical concepts discussed in the paper? (The authors plot samples and their reconstructions, as well as the means of $f_\lambda( \cdot | d)$s for activated and non-activated data. The authors note that not all data are activated and that there is continuity of data selection with respect to hypotheses. However, I didn’t understand the interpretation of these results.)
Biggest, most abstract question: What is actually being communicated here?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors include a Broader Impacts section. This does not include limitations to the current work, except for a brief mention that the current work is primarily theoretical. They state that "while the contributions of this work are primarily theoretical, the potential positive societal implications are many," but examples of such implications are not provided. I think it would be better to state the limitations of a purely theoretical approach to the very applied topic of communication. What is the fidelity of the "parametric form" of common ground presented when compared to actual human studies or data-driven approaches based on the work of Clark, for example? While the amount of math presented is impressive and I have to approach it with a presumption of correctness due to time limitations, it's unclear how this theoretical model actually describes real-world communicative phenomena. Another way of stating this is that this paper is really about information theory rather than communication, and the connection rests on assumptions laid out by Shannon's noisy channel model, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and comprehensive review. We have addressed questions and considerations common to all the reviewers in our global response. Here we will address the remainder of your review and considerations unique to it. If you feel something in your review has not been addressed either in our global response or below, please let us know. We apologize in advance for anything missed.
**Comment:** Could you provide more details on the experimental setup, specifically on how the coefficients $\epsilon$ and $\delta$ were determined? How did you ensure that these coefficients led to the best reconstruction effectiveness? (The coefficients $\epsilon$ and $\delta$ are picked based on the best reconstruction effectiveness, and an initialization is sampled at random. However, I didn’t understand how these coefficients were determined.)
**Response:** The constants $\epsilon$ and $\delta$ are hyper-parameters. Each pair of $\epsilon$ and $\delta$ defines a different optimization problem and reflects different preferences with respect to the terms in the objective to be minimized. We are not searching for the best $\epsilon$ and $\delta$. Rather we try to understand how they affect minimizers as they differ. We chose a spread of $\epsilon$s and $\delta$s based on the analytical analysis of our model in the theory section, which outlines phase transitions in the model with respect to these parameters.
**Comment:** Could you elaborate on the interpretation of the experimental results? Specifically, how do the reconstructions and data activations relate to the theoretical concepts discussed in the paper? (The authors plot samples and their reconstructions, as well as the means of $f_\lambda( \cdot | d)$s for activated and non-activated data. The authors note that not all data are activated and that there is continuity of data selection with respect to hypotheses. However, I didn’t understand the interpretation of these results.)
**Response:** Reconstruction effectiveness and data activation are the notions/methods we defined to analyze the properties of communication plans. When we say continuity of data selection with respect to hypotheses, we mean the sampled hypotheses associated with the same data are clustered together around the mean hypothesis represented by the data.
**Comment:** Biggest, most abstract question: What is actually being communicated here?
**Response:** As stated in Section 2, the agents communicate general hypotheses by way of general data. Thus, the scope of communication problems encompasses virtually any problem in ML (and beyond). In the general response, we point to some literature that has used more specific formulations of our general approach. We invite you to review those papers and the citations therein.
---
Rebuttal Comment 1.1:
Comment: Sorry for the delayed feedback here. Thanks for your response. I think most of my questions are clarified and I would encourage you to include these clarifications in the final version if accepted. I still have reservations about the overall clarity and accessibility of the paper due to the amount of math and theorems on which the main body of the paper relies (to your credit, you do address the difficulties of balancing rigor and clarity within the space constraints; I think this is something every author in this field struggles with). I would dispute your assertion that the paper is accessible to *anyone* with an MS-level ML education. With far more than that, it took multiple reads to fully understand how you connect the presented work to existing literature on common ground (e.g., Clark). The mathematical formalisms in the main body are not necessarily intuitive even to someone with a long background in common ground and communication, and it's not evident in the background literature that common ground can be treated as a pure ML problem.
Much of this might be remedied by simply reorganizing parts of the paper, and shifting some of the theorems and proofs to the appendix in favor of more narrative, example-driven, prosaic explanation in the main body. IMO, the rigor would still be there, but the impact of the work would be more evident to someone with an MS-level background. This is a personal preference, but I would generally say that the fewer Greek letters you use, the better job you're doing communicating your ideas (if that sounds cheeky, it is, so take from that what you will). With the proof and theorems in the appendix, they're still there for those who care to dig deeper or formally replicate the work.
Regarding the final question: what is actually being communicated here?—I would encourage you to include at least a few specific examples of the types and instances of communication problems you can address herein, and can restate that the scope is broadly applicable.
I will raise my score to a 6 and I would strongly encourage you to consider such revisions if accepted to NeurIPS, or for a future submission.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your comments, constructive feedback, and raising of your rating. We appreciate your reservation and will most definitely, as you recommend, shift some things to the Supplement and use the newly minted space for more narrative in order to help our readers. Building a strong, stable, and inviting bridge between our abstract mathematical formalization of common ground/cooperative communication (as well as the link to ML) and our readers’ intuitive understanding or understanding coming from previous literature of these concepts is important. Furthermore, we will be sure to add some concrete, illustrative examples of what we will look to address with future work to help our readers even more. | Summary: > This is an emergency review for the paper. Due to the limited time, the content is shorter than a normal review, and the mathematical details are not fully checked.
This paper considers the theory of two-party cooperative communication. Compared to prior models of cooperative communication, the proposed model does not assume the strongest form of common ground, i.e., complete knowledge sharing, and is thus a strict generalization of them. The problem of cooperative communication is modeled as minimizing the communication cost on the pair of plans within the admissible set. Experiments are carried out to study the properties of the model.
Strengths: 1. The proposed modeling of cooperative communication is a strict general form of prior models of cooperative communication. The model is shown to be compatible with previous models as a special case. It can be set on arbitrary data and hypothesis spaces and communication plans.
2. Rich experimental results on initialization, cooperative inference, perturbation, etc., with analysis are provided.
Weaknesses: Lack of analysis and experiments on real scenarios (e.g. the apple experiment in [1]).
The writing in this paper sometimes lacks clarity. It may be better to explain notations as they are introduced, and to use subscripts to distinguish between the teacher and learner (rather than using $\theta$ and $\lambda$, $g$ and $f$). More background information and related work about cooperative communication should be provided.
[1] Wang, Pei, et al. "A mathematical theory of cooperative communication." Advances in Neural Information Processing Systems 33 (2020): 17582-17593.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The definition of the communicative cost seems to be asymmetrical to the teacher and the learner. For example, it only considers how different the teacher chooses to represent the hypothesis with respect to the learner’s prior, but it does not consider the learner’s choice to infer. Can you provide further explanation on this asymmetry?
2. In the experiment studying initialization (Fig. 3.1), how exactly are the networks initialized to be close to a uniform distribution or become “sparse”? How are they constrained to $[-p, p]$? Can you provide more explanation or visualization to show the difference between the two choices of $p$?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The work, as well as some previous work, lacks of demonstrations or analysis of the real-world application of the theory. It is suggested to talk about or show how the theory can go beyond simple probabilistic spaces or be used in specific real-world human-robot cooperative communication cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and comprehensive review. We have addressed questions and considerations common to all the reviewers in our global response. Here we will address the remainder of your review and considerations unique to it. If you feel something in your review has not been addressed either in our global response or below, please let us know. We apologize in advance for anything missed.
**Comment:** The definition of the communicative cost seems to be asymmetrical to the teacher and the learner. For example, it only considers how different the teacher chooses to represent the hypothesis with respect to the learner’s prior, but it does not consider the learner’s choice to infer. Can you provide further explanation on this asymmetry?
**Response:** Yes, our definition of communication cost is asymmetrical. We feel this is natural because communication itself is asymmetrical: it is an ordered pair of processes in which the teacher goes first (encodes) and the learner goes second (decodes). If one thinks of two people talking, for example, one of the two of them will speak first (encoding what's in their mind into words), making them the teacher.
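The asymmetry is analogous to that of the KL divergence between the teacher's induced posterior and the learner's prior. The following minimal sketch uses made-up three-point distributions and a KL-style cost purely for illustration (it is not the paper's actual cost function):

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q): asymmetric in its two arguments
    return float(np.sum(p * np.log(p / q)))

teacher_posterior = np.array([0.7, 0.2, 0.1])  # made-up teacher posterior on data
learner_prior = np.array([0.3, 0.4, 0.3])      # made-up learner prior on data

forward = kl(teacher_posterior, learner_prior)
backward = kl(learner_prior, teacher_posterior)

print(forward != backward)               # True: the two directions differ
print(kl(learner_prior, learner_prior))  # 0.0: cost vanishes when they agree
```

Swapping the roles of the two agents changes the value of the divergence, mirroring the fact that who encodes first matters.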
**Comment:** In the experiment studying initialization (Fig. 3.1), how exactly are the networks initialized to be close to a uniform distribution or become “sparse”?
**Response:** To be clear, we are not initializing the networks or the weights and biases of the networks to be uniform or sparse. Rather, we initialize the categorical distributions parameterized by the networks. As we presented in Section 3.1, footnote 5, the output logits of shallow neural networks are near 0 if all weights and biases are initialized near 0, no matter the inputs. After applying the softmax function, the resulting distribution will be close to a uniform distribution. On the other hand, if the networks are initialized more arbitrarily, the initial logits can fall into a wide range of values. Then, after applying the softmax function to create a probability distribution, the result will be less uniform, i.e., a sparser distribution.
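For intuition, this effect can be reproduced with a minimal sketch. The layer sizes, seed, and entropy comparison below are our own illustrative choices, not the training code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def init_distribution(p, n_in=10, n_out=20):
    # Shallow layer with weights and biases drawn uniformly from [-p, p]
    W = rng.uniform(-p, p, size=(n_out, n_in))
    b = rng.uniform(-p, p, size=n_out)
    x = rng.standard_normal(n_in)  # arbitrary input
    return softmax(W @ x + b)

def mean_entropy(p, trials=200):
    # Average Shannon entropy of the initial categorical distribution
    return float(np.mean([-(q * np.log(q)).sum()
                          for q in (init_distribution(p) for _ in range(trials))]))

# Smaller p -> logits near 0 -> softmax near uniform -> higher entropy;
# larger p -> spread-out logits -> sparser distribution -> lower entropy.
print(mean_entropy(0.0625) > mean_entropy(0.5))  # True
```

With $p = 0.0625$ the average entropy sits very close to the uniform maximum $\log 20$, while $p = 0.5$ yields visibly more concentrated distributions.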
**Comment:** How are they constrained to $[-p, p]$?
**Response:** We have sampled the weights and biases from a uniform distribution on interval $[-p, p]$.
**Comment:** Can you provide more explanation or visualization to show the difference between the two choices of $p$?
**Response:** Echoing the above response, when $p = 0.0625$, the initial categorical distribution is closer to uniform. When $p = 0.5$, the initial distribution is less uniform and, hence, more arbitrary or sparse. | Summary: The paper highlights that the core challenge of cooperative communication is establishing a common ground, which refers to the shared knowledge and understanding necessary for successful communication. Existing models of cooperative communication assume perfect and complete knowledge sharing, thereby overlooking the fundamental challenge of establishing common ground. To address this limitation, the authors propose a general theory of cooperative communication that is mathematically principled and defines a spectrum of common ground possibilities, where the authors introduce a parametric form of common ground and conceptualize the communication process as encoding and decoding through data selection and hypothesis inference. To validate their theoretical results, the authors conduct a series of empirical studies. These experimental results provide additional support for the proposed framework of cooperative communication.
Strengths: The paper's primary strength lies in its ability to generalize previous models of cooperative communication. By explicitly defining a spectrum of common ground possibilities, the authors propose a flexible framework that can accommodate different levels of shared knowledge, surpassing the limitations of previous setups. Additionally, the authors demonstrate clarity in explaining how their proposed method can be reduced to previous models that assume an omniscient agent with the strongest form of common ground and perfect knowledge sharing. This highlights the seamless integration of their approach with existing literature. Another strength of the paper is the discussion of the connection between their method and variational autoencoder. By drawing parallels to a powerful model in modern machine learning, the authors enhance the theoretical foundations of their framework and provide valuable insights.
Weaknesses: The overall writing should be improved in the revision.
My major complaint is about the writing of this paper. This paper suffers from unclear writing and excessive use of technical terms and notation without adequate definitions and explanations. This lack of clarity makes it challenging for readers to follow the paper's content and fully understand the proposed framework.
While the paper conducts two empirical experiments under two different settings (i.e., fully discrete and semi-continuous), it lacks sufficient discussion and explanation of the experimental results. The authors merely report the values without providing thorough insights into the underlying mechanisms that produce such outcomes, leaving readers with unanswered questions about the practical implications of the findings.
In addition, the figures in this paper are too small. I can barely recognize the legends and lines in Figure 3.3. Larger and more visually clear figures would improve the overall readability and understanding of the paper.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weakness.
Some notations and terms should be clearly explained and defined. For example, in line 57, why $\Pi_\mu$ and $\Pi^\nu$ use different script (i.e., one is superscript and the other is subscript)?
In the proposed framework, common ground is defined as a pair of sets of probability measures. Are these probability measures manually defined?
What are the limitations of the proposed framework? What is the scalability of the proposed method? Does it work in a higher-dimensional space, compared to the low-dimensional space tested in the semi-continuous setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: The paper lacks a systematic discussion of limitations of the proposed framework, which could benefit readers in better understanding the work. Scalability is one potential limitation. Additionally, real-world cooperative communication is more complex than communication over low-dimensional synthetic data used in the experiments, raising questions about the feasibility of deploying the proposed method in more intricate scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and comprehensive review. We have addressed questions and considerations common to all the reviewers in our global response. Here we will address the remainder of your review and considerations unique to it. If you feel something in your review has not been attended either in our global response or below, please let us know. We apologize in advance for anything missed.
**Comment:** While the paper conducts two empirical experiments under two different settings (i.e., fully discrete and semi-continuous), it lacks sufficient discussion and explanation of the experimental results. The authors merely report the values without providing thorough insights into the underlying mechanisms that produce such outcomes, leaving readers with unanswered questions about the practical implications of the findings.
**Response:** In the experiments section of the paper, we systematically vary the defining marginals of the common ground sets of joint distributions, as well as the parameter $p$ (to explore different common ground pairs after the appropriate marginals are fixed) and the pair $\delta$ and $\epsilon$ (which shift the balance among the three terms in our cost/loss function), to describe the implications for effective communication and data activation and to provide insight into the underlying mechanisms of our framework. Furthermore, in the theory section of the paper, we offer a five-paragraph analytical discussion of the role of $\delta$ and $\epsilon$. If you have specific unanswered questions, it would be very helpful if they were articulated.
**Comment:** In addition, the figures in this paper are too small. I can barely recognize the legends and lines in Figure 3.3. Larger and more visually clear figures would improve the overall readability and understanding of the paper.
**Response:** We will enlarge them in the revision.
**Comment:** Some notations and terms should be clearly explained and defined. For example, in line 57, why $\Pi_\mu$ and $\Pi^\nu$ use different scripts (i.e., one is superscript and the other is subscript)?
**Response:** As can be read in the paper, the sets $\Pi_\mu$ and $\Pi^\nu$ are different subsets of $\mathscr{P}(D \times H)$ with respect to two conditions. Hence, the notation differs in two ways: $\mu$ versus $\nu$ and subscript versus superscript. If you see a specific piece of notation or a term introduced but undefined, please let us know. That said, we will expand on our definitions in the Supplement for additional clarity.
**Comment:** In the proposed framework, common ground is defined as a pair of sets of probability measures. Are these probability measures manually defined?
**Response:** Yes, our framework assumes the teacher and learner determine these sets beforehand. | Rebuttal 1:
Rebuttal: We are grateful for the feedback and the effort invested in reviewing our work. We address specific considerations individually.
## Strengths (`quoted text`)
Foremost, `common ground is a very important concept`. Our work `offers a significant contribution`. Our paper's `strength lies in its ability to generalize previous models` in **3** ways. 1. `By explicitly defining a spectrum of common ground possibilities, [we] propose a flexible framework..., surpassing the limitations of previous setups` (e.g., Wang et al NeurIPS2020 (oral presentation)). We `demonstrate clarity in explaining how [our] proposed method can be reduced to previous models`, which `highlights the seamless integration of [our] approach with existing literature.` 2. Our model is `mathematically principled` and `can be set on arbitrary data and hypothesis spaces and communication plans`, which permits safe, precise, and robust study. 3. We `bridge the common ground and the VAE method, which is novel.` And `by drawing parallels to a powerful model in modern machine learning, [we] enhance the theoretical foundations of our framework and provide valuable insights.` Connecting other fields to powerful and scalable theories in ML not only offers solutions to problems therein, but also unlocks avenues for applying them in ML. Finally, we `provide empirical support to [our] theoretical claims` with `rich experimental results on initialization, cooperative inference, perturbation, etc., with analysis`.
## Considerations
**On Establishing Common Ground:** How agents decide on the common ground sets is an *independent problem*. Our framework allows agents to 1. rigorously define (imperfect) common ground and 2. *after establishing common ground*, define and find optimal communication plans. Before, this question was ill-posed: how do you know when or if common ground is reached without a concrete notion of common ground? As noted, perfect common ground effectively/arguably trivializes communication. Our paper opens the door to the systematic study of the establishment and maintenance of common ground. This text will be added to the paper.
**Technical Complexity:** The reviewers have echoed the usual tension between technical precision and readability. With new frameworks, the potential for misunderstanding tips the balance away from common exposition choices, e.g., overloading notation. Math provides a precise language for expressing ideas and eliminates ambiguity to ensure accurate communication and support logic-based rather than just intuitive conclusions. So an adequate amount of notation and technicality is necessary and inevitable. (We expect our work to be accessible to anyone with MS-level ML graduate study.) In the Supplement, we will elaborate on notation, etc.
**Application:** Our framework generalizes a broad class of models that have been proposed in linguistics (Frank et al 2012), cognitive development (Jara-Ettinger et al 2016), robotics (Hadfield-Menell et al 2016), education (Gweon et al 2011), cognitive science (Shafto et al 2008), and ML (Zhu 2015). (All cited in our intro.) In linguistics, the original Rational Speech Act theory papers (Frank et al 2012; Goodman et al 2016) describe applications of pragmatic inference in human language. These papers have 1600 citations combined and have been applied to problems in emergent language, vision and language navigation, cultural evolution, multi-agent RL, generating and following instructions, image captioning, machine translation. In cognitive development, Naive Utility Calculus has over 400 citations and applications to inverse RL and scientific models of children's behavior. In robotics, Cooperative Inverse RL and Pedagogic-Pragmatic Inference have been proposed to explain value alignment (Fisac et al 2020) and have been applied to deep RL for aligning with human preferences, multi-agent systems, and learning from demonstration. In education and cognitive science, the Pedagogical reasoning model explains learning from a teacher. These papers have been cited 1400 times and applied to understanding a broad array of experimental findings, informal human learning, and automated tutoring. In ML, the machine teaching approach is primarily a theoretical object, but has been highly influential spawning iterative machine teaching and data poisoning approaches. To our knowledge, these approaches and applications fail to consider imperfect common ground. Using our theory, each of these works has an avenue to expand into a realistic realm: incomplete or imperfect knowledge sharing. Given this vastness, with each application being a potential paper on its own, we believe that choosing one to demonstrate would detract from the broad applicability of our work. 
(Note, we include citation counts to illustrate the futility of tracing all of the implications and applications. The potential influence of having a principled, unambiguous, and robust definition of common ground and the problem of finding optimal communication plans is incredibly broad in ML and beyond.) We will integrate this discussion into the Related Work section of our paper to clarify the impact of our work.
**A Discussion of Limitations:** By connecting cooperative communication to VAEs, limitations that arise therein potentially arise here. While our experiments are on small-scale, low-dimensional, synthetic data, transformer-based VAEs, in the form of LLMs, are large-scale, high-dimensional, and applied on real-world data. Applying our model in more complicated scenarios amounts to being able to work with similar architectures. A question for future work is the degree to which limitations are shared or ameliorated by our approach. Another limitation relates to how we have chosen to apply gradient-based optimization schemes and leverage the power of MLP-based models to deal with constrained optimization. More comprehensive study is a question for future work. We will add this discussion to our work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Analyzing and Improving Greedy 2-Coordinate Updates For Equality-Constrained Optimization via Steepest Descent in the 1-Norm | Reject | Summary: The paper presents new update rules for block-coordinate descent (BCD) methods to minimize a smooth function subject to one linear equality constraint (precisely, all variables must sum to 1) and possibly a box constraint. A popular method to solve large-scale problems of this type is BCD with blocks of size 2 (i.e. we always update 2 coordinates at a time). A prominent example is SVM training in LibSVM.
For problems without the box constraint, the key observation is Lemma 2.1, which shows that the greedy version of such updates is equivalent to 1-norm steepest descent. This allows the authors to employ known results from non-smooth optimization, propose better update rules, and derive their convergence rate. This rate is better than known rates for non-greedy updates.
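As a concrete illustration of the update described here (our sketch, not the authors' code), the greedy 2-coordinate rule picks the coordinates with the largest and smallest partial derivatives and shifts mass between them, which preserves the summation constraint exactly:

```python
import numpy as np

def greedy_two_coordinate(grad_f, x0, step, iters):
    """Greedy 2-coordinate descent for min f(x) s.t. sum(x) is fixed."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        i, j = np.argmax(g), np.argmin(g)
        gamma = step * (g[i] - g[j]) / 2.0
        x[i] -= gamma                 # move mass from the largest-gradient
        x[j] += gamma                 # coordinate to the smallest: sum unchanged
    return x

# Separable quadratic: f(x) = 0.5 * ||x - c||^2, minimized over sum(x) = 1.
c = np.array([0.3, -0.1, 0.5, 0.7])
grad = lambda x: x - c
x0 = np.full(4, 0.25)                 # feasible start (sums to 1)
x = greedy_two_coordinate(grad, x0, step=1.0, iters=200)

# The constrained minimizer is c shifted by a constant so that it sums to 1.
x_star = c + (1 - c.sum()) / len(c)
print(x, x_star)
```

On this separable quadratic the iterates converge to the known constrained minimizer $c + \lambda \mathbf{1}$ with $\lambda = (1 - \sum_i c_i)/n$, while every iterate remains feasible.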
For problems with a box constraint, the situation is more complicated because Lemma 2.1 does not apply. The authors nevertheless propose a new update (GS-1) based on 1-norm steepest descent; it is not guaranteed to change at most 2 variables in every update, but updates changing more than 2 variables occur only a finite number of times. This can be seen as a greedy version of BCD with small blocks. The update has a better convergence rate than known updates that can be computed in reasonable (less than $O(n^2)$) time.
The theory is supported by a synthetic toy experiment on the linear LS problem with one linear equality constraint and a box constraint.
Strengths: Crisp and elegant theoretical result, non-trivial.
However, let me admit that convergence analysis of optimization algorithms is not precisely my expertise, so I cannot reliably judge the novelty (these or similar results may already be known or obvious, ...).
The text is precise and clear.
Weaknesses: A major weakness is the experiments: they are limited to a simple problem (linear least squares) on synthetic data, and even then the data variability tested is small (only a square system). To my understanding, the GS-1 update has never been applied to SVM training - so such experiments would be very interesting.
Moreover, in the experiments the difference $f(x)-f^*$ (vertical axis in Figure 1) is never shown to converge to zero, it is shown only in the range $10^{4.8} - 10^{5.2}$. In my experience, it sometimes happens that greedy updates improve the objective better initially but later they slow down and cyclic/random updates catch up. Or, perhaps, the vertical axis in Figure 1 shows $f(x)$ rather than $f(x)-f^*$ (as in the Supplement)?
Minor issues in the text:
- The five-line derivation on line 79-80 is trivial - it can be shortened to save space.
- Lemma 2.1 is very much related to the well-known result that greedy coordinate descent is equivalent to 1-norm steepest descent (see, e.g., Section 9.4 in Boyd's book on convex optimization). This may deserve a citation.
- Lemma 2.1 would be clearer if $\nabla f(x)$ were replaced by a general vector, say c.
- Line 128: mentioning SVM is not OK here because SVM needs also a box constraint.
- Line 139: the notion of "dimension independent convergence rate" is not well-defined because the problem dimension can be hidden in constants (such as $L_1, \mu_1$). But I understand that this notion may be widely used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How would Figure 1 look like if the iterations continued to a very small (10^-6) error f(x)-f^* ?
How does the GS-1 update behave in SVM training?
Are there any other problem types on which you tested the new updates? Were there any negative results (i.e., the convergence was slow in practice despite theoretical guarantees)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The experiments are limited to a very simple problem with synthetic data.
Though the theoretical results are elegant, the optimization problem considered has a rather limited applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out these minor issues. We will fix them.
> "How does the GS-1 update behave in SVM training?"
Please see the discussion of experiments in the general response.
> "How would Figure 1 look like if the iterations continued to a very small (10^-6) error f(x)-f^*?"
Good question. In the PDF attached to the general response we show the curves if the methods are run much longer. We see that the greedy methods find a reasonable solution far before random methods.
If we consider SVMs where we also have bound constraints, then the difference between random and greedy tends to be even larger. This is because greedy methods eventually only focus on updating the support vectors; greedy methods speed up at this point since they are solving a lower-dimensional problem. In contrast, random updates continue to select the non-support vectors which eventually do not move (so most random iterations do nothing). Taken together with the similar iteration cost and slower rate, random methods are not as popular as greedy methods for training SVMs.
> "Were there any negative results?"
Since the time of submission, we found that it is possible for pure random to outperform pure greedy if we are using the Li values to set the step size and if the Li values are very different. However, we always found that greedy was faster than any form of random update if the Li values are similar or if we use the Li values in the greedy update.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. However, the PDF with additional plots (added to the general rebuttal) does not convincingly support the authors' claim that "the greedy methods find a reasonable solution far before random methods". In Figure 2 of the PDF, the "Random" method (blue line) is only slightly worse than the greedy methods, and it is so only for initial $\approx 10^4$ iterations. Since it is not clear at which accuracy the algorithm is supposed to be terminated, the fact that some method is better than the others for some initial number of iterations might be irrelevant.
It is a pity that the authors did not show also the plots zoomed in for small values of $f(x)-f^*$ because now it is hard to see how the methods compare in this range. (this applies both to Figure 1 and 2)
Is Figure 2 for the real data (SVM) or the synthetic data?
---
Reply to Comment 1.1.1:
Comment: At the final iteration on the plot the 3 greedy methods incorporating the Li values are at numerical precision, the classic greedy method has a sub-optimality close to 0, while the random method has a sub-optimality around 30. It is true that we should have included a zoomed-in version of the end or plotted on a log-scale (which we will update in the paper).
(We made Figure 2 in response to the comment "How would Figure 1 look like if the iterations continued to a very small (10^-6) error f(x)-f^", so it is on the same synthetic data. ) | Summary: The first goal of the paper is to minimize a smooth function subject to a summation constraint. The authors demonstrate that the greedy 2-coordinate descent (CD) method, when applied to the problem with equality constraints, achieves a linear rate of convergence under the proximal PL inequality under the L1-norm formulation. Notably, this convergence rate remains unaffected by the problem dimension, which sets it apart from random selection methods. Furthermore, they establish that there exists at least one steepest descent direction with respect to L1-norm, which can be utilized as a 2-coordinate descent update. They leverage this relationship to derive the convergence rates of the CD algorithm.
Additionally, they explore the minimization involving a summation constraint and prescribed lower and upper bounds on the coordinates. They demonstrate that employing bound- and summation-constrained steepest descent in the L1-norm guarantees significant progress at each step, unlike the GS-s rule. Moreover, this method (called GS-1) is computationally more efficient than GS-q, requiring only $O(n \log n)$ time per update instead of $O(n^2)$.
Strengths: 1. The paper gives linear convergence rates for greedy 2-coordinates CD, and the steepest descent in l1-norm under proximal PL for l1-norm. For problems with equality constraints, this work shows that greedy methods may be faster by a factor ranging from $O(n)$ up to $O(n^2)$ than methods picking coordinates at random,
2. For problems with bound constraints and equality constraints, the paper shows that the GS-1 rule has the benefits of both GS-q and GS-s. Contrary to GS-q, the GS-1 rule can be implemented in $O(n \log n)$, and contrary to GS-s, GS-1 guarantees non-trivial progress at each iteration.
3. The authors also proved a linear convergence rate for GS-q rule under proximal PL for L2-norm. It is not straightforward to use prox-PL to obtain the linear convergence rate for this update rule. They consider the conformal realization used by Tseng and Yun [2009] to upper bound the extra terms in descent inequality by a notion of gradient mapping used in prox-PL’s definition.
4. The paper is clear and well written.
Weaknesses: The experimental part is limited, which is fine for a theoretical optimization paper; but for an ML venue, and given the claim made in the paper that many problems in ML require satisfying an equality constraint (such as discrete probabilities or SVMs with an unregularized bias term), one could expect the experimental section to demonstrate the power of GS-1 on examples beyond a single synthetic equality-constrained least squares problem.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. L42-44: More comments are needed when the authors write that ``despite LIBSVM being perhaps the most widely-used CD method of all time, current analyses of greedy 2-coordinate updates either result in sublinear convergence rates or do not lead to faster rates than random selection [Tseng and Yun, 2009, Beck, 2014]." Did they use proximal PL inequality in L1-norm to analyze the convergence rate of greedy 2-coordinate methods? The linear convergence rate of this method may be due to the PL property.
2. L47-49 "The analysis is based on an equivalence between the greedy update and equality-constrained steepest descent in the 1-norm. This leads to a simple dimension-independent analysis of greedy selection showing that it can converge substantially faster than random selection." Under which assumptions is the faster convergence proven?
3. L. 66 ``If f is continuous, this update is guaranteed to decrease $f$ for sufficiently small $\alpha_k$'' Please quote a relevant paper, or a short proof for this sentence.
4. Question 4 removed as workshop papers are not listed in reviews.
5. L91 “For lower bounding sub-optimality in terms of the 1-norm, we introduce the proximal-PL inequality in the 1-norm.” What do you mean by lower bounding sub-optimality? As far as I can see, you are trying to give an upper bound on sub-optimality in Theorem 2.3.
6. Definition 2.2. “A function, that is L1-Lipschitz”: is it L1-Lipschitz gradient? Did this definition not appear in [Karimi et al. 2016]?
7. Does $f = g(Ax)$ satisfy the proximal PL condition in the L1-norm for a strongly convex function $g$ and a singular matrix $A$? Can we verify this property in practice for functions such as the SVM dual problem?
8. Did [She and Schmidt 2017] assume the same PL condition in the L1-norm (or L2-norm)?
9. Connection between the prox-PL function used in Theorem 3.2 and prox-PL as defined only by (9) in Def. 2.2: could you give an example of a function that satisfies this new assumption?
10. Some minor questions/typos in the appendix:
L372 Should $d_1 + d_2$ not be $d_i + d_j$
L378 second line before end: one $)$ too much
L390 Why do you have $\lambda 1$ in (29) and not simply $\lambda$
L428 $d_1 + d_2 + d_3$ should be $d_1 + d_2 - d_3$
L437 Why do we have the parameter $x' \in \mathrm{Conv}(\{x, x + d\})$?
----
The authors have addressed adequately my questions; they intend to conduct extensive experiments. This is indeed missing in the current version of the paper, it is difficult to evaluate that part with the paper as is. But I find the theoretical contribution solid enough to justify publication.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reading our paper; your comments will help us improve the manuscript. We respond to the highlighted weakness and your questions below.
> "The experimental part is limited"
Please see the discussion of experiments in the general reply.
> 1. "More comments are needed when the authors write ... current analyses of greedy 2-coordinate updates either result in sublinear convergence rates or do not lead to faster rates than random selection [Tseng and Yun, 2009, Beck, 2014]"
Beck considers a very-general class of problems where the proximal-PL condition is not necessarily satisfied. But in this general setting it is only possible to obtain sublinear rates, which would be slower than the linear rates known for SVMs.
Tseng and Yun consider error bounds, which are equivalent to the proximal-PL inequality for Lipschitz-smooth functions [Karimi et al., 2016, Appendix G] and allow linear rates. But Tseng and Yun only show asymptotic rates and measure the error bounds in the 2-norm. We show that keeping the analysis in the 1-norm is the key to obtaining a fast rate.
> 2. Under which assumptions is the faster convergence proven?
It is sometimes hard to compare rates because they can be measured in different ways. This is particularly true among 2-CD papers. However, a clean comparison can be done under the commonly-used assumptions of Theorem 3.2: twice-differentiable $f$, 2-coordinate Lipschitz with constant $L_2$, step size of $1/L_2$, and proximal-PL. In this setting the greedy rate (11) always gains a factor of at least $2n$ over the random rate (12), and may gain a factor as large as $2n^2$.
> 3. "If f is continuous, this update is guaranteed to decrease $f$ for sufficiently small $\alpha_k$."
Here is a quick proof of this fact. Starting from 2-coordinate smoothness of the function $f$ and considering any pair of indices $i \neq j$, we have
$$
\begin{aligned}
f(x + d_{ij})
&\leq f(x) + \nabla_{ij} f(x)^\top d_{ij} + \frac{L}{2} \|d_{ij}\|_2^2\\\\
&= f(x) - \frac{\alpha}{2} (1 - \frac{\alpha L}{2}) \left[ (\nabla_i f(x))^2 + (\nabla_j f(x))^2 \right]
+ \alpha (1 - \frac{\alpha L}{2}) \nabla_i f(x) \nabla_j f(x).
\end{aligned}
$$
Take $\alpha < 2 / L$. If $\nabla_i f(x) \nabla_j f(x) < 0$, then the proof is done. Otherwise, observe that $2 \nabla_i f(x) \nabla_j f(x) \leq (\nabla_i f(x))^2 + (\nabla_j f(x))^2$ by the AM-GM inequality, with equality only when $\nabla_i f(x) = \nabla_j f(x)$. Thus, a sufficiently small $\alpha$ guarantees progress as long as the two gradients are not equal.
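This claim is easy to verify numerically. The following sketch (our illustration, using an arbitrary positive-definite quadratic and the 2-coordinate step from the expansion above) checks both descent and preservation of the summation constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)             # positive-definite test Hessian
L = np.linalg.eigvalsh(A).max()     # smoothness constant of f

f = lambda x: 0.5 * x @ A @ x       # smooth test function
grad = lambda x: A @ x

x = rng.normal(size=5)
g = grad(x)
i, j = 0, 1                         # any pair with g[i] != g[j]
delta = g[i] - g[j]
alpha = 1.0 / L                     # a sufficiently small step size

d = np.zeros(5)
d[i] = -alpha / 2 * delta           # shift mass from coordinate i to j
d[j] = +alpha / 2 * delta
print(f(x + d) < f(x), d.sum())     # descent holds and sum(d) == 0
```

Since $d_i = -d_j$, the update keeps $\sum_k x_k$ fixed, and for $\alpha < 2/L$ the objective strictly decreases whenever the two partial derivatives differ.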
> 4. "similar claim in ... NeurIPS 2022 Workshop"
We note that workshop NeurIPS papers are considered non-archival (see the "NeurIPS 2023 FAQ for Authors").
> 5. "What do you mean by lower bounding sub-optimality?"
This is a good point. The proximal-PL condition should be stated as giving a lower bound on the optimal function value $f^*$ as opposed to the sub-optimality. Specifically, the lower bound would be:
$$f^* \geq f(x) - \frac{1}{2\mu_1}\mathcal{D}(x,L_1)$$
This is probably the nicer way to define the proximal-PL inequality as well.
We will correct this in the paper.
> 6. "is it L1-Lipschitz gradient?"
Yes, agreed.
> 7a. "Does $f = g(Ax)$ satisfy the proximal PL condition in the L1-norm for a strongly convex function $g$ and a singular matrix $A$?"
Yes. Karimi et al. [2016, Appendix F - Case 3] implies that this problem class satisfies the proximal-PL inequality in the 2-norm (though there appears to be a typo in that result - the Hoffman constant should be squared), and from our Appendix C this implies that it satisfies the proximal-PL inequality in the 1-norm.
> 7b. "Can we verify this property for functions in practice such as the SVM dual problem?"
It is more difficult to verify the proximal-PL and PL conditions than it is to verify conditions like convexity and strong-convexity, since we have fewer rules for composing operations that preserve PL. However, the SVM dual problem is covered by Karimi et al. [2016, Appendix F - Case 3 or Case 4] so it will always satisfy the proximal-PL inequality.
> 8. "Did [She and Schmidt 2017] assume the same PL condition in the L1-norm (or L2-norm)?"
They use the standard proximal-PL condition in the L2-norm.
> 9. "Connection between the prox-PL function used in Theorem 3.2 and prox-PL as defined only by (9) in Def. 2.2: could you give an example of a function that satisfies this new assumption?"
The difference between the two settings is the choice of proximal regularizer. Note that any function which is strongly convex in the L1-norm is prox-PL according to both definitions (we will add a proof of this fact to Appendix C). Additional examples include SVMs (in the dual) and optimizing a strictly-convex function over the probability simplex.
> 10. "minor questions/typos in the appendix"
Will fix these, thanks.
---
Rebuttal Comment 1.1:
Title: Empirical evaluation
Comment: My questions on the theory front are adequately addressed, thank you.
About the experimental part: the authors show in Figure 2 in the paper that on the synthetic equality-constrained least square problem, GS-s rule results in the slowest convergence rate. Two of the additional experiments on the SVM real datasets show that GS-s converges as fast as GS-q and GS-1, while the third one shows that GS-s is actually faster than GS-q. Is there any trend (GS-1 being always at least as fast as the GS-q and GS-s) or do all three perform eventually in a similar range, without a clear predictable advantage for either one of them? A more extensive and quantitative empirical evaluation on real datasets would be welcome.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading and responding with your valuable feedback.
We were surprised by the superior performance of GS-s over GS-q on some real datasets, although in subsequent experiments their performance was very similar, particularly after feature normalization. To comprehensively explore this phenomenon, we intend to conduct an extensive set of experiments, to tease apart various practical issues and to pinpoint instances where empirical behavior differs from the presented theory. | Summary: The paper studies new coordinate descent-type methods for equality-constrained problems, where 2 coordinates are updated on each iteration and proves new convergence guarantees under suitable proximal-PL conditions that allow to obtain linear convergence rates for the proposed methods.
The first main result considers the case of a single constraint on the sum of coordinates with a simple greedy rule for selecting the two coordinates to update. A linear dimension-independent convergence rate is established using a fixed step-size and assuming a PL inequality w.r.t. the 1-norm.
Next, the authors consider the more challenging setting in which, on top of the constraint on the sum of coordinates, there are also box constraints on the individual coordinates. Here the authors propose to use a greedy-proximal update named Gauss-Southwell-q, which leads to a linear rate using a fixed step-size and assuming a PL inequality w.r.t. the 2-norm, however with a linear dependence on the dimension in the exponent.
Finally, in the latter setting (equality constraint on the sum + box constraints), considering a PL inequality w.r.t. the 1-norm and a different greedy rule (which corresponds to solving a proximal-style problem w.r.t. the 1-norm and, given the gradient direction, can be solved in O(n log n) time), the authors obtain a linear dimension-independent rate.
The proofs hinge on relating the 2-coordinate descent updates to the steepest descent method w.r.t. the 1 norm, which is interesting.
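To make the update concrete, a minimal sketch of a greedy 2-coordinate step for the sum-constrained setting is given below. This is an illustrative reading only: the choice of the largest/smallest partial-derivative coordinates and the $1/(2L)$ step size are assumptions for the toy example, not necessarily the paper's exact rule.

```python
import numpy as np

def greedy_two_coordinate_step(x, grad, L):
    """One greedy 2-coordinate step for min f(x) s.t. sum(x) is fixed.

    Move mass between the coordinates with the largest and smallest
    partial derivatives; shifting the same amount in opposite
    directions preserves the sum constraint.
    """
    i = int(np.argmax(grad))            # largest partial derivative
    j = int(np.argmin(grad))            # smallest partial derivative
    delta = (grad[i] - grad[j]) / (2 * L)
    x = x.copy()
    x[i] -= delta
    x[j] += delta
    return x

# Toy problem: f(x) = 0.5 * ||x - b||^2 subject to sum(x) = 0.
b = np.array([1.0, -2.0, 3.0, 0.5])
x = np.zeros_like(b)
for _ in range(500):
    x = greedy_two_coordinate_step(x, grad=x - b, L=1.0)
# x approaches the projection of b onto the constraint set: b - mean(b)
```

Because each step shifts the same amount into one coordinate and out of another, every iterate stays feasible for the equality constraint.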
The authors present experiments that support their findings on a random least squares problem with equality (and then also box) constraints.
Strengths: The paper is very well written and is easy to follow. The results and ideas are clearly presented. I enjoyed reading it.
The problems considered are sound and make sense, and the type of methods discussed is of clear practical interest to the community.
The technical argument of relating 2-coordinate descent to steepest descent w.r.t. 1 norm is interesting and might be of further use.
The convergence rates obtained are novel and interesting to the best of my knowledge.
Weaknesses: 1. I think the authors could do a better job of rigorously comparing the complexity of their methods to previous ones. It is not always clear whether the obtained complexities are state-of-the-art (when not restricting attention to greedy 2-CD methods). In particular, from the first paragraph of the second page, it is not clear how this competes with known random selection methods.
2. To continue the above point, it seems all the suggested selection rules require the full gradient direction. In such cases, why are these methods better than full-gradient methods? For instance, for unconstrained least squares, updating a single coordinate is indeed much faster than computing the entire gradient, and makes perfect sense when using random selection (e.g., w.r.t. coordinate-wise Lipschitz parameters). But for the greedy rules we already need the full gradient.
3. The linear rates that are dimension-independent rely on PL w.r.t. the 1-norm. This might seem like ``hiding the dimension under the choice of norm''. Could the authors provide examples of interest that satisfy this inequality with a constant independent of the dimension? Or, more precisely, for which the ratio of the 1-norm constant to the 2-norm constant is dimension-independent?
4. At some point the authors claim that computing the coordinate-wise Lipschitz parameters can be difficult. However, in what I guess is the most relevant setting that also allows for the PL inequality, f(x) = g(Ax) with g strongly convex, I believe these parameters are easy to estimate from the columns of A whenever g is well conditioned (e.g., the squared Euclidean norm in many cases).
5. Throughout the paper the authors mention SVM as a major application, so why not conduct experiments with this application?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for highlighting these strengths. We put in a lot of work to ultimately find what we believe is a simple and elegant analysis of this issue; our older drafts of the paper had much more complicated analyses while achieving slower rates. We comment on the highlighted weaknesses below.
> 1. "I think the authors could do a better job in rigorously comparing the complexity of their methods to previous ones."
Can the reviewer be more specific about what is missing here?
Several months before submission, we contacted several authors of the works on random 2-CD methods (such as Patrascu) and asked them specifically “if you think we have accurately/fairly cited your work and other related works in the area” (we asked them to focus on what is now Section 2.4 in particular). Subject to minor suggestions, which we incorporated before submission, they did not have any objections.
> 2. "in the greedy rules we already need the full gradient"
While this is true, greedy rules do not need to compute the gradient from scratch at each step. There are many problems where we can track the full gradient for a lower cost than computing the full gradient. For example, for SVMs in the typical case of a dense kernel:
- Computing the full gradient from scratch requires $O(n^2)$.
- Computing a single element of the gradient requires $O(n)$.
- Updating the full gradient after a single-coordinate update costs $O(n)$.
So when we do coordinate descent in the above setting, we can track the full gradient for the same $O(n)$ cost required for random/cyclic selection. The above is the key to the efficiency of LIBSVM. There also exist other important problems where we can track the gradient for a similar cost to computing a random element of the gradient. See Nutini et al. [2015, Appendix A] for an in-depth discussion.
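The cost accounting above can be illustrated concretely for a dense quadratic $f(x) = \tfrac{1}{2}x^\top Q x - b^\top x$ (a stand-in for the SVM dual with a dense kernel; the sizes and indices below are arbitrary): after a single-coordinate update, the full gradient can be corrected with one column of $Q$ in $O(n)$ instead of recomputing the $O(n^2)$ matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
Q = A @ A.T                       # dense "kernel" matrix, as in a dual SVM
b = rng.standard_normal(n)

x = np.zeros(n)
grad = Q @ x - b                  # O(n^2): full gradient from scratch, once

# Single-coordinate update, followed by an O(n) gradient correction
# instead of recomputing the O(n^2) matrix-vector product.
i, delta = 3, 0.7
x[i] += delta
grad += delta * Q[:, i]           # rank-one correction: costs O(n)

assert np.allclose(grad, Q @ x - b)   # matches the from-scratch gradient
```

The tracked gradient thus stays exact at every step, so greedy selection over all coordinates costs no more per iteration than a single-coordinate gradient evaluation for this problem class.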
> 3. "hiding the dimension under the choice of norm"
Please see the discussion of dimension independence in the general reply, as well as the response to reviewer TqZB (for cases where $\mu_1$ and $\mu_2$ differ by much less than $n$).
> 4. "coordinate-wise Lipchitz parameters ... it is easy to estimate them"
For many typical problems where we apply coordinate descent it is indeed easy to estimate the coordinate-wise Lipschitz constants (e.g., the squared 2-norm of the columns for least squares). When discussing the challenge of using coordinate-wise Lipschitz constants, we are referring to the $O(n^2)$ cost of computing greedy rules that incorporate these constants (even without bound constraints). We will clarify this in the paper.
> 5. "SVM as a major application"
Please see the discussion of experiments in the general reply.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their comments.
Regarding comparison to previous work, I think that usually a table does the best job for comparing complexity bounds of different methods, but I leave it to the authors to decide.
Regarding the other discussions, in particular concrete settings where the presented methods can be superior to others, I would be happy if you included them in your revision.
I am maintaining my score. | Summary: This work studies minimizing a smooth function with a summation equality constraint over its variables. The authors show a connection between the greedy 2-coordinate update and steepest descent w.r.t. 1-norm, and introduce a new proximal PL assumption w.r.t. 1-norm. They show improved convergence rates under such assumption over random selection. The authors also consider coordinate-wise Lipschitz smoothness and introduce an approximation for greedy 2-coordinate methods. They complement their theoretical results through numerical experiments on randomly generated problems.
Strengths: 1. The connection from the 2-coordinate greedy update to steepest descent w.r.t. the 1-norm seems non-trivial and potentially interesting for equality-constrained problems.
2. The authors offer detailed comparisons on various Gauss-Southwell update rules (GS-1, GS-q and GS-s).
Weaknesses: I believe this work has potential but lacks a few key ingredients.
1) This work lacks justification for the proximal-PL condition w.r.t. the 1-norm. For example, Karimi et al. (2016) provide five important classes of functions that satisfy the proximal-PL condition w.r.t. the 2-norm. It is unclear to me what function classes can satisfy the proximal-PL condition w.r.t. the 1-norm while avoiding the worst-case dependence on $n$ for $\mu_1$. It would also be useful to have empirical comparisons of $\mu_1$ against $\mu_2$ as $n$ scales. In its current state, I do not think it is reasonable to claim that the convergence rate is independent of the problem dimension $n$.
2) I would like to see experiments performed on practical datasets (e.g., LIBSVM datasets) rather than randomly generated Gaussian data, since much of the theoretical results aim to improve upon the rules in LIBSVM. I would also like to see experiments on SVM problems to support their claimed improvements for SVM.
3) I suggest adding markers to the plots in Figure 1 and 2, as the overlapping nature of the results renders some plots unparsable.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. What are the main differences of GS-1 rule (Algorithm 1) for proximal-PL assumption w.r.t. 1-norm, compared to Song et al. (2017)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the suggestions on improving the plots.
> "It is unclear to me what function classes can satisfy the proximal-PL condition w.r.t. 1-norm where the worst case dependence on $n$ for $\mu_1$ can be avoided."
It is a good point that the paper currently does not give an example where $\mu_1$ is better than $\mu_2/n$. We will add a reference to Sections 4.1 and 4.3 of Nutini et al. [2015], who give examples where $\mu_1$ becomes arbitrarily close to $\mu_2$ as $n$ grows. Since these examples are for unconstrained, strongly-convex minimization, we have extended Appendix C to show that strong-convexity of $f$ in the 1-norm implies proximal-PL in the 1-norm with the same constant (i.e. $\mu_1$). Thus, the examples of Nutini et al. apply directly to our constrained setting.
Unfortunately, it is not feasible to compare $\mu_1$ to $\mu_2$ numerically for complicated problems since it is not known how to compute $\mu_1$ explicitly except in simple cases.
> "At its current state, I do not think it is reasonable to claim that the convergence rate is independent of the problem dimension."
Please see the discussion of dimension-independence in the general reply.
> “I would like to see experiments performed on practical datasets… I would also like to see experiments on SVM problems to support their claimed improvements for SVM.”
Please see the discussion of experiments in the general response.
> “What are the main differences of GS-1 rule (Algorithm 1) for proximal-PL assumption w.r.t. 1-norm, compared to Song et al. (2017)?”
While this work and Song et al. both consider using steepest descent in the 1-norm, the problem settings are different: Song et al. consider L1-regularization, where you may only need to update 1 variable, while we consider bound and summation constraints, which is more complicated because you must always update at least 2 variables. So, the algebraic definitions of the GS-1 rule are the same, but the problem-specific implementations are very different.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: I thank the authors for the replies and am glad to know that they find my suggestions helpful. My concerns regarding $\mu_1$ vs $\mu_2$ and the comparison to Song et al. (2017) have been properly addressed.
However, I maintain my reservations regarding the subpar experimental results for an ML venue. While this is mostly a theoretical work, it concerns a practical problem with well-defined experimental settings using real datasets. I am not fully convinced by the results in the attached PDF, although I do understand that it is already a big ask for the authors to add these results within such a short timeframe, and I thank the authors for that.
As such, I am increasing my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading and considering the response. We plan to add a comprehensive set of experiments on real data (beyond what is included in the author response). | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to read the paper and provide feedback on our work. We believe that the paper will be strengthened by incorporating this input. Below we comment on two issues that were brought up in multiple reviews.
**Experiments on real data**
We view our primary contribution as providing the first theoretical proof of the fast convergence of greedy 2-coordinate methods. While the submitted version did not perform experiments on real data, we note that the greedy 2-coordinate code LIBSVM has around 50,000 citations, indicating that practitioners find this approach useful in a variety of practical scenarios. We view providing a theoretical grounding for this approach as the most important contribution of our work and thought further experiments on this topic would be redundant.
Nevertheless, it is true that there are new methods proposed in the paper and it would be interesting to see their performance on real datasets. We are thus now adding experiments on real datasets to the paper. We have found that the performance on real data seems to be similar to the performance on simulated data, although in some cases the GS-1 rule performed better on real data than we expected. As examples, we include experiments on 3 datasets from the LIBSVM webpage in the attached PDF.
**Dimension independence**
In this work we use "dimension independence" as it is used in the optimization literature. Namely, a rate is dimension independent if changing the dimension of the problem (but not other constants) does not change the convergence rate.
For example, the gradient descent rate $(1-\mu_2/L)$ is dimension independent since if we change the dimension $n$ but keep $\mu_2$ and $L$ constant then it converges at the same speed. This is true even though we might expect $L$ to be larger and $\mu_2$ to be smaller for high-dimensional problems. This is in contrast to a random coordinate descent rate of $(1-\mu_2/nL_c)$ which is dimension dependent: it becomes slower as the dimension $n$ increases even with fixed $\mu_2$ and $L_c$ (the coordinate-wise Lipschitz constant of the gradient). Our $(1-2\mu_1/L_2)$ rate is dimension independent since the rate does not change if $n$ changes but $\mu_1$ and $L_2$ are fixed. Notably, our greedy coordinate descent rates are meaningful even if we consider infinite dimensional problems. This is similar to gradient descent.
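This usage can be illustrated numerically. The sketch below plugs arbitrary assumed constants ($\mu_2$, $L$, $L_c$ are placeholders, not values from the paper) into the two linear rates and counts the iterations needed to reach a target accuracy: the $(1-\mu_2/L)$ count is unchanged as $n$ grows, while the $(1-\mu_2/nL_c)$ count grows roughly linearly in $n$.

```python
import math

def iters_to_eps(rho, eps=1e-6):
    """Iterations k needed so that (1 - rho)**k <= eps."""
    return math.ceil(math.log(1 / eps) / -math.log(1 - rho))

mu2, L, L_c = 0.1, 1.0, 1.0       # assumed constants, held fixed as n grows

gd = iters_to_eps(mu2 / L)        # dimension-independent rate: no n anywhere
for n in (10, 100, 1000):
    rcd = iters_to_eps(mu2 / (n * L_c))   # rate degrades as n increases
    print(f"n={n}: dimension-independent {gd} iters, random CD {rcd} iters")
```

The first count is the same for every $n$, which is exactly the sense of "dimension independence" used above.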
Pdf: /pdf/9d3162784d06d9637b127375a4164a2bb34ccae0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Meta Neural Heuristic for Multi-Objective Combinatorial Optimization | Accept (poster) | Summary: This article presents an efficient neural heuristic method based on meta-learning, referred to as EMNH (Efficient Meta-learning Neural Heuristic), for solving Multi-Objective Combinatorial Optimization Problems (MOCOPs). The authors employ a shared multi-task model to expedite the meta-learning process and introduce a weight vector-based scaling symmetric sampling approach to stabilize this process. Furthermore, they propose a layered fine-tuning method to effectively address all sub-problems. Experimental results demonstrate that EMNH outperforms existing neural heuristic methods on three classical MOCOPs and can also compete with traditional heuristic methods.
Strengths: 1. The article addresses the practical and challenging issue of MOCOPs, presenting a novel neural heuristic approach that combines ideas from meta-learning, multi-task learning, and layered fine-tuning.
2. The proposed method's advantages are theoretically analyzed, including accelerated and stabilized meta-learning processes, along with reduced fine-tuning steps.
3. The article conducts comprehensive experiments on three different types of MOCOPs, comparing against various benchmark methods. It showcases the proposed method's advantages in terms of solution quality, learning efficiency, and generalization ability.
Weaknesses: 1. The article inadequately discusses the limitations and shortcomings of the proposed method, such as its applicability to more complex or higher-dimensional MOCOPs, sensitivity to weight vector selection, and risks of overfitting or underfitting.
2. The article lacks sufficient details to explain the model architecture, hyperparameter settings, experimental setup, etc. This might affect the replicability and credibility of the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Did the authors consider using other types of neural network structures or meta-learning algorithms to implement the proposed method? If so, how do they compare to POMO and Reptile?
2. Did the authors attempt to test the proposed method on other types or scales of MOCOPs? If yes, what were the results? If not, what were the reasons?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The proposed method might require substantial training data and computational resources to achieve ideal results, potentially limiting its feasibility and scalability in practical applications.
2. The proposed method might struggle with MOCOPs featuring nonlinear or non-convex objective functions, dynamic or stochastic constraints, multi-modal or multi-peak distributions, and other complex features.
3. The proposed method might not guarantee the identification of true Pareto-optimal solution sets, only approximated solution sets, potentially affecting evaluators' judgments on solution quality and diversity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Title: Posted review for the wrong paper?
Comment: Hi Reviewer EY2k,
It looks like this review is for a different paper? Did you accidentally paste the wrong one in?
Thanks for checking!
---
Rebuttal 2:
Rebuttal: We appreciate the reviewer for the valuable comments, and for finding our method novel, our experiments comprehensive, and our results advantageous. We hope the point-by-point response below will address the remaining concerns.
**To Weakness 1:** Our method is designed for general MOCOPs. Other advanced techniques can be integrated to address more complex or higher-dimensional MOCOPs.
Regarding underfitting and overfitting, we observe from the training curves (see Figures 4(b) and 6) that our model has converged. Additionally, we evaluate our model on 200 randomly generated instances for each problem, as well as on larger-scale and real-world benchmark instances. The results demonstrate the desirable generalization ability of our model.
We would like to supplement the discussion on weight vector selection as follows.
Different weight vectors will lead to different solution distributions. EMNH can utilize proper weight selection methods to obtain a more uniform Pareto front, since EMNH can flexibly tackle arbitrary weight vectors. Specifically, two methods are used for demonstration. (1) A weight assignment method used in PMOCO [1]. (2) We design a scaling weight assignment (SWA) method as follows. Each of the uniform weight vectors $\lambda$ is scaled by $f'$ as $\lambda_m/f'_m$ and normalized into $[0,1]^M$, where $f'_m$ is estimated on a validation dataset associated with $\lambda=(1/M,\dots,1/M)$.
We have supplemented the results on tri-TSP with asymmetric Pareto fronts, as presented in the PDF in "Global Rebuttal". For tri-TSP instances, the coordinates for the three objectives are randomly sampled from $[0,1]^2$, $[0,0.5]^2$, and $[0,0.1]^2$, respectively. For the first method, the non-uniform weight vectors are obtained by multiplying the uniform weight vectors by (1,2,10) element-wise and then normalizing them back to $[0,1]^3$. The results show that both weight vector selection methods can produce a relatively uniform Pareto front.
Furthermore, in our framework, more complex and effective weight vector selection methods [2] can be adapted to handle irregular Pareto fronts.
[1] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization, ICLR, 2022.
[2] A Survey of Weight Vector Adjustment Methods for Decomposition based Multi-objective Evolutionary Algorithms, IEEE TEVC, 2020.
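The SWA rescaling described above can be sketched in a few lines. This is a simplified reading under assumptions: the function name is illustrative, the $f'$ values are hypothetical placeholders, and the renormalization onto the simplex is our interpretation of "normalized into $[0,1]^M$".

```python
import numpy as np

def scaled_weight_assignment(weights, f_prime):
    """Scale each weight vector element-wise by 1/f'_m and renormalize
    each row so the components are nonnegative and sum to 1
    (a simplified reading of the SWA procedure)."""
    w = weights / f_prime
    return w / w.sum(axis=1, keepdims=True)

# Uniform weight vectors for M = 3 objectives (illustrative values).
uniform = np.array([[1/3, 1/3, 1/3],
                    [0.5, 0.25, 0.25],
                    [0.2, 0.3, 0.5]])
f_prime = np.array([1.0, 0.5, 0.1])   # hypothetical objective-value estimates
scaled = scaled_weight_assignment(uniform, f_prime)
```

Objectives with smaller typical values (here the third) receive proportionally larger weights, which is what spreads solutions more evenly over an asymmetric Pareto front.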
**To Weakness 2:** We have tried to provide a comprehensive description of the model architecture in Section 4.1 and Appendix A, along with detailed information on hyperparameter settings and the experimental setup in Section 5.1 and Appendix D. Furthermore, we have made our source code and datasets used in the experiments available, enabling readers to replicate and validate the results.
**To Question 1:** We have focused on utilizing POMO and Reptile, as they are currently considered state-of-the-art methods in the field of single-objective neural combinatorial optimization and meta-learning, respectively. To ensure fairness in our comparisons, all the neural heuristics we compared with employed POMO as their single-objective base model, while the MDRL method we compared with also utilized Reptile. In future research, we plan to explore more advanced meta-learning algorithms and more powerful base models to further enhance the performance of our method.
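For reference, Reptile's outer-loop meta-update (the meta-learning algorithm mentioned above) has a very simple form. The sketch below uses plain parameter vectors in place of the neural model and elides the inner-loop fine-tuning; the two task-adapted vectors are made-up illustrative values.

```python
import numpy as np

def reptile_outer_step(theta, task_thetas, eps=0.1):
    """One Reptile meta-update: move the meta-parameters a fraction
    eps toward the average of the task-adapted parameters."""
    adapted_mean = np.mean(task_thetas, axis=0)
    return theta + eps * (adapted_mean - theta)

theta = np.zeros(3)                        # meta-model parameters
task_thetas = [np.array([1.0, 0.0, 0.0]),  # parameters after inner-loop
               np.array([0.0, 1.0, 0.0])]  # fine-tuning on two tasks
theta = reptile_outer_step(theta, task_thetas)
```

The update needs only the adapted parameters themselves, not second-order gradients, which is what makes Reptile cheap enough to combine with heavy base models such as POMO.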
**To Question 2:** We have conducted experiments on three classic MOCOPs: MOTSP, MOCVRP, and MOKP. The MOTSP consists of four different types. Each problem includes at least three scales, and even five scales for MOTSP.
In our future work, we plan to expand the scope of our method by testing it on more complex MOCOPs. We also aim to improve its generalization capabilities across various problem types and scales by leveraging leading techniques such as pre-training and knowledge distillation.
**To Limitation 1:** Our method may require a large amount of training data and computational resources. However, the offline computational cost is affordable and worthwhile in practical applications, because a well-trained model can quickly generate high-quality solutions for similar problem instances within a specific problem class. This capability makes our method highly valuable in scenarios where efficiency and quality are crucial considerations.
**To Limitation 2:** Our method is designed to address a wide range of MOCOPs, including those with nonlinear or non-convex objective functions. The objective function of a combinatorial problem is inherently non-convex. In the case of the Tchebycheff-decomposed subproblem (as described in Appendix J.2), it even becomes nonlinear.
To handle more complex problem features, our method can be combined with other techniques. For instance, when dealing with complex constraints, our method can incorporate a constraint-handling technique [2] to effectively handle them. Additionally, for dynamic problems, our method can leverage a spatio-temporal neural model [3] to enhance its performance in such scenarios.
[2] Deep Reinforcement Learning with Two-Stage Training Strategy for Practical Electric Vehicle Routing Problem with Time Windows, PPSN, 2022.
[3] Solving Dynamic Traveling Salesman Problems With Deep Reinforcement Learning, IEEE TNNLS, 2023
**To Limitation 3:** Due to the NP-hardness of MOCOPs, finding the exact Pareto set within acceptable computational time is often impractical, especially for large-scale problems. Therefore, it is crucial to design approximate methods that can provide near-optimal solutions within a reasonable timeframe.
---
Rebuttal Comment 2.1:
Title: Official Comment by Reviewer EY2k
Comment: The reviewer appreciates the authors' detailed response. After reading the rebuttal, I have decided to maintain the previous rating.
---
Reply to Comment 2.1.1:
Comment: We appreciate the reviewer for acknowledging our work and response. | Summary: The paper proposes an efficient meta neural heuristic (EMNH) for solving multi-objective combinatorial optimization problems (MOCOPs). The paper provides a novel scaled sampling method for stability and a hierarchical fine-tuning method for sub-task-specific performance improvement over MDRL. The idea is sound and the experiments yield better performance than the state-of-the-art methods for MOCOP.
Strengths: The idea of using multi-task body with different heads in meta-learning is sound.
The experiments are detailed and convincing.
Weaknesses: I’m not an expert in MOCOP problem, as far as I’m concerned, I detect no weakness of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. According to Table 2, although the traditional heuristics may yield the best performance, it would take them much more time to solve the problem. Given the same amount of fine-tuning time, what is the performance of the proposed MOCOP method EMNH? (i.e. set K to a large number).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed EMNH provides no guarantee for the pareto front.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the valuable comments, and finding our experiments detailed and convincing. Our EMNH method, when provided with a sufficiently large value of $K$ (the number of fine-tuning steps), may not outperform traditional strong solvers like WS-LKH. We conducted a study to examine the impact of $K$ on performance in the original submission, and the results are presented in Figure 4(c) and Figure 7. It is evident that the model has almost converged when $K$ equals 20. Consequently, further increasing $K$ has little effect on performance, which aligns with our additional tests. It is worth noting that the model is fine-tuned on the fine-tuning instances for a given weight vector, rather than the test instances.
Nevertheless, our EMNH approach excels in generating a desirable Pareto set within a short solving time. It can also serve as an initial solution generator, which can be further enhanced using active search techniques [1] or traditional heuristics. However, we understand and acknowledge the concern of the reviewer. To further improve the performance of EMNH, we also plan to investigate more advanced meta-learning algorithms and employ more powerful base models, as outlined in the future work section of our paper.
[1] Efficient Active Search for Combinatorial Optimization Problems, ICLR, 2022.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Dear authors, thanks for your time in rebuttal. I have no more questions and I raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for acknowledging our work and response. | Summary: This work proposes EMNH, an efficient meta neural heuristic, for solving multi-objective combinatorial optimization problems. It builds a single meta model to tackle different trade-offs among multiple objectives during training, which can be efficiently fine-tuned into specialized submodels to solve subproblems with different trade-offs. The main contributions are: 1) a multi-task meta model with a parameter-shared body and task-related heads that can handle different trade-offs at the same time; 2) a scaled symmetric sampling method to stabilize the multi-objective training with imbalanced objectives; and 3) a hierarchical fine-tuning method to gradually fine-tune a set of submodels with far fewer steps. Experimental results show that the proposed EMNH method can outperform other neural combinatorial optimization methods on different combinatorial optimization problems such as MOTSP, MOCVRP, and MOKP.
Strengths: + This paper is well-written and easy to follow.
+ Multi-objective combinatorial optimization is important for real-world applications. Neural heuristic is a promising approach to tackle this problem, but only a few methods have been proposed recently. This work is a timely contribution to a promising research direction.
+ The proposed method obtains promising performances on various multi-objective combinatorial optimization problems.
Weaknesses: I have the following concerns for the proposed method:
**1. Runtime and Efficiency of Fine-tuning**
The runtime of the fine-tuning approach is not clearly discussed in the paper. In my understanding, to solve a new problem instance, the meta-learning based methods need to first fine-tune the meta model into different submodels, and then run each submodel to generate the corresponding approximate solutions. In other words, the prediction is not zero-shot inference. However, in all experiments, the reported runtimes for MDRL and EMNH are similar to those of DRL-MOA and PMOCO, which support zero-shot inference. Why is the cost of fine-tuning not included in the runtime?
**2. Structure of Multi-Task Meta Model**
The proposed multi-task meta model has a single model body that is shared by all tasks and specialized heads for different tasks. Recent work has also investigated fine-tuning a small part of the neural combinatorial optimization model to improve the performance of each instance for single-objective problems [1]. What is the advantage of the proposed model structure for multi-objective optimization? Is there any guideline for model design?
[1] Efficient Active Search for Combinatorial Optimization Problems, ICLR 2022.
**3. Solution Distribution**
EMNH mainly uses the weighted-sum strategy to decompose a multi-objective optimization problem into multiple subproblems. According to PMOCO [2], different decomposition methods will lead to very different solution distributions on a given problem instance, especially for problems with more than two objectives. In addition, as reported in a recent work [3], the decomposition-based PMOCO will generate redundant solutions for different preferences. Will EMNH also have these two issues? Can these issues be addressed by the scaled symmetric weight sampling method?
[2] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization, ICLR 2022.
[3] Graph Learning Assisted Multi-Objective Integer Programming, NeurIPS 2022.
**4. Fine-Tuning PMOCO**
As reported in this work, PMOCO usually has promising zero-shot prediction performance, but it cannot be further improved with the fine-tuning approach. In my understanding, PMOCO is also a variant of the AM structure in POMO. According to recent work [1], fine-tuning only a small part of the model parameters in POMO can significantly improve performance on single-objective optimization. It would be interesting to know why fine-tuning does not work for PMOCO.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Please address the concerns raised in Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation of the proposed method has been discussed in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the valuable comments and for considering our paper well written with a timely contribution and promising performance. We hope the point-by-point response below will address the remaining concerns.
**To Weakness 1: Runtime and Efficiency of Fine-tune.** The fine-tuning time is not included in the runtime, as the runtime refers to the inference time on test instances. It's important to note that all our inference results are zero-shot on test instances. As explained in Section 3.3, a submodel corresponding to a specific weight vector is fine-tuned from the well-trained meta-model using fine-tuning instances, rather than using the test instances as done in [8]. This means that the submodel hasn't seen the test instances before the inference. Therefore, the runtime represents the zero-shot inference time. In this sense, the runtime of EMNH and MDRL is also similar to that of DRL-MOA and PMOCO.
[8] Meta-learning-based Deep Reinforcement Learning for Multiobjective Optimization Problems, IEEE TNNLS, 2022.
**To Weakness 2: Structure of Multi-Task Meta Model.** One advantage of our model over [1] is faster inference due to its simpler architecture. In [1], EAS is instance-specific, whereas our model is task-specific, meaning it can infer a batch of instances from a similar task in parallel. Unlike [1], our meta-model has the same architecture as the single-objective neural model, eliminating the need for additional model designs. The multi-task model only requires multiple heads to operate in parallel and does not need to be saved after training the meta-model.
Our model design follows the guideline of feature reuse in meta-learning [4]. As discussed in [4], only the head of the meta-model is updated to align with a specific task, while the body remains reusable for all tasks. This principle has inspired us to propose a multi-task model that accelerates the training of the meta-model.
[1] Efficient Active Search for Combinatorial Optimization Problems, ICLR, 2022.
[4] Rapid learning or feature reuse? towards understanding the effectiveness of MAML, ICLR, 2020
**To Weakness 3: Solution Distribution.** Similar to other decomposition-based methods like PMOCO, EMNH also faces the challenge of different decomposition methods resulting in different solution distributions. However, this issue can be mitigated by employing appropriate weight assignment methods, as demonstrated in PMOCO. EMNH offers the flexibility to handle arbitrary weight vectors. Specifically, when the approximate scales of different objectives are known, we can normalize them to [0,1] to achieve a more uniform Pareto front. Alternatively, we can adjust the weight assignment to generate a more uniform Pareto front. In future work, more advanced weight assignment methods [5] could be explored to address irregular Pareto fronts beyond the scope of this paper.
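For context on the decomposition discussed above, weighted-sum scalarization turns each weight vector into one single-objective subproblem. A minimal, generic sketch (not the authors' code):

```python
def weighted_sum(objective_values, weight):
    """Weighted-sum scalarization: g(x | lambda) = sum_m lambda_m * f_m(x).

    objective_values: the M objective values f_1(x), ..., f_M(x) of a solution x.
    weight: a weight vector lambda on the simplex (non-negative, sums to 1).
    Each distinct lambda defines one single-objective subproblem.
    """
    return sum(w * f for w, f in zip(weight, objective_values))
```

For example, a bi-objective solution with objective values (2.0, 4.0) under the balanced weight vector (0.5, 0.5) scalarizes to 3.0.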
Furthermore, like other decomposition-based methods, EMNH may generate redundant solutions for different weight vectors. This is an inherent limitation of the decomposition approach. To promote greater diversity in solutions, one potential direction is to consider a diversity indicator [6] or divide the objective space into regions [7].
The scaled symmetric sampling method is proposed to stabilize the training process and cannot be directly used to address these issues. However, we also attempt to apply the scaling operation to the weight assignment during inference to alleviate the first issue. Specifically, each uniform weight vector $\lambda$ is scaled by $f'$ as $\lambda_m/f'_m$ and normalized to $[0,1]^M$. Here, $f'_m$ is estimated using a validation dataset associated with $\lambda=(1/M,\dots,1/M)$. The advantage of this scaling weight assignment (SWA) method is that it does not require prior problem information.
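A minimal sketch of the SWA scaling step described above, assuming the scaled weight vectors are renormalized onto the simplex; the function name and normalization details are illustrative, not the authors' implementation:

```python
import numpy as np

def scaling_weight_assignment(weights, f_prime):
    """Scale uniform weight vectors element-wise by 1/f'_m and renormalize.

    weights: (N, M) array of uniform weight vectors on the simplex.
    f_prime: (M,) objective values estimated on a validation set with the
             balanced weight vector (1/M, ..., 1/M).
    """
    scaled = weights / f_prime                          # lambda_m / f'_m
    return scaled / scaled.sum(axis=1, keepdims=True)   # back onto the simplex
```

For instance, with $f'=(1, 2)$ the balanced vector $(0.5, 0.5)$ is rescaled to $(2/3, 1/3)$, shifting preference toward the objective with the smaller scale.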
In the "Global Rebuttal" PDF, we have included additional results on tri-TSP instances with asymmetric Pareto fronts. For these instances, the coordinates for the three objectives are randomly sampled from $[0,1]^2$, $[0,0.5]^2$, $[0,0.1]^2$, respectively. The results demonstrate that EMNH-SWA effectively produces a more uniform Pareto front. Compared to a scaling weight method with prior knowledge, where uniform weight vectors are element-wise multiplied by (1,2,10) and then normalized back to $[0,1]^3$, EMNH-SWA achieves desirable performance.
[5] A Survey of Weight Vector Adjustment Methods for Decomposition based Multi-objective Evolutionary Algorithms, IEEE TEVC, 2020.
[6] Performance Indicators in Multiobjective Optimization, EJOR, 2021.
[7] Improving Pareto Local Search Using Cooperative Parallelism Strategies for Multiobjective Combinatorial Optimization, IEEE TCYB, 2022
**To Weakness 4: Fine-Tuning PMOCO.** Fine-tuning is not effective for PMOCO, since PMOCO has already converged for the subproblems corresponding to the given weight vectors. As mentioned earlier, our fine-tuning process is performed on fine-tuning instances, not on test instances. This approach differs from the 'fine-tuning' executed on each test instance, also known as active search [1]. Hence, our fine-tuning does not improve the performance of the well-trained PMOCO model for zero-shot inference on test instances.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response.
Comment: Thank you for your detailed response, and all my concerns have been properly addressed. I keep my positive score (6) and lean toward accepting this paper.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for acknowledging our work and rebuttal.
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer d8Lb,
We again appreciate your time in reviewing our paper. We realize that some additional results may facilitate further recognizing the value of our work comprehensively. Thus, we have conducted a supplementary study on the fine-tuning efficiency.
**Additional Response to Weakness 1: Trade-off Between Fine-tuning Efficiency and Performance.** For a given weight vector, EMNH fine-tunes the meta-model to derive a submodel that solves the corresponding subproblem. We study two other (relatively) lightweight fine-tuning methods: only updating the head parameters (denoted as EMNH-FH), following feature reuse [1], and only updating the decoder parameters (denoted as EMNH-FD), as in PMOCO [2]. These two methods allow us to fine-tune and store only parts of the original submodels, i.e., $N$ heads or $N$ decoders, thereby being more computationally efficient. Meanwhile, this benefit may come at the cost of some performance in certain cases. We report the results and the parameter counts of various parts of the model in the tables below. The lightweight fine-tuning has slightly inferior performance compared with the original EMNH in most cases, except on Bi-CVRP ($n$=100). Generally, the more lightweight the fine-tuning, the more the performance deteriorates (cf. EMNH-FH vs. EMNH-FD in the table below, where FH is lighter than FD). However, these lightweight fine-tuning methods can be used as alternatives when computational and memory resources are limited.
Moreover, like EMNH, EMNH-FH can also generate much denser Pareto solutions to improve performance by increasing the number of weight vectors and the corresponding fine-tuned heads. We have plotted the generated Pareto fronts with 105, 300, and 1035 weight vectors on Tri-TSP-1, which verifies the above point. However, we will supplement this result in the final version, since the "Global Rebuttal" PDF containing figures cannot be updated or re-uploaded at this moment.
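The lightweight fine-tuning idea above amounts to freezing everything except the head (EMNH-FH) or the decoder (EMNH-FD). A hedged PyTorch sketch; `TinySolver`, its module names, and `freeze_all_but` are hypothetical stand-ins, not the authors' actual model:

```python
import torch
from torch import nn

class TinySolver(nn.Module):
    """Toy stand-in for a neural solver with a shared body and a
    task-specific head; the module names are illustrative only."""
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(4, 4)
        self.head = nn.Linear(4, 2)

def freeze_all_but(model: nn.Module, trainable_prefix: str):
    """Keep trainable only the parameters whose name starts with the
    given prefix (e.g. 'head' or 'decoder'); freeze the rest."""
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(trainable_prefix)

model = TinySolver()
freeze_all_but(model, "head")  # EMNH-FH-style: fine-tune the head only
```

An optimizer built over `filter(lambda p: p.requires_grad, model.parameters())` would then update only the selected part, so only those parameters need to be stored per weight vector.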
**Table: Results of lightweight fine-tuning methods.**
Bi-CVRP ($n$=20)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4294|0.19%
MDRL-Aug|0.4292|0.23%
EMNH-Aug|0.4302|0.00%
EMNH-FD-Aug|0.4299|0.07%
EMNH-FH-Aug|0.4298|0.09%
Bi-CVRP ($n$=100)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.3966|2.77%
MDRL-Aug|0.4072|0.17%
EMNH-Aug|0.4079|0.00%
EMNH-FD-Aug|0.4082|-0.07%
EMNH-FH-Aug|0.4082|-0.07%
Tri-TSP-1 ($n$=20)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4712|0.00%
MDRL-Aug|0.4712|0.00%
EMNH-Aug|0.4712|0.00%
EMNH-FD-Aug|0.4710|0.04%
EMNH-FH-Aug|0.4707|0.11%
Tri-TSP-1 ($n$=100)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4956|0.34%
MDRL-Aug|0.4958|0.30%
EMNH-Aug|0.4973|0.00%
EMNH-FD-Aug|0.4925|0.97%
EMNH-FH-Aug|0.4906|1.35%
**Table: Parameter numbers of various parts of the model.**
Bi-CVRP Model|#(Parameters)
---|:---:
Whole Model|1287K
Decoder|98K
Head|16K
Tri-TSP-1 Model|#(Parameters)
---|:---:
Whole Model|1303K
Decoder|115K
Head|16K
**Reference**
[1] Rapid learning or feature reuse? towards understanding the effectiveness of MAML, ICLR, 2020.
[2] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization, ICLR, 2022. | Summary: The paper introduces a meta neural heuristic in which a meta model is first trained and then fine-tuned with a few steps
to solve corresponding single-objective subproblems. For the training process, a partial architecture-shared multi-task model is leveraged to achieve parallel learning for the meta model, so as to speed up the training. Meanwhile, a scaled symmetric sampling method with respect to the weight vectors is designed to stabilize the training. For the fine-tuning process, an efficient hierarchical method is proposed to systematically tackle all the subproblems.
The article contains a review of related works and preliminaries, then it presents the introduced methodology.
This is followed by the description of experiments, their settings, results, and analysis. The experiments were carried out for
the multi-objective traveling salesman problem, multi-objective capacitated vehicle routing problem, and multi-objective knapsack problem. They showed that the introduced method is able to outperform the state-of-the-art neural heuristics in terms of solution quality and learning efficiency and yield competitive solutions to the strong traditional heuristics while consuming a much shorter time.
The main text is followed by supplementary materials.
Strengths: The introduced methodology is quite advanced and seems to be innovative and successful.
I also like that it was tested on several problems from different domains and besides reporting efficiency and the quality of the found solutions, the required time is reported too. The paper is well written and it is good that it is followed by the supplementary materials. I like that there is a pseudocode in the supplementary materials and the authors declared that the codes for all the methods will be made available.
Weaknesses: The main weakness I found is that the code and the datasets used in the experiments are not available, so it is difficult to verify and reproduce the results. However, the authors provided some pseudocode and declared that the code will be made publicly available. Besides this, I didn't find serious weaknesses in this paper, but it is possible that I didn't understand some parts.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I don't have any specific questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation, according to the authors, is that the method can not guarantee to obtain the exact Pareto front (similar to other neural heuristics). Also, the code and the datasets used in experiments are not available, so it is difficult to verify and reproduce the results. However, the authors provided some pseudocode and declared that the code will be made publicly available. I didn't find other limitations, but it is possible that I didn't understand some parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the valuable comments and for considering our method advanced and our paper well written. Regarding the source code, on the one hand, we have stated clearly in our original submission: 'Our codes for all the methods will be made available.' On the other hand, we will upload our source code and datasets and share the URL with the AC (according to the NeurIPS rebuttal policy). We thank the reviewer for the support again.
---
Rebuttal Comment 1.1:
Comment: Thank you for the information. I've read the rebuttal and don't have more questions now. I keep my previous decision.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for acknowledging our work and rebuttal. | Rebuttal 1:
Rebuttal: Many thanks for all reviewers' constructive and valuable comments. Following their suggestions, we have made the following main revisions:
1. **Motivation:** We have revised some descriptions to make the connection between our motivation and the proposed method clearer according to the comments of Reviewer TTgk.
2. **Method:** We have supplemented more details of our method, including the model design, fine-tuning process, and inference process.
3. **Experiment:** We have conducted the experiments about the solution distribution according to the suggestions of Reviewer d8Lb.
4. **Addition:** We have further clarified some results of the ablation and hyperparameter study.
Pdf: /pdf/6a7b886990d156c521e869bac78f71d786abe121.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In order to achieve higher learning efficiency and better solution quality, this paper proposed an efficient meta neural heuristic (EMNH), in which a meta model is first trained and then fine-tuned with a few steps to solve corresponding single-objective subproblems. For the training process, an architecture-shared multi-task model is leveraged to achieve parallel learning for the meta model, so as to accelerate the training; meanwhile, a scaled symmetric sampling method with respect to the weight vectors is designed to stabilize the training. For the fine-tuning process, an efficient hierarchical method is proposed to systematically tackle all the subproblems. The general idea of this paper is clear and logical.
Strengths: 1. The methodology part of the paper is clearly described and has a certain degree of logic.
Weaknesses: 1. In the Methodology part, this paper proposed three methods for accelerating training, stabilizing training, and hierarchical fine-tuning. However, in the Introduction part, the connection between the motivation of this paper and the proposed stabilized training method should be described more clearly.
2. In the Experimental Results part, an ablation experiment should be added in order to verify the effect of the proposed method. To be more specific, for the experiments on learning efficiency, a comparison between the original method and the method without the stabilized training part should be conducted.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In the Experimental Results part, the experiment results in tables should be further analyzed.
2. In the Training Efficiency part, more recent comparison methods should be added in order to support the experimental conclusion with more sufficient and comprehensive experimental results.
3. Are there any control parameters in the proposed method? How sensitive are they?
4. The picture in Figure 4 is too small for readers to understand clearly.
5. The writing of the paper could be improved for better description and clarification.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. The experimental problems of this method are limited to real-world problems, and further experimental verification should be carried out on synthetic problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the valuable comments, and considering our idea clear and logical. We hope the point-to-point response below would address the remaining concerns.
**To Weakness 1:** We acknowledge and appreciate the reviewer's concern. It is indeed crucial to establish a clear connection between the motivation of our paper and the proposed stabilized training method. As a result, we will carefully revise and add the following statements to ensure consistency in conveying these points throughout the paper.
'...The recent Meta-DRL (MDRL) [13] has demonstrated the capability to enhance solution quality compared with PMOCO. However, it still faces challenges related to inefficient and unstable training procedures, as well as undesirable fine-tuning processes.'
'During the meta-learning process, the deviation of a few randomly sampled weight vectors may cause fluctuations to the parameter update of the meta model, leading to the unstable training performance. This motivates us to introduce a stabilized training method.'
**To Weakness 2:** Thank you for your suggestion regarding the ablation study. In our original submission, we have already conducted an ablation experiment specifically focusing on the proposed scaled symmetric sampling method. The detailed results can be found in Section 5.3 and Appendix F.2.
To be specific, we compared EMNH with two variants EMNH-R (EMNH with random sampling) and EMNH-S (EMNH with symmetric sampling). EMNH-R reflects the overall effect of the complete scaled symmetric sampling method, while EMNH-S isolates the effect of the corresponding scaling operation. The results clearly demonstrate that the proposed scaled symmetric sampling method effectively stabilizes the training process.
**To Question 1:** Thanks for raising the concern about the experimental results analysis. We acknowledge that the analysis provided in the original submission was concise due to the rich results presented and the limited space. Following the suggestion, we will add a more detailed analysis in the final version. For example, the analysis of the results in Figure 4(c) will be supplemented as follows.
'...Notably, we observed that EMNH with a few fine-tuning steps (e.g., $K = 5$) generally outperforms PMOCO in most cases, as demonstrated in Appendix F. It is important to note that our fine-tuning process is performed individually for each weight vector on dedicated fine-tuning instances. As a result, the fine-tuning does not enhance the performance of the well-trained PMOCO model for zero-shot inference on test instances. This finding indicates that PMOCO has already converged for the subproblems corresponding to the given weight vectors...'
**To Question 2:** The objective of this paper is to evaluate neural solvers or heuristics for MOCOP. We have carefully selected MDRL [1], PMOCO [2], and DRL-MOA [3] as the neural solvers for comparison. The selection of these representatives is based on the following reasons. As shown in the Related Works section, DRL-MOA is acknowledged as a well-known classic neural heuristic for MOCOP. MDRL and PMOCO are recognized as state-of-the-art neural heuristics, surpassing previous approaches, including DRL-MOA.
From our perspective, the experimental results provide compelling evidence to support our conclusion. The original submission includes a comprehensive analysis of training efficiency in Section 5.3 and Appendix F.1. The results clearly demonstrate that EMNH requires approximately $1/\tilde{N}$ of the training time of MDRL. Moreover, the total training time of EMNH is comparable to that of PMOCO. In contrast, DRL-MOA requires training multiple models, resulting in significantly more time, which is less competitive.
**To Question 3:** Thank you for raising the concern about the parameters. We have addressed this issue in Section 5.1, where all the control hyperparameters are listed. Most of these parameters are adopted from previous works, including the learning rate [1,2], the meta-learning rate $\epsilon$ [1], the number of gradient steps for the inner-loop update $T_u$ [1], the batch size $B$ [2], and the number of weight vectors $N$ [2].
We specifically investigated those parameters used in our EMNH, which include the number of sampled weight vectors $\tilde{N}$, the scalarization method, and the number of fine-tuning steps at each level $K$. The sensitivity analysis of $\tilde{N}$ and the scalarization method can be found in Appendix J, while the analysis of $K$ can be found in Appendix F.3. In summary, our findings suggest that setting $\tilde{N}=M$ and using the weighted-sum scalarization method yield desirable results. Additionally, we observed that the model has nearly converged when $K=20$.
**To Question 4:** Thanks for pointing it out. We appreciate the suggestion, and will consider enlarging the picture in Figure 4 if we have sufficient space in the final version.
**To Question 5:** Thanks for the suggestion. We will thoroughly proofread and revise the writing in the final version.
**To Limitation 1:** Thanks for raising this concern. We want to clarify that our experiments were conducted on a diverse set of problem instances, including both real-world problems and synthetic instances. For the real-world problems, we utilized the KroAB instances from TSPLIB, as indicated in Table 8. These instances have also been used in previous works [1][3].
In addition to the real-world problems, we also incorporated randomly generated instances for our experiments. The details and results of these instances can be found in Tables 1 and 2, which are consistent with the approaches taken in prior studies [1][2][3].
[1] Meta-learning-based Deep Reinforcement Learning for Multiobjective Optimization Problems, IEEE TNNLS, 2022.
[2] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization, ICLR, 2022.
[3] Deep Reinforcement Learning for Multiobjective Optimization, IEEE TCYB, 2021.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' comprehensive response, which effectively addressed the majority of my inquiries. Nevertheless, given the modest degree of enhancement and the relatively minor contributions, I am inclined to uphold my initial assessment.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the feedback, and we hope that the response below addresses the outstanding concerns. In this regard, we would like to re-highlight our contribution and enhancement as follows.
1. **Contribution.** It is a known fact that there has been a growing trend to exploit neural heuristics based on deep reinforcement learning to solve MOCOP. However, prior works along this line, including the state-of-the-art MDRL and PMOCO, still struggle to achieve high learning efficiency and solution quality. We thereby propose an efficient meta neural heuristic (EMNH) to push the boundary of this line of research. EMNH outperforms state-of-the-art neural heuristics in terms of learning efficiency and solution quality (see Figure 4, Table 1, and Table 2). Meanwhile, EMNH can produce competitive solutions to the strong traditional heuristics with much shorter solving time, e.g., EMNH's Gap of 0.00% in 1.7 minutes vs. WS-LKH's Gap of -0.11% in 1.8 hours for Bi-TSP-1 ($n$=50) in Table 1.
2. **Novelty.** We propose an accelerated training method via feature reuse and architecture-shared multi-task learning, a stabilized training method via scaled symmetric sampling, and an efficient hierarchical fine-tuning method.
3. **Enhancement.** In terms of **learning efficiency**, our EMNH only spends about $1/\tilde{N}$ of the training time ($\tilde{N}$ is set to at most 6 in Figure 4(a)) of the state-of-the-art MDRL; our EMNH achieves the most stable and best training performance compared with MDRL and other baselines, as shown in Figure 4(b); and our EMNH attains higher performance than MDRL with approximately equal total fine-tuning steps, e.g., EMNH's HV 0.6585 vs. MDRL's HV 0.6441 for $K=1$ in Figure 4(c). In terms of **solution quality**, EMNH outperforms other neural heuristics, especially demonstrating a significant advantage over the state-of-the-art PMOCO, e.g., EMNH's Gap 0.17% vs. PMOCO's Gap 4.19% for Bi-CVRP ($n$=100) in Table 2.
In summary, we believe that our contribution and enhancement are significant in the field of neural MOCO, which could also inspire the subsequent works. Notably, as acknowledged by Reviewer d8Lb with high confidence, 'This work is a timely contribution to a promising research direction.'
Please feel free to let us know if the reviewer still has any other concrete or specific concerns. We are happy to take them as suggestions to further improve our work.
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer TTgk,
We again appreciate your time in reviewing our paper. We conducted an additional study on lightweight fine-tuning, which may further demonstrate the **contribution** and **enhancement** of our work as shown below.
**Trade-off Between Lightweight Fine-tuning and Performance.** For a given weight vector, EMNH fine-tunes the meta-model to derive a submodel that solves the corresponding subproblem. We study two other (relatively) lightweight fine-tuning methods: only updating the head parameters (denoted as EMNH-FH), following feature reuse [1], and only updating the decoder parameters (denoted as EMNH-FD), as in PMOCO [2]. These two methods allow us to fine-tune and store only parts of the original submodels, i.e., $N$ heads or $N$ decoders, thereby being more computationally efficient. Meanwhile, this benefit may come at the cost of some performance in certain cases. We report the results and the parameter counts of various parts of the model in the tables below. The lightweight fine-tuning has slightly inferior performance compared with the original EMNH in most cases, except on Bi-CVRP ($n$=100). Generally, the more lightweight the fine-tuning, the more the performance deteriorates (cf. EMNH-FH vs. EMNH-FD in the table below, where FH is lighter than FD). However, these lightweight fine-tuning methods can be used as alternatives when computational and memory resources are limited.
Moreover, like EMNH, EMNH-FH can also generate much denser Pareto solutions to improve performance by increasing the number of weight vectors and the corresponding fine-tuned heads. We have plotted the generated Pareto fronts with 105, 300, and 1035 weight vectors on Tri-TSP-1, which verifies the above point. However, we will supplement this result in the final version, since the "Global Rebuttal" PDF containing figures cannot be updated or re-uploaded at this moment.
**Table: Results of lightweight fine-tuning methods.**
Bi-CVRP ($n$=20)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4294|0.19%
MDRL-Aug|0.4292|0.23%
EMNH-Aug|0.4302|0.00%
EMNH-FD-Aug|0.4299|0.07%
EMNH-FH-Aug|0.4298|0.09%
Bi-CVRP ($n$=100)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.3966|2.77%
MDRL-Aug|0.4072|0.17%
EMNH-Aug|0.4079|0.00%
EMNH-FD-Aug|0.4082|-0.07%
EMNH-FH-Aug|0.4082|-0.07%
Tri-TSP-1 ($n$=20)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4712|0.00%
MDRL-Aug|0.4712|0.00%
EMNH-Aug|0.4712|0.00%
EMNH-FD-Aug|0.4710|0.04%
EMNH-FH-Aug|0.4707|0.11%
Tri-TSP-1 ($n$=100)|HV|Gap
---|:---:|:---:
PMOCO-Aug|0.4956|0.34%
MDRL-Aug|0.4958|0.30%
EMNH-Aug|0.4973|0.00%
EMNH-FD-Aug|0.4925|0.97%
EMNH-FH-Aug|0.4906|1.35%
**Table: Parameter numbers of various parts of the model.**
Bi-CVRP Model|#(Parameters)
---|:---:
Whole Model|1287K
Decoder|98K
Head|16K
Tri-TSP-1 Model|#(Parameters)
---|:---:
Whole Model|1303K
Decoder|115K
Head|16K
**Reference**
[1] Rapid learning or feature reuse? towards understanding the effectiveness of MAML, ICLR, 2020.
[2] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization, ICLR, 2022. | null | null | null | null | null | null |
Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval | Accept (poster) | Summary: This paper presents a Laplace approximation-based probabilistic retrieval approach (aka. Bayesian metric learning for image retrieval). The author provides a probabilistic view of the contrastive loss based on the von-Mises Fisher distribution and corrections for the Hessian positive definiteness. Extensive experimental evaluations are performed, demonstrating the advantages of the presented approach in calibrated uncertainties, out-of-distribution detection, close-set and open-set retrieval, and ablations on a few parameters of design choices.
Strengths: [Originality] This paper presents the use of Laplace approximation instead of amortized inference for probabilistic retrieval. Although similar ideas have been proposed to improve the amortization gap of variational autoencoders in several prior works, its development in the context of metric learning is novel. This includes the introduction of a probabilistic view of the contrastive loss and the utilization of the Hessian approximation based on GCN.
On the other hand, previous Bayesian metric learning approaches have also been proposed, such as hedge instance embedding (HIE) and probabilistic face embeddings (PFE). In this regard, the novelty of this work lies in the contrastive learning-based loss and the learning of stochastic embeddings based on the Laplacian autoencoder (Miani, M. et al., 2022).
Miani, M., Warburg, F., Moreno-Muñoz, P., Skafte, N., & Hauberg, S. (2022). Laplacian autoencoders for learning stochastic representations. *Advances in Neural Information Processing Systems*, *35*, 21059-21072.
[Quality] Extensive empirical evaluations, including a careful ablation study, are conducted. The results demonstrate that the proposed approach outperforms HIE and PFE in the considered cases.
[Clarity] The paper effectively uses figures to demonstrate ideas. However, I found myself needing to jump between the main text and the supplemental material in order to grasp the concepts. The organization can be improved.
[Significance] This paper demonstrates uncertainty quantification based on Laplace approximation in the context of probabilistic retrieval. This is an important topic for improving the robustness and mitigating the silent failure of deep neural network systems. The main contribution of this paper appears to be the empirical study, which can be informative and useful for practitioners.
Weaknesses: It appears that this paper builds upon prior work, including the Laplacian autoencoder (Miani, M. et al., 2022), as well as several works on uncertainty in metric learning. Consequently, I am more concerned about the unique technical contributions of this work. In this regard, the necessity of the probabilistic view of the contrastive loss is not well presented and not well motivated.
Another missing piece is how to perform test-time retrieval within the proposed framework and how its efficiency compares to the amortized approach. Given a query image, does the proposed approach learn a stochastic representation for it? Is the retrieval based on ranking the deterministic similarities between the query image and all other candidate images? If not, how is it performed?
The organization and clarity of this paper could be improved. If certain results are deemed important enough, they should be moved into the main text. The authors should focus on introducing the unique aspects of this work and provide a strong motivation for them, such as the Von Mises-Fisher distribution. The frequent references to the Appendix disrupt the logical flow of the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The probabilistic view in the supplemental also seems a little bit weird to me since both argument and parameter are data-dependent. Would it be sufficient to just treat the probabilistic contrastive likelihood as a second-order differentiable loss, if they are mathematically equivalent anyway?
What are the main roadblocks that have been overcome in applying the Laplacian autoencoder (LAE) to probabilistic retrieval?
Are there any other ways of approximating the Hessian for the contrastive loss while still maintaining scalability and positive definiteness? Is KFAC applicable, e.g., Ritter et al., 2018?
Ritter, H., Botev, A., & Barber, D. (2018). A scalable Laplace approximation for neural networks. In International Conference on Learning Representations (ICLR 2018).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discussed limitations about computation load which I think is reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > It appears that this paper builds upon prior work, including the Laplacian autoencoder (Miani, M. et al., 2022), as well as several works on uncertainty in metric learning. Consequently, I am more concerned about the unique technical contributions of this work. In this regard, the necessity of the probabilistic view of the contrastive loss is not well presented and not well motivated.
The technical contribution: the paper proposes to use the Laplace approximation for metric learning. This requires (1) proving that the contrastive loss is a log-likelihood, because this is a fundamental assumption of the Laplace approximation; this provides the probabilistic motivation that the reviewer requests. (2) Identifying approximations of the second-order derivative of the loss that ensure it is positive semi-definite, because otherwise one might end up with a covariance matrix with negative variances. (3) A novel decomposition of the GGN approximation, and (4) “putting things together”, which is all too often trivialized.
> Another missing piece is how to perform test-time retrieval within the proposed framework and how its efficiency compares to the amortized approach. Given a query image, does the proposed approach learn a stochastic representation for it? Is the retrieval based on ranking the deterministic similarities between the query image and all other candidate images? If not, how is it performed?
The model predicts stochastic representations. The test time retrieval is performed by finding the nearest neighbors using the mean of the representation. This gives a ranking and can be computed as efficiently as in the deterministic case. The uncertainty of the retrieved candidate is based on the variances (or concentrations for vMF distributions) of the query and the candidate. Thus, the retrieval system provides a ranking of the nearest neighbors and the uncertainty of each neighbor.
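To make this retrieval step concrete, here is a minimal numpy sketch (the function and variable names are illustrative, not the paper's actual implementation): rank candidates by cosine similarity of the mean embeddings, then attach an uncertainty score derived from the vMF concentrations.

```python
import numpy as np

def retrieve(query_mu, query_kappa, cand_mus, cand_kappas, k=5):
    """Rank candidates by cosine similarity of mean embeddings (all unit-norm),
    then report an uncertainty score per retrieved neighbor."""
    sims = cand_mus @ query_mu              # one matrix-vector product, as cheap as deterministic retrieval
    order = np.argsort(-sims)[:k]           # top-k ranking by similarity
    # Lower concentration kappa means a more spread-out vMF, i.e. higher uncertainty.
    uncertainty = 1.0 / query_kappa + 1.0 / cand_kappas[order]
    return order, sims[order], uncertainty

# Toy example: 4 orthonormal candidates in R^8, query built close to candidate 2.
cand_mus = np.eye(4, 8)
query = np.array([0.1, 0.0, 0.99, 0.0, 0.0, 0.0, 0.0, 0.0])
query /= np.linalg.norm(query)
idx, sims, unc = retrieve(query, 50.0, cand_mus, np.array([10.0, 20.0, 30.0, 40.0]), k=2)
```

Note that ranking costs a single matrix-vector product, exactly as in the deterministic case; the per-neighbor uncertainty comes essentially for free from the stored concentrations.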
> The organization and clarity of this paper could be improved. If certain results are deemed important enough, they should be moved into the main text. The authors should focus on introducing the unique aspects of this work and provide a strong motivation for them, such as the Von Mises-Fisher distribution. The frequent references to the Appendix disrupt the logical flow of the paper.
Clarity of the presentation: We thank the reviewer for highlighting that the frequent references to the Appendix disrupt the reading flow. We will update the manuscript, such that it can easily be read without referencing the appendix, e.g. move vMF motivation to the main text.
> Questions:
> The probabilistic view in the supplemental also seems a little bit weird to me since both argument and parameter are data-dependent. Would it be sufficient to just treat the probabilistic contrastive likelihood as a second-order differentiable loss, if they are mathematically equivalent anyway?
The classic contrastive loss is a second-order differentiable loss, but this is *not* sufficient to treat it as a log-likelihood. Sufficient conditions would be to prove positivity and integrability, specifically:
(1) $\mathrm{loss} > 0$ everywhere, which is equivalent to $\mathrm{probability} = \exp(-\mathrm{loss}) < 1$
(2) $\int \exp(-\mathrm{loss}) = C < \infty$
Condition (1) is feasible to prove directly. Condition (2) is trickier: proving the existence of such a constant $C$ is feasible, but finding its exact value is more involved.
Importantly, such a constant $C$ is in general different from 1 (this can be seen, for example, by considering trivial cases), and thus the actual probabilistic loss needs to be renormalized. This means that in any case we need to introduce a new loss
$$ \text{probabilistic loss} = \text{contrastive loss} - \log(C) $$
This is of course “equivalent” as losses, since we are simply adding a constant term that will be neglected by the gradient.
This approach (modifying the contrastive loss until it is a log-likelihood) is totally fine, and the reviewer may consider it simpler. We argue that our approach (defining a log-likelihood and then showing it is equivalent) is more elegant, and perhaps more fundamental. Building a probabilistic loss from the basics, i.e. repulsive and attractive terms for each pair, leads to a better intuition of what is going on and how the different pairs interact with each other. Moreover, it gives cleaner access to the explicit value of the normalization constant, as an explicit function of the von Mises-Fisher normalization constants.
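The "equivalent as losses" point is easy to verify numerically: shifting any differentiable loss by a constant leaves its gradient untouched. A toy finite-difference check (the loss here is a simple stand-in, not the paper's actual contrastive likelihood):

```python
import numpy as np

def toy_contrastive_loss(z, pos, neg, margin=0.5):
    # attractive term for the positive pair, hinged repulsive term for the negative pair
    return np.sum((z - pos) ** 2) + max(0.0, margin - np.sum((z - neg) ** 2))

def num_grad(f, z, eps=1e-6):
    # central finite-difference gradient
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2 * eps)
    return g

z = np.array([0.3, -0.2])
pos = np.array([0.5, 0.1])
neg = np.array([-1.0, 1.0])
log_C = 1.234  # stand-in for the (unknown) normalization constant

g1 = num_grad(lambda v: toy_contrastive_loss(v, pos, neg), z)
g2 = num_grad(lambda v: toy_contrastive_loss(v, pos, neg) - log_C, z)
# g1 and g2 coincide: the constant shift is invisible to gradient-based training.
```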
> What are the main roadblocks that have been overcome in applying the Laplacian autoencoder (LAE) to probabilistic retrieval?
The main roadblocks are (1) showing that the contrastive loss is a log-likelihood, (2) proposing novel approximations to ensure that the second-order derivative is positive semi-definite, and (3) proposing a novel decomposition of the Hessian, such that the l2 normalization layer is not linearized.
> Are there any other ways of approximating Hessian for contrastive loss, while still maintains scalability and positive definiteness? Is KFAC applicable, e.g., Ritter et al., 2018?
Our methods will work out of the box with KFAC, although using ReLU to ensure positive semi-definiteness is only sufficient in the diagonal case. Both the fixed and pos approximations can be used.
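For the diagonal case, the ReLU clamp amounts to the following one-liner (an illustrative sketch; names are not from the paper's codebase): clamping negative diagonal Hessian entries at zero guarantees that the resulting Laplace posterior covariance has no negative variances.

```python
import numpy as np

def psd_diag_hessian(h_diag):
    """ReLU-clamp a diagonal Hessian approximation so that the
    resulting Laplace posterior covariance has no negative variances."""
    return np.maximum(h_diag, 0.0)

h = np.array([2.0, -0.5, 1.5])        # raw diagonal GGN/Hessian estimate (one negative entry)
prior_prec = 1.0                      # prior precision, as in the pseudo-code below
h_psd = psd_diag_hessian(h)
sigma_q = 1.0 / (h_psd + prior_prec)  # diagonal posterior covariance, always positive
```

The prior precision additionally keeps the clamped entries away from zero, so the inversion is always well defined.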
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the rebuttal! I've read it and I maintain my current rating. If "(1) proving that the contrastive loss is a log-likelihood" is viewed as the key technical contribution by the authors, it deserves more presence in the main text - It was put in Appendix D only.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks for the support and feedback. To bring more attention to the likelihood-contribution of the paper, we propose to include a proof sketch showing that the contrastive loss is indeed a log-likelihood. | Summary:
The authors show that contrastive loss can be viewed as a likelihood after projection onto the spherical space. This consequently allows them to use the Laplace approximation to estimate the posterior over the parameters. To make the construction further amenable to estimation, the authors propose approaches to make the Hessian positive definite. Finally, the experimental results and ablations go into details about the design choices.
Strengths: - The authors use a conceptually simple approach to constructing approximate posteriors on top of neural networks using the Laplace approximation.
- The authors provide several analogies for various concepts introduced, which makes the reader comfortable, and this would be a resourceful paper for the community.
- Most design choices in the method are covered by ablations, which is much appreciated.
Weaknesses: - The Laplace approximation hinges on an assumption of vanishing gradient which may not be true for modern large neural networks and large datasets used for representation learning. See Question 2.
- The overall method involves a fair number of moving parts, and it would be good to reconcile them as a single algorithm or a list of bullet points for easy digestion for the reader.
- The method can be computationally expensive, and the authors have to resort to constructing stochasticity over the last-layer parameters only. Much of the post-hoc LA literature seems to work with a similar setup, so I do not count this as a major weakness of this paper but rather of the whole community.
### Minor
- Please use `\citet` instead of `\citep` for references directly referring to the paper, for instance in Line 164.
- It would be great to have Figure 3 on the same page as the text on ensuring positive definiteness on Page 4.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In Line 46, the authors claim that they do not assume any distribution on the stochastic embeddings. But the choice of using Laplacian approximation on the parameters to construct the posterior implicitly makes an assumption on the constructed embeddings. Could the authors clarify this?
2. As stated in Eq. (2), the Laplace approximation comes from a second-order Taylor expansion, where the first-order gradient vanishes due to $\theta^\star$ being the optimum. Does this really happen in practice, and how numerically close are we to zero, for instance w.r.t. the size of models/data?
3. Could the authors confirm if my understanding of the overall approach is correct? (a) Using online LA to construct the posterior (b) Use samples from the posterior to construct stochastic embeddings (c) Use embedding samples to construct $\kappa$ for the von Mises-Fisher distribution which is used as a single number to quantify uncertainty.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes. See also weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The overall method involves a fair number of moving parts, and it would be good to reconcile them as a single algorithm or a list of bullet points for easy digestion for the reader.
We thank the reviewer for the suggestions and will include the following snippets of pseudo-code in the paper.
```python
def train(x, y, parameters, hessian, prior_prec):
    # x is the input data
    # y is the target data
    # parameters is the network parameters
    # hessian is the running Hessian estimate of the network
    # prior_prec is the prior precision; we set it to 1
    mu_q = parameters
    sigma_q = 1 / (hessian + prior_prec)
    network_samples = sample_from_normal(mu_q, sigma_q)
    loss, hessian_batch = 0, 0
    for sample in network_samples:
        emb = model(x, sample)
        pairs = miner(y)
        loss += contrastive_loss(emb, pairs)
        hessian_batch += hessian_calculator(x, pairs)
    loss /= len(network_samples)
    hessian_batch /= len(network_samples)
    # exponential moving average of the Hessian with memory factor alpha
    hessian = alpha * hessian + hessian_batch
```
```python
def inference(x, parameters, hessian, prior_prec):
    # x is the input data
    # parameters is the network parameters
    # hessian is the Hessian of the network
    # prior_prec is the prior precision; we set it to 1
    mu_q = parameters
    sigma_q = 1 / (hessian + prior_prec)
    network_samples = sample_from_normal(mu_q, sigma_q)
    emb = []
    for sample in network_samples:
        emb.append(model(x, sample))
    mu, sigma = vmf_from_samples(emb)
    return mu, sigma
```
We hope this will improve the clarity of the presentation.
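The `vmf_from_samples` step above is left abstract in the pseudo-code; one standard way to realize it is the moment-based vMF fit of Banerjee et al. (2005), where the concentration is approximated as $\kappa \approx \bar{r}(d - \bar{r}^2)/(1 - \bar{r}^2)$, with $\bar{r}$ the norm of the sample mean. A sketch (not necessarily the paper's exact implementation):

```python
import numpy as np

def vmf_from_samples(emb):
    """Fit a von Mises-Fisher distribution to unit-norm embedding samples.
    Returns the mean direction mu and the Banerjee et al. (2005)
    moment approximation of the concentration kappa."""
    emb = np.asarray(emb)
    d = emb.shape[1]
    mean = emb.mean(axis=0)
    r_bar = np.linalg.norm(mean)       # resultant length in [0, 1)
    mu = mean / r_bar                  # mean direction on the sphere
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa

# Tightly clustered samples yield a large kappa (high certainty) ...
rng = np.random.default_rng(0)
tight = rng.normal([5.0, 0.0, 0.0], 0.05, size=(200, 3))
tight /= np.linalg.norm(tight, axis=1, keepdims=True)
mu_t, kappa_t = vmf_from_samples(tight)

# ... while near-uniform samples on the sphere yield a small kappa.
spread = rng.normal(0.0, 1.0, size=(200, 3))
spread /= np.linalg.norm(spread, axis=1, keepdims=True)
_, kappa_s = vmf_from_samples(spread)
```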
> The method can be computationally expensive, and the authors have to resort to construct stochasticity of the last-layer parameters only. Much of the post-hoc LA literature seems to be working with a similar setup, so I do not count this as a major weakness of this paper but rather the whole community.
Bayesian deep learning builds on approximations: We agree with the reviewer that the Bayesian literature tries to approximate the posterior in various ways to come up with computationally feasible and useful methods. We follow common practices in LA and rely on a last-layer, diagonal assumption.
>Minor
> Please use \citet instead of \citep for references directly referring to the paper, for instance in Line 164.
> It would be great to have Figure 3 on the same page as the text on ensuring positive definiteness on Page 4.
We thank the reviewer for the suggestions to improve clarity and have updated the paper accordingly.
>Questions:
> In Line 46, the authors claim that they do not assume any distribution on the stochastic embeddings. But the choice of using Laplacian approximation on the parameters to construct the posterior implicitly makes an assumption on the constructed embeddings. Could the authors clarify this?
The reviewer is right that the Laplace approximation implicitly puts some assumptions on the embedding distribution. Note that these assumptions depend on the choice of the neural network architecture, so the proper statement would be “the Laplace approximation, when tied to a specific neural network architecture, implicitly puts some assumptions on the embedding distribution”. This is exactly what we observe in our case by only considering architectures with a normalization layer at the end: this architectural choice results in the implicit constraint that the embedding distribution is supported on the sphere.
Importantly, no assumptions are present before conditioning on the network architecture. Our text was meant to contrast with existing variational methods that make the Gaussian assumption over the embeddings. Our model makes no such explicit choice, and different instances of the method will result in different assumptions. In principle, with a sufficiently flexible network architecture, it should be possible to obtain any embedding distribution from the Laplace approximation (in a similar spirit to change-of-variables in normalizing flows). In practice, we do observe non-Gaussian unimodal embedding distributions.
> As stated in Eq. (2), the Laplace approximation comes from a second-order Taylor expansion, where the first-order gradient vanishes due to being the optimum. Does this really happen in practice, and how numerically close are we to zero, for instance w.r.t. the size of models/data?
The reviewer is right that the gradient is usually ignored in the Laplace approximation, and the weight posterior is sampled from N(mu, H^{-1}). We experimentally find that if we do not ignore the gradient term, and instead sample from N(mu + grad * H^{-1}, H^{-1}), then we get similar results:
| | map@1 | map@5 | map@10 | auroc | auprc | ausc |
|---|---|---|---|---|---|---|
| N(mu, H^{-1}) | 0.46 | 0.72 | 0.70 | 0.99 | 1.00 | 0.50 |
| N(mu + grad * H^{-1}, H^{-1}) | 0.46 | 0.72 | 0.70 | 0.99 | 1.00 | 0.50 |
Here, grad is the average gradient over the dataset estimated from one epoch. These results are obtained from LFW, similar to the rest of the ablation studies. We will include it in the ablation table. To answer, how numerically close the gradients are to zero, we provide the min and max of the average gradient: min = -0.0008, max = 0.0008. We believe that the Table and the absolute values of the gradients suggest that it is reasonable to ignore the first-order term.
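To illustrate the scale of this correction, here is a toy diagonal-Laplace computation (the numbers are illustrative, with the gradient magnitude matching the ±0.0008 range reported above) showing why the first-order term barely moves the posterior mean:

```python
import numpy as np

# Diagonal last-layer Laplace: compare the posterior mean with and
# without the first-order (gradient) correction term.
mu = np.array([0.8, -1.2, 0.3])             # MAP parameters
h_diag = np.array([50.0, 80.0, 65.0])       # diagonal Hessian + prior precision
grad = np.array([0.0008, -0.0008, 0.0005])  # near-zero average gradient

mean_standard = mu                          # mean of N(mu, H^{-1})
mean_corrected = mu + grad / h_diag         # mean of N(mu + H^{-1} grad, H^{-1})
shift = np.abs(mean_corrected - mean_standard)
# shift is on the order of 1e-5: negligible relative to the posterior scale.
```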
> Could the authors confirm if my understanding of the overall approach is correct? (a) Using online LA to construct the posterior (b) Use samples from the posterior to construct stochastic embeddings (c) Use embedding samples to construct $\kappa$ for the von Mises-Fisher distribution, which is used as a single number to quantify uncertainty.
Your understanding is correct. However, note that the last step [fitting the vMF distribution] is optional, e.g. maybe you do not need a single measure of uncertainty but would prefer to work with the samples directly.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I maintain my accept score. | Summary: They propose a Bayesian encoder for metric learning. They learn a distribution over the network weights with the Laplace approximation. They first prove that the contrastive loss is a negative log-likelihood on the spherical space. They propose three methods that ensure a positive definitive covariance matrix. They present a novel decomposition of the Generalized Gauss-Newton approximation.
The empirical results outperform previous methods on OOD examples.
Strengths: - Well-organized paper with good writing that clearly presents the idea. Code released.
- The Bayesian approach for uncertainty measurement in metric learning is interesting.
- The method shows improved results, especially on OOD examples, and the ablation study is comprehensive.
Weaknesses: - First, the experimental results achieve only limited improvement.
- As the paper acknowledges, the method is computationally slow, as it is a Bayesian method.
- The paper only focuses on the classical contrastive loss, which is a margin-based loss. How about other cases, for example the proxy-based losses?
- Overall, the performance improvement is not significant, and the computational efficiency is not competitive compared with other uncertainty methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Could you provide a systematic analysis of the computational efficiency?
2. Proxy-anchor loss can be a good case to study with this uncertainty measurement.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes, they have addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > First the experimental results only achieved limited improvement.
**Strong UQ performance:** The reviewer is correct that the predictive performance only improves slightly upon the baselines. However, we highlight that the OOD performance (the focus of the paper) improves significantly across all 4 datasets, e.g. on LFW, Deep Ensemble, PFE, and MC dropout achieve 0.52, 0.03, and 0.03 AUROC (recall 0.5 is a random baseline), whereas LAM (post-hoc) and LAM (online) yield 0.65 and 0.71 AUROC. Across all datasets, similar large performance improvements are observed for the uncertainty metrics on both OOD and ID data.
> As the paper claimed, The method is slow in computation as it is a bayesian method.
**Computational overhead:** It is true that our approach comes with a higher computational overhead than a standard feedforward network, but it also produces more information: a useful estimate of uncertainty. Depending on the application, this trade-off can be very worth making.
At inference time, we require N forward passes, thus requiring N times more compute than deterministic methods. In practice, however, these N forward passes can easily be parallelized, such that no time overhead might be observed.
> The paper only focus on a classical contrastive loss which is a margin-based loss, how about other cases? for example, the proxy-based losses?
**Extension to other losses:** The focus of the paper is the contrastive loss, a very common loss in metric learning, which has been shown to perform on par with newer, more sophisticated losses [A Metric Learning Reality Check, Musgrave et al.]. We believe that the method is applicable to the proxy-anchor loss, and it is an interesting direction to explore in future work.
> Overall the performance improvement is not significant, and the computational efficiency is not competitive, compared with other uncertainty method.
We do not agree. See [Strong UQ performance] above.
> Questions:
> Could you provide a systematic analysis of the computational efficiency?
[see above]
> Proxy-anchor loss can be a good case to study with this uncertainty measurement.
[see above]
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the rebuttal, which addresses some of the previous concerns. After reading the other reviewers' comments, I slightly improved my rating to Borderline Accept. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive and constructive feedback. The reviewers found the problem considered “interesting” [R1] and stated that it is an “important topic for improving the robustness and mitigating the silent failure of deep neural network systems.” [R3] The paper is “well organized” [R1] and with “good writing. Clearly presenting the idea” [R1], “effectively uses figures to demonstrate ideas” [R3], providing “several analogies for various concepts introduced which makes the reader comfortable, and would be a resourceful paper for the community.” [R2] “The use of Laplace approximation [...] in the context of metric learning is novel. This includes the introduction of a probabilistic view of the contrastive loss and the utilization of the Hessian approximation based on GCN.” [R3] They found that the method is “conceptually simple” [R2], yielding “improved results, especially on OOD examples.” [R1] The experimental section provides “Extensive empirical evaluations, including a careful ablation study” [R3]; “Most design choices in the method are covered by ablations” [R2] “and ablation study is comprehensive” [R1]. “The results demonstrate that the proposed approach outperforms HIE and PFE in the considered cases.” [R3] We are excited by the reviewers' positive reception of the paper and address their questions below.
Reward-Directed Conditional Diffusion: Provable Distribution Estimation and Reward Improvement | Accept (poster) | Summary: The paper addresses conditional generation with reward-conditioned diffusion models. They propose to learn a reward function from a small subset of labeled data. The paper aims to answer an intriguing research question: "How can we reliably estimate the reward-conditioned distribution through diffusions and balance the trade-off between the reward signal and generating quality?" Additionally, the paper claims that the reward-conditioned diffusion model implicitly learns the latent subspace representation of x.
Strengths:
1) The paper is well written and the problem they try to tackle is interesting.
2) The paper provides an insightful theory for conditional distribution learned through a reward-based diffusion model
Weaknesses: 1) The paper lacks a comparison to other similar models, such as classifier-guided diffusion models.
2) The assumptions made for the theoretical work appear to be overly simplified.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Some suggestions:
1) It would be good to add what each color represents in Figure 2.
2) I think the main paper should include at least a summary of related work so that readers can have a better picture of where this work stands compared to existing works.
3) In Figure 1, in the block for the diffusion model, X_0 is represented as the noise; I think it is better to keep the same notation as in other literature to avoid confusion.
Questions
1) Regarding Figure 6, it is mentioned that those are picked examples. Does this mean the presented results are "cherry-picked" and not generalizable? Also, I think here y represents the colorfulness level (as the ground-truth reward model favors colorful images), but it seems the context also changes, not only the color. Do you have any insight on that?
2) The model seems very similar to the classifier-guided diffusion model, but it is never compared to it in the experiments. Is there any reason for that?
3) I am wondering if it is possible to train the reward function together with the diffusion model instead of pretraining the reward network.
4) The paper assumes generated data is a linear transformation of the latent z (Assumption 3.1). I am wondering if this assumption is too strong and does not hold in general.
5) Assumption 3.2 assumes a linear reward; however, the ground-truth reward in the experiments uses an ImageNet model with a randomly initialized final prediction layer. I am not sure how to interpret this part.
6) After reading the paper, I am still confused about the assumption that the model implicitly learns the latent subspace representation of x.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper did not discuss any limitations.
The paper has no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1**. The paper lacks a comparison to other similar models, such as classifier-guided diffusion models.
**A1**. Alg. 1 is not an alternative to classifier-guided diffusion. Instead, it is a simplification of it, generalized to continuous rewards and semi-supervised learning. Please refer to "2. Comparison" in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE) for a detailed comparison between Alg. 1 and guidance methods. Rather than proposing an algorithm that outperforms existing ones like classifier(-free)-guided diffusion, the purpose of Alg. 1 is to give a formal mathematical statement of the conditional diffusion procedure with rigorous mathematical guarantees.
>**Q2**. The assumptions appear to be overly simplified.
**A2**. There is a misunderstanding. Our assumptions are in fact mild and general. We allow practical encoder-decoder architectures for score matching, data with latent representations, nonlinear rewards, nonparametric reward learning, and nonparametric distributions with mild regularity conditions.
Next we discuss two of our assumptions: (1) low-dimensional linear representation that x = Az. (2) parametric/nonparametric form of the ground-truth reward and regularities of data distribution.
For (1), our theorems hold both in the case where data lies in an unknown subspace spanned by the columns of $A$ with a smaller column dimension, and in the case $A = I_D$, meaning that data is full-dimensional. Thus, our theory adapts to data distributions with an arbitrary intrinsic dimension. Our analysis also applies to nonlinear data through kernel transformations. For example, when data $x = \phi^{-1}(z)$ for a known mapping $\phi$ (for example, if we have a kernel or feature map) and latent variable $z$, we consider linear conditional diffusion on $z = \phi(x)$ and our results immediately apply with a simple transform.
For (2), in Section 3 we present in detail the theorems for a parametric setting of the reward and data distribution, and then in Section 3.3 and Appendix G an extension to a nonparametric setting that allows nonlinear rewards and general data distributions.
In the parametric setting, the reward is assumed to be linear and x is assumed to be Gaussian. As the first theory for conditional diffusion, this parametric configuration is the natural setting to study first, since it builds up the theoretical foundation and gives the most insight into other advanced settings such as logistic, kernel, and neural network (NTK) models. The nonparametric setting further gives generality to our theorems, so that they cover the practical scenario where deep ReLU networks are adopted to approximate the reward and score.
>**Q3**. It would be beneficial to add a legend in Figure 2 that explains the meaning of each color and include a summary of related work in the main paper
**A3**. Figure 2 has been updated with a legend ([preview is here](https://openreview.net/attachment?id=npWKushsuE&name=pdf)). Thanks for your suggestion. Due to the space limit, we had to defer the related work to Appendix A. We will try to move it back in the final paper.
>**Q4**. In Figure 1, the block representing the diffusion model uses X_0 to denote noise. It may be clearer to use consistent notation with other literature to avoid confusion.
**A4**. The block in Figure 1 represents the backward process of diffusion models (note the left arrow in the superscript), which is not to be confused with the often-seen forward process. Therefore, the starting point of the backward process is pure noise. This notation is also adopted by some other papers in diffusion literature such as [1].
[1] Chen et al. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. arXiv:2302.07194, 2023.
>**Q5**. Figure 6: are the examples cherry-picked? Why does the context change with colorfulness?
**A5**. The examples are typical and we did not cherry-pick. The reward was randomly constructed for generality. We will release the model checkpoints and our code upon acceptance. During the rebuttal, we also conducted a [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf) on an RL task.
>**Q6**. Is it possible to train the reward function with the diffusion model instead of relying on a separate reward network?
**A6**. In practice it is possible. In theory, we use a separate reward model trained on labeled data that is independent of the unlabeled data, for ease of theoretical analysis and to better separate the errors from reward training and diffusion training in our guarantees.
>**Q7**. Assumption 3.2 assumes a linear reward, but the ground truth reward used in the experiments involved ImageNet with a randomly initiated final prediction layer. I'm unsure how to interpret this aspect.
**A7**. Our main paper only presented results with a linear reward in detail due to the page limit, but our theory extends to nonparametric nonlinear rewards and general distributions (Section 3.3). Our synthetic experiment in Section 4.1 directly tested the linear reward, and our ImageNet experiment tested a nonlinear reward, which can be interpreted using neural tangent kernel theory. During the rebuttal, we also provided a new RL experiment where $y$ is the value of a policy and depends on $x$ via complicated nonlinear relations.
>**Q8**. I'm still unclear about the assumption that the model implicitly learns the latent subspace representation of x.
**A8**. There is a misunderstanding. We never assume “The model implicitly learns the latent subspace representation of x”. Instead, we proved it as one of our main results, i.e., (3.2) in Theorem 3.5 where the subspace angle between the ground truth $A$ and the learned one $V$ is upper bounded by $\frac{1}{\sqrt{n_1}}$, proving that diffusion model learns the latent subspace representation of $x$. The implication is that the DM is able to learn the true natural space of data and thus generate high-fidelity samples.
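To make the subspace-angle guarantee in (3.2) concrete, here is a minimal numpy sketch of the metric being bounded; `A` and `V` are hypothetical stand-ins for the ground-truth and learned matrices, not our actual training pipeline:

```python
import numpy as np

def subspace_angle(A, V):
    """Sine of the largest principal angle between the column spans of A and V."""
    # Orthonormalize both bases, then measure the residual of projecting
    # one basis onto the span of the other.
    Qa, _ = np.linalg.qr(A)
    Qv, _ = np.linalg.qr(V)
    # sin of largest principal angle = || (I - Qv Qv^T) Qa ||_2
    return np.linalg.norm(Qa - Qv @ (Qv.T @ Qa), ord=2)

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 3))
V = A + 1e-3 * rng.standard_normal((16, 3))  # a slightly perturbed "learned" basis
assert subspace_angle(A, A) < 1e-10          # identical spans -> angle 0
assert subspace_angle(A, V) < 0.05           # nearby spans -> small angle
```

Theorem 3.5 says this quantity, computed between the ground truth $A$ and the learned $V$, shrinks at rate $\frac{1}{\sqrt{n_1}}$.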
---
Rebuttal Comment 1.1:
Title: Author's Follow-up
Comment: Dear Reviewer,
We want to follow up to gently check in on your opinion of our rebuttal posted one week ago. A brief summary of our rebuttal to your concerns is:
- We compared our method to guidance methods in detail in "2. Comparison" of [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE). It is evident from the comparison that Alg. 1 and its analysis capture the essence of both classifier guidance and classifier-free guidance methods.
- "The assumptions appear to be overly simplified" is a misunderstanding. In **Q2&A2**, we gave a fine-grained breakdown of what assumptions we made and what results we include in our paper.
By the way, we added a [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf) on solving RL tasks via Alg. 1 to showcase its versatility. Please let us know if you have further suggestions. Thank you very much! | Summary: In this work, the authors explore the problem of reward-directed generation using conditional diffusion models in a semi-supervised learning setup. More specifically, they consider a dataset in which a small subset is labeled with rewards and the majority is unlabeled. Using the small labeled subset, they first train a reward approximator with regression, then use the learned reward approximator to label the unlabeled portion of the dataset with pseudo labels. Finally, they train the conditional diffusion model using the samples and their pseudo labels. The authors present theoretical explanations for conditioned diffusion models and reward improvement guarantees for reward-conditioned generations. Finally, experimental results are provided that demonstrate the findings of the theory.
Strengths: Overall Strengths:
1. This paper is very well-organized and well-written and I was able to follow along fairly easily. So, I highly commend the authors for the great job they have done. In particular, all the sections leading up to the main theory are organized well and give a nice background and trajectory to where the theory lies.
2. The high-level idea of theoretically studying how reward-conditioned distribution of diffusion models change and how to balance reward to trade-off sample quality and reward maximization is well-worth pursuing.
3. Theory is well-driven and thoroughly discussed.
Weaknesses: Overall Weaknesses:
1. One of the main questions that this paper aims to answer is “How to balance the reward signal and distribution-shift effect to have high-reward and high-quality samples?” which is what I was most excited to learn about. However, I'm not confident that I truly got an answer to this question.
2. The experimental setup is limited and the computed metrics do not thoroughly cover the theoretical findings of the papers.
3. Although figure 2 looks very nice, I’m not sure if it has any added value in the main body of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How will the results generalize to settings where data doesn’t have a latent linear representation?
2. How will the results of the current model (conditional diffusion model) compare to using an unconditional diffusion model with classifier guidance to achieve high rewards?
3. For text-to-image generation, it is said that higher reward refers to more vividly colored images. Do you have a numerical metric for this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: In addition to the points provided in the weaknesses section, I believe a deeper effort on the experiments section would be of tremendous value for this work. For me, the experimental results fall short in supporting the theory of the paper and I’m not sure if I’m convinced the full message is conveyed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments!
>**Q1**. “How to balance the reward signal and distribution-shift effect to have high-reward and high-quality samples?” Experimental setup is limited
**A1**. Our paper provides the first theory for conditional diffusion and for the use of reward-conditioned diffusion to generate better samples. There is a rich body of empirical work using CDM for reward maximization in various contexts [1].
In general, designing an explicit formula for the optimal target is difficult, as the interplay between the guidance level and the distribution shift is data-dependent and complicated. There is unlikely to be a one-size-fits-all solution, which is why theoretical research is needed. Theorem 3.6 is the first rigorous result characterizing this complicated tradeoff under very general assumptions. In practice, one can use a simple doubling trick (i.e., binary search) to tune the target value, since it is one-dimensional, at a cost of only a log factor in time.
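The doubling trick mentioned above can be sketched as follows. This is a hypothetical illustration, not our experimental code: `gen_reward(a)` is an assumed oracle returning the average realized reward when conditioning on target `a`.

```python
def tune_target(gen_reward, a0=1.0, tol=0.25, iters=30):
    """Doubling trick + binary search for a good 1-dim target value.

    gen_reward(a): oracle giving the average realized reward of samples
    generated with conditioning target a.  We double a until the realized
    reward stops tracking the target (distribution shift dominates), then
    binary-search the turning point.  Costs only O(log) oracle calls.
    """
    lo, hi = 0.0, a0
    while gen_reward(hi) >= hi - tol:   # realized reward still tracks target
        lo, hi = hi, 2 * hi             # keep doubling
    for _ in range(iters):              # binary search on [lo, hi]
        mid = (lo + hi) / 2
        if gen_reward(mid) >= mid - tol:
            lo = mid
        else:
            hi = mid
    return lo

# Toy oracle: realized reward saturates at 4 regardless of the target,
# so the largest target a with min(a, 4) >= a - 0.25 is a = 4.25.
assert abs(tune_target(lambda a: min(a, 4.0)) - 4.25) < 1e-3
```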
Our experiments are for the purpose of illustration and complement the theory. For the rebuttal, we added a [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf) on RL and showcased the use of CDM with varying target values to maximize value. We welcome any further suggestions on the experimental part, but we want to keep our focus on deep learning theory; more extensive experiments would better fit an application paper.
[1] https://arxiv.org/abs/2211.15657
>**Q2**. Although figure 2 looks very nice, I’m not sure if it has any added value in the main body of the paper.
**A2**. Thank you for the comments. Figure 2 is closely related to our method and results, but we realize that it lacks annotations and might have confused you. We have updated Figure 2 with clear annotations and explanations ([a preview is here](https://openreview.net/attachment?id=npWKushsuE&name=pdf)).
Subfigures (a) and (b) illustrate the distribution shift when extrapolating reward prediction and when extrapolating diffusion, respectively; these two are the key components in the error decomposition shown in Theorem 3.6. Subfigure (a) corresponds to the term $\mathcal{E}_1$, and Subfigure (b) depicts the on-support situation corresponding to $\mathcal{E}_2$; $\mathcal{E}_3$ occurs in the space perpendicular to the subspace in (b) and is not explicitly drawn. We updated the text annotations wrapped around Figure 2 to point out its connection to the theory. Subfigure (c) illustrates the architecture of the score-matching network we analyzed.
>**Q3**. How will the results generalize to settings where data doesn’t have a latent linear representation?
**A3**. Our theory applies to general data distributions. We show that the conditional diffusion model naturally adapts to the data distribution (full-dimensional data simply corresponds to the case where $A = I_D$, and when data lies in an unknown subspace, it corresponds to $A$ with a smaller column dimension).
Our results also apply to nonlinear data such as Fourier series or data from kernel spaces with a known basis transformation. For example, when data $x = \phi^{-1}(z)$ for a known mapping $\phi$ and latent variable $z$, we consider linear conditional diffusion on $z = \phi(x)$ and our analysis immediately applies with an additional $\phi^{-1}$ transform.
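The transform argument above can be made concrete with a short sketch. Here `phi`/`phi_inv` are a hypothetical invertible feature map, and `sample_latent` is a stand-in for any linear conditional diffusion sampler trained on $z$:

```python
import numpy as np

# Hypothetical known feature map phi and its inverse: data x = phi_inv(z).
def phi(x):      # latent z = phi(x)
    return np.cbrt(x)

def phi_inv(z):  # data x = phi_inv(z)
    return z ** 3

def sample_latent(n, rng):
    """Stand-in for a linear conditional diffusion sampler in latent space."""
    return rng.standard_normal(n)

rng = np.random.default_rng(0)
z = sample_latent(1000, rng)    # run (linear) conditional diffusion on z
x = phi_inv(z)                  # one extra transform maps samples to data space
assert np.allclose(phi(x), z)   # the known transform is exactly invertible
```

The diffusion analysis happens entirely in the latent space; the nonlinearity of the data enters only through the final, known $\phi^{-1}$ transform.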
>**Q4**. How will the results of the current model (conditional diffusion model) compare to using an unconditional diffusion model with classifier guidance to achieve high rewards?
**A4**. Our model can be viewed as a special case of classifier-free guidance. It is simpler than classifier guidance and does not need multiple classifiers $c_t$. Please see "Q2. Comparison" in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE) for a detailed discussion.
>**Q5**. For text-to-image generation, it is said that higher reward refers to more vividly colored images. Do you have a numerical metric for this?
**A5**. In our text-to-image generation experiment, a higher reward only "favors" more colorful images; that is, we observe a high correlation between large reward and colorfulness. As our target is to maximize the reward (which is not exactly the same as colorfulness), we only need to track the reward values of the generated samples. Therefore, the numerical metric can be evaluated by running samples through our ground-truth reward model, which only *correlates* with more vividly colored images.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thorough response.
After a careful re-evaluation of the work, considering the input from other reviewers and taking into account the authors' rebuttal efforts, I have decided to increase my rating. I particularly find the newly added experiment on all baselines helpful.
---
Reply to Comment 1.1.1:
Comment: Thank you for your swift and positive feedback on our rebuttal. We appreciate your effort during the review and discussion period, which really helps us improve. | Summary: This paper presents an approach to generation using diffusion models augmented with a reward function. It does so by setting up a semi-supervised learning setup, where the reward function is learned from a small set of data. The reward is then used to learn a reward-conditioned score function, which is subsequently used to generate data conditioned on a requested reward.
Assuming a linear subspace, the analysis approaches this setting through the lens of linear bandits, and characterizes the error or suboptimality in terms of regret to the target or requested reward, off-distribution error, and on-distribution error.
Experiments evaluate the above interplay, and how requesting higher rewards leads to distribution shift in the generated samples. Further experiments show how pre-trained models can be adapted to generate reward-conditioned samples.
Strengths: * Presents a practical way to add subjective rewards to generate data beyond concrete prompts like text
* Analyzes this reward conditioned generation setting and identifies the interplay between rewards and distribution shift.
* Experimentally verifies the claims made in the analysis.
Weaknesses: * The paper does not discuss a practical way to identify when the generative model starts deviating from the training distribution.
* Comparing the diffusion model's error to a linear bandit setting is interesting, but the exact connection between the two seems a bit murky. The training process does not seem to take advantage of any bandit learning algorithms.
* While it shows that the model can be adjusted to arbitrary rewards, it does not showcase a practical use case.
* The technical novelty is unclear.
=============================
### Post-Rebuttal
* The author's rebuttal and other reviews have made the technical contribution of the paper abundantly clear.
* Additionally, the connection to _offline_ bandits is more evident and does add value.
* The additional RL experiment grounds the claims made in the paper more concretely than the previous experiments.
As such, the majority of my concerns have been allayed with the author response.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Section 2.1 gives the scale of $\sigma$ but does not mention any such constraint about $f^*$. Section 4.2 constructs a random reward, but this setting also makes no mention of the scale of the reward. How does the scale of the reward affect learning?
* If someone were to try and reproduce these results, how would they go about it? Perhaps an experiment that specifically tries to maximize some specific property that is hard to communicate through a text prompt could be shown here, to communicate the effectiveness of this approach, as well as to assist reproducibility.
* The labeled data is based on the CIFAR-10 dataset. Is the unlabeled dataset from a corresponding distribution? Does it have similar resolution, size, and kinds of pictures?
* The paper states `To optimally choose a target value, we must trade off between the two counteractive effects.` Are there any practical methods to do so?
* Reinforcement learning faces the problem of reward design. The analysis done in this paper could be useful for this problem of reward design. It would be useful to deepen the discussion on reward design and reference some related work [1, 2]
### References
[1] Booth, S., Knox, W.B., Shah, J., Niekum, S., Stone, P. and Allievi, A., 2023, June. The perils of trial-and-error reward design: misdesign through overfitting and invalid task specifications. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 5, pp. 5920-5929).
[2] Knox, W.B., Allievi, A., Banzhaf, H., Schmitt, F. and Stone, P., 2023. Reward (mis) design for autonomous driving. Artificial Intelligence, 316, p.103829.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: while the proposed approach opens up avenues to communicate and generate data using feedback other than text prompts, it does not sufficiently address problems that might arise from such freeform feedback.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1**. The paper does not discuss a practical manner in identifying when the generative model starts deviating from the training distribution.
**A1**. Our focus is theory, and the implications are listed under "impact and novelty" in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE). We agree that identifying a proper target, so that generated samples deviate from the training distribution towards higher reward but not too much, is key to success in empirical applications. In practice, the guidance level is viewed as a tuning parameter, and the optimal choice is achieved by hyperparameter tuning. In theory, designing an explicit way to identify the optimal target is difficult, as the interplay between the guidance level and the distribution shift is data-dependent and complicated (see our discussion following Theorem 3.6 (Line 214)). There is unlikely to be a general rule of thumb or a one-size-fits-all way to identify it.
>**Q2**. Comparing the diffusion model's error to a linear bandit setting is interesting, but the exact connection between the two seems a bit murky. The training process does not seem to take advantage of bandit algorithms.
**A2**. Thank you for bringing up this confusion. We agree that the theoretical connection between CDM and bandit is intriguing and we are the first to observe this potential connection. A detailed discussion:
Our problem is similar to *offline* bandits in two ways: (1) both problems deal with offline data of the form {(x,y)}; (2) the goal is to find new x with improved reward value y. Given its offline nature, it does not benefit from any online bandit exploration techniques such as UCB or Thompson sampling.
Note that our theory is not about linear bandits, and we never established any form of equivalence. The statistical error of conditional diffusion consists of several parts (Theorem 3.6): $\mathcal{E}_1$ (due to reward function estimation) and $\mathcal{E}_2, \mathcal{E}_3$ (due to conditional score matching).
Among these terms, only a single term $\mathcal{E}_1$ resembles the suboptimality gap in off-policy bandit/RL (if we pick target value to be the max value). Our full analysis of reward-directed DM is much more complex beyond $\mathcal{E}_1$. In addition, our analysis is not limited to linear reward, but allows general nonparametric reward (Section 3.3).
Further, in our [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf), we tested CDM on an RL problem and obtained improved reward performance close to the best known off-policy RL benchmark. This observation is also consistent with our theory.
>**Q3**. While it shows that the model can be adjusted to arbitrary rewards, it does not showcase a practical use case.
**A3**: Our primary focus is to establish theory for CDM, especially for reward maximization. Many papers have already showcased the practical usage of CDM for generating high-reward samples in various contexts, not limited to image generation [3] but also control and RL [1,2]. We were motivated by those empirical successes and decided to go deep into the theory. To this end, we formulate the practical scenario of semi-supervised learning and provide Alg. 1 as a meta-algorithm (see "2. Comparison" in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE) for its close relation to guidance methods). Our results are not limited to a specific use case; instead, they provide insights on diverse applications.
- [1] https://arxiv.org/abs/2211.15657 (2022)
- [2] https://arxiv.org/abs/2302.01877 (2023).
- [3] https://arxiv.org/abs/2305.13301 (2023).
>**Q4**. The technical novelty is unclear.
**A4**. The theory of (conditional) diffusion models is wide open. There are very limited results providing statistical theories. To the best of our knowledge, we are the first to give theory for conditional diffusion models and connect them to bandits. In our theory, we provide a novel analysis of conditional score estimation from both a parametric and a nonparametric point of view. Moreover, in Theorem 3.6, we developed an oracle-type decomposition of the suboptimality gap into three terms, indicating the trade-off between the guidance level $a$ and distribution shift. These results are all derived using novel analysis and new techniques.
>**Q5**. Section 2.1 gives the scale of sigma but does not mention any such constraint about $f^*$ .
**A5**. In Section 2.1, the constraint on the scale of $f^*(x)$ is imposed by the assumption on $g^*$ (the on-support component of $f^*$) and the distribution of $z$ (the latent of $x$). In the nonparametric section, $f^*(x)$ has a similar constraint. The scale of the reward is reflected in the term $\mathcal{E}_1$ in Theorem 3.6, which grows linearly with the scale of $f^*$. In practice, it is common to normalize the reward by preprocessing data before training so that the scale does not influence learning.
In both of our experiments, the scales of the reward function fall into normal ranges. Please refer to Figure 2 (Left, Right) for the training distributions of the rewards. In the new RL experiment (see general response above), the rewards are normalized to lie in $[0,1]$.
>**Q6**. Reproducibility of experiments.
**A6**. Our main results are Theorems 3.5 and 3.6 and their extensions to the nonparametric setting in Section 3.3, with proofs fully fleshed out in the appendix and easily reproducible. While the experiments are only for illustration, we will release code and demos with the final paper. Any user can supply a reward model oracle and directly use our code to generate high-reward data.
>**Q7**. The labeled data is based on the CIFAR-10 dataset. Is the unlabeled dataset from a corresponding distribution? Does it have similar resolution, size, and kinds of pictures?
**A7**. The unlabeled dataset is LAION-5B. The LAION-5B dataset has various image resolutions and sizes and covers various scenes. Both datasets correspond to natural images.
---
Rebuttal Comment 1.1:
Title: Thank you for the Response
Comment: I thank the authors for the detailed response.
The confusion about the connection to linear bandits likely arose from this sentence in the paper on line 67:
```
In the case of a linear reward model, we show that the regret mimics the off-policy regret of linear bandits with full knowledge of the subspace feature.
```
I agree that in the offline setting, typical bandit algorithm approaches are not applicable.
Additionally, my main concern about technical novelty has been allayed. The experiment in the RL domain has especially helped to ground the idea in a concrete problem.
I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt feedback on our rebuttal. The sentence you pointed out does need more clarity to avoid confusion. We will modify it based on the facts in **A2** to
```
Given a target reward value, we analyze the statistical error of reward-directed generation, measured by the difference between the target value and the average reward of the generated population. In the case of a linear reward model, we show that this error includes the suboptimality gap of linear off-policy bandits with full knowledge of the subspace feature, if taking the target to be the maximum possible.
```
In addition, thanks for your especially positive comment on our new experiment. Your rating flipping from negative to positive is very encouraging and reassuring to us :) | Summary: The paper addresses the problem of conditional generation with diffusion models in a semi-supervised setting, where the conditional generation is guided by a regressor learned on the small labeled subset. This is referred to as reward-directed conditional diffusion. Assuming the inputs have a latent linear representation, it is shown that reward-conditioned diffusion models implicitly learn this latent representation. Further, assuming a linear reward model is used, it is shown that reward-conditional generation can be viewed as off-policy bandit learning in the latent feature space. The theory is also extended to nonparametric reward and score functions. Experiments on synthetic and text-to-image data support the theory.
Strengths: Steering generative models (especially diffusion models) towards generating samples with desirable properties is a topic of wide interest. The semi-supervised setting is a particularly important special case, which comes up in many real-world applications. As the authors note, there is not much theoretical work in this space yet, making this a valuable contribution.
I found the theoretical analysis to be quite insightful. I particularly like the analysis regarding the trade-off between distribution shift and reward maximization. The simulation experiment seems to align well with the theory. I also appreciate that the theoretical analysis extends to more general reward and score functions.
Weaknesses: I found some parts of the paper a bit difficult to read. First, the motivation and derivation of the score network architecture is unclear without reading the reference ([8]). For example, it is not obvious that this functional form follows from the linearity assumption of x = Az. Second, a lot of notation is not introduced, e.g. $k$ in Eq. 2.3, $x_\parallel$ and $x_\perp$ in Assumption 3.2. This makes some equations difficult to understand. Third, it does not become clear until halfway through the paper why the pseudo-labeller is called a reward function. Perhaps a forward reference would help to clarify this.
I am also not sure what the practical implications are. I could have (qualitatively) predicted the results for text-to-image generation without the theory, simply because you are asking the model to extrapolate beyond the training data. The quantitative results also only make qualitative statements.
Apart from that, I found the experimental setup of using a random reward model a bit strange.
Minor comments:
- Given the similarity of Fig. 2c with Fig 2 in [8], citing [8] is probably warranted ("Figure adapted from [8]")
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It is not clear to me why Algorithm 1 (especially Eqs 2.2 and 2.3) is necessary. Why couldn't you use either classifier-free or classifier-based guidance? Why does Algorithm 1 offer "theoretical cleanness"?
2. I do not understand why Assumption 3.2 is required. To me, it feels more like a user choice to penalize off-support data in the reward as this should also be reflected in the unconditional score (recall that $\nabla_x \log p(x|y) = \nabla_x \log p(y|x) + \nabla_x \log p(x)$). Could you clarify this?
3. In the experiments, what is the reward distribution of the training data? At which reward values are you asking the model to extrapolate?
Minor comments:
- In the problem setup section, why is $1 > \sigma$ necessary?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have clearly laid out the assumptions for the theoretical analysis. I do not see any major limitations within this scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful and insightful review. We’ve revised our paper to clarify the notations and added more explanations around the score network according to your suggestions.
>**Q1**. What are the practical implications. I could have (qualitatively) predicted the results for text-to-image generation without the theory.
**A1**. Our theory is not limited to understanding image generation, but also covers applying diffusion to RL and control tasks (see [1,2] and our [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf)). You are absolutely right that there is an extrapolation from the training data happening, but there are more questions that we answered with our theorems.
Implications of theory:
- Fidelity of generated samples: conditional score matching network with architecture (2.8) accurately recovers any latent subspace structure in data.
- Trade-off between a high target reward $a$ and a large distribution shift, and a comprehensive exposition of the distribution shift associated with $a$.
- Low-dimensionality dependency: the error coming from reward learning only depends on the subspace dimension $d$; we identify a relation to offline bandit/RL.
- Nonparametric results beyond linear settings
Please refer to the "impact and novelty" section in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE) for a more detailed exposition on this question.
>**Q2**. the experimental setup of using a random reward model a bit strange.
**A2**: Our paper focuses on general reward-guided diffusion rather than the specific application of images, so we use a random function as the reward for generality. In particular, in the image example, we constructed the reward by placing a random linear projection layer on top of an ImageNet pre-trained ResNet model. This reflects real-world applications because (1) in practice, rewards may be unknown and require expensive data collection processes; and (2) pre-trained ResNet models are believed to extract useful, semantically meaningful representations of images, so it is very likely a real-world reward is a simple (MLP) function of these representations. In addition, we provided a [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf) on RL where the reward is predefined in the environment but has to be learned by jointly modeling the trajectories and the final rewards using a conditional diffusion model.
>**Q3**. Why Algorithm 1 (especially Eqs 2.2 and 2.3) is necessary. Why couldn't you use either classifier-free or classifier-based guidance? Why does Algorithm 1 offer "theoretical cleanness"?
**A3**: Alg. 1 does offer cleanness to our analysis. To clarify, classifier and classifier-free guidance can be formulated with exactly the same backward sampling as (2.3), by plugging in a similar but different score from what is learned by (2.2). Comparison [link] discusses the relation between Alg. 1 and guidance methods. Alg. 1 captures the essence of both classifier guidance and classifier-free guidance, i.e., estimating the conditional score $\nabla \log p_t (x_t | y)$, with the ease of putting aside other less essential components, such as the multiple classifiers in classifier guidance and the mix of conditioned and unconditioned scores in classifier-free guidance.
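To illustrate the backward-sampling structure shared with (2.3), here is a toy sketch where the plugged-in conditional score is exact for a one-dimensional Gaussian $p(x_0|y) = N(y, 1)$ under an Ornstein-Uhlenbeck forward process. This assumed toy target is for illustration only and is not our experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
y, T, steps, n = 3.0, 5.0, 500, 20000
dt = T / steps

# Forward OU process: x_t = e^{-t} x_0 + N(0, 1 - e^{-2t}).
# With p(x_0|y) = N(y, 1), the marginal is p_t(x|y) = N(e^{-t} y, 1),
# so the exact conditional score is s(x, t | y) = -(x - e^{-t} y).
def score(x, t, y):
    return -(x - np.exp(-t) * y)

x = rng.standard_normal(n)  # start from (near-)stationary pure noise
t = T
for _ in range(steps):      # Euler-Maruyama discretization of the reverse SDE
    x = x + (x + 2 * score(x, t, y)) * dt + np.sqrt(2 * dt) * rng.standard_normal(n)
    t -= dt

# The generated population concentrates around the conditioning value y.
assert abs(x.mean() - y) < 0.1
assert abs(x.std() - 1.0) < 0.1
```

Guidance methods differ only in how the score plugged into this loop is assembled; the sampling recursion itself is the same.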
>**Q4**. Why Assumption 3.2 is required. To me, it feels more like a user choice to penalize off-support data in the reward as this should also be reflected in the unconditional score.
**A4**. Assumption 3.2 configures the ground-truth reward via its $g$ and $h$ components. Linearity in $g$ is required for bounding the term $\mathcal{E}_1$ in Theorem 3.6. As for the $h$ component, we make it explicit, as otherwise the reward could be ill-defined. We agree that $h$ penalizes off-support data in the reward, and it can be problem-dependent. Our result (Theorem 3.6) still holds for any choice of $h$, depending on the problem's nature or user choice.
Yes, the off-support portion of $x$ will be reflected in its score $\nabla_x \log p(x|y)$. If the conditional diffusion model learns the score $\nabla_x \log p(x|y)$ well, then generated data from it should have little off-support component, which is what we proved in Theorem 3.5 by upper bounding $x^{\perp}$, showing that $x$ has high fidelity to the support. The ground-truth reward in Assumption 3.2 models a combination of data utility and fidelity.
>**Q5**. In the experiments, what is the reward distribution of the training data? At which reward values are you asking the model to extrapolate?
**A5**: In our synthetic experiment (Section 4.1), the unconditioned reward distribution follows the standard Gaussian distribution $N(0, 1)$. When we set target reward value $\ge 3$, this falls outside the $3\sigma$ region of the Gaussian distribution, and we view any reward level beyond 3 as extrapolation. The visualization of the reward distribution can be found in Figure 2 (Left) in our supplementary pdf file.
In our image generation experiment (Section 4.2), the mean and standard deviation of the reward of the training data are mean = -0.9113 and std = 0.4130.
When we set the target reward value to $0.4$ (outside the $3\sigma$ range), we can view it as asking the model to extrapolate. The visualization of the reward distribution can be found in Figure 2 (Right) in our supplementary pdf file.
In our new RL experiment, the mean and standard deviation of the reward are mean = 294.8 and std = 91.2. (Note: this is an offline dataset generated by human experts, so it is higher than all reported offline RL algorithms.) The visualization of the reward distribution can be found in Figure 1 (Right) in our supplementary pdf file.
>**Q6**. In the problem setup section, why is $\sigma < 1$ necessary?
**A6**: We let $\sigma<1$ for simplicity. It is only a scaling constant. If $\sigma>1$, Theorem 3.6 would still hold and the term $\mathcal{E}_1$ will linearly scale up with $\sigma$.
---
Rebuttal Comment 1.1:
Title: Locating Figures Mentioned in Q5 & A5
Comment: We want to remind you that **Figure 1** and **Figure 2** mentioned in our response **A5** are located in our [new experiment page](https://openreview.net/attachment?id=npWKushsuE&name=pdf), in case there's any confusion on where to find them.
---
Reply to Comment 1.1.1:
Title: Authors' Follow-up
Comment: Dear Reviewer,
We want to follow up to gently check in on your thoughts about our rebuttal posted one week ago. To give a quick recap, we explained in detail in [our rebuttal](https://openreview.net/forum?id=58HwnnEdtF&noteId=npWKushsuE) 1. the implications of our theory, and 2. the comparison to guidance methods. In addition, we added a [new experiment](https://openreview.net/attachment?id=npWKushsuE&name=pdf) on solving an RL task via Alg. 1 to showcase its versatility.
Please let us know if we have addressed your concern and if you have further suggestions. Thank you very much! | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for valuable comments!
**Q1 Impact and novelty of theory**
**A1:** Conditional diffusion models (CDMs) have emerged as powerful generative models with diverse applications from image generation to control and RL [1, 2, 3]. In sharp contrast to abundant empirical successes, the theoretical understanding of diffusion is limited, let alone a theory of conditional diffusion. Our paper establishes the *first finite-sample statistical theory for training CDMs* and *the first provable efficiency guarantee for CDMs applied to reward optimization*. We assume a practical encoder-decoder architecture for score matching and provide theoretical results that apply to both linear and nonparametric nonlinear rewards (Theorems 3.5, 3.6) and to general distributions under mild regularity assumptions (Section 3.3), covering the use of ReLU networks for training.
We summarize implications of our theory:
*a. Fidelity of generated samples:* Theorem 3.5 proves that the conditional score matching network with architecture (2.8) accurately recovers any latent subspace structure in the data.
*b. Trade-off between high target reward $a$ and large distribution-shift:* Theorem 3.6 shows that the average reward of generated samples is lower bounded by $a - Error(a)$. $Error(a)$ further decomposes into distribution shift penalties $\mathcal{E}_1$ (due to reward function estimation) and $\mathcal{E}_2$ (due to conditional score matching). It provides a comprehensive exposition of the interplay between reward signal and distribution-dependent factors.
*c. Low-dimensionality dependency:* $\mathcal{E}_1$ depends only on the subspace dimension $d$, free of the ambient dimension $D$. Moreover, $\mathcal{E}_1$ turns out to be the distribution-shift term commonly seen in offline bandits/RL; whereas those methods require prior knowledge of the latent space, our conditional diffusion model learns it automatically.
*d. Beyond linear settings:* We extend to nonparametric settings in Section 3.3, allowing general distributions and nonlinear rewards.
**Q2 Comparison between Alg.1, diffusion with classifier guidance or classifier free guidance**
**A2:** Alg. 1 is not an alternative to classifier(-free) guidance; rather, it is a simplification and generalization of both for theoretical purposes. We will add the following extended remark on this point to our final paper:
>Our work draws motivation from classifier-guided diffusion, which was originally designed for generation conditioned on discrete labels. We generalize it, with provable guarantees, to continuous rewards by formulating Alg. 1. Our analysis provides theoretical justification for reward-conditioned diffusion in control and RL [2, 3], not limited to classification.
>Notably, we provide the first theoretical guarantee for general conditional diffusion models. We show that a CDM with the score network in (2.8) (Figure 2(c)) guarantees generation fidelity (Theorem 3.5) for general light-tailed distributions under mild assumptions on score regularity.
>*Alg. 1 and its analysis capture the essence of both classifier guidance and classifier-free guidance.* From a mathematical point of view, classifier guidance and classifier-free guidance share the same foundational objective, which is also the objective used in Alg. 1 — to estimate the conditional score $\nabla \log p_t (x_t | y)$. Accordingly, the foundational theoretical question is the same: understanding score matching and finite-sample distribution approximation; see Lemma E.1 in the Appendix. Thus, one can view Alg. 1 as a meta-algorithm and a simplification of these two guidance methods for elegance of mathematical analysis. We provide a more detailed discussion below:
>By the Bayes’ rule, it holds that
$$\nabla \log p_t\left(x_t \mid y\right)=\nabla \log p_t\left(x_t\right)+\nabla \log c_t\left(y \mid x_t\right). \qquad (\star)$$ Recall that Alg.1 directly learns the conditional score $\nabla \log p_t\left(x_t \mid y\right)$ on the LHS of ($\star$). The score network is trained on $(x, \hat{f}(x))$ pseudo-labelled by reward prediction $ \hat{f}$. Next we discuss its relation to classifier guidance and classifier-free guidance:
> + Classifier guidance [1] focuses on the RHS of ($\star$) and estimates $\nabla \log p_t\left(x_t\right)$ and $\nabla \log c_t\left(y \mid x_t\right)$ separately. Here $c_t$ is trained to take in noisy input $x_t$ and the time index $t$ to predict the label $y$. In contrast, Alg. 1 avoids the hassle of training a classifier/reward model with noisy inputs and additional input dimension $t$, and also offers theoretical cleanness.
> + Classifier-free guidance learns the same conditional score $\nabla \log p_t\left(x_t \mid y\right)$ as Alg. 1 does, and uses a linear combination of conditioned and unconditioned scores for inference: $$\widetilde{s}_\theta(x, y, t)=(1+\eta) \widehat{s}(x, y, t)-\eta \widehat{s}(x, \emptyset, t).$$ When $\eta = 0$, classifier-free guidance reduces exactly to the conditional score used in Alg. 1.
>We are not aware of any existing theory for classifier guidance or classifier-free guidance. Given that Alg. 1 resembles yet simplifies both of them while keeping their essential spirit, we believe one can extrapolate from our theory to better understand these two methods.
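To make the relation concrete, here is a toy numerical sketch of the classifier-free guidance mix and how it relates to the plain conditional score learned in Alg. 1. The score function `s_hat` is a hypothetical stand-in (the score of a Gaussian centered at the condition), for illustration only:

```python
import numpy as np

# Toy stand-in for a learned score network (illustration only, not the
# paper's model): s_hat(x, y, t) approximates the conditional score
# grad log p_t(x_t | y); passing y=None gives the unconditional score.
def s_hat(x, y, t):
    shift = 0.0 if y is None else y
    return -(x - shift)  # score of a unit Gaussian centered at the condition

def cfg_score(x, y, t, eta):
    """Classifier-free guidance mix: (1 + eta) s(x, y, t) - eta s(x, None, t)."""
    return (1 + eta) * s_hat(x, y, t) - eta * s_hat(x, None, t)

x = np.array([0.5, -1.0])
# With eta = 0 the mix reduces to the plain conditional score, which is
# exactly what Alg. 1 learns and plugs into backward sampling (2.3).
assert np.allclose(cfg_score(x, 2.0, t=0.1, eta=0.0), s_hat(x, 2.0, 0.1))
```

Larger `eta` amplifies the conditional direction relative to the unconditional one; `eta = 0` recovers Alg. 1's sampler.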
**Q3 New Experiments on RL**
**A3:** To verify our theory beyond the synthetic and text-to-image experiments (Sections 4.1, 4.2), we conducted a new experiment using conditional diffusion for offline RL, following [2]. See the attached pdf for details.
Further, we were able to match the best-known benchmarks in offline RL. The empirical observations match our theory.
[1] "Classifier-free diffusion guidance." arXiv preprint arXiv:2207.12598 (2022).
[2] "Is conditional generative modeling all you need for decision-making?." arXiv preprint arXiv:2211.15657 (2022).(ICLR 2023 Oral)
[3] "Adaptdiffuser: Diffusion models as adaptive self-evolving planners." arXiv preprint arXiv:2302.01877 (2023).
Pdf: /pdf/121cd07efa580925101d6d4e26758fc514445a1d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Language Models are Weak Learners | Accept (poster) | Summary: This paper explores an interesting problem: how to apply and extend LLMs to tabular supervised learning tasks. The paper first describes each tabular sample as text, and then resorts to an LLM to generate a summary for a set of selected representative samples as a template, which can be viewed as a weak classifier. Finally, by integrating these induced weak classifiers through a boosting learning paradigm, a stronger boosted classifier is constructed. Experiments were conducted on several tabular classification datasets to demonstrate the effectiveness of this learning procedure.
This procedure makes full use of the zero-shot/few-shot learning ability of LLMs and extends this ability to the few-shot learning setting over tabular data.
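The boosting paradigm described here can be sketched generically. In this sketch, `fit_weak(X, y, w) -> predictor` is a hypothetical interface standing in for the paper's LLM-summary weak learner (illustration only; the toy `stump` learner below is not the paper's method):

```python
import numpy as np

def adaboost(X, y, fit_weak, rounds=10):
    """Generic AdaBoost loop: any weak learner that is slightly better than
    chance on a reweighted distribution gets combined into a strong one."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    hs, alphas = [], []
    for _ in range(rounds):
        h = fit_weak(X, y, w)
        err = float(np.sum(w * (h(X) != y)))
        if err >= 0.5:  # no longer a weak learner on this distribution
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w = w * np.exp(-alpha * y * h(X))  # upweight the mistakes
        w /= w.sum()
        hs.append(h)
        alphas.append(alpha)
    return lambda X_: np.sign(sum(a * h(X_) for a, h in zip(alphas, hs)))

# A toy weak learner: the best single-feature threshold stump.
def stump(X, y, w):
    best_err, best = np.inf, None
    for thr in X[:, 0]:
        for sgn in (1, -1):
            err = np.sum(w * (sgn * np.sign(X[:, 0] - thr + 1e-9) != y))
            if err < best_err:
                best_err, best = err, (thr, sgn)
    thr, sgn = best
    return lambda X_: sgn * np.sign(X_[:, 0] - thr + 1e-9)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
strong = adaboost(X, y, stump, rounds=3)
assert (strong(X) == y).all()
```

The paper's contribution is, roughly, to plug an LLM-generated summary classifier into the `fit_weak` slot of such a loop.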
Strengths: 1.This paper extends the application scope of LLMs to traditional tabular data and sheds light on integrating LLMs to many real world machine learning systems.
2.The paper gives some effective guidelines to transform tabular samples into text descriptions based on LLMs and metadata automatically, with minimal manual engineering.
3.Extensive experiments are conducted to demonstrate some important impact factors on the classification performance.
Weaknesses: 1.The current method seems not easy to apply to high-dimensional tabular data (where we might suffer from insufficient training data and from the limited sequence length handled by LLMs) or when there are many irrelevant features (which might introduce noise into the text description).
2.There are generally three types of features in traditional tabular data: quantitative, ordinal and categorical. The paper talks mostly about numerical features. It would be better to give a systematic discussion of how to generate text descriptions for these three different types while considering the metadata, or to give guidelines for generating the prompt patterns over LLMs to get the text descriptions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.In Algorithm 1, the proposed cluster sampling is based on GPT embeddings to group the data. How good is the clustering performance, e.g., in terms of inter-/intra-cluster distance metrics? Or will such a clustering method group samples with different class labels together? Can we also make use of some encoder-based LLMs (such as BERT, or BigBird for long sequences) to perform such clustering?
2.How can the metadata of a feature be used to generate the text description of categorical feature values? And how can the summarization prompt pattern be generated based on metadata? Does this pattern depend on the property of the learning target, for example, binary classification vs. multi-class classification?
3.Can we make use of the results of a tree-based method (such as XGBoost) to guide the preprocessing of continuous features? That is, use the same binning method as XGBoost and remap each bin to a meaningful text phrase based on the feature's metadata? This is because the performance of the proposed method heavily depends on the discretization method.
One benefit of XGBoost is that it is able to perform feature selection when generating sub-trees. Irrelevant features of course have a negative impact on clustering and summarization. This might also affect the conclusion that more examples do not mean better performance in "How does the performance scale with more examples?"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1.The proposed method might be difficult to apply directly to high-dimensional tabular data.
2.The tabular data might be derived from different domains, such as biology. Maybe we need domain-specific fine-tuned LLMs or pretrained LMs to perform clustering or summarization to get optimal performance. Therefore, some conclusions, especially from the ablation study, might be LLM-dependent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful for your positive review of our work! We are happy that you quote our experiments as extensive and as demonstrating the important aspects of our method. Please find our responses to your comments as follows.
> 1. The current method seems not easy to apply on high-dimensional tabular data (for high-dimensional data, we might suffer from insufficient training data, and also from the limited text sequence length handled by LLMs).
Thank you for the comments! In general, high-dimensional data usually causes problems for machine learning due to the curse of dimensionality. So, for very high-dimensional data, we could generally expect there to be low-dimensional structure in the data. In our case, one could perhaps first summarize the individual data description to get a shorter data description. As we have explained in Appendix A.1, our data-to-text conversion pipeline tries to obtain a textual description of 80 words or less, which maintains a uniform length among all the descriptions.
> 2. There are generally three types of features in tabular data: quantitative, ordinal and categorical. It is better to discuss/give guidelines on how to generate text descriptions over these three different types while considering the meta-data.
Currently, we encode only the quantitative features separately and we rely on the LLM to convert ordinal / categorical data appropriately. This is because we believe that the LLM should be able to infer what these types of data represent from the context and meta data.
However, we could potentially prompt the model to be aware of these ordinal/categorical features explicitly. We do not investigate this direction in the current work, but we will add this for future work.
**Questions:**
> 1. In Algorithm 1, the proposed Clustering sampling is based on GPT embedding to group the data. How does this group samples with different class labels? What are distance metrics used? Can we also use some encoder based LLMs (such as BERT, BigBird for long sequence) for clustering?
Specifications of our sampling are explained in Appendix A.7. The clustering algorithm used is Agglomerative Hierarchical Clustering (AGNES) from sklearn, with cosine distance metric, average linkage and a distance threshold of 0.05.
Algorithm 2 presents our stratified cluster sampling approach. This method creates separate clusters for different class labels and independently samples from them in a stratified fashion. Specifically, for each class, we obtain samples uniformly over its respective clusters. The stratification is performed at the class-level, ensuring that the proportion of class labels in the obtained samples remains the same as in the original training dataset.
Yes, we can also use other language embedding techniques such as BERT for getting these embeddings.
> 2. How to make use of meta-data of the feature to generate the text description of categorical feature values? And how to generate the summarization prompt pattern based on meta-data? Dose this pattern depend on the property of learning target? For example, binary classification vs multi-class classification.
We are not sure if we understand the question properly, but the meta-data is generally for the whole dataset. It is integrated into the prompts for data conversion and summarization, providing the LLM with a contextual understanding of the task (refer to Figures 1 and 2). Further explanation can be found in Appendix A.1-2.
It's important to note that the inclusion of meta-data is independent of the target labels. Prompts for all datasets are detailed in Table 3. The summarization process is solely guided by the summarization directive, whether it's "Tl;dr" (Too long; didn't read) or "Summarize in detail." At inference time, the prompt does mention the target labels.
> 3. Can we use XGBoost to guide the preprocessing of continuous features? That is, apply the same bin method with XGBoost and remap each bin to a meaningful text phrase based on the feature’s meta-data? The irrelevant features have a negative impact on clustering and summarization. This might be also affecting the conclusion that more examples do not means better performance in "How does the performance scale with more examples?"
We thank the reviewer for this suggestion. This should be a promising experiment to try. Since trees created inside XGBoost have different feature splits in each boosting round, it might be difficult to formulate how they can be effectively combined to create a general discretization method for every feature. We leave this for future work.
On the flip side, as we explore ways to improve the processing of numerical values, this problem itself might go away as newer LLMs may become better at quantitative reasoning.
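As a concrete illustration of the kind of discretization being discussed, here is a hypothetical quantile-binning helper that maps a continuous feature to descriptive phrases (a sketch only, not the binning method actually used in the paper or in XGBoost):

```python
import numpy as np

# Hypothetical quantile binning of a continuous feature into descriptive
# phrases (illustration only, not the paper's exact pipeline).
def to_phrases(values, labels=("low", "medium", "high")):
    edges = np.quantile(values, np.linspace(0, 1, len(labels) + 1))
    bins = np.clip(np.searchsorted(edges, values, side="right") - 1,
                   0, len(labels) - 1)
    return [labels[b] for b in bins]

ages = np.array([22, 35, 47, 58, 63, 71])
print(to_phrases(ages))  # → ['low', 'low', 'medium', 'medium', 'high', 'high']
```

The phrases could then be substituted into the textual sample descriptions, using the feature's meta-data to pick meaningful label words.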
We further seek one clarification from the reviewer, to understand what they meant by: "This might also affect the conclusion that more examples do not mean better performance in 'How does the performance scale with more examples?'"
**Limitations:**
> 1.The proposed method might difficult to be directly applied to high-dimension tabular data.
We refer to our explanations addressed in the previous comments.
> 2. The tabular data might be derived from different domains, such as biological? Can we use domain-specific fine-tuned LLMs or pretrained LM to perform clustering or summary ?
The reviewer’s observation may be correct, since tabular datasets can represent different domains. Indeed, we expect domain-specific LLMs to perform better; however, we don't think this is a limitation per se. For example, in scenarios such as medical diagnosis, it can be more prudent to use an LLM pretrained on medical data, which will achieve better results. A medical-specific LLM might perform better on datasets such as verterbra-column, breast-cancer, caesarian, blood-transfusion-center, and haberman-survival. This observation should hold for the LLM embeddings used in clustering as well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation and clarification of my questions. The response answers most of them, and I keep my positive rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal and engaging in discussion! | Summary: This paper explores the concept of weak learners, which are classifiers that achieve slightly better than random performance on any given data distribution. The paper demonstrates the effective utilization of large language models (LLMs) as weak learners. The study focuses on applying a large language model to tabular data using a boosting algorithm. By providing properly sampled text descriptions of tabular data samples according to the target distribution, LLMs can generate a summary or template for classification, serving as a weak learner for the task. The paper incorporates these models into a boosting approach, which, in certain cases, outperforms traditional tree-based boosting methods by leveraging the knowledge within the LLMs. The experimental results indicate that the proposed method outperforms both few-shot learning and complex fine-tuning procedures, particularly when dealing with a limited number of data points. These findings highlight the potential of prompt-based LLMs not only as few-shot learners themselves but also as components of larger machine learning pipelines. Overall, the paper showcases the effectiveness of prompt-based LLMs as weak learners in boosting algorithms for tabular data, offering insights into how they can improve classification performance, particularly in situations with scarce data availability.
Strengths:
1. The paper successfully brings together the concept of weak learners in boosting algorithms with the advancements in large language models (LLMs), creating a novel approach for utilizing LLMs as weak learners in tabular data classification.
2. The paper introduces a unique approach by converting tabular data into text form and using LLMs to generate summaries or prompts. This methodology allows the LLM-generated prompts to serve as effective templates for tabular data classification without the need for retraining or fine-tuning the LLM itself.
3. Through comprehensive evaluations, the paper demonstrates that the proposed approach outperforms alternative techniques such as zero-shot and few-shot learning. It also showcases the approach's superiority over traditional tree-based boosting and LLM-based fine-tuning methods, particularly in domains with limited examples. This performance advantage highlights the potential of LLMs as weak learners in boosting frameworks.
Overall the paper is well-written and paves a way to utilize LLMs in boosting.
Weaknesses:
1. The paper focuses specifically on tabular data classification, which may restrict the generalizability of the proposed approach to other types of data or domains. It would be valuable to explore the performance and applicability of LLM-based weak learners in a wider range of datasets and tasks.
2. Although LLMs have shown impressive performance in various natural language domains, they still have inherent limitations, such as sensitivity to input phrasing and potential biases in the training data. The paper would benefit from discussing and addressing these limitations to provide a more balanced perspective on the capabilities and potential drawbacks of LLM-based weak learners.
What was the rationale for choosing these datasets? Are there some data settings that are favorable to summary boosting and, similarly, settings that are not good for the proposed method?
3. Minor comments/questions,
a) Please make fonts bold (or use color) in the tables for the best numbers in each row.
b) Are the results reproducible? What is the cost of running the experiments?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review of our manuscript! We appreciate your recognition of our work as novel and paving a way to utilize LLMs in boosting.
**Weaknesses:**
> 1. The paper focuses specifically on tabular data classification, which may restrict the generalizability of the proposed approach to other types of data or domains. It would be valuable to explore the performance and applicability of LLM-based weak learners in a wider range of datasets and tasks.
Thank you for your suggestions! The primary goal of this paper is to test whether LLMs can be incorporated into larger ML systems like boosting. Since boosting is primarily used for tabular data, we decided to focus on tabular data. Indeed, in principle, the method can be applied to any other data in text form, such as GLUE and SQuAD, and we plan to explore this in the future.
> 2. The paper would benefit from discussing the limitations in LLMs such as sensitivity to input phrasing and biases in training data to provide a more balanced perspective on the capabilities and potential drawbacks of LLM-based weak learners.
Indeed, LLMs exhibit sensitivity to the way prompts are presented, which elicits different behaviors. Although our core methodology involves using LLMs to generate prompts (both summary and inference), it still requires a modest amount of manual engineering. We have elaborately discussed failure modes in the creation of these LLM prompts in Appendix A.1-3. Moreover, Table 3 lists all the prompts used in this paper by dataset.
The reviewer is correct in pointing out that any biases in the dataset will also be reflected in the summary produced by the LLM, and this is true of ML models in general.
Yes, our method inherits the biases of the LLM that come from its pretraining data. Specifically, they can affect the model's ability to objectively summarize the examples, or lead it to make predictions that are skewed by the biased pretraining data. We believe that with better, debiased LLMs these issues can be alleviated. We will update the paper to emphasize this aspect.
> 3. What was the rationale for choosing these datasets? Are there some data settings that are favorable to summary boosting and similarly the settings that are not good for the proposed method?
These datasets were picked to be diverse in the number of features, the proportions of different feature types (continuous, categorical), and the dataset size, so we believe we have covered some of the most common settings. These are also the datasets commonly used in related papers, including TabPFN [1] and LIFT [2]. We haven’t tested on datasets with larger numbers of data points because the context size of current LLMs is limited.
In Section 4.2 we have mentioned that our method generally works well for small tabular data without many continuous features. It especially works well when the task benefits from background knowledge where the LLM pretraining is helpful, such as *caesarian, somerville-happiness-survey, haberman-survival* and *TA evaluations*.
When the dataset is large, this prior knowledge might become less relevant, so methods like finetuning become more competitive. Summary boosting is also less useful on datasets with many continuous variables, such as wine, iris, glass, and vehicle, even though we encode these numerical values as descriptive attributes. Quantitative reasoning is inherently a problem for LLMs, but it is increasingly being solved in newer models like GPT-4 and Minerva.
[1] N. Hollmann, S. Müller, K. Eggensperger, and F. Hutter. Tabpfn: A transformer that solves small tabular classification problems in a second. arXiv preprint arXiv:2207.01848, 2022.
[2] T. Dinh, Y. Zeng, R. Zhang, Z. Lin, S. Rajput, M. Gira, J.-y. Sohn, D. Papailiopoulos, and K. Lee. Lift: Language-interfaced fine-tuning for non-language machine learning tasks. arXiv preprint arXiv:2206.06565, 2022.
**Minor comments/questions:**
> a) Please make fonts bold (or use color) in the tables for the best numbers in each row.
We have posted a PDF of the table with the best-performing results highlighted in bold. We refer to our overall response note to all reviewers.
> b) Are the results reproducible? What is the cost of running the experiments?
Yes, our results are reproducible, and we have included code and instructions to replicate our experiments. However, there has been evidence that the abilities of the models exposed through OpenAI APIs are changing, which is outside of our control.
The cost of running the experiments is discussed in Appendix A.11.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I appreciate the authors' response and would like to share some lingering concerns regarding the utilization of OpenAI APIs for scientific research, primarily centered around issues of reproducibility, ethics, and cost implications. It might be worth considering an alternative title such as "Examining OpenAI GPT-3 as a Weak Learner" unless the scope of the study encompasses a broader range of language models, especially those that are openly accessible.
While I find the concepts presented in the paper intriguing, I do have reservations about the evaluation methodology due to its reliance on a commercial API, which could potentially evolve over time. My intention is not to undermine the value of the paper's ideas, but rather to emphasize the importance of a robust evaluation process that stands up to scrutiny and remains valid regardless of potential changes in API availability.
From a broader perspective, I am concerned about the overreliance on OpenAI APIs within machine learning research. Paying money to OpenAI to get better numerical results, and associating acceptance solely with better numerical results, can inadvertently limit the overall progress of the field.
Personally, I would find the paper's concepts more compelling if the evaluation encompassed open language models that provide full transparency regarding their details and weights. My focus lies more on the soundness and reproducibility of the evaluation rather than pursuing impressive numerical results.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and for engaging in discussion! We agree that the changing API is an important issue that needs to be addressed. **We will add a more detailed discussion of this to our paper to emphasize the implications and limitations of using a commercial API**. On the other hand, as far as we are aware, the changes only affected ChatGPT rather than the models accessed through the API, so our results should not be affected. You should be able to reproduce the results with the code we provided.
The reason why we chose OpenAI’s API access is not for numerical results. As you can see, the work is exploratory in nature and we do not actually always outperform existing methods. The actual reason is that using API lets us test the capabilities of the models at a reasonable efficiency even in an academic setting, as doing the boosting requires many sequential calls to the language model and we need to do it on many datasets in parallel. We fully agree that this is a less-than-ideal solution, but it does enable us to do research that would otherwise be inaccessible to us. At the current pace of innovation in LLMs, we are cautiously optimistic that there will be software solutions (e.g., for efficient inference) that will change the circumstances.
Apologies that it took us a while to respond because we were investigating the possibility of using an open-sourced model. Unfortunately, as it stands, we simply do not have the infrastructure to host an LLM the size of GPT-3 at a high enough throughput to run all the experiments in a reasonable amount of time. We will try to reproduce subsets with open-source models such as BLOOM or GPT-J for the final version of the paper. In the meantime, is there anything else that we could discuss or add to the paper that would partially alleviate your concerns?
Strengths: 1. The experiments are conducted on a large number of datasets and illustrate the effectiveness of the method proposed on tabular data.
2. Clear writing and visualization.
3. The examples in the appendix are enjoyable to read. The prompt templates in the appendix are worth learning.
Weaknesses: 1. The integration of multiple weak learners using ensemble learning methods, each requiring the invocation of LLM, may result in significant resource costs. In Appendix A.11, it is calculated that running a dataset of 175 instances would incur a cost of $25, which seems relatively high.
2. The presentation of experimental results could be improved by highlighting the relevant information in the tables. Adding bold formatting to the results would make them more prominent and eye-catching.
3. In Appendix A.12, it is mentioned that ChatGPT performs poorly on some datasets due to the utilization of RLHF. However, no experimental evidence or persuasive argument is provided to support this claim. It is also possible that the issue stems from suboptimal prompts. Also, Figure 3 shows the results of using GPT-3, where GPT-3 does not consistently improve upon the smaller model. GPT-3 and ChatGPT have the same pretraining data, which means the performance degradation may be influenced by the pretraining data of the model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The proposed method to split continuous values in tabular data into discrete attributes and generate corresponding summaries. I wonder whether using this approach to fine-tune the LLM could yield promising results. This way, it can effectively align the tabular data format with the language model while enabling the model to learn latent knowledge embedded in the data.
2. How can the determination of the number of discrete categories (e.g., low, medium, high) for continuous numerical values in a table be made? From what I understand, your approach involves experimenting to determine the optimal number of categories.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed in Section 6 of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you found our paper enjoyable to read and our writing lucid to follow. Thank you for your positive feedback on our work! Please find our responses to your review as follows.
**Weaknesses**
> 1. The integration of multiple weak learners using ensemble learning methods, each requiring the invocation of LLM, may result in significant resource costs. In Appendix A.11, it is calculated that running a dataset of 175 instances would incur a cost of $25, which seems relatively high.
Thank you for the suggestion! It is true that querying the OpenAI API to run the prompts can be expensive; however, in principle our approach does not depend on the OpenAI API and can utilize open-source models hosted locally, which could drastically reduce the cost.
We also show in Appendix A.10 that our method is in fact more compute-efficient than finetuning. For a dataset of 175 points, finetuning makes 7000 calls to the LLM, while prompting needs only 1250 passes through the LLM.
> 2. The presentation of experimental results could be improved by highlighting the relevant information in the tables. Adding bold formatting to the results would make them more prominent and eye-catching.
We have posted a PDF of the table with the best-performing results highlighted in bold. We refer to our overall response note to all reviewers.
> 3. In Appendix A.12, it is mentioned that ChatGPT performs poorly on some datasets due to the utilization of RLHF. However, no experimental evidence or persuasive argument is provided to support this claim. Also, GPT-3 does not consistently improve upon the smaller model, and GPT-3 and ChatGPT have the same pretraining data, which means the performance degradation may be influenced by the pretraining data of the model.
In Appendix A.12, we show that the performance of ChatGPT is worse than GPT-3 Curie on many datasets, especially medical-related ones concerning patients, diseases, etc.
We speculate that RLHF is responsible for this behavior since it is one of the main differences between ChatGPT and GPT-3 Curie. Since we do not have information about changes in the pretraining data, we believe it is reasonable to infer that RLHF is a major factor. We note that we did not alter any prompts for ChatGPT, and our method mainly relies on the LLM generating a summary for itself, which it then uses to perform reasoning.
We have provided our explanations based on experiments in Table 6. We have observed that it was hard for models with RLHF to change their opinion on scenarios that sound unlikely but plausible. For example in the *somerville-happiness-survey* dataset, a resident rates many amenities as low but is overall *happy* due to other factors. ChatGPT was unable to reconcile this instance with the remaining examples. Further, we have observed that the models with RLHF often would abstain from answering (making predictions) on such examples, making the boosting algorithm less effective.
Again, we wish to highlight that this is just speculation based on our empirical observations, since we don't have access to the actual models/ pretraining data of OpenAI. This observation is important because if one wishes to use this method it would be better to use the base pretrained model rather than one that has been fine-tuned with RLHF.
**Questions:**
> 1. The proposed method to split continuous values in tabular data into discrete attributes and generate corresponding summaries. I wonder whether using this approach to fine-tune the LLM could yield promising results. This way, it can effectively align the tabular data format with the language model while enabling the model to learn latent knowledge embedded in the data.
Yes, we have compared against a similar approach called LIFT [1] in Table 2, which is comparable to XGBoost on most datasets. It involves finetuning the LLM directly on plain-English sentence forms of the tabular records. Without needing to convert continuous values to discrete categories, this method is already competitive on datasets with many continuous features such as glass, wine, iris, vehicle, and wholesale-customers. This suggests that with finetuning, LLMs are able to handle continuous attributes better.
[1] T. Dinh, Y. Zeng, R. Zhang, Z. Lin, S. Rajput, M. Gira, J.-y. Sohn, D. Papailiopoulos, and K. Lee. Lift: Language-interfaced fine-tuning for non-language machine learning tasks. arXiv preprint arXiv:2206.06565, 2022.
> 2. How can the determination of the number of discrete categories (e.g., low, medium, high) for continuous numerical values in a table be made? From what I understand, your approach involves experimenting to determine the optimal number of categories.
Yes, in Appendix A.6 and Table 4 we provide a detailed description of the various techniques used for encoding numerical values into descriptive attributes. Among these strategies, we found that binning with quantifiers works best, and the number of bins was determined through hyperparameter search. These results are summarized in Figure 4 (right top). | Summary: This paper demonstrates that prompt-based LLMs can be used as weak learners, with applications to boosting algorithms for tabular data. By providing text descriptions of tabular data samples, the authors show that LLMs can produce a summary of the samples and use it as a template for classification that can be leveraged as a weak learner for the task at hand. The proposed approach outperforms zero- and few-shot learning and, occasionally, even SOTA algorithms.
Strengths: The paper introduces a novel approach to using LLMs as weak learners that can be leveraged on tabular tasks via boosting. The proposed approach appears to be novel, but in order to properly judge the paper's significance, the authors should address the comments below on the quality and clarity of presentation. Compared to zero- and few-shot approaches, the proposed Summary & Summary Boosting is clearly superior; however, when compared with KNN, XGBoost, and (especially) TabPFN, the practical applicability seems quite limited and not well understood.
Weaknesses: The paper can be significantly improved by providing a crisper, more intuitive description of the proposed approaches (both Summary & Summary Boosting), and by offering an in-depth discussion of the practical impact of the empirical results in Section 4.
With respect to the presentation: Figure 2 appears to present the "Summary" algorithm, without Boosting. Even so, it is unclear (i) what is the output of the stratified cluster sampling (I expect it to be a subset of the input to this step), (ii) what does "Select BEST summary" mean - best based on what? only one summary is selected? probably not, because one would expect it to be not so much "the best" but rather "the MOST USEFUL" to classify a new, unlabeled example. Furthermore, lines 137-138 seem to imply that just one summary is chosen (per class?). Adding all these key details to Figure 2 is critical for removing the current ambiguities.
The paper should add a new figure, similar to Fig 2, that fully illustrates in detail the Summary Boost algorithm.
Last but not least, the authors should also clarify & discuss the results in Tables 1 & 2, for both of which they should BOLD the best result (to improve readability). While it is obvious that both Summary & Summary Boosting outperform zero- and few-shot learners (Table 1), the picture is a lot more confusing when comparing Summary Boosting with the four strong "baselines" in Table 2. First of all, the novel approach obtains the best results on only 3 or 4 of the 16 datasets. Second, oftentimes one of the baselines outperforms the novel approach by almost an order of magnitude (e.g., wc & wine). Last but not least, the two conclusions of the paper (that SB does best on datasets with few examples and worst on those with continuous attributes) are not necessarily correct: even though the best performances of SB are on datasets with 1/3/3/5 continuous attributes and 73-306 examples, it is unsafe to generalize from here. For example, iris has 4 continuous attributes and 150 examples, but SB's performance on it is abysmal (0.193 vs 0.027), even though this dataset's properties are similar in nature to hams (5 & 73) or tae (1 & 151), where it does comparably or better than the competition. The authors should do an in-depth study about the suitability of the approach to various types of domains; if they need extra space, they could shrink Section 5 or send it to the appendices.
OTHER COMMENTS:
- line 87: you should either explain what "the special token" does, or omit to mention it
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see Weakness above
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The main concern of this reviewer is the practical usage of Summary & Summary Boost. Without clarifying the issues and questions raised under "Weaknesses," the proposed approach has very limited applicability under still-unclear circumstances.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions for our manuscript.
> 1. The paper can be significantly improved by providing a more intuitive description of the proposed approaches and by discussing the practical impact of the results in Section 4.
Thank you for your comments! We have brought out the intuition behind our proposed method in multiple places leading up to Section 3.1. Lines 32-37 introduce the idea; lines 77-80 and 95-100 explain it further.
One practical benefit of our work is that it improves the ability of LLMs to reason over tabular data compared to few-shot and zero-shot prompting. We have shown that a “summary” as an intermediate step makes reasoning easier and in fact can be used inside boosting models for small tabular datasets (up to 300 data points) without many continuous features.
> 2. With respect to the presentation: Figure 2 appears to present the "Summary" algorithm, without Boosting. Even so, it is unclear (i) what is the output of the stratified cluster sampling
Stratified cluster sampling yields a representative subset of examples that will be injected into the prompt for summarization. Since the LLM’s context length is limited, only a few examples selected by this sampling process can be fed as input. This has been mentioned in lines 142-154 and depicted in Figure 2.
> 3. What does "Select BEST summary" mean - best based on what? only one summary is selected? probably not, because one would expect it to be not so much "the best" but rather "the MOST USEFUL" to classify a new, unlabeled example.
We use a validation set to select the best summary. The best summary is the one that achieves the lowest validation error rate. We are not sure what “most useful” could mean in this case without a more concrete definition.
> 4. Furthermore, lines 137-138 seem to imply that just one summary is chosen (per class?).
We would like to clarify that there is only one summary for each weak learner.
For the "Summary" method compared in Section 4 - Experiments, we generate a fixed number (25) of summaries and pick the one with the smallest validation error rate. This is mentioned in lines 137-139.
However, to obtain a weak learner that is used inside "Summary Boosting", the procedure is slightly different. We sample summaries until we find the first one that does better than random guessing on the training distribution in that round. This summary becomes our weak learner. We refer to lines 165-167.
Further details have been explained in Appendix A.2.
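The two selection procedures can be condensed into a small sketch; `sample_summary`, the error functions, and the values below are hypothetical stand-ins for the LLM prompting and evaluation steps, not the paper's actual code.

```python
def select_summary_fixed(candidates, val_error):
    # "Summary" baseline: among a fixed pool of candidate summaries
    # (25 in the paper), keep the one with the smallest validation error.
    return min(candidates, key=val_error)

def sample_weak_learner(sample_summary, train_error, chance_level, max_tries=100):
    # Boosting variant: resample until the first summary that does better
    # than random guessing on the current round's training distribution.
    for _ in range(max_tries):
        summary = sample_summary()
        if train_error(summary) < chance_level:
            return summary
    return None  # no weak learner found within the budget

# Toy illustration with precomputed (hypothetical) error rates.
errors = {"s1": 0.61, "s2": 0.48, "s3": 0.35}
best = select_summary_fixed(list(errors), errors.get)   # lowest validation error
pool = iter(["s1", "s2", "s3"])
weak = sample_weak_learner(lambda: next(pool), errors.get, chance_level=0.5)
```

In the boosting variant, `train_error` would be evaluated under the reweighted training distribution of the current round.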
> 5. Adding all these details to Figure 2 is critical for removing the current ambiguities.
We thank the reviewer for the suggestion. A figure that includes the boosting algorithm would take up a lot of space, so we found it better to describe the missing details in the text and through Algorithm 2. In the revision we can refer to the text in the caption if that would be more helpful.
> 6. The authors should also clarify & discuss the results in Tables 1 & 2, for both of which they should BOLD the best result (to improve readability). While both Summary & Summary Boosting outperform zero- and few- shot learners (Table 1), it is not true of the "baselines" in Table 2.
We have posted a PDF of the table with the best-performing results highlighted in bold. We refer to our overall response note to all reviewers.
> 7. The two conclusions of the paper (that SB does best on datasets with few examples and worst on those with continuous attributes) are not necessarily correct: iris has 4 continuous attributes and 150 examples, but SB's performance on it is abysmal (0.193 vs 0.027), even though this dataset's properties are similar in nature to hams (5 & 73) or tae (1 & 151), where it does comparably or better than the competition.
Our observations are based on the majority of datasets and not any single one. It is true that the LLM does not perform well when the dataset has more continuous features as supported by its error rate on *iris, glass, wine*, and *wholesale-customer* datasets. On the other hand, our approach works best when the dataset is small, as evident in *caesarian, TA evaluations, somerville-happiness-survey, haberman-survival*, and *visualizing-hamster* datasets. This is reasonable because LLM leverages prior knowledge from pretraining, which becomes less relevant and less competitive as the dataset size increases.
However, it's worth noting that the *visualizing-hamster* dataset is very small (< 80 points), and in this case it appears that XGBoost overfitting makes our method seem like the best performer. Our hypothesis is that prompting is unlikely to overfit on any dataset since it remains a few-shot learner.
We wish to highlight here that our method does not always improve over the state of the art due to current limitations of LLMs with respect to context size, which prevent them from ingesting a large number of examples. This may be addressed in newer LLMs - for instance, GPT-4 has a 32k context length. Additionally, even converting numerical values to discrete categories does not fully address continuous features, reflecting a known problem with LLMs: they are bad at quantitative reasoning. These issues may be resolved as LLMs get better. Also, there may be ways to combine trees and LLMs in a single boosting algorithm.
> 8. line 87: you should either explain what "the special token" does, or omit to mention it
By "special token" we mean the [MASK] token in prompt which is commonly used for Masked Language Modeling in BERT, i.e. predicting a masked token in a sequence. We will include this change in the revision.
**Limitations:**
> 1. The main concern of this reviewer is the practical usage of Summary & Summary Boost. Without clarifying the issues and questions raised under "Weaknesses," the proposed approach has very limited applicability under still-unclear circumstances.
We refer to our explanations addressed in the previous comments. We will emphasize these aspects more in the revised paper.
---
Rebuttal Comment 1.1:
Title: After authors' rebuttal
Comment: Thank you very much for the detailed answer to my review. Your comments helped clear up most of my "tactical comments," but less so when it comes to the "strategic" ones. IMO, the empirical evidence is still too mixed to get this paper out of the "borderline" zone; the situation would be different if the authors had a real-world application domain with clear cut results. Furthermore, the comments of fellow reviewers NJzp and gzhs have also convinced me that my original rating is correct. | Rebuttal 1:
Rebuttal: A common point shared by many reviewers was to bold the fonts in Tables 1 & 2 for the best numbers in each row. We have added bolding to highlight the best-performing results, which can be viewed in the attached file.
We note that our method doesn't always show improvements over the state of the art due to current limitations of LLMs with respect to context size that prevent them from ingesting a large number of examples (this might be addressed in GPT-4 or newer models). We show that our approach works best when the dataset is small and doesn't contain many continuous features, which is reasonable since the LLM exploits prior knowledge from pretraining to its advantage. In the future, one may combine this strength with other models to get the best of both worlds.
Pdf: /pdf/c77a94d8c8b6b314767dc42020d960afc7bd090b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a novel way to use large language models (LLMs) as weak learners in boosting frameworks for tabular data. The core idea is called LLM Summary Boosting, a novel method that prompts large language models (LLMs) to create weak learners for usages within a boosting framework to make predictions on tabular data. The authors conducted thorough experiments on a variety of benchmark datasets. Compared to conventional tree-based boosting and LLM-based finetuning, this approach demonstrates superior performance in specific scenarios and remains effective with limited examples. To the best of my knowledge, the contribution of this paper is novel despite its incremental nature of contribution. I therefore recommend weak accept and am willing to hear what the authors think.
Strengths: This paper is well-motivated and well-written. I also find the paper very easy to follow along with. Ideas of prompting LLMs to perform data manipulation/wrangling task aren’t new but this paper focuses on the specific usages of prompting for use with boosting scheme on tabular data. I think the contribution is clear here with the experiments.
Weaknesses: While I find the arguments and experiments quite comprehensive, I am personally on the fence about the title, but I do not fully object to it. However, in the current form, the title seems to be a little bit overclaiming.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors described the limitations of the proposed methods in section 6. The authors also provide a detailed failure modes documentation and I think it is very valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive views of our paper! We are glad that you found it well-written and easy to follow along with. Please find our responses to your review as follows.
**Weaknesses**
> 1. While I find the arguments and experiments quite comprehensive, I am personally on the fence about the title, but I do not fully object to it. However, in the current form, the title seems to be a little bit overclaiming.
Thank you for your suggestion! We have justified the title by showing that LLMs can be used for boosting, i.e. serve as weak learners.
By definition, a weak learner is a classifier that achieves better than random guessing performance under the distribution of interest. We believe that our tabular data classifier created through prompts was able to demonstrate this property. | null | null | null | null | null | null |
GeoPhy: Differentiable Phylogenetic Inference via Geometric Gradients of Tree Topologies | Accept (poster) | Summary: This work presents a new robust and scalable method for inferring phylogenetic trees based on (variational) Bayesian inference.
Strengths: Originality
- the originality of this work is in providing a robust and rigorous solution to an important application problem where the application of Bayesian methodology has so far been relatively limited
Quality
- the work is well motivated and appropriately implemented including the relevant derivations and experimental benchmarking
Clarity
- language is fluent, and intuitive illustrations support reading; the technical details are described in sufficient detail and complemented by intuitive explanation. Conclusions and limitations are clearly stated.
Significance
- significant, general and timely application problem addressed
- improved robustness, while maintaining scalability that allows practical application
Weaknesses: Treatment of the benchmark data sets is limited compared to the potential of the method for real applications. This could be expanded to highlight the relevance of the work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Good performance compared to alternatives is shown but is it possible to demonstrate more on practical relevance for this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No source code.
No assessment of societal implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. Treatment of the benchmark data sets is limited compared to the potential of the method for real applications. This could be expanded to highlight the relevance of the work.
Thank you for your constructive comments.
Regarding the improved treatment of the current experiments, we introduced an intuitive understanding and comparison of topology spread and consensus trees through visualization analysis, as shown in Fig. R1. Additionally, to gain a deeper understanding of the estimated topology distribution Q(τ), we analyzed the relationship between the marginal log-likelihood (MLL) estimates and the fidelity of the majority consensus tree obtained from $Q(\tau)$ in Fig. R2.
> 2. Good performance compared to alternatives is shown but is it possible to demonstrate more on practical relevance for this?
Thank you for your question. In Fig. R2, we confirm that the performance of the marginal log-likelihood (MLL) estimates aligns well with the quality of the consensus tree obtained from $Q(\tau)$.
We believe that the estimation of evolutionary parameters and demographic history, which are more practical issues, will become possible in the future by expanding the simple phylogenetic tree model and deriving variational algorithms for it.
> 3. No assessment of societal implications.
Thank you for your insightful comments. Regarding the societal implications of our work, we recognize the importance of addressing this aspect. Scalable phylogenetic inference methods, like the ones we propose, have the potential to greatly enhance our understanding of the evolution, origin, and spread mechanisms of viruses and bacteria. Such insights could have profound societal impacts, especially in the areas of public health and disease control. We will include a discussion on the potential applications and the future perspectives related to these methods in the 'Limitations and Future work' section.
> 4. No source code.
We have included our source code in the supplementary materials, and we will ensure that our updated manuscript references it clearly.
---
Rebuttal Comment 1.1:
Title: Review responses
Comment: Thank you, the comments have been adequately addressed. My overall scoring remains unaltered. | Summary: The authors propose a method for learning phylogenetic trees from sequence data.
The key idea is borrowed from [16], which is to represent the tree topology in terms of an embedding of the leaves of the tree in a continuous space, $z \in \mathcal{Z}$, from which the topology is extracted via a mapping $\tau \;:\; \mathcal{Z} \to \mathcal{T}$, where $\mathcal{T}$ is the space of binary trees, using a distance-based technique (in particular the popular Neighbor-Joining (NJ) method). The fact that the representation in terms of $z$ is continuous has benefits in a variational representation of the posterior distribution over trees.
The theoretical contribution of the present paper is resolving some issues with "an unevaluated Jacobian determinant between $B_\tau$ and $z$" (see Sec. 4 "Related work", p. 6).
The resulting method is demonstrated through simulations, showing that the method yields high-scoring models (high log-likelihoods), but not as high as existing variational and MCMC techniques (Table 2).
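As a rough illustration of the distance-based step in this pipeline, the sketch below computes the pairwise distance matrix between leaf embeddings, assuming a Poincaré-ball parameterization of hyperbolic space for simplicity (the paper's exact coordinate model may differ); such a matrix is what a neighbor-joining routine would consume to produce $\tau(z)$.

```python
import math

def poincare_distance(u, v):
    # Hyperbolic distance in the Poincare ball:
    # d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))
    du = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * du / ((1.0 - nu) * (1.0 - nv)))

def distance_matrix(points):
    # Pairwise leaf-to-leaf distances; this matrix would be passed to
    # neighbor joining to map the continuous embedding z to a topology.
    n = len(points)
    return [[poincare_distance(points[i], points[j]) for j in range(n)]
            for i in range(n)]

leaves = [(0.0, 0.0), (0.5, 0.0), (0.0, -0.3)]
D = distance_matrix(leaves)
```

A useful sanity check is the closed form $d(0, v) = 2\,\mathrm{artanh}(\lVert v \rVert)$ for distances from the origin.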
Strengths: - I like the idea (which really borrowed from [16], so not really a new idea) of the embedding in terms of continuous coordinates
- solid theoretical derivation of the variational estimators
- promising results
- quite well written (although a bit dense)
Weaknesses: - not clear what the benefits are compared to existing variational and MCMC methods (which are presented as the gold-standard)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: My main question is what are the benefits compared to, e.g., MrBayes, VBPI-GNN, and the method proposed in [33], the first two of which are referred to as gold standard. I'm guessing the proposed method is computationally more efficient and scalable, but this should be clearly demonstrated (or am I missing something?).
I should point out that I'm no expert of variational Bayes, so I didn't check the derivations from that point of view. I'm counting on the other reviewers to comment on that side.
detailed comments:
p. 7: "marginal log-likelihood (MLL) estimates": Does this mean MLL values? I mean, "maximum (log-) likelihood estimate" usually refers to an estimate of a parameter that is obtained by likelihood maximization, so it'd be good to be unambiguous.
p. 7: typo "Monte-Calro"
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: I think so (but see my question about the benefits wrt. existing methods above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive review and comments.
> My main question is what are the benefits compared to, e.g., MrBayes, VBPI-GNN, and the method proposed in [33], the first two of which are referred to as gold standard. I'm guessing the proposed method is computationally more efficient and scalable, but this should be clearly demonstrated (or am I missing something?).
Thank you for your question. Our main contribution is the development of variational Bayesian inference methods for phylogenetics without the preselection of tree topologies, which VBPI-GNN requires.
For the evaluation of computational efficiency, we have included a performance comparison in Fig. R3, where the runtimes of our methods are comparable to VBPI-GNN for the same number of likelihood evaluations.
Although MrBayes is a highly optimized and practical implementation of MCMC-based Bayesian phylogenetic inference, variational Bayesian methods are a promising alternative that facilitate gradient-based optimization, evaluation of model evidence such as $\ln P(Y)$ through a tractable approximate posterior distribution, and extensibility of the algorithms to more complex and larger models.
> detailed comments: p. 7: "marginal log-likelihood (MLL) estimates": Does this mean MLL values? I mean, "maximum (log-) likelihood estimate" usually refers to an estimate of a parameter that is obtained by likelihood maximization, so it'd be good to be unambiguous.
Thank you for your question. In the context of variational inference, the marginal log-likelihood (MLL) estimate is used as a quality metric for the posterior approximation $Q(\tau, B_\tau)$.
This is because the expectation of the MLL estimate $L^{(K)}[Q(\tau, B_\tau)]$ is a lower bound of the true MLL value, with equality when $Q(\tau, B_\tau) = P(\tau, B_\tau | Y)$.
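This lower-bound property can be checked numerically on an assumed toy conjugate model ($z \sim N(0,1)$, $y|z \sim N(z,1)$, so the true marginal is $N(0,2)$), using the prior as the proposal; this is only an illustration of a $K$-sample estimator, not the model or estimator used in the paper.

```python
import math
import random

def mll_estimate(y, K, rng):
    # L^(K): log of the average importance weight p(y | z_k), z_k ~ q = prior.
    # Its expectation lower-bounds the true marginal log-likelihood ln P(y),
    # with the bound tightening as K grows.
    weights = []
    for _ in range(K):
        z = rng.gauss(0.0, 1.0)
        weights.append(math.exp(-0.5 * (y - z) ** 2) / math.sqrt(2 * math.pi))
    return math.log(sum(weights) / K)

rng = random.Random(0)
y = 1.0
true_mll = -0.5 * math.log(2 * math.pi * 2.0) - y * y / 4.0  # ln N(1; 0, 2)
est = mll_estimate(y, K=20_000, rng=rng)                     # near the true MLL
elbo = sum(mll_estimate(y, K=1, rng=rng) for _ in range(2_000)) / 2_000  # K = 1
```

The $K = 1$ average is the plain ELBO and is visibly looser than the $K$-sample estimate, mirroring why MLL estimates are used to rank posterior approximations.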
> p. 7: typo "Monte-Calro"
Thank you for your suggestions. We will correct the spelling of our revised manuscript. | Summary: The authors present a novel variational distribution for tree topologies, $Q(\tau)$, in the context of variational inference (VI) in phylogenetics. They construct their $Q(\tau)$ by introducing a continuous distribution $Q(z)$ in hyperbolic space and define $Q(\tau)$ by an expectation over the support of $Q(z)$ of an indicator function acting on a mapping from the hyperbolic space to the topology space; this allows sampling from $Q(z)$ and reconstructing \tau through the neighbour joining algorithm and, by the differentiable $Q(z)$ distribution, enables Monte-Carlo gradient estimation to maximize the Evidence Lower Bound (ELBO).
An ablation study is conducted to evaluate different $Q(z)$ distributions (Normal and Wrapped-Normal) and control variates. The model is then compared to other VI methods and MrBayes across 8 datasets.
Strengths: The problem of phylogenetic tree reconstruction is paramount to understanding evolution, e.g., the evolution of species and cancer. The grand size of the tree topology space induced by a set of taxa constitutes a major obstacle for inference; any method successful in efficient exploration of this space is a significant contribution to the field. Here, the authors construct a differentiable $Q(\tau)$, enabling the use of gradients to indirectly maneuver the tree topology space. Furthermore, the proposed framework does not require restricting the tree topology space based on pre-processing steps as in VBPI.
The novel, differentiable approach to $Q(\tau)$, the avoidance of restrictions of the tree support and reporting strong results on datasets DS1-DS8 in terms of the ELBO(Q,R), makes the paper a significant and original contribution to the field of Bayesian phylogenetics.
Weaknesses: The standard deviations of the proposed method reported in Table 2 are low for multiple datasets; this may be due to the consistency of the optimization across seeds, but may also indicate that the support of $Q(z)$ collapses to regions containing one/few unique $\tau(z)$. Since there is no analysis/discussion regarding this, the paper presents limited information regarding the representative power of $Q(\tau)$, which is a major setback of the paper, especially as the MrBayes "golden-run" posteriors over the tree topologies for DS1-DS8 support multiple topologies (Whidden and Matsen 2015, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4395846/ ).
The use of $Q(z)$ makes the method extra susceptible to limited topology support by the inherent mode-seeking behaviour in VI; while a mode in the topology space could represent multiple tree topologies, a mode in Z-space might only support reconstruction of, at worst, one tree topology.
The paper would benefit from reporting runtime compared to MrBayes and other VI methods, especially as one of the strengths in VI is speed. The current report on runtime is limited and is relegated to Appendix C.
Furthermore, an obvious weakness is the need to introduce a lower bound of the ELBO.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The paper is a significant, novel and promising contribution to the field and is in its current format acceptable for publication.
However, based on my concerns in the weakness section, addressing the following points in the rebuttal could make me increase my score:
1. A discussion regarding the low variance in Table 2 and limited knowledge of the representative power $Q(\tau)$.
2. Clearer reporting of the runtime of the algorithm in the main paper.
Addressing the following points in the rebuttal could increase my score even further:
1. An experiment demonstrating the representative power/limitations of $Q(\tau)$
Some misprints: "Monte-Calro" figure text of Figure 2 line 4.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations has been somewhat discussed (see weaknesses for omitted discussion of possible limitation).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful suggestion and feedback.
> W1. The standard deviations of the proposed method reported in Table 2 are low for multiple datasets; this may be due to the consistency of the optimization across seeds, but may also indicate that the support of $Q(z)$ collapses to regions containing one/few unique $\tau(z)$.
> Q1-1. A discussion regarding the low variance in Table 2 and limited knowledge of the representative power $Q(\tau)$.
Thank you for your question. We expect that the main source of the relatively low variance of our methods in Table 2 is that the marginal log-likelihood (MLL) estimates are sufficiently close to the reference values.
Indeed, although VBPI-GNN requires a reasonable preselection of candidate topologies, it exhibits the lowest variance among the methods (Table 2) and still shows bipartition frequencies aligned with those of MCMC (Fig. R2 Fourth).
Future work should address the expressivity of $Q(z)$ to represent a more diverse topological distribution $Q(\tau)$.
> The paper would benefit from reporting runtime compared to MrBayes and other VI methods, especially as one of the strengths in VI is speed. The current report on runtime is limited and is relegated to Appendix C.
> Q1-2. Clearer reporting of the runtime of the algorithm in the main paper.
Thank you for your suggestion. We have compiled the runtime of our methods in comparison with MrBayes and VBPI-GNN (Fig. R3 First).
While we expect that more efficient per-step computation will be important future work, the current performance for a fixed number of iterations is comparable to that of VBPI-GNN.
We also included the estimated runtime across the eight datasets and the different $Q(z)$ model configurations, where the number of species ranges from $N=27$ to $64$ (Fig. R3 Second).
> Q2-1. An experiment demonstrating the representative power/limitations of $Q(\tau)$
> The use of $Q(z)$ makes the method extra susceptible to limited topology support by the inherent mode-seeking behaviour in VI; while a mode in the topology space could represent multiple tree topologies, a mode in Z-space might only support reconstruction of, at worst, one tree topology.
Thank you for your constructive suggestion.
We confirm that the mode of the topological distribution $Q(\tau)$ matches that of MCMC well when the MLL values are close to the gold standard, through the visualization of consensus trees obtained from $Q(z)$ and MCMC shown in Fig. R1 and
the evident correlation between the Robinson-Foulds (RF) distance and MLL estimates for multiple datasets in Fig. R2 First to Third.
For $Q(\tau)$ in the DS1 dataset, we showcase the bipartition frequencies observed in the posterior tree distributions of MCMC, VBPI-GNN, and our method (Fig. R2 Fourth).
The diversity index of our $Q(\tau)$ in Fig. R2 Fourth is 0.36 > 0; however, it is considerably lower than those of MCMC (0.87) and VBPI-GNN (0.86), so it is still difficult for our method to faithfully represent the tree distribution around the mode (the consensus tree).
> Some misprints: "Monte-Calro" figure text of Figure 2 line 4.
Thank you for your suggestion. We will correct the spelling.
---
Rebuttal Comment 1.1:
Title: Response rebuttal
Comment: Thank you for addressing my concerns raised in the review. I will raise my score to a 7, for now, as the rebuttal's added experiments and discussion adequately address Q1-1 and Q1-2. However, regarding Q2-1 I still have some questions. Furthermore, I was not aware of the resemblance to [16] pointed out by reviewer xhDC, which changes my assessment of novelty and therefore makes me less prone to raise the score further.
Q1-1:
I agree, this is a plausible explanation for the low variance.
Q1-2:
Figure R3 more than adequately addresses my concerns. In light of these results, the paper would benefit from mentioning the faster learning runtime of VCSMC and Vaiphy, illustrating the trade-off between high performance but long training runtime approaches (Geophy, VBPI) vs lower performance but short training runtime approaches (Vaiphy, VCSMC).
Q2-1:
R2 First to third: I don't understand the experiment behind this: are the consensus trees of GeoPhy and MrBayes calculated, and then the ELBO measured under this fixed-tree setting? What does each dot represent in each plot? What does 'dim' refer to? Could you please provide a more elaborate description of the experiments behind this plot so I can better assess how they address my concerns.
R2 Fourth:
I think that the DS1 dataset is not preferable for this analysis, as the support of the MrBayes posterior includes few topologies. Datasets DS4 and higher would've been a better selection (see figure 3 of https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4395846/). Nonetheless, it does shine some light on the representative power of $Q(\tau)$ and is a good addition to the paper.
Furthermore, what in R2 First-Fourth informs the diversity of the trees sampled by $Q(\tau)$? How many unique trees are used to construct the consensus-tree?
---
Reply to Comment 1.1.1:
Title: Response on the remaining concerns
Comment: We sincerely appreciate your detailed feedback and the time you've taken to review our manuscript.
> I was not aware of the resemblance to [16] pointed out by reviewer xhDC, which changes my assessment on novelty and therefore less prone to raise the score further.
We'd like to address the perceived resemblance between our work and [16] and clarify our contributions, which we partly discussed in the 'Related work' section.
While we acknowledge [16] as an inspiring and original contribution to the field, we believe that our work also contributes distinctly and significantly.
Similarities:
* The use of the neighbor-joining (NJ) algorithm that maps continuous coordinates to a phylogenetic tree
* Both works focus on Bayesian phylogenetic inference
Differences:
* We developed a variational inference (VI) algorithm, while [16] developed an MCMC-based algorithm.
* We addressed the issue of parameterizing $Q(\tau)$ in the VI algorithm to cover a vast number of tree topologies, while [16] highlighted the fidelity of hyperbolic spaces to embed the distribution over trees with distances $P(\tau, B_\tau | Y)$.
* For the use of the NJ, we explicitly defined a distribution over topologies $Q(\tau) = E_{Q(z)}[I[\tau(z) = \tau]]$ instead of mapping coordinates $z$ to $(\tau, B_\tau)$ as seen in [16]. This distinction is crucial as our approach avoids the issue of the Jacobian determinant seen in [16].
* Unlike [16], our link function $\tau(z)$ does not necessarily rely on NJ, as we don't directly link $z$ to $B_\tau$. We showcase the results using UPGMA in Fig. R3 Fourth.
Distinct contributions not comparable to [16]:
* We introduced a tractable lower bound $\mathcal{L}$, then explored designs of variational distributions and control variates to complete a novel VI algorithm (GeoPhy).
* We benchmarked the model evidence estimations (MLLs) across approaches and exhibited significant improvement over other methods that considered whole topologies.
We hope these clarifications address your concerns.
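As a minimal self-contained illustration of the implicit construction $Q(\tau) = E_{Q(z)}[I[\tau(z) = \tau]]$ discussed above (our own toy sketch, not the authors' code: Euclidean Gaussians stand in for the hyperbolic $Q(z)$, and a "closest pair" map stands in for the NJ link function), topology probabilities can be estimated purely by sampling, with no Jacobian determinant involved:

```python
from collections import Counter
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Q(z): independent 2-D Gaussians for 4 taxa (a toy stand-in for the
# hyperbolic distributions used in the paper).
mu = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 0.0], [8.0, 0.0]])
sigma = 0.5

def link(z):
    """Toy discrete link function tau(z): which pair of taxa is closest.

    A stand-in for neighbor joining; any deterministic map from
    coordinates to a discrete structure works the same way.
    """
    pairs = itertools.combinations(range(len(z)), 2)
    return min(pairs, key=lambda p: np.linalg.norm(z[p[0]] - z[p[1]]))

# Q(tau) = E_{Q(z)}[ I[tau(z) = tau] ], estimated by Monte Carlo.
n = 5000
counts = Counter(link(mu + sigma * rng.standard_normal(mu.shape)) for _ in range(n))
q_tau = {tau: c / n for tau, c in counts.items()}
```

Conceptually, replacing `link` with neighbor joining on the pairwise distances of `z` recovers the paper's setting; the estimator is unchanged because only samples of $\tau(z)$ are needed, never a density over trees.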
> the paper would benefit from mentioning the faster learning runtime of VCSMC and Vaiphy ...
Thank you for your suggestion. Accordingly, we will include a discussion on the trade-off of the performance and runtimes between these methods in their standard use in our revised manuscript.
> R2 Fourth: I think that the DS1 dataset is not preferable for this analysis, as the support of the MrBayes posterior includes few topologies.
> Datasets DS4 and higher would've been a better selection
Thank you for the valuable suggestions. In response, we've investigated DS4 and DS7 alongside DS1 to discuss the limitations of $Q(\tau)$, especially in cases with more diffused tree samples.
Also, we present the differences among the tree topology distributions more clearly by showing (b) the frequency of the most frequent topology and (c) the number of topologies whose cumulative frequency reaches 95%, in addition to (a) the diversity index.
| DS1 | MrBayes | VBPI-GNN | GeoPhy |
|--|--:|--:|--:|
| (a) Simpson's diversity index | 0.87 | 0.86 | 0.36 |
| (b) Top freq. topology | 0.27 | 0.26 | 0.79 |
| (c) #topology up to 95% freq. | 42 | 44 | 11 |
| DS4 | MrBayes | GeoPhy |
|--|--:|--:|
| (a) | 0.90 | 0.68 |
| (b) | 0.28 | 0.55 |
| (c) | 208 | 58 |
| DS7 | MrBayes | GeoPhy |
|--|--:|--:|
| (a) | 0.99 | 0.99 |
| (b) | 0.02 | 0.02 |
| (c) | 753 | 553 |
For DS4, the same overall tendencies of GeoPhy as in DS1 are observed: lower (a), higher (b), and lower (c). Interestingly, for DS7, we observed that GeoPhy represents more diverse tree samples than in DS1 and DS4. However, the number of unique topologies up to 95% frequency is still lower than that of MrBayes, which implies that more expressiveness of $Q(\tau)$ is required to represent fine topology weights.
While the results for VBPI-GNN are not readily available for DS4 and DS7 due to time constraints, we would like to include the corresponding results in our revised manuscript.
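For reference, the three summary statistics in the tables above can be computed from the empirical topology frequencies along the following lines (a sketch under the assumption that (a) is the standard Simpson diversity index $1 - \sum_i p_i^2$; the example frequencies below are made up):

```python
def topology_summary(freqs, coverage=0.95):
    """Summary statistics of an empirical tree-topology distribution.

    freqs: frequencies of the unique sampled topologies (summing to ~1).
    Returns (a) Simpson's diversity index 1 - sum(p_i^2),
            (b) the frequency of the most frequent topology,
            (c) the number of topologies needed to reach the coverage level.
    """
    p = sorted(freqs, reverse=True)
    diversity = 1.0 - sum(pi**2 for pi in p)
    top = p[0]
    cum, n_cover = 0.0, 0
    for pi in p:
        cum += pi
        n_cover += 1
        if cum >= coverage:
            break
    return diversity, top, n_cover

a, b, c = topology_summary([0.5, 0.3, 0.15, 0.05])
```

Note that (a) and (b) are tied: a high top-topology frequency forces a low diversity index, consistent with the DS1 row for GeoPhy (0.79 and 0.36).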
> Furthermore, what in R2 First-Fourth informs the diversity of the trees sampled by $Q(\tau)$?
Given that most tree topologies $\tau$ had very low frequencies, we used the bipartition frequencies of species defined for each tree topology edge as a more concise statistic for the distribution of $Q(\tau)$.
In Fig. R2, we present bipartitions ordered by descending frequency as seen in MrBayes. The intermediate values between 0 and 1 in this frequency plot reveal topology diversities, where VBPI-GNN aligns more closely with MrBayes for the DS1 dataset compared to GeoPhy. As GeoPhy tends to take values near zero and one in the slope region, it indicates a need for increased expressiveness of $Q(\tau)$ to better represent intermediate frequencies. This trend was also noticeable for DS4. For DS7, while GeoPhy traced the curve more accurately, fluctuations around this curve highlight potential room for improvement. We intend to incorporate these figures in our updated manuscript.
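As a small illustration of this summary statistic (our own sketch, not the authors' code: each sampled topology is represented by its set of bipartitions, with strings below as stand-in bipartition identifiers):

```python
from collections import Counter

def bipartition_freqs(sampled_trees):
    """Frequency of each bipartition across sampled tree topologies.

    sampled_trees: one set of (hashable) bipartitions per sampled tree.
    """
    counts = Counter(bp for tree in sampled_trees for bp in tree)
    n = len(sampled_trees)
    return {bp: c / n for bp, c in counts.items()}

# Three sampled topologies sharing bipartition "AB|CDE" but disagreeing elsewhere.
freqs = bipartition_freqs([{"AB|CDE", "DE|ABC"},
                           {"AB|CDE", "CE|ABD"},
                           {"AB|CDE", "DE|ABC"}])
```

Frequencies strictly between 0 and 1 (like "DE|ABC" here) are the intermediate values that reveal topology diversity in the plots described above.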
---
Reply to Comment 1.1.2:
Title: Clarifications on Fig. R2 First to Third
Comment: We apologize for our oversight of your questions regarding Fig. R2 First to Third.
> Q2-1: R2 First to third: I don't understand the experiment behind this: are the consensus trees of GeoPhy and MrBayes calculated, and then the ELBO measured under this fixed-tree setting? What does each dot represent in each plot? What does 'dim' refer to? Could you please provide a more elaborate description of the experiments behind this plot so I can better assess how they address my concerns.
> How many unique trees are used to construct the consensus-tree?
We complement the details for Fig. R2 First to Third as follows:
Each dot represents a different experimental run. The term 'dim' stands for the dimension of wrapped normal distributions employed for $Q(z)$. We extended our experiments to include 5 and 6 dimensions in response to a query from reviewer AJKq.
The procedure is outlined below:
1. We sampled 1000 trees from the posterior distribution $Q(\tau)$ of GeoPhy.
2. We then deduced the majority-rule consensus tree from these sampled trees.
3. Subsequently, we calculated the Robinson-Foulds (RF) distances between the GeoPhy consensus tree and the consensus tree derived from MrBayes tree samples.
Note that the RF metrics are randomly jittered within $\pm 0.2$ to prevent the overlapping of dots in the plot.
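For readers unfamiliar with the metric, the (unnormalized) Robinson-Foulds distance used in step 3 counts bipartitions present in one tree but not the other; a minimal sketch with hand-written 5-taxon topologies (our own illustration, not the authors' implementation; toolkits such as DendroPy provide production versions):

```python
def bipartition(side, taxa):
    """Represent an (unordered) bipartition of the taxon set by its two sides."""
    side = frozenset(side)
    return frozenset({side, frozenset(taxa) - side})

taxa = {"A", "B", "C", "D", "E"}
# Two unrooted 5-taxon topologies, each defined by its non-trivial bipartitions:
# tree1 has cherries (A,B) and (D,E); tree2 has cherries (A,C) and (D,E).
tree1 = {bipartition({"A", "B"}, taxa), bipartition({"D", "E"}, taxa)}
tree2 = {bipartition({"A", "C"}, taxa), bipartition({"D", "E"}, taxa)}

# RF distance = size of the symmetric difference of the bipartition sets.
rf = len(tree1 ^ tree2)  # -> 2 (one mismatched bipartition on each side)
```

Identical topologies give distance 0, so small RF distances between consensus trees indicate close agreement of the summarized distributions.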
As the consensus tree is derived from multiple sampled trees, the RF distance represents a summarized metric of alignment of $Q(\tau)$ to MCMC samples. Through Fig. R2 First to Third, we have confirmed that the performance metrics (i.e., MLL estimates) of the GeoPhy model are well aligned with the proximity of $Q(\tau)$ and the MCMC samples in terms of their consensus trees (the modes of the tree topology distribution). | Summary: Authors proposed GeoPhy as a fully differentiable approach for phylogenetic inference, addressing a fundamental problem in phylogenetic inference. In experiments with real benchmark datasets, GeoPhy demonstrated its superior performance compared to other methods when considering all topological candidates. This approach is of interest to the general ML community.
Strengths: Overall, the manuscript is well-written.
Weaknesses: It is highly recommended to include an algorithm block summarizing the GeoPhy method from input to output for clarity. Additionally, there are some details in the experimental section that need clarification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major questions:
1. In Section 3.1, the neighbor-joining algorithm is defined as the projection from the distance matrix to the tree topology. However, other standard algorithms, such as UPGMA, could be alternative approaches. Have you investigated if the choice of projection leads to different performance in the benchmark tests?
2. Appendix C.5 provides a rough estimation of the running time, but it is unclear how the other baseline models included in the benchmark tests perform in terms of time. Furthermore, how does the algorithm scale as the number of tips and sites increase? Have you observed any trends in the time curve with respect to the number of taxa and sites?
3. In Table 2, GeoPhy achieves superior performance in the benchmark tests based on likelihoods. Did the optimized topologies look significantly different across the different methods?
4. How were the per-step Monte Carlo samples (K) determined in the benchmark tests? There is no clear winner based on Table 1.
5. Could you elaborate on the approaches used to calculate the marginal log-likelihood (MLL) for all the listed methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Authors briefly mentioned two limitations of the proposed model: 1) Q(z) can be improved to enhance its expressive power, and 2) the computation cost per update step can be optimized to speed up the overall inference process. These practical concerns can be further addressed in future studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. It is highly recommended to include an algorithm block summarizing the GeoPhy method from input to output for clarity. Additionally, there are some details in the experimental section that need clarification.
Thank you for your suggestion. We will include the algorithm block that summarizes the method workflow in our revised manuscript.
> Q1. In Section 3.1, the neighbor-joining algorithm is defined as the projection from the distance matrix to the tree topology. However, other standard algorithms, such as UPGMA, could be alternative approaches. Have you investigated if the choice of projection leads to different performance in the benchmark tests?
Thank you for your suggestion. Our framework defined in equation (4) certainly allows an alternative link function to obtain a tree topology.
We have included an experiment with UPGMA, which shows a promising optimization trajectory (Fig. R3 Fourth).
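For intuition, a naive version of the UPGMA link function can be written in a few lines: average-linkage agglomeration over a leaf distance matrix (a self-contained sketch with a made-up ultrametric example; real implementations use rolling cluster averages rather than this quadratic recomputation):

```python
import itertools
import numpy as np

def upgma_merges(dist):
    """Naive UPGMA: repeatedly merge the pair of clusters with the smallest
    average pairwise leaf distance; returns the merge order with heights."""
    clusters = [frozenset({i}) for i in range(dist.shape[0])]
    avg = lambda a, b: float(np.mean([dist[i, j] for i in a for j in b]))
    merges = []
    while len(clusters) > 1:
        a, b = min(itertools.combinations(clusters, 2), key=lambda p: avg(*p))
        merges.append((set(a), set(b), avg(a, b)))
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    return merges

# Ultrametric toy distances on 4 taxa: (0,1) closest, then 2, then 3.
D = np.array([[0, 2, 4, 6],
              [2, 0, 4, 6],
              [4, 4, 0, 6],
              [6, 6, 6, 0]], dtype=float)
merges = upgma_merges(D)
```

Like neighbor joining, this map is a deterministic function from a distance matrix to a (rooted, here) tree topology, so it slots directly into the link-function role in equation (4).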
> Q2. Appendix C.5 provides a rough estimation of the running time, but it is unclear how the other baseline models included in the benchmark tests perform in terms of time. Furthermore, how does the algorithm scale as the number of tips and sites increases? Have you observed any trends in the time curve with respect to the number of taxa and sites?
Thank you for your suggestion.
We have analyzed the runtimes of MrBayes for the MCMC and Stepping-Stone (SS) methods, VBPI-GNN, and our methods, in their standard usage as employed in this work, for dataset DS1 (Fig. R3 First).
As MrBayes is written in C and highly optimized, its per-iteration computation time is much shorter than that of current variational methods.
The runtimes of our methods per iteration (number of likelihood evaluations) were comparable to those of VBPI-GNN.
> Q3. In Table 2, GeoPhy achieves superior performance in the benchmark tests based on likelihoods. Did the optimized topologies look significantly different across the different methods?
In our additional experiments, we have confirmed that the discrepancy measure (Robinson-Foulds distance) between the majority-rule consensus trees derived from the topological distribution $Q(\tau)$ and from MCMC (MrBayes) is highly correlated with the marginal log-likelihood (MLL) estimates (Fig. R2 First to Third).
We have not evaluated the topology distributions obtained with methods whose MLL estimates deviated far from the reference values.
Even when the consensus tree of our inference runs exactly matches that of MCMC, it is still difficult to express the distribution of tree topologies faithfully due to the limited expressivity of $Q(z)$.
We exemplify this with the DS1 dataset in Fig. R2 Fourth,
where VBPI-GNN shows bipartition frequencies more aligned with MrBayes than ours.
> Q4. How were the per-step Monte Carlo samples (K) determined in the benchmark tests? There is no clear winner based on Table 1.
Thank you for your question.
We have observed that smaller $K$ leads to relatively faster convergence in terms of the number of iterations.
We included the corresponding experiment in Fig. R3 Third.
> Q5. Could you elaborate on the approaches used to calculate the marginal log-likelihood (MLL) for all the listed methods?
For VBPI-GNN, the importance-weighted ELBO with 1000 iterations was used for the MLL estimates.
For CSMC-based methods, we will include the details of computations in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: I have read through the responses and all of my concerns have been properly addressed. Thus I raised my score to 7. | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time to provide thorough and insightful feedback on our manuscript.
Your constructive comments have greatly enhanced the quality of our work.
In response to the points raised, we have engaged in a detailed discussion on the expressivity, performance, and limitations of our study with Figures R1 to R3 in the attached PDF file.
Pdf: /pdf/bae465276d31f1e587246592ff0112bec45dbd45.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposed a family of implicit distributions over tree topologies which allows support-free variational Bayesian phylogenetic inference. The distribution is constructed based on the neighbor joining algorithm, which maps a distribution over the tip node vectors to the tree topology space. Both Euclidean and hyperbolic spaces are considered for the tip node distributions. As the resulting distribution over tree topologies is intractable, an auxiliary reverse distribution was introduced, which leads to a looser lower bound for optimization. Various gradient estimators were investigated for optimizing the tree topology variational parameters. Experiments on a benchmark of real data problems demonstrated the advantage over other variational approaches (mainly sequential Monte Carlo based) that do not require preselected tree topologies.
Strengths: 1. The paper is written clearly and well organized. The problem of constructing flexible families of distributions over tree topologies is important for the phylogenetic communities and would to of interest to many practitioners.
2. In addition to a careful derivation to the training objective, the authors also investigated different gradient estimators for tree topology variational parameters.
Weaknesses: 1. As admitted by the authors, the proposed variational approximation $Q(z)$ and the reverse distribution $R(z|\tau)$ are simple, which may damage the overall approximation quality of the method.
2. In the derivation of the lower bound, the reverse distribution $R(z|\tau)$ needs to satisfy the constraint: the support of $R(z|\tau)$ should be inside the region $\{z: \tau(z)=\tau\}$.
3. In the experiments, it seems that the performance would be really sensitive to the choice of gradient estimators. I wonder if that is related to the small sample size $K$? The marginal likelihood estimates also tend to have large variance in most cases.
4. It may be hard to assess the performance on the tree topology posterior estimation as the distribution is implicit and defined in a rather subtle way.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In equation 10 and equation 12, the expectation of the last term is actually zero. Any idea why this term is kept there given that other control variates are considered?
2. How many samples are used to compute the marginal likelihood estimates?
3. The current construction of the reverse distribution $R(z|\tau)$ seems not take the constraints $\tau(z)=\tau$ into consideration directly. Would this have a negative impact on performance?
4. I note that only low dimension tip node vectors ($d\leq 4$) are considered in this paper. Would higher dimension embedding be helpful?
5. How about the tree topology approximation given by the implicit distribution? Although it may be hard to estimate the density, one can also report summary statistics from the sampled trees, for example.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes, they do.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and feedback.
> W1. As admitted by the authors, the proposed variational approximation $Q(z)$ and the reverse distribution $R(z | \tau)$ are simple, which may damage the overall approximation quality of the method.
We believe that enhancing the expressiveness of $Q(z)$ and $R(z | \tau)$ is a valuable and non-trivial direction for future work. However, despite the simplicity of our choice, we consider it a significant contribution that we can consistently outperform CSMC-based methods (Table 3).
> W3. In the experiments, it seems that the performance would be really sensitive to the choice of gradient estimators. I wonder if that is related to the small sample size K? The marginal likelihood estimates also tends to have large variance in most cases.
Thank you for your detailed feedback.
While the variance of our methods is still higher than that of gold-standard MCMC runs, we have confirmed that we can achieve better optimization than CSMC-based methods in most cases by using either $K=1$ (LAX), $K=3$ (LOO), or $K=3$ (LOO+LAX) (Table 2). We are interested in whether increasing $K$ could help reduce variance. However, we observed that increasing $K$ requires more iterations to achieve the same accuracy (Fig. R3 Third).
> Q1. In equation 10 and equaiton 12, the expection of the last term is actually zero. Any idea why this term is kept there given that other control variates are considered?
Thank you for your question.
It may be confusing because we refer to $\nabla_\theta Q_\theta(z)$ as the score function.
While the expectation of $\nabla_z \log Q_\theta(z)$ is zero, the term with $\nabla_\theta$ remains.
We will include a note in the footnote for clarification.
> Q2. How many samples are used to compute the marginal likelihood estimates?
We used 1000 MC samples. We will move the description from the Table 1 caption to Section 5.1 (Experimental setup) for more clarity.
> W2. In the derivation of the lower bound, the reverse distribution $R(z | \tau)$ needs to satisfy the constraint: the support of $R(z | \tau)$ should be inside the region $z: \tau(z)=\tau$.
> Q3. The current construction of the reverse distribution $R(z|\tau)$ seems not to take the constraints $\tau(z)=\tau$ into consideration directly. Would this have a negative impact on performance?
Thank you for highlighting this important point.
Initially, we believed that the expressiveness of $R$ is only beneficial when a more complex $Q$ is used.
At least, given that $L[Q] \geq L[Q, R]$, we do not expect to overestimate the quality of $Q$.
However, there might be cases like $L[Q] \geq L[Q'] \geq L[Q', R] \geq L[Q, R]$, so even with a simple $Q$, designing $R$ to meet the conditions could prevent suboptimal transitions like $Q \to Q'$.
We would like to include the discussion in our revised manuscript.
> Q4. I note that only low dimension tip node vectors ($d\leq 4$) are considered in this paper. Would higher dimension embedding be helpful?
Thank you for your question. There is an advantage in using a low dimension as it reduces the number of parameters. To answer the question empirically, we have added experiments for $Q(z)$ with dimensions 5 and 6 (Fig. R2 Left to Right). While the difference between dims 2 and 4 is more pronounced, there appears to be a slight improvement in DS3 and DS7 when the dimension is increased to 5 or 6.
> W4. It may be hard to assess the performance on the tree topology posterior estimation as the distribution is implicit and defined in a rather sutble way.
> Q5. How about the tree topology approximation given by the implicit distribution? Although it may be hard to estimate the density, one can also report summary statistics from the sampled trees, for example.
As we can easily sample from $Q(z)$, sampling from $Q(\tau)$ is straightforward.
Therefore, evaluating the empirical distribution is not an issue.
Each consensus tree used in Fig. R1 and R2 was obtained from 1000 samples drawn from $Q(\tau)$.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks for the reply. The issue with equation 10 and 12 still remains. I think the current form in equation 10 is wrong, the remaining term from the gradient of the entropy is not the last term in equation 10 (which is the score and hence has 0 expectation). As the author admitted, the reverse distribution $R(z|\tau)$ does not take the constraint into consideration, which would lead to potential problems, e.g., the lower bound is not right. The samples from $Q(\tau)$ do not seem to provide good approximation to the exact posterior, this would be a cause of the biased estimate of MLL compared to MrBayes/VBPI-GNN. I will keep the score.
---
Reply to Comment 1.1.1:
Title: Clarifications on review points
Comment: Thank you for your continued feedback and assessments.
> The issue with equation 10 and 12 still remains. I think the current form in equation 10 is wrong, the remaining term from the gradient of the entropy is not the last term in equation 10 (which is the score and hence has 0 expectation)
We acknowledge an oversight in our previous rebuttal wherein we mistakenly wrote $\nabla_\theta Q_\theta(z)$ instead of $\nabla_\theta \ln Q_\theta(z)$.
It's worth emphasizing that both $\nabla_z \ln Q_\theta(z)$ and $\nabla_\theta \ln Q_\theta(z)$ are referred to as the 'score function' in literature, but we meant the latter in our manuscript. Notably, while $E_{Q_\theta(z)}[ \nabla_z \ln Q_\theta(z) ] = 0$, the expectation $E_{Q_\theta(z)}[\nabla_\theta \ln Q_\theta(z) ]$ is not zero in general.
Regarding the transformation of $\mathbb{H}[Q_\theta(z)]$ in Equation 10, we utilized the reparameterization trick as follows:
$\nabla_\theta \mathbb{H}[Q_\theta(z)] = - \nabla_\theta E_{Q_\theta(z)} [ \ln Q_{\theta}(z) ] = - \nabla_\theta E_{p_z(\epsilon_z)}[ \ln Q_{\theta}(h_\theta(\epsilon_z)) ] = - E_{p_z(\epsilon_z)} [ \nabla_{\theta} \ln Q_{\theta}(h_\theta(\epsilon_z)) ]$.
This aligns with the second term of Equation 12.
We will include the transformation above in our revised manuscript for more clarity.
We hope this addresses the concerns raised.
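As an independent sanity check of this identity (our own numeric sketch for a 1-D Gaussian $Q_\sigma(z)$ with $h_\sigma(\epsilon) = \mu + \sigma\epsilon$, not code from the paper), the per-sample reparameterized gradient $\nabla_\sigma \ln Q_\sigma(h_\sigma(\epsilon))$ equals $-1/\sigma$ exactly, so its negated average recovers the entropy gradient $1/\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.7, 1.3
eps = rng.standard_normal(10_000)

# Total derivative of ln Q_sigma(h_sigma(eps)) w.r.t. sigma, with
# h_sigma(eps) = mu + sigma * eps and Q = Normal(mu, sigma^2):
#   partial in sigma at fixed z : -1/sigma + (z - mu)^2 / sigma^3 = (eps^2 - 1) / sigma
#   chain term through z        : (d ln Q / dz) * (dz / dsigma)  = -eps^2 / sigma
grad_per_sample = (eps**2 - 1) / sigma - eps**2 / sigma   # == -1/sigma exactly

# Estimator: grad_sigma H[Q_sigma] = -E_eps[ grad_per_sample ]
mc_grad = -grad_per_sample.mean()
analytic = 1.0 / sigma  # from H = 0.5 * ln(2 * pi * e * sigma^2)
```

For the Gaussian scale parameter, the two terms cancel to a constant per sample, so this estimator happens to have zero variance; in general, the two terms must both be kept, which is the point of the derivation above.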
> As the author admitted, the reverse distribution $R(z | \tau)$ does not take the constraint into consideration, which would lead to potential problems, e.g., the lower bound is not right. The samples from $Q(\tau)$ do not seem to provide good approximation to the exact posterior, this would be a cause of the biased estimate of MLL compared to MrBayes/VBPI-GNN.
We would like to complement the above arguments as follows:
* The optimization target $L[Q, R]$ is still a valid lower bound, satisfying $\ln P(Y) \geq L[Q] \geq L[Q, R]$, irrespective of constraints on $R(z | \tau)$.
* However, the maximization of $L[Q, R]$ with respect to $Q$ might yield a suboptimal $Q$ if $L[Q, R]$ hasn't been fully maximized with respect to $R$.
* Importantly, maximizing $L[Q, R]$ with respect to $R$ will drive $R$ to satisfy the constraint (i.e., all $z$ in the support of $R(z | \tau)$ satisfy $\tau(z) = \tau$). Thus, it is not imperative to modify our framework to explicitly enforce the constraint on $R$ (beyond requiring $R$ to be expressive enough to meet the condition).
* As we partly discussed in the section "Limitations and Future work", the remaining challenge for further improvement on the quality of posterior approximation $Q(\tau)$ is the introduction of more expressive distribution families for $Q$ as well as $R$.
We appreciate the reviewer shedding light on these important points. We will address these points in detail in our revised manuscript.
Quantifying the Cost of Learning in Queueing Systems | Accept (poster) | Summary: In this paper, the authors introduce a new regret metric, called Cost of Learning in Queueing (CLQ), to quantify the rate at which an optimal scheduling policy can be learned to minimize the time average queue lengths. The authors derive a lower bound to CLQ and show that an UCB-based policy comes close to achieving the lower bound. They also extend their policy to multiple queues in a network setting by combining the UCB policy with the celebrated Backpressure policy.
Strengths: 1. The dependence of the regret bound with respect to the slack parameter (epsilon) is optimal.
2. Bounding the CLQ metric in terms of satisficing regret is intriguing.
Weaknesses: 1. The paper [34] considers an unnormalized version of the same metric proposed in this paper. However, the results presented in this paper give a potentially weaker bound (linearly increasing) than the result in [34]. To be more specific, Theorem 2 proves a constant (let’s denote it by $c$) upper bound to the CLQ metric. Plugging this upper bound into Eq (3) yields the following linear bound on the queue length regret:
$\sum_{t=1}^T \mathbb{E}\left[Q(t, \pi) - Q(t, \pi^*) \right]\leq cT, ~ \forall T \geq 1 $
But it is already known from [34, Theorem 2, 3] that there exist simple dynamic policies under which the queue-length regret (i.e., the LHS of the above equation) can be bounded by a constant. Clearly, the CLQ metric fails to capture this strong result and paints an overly pessimistic picture. It is also not clear if the result presented in this paper strictly and quantitatively improves upon [34], even for smaller horizon lengths.
2. The proposed algorithm directly uses UCB, and its analysis does not present any new technical insights.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Please respond to Comment 1 above by arguing why Theorem 2 gives a stronger bound than [34].
2. Since the proposed UCB-based policy directly estimates the mean service rate, it might not work in the non-stationary setting, although the classic Max-weight (or Backpressure) might be able to stabilize the queues. Can the authors shed light on this aspect?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is a theoretical work and does not seem to have any potentially negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison to [34] (Question 1 and Weakness 1).**
The reviewer is right that the analysis in [34] gives a stronger asymptotic queue length bound. However, the setting in [34] that resembles our single-queue setting ([34, Theorem 4]) gives a bound with worse dependence on $\epsilon$. In fact, we show below (at the end of our response) that their bound on $\sum_{t\leq T} (Q(t)-Q^{\star}(t))$ is of order $1/\varepsilon^8$, compared to our $\sum_{t\leq T}\frac{Q(t)-Q^\star(t)}{T}\leq \tilde{O}(K/\epsilon)$. In that regard, our bound is stronger for small $T$, whereas the bound in [34] is stronger for large $T$ (even ignoring the gap dependence of their bound). To illustrate that their algorithm has worse transient performance, we simulated the algorithm from [34, Figure 7] under the setting of Fig. 1 in our paper (note that they propose some heuristics for their simulations, though they do not prove theoretical guarantees for these). The figure in the attached PDF shows that their algorithm has a significantly worse transient behavior despite its optimal asymptotic regret scaling. We view this as additional evidence that one should consider a metric focused on early stages, e.g., CLQ.
**Technical novelty (Weakness 2).**
The reviewer is right that our algorithm for the single-queue setting is just UCB; however, the extension to general queueing networks requires a more careful adaptation. Indeed, for learning in general queueing networks, to the best of our knowledge, we provide the first guarantees of any kind. Moreover, our optimal dependence on $\varepsilon$ involves a novel separation of the horizon in two stages (learning and regenerate), which should be of independent interest for analyses of learning in queueing systems. An additional technical benefit of our analysis is that by applying UCB directly, we avoid the inefficiencies induced by forced exploration (which is a common tool for learning in queueing systems [20,34]). Finally, as noted on page 1771 of [34], although forced exploration allows an "easier way to analyze", UCB is "generally a better method and should be used in practice"; and an "interesting direction" is to analyze how UCB can achieve good performance while interacting with the queue's dynamic. Our work serves as an advancement in this direction.
**Connection to [39] (Question 2).**
The reviewer is right that our UCB-based policies may not work in a non-stationary setting. Most closely connected, we highlight that [39] adapts an exponentially decaying rule to the UCB estimation and gives an any-time queue length bound. However, applying their bound in the stationary setting (which is the focus of our paper) does not give a tight dependence on $1/\varepsilon$. As listed in our conclusion (line 365), extending our approach to the nonstationary setting is an exciting future direction.
**Derivation of the $1/\varepsilon^8$ bound for [34, Theorem 4].**
In the proof of Theorem 4 on page 1770 in [34], the authors show in Eq. (16) and thereafter that
$$
R^{\pi_3}_{(\lambda,\mu)}(T) \leq \sum_{p=1}^{p_0-1}(M_2p^2+\beta_2) + \sum_{p = p_0}^T (M_2p^2+\beta_2)M_0e^{-\chi p}
$$
where $R^{\pi_3}_{(\lambda,\mu)}(T) = \sum_{t \leq T} (Q^{\pi_3}(t) - Q^{\star}(t))$ and $\pi_3$ is their policy in [34, Figure 7]. The constants $M_0,\chi$ are from Lemma 6 and $M_2,\beta_2$ are from Lemma 8. We obtain lower bounds on these constants from the corresponding proofs as follows.
For $M_0, \chi$, the last inequality (after Eq. (28)) in the proof of Lemma 6 on page 1776 requires $M_0e^{-\chi p}$ to be at least $e^{-\frac{1}{2}p\delta^2} + 2(N-1)e^{-2\frac{p}{N}\gamma^2}$. Here $N$ is the number of servers (i.e., $K$ in our paper), $\gamma$ is the minimum service rate suboptimality gap, and $\delta$ is at most the expected decrease in queue lengths by choosing the fastest server, which is thus at most the traffic slackness $\varepsilon$ in our paper.
For $M_2p^2 + \beta_2$, Eq. (34) in the proof of Lemma 8 on page 1778 requires $M_2 \geq 0$ and $\beta_2 \geq 2\sum_{n=0}^{\infty} n^2e^{-c_3 n}$ and $c_3$ is a constant from Lemma 10. Checking the proof of Lemma 10 on page 1780, after Eq. (41), one can see that $c_3$ is at least $2\delta^2$ and $\delta$ is at most the traffic slackness $\varepsilon$ in our paper (see the definition of $\delta$ after Eq. (40)). Therefore, $\beta_2 \geq \sum_{n=0}^{\infty} n^2e^{-\varepsilon^2 n} = O(\frac{1}{\varepsilon^6})$ since $\int_0^{\infty} x^2e^{-\alpha x} dx = \frac{2}{\alpha^3}$ for any $\alpha > 0$.
Putting these constants together, the upper bound in [34, Theorem 4] is at least (for $T \geq \frac{1}{\varepsilon^2}$)
$$
\sum_{p=1}^{p_0-1} (M_2p^2+\beta_2)+\sum_{p=p_0}^T (M_2p^2+\beta_2)M_0e^{-\chi p} \geq O\left(\sum_{p=1}^{T} \frac{e^{-\frac{1}{2}p\varepsilon^2}}{\varepsilon^6}\right) \geq O\left(\frac{1-e^{-\frac{1}{2}T\varepsilon^2}}{\varepsilon^8}\right)\geq O\left(\frac{1}{\varepsilon^8}\right).
$$
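The $1/\varepsilon^6$ order of the series used in the derivation above can be sanity-checked numerically (our sketch): $\sum_{n\geq 0} n^2 e^{-\alpha n} \approx \int_0^\infty x^2 e^{-\alpha x}\,dx = 2/\alpha^3$, so with $\alpha = \varepsilon^2$ the sum behaves like $2/\varepsilon^6$ for small $\varepsilon$.

```python
import math

eps = 0.1
alpha = eps ** 2
# Terms n^2 * exp(-alpha * n) are negligible long before the truncation point.
series = sum(n * n * math.exp(-alpha * n) for n in range(200_000))
approx = 2 / alpha ** 3            # integral approximation = 2 / eps**6

print(series / approx)             # close to 1 for small eps
```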
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarification. It is difficult to review a completely new theoretical result at this stage. Even assuming that the authors' claims are correct, compared to [34], they achieve a $\textbf{polynomial}$ improvement w.r.t. $\epsilon$ $(\epsilon^{-8} \to \epsilon^{-1})$ at the expense of an $\textbf{exponential}$ degradation w.r.t. $T$ ($\log T \to T$), which is hard to justify.
In any case, the authors should do a thorough comparison of their results with [34], in particular pointing out that they prove a $\textbf{linear}$ regret bound (which becomes trivial for a reasonably large horizon) compared to a $\textbf{logarithmic}$ regret bound established in [34].
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. We want to clarify that the result we provide in the rebuttal is not a new result about our work but the further comparison to [34] that the reviewer requested. The reviewer expressed concern that the algorithm of [34] may be superior to our approach. We show theoretically that this is not the case for the transient behavior (which is the focus of our paper) and display it numerically in a very simple example (in the figure of the attached PDF that we would encourage the reviewer to have another look at as it showcases the high transient queue length in [34]).
Finally, given that the reviewer's comment doubts the contribution of our work to the literature of learning in queueing systems, we want to reiterate that:
* The new metric that we propose (CLQ) captures the transient performance of a scheduling algorithm and can be interpreted as an approximation to the maximum increase in average wait time (see the response to Reviewer KN9r). The reviewer's focus on $T\to\infty$ and the resulting queueing regret does not tackle this important consideration.
* We study much more general systems, including queueing networks, and propose algorithms whose CLQ matches the provable lower bound. These are vastly more complex (and realistic) systems than the single-queue setting in [34].
* Our bound has no gap dependence, while [34] has (see the discussion in the derivation).
Of course, we will discuss the comparison in the final version but the reviewer's focus on the single-queue bound for large $T$ ignores that a) our bound is meaningful for the transient setting, b) our approach applies much more generally than the single-queue setting, and c) we make no "gap" assumptions. | Summary: In this paper, the authors propose a new metric to quantify the cost of learning in queueing networks. This notion is required to capture the differences between holding costs in queues and, say costs accumulated in a bandit setting; the latter having a monotonicity property (in expectation). The authors then derive a worst-case lower bound for this metric, and propose UCB based algorithms that come `close' to the lower bound in the order sense.
Strengths: 1. The CLQ metric proposed here is meaningful, and quite appropriate for queueing systems. Prior work uses a standard regret metric, which may not be very appropriate given that queues tend to regenerate.
2. The UCB-based analysis appears quite novel; the authors have to bound the CLQ differently in the initial learning phase and subsequently; this issue does not arise in standard UCB analyses for bandits.
3. The analysis extends to queueing networks.
Weaknesses: No significant weaknesses here. I would have liked to see some more exposition in certain places, including a description of the algorithms, and a comparison between the lower and upper bounds derived here. But I can see that the authors have done the best they could to tell the story within the space constraints.
I do have some (minor) suggestions though.
1. I think it is worth highlighting around Theorem 1 that the bound derived is not instance-dependent, but worst-case in nature.
2. The following sentence on Line 270 on Page 7 "Therefore, in expectation, the queue under .... never empties." was unclear to me.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: This is not a serious issue for me, but I did not get what the authors meant in saying that the guarantee on time-averaged queue lengths translates to one on wait times via Little's Law. This statement seems somewhat vague. Can CLQ be formally related to a suboptimality in average wait times?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Non-instance dependent guarantees (Weakness 1).**
We appreciate and will adopt the reviewer's suggestion to discuss this further around Theorem 1, and not just in the conclusion (as we currently do).
**Confusing sentence on page 7 (Weakness 2).**
We agree that this sentence was confusing; we intended to say that, for any policy, the expected combined service is less than the expected number of arrivals up to this point (which is still confusing); we will try to give a clearer description of this intuition in the camera-ready.
**Connection to suboptimality of wait time (Question 1).**
The relationship can be approximately justified through the following derivation. Let $W^{\pi}(T)$ be the average wait time of all jobs up to period $T$ under a policy $\pi$ and $Y(T)$ be the number of arrivals (to all queues) in the first $T$ periods. Then, we obtain from a sample-path version of Little’s Law $$\frac{\sum_{t \leq T} Q^{\pi}(t)}{T} = W^{\pi}(T) * \frac{Y(T)}{T}\text{ and therefore } \frac{\sum_{t \leq T} Q^{\pi}(t)-Q^{\star}(t)}{T} = \left(W^{\pi}(T)-W^{\star}(T)\right) * \frac{Y(T)}{T}.$$ Since $Y(T)$ concentrates near $\lambda T$, the maximum increase in average wait time is approximately $CLQ/\lambda$. | Summary: This paper studies a problem that involves both learning with queueing.
In the simple setting, there is a single queue served by multiple servers. However, the service rate at each server is unknown and needs to be learned. Intuitively, the combination of the learning policy and the scheduling policy will impact the queue-length dynamics of the system. Prior work mostly focuses on the queue-length performance in the late stage of the system. However, due to the nature of queueing systems, this late-stage performance is relatively invariant to the learning policy, and thus the late-stage metric does not adequately capture the impact of learning. In contrast, the first contribution of this paper is to propose a new metric, called CLQ (Cost of Learning in Queueing). CLQ takes the difference between the time-averaged queue length of a given policy and that of the optimal policy, and then takes the maximum over time. This maximization over the entire time horizon allows CLQ to capture the early-stage dynamics of the system, where the impact of learning is more obvious. Then, the authors provide both lower bounds and achievable bounds for this new metric, which differ only by a logarithmic factor in the number of servers. Finally, the results are also extended to more general multi-queue multi-server systems.
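Given full queue-length trajectories for a policy and for the optimum, the metric described above is a short computation (our illustrative sketch, with a toy example): the maximum over horizons $T$ of the gap in time-averaged queue lengths.

```python
import numpy as np

def clq(q_pi, q_star):
    """Max over horizons T of (1/T) * sum_{t<=T} (Q_pi(t) - Q_star(t))."""
    gap = np.asarray(q_pi, float) - np.asarray(q_star, float)
    horizons = np.arange(1, len(gap) + 1)
    return (np.cumsum(gap) / horizons).max()

# Toy trajectories: a learning policy builds a transient backlog that drains.
q_pi   = [1, 3, 5, 4, 2, 1, 1, 1]
q_star = [1, 1, 1, 1, 1, 1, 1, 1]
print(clq(q_pi, q_star))  # -> 2.25, attained at horizon T = 4
```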
Strengths: 1. The new notion of CLQ captures the impact of learning more accurately than existing studies, by including the early-stage dynamics of the queue. This is a significant contribution.
2. The lower bound and achievable bound are nice and differ only by a logarithmic factor.
3. The results are extended to general multi-queue multi-server systems.
4. The proof idea based on satisficing regret is also very interesting.
Weaknesses: I don't find major weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The work in [39] also provides any-time queueing bounds. I wonder if they can be translated into an achievable result for CLQ. Can the authors comment on how the any-time bound from [39] may compare with the achievable bound in this paper?
2. The lower bound involves a very large $K > 2^{14} \approx 16000$. Can the authors comment on what will happen when $K$ is not this large?
Post-rebuttal phase:
The reviewer wishes to thank the authors for their response, which clarifies the relation to [39].
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I do not find discussions on limitations or potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison to [39] (Question 1).**
The reviewer is right that the guarantee in [39, Theorem 1] can be translated into a CLQ bound. In particular, it implies a cost of learning of $O(\frac{N^4M^4}{\varepsilon^3})$ for a stationary multi-server system with $N$ agents and $M$ workers. This is suboptimal compared with our guarantee (Theorem 4 in the supplementary material), which shows that the cost of learning under \textsc{MW-UCB} is bounded by $O\left(\frac{N^{3.5}M^3}{\varepsilon}\right)$ (see line 123).
**$K$ required in lower bound (Question 2).**
The goal of Theorem 1 is to characterize the statistical complexity of learning in queueing and we did not optimize the constant. The exact lower bound we derive (for any $K$) is $\frac{K}{2^{13}\varepsilon}-\frac{1}{2\varepsilon}-1$ (line 522 in the supplementary material); a more careful analysis may be able to reduce the constant in this bound and give a non-vacuous lower bound for small $K$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I will keep my review score. | Summary: The authors consider online queuing systems in a discrete time setting. They study settings with single class queue and multi-class queues, and they propose to consider a metric CLQ that serves as a conservative measure on how the queue length(s) could grow across every time point in a horizon. The authors propose a natural UCB algorithm, and demonstrate that it has a near optimal CLQ in a single queue setting. The authors also derive similar bounds in a queueing network setting.
Strengths: - The analysis is quite novel. In addition, the definition of the CLQ and its analysis is new to the best of my knowledge. In particular, the consideration of the satisficing regret is quite interesting.
- The authors achieve a nearly tight result in the case of one queue. Overall, the technical results are solid.
- The notion of CLQ seems adequate as an alternate metric for online queueing systems, but I still have some questions (see Question 2 under ``Questions for Authors'')
Weaknesses: 1. The discussion on queueing networks could benefit from more details. While the authors stated that they would provide additional examples of settings modeled by their queueing-network model in Appendix A, Appendix A does not seem to contain much detail. To overcome this weakness, the authors should provide a detailed account of how
- the model of the bipartite queueing system in [11] (Line 187), and
- the model of the multi-server system in [39] (Line 190)
are modeled by the queueing-network formulation used by the authors. In particular, it will be crucial for the authors to highlight the individual elements in the instance tuple (Line 170) in these models. While there could be quite a fair bit of detail, I believe that the authors could include it in Appendix A so that useful details are provided without violating the page limit.
2. There is no simulation to showcase the results.
3. The notion of CLQ seems not to tell us the long-term behavior of a policy, since it takes a worst case over all horizon lengths $1, 2, \ldots$ (see Question 2)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can I confirm if $\{B_n\}_{n\in{\cal N}}$ is a partition on ${\cal K}$, since the authors say that each server in $B_n$ belongs to one queue?
- While I find the CLQ notion interesting, I still find that it might not tell us the whole story of how the queue length fluctuates under a policy. Let's stick with the single-queue model for discussion's sake. For example, let's say we have a policy $\pi$ that achieves a CLQ of $\text{constant} \times K/\epsilon$, which is optimal within a constant factor. Knowing that $\mathbf{E}[Q(t, \pi^*)] = O(1/\epsilon)$ for large $t$ (I think you can show this by a logic similar to how we derive the average queue length of a stable M/M/1), achieving the optimal CLQ stated above actually does not prevent us from having $\mathbf{E}[Q(t, \pi)] = O(K/\epsilon)$ for all sufficiently large $t$. In this way, achieving an optimal CLQ does not necessarily mean that we converge to the optimal queue length as $t$ grows.
A priori, this might not be a critical issue since the authors aspire to investigate a short time-horizon regime and not only a long one, and IMHO I regard CLQ as a conservative measure that tells us how large the queue length could be over all possible horizon lengths. But I am wondering if the authors could also consider a worst case over the horizon lengths in $\{\tau, \tau+1, \ldots \}$ instead of $\{1, 2, \ldots \}$, for an arbitrary $\tau$? More precisely, does the authors' analysis inform us of a bound on (3) with the range of the maximum $\{1, 2, \ldots \}$ replaced with $\{\tau, \tau+1, \ldots \}$ for an arbitrary $\tau$?
- What is the difference between the two definitions (5), (6)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no potential societal impact to my best knowledge. This is a theoretical paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Long-term optimality (Question 2/Weakness 3).**
We thank the reviewer for pointing out that we may get a better bound for a larger $\tau$. Indeed, our analysis directly extends to show that UCB converges to the optimal $O(1/\varepsilon)$ scaling (for the single-queue setting). By combining Lemma 4.2 and 4.3 we can obtain a bound of $\tilde{O}(1/\varepsilon + K^2/(T \varepsilon^4))$ for the time-averaged queue length in the initial $T$ periods. As a result, if $\tau \geq K^2/\varepsilon^4$, the maximum of time-averaged queue lengths for $T \geq \tau$ will be bounded by $O(1/\varepsilon)$ with no dependence on $K$.
**Clarification comments (Question 1, Question 3, and Weakness 2).**
The reviewer is right that $\\{\mathcal{B}_n\\}$ is a partition of $\mathcal{K}$. The difference between (5) and (6) is that (5) is tailored to the multi-queue multi-server system, i.e., the DM knows that there are no queue transitions; this is a special case of the queueing networks whose CLQ we define in (6). The reviewer is right that we do not have a subsection for simulation results, but we highlight that Figure 1 includes some numerical evidence for the performance of our algorithm (see also the PDF uploaded with an updated figure).
**More details on the queueing models (Weakness 1).** In a bipartite queueing system [11], there are $N$ agents and $M$ workers. In period $t$, a new job arrives to each agent $n$’s queue with probability $\lambda_n$. The DM selects a matching $\sigma_{n,m}$ between agents and workers such that $\sum_{m’\in[M]} \sigma_{n,m’}\leq 1,\sum_{n’\in [N]} \sigma_{n’,m} \leq 1, \forall n \in [N], m\in [M]$. If $\sigma_{n,m}=1$, then the first job in agent $n$’s queue (if any) is cleared with probability $\mu_{n,m}$. Otherwise, the job stays in the queue. All arrivals and services are independent across queues, servers and periods. To translate this model into our queueing network formulation (lines 168-204), let $\mathcal{N} = [N], \mathcal{K} = [N] \times [M] = \\{(n,m), n\in [N],m\in [M]\\}$. $\Lambda$ corresponds to $N$ independent Bernoulli random variables. For each $k=(n,m) \in \mathcal{K}$, we have $\mu_k = \mu_{n,m}$. The set $\mathcal{A} = \\{0,1\\}^{\mathcal{N}}$.
$\Sigma$ is the set of subsets $\boldsymbol{\sigma}$ of $\mathcal{K}$, such that $\sum_{k=(n,m’)} \sigma_k \leq 1, \sum_{k=(n’,m)} \sigma_k \leq 1, \forall n\in[N],m\in[M]$. $\mathcal{B}_n = \\{k=(n,m), m \in [M]\\}$. $\mathcal{D}_n = \emptyset$ for all $n$, and $P$ only includes transitions to the virtual queue.
The multi-server system [39] is similarly defined but instead of selecting a matching between agents and workers, the DM can match an agent with multiple workers, as long as there are sufficiently many jobs assigned to different workers. Therefore, the only change compared with the bipartite queueing system would be that $\Sigma$ consists of all $\boldsymbol{\sigma}$ such that $\sum_{k=(n’,m)} \sigma_k \leq 1, \forall m \in [M]$. | Rebuttal 1:
Rebuttal: We conducted a new simulation with the algorithm from [34, Figure 7] under the same setting of Figure 1 in our paper. The figure in the attached PDF shows that their algorithm has a significantly worse transient behavior despite its optimal asymptotic regret scaling.
Pdf: /pdf/a8e12f787a89256cabedd7ea8813e963f4e53603.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards a fuller understanding of neurons with Clustered Compositional Explanations | Accept (poster) | Summary: This paper is a niche extension of seminal work on network dissection. The authors present a generalization, called Clustered Compositional Explanations, that combines Compositional Explanations with clustering and a novel search heuristic to approximate a broader spectrum of the neuron behavior.
Strengths: Easy to read
Weaknesses: cf. Limitations
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: This is mainly a search-based approach on a set of clusters which group neurons.
This is very incremental work. Research angle of the work is limited to the application of search on clusters grouping neurons
No novelty
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We understand that our work did not come across well to the reviewer, but perhaps that is due to a misunderstanding of the paper. First of all, the reviewer mentions research on a “set of clusters which group neurons.” However, we never group neurons in our paper. Instead, clusters group the activations of a single neuron over the full dataset, and this is a big and crucial difference whose omission may indicate a superficial reading. The topic of the paper is the analysis of the full spectrum of a neuron's behavior, and thus it is within the scope of the venue. Moreover, the original technique for computing compositional explanations (the paper that we generalize) is search-based, and it was chosen as an oral presentation at this same conference (NeurIPS). Therefore, we think that the usage of search-based techniques is not a limitation for the current venue; rather, it is a trend in high-quality work.
We are grateful for your comments and respect your concerns about the novelty. However, to our knowledge, this is the first work to cluster and analyze different ranges of activations in terms of compositional explanations (or dissection). Further, it is the first work that proposes heuristics to speed up the computation of compositional explanations. | Summary: The authors propose a novel XAI method called Clustered Compositional Explanations (CCE) that aims to describe the function that a group of neurons in a neural network performs. The method is built on top of CoEx (Mu and Andreas 2020) and NetDissect (Bau et al. 2017), with the novelty being its generalization to multiple activation ranges instead of high-activation thresholds only. The objective for CCE aims to maximize the IoU score between the activations of an input and a neuron within the range of minimum and maximum activation thresholds across all clusters, over all sets of logical connections between the annotated sets of concepts in the dataset. The authors solve the optimization problem by presenting a heuristic algorithm, MMESH. Results from the authors indicate that the explanations obtained from the proposed method do better than NetDissect and CoEx on various metrics that include accuracy and coverage among others.
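The IoU objective summarized above can be illustrated with binary masks (our sketch, with randomly generated masks standing in for real annotations): the neuron's activation mask within a cluster's range is scored against a logical formula over concept masks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 spatial maps (illustrative stand-ins for real activations/annotations).
act = rng.random((8, 8))
neuron_mask = (act > 0.4) & (act <= 0.9)   # activations inside one cluster's range
sky = rng.random((8, 8)) < 0.5             # concept annotation masks
blue = rng.random((8, 8)) < 0.5

def iou(m, c):
    union = np.logical_or(m, c).sum()
    return np.logical_and(m, c).sum() / union if union else 0.0

# Score a compositional label such as "sky AND NOT blue".
score = iou(neuron_mask, sky & ~blue)
print(score)  # a value in [0, 1]; the search maximizes this over formulas
```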
Strengths: The connection between Clustered Compositional Explanations and existing neuronal explainability methods CoEx and NetDissect is explained very well. The proposed method generalizes both these methods, and the reviewer agrees with the authors that explaining groups of neurons over larger activation ranges could lead to broader explanations. This is confirmed in the results, as the proposed CCE method outperforms CoEx and NetDissect on various metrics, but especially so on coverage related metrics.
Weaknesses: 1. My biggest concern is with the MMESH algorithm: while it is admissible as shown in the appendix, the algorithm is tailored to the CCE method and not easy to follow. Unless the authors plan to release their code (not included in the submission), I believe this will greatly limit the utility of their approach, since there is a higher barrier to implementation for practitioners versus CoEx.
2. I would like to see results across a broader range of models and datasets in order to ensure that the proposed method does not overfit to explaining the neurons of a particular model, dataset, and training procedure. I think this is reasonable since MMESH takes less than 2 minutes per unit (L219). All results have been reported on CNNs (AlexNet, ResNet, DenseNet) which I presume are trained with supervised learning.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Could the authors please explain why having more than one cluster is necessary, since in Table 2 the first cluster has perfect dataset and sample coverage and outperforms the average across clusters on all desiderata; similarly for Tables 1 and 3 in the appendix.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have both mentioned potential limitations and highlighted avenues for future work. See also Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of the important research questions addressed by our paper and our presentation. We hereby clarify and answer point by point both the questions **(Q)** and highlighted weaknesses **(W)**.
**W1)** We can assure the reviewer that, as written in the checklist, the code will be released upon acceptance. We strongly believe that releasing the code is a fundamental step for research in Deep Learning. Regarding the MMESH heuristic, it is tailored to the Compositional Explanation algorithm, not to the Clustered variant. It can be used (and it is used in our code) to also speed up the vanilla CoEx algorithm. An ablated version of it (Areas Heuristic) can also be used to speed up Network Dissection, as noted in Appendix B.2.
**W2)** As written in Section 4.1, we follow the same setup as Mu and Andreas [24]. Therefore, all the models are trained on the Places365 dataset, and we use the publicly available models used by both Compositional Explanations and Network Dissection. They are the standard models commonly used in the literature on the topic (see Section 2). This procedure has been motivated in the literature by the fact that the concepts annotated in the ADE20k dataset are connected to the Places365 dataset.
To address the concern of the reviewer, we added new experiments in the file attached to the rebuttal, where we show (Table 2 and Table 3) that the results hold even when models are trained on ImageNet, despite the fact that the concepts are weakly connected to the trained model. In this case, we tested ResNet and VGG16 (which is an additional model to the pool of already tested ones). We can observe that the results are similar to the ones obtained before. Moreover, in Appendix C2, we tested the method using a different concept dataset. Thus, given the new and old results, the method does not depend on any particular training procedure, architecture, or concept dataset.
**Q1)** The short answer is that all the clusters have an impact on the decision process (Table 1 of the pdf attached to the rebuttal) and that the goal of the paper is to provide a broader view of the neuron behavior by extracting as many concepts recognized by the neuron as possible.
In more detail, there are several reasons to consider multiple clusters instead of a single one. Note that the long-term goal is to understand exactly which (and all) concepts the neuron recognizes, and that a feature map (i.e., the output of a neuron in our case) is generally composed of multiple activations associated with multiple clusters.
If we consider a single cluster that includes all the possible activations, then the algorithm will assign to that single cluster the n concepts (in our case n=3) that overlap the most. Since a single cluster covering all the activations means that no mask is applied to the activations, the algorithm will simply select the concepts whose annotations are the largest (i.e., it is a degenerate case), and thus the explanations do not depend on the given model/neuron.
Now, let us consider the case where we compute multiple clusters and consider as an explanation only the one associated with the cluster with the best scores (i.e., usually Cluster 1, as you noted). In this case, we are providing as an explanation the concepts recognized by the lowest activations (e.g., person). Therefore, if the neuron recognizes other concepts (e.g., ice cream) using a different activation band (e.g., the highest ones, as in NetDissect), we are not including them in our explanation, since we only selected the concepts of Cluster 1. This is in contrast with the goal of our paper, which is to extract a broader view of the concepts the neuron is able to recognize with respect to CoEx and NetDissect, which can be seen as methods that use a single cluster. In an ideal scenario, as written in the conclusion, we would like to obtain a clustering algorithm that assigns a cluster to each set of semantically connected concepts recognized by the neuron, thus obtaining complete explanations. This paper represents a step in this direction.
Finally, in Table 1 of the pdf attached to the rebuttal, the reviewer can observe that almost all the clusters have a similar impact on the decision process, and thus considering only one cluster ignores part of the decision process.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for taking the time to write a thorough rebuttal, as well as for adding the experiments related to testing explainability on ImageNet. I think it makes the work more comprehensive, and the authors' release of the CoEx code should make it easier for others to use their method as well. I have therefore revised my rating from a Borderline Accept to a Weak Accept. | Summary: This paper extends the ideas of Mu (2020) to examine a more powerful class of compositional explanations of neurons, by adding the goal of explaining other ranges of neuron activations, unlike previous work that had restricted analysis to the top ranges only. Like the previous work by Mu, the paper searches for compositions of human-understandable concepts that explain a neuron’s behavior, but the proposed method begins by first subdividing neuron activations into ranges. Adding the ability to explain lower ranges results in much better coverage: it allows the authors to create explanations of a larger portion of neurons’ behavior, and as a result this paper achieves much higher coverage metrics, explaining more neurons with high matching scores, compared to previous methods.
Strengths: The main strength of this paper is the way that it brings a systematic interpretability analysis to neurons and ranges of neuron activations that have not previously been systematically analyzed.
The paper finds that, at lower activation ranges, consistent interpretations can be found for most neurons. On the other hand, the paper finds that most of these interpretations are generic concepts like sets of colors, and the paper hypothesizes that these observed labels are “default labels” that describe inputs when neurons are in a random state. The paper applies their explanations to neurons in a random untrained network to validate this hypothesis; that is a good baseline-setting experiment.
The paper also finds that as neuron activations rise to higher levels, the explanations of their behavior become progressively more specific. Unsurprisingly, when analyzing the entire activation range of neurons, the paper finds that more individual neurons activate on several unrelated concepts over their range, observing a higher level of such polysemanticity than previous works; but interestingly, the paper finds a portion (15%) of neurons that have a consistent semantics over their range.
Weaknesses: On weaknesses: beyond the clever uninitialized-network experiment, the paper does not conduct experiments that would triangulate the proposal that the intermediate-activation states of neurons might be meaningful. For example, it is natural to ask: if there are ranges of concepts that seem to match middle-range activations, is the network actually "looking" for those midrange concepts? For example, will forcing those neurons to those middle ranges cause a network to change its predictions towards classes that correspond to those explained mid-range concepts?
The paper makes a good contribution by increasing the observational breadth of neuron interpretations to include middle-range activations, but it does not directly defend the idea that such midrange activations will be useful to understand.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main question is the one posed above. How can we know that the concepts matched in middle ranges of activations are meaningful to map out?
Specifically: the paper observes that the neurons sometimes match different concepts in the middle ranges than they do at extreme ranges. Do the mid-range matched concepts correspond to meaningful decisions by the network? For example, would forcing neurons to those middle ranges cause the network to behave as if those middle-range concepts are detected? Does it cause the network to behave as if different concepts were detected compared to those same neurons set to the highest ranges or lowest ranges?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors note several of the potential limitations of their methods and address some of them in the appendix.
No immediate negative societal impacts are anticipated from this line of work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the meaningful suggestions. Following the recommendation, we ran an experiment to better consider middle activations, and we feel that the addition of this experiment to the appendix makes the paper stronger.
In particular, we tested how many times the network changes its prediction when we mask the activations covered by each cluster or covered by the CoEx thresholds (see Table 1 in the pdf attached to the rebuttal). Intuitively, if an activation band is not used in the decision process, then it should never change the prediction if it is masked out.
Conversely, we can observe that the change in prediction is similar to that of the CoEx ranges in almost all the clusters but Cluster 1, which often contains default rules and unspecialized activations, as described in the paper. This means that the full spectrum of activations has an impact on the decision process, as we hypothesized at the beginning of our paper. We prefer this experiment over imposing a middle range over all the activations, since the ranges are usually tailored to specific positions in the input (as shown in Figure 4 in the main paper), and it is difficult to impose a realistic (in-distribution) range over different positions.
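A rough sketch of the kind of band-masking test described here, on a toy linear model (names and setup are illustrative assumptions, not the authors' implementation, which masks convolutional activations of a trained network):

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(feats, W):
    """Toy linear classifier head on top of a feature vector."""
    return np.argmax(W @ feats)

def masked_change_rate(feats_batch, W, tau1, tau2):
    """Fraction of inputs whose prediction changes when the features
    falling in the half-open activation band [tau1, tau2) are zeroed."""
    changes = 0
    for feats in feats_batch:
        masked = np.where((feats >= tau1) & (feats < tau2), 0.0, feats)
        changes += predict(feats, W) != predict(masked, W)
    return changes / len(feats_batch)

W = rng.normal(size=(10, 64))            # 10 classes, 64 features
feats_batch = rng.normal(size=(100, 64)) # toy "activations" for 100 inputs
print(masked_change_rate(feats_batch, W, 1.0, np.inf))
```

Intuitively, a band that plays no role in the decision process should yield a change rate near zero when masked out, which is the comparison reported per cluster in the rebuttal's Table 1.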
---
Rebuttal Comment 1.1:
Comment: Thank you for the added measurements of prediction changes across ranges. | Summary: This paper focuses on a problem with Network Dissection and Compositional Explanation methods: these two methods explain the concept encoded by a neuron (or, more precisely, a convolutional filter) by only considering highly activated regions in the feature map. To address the problem, this paper proposes to divide the activation values into different ranges (clusters) and generate an explanation for each range of activation value. This paper also proposes a heuristic to accelerate the search for the optimal logical combinations of concepts in the Compositional Explanation method.
Strengths: 1. This paper focuses on an important issue, i.e., previous concept-based explanation methods, such as Network Dissection or Compositional Explanation, neglect feature map regions with low activation values.
2. The authors theoretically prove that the proposed heuristic is admissible, which guarantees the optimality of the solution found using the heuristic.
Weaknesses: 1. The contributions are incremental and miscellaneous. From my view, the contribution of this paper is mainly two-fold. The first contribution is that the paper considers different ranges of activation values to address the problem with previous explanations (Network Dissection and Compositional Explanation) that they only analyze the feature map regions with extremely high activation values. However, the proposed method is only a simple extension of previous methods, and the motivation for using clustering instead of manually dividing the activation values into different ranges is not well explained. The second contribution is to propose a heuristic search method to accelerate the Compositional Explanation method. However, the second contribution is not inherently correlated with the first one, and I feel the paper is separated into two uncorrelated parts.
2. Many of the notations are inconsistent or confusing. For example:
(a) In Line 106, the intersection size is defined as $IMS(x,L)$. But in Eq. (5) to Eq. (7), it is denoted as $IMS_{[\tau\_1, \tau\_2]}(x,L)$. The meaning of the subscripts needs to be clarified.
(b) In Eq. (1), $\mathcal{L}^n$ should be $L$, in order to be consistent with Eq. (2).
(c) In Eq. (5)-(7), there is a notation $L_L$. I’m confused about what the two L mean respectively. If they refer to different meanings, please use different letters.
(d) In Eq. (5)-(7), there is no explanation for the subscript $R$ in the notation $L_R$.
(e) In many equations, such as Eq. (2), Eq. (11), Eq. (13), Eq. (14), an equal sign is missing.
3. The metric Activation Coverage seems redundant. Given the metric Intersection Over Union and the metric Detection Accuracy, we can actually derive the metric Activation Coverage by using the inclusion-exclusion principle $|A\cup B|=|A|+|B|-|A\cap B|$. Therefore, I suspect this metric is redundant.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In Table 2, when computing the desiderata qualities (or metric scores) of the proposed method, the authors simply average the scores from the 5 clusters. However, since different clusters correspond to different ranges of activation values, does it make more sense to assign different weights to the scores of these clusters? For example, it is natural to assign higher weights to clusters with higher activation values. Nevertheless, the authors are also encouraged to explain why doing simple averaging is a reasonable choice.
2. As noted by the authors in Line 259 to Line 276, the concept labels obtained from Cluster 0 and Cluster 1 usually represent the “default labels” to which the algorithm converges when the activations are random. This implies that the labels obtained from Cluster 0 and Cluster 1 actually cannot explain what the model has learned from the data. In this way, the metric scores of Cluster 0 and Cluster 1 are meaningless and should not be used to compute the final score of the proposed method.
3. Why do the authors set the number of clusters to 5? If there are more or fewer clusters, will the analysis and conclusion in Sections 4.3 and 4.4 change? I suggest the authors conduct ablation studies regarding the number of clusters and clearly explain how it will affect the empirical results.
4. In Table 3, why do the results with ReLU greatly differ from results without ReLU? Could the authors give some explanations or discussions on this issue?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the authors have discussed the limitations and broader impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of the important research questions addressed by our paper. We hereby clarify and answer point by point both the questions **(Q)** and highlighted weaknesses **(W)**.
**W1)** While we do agree that our paper is an extension of previous work, we stress the fact that every generalization paper is, by definition, an extension; the Compositional Explanation paper is an extension of NetDissect too. We believe that Deep Learning and XAI need work that generalizes previous claims to broader contexts.
Regarding the choice of clustering over manually splitting the activation space, we will add a discussion about it in the appendix. Briefly, the choice is motivated by wanting clusters that aggregate activations associated with a common semantic. With a manual split (e.g., using percentiles), it is more likely that activations associated with the same concept(s) are separated into different ranges. Using a clustering algorithm mitigates this problem.
As noted in the conclusion section, this opens a promising future research direction: the development of a clustering algorithm tailored to the task to solve the problem completely.
Finally, regarding the two contributions, note that the MMESH heuristic is a practical pre-requisite for the clustered compositional explanation in order to make their computation feasible in a reasonable time. While it could be possible to propose MMESH without Clustered Compositional Explanations, the inverse does not hold, and thus, in our opinion, the two contributions are chained and linked.
**W2)** Thank you for pointing out some inconsistencies in the notation. We think these inconsistencies are easy to fix. We will follow your suggestions for points a), b), and e). Regarding c) and d), $L_L$ and $L_R$ are the formula's left side and right side connected by the “op” operator, and we use L since both sides are labels. We will fix them by better specifying the subscript notation in the camera-ready version, and we are considering replacing L and R with left and right arrow symbols.
**W3)** It is true that the denominators of the three metrics are connected, and thus there is a dependency between them. However, the metrics refer to and express different qualities of the compositional explanations. Moreover, it is not possible to obtain Activation Coverage from the mere scores of Detection Accuracy and IoU; one needs the underlying quantities used to compute them. For these reasons, we keep Activation Coverage as a separate metric.
**Q1)** As highlighted in the paper, our opinion is that the best way to compare different methods is to look at the scores of each cluster separately since one or two big clusters can influence the average scores.
Regarding the weights, finding the best way to assign them is not trivial, and it is probably worth its own proper investigation. Each of the basic weighting mechanisms (e.g., based on the size of the cluster or on its impact) has its own caveat. For example, assigning high scores to high activations is debatable given the findings of our paper: Table 1 in the rebuttal file shows that middle activations have the same impact on the decision process as high activations. Moreover, while high activations are rare, low activations are common, and one could argue that an explanation should capture the most common behavior. We thank the reviewer for the interesting idea, which we will add as a future direction.
**Q2)** As written in the previous answer, our opinion is that the best way is looking at the scores of each cluster separately instead of trying to remove terms from the average. By analyzing multiple metrics and multiple clusters, one can obtain a deeper understanding of the neurons' behavior and of the quality of the returned explanations, instead of analyzing the average of one metric. Regarding the possibility of ignoring lower clusters, there are cases (e.g., the last layer of DenseNet, as shown in the paper) where Cluster 0 and Cluster 1 include specialized activations. Moreover, even when Cluster 0 and Cluster 1 are unspecialized on average, there are units that could have specialized lower activations, and thus they should be taken into account in the average scores. Therefore, we think that removing entire clusters from the average is not the right approach.
**Q3)** We performed this study in Appendix D. We observed a marginal loss in quality when increasing the number of clusters. Moreover, we found that several labels are repeated over the clusters, and less than ~30% of the labels are novel with respect to the usage of fewer clusters. Since there is no gain in increasing the number of clusters and it is more difficult to evaluate a greater number of explanations, we fixed the number of clusters to 5. However, as previously written, we think that a promising direction for the future is to develop a clustering algorithm tailored to the task, which could find the optimal clusters (in terms of number and semantics).
**Q4)** We think that the behavior is not an issue but just a different way in which different layers use the activations and parse the concepts. Actually, despite the differences, we can find similarities in the behavior of different layers when activations are close to 0. In ReLU layers, activations close to zero are stored in Cluster 1, and they are unspecialized 93% of the time. This percentage becomes smaller when we go up towards the higher clusters. We can observe similar behavior in the case of the layer without ReLU reported in Table 3 of the main paper. Here, since the activations can assume negative values, the activations close to zero are stored in the middle clusters. Indeed, we can observe that Cluster 3 includes unspecialized activations 95% of the time. Again, when we move far away from zero, the percentage starts to decrease, as in the ReLU layers. We will add this discussion to the final paper.
---
Rebuttal Comment 1.1:
Comment: Since my major concerns are addressed by the authors' rebuttal, I would like to raise my rating to 6. | Rebuttal 1:
Rebuttal: Dear reviewers,
we report the additional experiments requested by “Reviewer 2vbF” and "Reviewer CMC6” in the file attached to this global comment. We validated the importance of the middle activation (**Table 1**) and tested our algorithm on models (VGG16 and ResNet18) trained on a different dataset and using a different training procedure (ImageNet) (**Table 2** and **Table 3**).
The new results confirm the broad scope of our findings and the importance of including all the activations in the explanation process. We thank all of you for your suggestion since we feel that the requested fixes, the discussion, and these new results strengthen the paper.
Pdf: /pdf/f1babf01103e30a0fa943fbb942838e5601e8694.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a generalization of compositional explanations, called Clustered Compositional Explanations, which combines compositional explanations with clustering and a search heuristic, the proposed Min-Max Extension per Sample Heuristic (MMESH), to approximate a broader spectrum of the neuron behavior. This paper also analyzes phenomena connected to the neuron's activations, such as the unspecialization of the lowest activations in ReLU networks and the progressive specialization.
Strengths: 1. This paper well delivers its contribution on the generalization of CoEx based on a heuristic and a wider spectrum of activations by clustering them.
2. The experiments are thoroughly conducted with a detailed analysis of the proposed MMESH. The paper also addresses limitations with future directions for the research.
Weaknesses: 1. It would be nice to clearly show the differences from Mu and Andreas [24].
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the weakness section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our paper and our contribution.
Regarding the difference between Mu and Andreas [24], given the additional page available in the camera-ready version, we plan to move Appendix A into the main paper to address the reviewer's concern and to better highlight mathematically the differences between our approach and theirs. The differences lie in the algorithm (they use an exhaustive search while we propose a heuristic search), in the usage of clustering to compute the thresholds used to mask the activation maps instead of a fixed ad-hoc threshold, and in the consideration of multiple intervals of activations, and thus of the full spectrum of the neuron's activations.
Indeed, [24] uses a single threshold computed as the 0.005 quantile of the activations for each neuron and considers only activations above this threshold. Conversely, we compute multiple thresholds $\tau_1, \tau_2, \ldots, \tau_n$ using clustering, and consider multiple intervals for each neuron ($[\tau_1,\tau_2], [\tau_2,\tau_3], \ldots, [\tau_{n-1},\tau_n]$). In this way, we can analyze the full spectrum of activations for each neuron. Hopefully, adding Appendix A and this discussion to the paper will address the concern of the reviewer. | null | null | null | null | null | null
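A minimal sketch of how such clustering-based thresholds could be derived is given below (illustrative only: this uses a simple 1-D k-means with midpoint boundaries, which may differ from the clustering actually used by the method):

```python
import numpy as np

def cluster_thresholds(acts, k=5, iters=100):
    """Derive k contiguous activation intervals from 1-D k-means:
    interval boundaries are the midpoints between the sorted centroids."""
    a = np.asarray(acts).ravel()
    centers = np.quantile(a, np.linspace(0.05, 0.95, k))  # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(a[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = a[labels == j].mean()
        centers = np.sort(centers)
    mids = (centers[:-1] + centers[1:]) / 2
    taus = np.concatenate(([a.min()], mids, [a.max()]))
    return [(taus[i], taus[i + 1]) for i in range(k)]

acts = np.random.default_rng(2).normal(size=10_000)  # toy pre-activation values
for lo, hi in cluster_thresholds(acts):
    print(f"[{lo:+.3f}, {hi:+.3f}]")
```

Each returned interval $[\tau_i, \tau_{i+1}]$ would then play the role of one activation band analyzed per neuron.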
Recovering Simultaneously Structured Data via Non-Convex Iteratively Reweighted Least Squares | Accept (poster) | Summary: This paper proposes an IRLS method for recovering data with multiple, heterogeneous low-dimensional structures from linear observations. It combines non-convex surrogates for row-sparsity and rank to identify simultaneously row-sparse and low-rank matrices from limited measurements. Theoretical results are provided that show locally quadratic convergence. The experiments demonstrate favorable empirical convergence and show its efficacy in handling challenging data recovery scenarios.
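To make the IRLS mechanics concrete, here is a minimal toy sketch of such an iteration for the row-sparsity surrogate alone (an illustrative simplification, not the paper's exact scheme, which additionally couples in a low-rank weight operator and its own smoothing updates):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, s, m = 20, 5, 3, 60            # rows, cols, row-sparsity, measurements

# Ground truth: a row-sparse matrix; Gaussian measurements of vec(X).
X_true = np.zeros((n1, n2))
X_true[:s] = rng.normal(size=(s, n2))
Phi = rng.normal(size=(m, n1 * n2)) / np.sqrt(m)
y = Phi @ X_true.ravel()

# IRLS for the smoothed log surrogate sum_i log(||X_{i,:}||_2^2 + delta^2):
# each iteration solves min_x x^T W x subject to Phi x = y in closed form.
X, delta = np.zeros((n1, n2)), 1.0
for _ in range(30):
    w = 1.0 / (np.sum(X**2, axis=1) + delta**2)        # per-row weights
    Winv = np.repeat(1.0 / w, n2)                      # vec() is row-major
    G = Phi * Winv                                     # Phi @ diag(W^{-1})
    x = Winv * (Phi.T @ np.linalg.solve(G @ Phi.T, y)) # W^{-1} Phi^T (Phi W^{-1} Phi^T)^{-1} y
    X = x.reshape(n1, n2)
    # shrink the smoothing towards the (s+1)-th largest row norm,
    # with a floor to keep the linear system well conditioned
    delta = max(1e-3, min(delta, np.sort(np.linalg.norm(X, axis=1))[-(s + 1)]))

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print(f"relative error: {rel_err:.2e}")
```

The combined method analyzed in the paper replaces the diagonal row-weight matrix with a weight operator built from both the row-sparsity and the rank surrogates.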
Strengths: The paper is well-written overall and the contributions are clear. The challenge of the combination of structures is made apparent. The related work discussion is thorough. The IRLS is a popular approach and is practical even for large-scale systems. Experiments suggest the method is also robust to the choice of parameters r and s.
Weaknesses: Although it is also true of other results in this area, only local convergence is guaranteed and practically it may be challenging to guarantee an initialization within the required radius. The results would be strengthened if other matrices besides Gaussian were shown to satisfy this RIP, and if others were even just used empirically and shown to still offer convergence.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: It would be good to remind the reader what \cal{F}_ functions are in (13).
Are there other matrices besides Gaussian that are known to satisfy the RIP given in Definition 2.3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss the non-optimality of the bounds in Theorem 2.4, arguing that experimentally the convergence radius appears to be much larger. The authors also discuss future work about generalizing to other combined structures, which the reviewer agrees is interesting future work but certainly not a limitation to the results in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive and very detailed feedback to our submission. A point-by-point response to your comments follows below:
> Although it is also true of other results in this area, only local convergence is guaranteed and practically it may be challenging to guarantee an initialization within the required radius. The results would be strengthened if other matrices besides Gaussian were shown to satisfy this RIP, and if others were even just used empirically and shown to still offer convergence.
First, we would like to mention that our supplementary material includes experiments with rank-1 and subsampled Fourier measurements, which show outcomes similar to those in the Gaussian case. Extending the theory to these measurement operators would be attractive; as long as the RIP holds for such measurements, our theory directly applies. Extending the RIP, e.g., from Gaussian to subgaussian measurements is straightforward.
Furthermore, in [52, ``Blind recovery of sparse signals from subsampled convolution''], Lee et al. showed that, for rank-one matrices, subsampled Fourier measurements of a certain shape satisfy the RIP we use (requiring additional log-factors in the sample complexity). Since extending such results to more general settings requires substantial additional effort, this is, however, beyond the scope of our present work.
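Schematically, the RIP in question requires that $(1-\delta)\,\lVert X\rVert_F \le \lVert \mathcal{A}(X)\rVert_2 \le (1+\delta)\,\lVert X\rVert_F$ hold for all matrices $X$ that are simultaneously row-$s$-sparse and of rank at most $r$ (up to the exact constant conventions and set definitions of Definition 2.3).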
**Questions**:
> It would be good to remind the reader what $\mathcal{F}$ functions are in (13).
Thanks for this good suggestion. In the final version of the manuscript, we will be happy to remind the reader that Equation (13), which defines the quadratic models minimized in the weighted least squares steps, uses as functions $\mathcal{F}$ the $\varepsilon_k$-smoothed log-determinant function and the $\delta_k$-smoothed sum of logarithmic row-wise $\ell_2$-norm, which had been previously defined in Equation (2)-(3).
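Schematically, and up to the normalization constants chosen in the manuscript, these surrogates take the form $\mathcal{F}_{\mathrm{rank},\varepsilon_k}(X) = \log\det\bigl(XX^\top + \varepsilon_k^2\,\mathrm{Id}\bigr) = \sum_j \log\bigl(\sigma_j(X)^2 + \varepsilon_k^2\bigr)$ for the smoothed log-determinant, and $\mathcal{F}_{\mathrm{sp},\delta_k}(X) = \sum_i \log\bigl(\lVert X_{i,:}\rVert_2^2 + \delta_k^2\bigr)$ for the smoothed sum of logarithmic row-wise $\ell_2$-norms.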
> Are there other matrices besides Gaussian that are known to satisfy the RIP given in Definition 2.3?
Apart from the straightforward generalization to subgaussian measurement matrices and the subsampled Fourier measurements of [52, Lee et al.] (where the RIP is only needed for rank-one matrices), we are not aware of further matrix types which provably satisfy our RIP with near-optimal rates.
Please let us know if you have further questions or concerns regarding our submission. If we could clarify your questions, we would appreciate it very much if you considered an adjustment of your rating. Thank you!
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their responses. | Summary: This article introduces an algorithm for recovering jointly row-sparse and low-rank matrices from (underdetermined) linear measurements. The algorithm is based on iteratively reweighted least squares for a non-convex objective. The method is theoretically analysed, establishing local quadratic convergence rates under the restricted isometry property. The second theoretical result relates the algorithm more directly to IRLS to show that the objective is non-increasing and proves convergence to a stationary point.
Strengths: I found the paper well-written, the problem of simultaneously recovering sparse and low rank matrices is challenging one cannot simply rely on standard convex regularisation. This work provides a nice solution to this problem via IRLS and the theoretical results are interesting: the authors establish both 'compressed sensing' type results that guarantee recovery under near optimal sampling complexity, as well as results that provide an understanding of the optimisation properties of the algorithm.
Weaknesses: - The numerical results are somewhat limited, it would have been nice to have a discussion on the applications of the proposed method and more realistic numerical examples.
- the result of Theorem 2.4 is restricted to the setting of exact measurements and does not cover robustness results, i.e. what if y has been corrupted with noise? Also, what if $X_*$ is only approximately sparse and low rank? Both of these are the more realistic settings, so it would have been nice to see a more complete result. Also, theory aside, equation (1) mentions additive noise, but it is not clear from the algorithm and results that the proposed method can handle additive noise.
- Given that the title mentions "simultaneously structured data", I was expecting more than just low rank + sparse. It would be interesting to see the method extended to other kind of structures such as sparsity with respect to some dictionary.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - can you mention the per-iteration complexity of your method and contrast it with existing methods?
- the introduction mentions simultaneously low rank and both column and row sparse. Can your results be extended to this setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive and very detailed feedback on our submission. A point-by-point response to your comments follows below:
> The numerical results are somewhat limited, it would have been nice to have a discussion on the applications of the proposed method and more realistic numerical examples.
We agree that additional experiments on some of the real-world applications mentioned in our introduction (like sparse phase retrieval, blind deconvolution, or hyperspectral imaging) could enrich the paper. Since the focus of this work was on algorithm development and analysis, and an efficient implementation of IRLS will depend on the specific problem instance (i.e., the type of measurement operator), we refrained from heading in this direction to keep the exposition concise.
> The result of Theorem 2.4 is restricted to the setting of exact measurements and does not cover robustness results, i.e. what if $\mathbf{y}$ has been corrupted with noise? Also, what if $\mathbf{X}_*$ is only approximately sparse and low rank? Both of these are the more realistic settings, so it would have been nice to see a more complete result. Also, theory aside, equation (1) mentions additive noise, but it is not clear from the algorithm and results that the proposed method can handle additive noise.
Certainly, the noise robustness of the proposed method is of interest both theoretically and empirically. We restricted the theory to the setup of exact measurements, as the theory of IRLS under noise is less developed even in the case of a single low-rank structure. However, from an empirical point of view, IRLS (without modification of the optimization problem) returns a solution whose reconstruction error is of the order of a constant times the magnitude of the additive perturbation. We add a plot exploring this behavior in the rank-$1$ case of Figure 1 of the paper in the attached PDF above.
> Given that the title mentions "simultaneously structured data", I was expecting more than just low rank + sparse. It would be interesting to see the method extended to other kind of structures such as sparsity with respect to some dictionary.
This is a good point. Indeed, one could add a fixed dictionary to the problem formulation, which can be absorbed into the measurement operator in the analysis. The only point where we would expect some additional work to be necessary would be in proving that the concatenation of measurement operator and fixed dictionary still satisfies the RIP. For Gaussian $\mathcal{A}$ this should be possible. Also note that the paper [55, Lefkimmiatis et al.], which we mentioned in our related work discussion and which restricts itself to single structures, combines IRLS with dictionary learning techniques, i.e., it addresses the question of how to learn a dictionary on the fly.
> The introduction mentions simultaneously low rank and both column and row sparse. Can your results be extended to this setting?
Thanks for mentioning this important aspect of our setting! While we restricted the presentation to the one-sided sparsity case for simplicity, it is indeed possible to adapt our IRLS method to handle sparsity on both sides of the matrix, i.e., column- and row-sparsity, by adding a second sparsity weighting term to the combined weight operator $W_{\mathbf{X}^{(k)},\varepsilon_{k},\delta_{k}}$.
For instance, in the symmetric setting $\mathbf X_\star = \mathbf X_\star^T$ (naturally occurring in sparse phase retrieval) one would define the weight operator $W_{X^{(k)},\varepsilon_{k},\delta_{k}}$ as in (8), but with an additional term that multiplies $Z$ by $W_{X^{(k)},\delta_{k}}^{sp}$ _from the right_, which corresponds to minimizing the sum of _three_ smoothed logarithmic surrogates.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their responses. | Summary: Paper proposes to solve inverse problems on matrices subject to multiple types of sparsity (e.g. low-rank and element-wise sparsity) using an algorithm designed for the non-convex objective functions involved. The algorithm is a re-weighted least squares, which despite not solving a convex problem, authors prove a local convergence and a global consistency theorem for. They illustrate the performance of their algorithm on synthetic data.
Strengths: The algorithm proposed solves a difficult non-convex problem, and the theoretical results by authors are original and interesting.
Weaknesses: 1. The paper contains statistical results of the type "if at least m measurements are available then the algorithm achieves good performance", but is missing a more thorough discussion on algorithmic complexity. In particular, I would be curious to know if the truncated SVD in the Update Smoothing step is the heaviest computational piece of the algorithm or if the computational bottleneck is somewhere else.
2. Numerical experiments are run on very small data. To really show the benefit of two sparsity types I would believe that bigger values of n1 n2 result in more convincing evidence that there is benefit in mixing the two prior knowledges
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. what is the sensitivity of the algorithm to performing approximate SVDs
2. Figure 3: fewer iterates are needed, OK, but what's the FLOP in each iterate? what's the overall computational cost of achieving epsilon close results as compared to others? is anything parallelizable?
3. A related stream of research (see Tight Convex Relaxations for Sparse Matrix Factorization, NIPS 2014) characterizes the atoms that form the basis in which a doubly sparse matrix is sparse. How does this work, especially the lower bound Omega(r(s1+s2)), compare to those methods which directly minimize for the desired structure to recover?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: because the computational complexity is not deeply discussed, it's hard to know how applicable the methods are to larger data
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive and very detailed feedback on our submission. A point-by-point response to your comments follows below:
> The paper contains statistical results of the type "if at least m measurements are available then the algorithm achieves good performance", but is missing a more thorough discussion on algorithmic complexity. In particular, I would be curious to know if the truncated SVD in the Update Smoothing step is the heaviest computational piece of the algorithm or if the computational bottleneck is somewhere else.
> Numerical experiments are run on very small data. To really show the benefit of two sparsity types I would believe that bigger values of $n_1$ and $n_2$ result in more convincing evidence that there is benefit in mixing the two prior knowledges.
On the one hand, we agree with the reviewer that a larger scale of experiments is always preferable. On the other hand, we would like to emphasize that the theoretical results on locally quadratic convergence rates and monotonic decrease of the objective function are guaranteed to hold in higher dimensions as well. In evaluating these, the scale of the specific simulation does not matter. Finally, note that the scales of the problem instances we use are comparable to the ones used in the papers [24] and [53] describing the state-of-the-art methods RiemAdaIHT and SPF, respectively.
**Questions**:
> (1) What is the sensitivity of the algorithm to performing approximate SVDs?
This is an interesting point. In fact, we did not test the present IRLS method for robustness against approximate SVDs. However, in [47], Kümmerle et al. used a related IRLS method for low-rank recovery (i.e., a setting with only a single parsimonious structure) in which high-accuracy low-rank solutions were obtained using a truncated SVD implemented by a randomized block Krylov method [NIPS 2015, "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition", Musco et al.]. We therefore expect that this behavior extends to the simultaneously structured case considered here.
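As a rough sketch of how such a randomized block Krylov truncated SVD operates (our own illustrative NumPy code in the spirit of Musco & Musco, not the implementation used in [47]; the function name and parameter choices are ours):

```python
import numpy as np

def block_krylov_svd(A, rank, num_iters=4, seed=0):
    """Approximate rank-`rank` SVD of A via a randomized block Krylov
    subspace (cf. Musco & Musco, NIPS 2015). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Random starting block B = A @ Omega and Krylov blocks (A A^T)^j B.
    block = A @ rng.standard_normal((n, rank))
    krylov = [block]
    for _ in range(num_iters):
        block = A @ (A.T @ block)
        krylov.append(block)
    # Orthonormal basis of the block Krylov subspace.
    Q, _ = np.linalg.qr(np.hstack(krylov))
    # Rayleigh-Ritz step: exact SVD of the small projected matrix.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

# Example: recover the singular values of an exactly rank-5 matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
U, s, Vt = block_krylov_svd(M, rank=5)
exact = np.linalg.svd(M, compute_uv=False)[:5]
assert np.allclose(s, exact, rtol=1e-6)
```

For an exactly low-rank input as above, the Krylov subspace generically captures the full column space, so the recovered singular values match the exact ones to machine precision.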
> (2) Figure 3: fewer iterates are needed, OK, but what's the FLOP in each iterate? What's the overall computational cost of achieving epsilon close results as compared to others? Is anything parallelizable?
We refer to our general response above for a discussion of the per-iteration computational cost.
Regarding a potential way to parallelize the main computational steps: first, it would be possible to use a conjugate gradient method to solve the main linear system of the IRLS step, whose complexity is dominated by matrix-vector and matrix-matrix multiplications (similar to previous work on low-rank IRLS of [47]), which parallelize well. Second, the matrix multiplications needed to obtain the truncated SVD via block Krylov could also be parallelized.
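A minimal sketch of such a matrix-free conjugate gradient solver, which only needs the action of the system matrix (our own generic textbook code, not the paper's implementation), could look as follows:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iters=500):
    """Solve the SPD system A x = b given only the action v -> A v.
    Textbook CG -- a sketch of the parallelizable solver suggested for
    the weighted least-squares step, not the authors' code."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# Example on a small symmetric positive definite system.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30 * np.eye(30)   # SPD by construction
b = rng.standard_normal(30)
x = conjugate_gradient(lambda v: A @ v, b)
assert np.allclose(A @ x, b, atol=1e-6)
```

Since only `matvec` is required, the dominant cost per iteration is one matrix-vector product, which is straightforward to parallelize.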
> (3) A related stream of research (see Tight Convex Relaxations for Sparse Matrix Factorization, NIPS 2014) characterizes the atoms that form the basis in which a doubly sparse matrix is sparse. How does this work, especially the lower bound Omega(r(s1+s2)), compare to those methods which directly minimize for the desired structure to recover?
Thanks for pointing us to this interesting work! We will be happy to include a reference to the NIPS 2014 paper by Richard et al. in the literature review in the final version. From a practical point of view, however, that paper does not propose a polynomial-time algorithm for the problem, and the proposed heuristic algorithm focuses exclusively on Sparse PCA, which is a different problem setting than the underdetermined matrix recovery we consider.
Therefore, it is hard to compare the numerical performance to our work right away. Regarding the theory, the estimates on the statistical dimension of the proposed (k, q)-trace norm provided by Richard et al. are restricted to (atomic) rank-1 matrices, whereas our local convergence theory for IRLS holds for arbitrary rank.
Please let us know if you have further questions or concerns regarding our submission. If we could clarify your questions, we would appreciate it very much if you considered an adjustment of your rating. Thank you!
---
Rebuttal Comment 1.1:
Title: Addendum to rebuttal
Comment: Dear reviewer,
Due to a technical issue, we could not submit a general rebuttal. The area chair kindly agreed to allow us to address your remaining question regarding the per-iteration and total time cost: We address this question in our "Addendum to rebuttal" to the review of Reviewer bSFz. Thank you for your understanding! | Summary: This work studies the problem of recovering a low-rank and row-sparse matrix from its compressed linear observations. A method based on iteratively re-weighted least squares is proposed, in which the sparsity inducing function is a non-convex log function. For theoretical contributions, this work provides a local quadratic convergence analysis of the iterates of the proposed algorithm under a restricted isometry property assumption, and it also studies the convergence of function value for any linear measurement operator using a Majorize-Minimize algorithm framework. Numerical experiments on synthetic datasets show that the proposed method can recover the ground truth with fewer measurements than some state-of-the-art methods.
Strengths: 1. This work provides convergence analyses not only for linear operators satisfying restricted isometry property but also for more general linear operators. Furthermore, it mentions when the restricted isometry property can be satisfied, and it also discusses the radius of the neighborhood in the local quadratic convergence.
2. The proposed algorithm is well explained by identifying that the underlying family of objectives that are minimized during the iterations is (2).
3. There are experiments for cases where the rank and sparsity are not accurately known as a prior, and thus the sensitivity to hyper-parameter choice is discussed, and also it provides a better simulation of potential real-world scenarios.
Weaknesses: (1) The algorithm requires estimates of the rank and row sparsity, and the theoretical conclusions require that these estimates are accurate, i.e., $\tilde r = r$ and $\tilde s = s$. These priors and assumptions can limit the application of this method and its theoretical convergence guarantees.
(2) There are some assumptions on the iterates $\varepsilon_k$ and $\delta_k$ in Theorem 2.4 and Theorem 2.5, but it is not clear whether these assumptions will ever be satisfied. Please refer to question (1) and (3) for details.
(3) In numerical experiments, it would be better to report the time cost of different methods, including both the per iteration time cost and the total time cost.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: (1) In Theorem 2.4, will the assumptions $\varepsilon_k = \sigma_{r+1} (X^{(k)})$, $\delta_k = \rho_{s+1}(X^{k})$, and $\varepsilon_k \leq \sigma_r(X_\star) / 48$ ever be satisfied?
(2) The inequality at the end of page 6 only holds for certain $k$ satisfying the assumptions, so how do we arrive at the conclusion $X^{(k+l)} \to X_\star$ from such an inequality for one step?
(3) In Theorem 2.5, how strong/restricted is the assumption that $\delta_k$ and $\varepsilon_k$ have limits, and the limits are positive?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: As far as I see, there is no potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive and very detailed feedback on our submission. A point-by-point response and clarification with regard to your comments follows below:
> (1) The algorithm requires estimates of the rank and row sparsity, and the theoretical conclusions require that these estimates are accurate, i.e., $\widetilde{r} = r$ and $\widetilde{s}=s$. These priors and assumptions can limit the application of this method and its theoretical convergence guarantees.
We agree that our theoretical convergence analysis is limited to this choice of hyperparameters. We would, however, like to point out that the competing methods we are aware of, including the ones of papers [24] and [53], likewise need rank and row sparsity estimates. Furthermore, the experiments we performed in Fig. 2, in which the rank parameter was overestimated by $100\%$ and the sparsity parameter was overestimated by $50\%$, suggest that IRLS is more robust to poor tuning of the hyperparameters than other methods such as RiemAdaIHT and SPF of papers [24] and [53]. Finally, we would like to point out that the IRLS method we propose requires no hyperparameters beyond the rank and row sparsity estimates (such as a step size choice), unlike, e.g., RiemAdaIHT.
> (3) In numerical experiments, it would be better to report the time cost of different methods, including both the per iteration time cost and the total time cost.
We agree that this would be insightful. We decided against including direct timing comparisons with other methods, as a fair comparison would require efficient implementations of all algorithms involved; since we were more interested in the _quality_ of the returned solutions, we focused less on efficiency. We refer to our general answer above for a discussion of the per-iteration cost.
> (2) There are some assumptions on the iterates $\varepsilon_k$ and $\delta_k$ in Theorem 2.4 and Theorem 2.5, but it is not clear whether these assumptions will ever be satisfied. Please refer to question (1) and (3) for details.
**Questions**:
> (1) In Theorem 2.4, will the assumptions $\varepsilon_k = \sigma_{r+1}(\mathbf{X}^{(k)})$, $\delta_k=\rho_{s+1}(\mathbf{X}^{(k)})$, and $\varepsilon_k \leq \sigma_r(\mathbf{X}_*)/48$ ever be satisfied?
We note that it suffices if $\varepsilon_k = \sigma_{r+1}(\mathbf{X}^{(k)})$ \emph{or} $\delta_k = \rho_{s+1}(\mathbf{X}^{(k)})$ at some step $k$, i.e., we do not require that both of these statements are satisfied at the same time.
> (2) The inequality at the end of page 6 only holds for certain $k$ satisfying the assumptions, so how do we arrive at the conclusion $\mathbf{X}^{(k+\ell)} \to \mathbf{X}_*$ from such an inequality for one step?
Note that this condition is always fulfilled in the first step of Algorithm 1 since both parameters are initialized with $\infty$. Whether the condition still holds in later steps depends on the quality of initialization. The proof of Theorem 2.4 works via induction and relies on initializing sufficiently close to the ground truth. Given such an initialization, the proof shows that $\varepsilon_k = \sigma_{r+1}(\mathbf{X}^{(k)})$ or $\delta_k = \rho_{s+1}(\mathbf{X}^{(k)})$ holds for all subsequent steps.
Regarding the condition $\varepsilon_k \leq \sigma_{r}(\mathbf{X}_*)/48$, we noticed that it is actually superfluous, since the locality assumption on $\mathbf{X}^{(k)}$ in (15) automatically implies $\varepsilon_k \leq \sigma_r(\mathbf{X}_*)/48$. Just note that
$\varepsilon_k = \min\left(\varepsilon_{k-1}, \sigma_{r+1}(\mathbf{X}^{(k)})\right) \leq \sigma_{r+1}(\mathbf{X}^{(k)}) \leq \| \mathbf{X}^{(k)} - \mathbf{X}_*\| \leq \sigma_r(\mathbf{X}_*)/48$
if $\widetilde{r} = r$ as in the assumptions of Theorem 2.4. In the second inequality we used that $\mathbf{X}_*$ is of rank $r$, and in the last inequality we used (15). We will remove (14) in the revised version. Thanks for catching this!
> (3) In Theorem 2.5, how strong/restricted is the assumption that $\delta_k$ and $\varepsilon_k$ have limits, and the limits are positive?
Regarding Question 3, note that the sequences $\delta_k$ and $\varepsilon_k$ are, by definition, decreasing and bounded from below, i.e., by the monotone convergence theorem they converge to a limit in any case.
Please let us know if you have further questions or concerns regarding our submission. If we could clarify your questions, we would appreciate it very much if you considered an adjustment of your rating. Thank you!
---
Rebuttal Comment 1.1:
Title: Addendum to rebuttal
Comment: Dear reviewer,
Due to a technical issue, we could not submit a general rebuttal. The area chair kindly agreed to allow us to address your remaining question regarding the per-iteration and total time cost:
In Appendix B of the submission, we discuss the per-iteration cost of the proposed IRLS algorithm on a basic level and mention that by applying the Sherman-Morrison-Woodbury formula, it is possible to rewrite the weighted least squares problem (18) such that the computational bottleneck is the inversion of an $O(r(n_1+n_2)) \times O(r(n_1+n_2))$ symmetric linear system.
We note that in general, this can be done in a time complexity of $O(r^3 \max(n_1,n_2)^3)$ using standard linear algebra. However, this system itself has a close-to-diagonal structure and is positive definite, so that high-quality solutions could arguably be found within a few inner iterations (which would cost $O(r\,n_1 n_2)$ each). If such an implementation avenue is taken, the structure of the measurement operator might dominate the per-iteration cost of IRLS; for dense Gaussian measurements, the computation of one auxiliary matrix that is needed would cost $O(m n_1 n_2)$ flops. If rank-one or Fourier-type measurements are taken, this cost can be reduced significantly; see [24, Table 1 and Section 3] for an analogous discussion.
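For completeness, the identity invoked here is the standard Sherman-Morrison-Woodbury formula (textbook form, not specialized to the weight operators of the paper):

```latex
% Standard Sherman--Morrison--Woodbury identity (textbook form):
(\mathbf{A} + \mathbf{U}\mathbf{C}\mathbf{V})^{-1}
  = \mathbf{A}^{-1}
  - \mathbf{A}^{-1}\mathbf{U}
    \left(\mathbf{C}^{-1} + \mathbf{V}\mathbf{A}^{-1}\mathbf{U}\right)^{-1}
    \mathbf{V}\mathbf{A}^{-1}.
% When U and V have only O(r(n_1+n_2)) columns/rows, only the small
% middle system needs to be solved explicitly.
```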
We will include a more thorough discussion of the computational aspects into a final version of the manuscript, explaining also the details of our current Matlab implementation. Overall, we concede that additional work on the implementation will be needed to make the framework applicable to large-scale data such as in blind deconvolution and hyperspectral imaging problems.
As for the total time cost of our method, we do not have a theoretical result, since we only analyze the convergence rate _close to the solution_; the theory does not tell us how long it takes to reach the local neighborhood starting from which we can quantify the complexity. However, we tried to illustrate the generic behavior of the algorithm in Figures 3 and 6, which indicate that in many cases we find ourselves within a neighborhood exhibiting quadratic convergence after only a few iterations.
Deep Evidence Regression for Weibull targets | Reject | Summary: This paper aims to explore the application of a scalable UQ-aware deep learning technique, Deep Evidence Regression, and applies it to predict Loss Given Default. It extends the Deep Evidence Regression methodology to learn target variables generated by a Weibull process and provides the relevant learning framework. By testing on both simulated and real-world datasets in the context of credit risk management, the proposed method exhibits enhanced suitability for applications in which the target variable originates from a Weibull distribution, better capturing the uncertainty characteristics of such data.
Strengths: 1.This paper innovatively extends the Deep Evidence Regression methodology to learn target variables generated by a Weibull process and provides the relevant learning framework.
2.The article provides a clear and coherent description of the proposed method, including a thorough derivation of the relevant formulas.
Weaknesses: 1.The article contains several mathematical formula writing errors, such as the missing "dθ" in line 51 and Equation (16), the missing "-1" in Equation (10), and the inconsistencies in Equation (31) and Equation (33) with their original definitions.
2.The figures are not referenced anywhere in the text, and their legends do not explain their content.
3.The experimental section lacks clear exposition, such as the specific settings of parameters k and λ for the Weibull distribution, as well as the target or subject for the MSE metric.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1.Can the Weibull distribution still be selected if the LGD values are symmetric?
2.What is the purpose of calculating E[Z] in Section 2.3.1 and Var(Z) in Section 2.3.2? How are the calculated E[Z] and Var(Z) specifically used in the experiment?
3.How are the parameters k and λ of the Weibull distribution set in the experiment?
4.What is the specific object of the MSE metric in the experiment?
5.Why is it claimed that the MSE of the benchmark and the proposed method in Table 1 are similar?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The proposed approach serves as a valuable tool for capturing and quantifying uncertainty in cases characterized by Weibull distributions. Therefore, the key to using this method lies in determining whether the data is suitable for the Weibull distribution. Furthermore, the applicability of the proposed method to distributions other than the Weibull distribution requires further investigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | null | Summary: Authors tackle the problem of uncertainty quantification for predicting credit risks. Concretely, they have applied a scalable UQ-aware deep learning technique, Deep Evidence Regression to predicting Loss Given Default with uncertainty. Authors argue that the conventional methods use for uncertainty quantification are too computationally and memory intensive and therefore they adopt Deep Evidence Regression as their statistical model.
They extend the framework of Deep Evidence Regression to predict targets generated with a Weibull distribution. The original method relies on the normally distributed targets which is not fitting for credit risk applications. Authors therefore re-derive the necessary formulas based on Weibull distributed targets; concretely they focus on log likelihood needed for training and the mean/uncertainty needed for predictions. Beyond that, authors adapt the regularizers found in the original paper to their purpose.
The new method is tested empirically on a synthetic dataset with points sampled from a Weibull distributions as well as a single real-world dataset focused on peer to peer mortgage lending data with recovery rate has been used as a proxy for Loss Given Default. Both datasets were used to test the Weibull-based approach against the original method relying on Gaussian distributed targets. The results show the modification made by the authors indeed helps with lower MSE and NLL on both test and train datasets.
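To illustrate the kind of objective involved, the following is our own sketch of the plain Weibull negative log-likelihood that such a model's outputs $(k, \lambda)$ relate to; note this is illustrative only, as the paper's evidential loss additionally accounts for a higher-order distribution over the Weibull parameters:

```python
import numpy as np

def weibull_nll(y, k, lam):
    """Mean negative log-likelihood of samples y > 0 under Weibull(k, lam)
    with pdf f(y) = (k/lam) * (y/lam)**(k-1) * exp(-(y/lam)**k).
    Illustrative sketch -- not the paper's evidential loss, which
    marginalizes over a prior on (k, lam)."""
    y = np.asarray(y, dtype=float)
    log_pdf = (np.log(k) - np.log(lam)
               + (k - 1.0) * (np.log(y) - np.log(lam))
               - (y / lam) ** k)
    return -np.mean(log_pdf)

# Sanity check: the NLL is lower at the generating parameters than at
# a misspecified shape parameter.
rng = np.random.default_rng(0)
samples = rng.weibull(2.0, size=20000) * 1.5   # shape k = 2.0, scale lam = 1.5
nll_true = weibull_nll(samples, 2.0, 1.5)
nll_off = weibull_nll(samples, 1.0, 1.5)
assert nll_true < nll_off
```

In an evidential setup, a network would output the distribution parameters per input, and a loss of this general form (plus regularization) would be minimized during training.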
Strengths: 1. Authors tackle an important problem of uncertainty quantification for risk assesment where robust, and efficient measures of uncertainty are crucial to ensure fair treatment of customers.
2. Authors provide an expansion of a well established method to make it much more attractive to specialized domains such as risk assesment. This work can be directly useful for practicioners in the field of risk assesment and indirectly useful for researchers in other fields who can adapt the Deep Evidential Regression to work with different, field-specific targets distributions.
Weaknesses: 1. This paper provides an incremental improvement over the original Deep Evidential Regression (DER) work. The derivation of DER with a Weibull distribution instead of the original Gaussian target distribution is interesting, but the majority of what makes the method impactful remains unchanged.
2. Authors provide very limited empirical evaluation of their work. The single study with synthetic data provides a good proof of concept but gives little assurance of the real-world impact of the work. The single real-dataset study is relatively limited and suffers from some issues: 1/ Authors do not have any baselines beyond the original method, while other methods, for example based on conformal prediction or quantile regression, could be competitive; any other approach for uncertainty quantification would be useful to gauge the difficulty of the task. 2/ Authors only use a single dataset, making the evaluation process less robust; it is possible that this method works well only for this particular dataset rather than for a generic class of problems. 3/ Authors only consider NLL and MSE as metrics for their evaluation; there are many more metrics to quantify uncertainty (such as coverage) that could be used to make the evaluation more robust.
3. The presentation of the initial method is vague. I understand that the DER work is presented in its own paper, but presenting it in more detail would make this paper more self-contained, especially given the reliance on the original work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Authors suggest that "the proposed model has only two outputs, which could limit its flexibility when compared to the benchmark model, which had four outputs from the neural network", which is a little unclear to me. I understand that the model (NN) is as flexible as the number of free parameters (weights) it has, rather than the number of outputs. Maybe this refers to the parametrization of the distribution (Gaussian with 4 parameters and Weibull with 2), but in any case, I would like to get some clarity on this claim.
2. The model has a few hyperparameters such as $c$ (the regularization constant); it is currently unclear how they were chosen and what HPO procedure was used.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: Authors provide a formulation for extending Deep Evidential Regression for problems where targets are Weibull distributed. However, they do not cover the extension of this method to problems with targets sampled from a different distribution. There is only a small class of problems where Weibull is appropriate and authors do not provide evidence that this approach generalizes beyond Loss Given Default estimation. I would appreciate additional real-world experiments thta could show whether this parametrization (choice of target distribution) can work beyond the single dataset chosen in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | null | Summary: This paper introduces the utilization and extension of deep evidential regression for uncertainty estimation in credit risk prediction. The approach assumes a Weibull distribution for the target variable (e.g., LGD or a synthetic target). The authors modify the evidential regression mechanism to accommodate targets from this distribution, providing equations to illustrate the training process. Simple experiments are conducted on synthetic and real-world peer-to-peer lending datasets, showcasing potential improvements over vanilla deep evidential regression for credit risk management.
Strengths: The incorporation and extension of evidential regression-based UQ into finance-related problems is the main contribution and the key strength of this paper.
Weaknesses: Refer to Questions.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. What is the motivation for choosing the Weibull distribution? Can the authors comment on whether the method can be utilized for other off-the-shelf finance datasets where the target is distributed otherwise or has a high degree of heteroskedasticity?
2. The experimental setup is very limited. For the synthetic example, the parabolic function is essentially corrupted with noise drawn from a known Weibull distribution; therefore, the performance of the proposed approach is expected to be better than the baseline. Can the authors explain why the MSE of the baselines is better than the proposed approach? It seems counterintuitive. How exactly are the metrics evaluated?
3. No reference to any figure is made in the paper. The figure captions are also very generic. Both the figures and the captions need to be enhanced.
4. How does this approach fare against other uncertainty-estimation methods, e.g., Monte Carlo dropout? I believe it is straightforward to implement and verify.
5. Does Figure 3 depict the epistemic uncertainty or the total uncertainties around every sample?
6. Eqns 3 and 4 are general descriptions qualifying Eqn 2. It would be better if they were written as text rather than as equations. This is the case with many other redundant equations mentioned in the paper.
7. Identified typos:
a) Line 9 - approach to - approach on
b) Line 23 - The citations [23], [24] can be provided at the end of the preceding sentence.
c) Line 35 - predicts the types - predicts the type
d) Line 52 - trained t --> trained to
e) Line 56 - the fullstop can be removed before the citation [7]
f) Line 72 - real world datasets
g) Line 76 - rate parameter $k$
h) Line 91 - spelling of ensembling
i) Eqn 31, the mean term E[\lambda] must be squared.
j) Line 107 - increases --> increase
k) Line 183 - Spelling of evidential needs to be corrected
8. Why is there a discrepancy between the title in the paper and the title shown on the submission page?
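For reference on questions 1–2 above, the Weibull likelihood in question can be sketched as follows (shape $k$, scale $\lambda$; this is only the plain NLL, not the paper's full evidential loss, and all names are illustrative):

```python
import numpy as np

def weibull_nll(y, k, lam):
    """Mean negative log-likelihood of positive samples y under
    Weibull(shape=k, scale=lam).

    Illustrative only: an evidential-regression loss would add
    regularization terms on top of such a likelihood term.
    """
    y = np.asarray(y, dtype=float)
    nll = -np.log(k / lam) - (k - 1) * np.log(y / lam) + (y / lam) ** k
    return float(np.mean(nll))
```

On samples drawn from a known Weibull distribution, this NLL is lower at the true parameters than at distant ones, which is one way the synthetic-data evaluation can be sanity-checked.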
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The paper's experimental setup is quite limited and requires further motivation. The demonstrations illustrating how uncertainty quantification can benefit credit risk problems lack sufficient substantiation. Moreover, the paper has a limited exploration of related work, and the quality of references needs improvement. In its current stage, the paper requires significant improvement in terms of better problem motivation and comprehensive qualitative and quantitative evaluations prior to publication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | null | null | null | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Unified Detection Framework for Inference-Stage Backdoor Defenses | Accept (poster) | Summary: This work formulates the inference-stage backdoor detection problem. The authors then propose a framework to establish provable guarantees w.r.t. the detection FPR, given some validation data on hand. Finally, they derive the optimal detection rule (in the Neyman-Pearson paradigm) in a simplified scenario, and suggest a practical proxy using the empirical Mahalanobis distance metric w.r.t. DNN's latent representations. Their results on both CV and NLP domain show significant improvements over prior arts
Strengths: 1. The authors formally study the inference-stage backdoor detection problem. Specifically, they propose the conformal backdoor detection (CBD) framework, which ensures the detector's FPR does not deviate too much from a pre-selected value with a high probability. Their CBD framework provides theoretical guidance on how to select the decision threshold $\tau$.
2. They also study the optimal score function in a simplified scenario. Building on this theoretical analysis, they propose a practical proxy to this score function (which cannot be practically computed). They further address the potential numerical instability problem using a matrix shrinkage technique.
3. The authors conduct extensive experiments, in both the CV and NLP domains. The cross-modal experiment setup indeed helps demonstrate the generalizability of their method. They consider 3 datasets and 10 attacks for CV tasks, with 2 datasets and 2 attacks for the NLP tasks. I appreciate the diverse experimental setting very much.
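For concreteness, the threshold-selection step described in strength 1 could look like the following sketch: choose $\tau$ as a conformal quantile of scores on the clean validation data (a hypothetical illustration, not the authors' exact procedure):

```python
import numpy as np

def conformal_threshold(val_scores, alpha):
    """Choose tau as a conformal quantile of scores computed on clean
    validation data, so that at most an alpha fraction of clean inputs
    score above tau (with the usual finite-sample rank correction)."""
    s = np.sort(np.asarray(val_scores, dtype=float))
    n = len(s)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return float(s[k - 1])
```

At test time, an input would be flagged as backdoor when its score exceeds tau, which keeps the FPR near the pre-selected alpha.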
Weaknesses: 1. The paper title does not describe your work accurately. Your title is "A Unified Framework for Inference-Stage Backdoor **Defenses**", but your work focuses only on inference-stage backdoor **input detection** (I don't think inference-stage backdoor defense = inference-stage backdoor input detection). Also, the word "unified" seems quite strong, since your theoretical analysis (e.g. optimality) is mostly based on a simplified scenario. I suggest you narrow down the scope (and consider toning down) your title.
2. The **intuitions behind your method** are not fully specified. For example, as a reader, I am quite confused about why the empirical Mahalanobis distance (Line 276) is a good proxy to Eq (2). It seems like you simply discard the second term in Eq (2) and try to only approximate the first term. Please explain more about the connection between your practical proxy and Eq (2).
3. The experiment setting should include comparisons with **more baselines defenses and attacks**. First, I suggest the authors also consider other non-poisoning backdoor attacks (e.g. modifying trained model weights [1], fine-tuning, etc.). Second, there are more recent and advanced inference-stage backdoor detector baselines (e.g. [2] and [3]) other than STRIP, l2 and MAD. You should definitely consider adding at least one or two inference-stage backdoor detectors in the recent two years into comparison in the main Table 1.
4. It's good to see several ablation studies in the Appendix (e.g. poison rates). I suggest you consider three more important **ablation studies**: 1) The number of validation data you have in hand. How many samples at least are necessary to make your defense effective? 2) I appreciate your consideration of several adaptive attacks. But these adaptive attacks are "adaptive" against the latent-separation based defenses. Could you also study/propose potential adaptive attacks that specifically target the weakness of your method? 3) What if the validation data is OOD (e.g. corrupted with noise)? In the real world, model deployers sometimes cannot guarantee that the validation data is drawn exactly IID from the training samples' distribution.
5. **Typos**: Line 147 "to to"; Line 214 "at lease"; Line 230 "we will assume that the Suppose that"; Line 264 $\eta^* \to \gamma^*$ (?).
Still, I would be happy to adjust my rating if the above concerns are somehow addressed.
[1] Qi, Xiangyu, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, and Kai Bu. "Towards practical deployment-stage backdoor attack on deep neural networks." In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13347-13357. 2022.
[2] Zeng, Yi, Won Park, Z. Morley Mao, and Ruoxi Jia. "Rethinking the backdoor attacks' triggers: A frequency perspective." In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 16473-16481. 2021.
[3] Guo, Junfeng, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, and Cong Liu. "Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency." *arXiv preprint arXiv:2302.03251* (2023).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Line 109-111, you made a claim that "these methods rely on the assumption that there is a clear separation...". But isn't your method also relying on the separation of clean and backdoor data in the latent space? Do you consider utilizing such latent separation as a weakness/flaw when designing backdoor defenses? If you do, why won't your method suffer from this weakness?
2. How is the violation rate $\delta$ in Eq (1) selected in practice?
3. In Figure 2, I see that for some attacks (TrojanNN and A-Blend), even your proposed optimal scoring function cannot separate well enough between clean and backdoor data. Could you discuss more about these results?
4. Could you also visualize the SCM score histogram for backdoor and clean inputs? It would be straightforward for readers to see how your proposed method distinguishes between clean and backdoor inputs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are discussed together with future work in Sec 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Inaccurate terms in the title
R: We appreciate your suggestion to improve the current title to better align with specific content, such as focusing on inference-stage backdoor input detection. Your input is valuable, and we will carefully incorporate your suggestions in our revision process.
> Q2: Intuitions behind the proxy term
R: Recall that the optimal decision rule is represented as $s(Z) \propto P_{\text{BD}}(Z)/P_{\text{CL}}(Z)$, where $P_{\text{BD}}(\cdot)/P_{\text{CL}}(\cdot)$ denotes the distribution of backdoor and clean data, respectively.
Since we do not have access to the backdoor data, computing $P_{\text{BD}}(\cdot)$ is not possible.
As an alternative, empirically, we consistently observed that $P_{\text{CL}}(\text{Clean Data})$ is much greater than $P_{\text{CL}}(\text{Backdoor Data})$. Hence, we use $P_{\text{CL}}(\cdot)$ as a substitute for $s(Z)$. Under Gaussian data assumptions, the proxy $P_{\text{CL}}(Z) \propto 1/\exp{(0.5(Z-\mu)^{\top}\Sigma^{-1}(Z-\mu))}$ is effectively a decreasing function of the Mahalanobis distance to the clean data distribution, which explains the use of SCM as the proxy.
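A minimal numpy sketch of such a shrunk-covariance Mahalanobis score (the shrinkage amount and centering here are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def scm_score(z, clean_latents, shrinkage=0.1):
    """Shrunk-Covariance Mahalanobis (SCM) score of a latent vector z
    relative to clean validation latents; larger = more anomalous."""
    mu = clean_latents.mean(axis=0)
    emp = np.cov(clean_latents, rowvar=False)
    d = emp.shape[0]
    # shrink toward a scaled identity to stabilize the high-dimensional estimate
    shrunk = (1 - shrinkage) * emp + shrinkage * (np.trace(emp) / d) * np.eye(d)
    diff = z - mu
    return float(diff @ np.linalg.solve(shrunk, diff))
```

Latents far from the clean validation distribution receive large scores, matching the intuition that $P_{\text{CL}}(\text{Backdoor Data})$ is small.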
> Q3: Comparison with more baseline defenses [2], [3] and attack [1]
R: We conducted two sets of experiments: (1) assessing our method against [1], and (2) comparing our method with [2] and [3]. We present the outcomes in **Table 2 in the PDF file of the global response**. In summary, our methods excel against the non-poisoning backdoor attack [1], and they consistently outperform [2] and [3].
> Q4: Claim in Lines 109 - 111
R: Yes, our methods also assume a separation between the latent representations of clean and backdoor data.
Nevertheless, our approach diverges from other work mentioned in the Related Work section in how we exploit this distinction property. We utilize SCM, consistently outperforming previous methods.
However, an issue arises if knowledgeable attackers know the SCM score and exploit it to bypass our defense, a concern we'll address next.
> Q5: Potential adaptive attacks that specifically target the weakness of our method
R: Following the previous response, we developed a new attack, referred to as *M-attack*, which introduces a regularization term to reduce the Mahalanobis scores between clean and backdoor samples. Experiment outcomes are presented in **Table 2 in the PDF file of the global response**. Our methods maintain superiority over other defenses, with a moderate performance decrease relative to other attacks like BadNets and SSBA. This is reasonable, considering the nature of the newly proposed attack.
> Q6: The selection of $\delta$
R: The choice of $\delta$ should be contingent on both $n$ (sample size) and $\alpha$ (desired type I error rate). This ensures a testing procedure with a high-probability guarantee on the desired type I error rate.
To be specific, it can be shown that, with arguments similar to those in Theorem 1, to ensure a $1-\delta$ probability guarantee on the type I error rate of $\alpha$, the values of $n$, $\delta$, and $\alpha$ must lead to the right-hand side term of Eq. (1), i.e.,
$1-\alpha+\sqrt{\log(2/\delta)/(2n)} $ being strictly less than 1.
For instance, consider the values $n=200$, $\alpha = 0.05$, and $\delta = 0.01$. In this case, we have $ 1-0.05+\sqrt{\log(2/0.01)/400} > 1,$ indicating the infeasibility of achieving a type I error rate less than $\alpha = 0.05$ with a probability of at least $1 - 0.01$.
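The feasibility condition above can be checked directly; a small sketch (the function name is ours, not from the paper):

```python
import math

def type1_guarantee_feasible(n, alpha, delta):
    """True iff a clean validation set of size n can support a type I error
    rate alpha with probability at least 1 - delta, i.e. the right-hand
    side of Eq. (1), 1 - alpha + sqrt(log(2/delta) / (2n)), stays below 1."""
    return 1 - alpha + math.sqrt(math.log(2 / delta) / (2 * n)) < 1
```

With $n=200$, $\alpha=0.05$, $\delta=0.01$ the bound exceeds 1 (infeasible, as in the example above), while $n=2000$ suffices.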
> Q7: Ablation studies on the size of the validation data and OOD validation datasets.
R: Firstly, we'd like to draw your attention to the **challenges** linked with establishing a provable type I error rate in the two scenarios you mentioned. As we highlighted in our previous response, a validation size of 200 doesn't offer a high probability guarantee for the type I error rate. Moreover, when using out-of-distribution (OOD) data as the validation dataset, it can generally be shown that ensuring any guarantee on the type I error rate becomes **infeasible**. Hence, these two scenarios might not be the primary application contexts for our methods, which primarily emphasize defense techniques with provable type I error guarantees and strong detection power.
Nevertheless, according to your suggestions, we conducted two ablation studies involving variations in the validation dataset size and the inclusion of OOD data in the clean validation dataset. The summarized results are presented in **Tables 3 and 4, respectively, in the PDF file of the global response**. Our observations indicate that our methods are consistently effective when the validation dataset size is above 200. Moreover, if the validation dataset consists of less than 25% OOD samples, our methods exhibit consistent effectiveness. These findings highlight the robustness of our approaches.
> Q8: For some attacks (TrojanNN and A-Blend), even your proposed optimal scoring function cannot separate well enough between clean and backdoor data.
R: We provide an explanation for the less distinct separation between clean and backdoor data under optimal score rules as follows.
It's important to remember that the uniformly most powerful rule in Eq.(2) is established based on various conditions, including the presence of fully specified Gaussian data distributions for both clean and backdoor data. However, our investigations revealed a significant departure from Gaussian distributions in the latent spaces of clean and backdoor data during TrojanNN and A-Blend attacks. This divergence leads to a violation of the assumptions necessary for the application of the optimal decision rule in Eq.(2).
> Q9: Histograms of SCM scores
R: We have incorporated the SCM score histograms in **Figure 2 of the global response PDF file**. In summary, we notice distinct separations between the SCM scores of clean and backdoor samples consistently across various scenarios.
---
Rebuttal Comment 1.1:
Title: Thanks for your effort and clarifications
Comment: I appreciate your efforts in the rebuttal very much! I do believe most of my concerns are resolved. A few suggestions / further concerns:
- Your "uniformly most powerful rule" is established in a simplified case where you assume "Gaussian data distributions for both clean and backdoor data". Therefore, your defense may not generalize well to attacks which violate such assumptions or to scenarios where the defender cannot acquire enough clean validation samples. Still, I appreciate your specifically designed adaptive attack & analysis a lot. So please tone down statements like "uniformly most powerful rule".
- I also noticed the concern raised by reviewer ibgE about your work's similarity to SPECTRE. Indeed, I also find it to be beneficial to include an independent section to comprehensively discuss about your work's relationship (especially similarity) to it (and also other latent-space analysis backdoor defense) in the major paper (since you are both conducting an outlier analysis in the latent representation space).
- Also, in Table 2 of your rebuttal PDF, why is Frequency performing so badly on SSBA? Could you elaborate on the experiment configurations, e.g. whether you are using their pretrained models? From my own experiment experience, Frequency can actually achieve a much higher AUCROC against SSBA on CIFAR10.
Thanks again for your efforts during the rebuttal period. I will adjust my rating to 5 for now.
---
Reply to Comment 1.1.1:
Title: Thanks Reviewer 1Ekh's feedback on our rebuttal and increasing the score; Further Clarifications
Comment: Thank you for your valuable feedback on our rebuttals and for increasing the score. We address your further concerns/suggestions in the following.
> Q1: Your "uniformly ... Then please tone down your statement like "uniformly most powerful rule".
R: Per our prior response, we acknowledge the need for a more specific title and moderated language. We promise to revise both the title and content with accuracy and an appropriate tone.
> Q2: Include an independent section to comprehensively discuss about your work's relationship (especially similarity) to it (and also other latent-space analysis backdoor defense) in the major paper (since you are both conducting an outlier analysis in the latent representation space).
R: In the revision, we'll incorporate a dedicated section to thoroughly explore the connection between our approach and other comparable methodologies.
> Q3: why is Frequency performing so bad on SSBA? Could you elaborate the experiment configurations, e.g. whether you are using their pretrained models?
R: Regarding the subpar performance of the Frequency method against SSBA on the *GTSRB* dataset presented in *Table 2* of the global response PDF, this outcome aligns with previous observations indicating that the Frequency method fails against advanced non-patch-based backdoor attacks, such as SSBA and WaNet [C]. Specifically, the study [C], which introduced a defense approach at ICLR 2023 as you mentioned, reported AUC scores around 0.5 for the Frequency defense method when applied against the SSBA attack on both the CIFAR10 and Tiny ImageNet datasets. These outcomes parallel our findings, corroborating that the Frequency method inadequately addresses the SSBA attack.
In terms of implementation, we implemented all the backdoor attacks, including the SSBA attack, using open-source packages, e.g., [A] and [B]. The resulting backdoor models achieve both high clean and backdoor accuracy on the CIFAR10, GTSRB, and Tiny ImageNet datasets. For instance, a clean accuracy of 95\% and a backdoor accuracy of 97\% are observed on the GTSRB dataset for the SSBA attack.
Thank you again for your feedback on our response and the increased score. If you have any more questions, feel free to let us know.
### Reference
[A] T. Xie, “Backdoor toolbox,” https://github.com/vtu81/backdoor-toolbox, 2022.
[B] B. Wu, H. Chen, M. Zhang, Z. Zhu, S. Wei, D. Yuan, and C. Shen, “Backdoorbench: A comprehensive benchmark of backdoor learning,” in NIPS Datasets and Benchmarks Track, 2022.
[C] Guo, Junfeng, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, and Cong Liu. "Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency." in ICLR 2023. | Summary: This paper proposes a backdoor sample detection method. It utilizes Mahalanobis distance as the score function to compute the probability of a given sample being poisoned. It also leverages an existing statistical tool, the conformal prediction framework, to determine a statistical threshold for the computed scores given a false positive rate (recognizing clean data as poisoned). The experiments are conducted on three image datasets and two text datasets. Comparing to several baseline methods, the proposed approach achieves higher detection rate on poisoned samples.
Strengths: 1. This paper studies an important problem of detecting backdoor samples. The evaluation shows the proposed approach is effective against various attacks.
2. The conformal prediction framework is used in the paper to statistically balance the true positive rate and the false positive rate, which is interesting. The Mahalanobis distance is leveraged to differentiate the statistical difference between clean and poisoned data, which is empirically validated in the paper.
Weaknesses: 1. While this paper compares with a few baselines, it still misses a closely related work SPECTRE [41]. SPECTRE also leverages statistics techniques to estimate the mean and the covariance of the clean data, which is then utilized to differentiate poisoned samples from the clean data. What is the fundamental difference between the proposed approach and SPECTRE? This paper seems to just use a different statistics tool to estimate the clean distribution. Fundamentally, it is no different from SPECTRE. However, there is no discussion, comparison, and empirical evaluation regarding SPECTRE.
2. Although the evaluation considers a number of attacks, an important aspect is missed in the paper. A knowledgeable attacker who is aware of the proposed detection approach can design an attack specifically targeting it. Particularly, this paper uses the Shrunk-Covariance Mahalanobis (SCM) score function to distinguish clean and poisoned data. An adaptive attacker can use this function during the attack to reduce the scores for poisoned samples. This is critical to demonstrate the robustness of the proposed detection approach. In addition, there are several strong attacks that were designed to evade statistics-based defenses [1,2]. They should be empirically evaluated.
3. In Algorithm 1, a transformation method T is introduced as the defender-constructed transformation on the original input. However, there is no explanation of what this function is and how it is applied to the input.
[1] Doan, Khoa, Yingjie Lao, and Ping Li. "Backdoor attack with imperceptible input and latent modification." Advances in Neural Information Processing Systems 34 (2021): 18944-18957.
[2] Shokri, Reza. "Bypassing backdoor detection algorithms in deep learning." 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewers for investing their time and energy into reviewing our manuscript and offering valuable feedback. We're pleased that they recognized the quality of our writing, acknowledged the novelty of our proposed framework (Reviewer vAAp), and found our approach effective across various domains and setups (JEgp, ibgE, 1Ekh). We will incorporate all their comments into the revised paper.
> Q: Comparaison with SPECTRE [41]
R: Fundamentally, our approach diverges significantly from SPECTRE in terms of both the threat model and technical components, detailed as follows.
1. **Differences in terms of the threat model, goals, and evaluation metrics**. SPECTRE [41] is a training-stage defense approach with access to both clean and backdoor training data, **affording it information about both clean and backdoor data**. Their goal is to differentiate between clean and backdoor training data, ultimately constructing a clean model through the use of filtered clean training data, obtained by analyzing all training instances. Hence, their evaluation metrics are the clean accuracy and attack success rate of the cleansed model.
In contrast, our method relies solely on a small (e.g., 1000) set of clean validation data and operates **without any knowledge of backdoor data**. Our goal is to detect future backdoor inputs, and our evaluation metric is the AUCROC score of the detector.
2. **Differences in terms of Technical Aspects**. SPECTRE employs robust statistics to separate clean and backdoor training data, with **access to both clean and backdoor training data**. On the other hand, our SCM technique is employed to mitigate numerical challenges when estimating high-dimensional covariance matrices for the set of clean validation data. Notably, there is **no information available regarding the backdoor data distributions**.
Therefore, direct comparisons between SPECTRE [41] and our methods may not be equitable or appropriate.
**Nonetheless**, we have included the AUCROC scores of SPECTRE on distinguishing between clean and backdoor data in Tables 1 and 2 below. We observed that our method consistently outperforms SPECTRE under different types of backdoor attacks. These results underscore our method's superiority over SPECTRE in terms of detection performance.
Table 1: AUCROC performance comparison of our method with SPECTRE [41] on CIFAR10
| Defense ↓ | BadNets| Dynamic | SSBA | Adaptive-Blend | Adaptive-Patch |
| :---------------- | :------: | :----: |:----: |:----: |:----: |
| Our Method | 1.0 | 1.0 | 0.97 | 0.96 | 0.98 |
| SPECTRE [41] | 0.95 | 0.96 | 0.56 | 0.62 | 0.61|
Table 2: AUCROC performance comparison of our method with SPECTRE [41] on GTSRB
| Defense ↓ | BadNets| Dynamic | SSBA | Adaptive-Blend | Adaptive-Patch |
| :---------------- | :------: | :----: |:----: |:----: |:----: |
| Our Method | 0.99 | 0.99 | 0.99 | 0.99 | 0.87 |
| SPECTRE [41] | 0.96 | 0.95 | 0.56 | 0.62 | 0.7|
> Q: Robustness of the proposed detection approach against a knowledgeable; comparison with strong attacks that were designed to evade statistics-based defenses [1,2].
R: Taking your advice into account, we developed an attack inspired by [1], referred to as *M-attack*, which introduces a regularization term to reduce the Mahalanobis scores between clean and backdoor samples. Moreover, in response to your recommendations, we conducted supplementary experiments to evaluate our method's effectiveness against attacks [1,2], as summarized in Tables 3 and 4 below.
Table 3: AUCROC performance comparison of our method under *M-attack*, [1] and [2] on CIFAR10
| Defense ↓ | BadNets | SSBA | Adaptive-Patch | *M-Attack* | [1] | [2] |
| :---------------- | :------: | :----: |:----: |:----: |:----: | :----: |
| Our Method | 1.0 | 0.97 | 0.98 | 0.85 | 0.84 | 0.89 |
| STRIP | 1.0 | 0.68 | 0.76 | 0.71 | 0.69 | 0.74 |
Table 4: AUCROC performance comparison of our method under *M-attack*, [1] and [2] on GTSRB
| Defense ↓ | BadNets | SSBA | Adaptive-Patch | *M-Attack* | [1] | [2] |
| :---------------- | :------: | :----: |:----: |:----: |:----: | :----: |
| Our Method | 0.99| 0.99 | 0.87 | 0.82 | 0.86 | 0.85|
| STRIP | 0.99| 0.80 | 0.33 | 0.68 | 0.62 | 0.81 |
We observed that our methods still outperform other state-of-the-art defenses, even though there is a moderate decline in performance relative to other attack scenarios like BadNets and SSBA. Nonetheless, we believe that this is a reasonable outcome, given that no defense can be universally effective against all attack variations.
> Q: notations on T
R: We introduced this transformation $T$ and offered specific examples in Lines 171 - 175 of the main text, before presenting the Pseudocode of Algorithm 1. In the revised version, we plan to incorporate these discussions directly into the pseudocode of Algorithm 1 to enhance clarity.
Specifically, $T(\cdot)$ denotes a defender-constructed transformation that
typically depends on both the backdoored model $f^{\text{poi}}$ and the validation dataset $D^{\text{Val}}$, to reflect special properties of backdoor data, e.g., the predicted value
$T(X_{\text{test}}) = f^{\text{poi}}(X_{\text{test}})$ and the latent representation $T(X_{\text{test}}) = \phi^{\text{poi}}(X_{\text{test}})$.
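As a toy illustration of the latent-representation choice of $T$ described above, the following sketch forwards an input through the hidden layers of a small stand-in MLP and returns the penultimate activations (nothing here is the authors' actual model):

```python
import numpy as np

def penultimate_T(x, layers):
    """Toy transformation T: forward an input through all hidden layers of
    a small ReLU MLP and return the penultimate-layer activations, a
    stand-in for phi^poi(x) in the text. `layers` is a list of (W, b)."""
    h = np.asarray(x, dtype=float)
    for W, b in layers[:-1]:            # stop before the output head
        h = np.maximum(0.0, h @ W + b)  # ReLU hidden layer
    return h
```

The returned vector would then be scored (e.g., by the SCM score) against the latents of the clean validation set.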
### Reference
[1] Doan, Khoa, Yingjie Lao, and Ping Li. "Backdoor attack with imperceptible input and latent modification." Advances in Neural Information Processing Systems 34 (2021): 18944-18957.
[2] Shokri, Reza. "Bypassing backdoor detection algorithms in deep learning." 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
Thanks for providing the results on one of the strong attacks. It would be better to evaluate on an adaptive attack that is tailored for the proposed detection method.
Could you provide a specific example of function T? Is it an encoder model?
---
Reply to Comment 1.1.1:
Title: Thanks for the reviewer's comments; Clarification on the new experiments and T
Comment: We would like to express our appreciation to the reviewer for bringing up the subsequent inquiries. We proceed to respond to the reviewer's comments as follows.
> Thanks for providing the results on one of the strong attacks. It is would be better to evaluate on an adaptive attack that is tailored for the proposed detection method.
In our rebuttals above, we have provided **Tables 1 and 2**, where we carried out **three additional distinct backdoor attacks** to showcase the efficacy of our methodologies. The **M-attack is designed specifically to target our defense method**, focusing on reducing the Mahalanobis distance between the clean and the backdoor latent representations. Furthermore, as per your suggestions, we also subjected our method to testing against **two other approaches** [1,2] that similarly target our method. These outcomes underscore the consistent effectiveness of our approach across various backdoor threat models.
To summarize, we have rigorously evaluated our method against a **comprehensive set of 16 distinct backdoor attack types**. However, should you have additional types of backdoor attacks in mind that you would like to see addressed, kindly provide us with detailed information, and we would be more than willing to conduct tests in those scenarios.
> Could you provide a specific example of function T? Is it an encoder model?
A common example of $T$ is the latent representation, i.e., the output of the penultimate layer of the backdoored model. This choice is based on the assumption that the latent representation encapsulates high-level information from the original data.
Strengths: 1. The paper is well-organized with a comprehensive structure. For example, section 3, first formulates the backdoor detection and then illustrates the proposed conformal detection, followed by the derivation of optimal score functions and a practical proxy, which is clearly illustrated.
2. The proposed method can work well under different domains, including CV and NLP, which are more general than the domain-specific methods.
3. The idea is theoretically correct and easy to follow.
4. The experiments are reliable with 10 times independent repetition as illustrated in 4.1.
5. The comparisons under different backdoor attacks are sufficient.
Weaknesses: 1. Some redundancy and unclear expressions exist. For example,
1). The $\lambda_{\alpha,s}$ first appears in line 5 of Algorithm 1, while $\tau$ is used in Equation (1); the relationship between them should be clearly stated.
2). The backdoor trigger is defined unclearly and is easy to misunderstand in section 3.3. The symbol $\eta_1$, denoting the backdoor trigger, is used in lines 225 and 244, while in line 233 it represents the backdoor transformation. Also, $\gamma$ is used in other places.
2. Lack of comparisons with the SOTA backdoor defense methods that include the separation of the input data, such as the ABL[1] and DBD[2].
3. The implementation code of this paper has not been released.
[1] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." *Advances in Neural Information Processing Systems* 34 (2021): 14900-14912.
[2] Huang, Kunzhe, et al. "Backdoor defense via decoupling the training process." *arXiv preprint arXiv:2202.03423* (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewers for investing their time and energy into reviewing our manuscript and offering valuable feedback. We're pleased that they recognized the quality of our writing, acknowledged the novelty of our proposed framework (Reviewer vAAp), and found our approach effective across various domains and setups (JEgp, ibgE, 1Ekh). We will incorporate all their comments into the revised paper.
> Q: Some redundancy and unclear expressions exist
R:
- The relationship between $\tau$ and $\lambda_{\alpha,s}.$ In brief, $\tau$ and $\lambda_{\alpha,s}$ denote the same decision value, but they are referenced differently in the text and Algorithm 1's pseudocode. Specifically, within our BCD framework, $\tau$ is sought as a solution to Equation (1) based on given $\alpha$ (type I error rate), $\delta$ (violation rate), and $n$ (sample size of the validation dataset). This $\tau$ is later employed as the decision threshold $\lambda_{\alpha,s}$ in the pseudocode of Algorithm 1. We will correct and unify the notations in the revision.
- The backdoor transformation $\eta_1$ and the backdoor trigger $\gamma$.
We use $\eta_1$ to denote the general backdoor transformation (from clean data to backdoor data). For instance, in poisoning backdoor attacks, we have $\eta_1(x) = x + \gamma$, with $\gamma$ being the backdoor trigger.
In Line 225, when we mention "transforming clean data with backdoor triggers $\eta_1$," we are referring to converting clean data into backdoor data using the backdoor transformation $\eta_1$.
In Line 244, there is a typo. It should read "backdoor triggers $\gamma \in T_c$ that" instead of "backdoor triggers $\eta \in T_c$ that".
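The threshold selection described above can be illustrated with a minimal sketch. It assumes a simple conformal quantile rule on clean validation scores and omits the violation rate $\delta$ from Equation (1), so it is not the paper's exact procedure:

```python
import numpy as np

def conformal_threshold(scores, alpha):
    """Pick a decision threshold from clean validation scores so that the
    fraction of clean samples flagged (type I error) is at most alpha.
    A standard conformal choice: the ceil((n+1)*(1-alpha))-th smallest score."""
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(0)
# Stand-in detector scores on a small clean validation set (e.g., n = 1000).
clean_scores = rng.normal(0.0, 1.0, size=1000)
tau = conformal_threshold(clean_scores, alpha=0.05)
# Inputs whose score exceeds tau are flagged as suspected backdoor samples;
# on the validation set itself, the flagged fraction stays within alpha.
fpr = np.mean(clean_scores > tau)
```

In the notation above, this `tau` plays the role of the decision threshold $\lambda_{\alpha,s}$ in Algorithm 1.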
> Q: Comparisons with the SOTA backdoor defense methods that include the separation of the input data, such as ABL [1] and DBD [2].
R: First, it's important to emphasize that the ABL [1] and DBD [2] methods differ significantly from our proposed approaches in terms of threat models and methodology, as detailed below.
ABL [1] and DBD [2] are **training-stage defense** approaches with access to both clean and backdoor training data, **affording them information about both clean and backdoor data**. Their goal is to differentiate between clean and backdoor training data, ultimately constructing a clean model through the use of filtered clean training data, obtained by analyzing all training instances. Hence, their evaluation metrics are the clean accuracy and attack success rate of the cleansed model.
In contrast, our method is an **inference-stage defense** that relies solely on a very small (e.g., 1000) set of clean validation data and operates **without any knowledge of backdoor data**. Our goal is to detect future backdoor inputs, and our evaluation metric is the AUCROC score of the detector.
Therefore, direct comparisons between ABL [1], DBD [2], and our methods may not be equitable or appropriate.
**Nonetheless**, we have included the AUCROC scores of ABL [1] and DBD [2] on distinguishing between clean and backdoor data in Tables 1 and 2 below. Our method consistently outperforms ABL [1] and DBD [2] under different types of backdoor attacks. These results underscore our method's superiority over ABL [1] and DBD [2] in terms of detection performance.
Table 1: AUCROC performance comparison of our method with ABL[1] and DBD[2] on CIFAR10
| Defense ↓ | BadNets| Dynamic | SSBA | Adaptive-Blend | Adaptive-Patch |
| :---------------- | :------: | :----: |:----: |:----: |:----: |
| Our Method | 1.0 | 1.0 | 0.97 | 0.96 | 0.98 |
| ABL | 0.98 | 0.92 | 0.81 | 0.79 | 0.72|
| DBD | 0.98 | 0.89 | 0.87 | 0.78 | 0.67|
Table 2: AUCROC performance comparison of our method with ABL[1] and DBD[2] on GTSRB
| Defense ↓ | BadNets| Dynamic | SSBA | Adaptive-Blend | Adaptive-Patch |
| :---------------- | :------: | :----: |:----: |:----: |:----: |
| Our Method | 0.99 | 0.99 | 0.99 | 0.99 | 0.87 |
| ABL | 0.97 | 0.85 | 0.81 | 0.59| 0.72|
| DBD | 0.97 | 0.84| 0.77| 0.78 | 0.67|
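As a side note, the AUCROC values reported in these tables can be computed from detector scores via the rank-based Mann-Whitney statistic. A minimal sketch, assuming no tied scores (illustrative only; the actual evaluation may use a library implementation):

```python
import numpy as np

def auc_roc(clean_scores, backdoor_scores):
    """AUCROC as the probability that a randomly chosen backdoor sample
    scores higher than a randomly chosen clean sample (no ties assumed)."""
    all_scores = np.concatenate([clean_scores, backdoor_scores])
    ranks = all_scores.argsort().argsort() + 1  # 1-based ranks
    n_c, n_b = len(clean_scores), len(backdoor_scores)
    rank_sum_b = ranks[n_c:].sum()
    return (rank_sum_b - n_b * (n_b + 1) / 2) / (n_c * n_b)

# Perfect separation of clean and backdoor scores yields an AUCROC of 1.0,
# matching the best entries in the tables above.
demo = auc_roc(np.array([0.1, 0.2, 0.3]), np.array([0.7, 0.8, 0.9]))  # demo == 1.0
```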
> Q: Code Release
R: We have already included preliminary code in the originally uploaded supplementary materials, allowing reproduction of the NLP backdoor attack results showcased in Figure 4 of the main text. Replicating all other results merely requires adjusting the clean and backdoor representations. That being said, following convention, we will release the complete version of our code upon the paper's acceptance.
### Reference
[1] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." Advances in Neural Information Processing Systems 34 (2021): 14900-14912.
[2] Huang, Kunzhe, et al. "Backdoor defense via decoupling the training process." arXiv preprint arXiv:2202.03423 (2022).
---
Rebuttal Comment 1.1:
Comment: We sincerely appreciate Reviewer JEgp for their insightful and positive comments. In our response, we have addressed concerns regarding (1) conducting additional experimental studies to compare performance with the mentioned SOTA method, and (2) clarifying notations and code release.
As the discussion phase nears its conclusion, we kindly inquire if the reviewer has any further comments on our response. We are readily available for any additional queries they may have.
Once more, we appreciate your time and effort in reviewing our paper. | Summary: This paper proposes a unified inference-stage detection framework to defend against backdoor attacks. The authors first formulate the inference-stage backdoor detection problem, discuss its challenges and limitations, and then suggest a framework with provable guarantees on the false positive rate or the probability of misclassifying a clean sample. The authors also derive a detection rule to maximize the rate of accurately identifying a backdoor sample, given a false positive rate under classical learning scenarios. Based on this, they then suggest a practical and effective approach for real-world applications. The proposed method was evaluated on 12 different backdoor attacks on computer vision and NLP benchmarks. The experimental findings align with the theoretical results, showing significant improvements over the state-of-the-art methods.
Strengths: - The proposed framework for defending against backdoor attacks is novel to the best of my knowledge.
- The paper is sound and decently written (beyond some mathematical clutter, see below).
- The proposed method is validated through extensive experiments on multiple datasets and compared to many existing defenses, demonstrating its effectiveness.
- The authors provide a theoretical analysis and derive technical insights on toy settings which motivates their practical defense on the real settings.
Weaknesses: - There is quite a bit of mathematical clutter in S2 and S3 which I believe can be avoided for a smoother read.
- Some of the details about the backdoored models are not present in the paper (see below), which makes it a bit hard to assess the faithfulness of the comparisons to previous models.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - For all the backdoored models used in the paper, what is the percentage of poisoned training data, and what are the clean and robust accuracies of these models (before the defense)? It is crucial to know this, as different defenses might behave differently as the “strength” of the backdoor attack varies. I encourage the authors to include these details along with an ablation study for this.
- As shown in Fig 4, the proposed method's performance is not very different from prior work. The authors “justify” this by saying “These findings suggest that the current NLP attacks retain a considerable amount of information in the latent representations that can be utilized to differentiate between clean and backdoor data.” Can the authors explain this in more detail? What is special about NLP? Are backdoor attacks themselves weaker there? This ties back to my first point on clarifying the performance of the backdoor models on clean and modified data.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewers for investing their time and energy into reviewing our manuscript and offering valuable feedback. We're pleased that they recognized the quality of our writing, acknowledged the novelty of our proposed framework (Reviewer vAAp), and found our approach effective across various domains and setups (JEgp, ibgE, 1Ekh). We will incorporate all their comments into the revised paper.
> Q: Implementation details
R: To fit the page limit, we’ve included all the implementation details and several ablation studies in the originally uploaded supplementary material. We summarize them below.
1. Open-source package used. All the backdoor attacks and associated backdoor data in our work are obtained based on open-source projects. For computer vision (CV) attacks, we employed the Backdoor ToolBox [1] and BackdoorBench [2] to ensure result consistency. NLP attacks were conducted using the specialized OpenBackdoor [3] package.
2. We provide the poisoning rate, clean accuracy, and backdoor accuracy for the backdoor attacks presented in the main text in Table 1 below. In general, the backdoored models exhibit clean accuracy comparable to the normal models, while achieving nearly perfect accuracy on backdoor data with a relatively low poisoning ratio.
Table 1: Poisoning ratio, clean and backdoor accuracy of backdoor attacks used in our paper. * For the SSBA and WaNet attacks, backdoor poisoning rates are 5\%. For the A-Blend and SIG attacks, the backdoor accuracies are around 80\%. † For the A-Patch attack, the backdoor accuracy is around 60\%.
| | |Backdoor Model| Backdoor Model| Clean Model |
|--------------|-----------------|----------------|------------------|----------------|
| Dataset ↓ | Poisoning Ratio | Clean Accuracy | Backdoor Accuracy | Clean Accuracy |
| CIFAR10* | 0.3% | ≥ 93% | ≥ 97% | ≥ 93% |
| GTSRB† | 1% | ≥ 95% | ≥ 97% | ≥ 96% |
| Tiny ImageNet | 1% | ≥ 37% | ≥ 96% | ≥ 40% |
| SST2 (NLP) | 10% | ≥ 91% | ≥ 99% | ≥ 92% |
| IMDB (NLP) | 10% | ≥ 90% | ≥ 96% | ≥ 91% |
> Q: Ablation study on poisoning rates and model architectures
R: We provide the outcomes of two ablation studies: (i) Table 2 below, addressing the poisoning ratio, and (ii) **Figure 1 in the PDF file of the global response**, depicting detection power with VGG 19. Overall, our method's detection performance remains consistent across diverse poisoning ratios and architectures. This highlights the stability and efficacy of our approach.
Table 2: AUCROC of our proposed method on CIFAR10
| Poisoning ratio | 0.3% | 1% | 5% |
| :---------------- | :------: | :----: | :----: |
| BadNets | 0.99 | 0.99 | 0.99 |
| Blended | 0.96 | 0.95 | 0.96 |
| WaNet | 0.85 | 0.91 | 0.93 |
| SSBA | 0.92 | 0.95 | 0.97 |
> Q: Explanations on the claim of "These .. data." regarding the performance of our methods on NLP backdoor attacks
R: We explain our claim of "These .. data." in the following.
- **The discrete nature of NLP data often leads to noticeable NLP backdoor triggers**.
NLP backdoor attacks stand apart from CV backdoor attacks due to the distinct nature of their data representations. Image data is commonly described by continuous values, while textual data is symbolic and discrete. This discreteness often renders NLP backdoor triggers more conspicuous [4], in contrast to the human-imperceptible CV backdoor triggers [5]. For example, the SOS attack [6] introduces the irrelevant and easily noticeable word combination 'mn' into the text.
- **Noticeable NLP backdoor triggers lead to clear distinctions between the latent representations of clean and backdoor data.**
Due to the conspicuous nature of backdoor triggers in NLP attacks, distinct differences emerge in the latent representations of clean and backdoor data under different backdoor attacks. Defenders can leverage these differences to accurately discern, e.g., with near-perfect accuracy, between clean and backdoor data, as empirically demonstrated in our main text.
As a result of the above two points, we made the claim that "current NLP attacks retain a ... utilized to differentiate between clean and backdoor data" in our main text. We will improve the clarity of this statement in the revision.
> Q: mathematical clutter in S2 and S3
R: We will enhance the clarity of concepts, expressions, and language in the revised version.
### Reference
[1] T. Xie, “Backdoor toolbox,” https://github.com/vtu81/backdoor-toolbox, 2022.
[2] B. Wu, H. Chen, M. Zhang, Z. Zhu, S. Wei, D. Yuan, and C. Shen, “Backdoorbench: A comprehensive benchmark of backdoor learning,” in NIPS Datasets and Benchmarks Track, 2022.
[3] G. Cui, L. Yuan, B. He, Y. Chen, Z. Liu, and M. Sun, “A unified evaluation of textual backdoor learning: Frameworks and benchmarks,” arXiv preprint arXiv:2206.08514, 2022.
[4] Chen, X., Salem, A., Chen, D., Backes, M., Ma, S., Shen, Q., and Zhang, Y. "Badnl: Backdoor attacks against nlp models with semantic-preserving improvements," in ACSAC, 2021
[5] Anh Tuan Nguyen and Anh Tuan Tran. WaNet – Imperceptible warping-based backdoor attack. In ICLR, 2021.
[6] W. Yang, Y. Lin, P. Li, J. Zhou, and X. Sun, “Rethinking stealthiness of backdoor attack against nlp models,” in ACL, 2021.
---
Rebuttal Comment 1.1:
Comment: We sincerely appreciate Reviewer vAAP for their insightful and positive comments. In our response, we have addressed concerns regarding (1) implementation details, (2) ablation studies on different poisoning rates and model architectures, and (3) NLP backdoor attack issues.
As the discussion phase nears its conclusion, we kindly inquire if the reviewer has any further comments on our response. We are readily available for any additional queries they may have.
Once more, we appreciate your time and effort in reviewing our paper.
---
Rebuttal Comment 1.2:
Title: Thanks for clarification
Comment: I thank the authors for their time to clarify my concerns. I think in particular these extra experimental details are important for clarity of the paper. I am happy to maintain my score.
---
Reply to Comment 1.2.1:
Title: Thanks to Reviewer vAAP
Comment: Thank you for your helpful feedback on our responses. We'll make sure to include these suggestions when revising our work. | Rebuttal 1:
Rebuttal: We sincerely thank reviewers for investing their time and energy into reviewing our manuscript and offering valuable feedback. We're pleased that they recognized the quality of our writing, acknowledged the novelty of our proposed framework (Reviewer vAAp), and found our approach effective across various domains and setups (Reviewer vAAP, JEgp, ibgE, 1Ekh). We will incorporate all their comments into the revised paper.
In the following, we will begin by addressing a potentially unclear aspect related to our method's threat model and its comparison with various defenses mentioned by the reviewers. After that, we'll provide a concise summary of our tailored responses for each reviewer.
Regarding the threat model, our approach serves as an **inference-stage defense** with **no access to the training data** and **no ability to manipulate the training process**, including model parameter adjustments. Similarly to other inference-stage defenses [1,2], we operate under the assumption that the defender possesses a limited clean validation dataset but **no foreknowledge of future backdoor test inputs**. Our goal is to detect future backdoor inputs, and our evaluation **metric is the AUCROC score** of the detector.
In contrast, the defenses mentioned by the reviewers, namely ABL [3], DBD [4], and SPECTRE [5], belong to the training-stage defense category. These approaches have the advantage of **accessing both clean and backdoor training data**, which provides them insights into both types of data. Their objective is to build a clean model by isolating a subset of the training set through analysis of specific characteristics of all training examples. Consequently, their evaluation criteria focus on the **clean and attack success rates** of the purified model.
- **Rebuttal Summary for Reviewer vAAP**:
1. Added implementations details of backdoor attacks
2. Added Ablation studies on the poisoning rate and different model architectures (*Figure 1 in Global Response PDF file*)
3. Clarified issues regarding NLP backdoor attacks in Fig 4
- **Rebuttal Summary for Reviewer JEgp**:
1. Clarified issues regarding notations
2. Added experiments on performance comparison with SOTA backdoor defenses methods mentioned by reviewers
3. Clarified issues regarding the code release
- **Rebuttal Summary for Reviewer ibgE**:
1. Clarified the relationship between SPECTRE and our method; Providing empirical evaluations between SPECTRE and our method
2. Added experiments on the performance of our methods under the reviewer-mentioned backdoor attacks, and a new backdoor attack specifically targeted for our method (proposed by ourselves)
3. Clarified notations regarding the data transformation $T$
- **Rebuttal Summary for Reviewer 1Ekh**:
1. Explained intuitions for the proxy terms
2. Added experiments on reviewer-mentioned defenses and attack(s) (*Table 2 in Global Response PDF file*)
3. Added ablation studies on different poisoning ratios; new backdoor attacks specifically targeted at our method; and the OOD dataset (*Tables 3 & 4 in Global Response PDF file*)
4. Clarified issues regarding the selection of $\delta$
5. Clarified issues regarding the claim in Lines 109 - 111
6. Clarified issues regarding the observations in Figure 2
7. Added histograms for SCM scores (*Figure 2 in Global Response PDF file*)
### References
[1] Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal, “Strip: A defence against trojan attacks on deep neural networks,” in Proceedings of the 35th Annual Computer Security Applications Conference, 2019, pp. 113–125.
[2] J. Guo, Y. Li., X. Chen, H. Guo, L. SUN, and C. Liu. "Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency," in ICLR 2023.
[3] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." Advances in Neural Information Processing Systems 34 (2021): 14900-14912.
[4] Huang, Kunzhe, et al. "Backdoor defense via decoupling the training process." arXiv preprint arXiv:2202.03423 (2022).
[5] J. Hayase, W. Kong, R. Somani, and S. Oh, “Spectre: defending against backdoor attacks using robust statistics,” in ICML, 2021.
Pdf: /pdf/a65531553402c1a556d1d9575d641040af488adb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On Proper Learnability between Average- and Worst-case Robustness | Accept (poster) | Summary: This paper initiates the study of a new kind of PAC learning: probabilistically robust PAC learning. The authors show that the finiteness of the VC dimension of the function class is not sufficient to obtain a proper learning rule in this new PAC learning setup. However, they show that for Lipschitz losses that interpolate between the average and worst case, proper learning is possible given finite VC dimension. They also consider the settings of adversarially robust PAC learning and tolerant PAC learning; ultimately, this results in several extensions to previous works on these topics.
Strengths: **New problem setting.** This is a new problem. The authors consider several recent results in the robustness literature concerning robustness between the average and worst case, and they form a new definition of probabilistically robust PAC learning. I imagine that this is a direction that other may be interested in, and therefore this constitutes an interesting contribution.
**Non-proper learnability.** Perhaps the most interesting/surprising result here is that the finiteness of the VC dimension of the function class is not sufficient for proper $\rho$-probabilistic robust PAC learning. This implies that both probabilistic and adversarial robustness do not easily admit proper learning rules. This is somewhat surprising, given previous empirical results that find that probabilistic robustness does not come at the cost of degraded nominal performance with respect to an empirical risk minimizer. Building on this, it is also interesting that there do exist interpolation schemes that do admit proper learning rules given finite VC dimension. In this way, this paper is a first step toward characterizing a hierarchy of interpolating losses based on which ones admit proper learning rules. It would be interesting to know whether other interpolation methods, e.g., those in [Rice et al., 2021] and [Li et al., 2020] also engender proper learning rules.
**Technical rigor.** The paper seems technically sound to me. The proofs in the appendix are well-structured, and the array of tools used in the appendix may be of broader interest to the community.
Weaknesses: **Unevenness of the presentation.** There's a certain unevenness about the presentation of the main results. In general, the trend is that as one gets further into the paper, the results get harder to parse. This doesn't seem to be a function of the complexity of the results; rather, it seems as if less space was dedicated to fully explaining the results that appear later in the paper.
For instance, the results in Section 3 are outlined in detail. Theorem 3 tells us that there exist hypothesis classes for which $\rho$-probabilistically robust PAC learning is not possible, and the proof is sketched almost in full. However, by the time we reach Section 5, the results are stated with less explanation. The text becomes quite difficult to read because nearly every equation on pages 8-9 is inline. Definitions and theorems seem to be crammed in, and the paper ends without discussing the implications of the final theorem. I think that readability would be greatly improved if the authors moved the proofs from Section 3 to the appendix, and (i) expanded more on the results later in the paper, (ii) broke up the text by putting long equations on their own lines (i.e., not in-line), and (iii) added a discussion section where the implications of the results are discussed. In its current form, the paper ends rather abruptly, and I think point (iii) would help to ameliorate this.
**Writing and notation.** I think that the presentation could be improved in several ways. Here are some points that occurred to me while reading the paper:
* It would also be helpful if the authors could use equation numbers.
* It's confusing that the authors introduce VC dimension and Rademacher complexity, but a definition arguably more central to this paper -- the definition of a *proper* learning rule -- was omitted.
* ERM seems to be used to denote "empirical risk minimization" (throughout) and "empirical risk minimizer" (line 95). I think it'd be worth picking one.
* What is $(\mathcal{X}\times\mathcal{Y})^\star$, i.e., what does the $\star$ denote?
* Why is there a change in notation from $\mathcal{L}^{\mathcal{H}}$ to $\mathcal{F}$ in Definition 2?
**Section 5.3.** When reading, I wasn't sure how Section 5.3 fit in with the rest of the paper. Whereas the other sections of the paper focus on proper learnability for probabilistic robustness and generalizations which interpolate between average and worst-case robustness, Section 5.3 seems to address questions that are somewhat different in spirit. I suppose one could argue that in the tolerance setting, reducing $\mathcal{G}$ to a singleton set containing the identity function would show that this paradigm can interpolate between the standard PAC model and robust variants, but this connection feels tenuous. Perhaps the authors could elaborate more on why this section fits in with this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it fair to say that the fact that the probabilistic robustness loss function is non-Lipschitz is the reason behind Theorem 3.1? In other words, is Lipschitzness a necessary condition? My understanding from Section 4 is that it is sufficient, but it may not be necessary.
**Overall assessment.** Overall, I do not see a strong reason for this paper not to be accepted. It studies a new problem and the insights are novel and interesting. There are some drawbacks regarding the presentation, but one imagines that these can be easily ironed out.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding the results to be a surprising/interesting contribution, this setting to be interesting to others, and the array of tools used to be of broader interest to the community.
**Q1**: *"Unevenness of the presentation."*
A1: We agree with the reviewer and will make sure to incorporate all three recommended changes: (i) expand more on the results later in the paper, (ii) break up the texts by putting long equations on their own lines (i.e., not in-line), and (iii) add a discussion section where the implications of the results are discussed.
**Q2**: *"Writing and notation."*
A2: We agree with all of the reviewer's comments and will make the recommended changes in the camera-ready version. The star in $(X \times Y)^*$ is used to denote the space of arbitrarily long, but finite, sequences of labeled instances. The change in notation from $L^H$ to $F$ is a typo and will be fixed in the camera-ready version.
**Q3**: *"How does Section 5.3 fit in the paper?"*
A3: One central theme of the paper is identifying settings where proper learners, and more specifically, ERM work. In Section 4, we showed that ERM works when \ell is a Lipschitz loss function of the probabilistic margin. In Section 5.1, we showed that PRERM works if you allow the learner to compete against a slightly stronger notion of probabilistic robustness. In Section 5.2, we showed that RERM works if you compare the learner's probabilistic robust risk to the best adversarially robust risk over hypotheses in H. Likewise, in Section 5.3, we identify another setting where RERM works. Sections 5.1, 5.2, and 5.3 are further unified in the sense that they all consider a learning setting where the learner competes against a slightly stronger notion of robustness. Finally, another unifying theme throughout Section 5 is the use of Lemma 5.1, named Sandwich Uniform Convergence. Indeed, Lemma 5.1 is used to prove all results in Sections 5.1, 5.2, and 5.3.
**Q4**: *"Is it fair to say that the fact that the probabilistic robustness loss function is non-Lipschitz is the reason behind Theorem 3.1. In other words, is Lipschitzness a necessary condition? My understand from Section 4 is that it is sufficient, but it may not be necessary."*
A4: Lipschitzness is sufficient but, in full generality, not necessary for proper learnability. For example, the loss function that completely ignores G and \mu and just computes the 0-1 loss is not Lipschitz; however, it is learnable via ERM. That said, among those losses that are a function of the probabilistic robust margin, it is an interesting open question whether Lipschitzness is necessary for proper probabilistic robust learnability. If the loss function \ell is not a Lipschitz function of the probabilistic robust margin, we may be able to construct a counterexample similar to the one in Lemma 3.2 by having the probabilistically robust loss class "spike" in all possible combinations across the sample while maintaining low overall complexity of the original hypothesis class. We will include a discussion of this in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Rebuttal response
Comment: **Q1 and Q2:** Great, I think that this will improve the paper.
**Q3:** I'm still a bit confused about this. The title of the paper is "On Proper Learnability between Average- and Worst-case Robustness," and the argument made in the rebuttal doesn't do a lot to convince the reader that tolerant PAC learning fits within the bounds of the average-to-worst-case paradigm. Generally, I don't quite see the connection between competing with a stronger notion of robustness and probabilistic notions of robustness.
**Q4:** I think adding this discussion will sure up this part of the paper.
Other than that, I don't have much to say. I think that this paper should be accepted.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer dxD9
Comment: We proved a technical lemma, termed Sandwich Uniform Convergence (SUC), to derive results on proper probabilistic robust learnability. However, we show that SUC is a general technical tool that has wider applicability by including the section on Tolerant Robust Learning. We agree that Section 5.3 is slightly tangential from the main story. We will move this Section to the Appendix and use the additional space to address reviewer feedback. | Summary: This paper investigates the relaxations of the worst-case robust loss to make VC classes properly PAC learnable. Firstly, this paper shows that an exsiting and natural relaxation does not work. Then, the paper gives a family of robust loss relaxations that interpolate between average- and worst-case robustness. Finally, the paper studies the generalization guarantees for the adversarially robust empirical risk minimizer.
Strengths: - This paper shows that an existing and natural relaxation does not work.
- This paper gives a family of robust loss relaxations that interpolate between average- and worst-case robustness and shows that they make the VC classes properly learnable. The results are interesting.
Weaknesses: - Lack of descriptions about the high-level intuitions (please refer to the questions part).
- Some minor issues. The label space is defined as $\mathcal{Y} = \\{ -1, 1 \\}$, however, in the proof of Lemma 3.2, the paper uses $\\{ 0,1 \\}$. In Lemma 4.1, it seems that $\ell$ needs to be bounded but the paper ignores it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would you please show some ideas about the proof of Lemma 3.2? When considering this problem, is it the case that you first consider giving an upper bound of the VC dimension of $\mathcal{L}$ in terms of the VC dimension of $\mathcal{H}$ or the case that you directly try to construct the counterexample? Would you please show some high-level thoughts about adapting the proof of Omar to the case in this paper?
- Would you please provide some high-level insights about the construction of the counterexample in Theorem 4.3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding the results in this work to be interesting.
**Q1**: *"Minor issues."*
A1: We thank the reviewer for pointing out these issues. We will fix them in the camera-ready version.
**Q2**: *"Would you please show some ideas about the proof of Lemma 3.2? When considering this problem, is it the case that you first consider giving an upper bound of the VC dimension of L in terms of the VC dimension of H, or the case that you directly try to construct the counterexample? Would you please show some high-level thoughts about adapting the proof of Omar to the case in this paper?"*
A2: We will make sure to include a proof sketch of Lemma 3.2 in the camera-ready version of the paper. In order to prove Lemma 3.2, we directly construct a counterexample where VC(H) <= 1 but VC(L) >= m. We do include some high-level thoughts about the differences between our construction and Montasser et al.'s construction in Lines 162-173. However, we will expand more on this in the camera-ready version. In Montasser et al.'s proof, in order for a hypothesis to have an adversarial robust loss of 1 on an instance x, it was sufficient to have just one perturbation in the ball on which the hypothesis was non-robust. However, in our case, in order for a hypothesis to have a probabilistic robust loss of 1 on an instance x, we need the hypothesis to be non-robust on a sufficiently large number of perturbations in the ball. Accordingly, the main challenge/contribution in our paper is how to construct these regions of non-robustness such that the VC dimension of the probabilistic robust loss class can be made arbitrarily large, yet the VC dimension of H remains small.
**Q3**: *"Would you please provide some high-level insights about the construction of the counterexample in Theorem 4.3?"*
A3: We thank the reviewer for pointing this out. For the camera-ready version, we will include a proof sketch and some high-level insights about the construction of the example relevant to Theorem 4.3. The idea is to consider the well-known infinite VC class H = {sign(sin(wx)) : w in R }, but to pick a G and \mu such that the expectation E[h(g(x))] is essentially constant in x for all hypotheses h in H. We provide the exact example in Appendix B.2.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I am looking forward to reading your revision. | Summary: This paper studies the proper robust learnability under relaxation of the (usual) worst-case/all powerful adversary assumption.
- The authors first show that finite VC dimension is not sufficient to enable proper learnability under the relaxation proposed by Robey et al. (2022).
- For another generalization of worst-case relaxations, the authors show that finite VC dimension enables proper robust learnability.
- The authors study the "relaxed competition" setting, where the hypothesis is compared to the optimal hypothesis under a slightly stronger notion of robustness; here, proper learning guarantees are possible.
Strengths: - The paper is well-written, clear and easy to follow
- I believe the topic and results are of interest to the learning theory community.
- Relaxing the worst-case analysis is well-motivated
Weaknesses: 1. It seems that a considerable number of the proofs rely on standard techniques.
2. Is it a limitation / too big of a relaxation to have the adversary pick a perturbation independently of the unperturbed point $x$? (l.70-71) Unless $\mu$ can be defined for each $x$. Either way, it would be worth discussing and clarifying this point in the main body (unless I have missed this somewhere).
Overall I think even if the potential limitations pointed out above are right, the paper still offers a good contribution.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Could you address point 2 above?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: If point 2 in "Weaknesses" is correct, it would be worth including as a limitation of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that the results in this work are of interest to the learning theory community and that relaxing the worst-case analysis is well-motivated.
**Q**: *"Is it a limitation / too big of a relaxation to have the adversary pick a perturbation independently of the unperturbed point x? (l.70-71) Unless μ can be defined for each x. Either way, it would be worth discussing and clarifying this point in the main body (unless I have missed this somewhere)."*
A: In our model, the measure \mu is fixed beforehand and does not depend on the unperturbed point x. We will make sure to clarify this point in the main text of the camera-ready version. Allowing the measure to depend on the unperturbed point x is an interesting future direction that lies between our model and adversarial robustness. That said, we believe that having one fixed measure is natural from a practical perspective. In practice, both G and \mu will be picked during training time (for example Robey et al. pick \ell_infty balls and the uniform measure), and it is unclear why one would want to weight different perturbations differently for different instances. One would also need to then define a measure for each instance, which might not be computationally feasible. In addition, even when the same fixed measure \mu is used for all unperturbed points x, we show that there are natural losses where proper learning is not always possible. Lastly, we note that our model of having the measure \mu fixed beforehand can be motivated by considering a computationally lazy/bounded adversary who may not be able to define and sample from different measures. We provide this motivation in lines 68-74 of the main text. We will further discuss and clarify these points in the main text of the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the response! I am looking forward to reading the discussion and clarification of $\mu$ in the final version. | Summary: This paper studies the setting of robust PAC learning to test time attacks, using a relaxed notion of robustness on average instead of robustness to the worst-case attack.
The contributions are as follows.
-Negative result: even when using the relaxed notion of robustness, proper learning is impossible. This is a stronger result than the example in the worst-case setting [Montasser et al. 2019]. Moreover, this is achieved by a natural example of $\ell_p$ balls and the uniform measure.
The intuition is that the non-Lipschitzness of the 0-1 loss forces the use of improper learning.
-Positive results:
1. When considering Lipschitz loss functions, uniform convergence holds, and as a result, ERM is sufficient for learnability.
2. Instead of relaxing the robust loss, it is possible to relax the benchmark we compare to, i.e. the best function in the class but with a smaller parameter in the probabilistic loss. This is similar to the setup of Tolerant Robust PAC Learning.
Strengths: This paper provides nice contributions to the literature on robust learning, by finding natural relaxations on the robust model that allows learning similar to non-robust learning.
Weaknesses: See Questions.
The writing can be improved. This paper has many good ideas, but sometimes it is hard to follow them.
Also, many relevant references from theory on robust learning are missing that should be included as related work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The description of the model might be confusing. I will explain my point of view.
In the standard setting, the set of possible perturbations is fixed and known to the learner. It's not chosen by an adversary, it's just the possible attacks the learner is aiming to protect from at test time. An adversary would just pick the set of all possible perturbations.
In this model, is the set G and measure $\mu$ being chosen at training time and known to the learner? If so, that makes sense to me and should be clear in the model.
What's a reasonable choice of measure? Is it representing the importance of each $g$? I think that some motivation is missing.
The connection between the average case and worst case model is through using the $\rho$-probabilistically robust loss, maybe it should be mentioned before section 3. This is a very important explanation for defining the model of robustness on average!
Some definitions are used throughout the paper. It could improve the readability if those were numbered and referred to when used.
For example, the risk under the probabilistic robust loss (line 116) is used in section 5. It takes some time to find the definition.
Many references are missing. For example,
H-consistency bounds for surrogate loss minimizers (ICML 2022),
Multi-class H-consistency bounds (NeurIPS 2022),
Theoretically grounded loss functions and algorithms for adversarial robustness (AISTATS 2023),
Cross-Entropy Loss Functions: Theoretical Analysis and Applications (ICML 2023),
A Characterization of Semi-Supervised Adversarially Robust PAC Learnability (NeurIPS 2022),
Adversarially Robust PAC Learnability of Real-Valued Functions (ICML 2023),
On the hardness of robust classification (JMLR)
...and many more!
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There are no limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding that this paper provides a nice contribution to the literature on robust learning.
**Q1**: *"In this model, is the set G and measure μ being chosen at training time and known to the learner?"*
A1: Yes, the set G and the measure \mu are chosen at training time and known to the learner. Moreover, the same G and measure \mu are used to evaluate the model at test time. We will make this more clear in the main text of the camera-ready version.
**Q2**: *"What's a reasonable choice of measure? Is it representing the importance of each g?"*
A2: If G encodes a \ell_p ball, then a reasonable choice of measure \mu over G could be the uniform measure. In this case, this measure would encode the idea that every perturbation is equally important. For example, Robey et al. use \ell_infty balls with the uniform measure in their experiments training probabilistically robust neural networks. Another reasonable choice of measure \mu could be one whose mass concentrates at the center of the ball but decays radially as you move out towards the edge of the ball. This measure would encode the idea that perturbations near the center of the ball are more important than those further out. We will make sure to include this motivation/intuition in the main text of the camera-ready version.
**Q3**: *"The connection between the average case and worst case model is through using the ρ-probabilistically robust loss, maybe it should be mentioned before section 3."*
A3: We thank the reviewer for this comment and will make sure to include a discussion of this connection before Section 3 in the camera-ready version.
**Q4**: *"Some definitions are used throughout the paper. It could improve the readability if those be numbered and referred to when used."*
A4: We agree with the reviewer and will make the recommended changes in the camera-ready version.
**Q5**: *"Many references are missing"*
A5: We thank the reviewer for pointing out these missing references. We will make sure to discuss these works and reference them in the camera-ready version. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Attention as Implicit Structural Inference | Accept (poster) | Summary: The paper shows how attention mechanisms can be interpreted as expectation values over learnable graph connectivity structures given a structural prior; that is from a perspective of (structural) variational inference. The authors first demonstrate this link for cross- and self-attention heads, then proceed with interpreting iterative attention mechanisms as performing gradient descent on an approximate variational free energy. Finally, based on this perspective of variational structural inference, the authors propose two new attention mechanisms which allow for more complicated connectivity structures.
Please note that due to receiving 6 papers to review from NeurIPS alone, I have allocated a time budget of 4h per paper, and my review is based on that. In particular I did not read the supplementary material in greater detail. I regret this situation and apologize for possible inaccuracies.
Strengths: - A framework to interpret attention mechanisms in Transformers.
- Principled design of new types of attention is possible by reasoning in the proposed framework.
- Original contribution to literature linking iterative attention to variational inference.
Weaknesses: - The proposed multi-hop and expanding attention designs are only evaluated on toy tasks with structure matching the design.
- Significant discussion and additional results are hidden in the supplementary material, and the presentation in the main text is dense at times. It might seem that the paper would be more coherent and easier to understand in a journal/venue with a larger page limit.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The equation showing the attention definition in the introduction is not directly referenced or explained. Also $d_k$ is undefined.
- Since cross attention and self attention only differ by setting $x'=x$ for self attention, sections 3.2 and 3.3 could easily be merged, resulting in additional space being available for larger Figures 2 and 3 (which are essentially unreadable on A4), or for additional explanations in the beginning to not lose non-expert readers: For example clarifying why $p(\phi|x)=softmax(\ln p(x,\phi))$ in line 97/98, or making the paragraph on pMRFs in lines 109-115 more accessible.
- Would introducing temperature scaling in the softmax lead to a useful generalization?
- In the expression for $F(x,\mu)$ in line 177, the normalization $Z$ is missing (although of course it drops out in $\partial_{\mu}F$).
- How eq. (7) follows from $\partial_{\mu}F = 0$ is not immediately clear.
- In sect. 4.2, it was not clear to me why $z_i$ can be replaced by $\mu_i$ in the eq. following line 192. Also, is there an asterisk missing at the $\mu_i$ on the rhs here and in eq. (7)?
- In sect. 4.4, how does the uniform prior over incoming synapses relate to the standard version of predictive coding?
- Is the task described in sect. 5.1 lines 243-249 motivated as a toy version of a real world task?
- In sect. 5.2, the description currently does not reference the supplementary material.
Some typos: \
l109: comma missing after partition function \
l159: Laplace \
l193: the system the fixed point \
l261: the size of the size of \
l262: "the" dimension \
l292: . instead of ,
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: A short but reasonable discussion of limitations is provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review,
- *Since cross attention and self attention only differ by setting $x=x'$ for self attention, sections 3.2 and 3.3 could easily be merged*
In hindsight we agree with this, and would reduce self-attention and cross-attention to a single section.
- *Would introducing temperature scaling in the softmax lead to a useful generalization?*
Within the framework there are two ways of accommodating temperature, either as a parameter of the graphical model (in the edge potentials) or in terms of *tempered/safe* Bayes. Indeed including the temperature as a latent variable represents an interesting direction, however for space reasons we felt it would confuse the presentation here.
- *How eq. (7) follows from $\partial_{\mu}F = 0$ is not immediately clear.*
This follows from application of the Convex-Concave Procedure (instead of gradient descent), giving a fixed point equation which necessarily reduces the objective function. We would be happy to make this clearer in a revision.
- *In sect. 4.2, it was not clear to me why $z_i$ can be replaced by $\mu_i$ in the eq. following line 192.*
This is due to the Laplace approximation where we are updating the variational parameters rather than the unknown variable z, however we could discuss this more explicitly for increased clarity.
- *Also, is there an asterisk missing at the $\mu_i$ on the rhs here and in eq. (7)?*
We use the asterisk on the LHS to indicate an approximate stationary point (solution), whereas the right-hand side is determined by initialization and not necessarily a fixed point. We now realise this is not mentioned, so thanks for pointing it out; again, we could make this clearer in the final presentation.
- *how does the uniform prior over incoming synapses relate to the standard version of predictive coding?*
In standard formulations of predictive coding the equivalent term is governed by a precision matrix (of the generative model), learnt slowly across the data, however in practice this term is fixed as a scaled identity matrix and so acts as a uniform weighting over incoming signals (cf. uniform prior).
Additionally, even when they are learnt these matrices are treated as parameters, rather than latent variables, and so remain fixed during inference (no in-context updates) arguably making the structural inference perspective a more powerful model of attention.
- *Is the task described in sect. 5.1 lines 243-249 motivated as a toy version of a real world task?*
Since this concern was brought up more than once we have included the response in the global comment.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I do not have additional questions at the moment, and keep my high score also in the light of the other reviews. | Summary: This work presents a theoretical framework of how the attention mechanism often used in transformers can be recast as inference over possible adjacency structures in graphical models. In particular, there is an implicit inference on the distribution of edges within a graphical model defined over nodes in the query and key nodes. They can explain many kinds of recent models in this framework such as slot attention, modern hopfield nets, etc. Then, they do two small experiments illustrating the use in using the framing. I am not familiar enough with the literature to know how novel this framing is.
Strengths: * The methods seem very theoretically rigorous.
* Lots of connections to other models in the literature.
Weaknesses: * For the first toy problem, there could be more description as to why having a two-hop neighborhood would be advantageous, or what kind of data would have this property. The second toy problem's motivation was much more clear to me.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: * The analysis to Predictive Coding Networks is interesting. It would be nice to know how neuroscience models of hippocampal structural inference can be explained under this framework (Clone Structured Cognitive Graphs, Tolman Eichenbaum Machine, etc).
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitations were addressed in section 6.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review,
- *For the first toy problem, there could be more description as to why having a two-hop neighborhood would be advantageous, or what kind of data would have this property. The second toy problem's motivation was much more clear to me.*
Since this concern was brought up by more than one reviewer we have addressed it in the global comment.
- *The analysis to Predictive Coding Networks is interesting. It would be nice to know how neuroscience models of hippocampal structural inference can be explained under this framework (Clone Structured Cognitive Graphs, Tolman Eichenbaum Machine, etc).*
Thank you, we agree that the relationship to hippocampal function and indeed the transformer-TEM is an interesting direction that we are excited about pursuing. For now, it remains unclear exactly what the relationship is, other than it may be important for the hippocampus to perform discrete (relational) inferences in-context (cf. structural inference) while more abstract general properties are consolidated in cortex (i.e. the complementary learning systems hypothesis).
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score as is. | Summary: The paper proposes a framework for interpreting standard formulations of attention mechanisms through the lens of graphical models. The authors illustrate that their formulation unifies architectures and offers a way to easily generalize and improve the existing formulations.
Strengths: - The paper offers an interesting direction of looking at existing formulations from the lens of probabilistic models, which should open up several exciting works in the future.
- Interpreting the attention mechanism in terms of an implicit probabilistic model opens up the possibility of understanding & changing the underlying modeling assumptions in a principled manner to design new attention mechanisms.
- The paper is well-written and easy to follow.
- In addition, the authors also include quantitative analysis to demonstrate that their framework can assimilate the table structures to generate robust representations.
Weaknesses: One of the main issues in the paper lies in the experimental evaluation & baselines.
I encourage the authors to include a more rigorous evaluation of their approach, comparing the proposed framework with the formulations of stochastic attention in the literature. The authors chose to demonstrate the results on toy problems, and it is hard to evaluate the effectiveness of the approach based on those results.
Without appropriate experiments, it is hard to judge the utility of the proposed approach; though intuitively, it could do well, I anticipate issues in practice.
How scalable are the proposed changes to the attention mechanisms? How do these changes interplay with popular architectures?
Does the formulation impose any constraints on decoding algorithms? For instance, does it affect the scalability & run time?
In addition, a few relevant references need to be included. Please refer to the list below:
Shujian Zhang, Xinjie Fan, Bo Chen, Mingyuan Zhou: Bayesian Attention Belief Networks. ICML 2021
Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart: Variational Attention for Sequence-to-Sequence Models. CoRR abs/1712.08207 (2017)
Shiv Shankar, Sunita Sarawagi Posterior Attention Models for Sequence to Sequence Learning In ICLR, 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer to Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations:
The authors do include a section on limitations, but I would say it isn't complete. I encourage the authors to refer to the questions & address them in the list of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *How scalable are the proposed changes to the attention mechanisms? How do these changes interplay with popular architectures?*
We see no reason, in principle, for scaling issues, since both modifications were designed with computational complexity in mind. (Multihop requires a single extra matrix multiplication, which is cheap compared to two layers, while expanding is specifically designed to minimise the amount of computation due to context window length.) It remains to be seen if the distributional properties exploited in the toy examples are present in natural data.
More broadly, we see our key contribution as a theoretical framework which can be used to develop or reason about architectures, while our small experiments serve as proof of principle that this approach works. We would be happy to make this more explicit in the limitations section of the paper.
- *Does the formulation impose any constraints on decoding algorithms? For instance, does it affect the scalability & run time?*
We are not quite sure what is meant here by decoding algorithms, but would be happy to discuss further.
- *..comparing the proposed framework with the formulations of stochastic attention in the literature*
- *In addition, a few relevant references need to be included*
Thank you, we will certainly include these references in the paper. Although we appear to have missed these, we would like to point out we did highlight the connection to stochastic attention (alignment) formulations (L.59-72) that we feel does capture the main approaches of prior work. Indeed for *Bayesian Attention Belief Networks* and *Posterior Attention Models for Sequence to Sequence Learning* we cited earlier contributions from (a subset of) the authors containing the key ideas.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I appreciate it. Based on the rebuttal and the other reviews, I have decided to increase my score.
"Does the formulation impose any constraints on decoding algorithms? For instance, does it affect the scalability & run time?" I was curious whether the modified attention mechanism improves decoding, i.e. produces better outputs; in addition, I also wanted to understand if interpreting attention mechanisms as graphical models would enable us to design better decoders?
---
Reply to Comment 1.1.1:
Comment: In terms of decoding quality, we imagine a mechanism like multi-hop attention could reduce the number of parameters dedicated to approximating some functions thus freeing up parameters for other tasks. While having a context-size that scales in a data-dependent manner should enable better performance on tasks with long range dependencies in the sequences. However, in both of these cases we imagine both of our design suggestions can be improved upon, here we wished to show that there at least exists some data where they would perform better (synthetic data).
In general, we believe interpreting the mechanisms through graphical models will definitely enable better design. For example including more structured prior distributions over the edges, or placing priors over hyper-edges to leverage higher order correlations. Aside from changing the model, approximate inference techniques from the graphical model literature could be leveraged to reduce the computational overhead of attention. | Summary: This paper proposes a probabilistic interpretation of attention mechanism, where the computation of attention can be expressed as the expectation of a value function defined on the nodes of a graph consisting of the query nodes and key nodes, and the expectation is taken with respect to the posterior distribution over the edges of the graph. Under such interpretation, different "soft" attention mechanisms fall under the same probabilistic framework, with different node configurations and prior distribution over the edges. Building upon existing works of linking attention mechanisms with variational learning with Gaussian mixture model, the authors additionally established the link between variational inference over the edges (graph structure) and slot attention and hopfield networks. Based on the probabilistic insights, the authors propose new modelling assumptions for designing new attention mechanism.
Strengths: - The proposed probabilistic interpretation for attention mechanisms is sound and widely applicable to various attention types;
- The new perspective on the link between attention and predictive coding network based on the probabilistic framework is interesting;
- The evaluation of the proposed new attention mechanisms on toy problems exhibit promising results;
Weaknesses: - Although the mathematical derivations are sound, the paper can be hard to read since the connection based on the graph formulation is implicit; arguably this can be omitted due to page limits, but the authors should consider improving on this aspect;
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - d is not defined on line 51;
- Figure 2 and 3 can be made larger for readability;
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review,
Since both your questions were raised by more than one reviewer, we have included the answer in the global response. | Rebuttal 1:
Rebuttal: Thank you to the reviewers for their insightful comments. A couple of concerns were repeated across reviewers, which we will address here.
We appreciate the reviewers' concern that the experiments were on toy data; however, we would like to stress that we view the main contribution of the paper as theoretical, providing a perspective on why attention mechanisms are so useful, and unifying different uses of the attention mechanism. The experiments serve as a proof of principle that we can use this understanding to design new mechanisms, hopefully enabling future researchers to develop new architectures that scale to natural data.
- *What is a natural motivation for the task set-up in multihop attention?*
Requiring two previous states in order to generate the next one serves as a natural example. Consider a character-level sequence model for the English language; attending to the letters “ee”, since they are almost always followed by a consonant, greatly reduces the uncertainty of the next character compared to simply attending to a single “e”. Of course, such cases can be handled with higher-capacity heads, multiple heads, or multiple layers. However, it is possible that using some multi-hop heads could aid learning efficiency if patterns such as this are common in the dataset.
More generally, if the sequence has useful information spread across two tokens, that is not exclusively available from either of them individually, multihop could serve as an alternative to increasing the number of parameters (e.g. capacity, layers or heads).
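As a purely illustrative sketch (our own construction for this response, not the mechanism defined in the paper), one way to read "multi-hop" is as composing the soft adjacency matrix inferred by attention with itself, so that each query can aggregate values reachable two edges away on the inferred graph:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_hop_attention(Q, K, V):
    # One-hop soft adjacency over the inferred graph (n x n)
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    # Composing A with itself lets each query reach values two hops away
    return (A @ A) @ V
```

Since each row of A sums to one, each row of A @ A does too, so the output remains a convex combination of the values, just as in one-hop attention.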
- *Figure size too small*
We will make sure we increase the figure sizes in future versions.
- *$d$ not defined in first equation.*
Thanks to the reviewers for pointing this out; we will make sure this is defined. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This submission describes how many different transformer architecture variants can be seen as implicit structural inference. The inference is understood as taking an expectation over possible connectivity structures constrained by a prior over structures. Several variants of attention are shown to be describable in this framework (with more in the appendix). Two new 'designs' of attentional systems are described and illustrated with two briefly-described experiments.
Strengths: The idea of unifying related architectures under a common Bayesian inference framework seems useful and the authors have thoughtfully engaged with a number of related architectures to characterize how they can be integrated under a common framework.
The idea of allowing context to select a graph over which to perform inference is important and interesting and I would like to see it developed further.
The experiments introduce ways of thinking about how transformer style architectures could be extended in novel ways.
Weaknesses: The paper is largely limited to describing various existing models within the attention as expectation over structures framework. The new experiments are described extremely briefly and the neural network architecture used was very hard to discern, even after a close reading of the appendix.
The idea of allowing context to select a graph over which to perform inference was not developed. I thought at first that the expanding attention experiment was going to address this, but, if I understand correctly, there are several different experiments each with a fixed p, so that there is no run-time context sensitivity.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: A more selective presentation of the core ideas (e.g. focusing on 2-3 examples rather than 5, with the others relegated to the appendix), coupled with a more extensive set of investigations addressing the above points, would have resulted in a more useful and impactful contribution.
I thought the paper would have benefited by showing how a graph was actually latently inferred by a learning experiment -- the title suggested that such an idea might be forthcoming, but it didn't seem to be. If the attention allocation illustrated in the lower right of Fig 2 was intended to demonstrate what the authors meant by this, they should have made this more explicit. But if this is what they meant, perhaps the finding is disappointing. Indeed, the failure of one layer of one-hop attention to learn the problem in 5.1 suggests that any interesting graph that is learned by a transformer must depend on multiple layers and/or heads.
The authors have a whole page in the main text in which they could have developed their presentation more fully. A revision might use this space to address some of these questions or expand the presentation and analysis of the experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: The limitations section of the paper does hint at some of the weaknesses mentioned and points to possible future directions. There are no ethical concerns with this research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review.
- *The paper is largely limited to describing various existing models within the attention as expectation over structures framework.*
We see our key contribution as a unifying theoretical framework helping to understand the fundamental computation underlying attention, which is why we thought it appropriate to dedicate so much space to recovering models.
- *The new experiments are described extremely briefly and the neural network architecture used was very hard to discern, even after a close reading of the appendix.*
We apologise for this; given a revision, we could make the architectures clear through the inclusion of pseudocode.
- *The idea of allowing context to select a graph over which to perform inference was not developed. I thought at first that the expanding attention experiment was going to address this, but, if I understand correctly, there are several different experiments each with a fixed p, so that there is no run-time context sensitivity.*
Indeed, we see that the key property unifying the different attention mechanisms is their use of context to determine a graph over which to perform inference at runtime. We believe the theory we developed here describes exactly this.
While the experiments here were not designed specifically to demonstrate this property (rather, to use our understanding to design extensions to the attention mechanism), they also exhibit inference-time context sensitivity.
Specifically, in the expanding task with different p, the p value is a parameter of the data-generating (task) distribution*, not the model; the model determined the context window based on an individual instance of the task, growing the window as needed (on a per-data-point basis).
*The parameter is of a geometric distribution, i.i.d. draws from which determine the distance of the signal token from the final token. This distance is therefore still different per data point.
- *Indeed, the failure of one layer of one-hop attention to learn the problem in 5.1 suggests that any interesting graph that is learned by a transformer must depend on multiple layers and/or heads*
We agree that multiple heads or layers are crucial for learning complex functions; however, we believe interesting graphs can be inferred by a single head.
The reason this is not evident in 5.1 is that the one-hop attention used a restricted internal dimension for keys and queries (supplementary L.126), whereas typically the embedding space is large enough to incorporate multiple types of relationship within a single head, leading to more interesting graphs. | null | null | null | null | null | null
Med-UniC: Unifying Cross-Lingual Medical Vision-Language Pre-Training by Diminishing Bias | Accept (poster) | Summary: This paper proposes a vision-language pretraining method that focuses on tackling the bias caused by different languages. A Cross-lingual Text Alignment Regularization (CTR) is proposed to unify cross-lingual semantic representations of medical reports. The experiments show that the proposed CTR can effectively eliminate the bias between English and Spanish medical reports.
Strengths: 1. This paper addresses an important and interesting issue of the bias caused by different languages in medical visual language pretraining tasks.
2. The experiments show the performance superiority of the proposed method on both normal medical recognition tasks and cross-linguistic tasks.
Weaknesses: 1. The design of the proposed visual language model is close to existing MLM- and CLIP-based VLP methods, with an incremental improvement of CTR. More analysis is needed on the difference between the proposed method and existing VLP methods.
2. It seems only zero-shot classification tasks are evaluated under the cross-lingual setting. Is it possible to evaluate other downstream tasks under this setting?
3. Can you provide more analysis on why the proposed method outperforms other medical VLP methods? It seems the main difference of this method (CTR) does not really have a strong positive impact on single-language recognition tasks.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the concerns in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer uyBA
### 1. Response for Weakness 1:
> The design of the proposed visual language model is close to existing MLM- and CLIP-based VLP methods, with an incremental improvement of CTR. More analysis is needed on the difference between the proposed method and existing VLP methods.
We deeply appreciate your insightful critique of our research. We would like to highlight the overall novelty of our proposed approach from the following three aspects compared with existing VLP methods:
- Our proposal to introduce Med-UniC marks a pioneering step in investigating $\textbf{the bias influenced by language differences}$ within the field of medical VLP. This innovative approach $\textbf{is acknowledged by reviewers}$ `YFc1` and `ZNkV`.
- We have developed a novel CTR loss strategy aimed at improving the performance of VLP across various downstream tasks, by reducing this bias. To the best of our knowledge, Med-UniC $\textbf{is the first initiative to identify and mitigate}$ language-driven bias in medical VLP through cross-lingual text alignment regularisation (CTR). This innovative approach to CTR design is appreciated by reviewers `ZNkV` and `YFc1`.
- Most SOTA methods do not directly impose constraints on text embeddings, which consequently limits their ability to extract high-level semantics from text. For instance, MRM [1], despite utilising Masked Language Modeling (MLM) as a pre-training objective for the text, is mainly focused on reconstructing masked tokens rather than learning the overall embeddings of entire sentences [2]. As a solution, we propose the use of CTR to improve the learning of comprehensive sentence embeddings through contrastive learning for each text sample via $L_{TT}$. Additionally, CTR aids in disentangling the text's latent space through $L_{TF}$ to maximise the information on each latent dimension [3].
References
[1] Zhou, Hong-Yu, et al. "Advancing Radiograph Representation Learning with Masked Record Modeling." The Eleventh International Conference on Learning Representations. 2023.
[2] Neelakantan, Arvind (OpenAI), et al. "Text and code embeddings by contrastive pre-training." arXiv preprint arXiv:2201.10005 (2022).
[3] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International Conference on Machine Learning. PMLR, 2021.
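For intuition only, a minimal numpy sketch in the spirit of the Barlow Twins objective [3] (our illustrative reconstruction, not the actual Med-UniC training code): the redundancy-reduction idea behind $L_{TF}$ pulls the paired dimensions of two embedding views together while decorrelating different dimensions.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=0.005):
    # Standardise each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    n, d = z_a.shape
    c = z_a.T @ z_b / n  # d x d cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()            # align paired dims
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag
```

Driving the cross-correlation matrix towards the identity is what "maximising the information on each latent dimension" amounts to in this sketch: each dimension carries information not redundant with the others.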
### 2. Response for Weakness 2:
> Seems only zero-shot classification tasks are evaluated under the cross-linguistic setting. Is it possible to evaluate other downstream task under this setting?
We greatly value your constructive feedback on our research. It is important to note that the other downstream tasks we have investigated (specifically, image classification, segmentation, and detection) $\textbf{solely rely on images as input}$, hence not necessitating any cross-lingual configurations.
### 3. Response for Weakness 3:
> Can you provide more analysis on why the proposed method outperform other medical VLP methods? It seems like the main difference of this method (CTR) does not really have a strong positive impact on single language recognition tasks.
We would like to express our sincere gratitude for your thoughtful feedback regarding our research. As demonstrated in $\textbf{Appendix Tab 6}$, Med-UniC, equipped with the CTR loss and solely pre-trained on an English dataset, outperforms all baselines across three separate vision tasks. This is despite our configuration only utilising uni-lingual Bert and thus $\textbf{not implementing MLM loss}$ in this pretraining setting. This performance boost may result from the disentanglement effect that the CTR loss has on the text's latent space. This effect, in turn, allows the model to become more adept at extracting high-level semantics and representing features with greater effectiveness. Furthermore, we compare the backbone with CTR and without CTR in $\textbf{Tab A of the attached PDF}$ in the author rebuttal. As the table shows, the CTR loss brings substantial improvement.
---
Rebuttal 2:
Comment: Dear Reviewer, We are deeply grateful for the attention and care you've given to our work. Understanding the importance of thorough feedback, we're here to address any queries or points of ambiguity regarding our response. Please feel free to reach out with any further questions. | Summary: This paper presents a unified framework for Cross-Lingual Medical Vision-Language Pre-Training (Med-UniC), integrating multimodal medical data from different languages (e.g., English and Spanish). A Cross-lingual Text Alignment Regularization (CTR) is proposed to explicitly unify cross-lingual semantic representations of medical reports originating from diverse language communities. It reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases.
Strengths: 1. Practical and interesting problem setting, which attempts to unify Medical Vision-Language Pre-Training across multiple languages.
2. The method is straight-forward and simple, making the paper easy to understand.
3. The Cross-lingual Alignment loss seems reasonable.
Weaknesses: 1. The experimental setting is unclear. For example, the implementation and the training data of the SOTA methods are not introduced.
2. The ablation study seems incomplete; only four settings are shown. It is hard to tell how much gain actually comes from the key design of the paper, i.e., the CTR loss. The biggest gain seems to come from the component "MLM", which is not introduced in the paper. I would also suggest the authors provide an ablation showing how much gain comes from the increase in data quantity.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How are the SOTA methods implemented? What data set are they trained on? I wonder whether the comparison is fair if the SOTA methods and the proposed method are trained with different set of data. If the SOTA methods are only trained with one language, I wonder how much gain comes from the additional data.
2. Table 1: Why do most SOTA methods show a decrease in performance when the prompts transition from English to Spanish on CXP500, but an increase on PDC? The proposed method shows a consistent decrease in performance when the prompts transition from English to Spanish on both datasets.
3. What does the learning objective “MLM” mean in Table 4?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No discussion on limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer NLjC:
### 1. Response for Weakness 1:
We sincerely appreciate your insightful feedback about our research. Concerning the results from other SOTA methods, these $\textbf{were all pre-trained on MIMIC-CXR}$, the English dataset, with the exception of GLoRIA, which was pre-trained using in-house data. Results specifically pertaining to GLoRIA pre-trained on MIMIC-CXR are highlighted as GLoRIA-MIMIC. We directly adopted these results $\textbf{from their original publications}$. Where experimental results were not documented in the original papers, we obtained their official checkpoints and used identical settings to those in our work. Thorough details of the implementation for pre-training and all downstream tasks can be found in Appendix Sec B and C. The clear experimental setting with comprehensive experiments $\textbf{is also acknowledged by}$ reviewer `ZNkV`.
### 2. Response for Weakness 2:
> The ablation study seems incomplete, only four settings are shown.
Please refer to $\textbf{Tab 1 and 6 of the main paper}$, and $\textbf{Tab 4, 5, 6, and 7 in the appendix}$. We have done additional ablation studies for Med-UniC.
- 1. We ablate three different visual backbones on the medical image classification task in Tab 1 of main paper.
- 2. We ablate the impact on performance of the number of frozen layers in the language model in Tab 6 of the main paper.
- 3. We also further ablate three different visual backbones on the medical image segmentation task in Tab 4 in appendix.
- 4. We ablate each sub-component's influence belonging to Cross-lingual Text Alignment Regularization (CTR), including the impact of text-feature alignment $L_{TF}$ and text-to-text alignment $L_{TT}$ in Tab 5 in appendix.
- 5. We study the performance of only using the English dataset, MIMIC-CXR for pre-training without MLM but with CTR in Tab 6 of appendix.
- 6. We analyse the dimension of the text alignment projector $\mathcal{P}_{d}$ in Tab 7 in the appendix.
We hope these further ablation studies resolve your concerns.
> It is hard to tell how much gain actually comes from the key design of the paper, i.e., the CTR loss.
From $\textbf{Tab 4}$ of the main paper and $\textbf{Tab A in the attached PDF}$, we can clearly observe that CTR is a crucial component of our model. Compared to the full version of the model, removing CTR causes a larger performance drop than removing MLM (masked language modelling). Besides, the superior performance of the full model proves that the CTR component is complementary to the MLM strategy, because we leverage MLM to initialise Med-UniC with the ability to process different languages and learn fundamental cross-lingual syntactic and semantic knowledge. However, we find that model bias stems from the language model (LM) being pre-trained on predominantly English corpora. To eliminate language bias, we further optimise our model with the CTR loss. Therefore, in Tab 4 of the main paper, the comparison of the two rows (w/o CTR and with CTR) shows that the CTR loss leads to excellent performance gains.
> The biggest gain seems to come from the component "MLM", which is not introduced in the paper.
Please refer to $\textbf{Fig 1 or Sec 3.2}$ in our paper, where we have described the definition of MLM (Masked language modelling).
> I would also suggest the authors to provide ablation showing how much gain is from the increase in data quantity.
Please refer to $\textbf{Appendix Sec D4: Med-UniC Pre-training on Uni-lingual data}$,
The ablation results are shown in Tab 6 in D4, which reveals that although using uni-lingual data to pre-train Med-UniC causes a performance drop, our framework still brings benefits compared to other baselines. We attribute this to the text-to-text alignment keeping in-modal consistency and better extracting text semantics.
### 3. Response for Question 1:
To fairly compare Med-UniC with other baselines without the additional Spanish dataset, we have conducted an ablation study detailed in $\textbf{Tab 6 in the Appendix}$. For this experiment, Med-UniC was solely pre-trained on MIMIC-CXR, and our model continues to significantly outperform all baselines across three disparate vision tasks. This result corroborates that the effectiveness of Med-UniC does not solely depend on the additional data, but also capitalises on the innovative CTR loss to further extract the nuanced, high-level semantics of the text.
### 4. Response for Question 2:
We are deeply appreciative of your constructive critique concerning our research. First, it is important to note that despite a minor decline observed in the transition from English prompts to Spanish in the zero-shot classification task, our approach still holds superiority over all existing baselines. Additionally, referring to Table 1 in the primary manuscript, the F1 scores for most baselines consistently exhibit a downturn from English to Spanish on the PDC task. Even though there is a marginal ascent observed in the AUC scores from English prompts to Spanish, the values predominantly hover around 50. This statistic suggests a rather poor performance [1]. Hence, the minor increment observed could potentially be attributed to random fluctuations stemming from mis-tokenization, as elaborated in $\textbf{Appendix Sec D.1}$.
Reference:
[1] de Hond, et al."Interpreting area under the receiver operating characteristic curve." The Lancet Digital Health 4.12 (2022): e853-e855.
### 5. Response for Question 3:
> What does the learning objective “MLM” mean in Tab 4?
Thanks for your comments, MLM means Masked Language Modeling [1], and it is a type of language modelling task aimed to train a model to predict masked or hidden words in a sentence or text given the surrounding context to learn the semantic information.
Reference:
[1] Kenton, et al. "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." Proceedings of NAACL-HLT. 2019.
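As a toy illustration of the masking step only (a hypothetical sketch, not the actual tokenisation or masking code used in Med-UniC or BERT [1]):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """Replace a random fraction p of tokens with a mask symbol and
    record the original tokens the model must predict back."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok  # ground-truth label for this position
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets
```

Training then minimises the cross-entropy of predicting each entry of `targets` at its masked position from the surrounding context, which is how the model learns semantic information.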
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response which has cleared most of my concerns. I have raised my rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate you taking the time to read our rebuttal and update the rating. Your feedback is deeply valued and we are always here to assist further.
---
Rebuttal 2:
Comment: Dear Reviewer, Your thoughtful review of our work is profoundly appreciated. We recognize the dedication it takes to provide such feedback. If there are areas in our response that need further clarification, or if additional questions arise, we stand ready to engage and offer any necessary assistance. | Summary: One common challenge in performing medical vision-language pre-training (VLP) is data scarcity, especially in languages other than English. This challenge can be addressed by combining datasets from various languages to train language-agnostic models, but the authors empirically show that each language community (especially non-English ones) induces distinct linguistic biases in their data, even in language-agnostic models. The authors therefore introduce Med-UniC, which leverages cross-lingual text alignment regularization (CTR) (experimented with English and Spanish) to mitigate these biases and achieve SOTA results using chest X-ray scans and reports on many uni-modal visual tasks.
Strengths: • Appears to be novel; the idea of cross-lingual text alignment regularization (CTR) seems to effectively address a significant per-language bias problem in multilingual models based on self-vision, vision-language, and cross-lingual alignment strategies.
• Clear architecture explanation with hyperparameters with comprehensive experiments.
• Strong results on all experiments.
• Generally well-written
Weaknesses: The bias analysis section seems brief given how much attention it was given in the abstract/intro. The authors state that more analysis is in the appendix, but I would have wanted to see more in the main paper – I’m curious whether they attempted to identify the sources of the bias, or it would have been neat to see some samples with similar (language-agnostic) content but different embeddings due to this bias.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer ZNkV
### 1. Response for Weaknesses:
> The bias analysis section seems brief given how much attention it was given in the abstract/intro. The authors state that more analysis is in the appendix, but I would have wanted to see more in the main paper .
Thanks for your comment on our paper structure. We will modify the paper layout in the camera-ready version and move Sec D1, D3, and D4 from the Appendix to the main paper to further explain the bias.
> I’m curious whether they attempted to identify the sources of the bias, or it would have been neat to see some samples with similar (language-agnostic) content but different embeddings due to this bias.
To delve deeper into the origin of the bias, we randomly selected 20 reports from the English dataset, had them translated into Spanish by native Spanish speakers, and created 20 English-Spanish text pairs that share the same semantic meaning. We computed the correlation coefficient of their embeddings, derived from the uni-lingual Bert, prior to our implementation of cross-lingual MLM pre-training. The correlation coefficient for each sample is depicted in $\textbf{Fig B (left)}$ in the $\textbf{attached PDF of the authors' rebuttal}$. It is noteworthy that the correlation coefficient for each English-Spanish pair (highlighted as the diagonal elements) does not approach 1.0. For enhanced visual clarity, we have plotted a histogram of the diagonal elements from the correlation coefficient matrix in $\textbf{Fig B (right)}$ in the attached PDF of the authors' rebuttal, all of which fall below 0.65. This suggests that the uni-lingual Bert perceives the two language versions of the same report, despite having identical semantic meaning, as distinct texts.
Moreover, following the 3D t-SNE method we used in $\textbf{Fig A}$ in our main paper, we visualise the embeddings of Spanish and English reports generated from uni-lingual Bert (CXR-Bert) in $\textbf{Fig A a1}$ and $\textbf{Fig A a2}$ in the $\textbf{attached PDF of the authors' rebuttal}$. From $\textbf{Fig A}$ in the attached PDF, we can clearly observe that the centre distance between the two language clusters (language bias) for uni-lingual Bert is larger than for Cross-lingual MLM and Med-UniC (e.g., $\textbf{5.22}$ (uni-lingual Bert) → $\textbf{2.38}$ (cross-lingual Bert) → $\textbf{0.31}$ (Med-UniC)), demonstrating that language models pre-trained predominantly on English corpora produce obvious bias for other languages.
---
Rebuttal 2:
Comment: Dear Reviewer, We deeply appreciate the time and effort you've dedicated to reviewing our work. Your insights are invaluable to us. If any aspect of our response remains unclear or if there are further questions you'd like to discuss, please know that we are more than willing to assist and engage in further dialogue. | Summary: The paper aims to address community bias caused by having data in multiple languages in medical vision-language pre-training (VLP). Specifically, it introduces the Unifying Cross-Lingual Medical Vision-Language Pre-Training framework to integrate multimodal data from English and Spanish and proposes a Cross-lingual Text Alignment Regularization (CTR) objective to unify cross-lingual semantic representations of medical reports from different languages. The paper shows superior performance compared to other methods on 5 tasks spanning 10 datasets and provides evidence of community bias in cross-lingual VLP through experimental findings.
Strengths: - Mitigating cultural bias in cross-lingual visual-language datasets in English/Spanish is a significant task with potential applications in the clinical field
- The findings are novel. In particular, the separation between the learned representations in Med-UniC w/o CTR and the clearly closer alignment in embedding space in Med-UniC with CTR is very interesting, as it shows clearly that the representational spaces of the datasets in English and Spanish were separated without CTR and are better integrated now, which demonstrates the alignment power of the CTR objective
- Results: the paper obtains significant improvements over SOTA techniques.
- The CTR loss is an innovative method to address community bias in multi-lingual pre-training methodologies in a mathematically interesting way
Weaknesses: - Looking at Figure 4, it is clear that the pre-training methodology proposed by the authors has significant improvements in bringing the learned representations from the two languages closer in the latent space. However, the two representations are still not perfectly aligned in the latent space, which means that the embeddings are still not perfectly integrated. Full integration would show Figure 4a where the two representations are fully overlapped, whereas now there is still separation between the two, with minimal overlap. While this is still a significant result, the model could probably be improved to better integrate and align the datasets.
- I would like to understand if the authors think that with further training, these two representations can be further aligned, or if this is the maximum representational power capacity of the proposed pre-training methodology.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How was the similarity in Figure 2 between medical reports obtained? What does the X-axis represent?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have adequately addressed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer YFc1
### 1. Response for Weakness 1 and 2:
> - Looking at Fig 4, it is clear that the pre-training methodology proposed by the authors has significant improvements in bringing the learned representations from the two languages closer in the latent space. However, the two representations are still not perfectly aligned in the latent space, which means that the embeddings are still not perfectly integrated. Full integration would show Fig 4a where the two representations are fully overlapped, whereas now there is still separation between the two, with minimal overlap. While this is still a significant result, the model could probably be improved to better integrate and align the datasets.
> - I would like to understand if the authors think that with further training, these two representations can be further aligned, or if this is the maximum representational power capacity of the proposed pre-training methodology.
We appreciate your invaluable guidance regarding our research. Regarding Fig 4, the sub-optimal fusion of image representations can be attributed to two fundamental factors. First, the two image sets, drawn from disparate communities, do not correspond to the same patients and therefore encapsulate distinct semantic content, which inevitably prevents perfect integration of the representations. Second, the diversity of image origins, spanning different hospitals, scanning devices, radiologists, and scanning protocols, leads to what we identify as a `domain gap`. This inherent variability introduces additional challenges to the integration process. However, the core thrust of our research is exploring biases introduced by language; therefore, we do not expect a fully congruent representation of images from two distinct communities.
### 2. Response for Question 1:
> How was the similarity in Figure 2 between medical reports obtained? What does the X-axis represent?
Thanks for your comment. For the similarity matrix shown in Fig 2 in the main paper, we randomly sample 250 Spanish and 250 English medical reports from the two datasets. We then utilise the Cross-lingual Medical LM proposed in Sec. 3.2 to obtain the [CLS] (a.k.a. classification token) embedding of each text. The X-axis therefore represents the index of the text embeddings: the first 250 are Spanish samples and the last 250 are English samples.
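As a rough sketch of this computation (the embedding values and dimension below are placeholders, not the actual Cross-lingual Medical LM outputs; a cosine similarity is assumed):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Placeholder [CLS] embeddings: rows 0-249 are Spanish reports, rows 250-499 English.
rng = np.random.default_rng(0)
cls_embeddings = rng.normal(size=(500, 768))

# A 500 x 500 matrix as in Fig 2; both axes index the text embeddings.
sim = cosine_similarity_matrix(cls_embeddings)
```

A within-language block structure in `sim` (high similarity concentrated in the two 250 x 250 diagonal blocks) would indicate the language-separated representations that CTR is designed to mitigate.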
---
Rebuttal 2:
Comment: Dear Reviewer, we are truly grateful for your thoughtful feedback on our work. Should you have any additional questions regarding our response, please feel free to ask.
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for the response - makes sense that other types of variability will prevent full overlap between the representations. The response answers my questions. | Rebuttal 1:
Rebuttal: - We add a graphical explanation of the bias in $\textbf{Fig. A}$ and $\textbf{Fig. B}$ for reviewer `ZNkV`.
- We select the results from Tab 5 in Appendix D.2 and Tab 4 in the main paper to construct $\textbf{Tab. A}$ for reviewer `NLjC`. $\textbf{Tab. A}$ shows the ablation results for the CTR loss.
Pdf: /pdf/c2f388202932c362314f51a75018cccb4bd6d38a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unsupervised Behavior Extraction via Random Intent Priors | Accept (poster) | Summary: The authors propose UBER, a method for learning a collection of behavior policies from offline experience data lacking reward labels and ultimately adapting these behaviors in an online setting. UBER generates a collection of randomly-initialized reward models and trains a policy on the offline data using each. During online adaptation, UBER jointly learns an action-value function and policy, selecting between the pre-trained policies to collect data using the critic. In several simulated offline-to-online settings, UBER performs similarly or better than some existing algorithms.
Strengths: The paper studies an important and relevant problem, offline-to-online RL, and describes a simple but interesting approach to pre-training useful behavior policies for online exploration. The results are fairly promising, showing that UBER outperforms existing approaches to online adaptation after pre-training on online data. The experiments cover a decent collection of environments.
Weaknesses: While I appreciate an attempt to rigorously understand the proposed method, the interpretations in the theoretical sections seem a bit too generous to me. For example, the suboptimality bound in Eq. 8 seems very loose (proportional to $\sqrt{d^3H^3}$), requiring a massive number of samples for fairly modest state spaces and horizons. Further, the condition in Proposition 4.3 that the infinity norm of the difference of the reward functions is small actually seems very strong to me, and it seems fairly intuitive that the values of the optimal policies under each wouldn't change much. There is no attempt to show experimentally that this norm is small for the actual random reward functions used in practice, so I'm unconvinced this proposition explains anything about the algorithm.
I'm not intimately familiar enough with the area to know all relevant related work, but it seems like additional baseline methods could be included to really show that the proposed method is effective. For example, why not compare with AWAC?
The paper is overall a bit difficult to follow. In particular, some key related work (RLPD, CUP) and parts of the experimental setup (what does “using average reward to learn offline policies” in 5.2 mean?) are not explained. Figure captions are generally too short; for example, the Fig 4 caption says "distribution of random intent priors, datasets, and behavior cloning policies." The y axis is "Statistics", with a maximum value of 100. Is this a histogram? Figures 6 and 7 are difficult to unpack because the captions are uninformative, and because the text doesn't adequately explain PEX and CUP.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the y axis in Figure 4?
Why is RLPD an oracle? It seems to consistently outperform UBER in some settings.
Is any exploration bonus/epsilon sampling used for TD3 and RLPD? It is surprising that they consistently achieve exactly zero return.
Maybe I’m missing something, but in Figure 4, UBER isn’t clearly learning more diversity? For example, in walker2d, it actually seems like the dataset returns have more mass in the high-reward areas?
Why is the reward-free (vs action-free and reward-free) setting the one we should be interested in? Justifying the problem setting would be helpful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Discussion of when the various assumptions (offline-only vs having online exploration available, reward-free vs reward-free and action-free) are realistic/application would be very helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We appreciate the reviewer's valuable feedback.
**W1: Concerns about the generosity of the theorems.**
- Theorem 4.2
Recent advanced analysis [1] allows us to refine the suboptimality bound of Theorem 4.2 to $\tilde{O}(\sqrt{d^2H^3/N})$ without algorithmic adjustments. The resulting performance bound is (nearly) minimax optimal and cannot be further improved, and it is nontrivial in practice: for MuJoCo tasks, $V_{max}\sim H$, $d\sim 10$, $H\sim 1000$, $N\sim 10^6$, which leads to a guaranteed $68\%$ of optimal performance.
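The $68\%$ figure can be checked by plugging these orders of magnitude into the bound (ignoring the logarithmic factors and constants hidden in $\tilde{O}(\cdot)$):

```python
import math

d, H, N = 10, 1000, 1_000_000                # feature dim, horizon, dataset size
suboptimality = math.sqrt(d**2 * H**3 / N)   # sqrt(1e5), roughly 316
guaranteed = 1 - suboptimality / H           # normalize by V_max ~ H
# guaranteed is roughly 0.68, i.e. about 68% of optimal performance
```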
Note that the main focus of Theorem 4.2 is not on sample complexity but on **robustness**. Theorem 4.2 allows us to learn an optimal policy for **any** intention $z$ as long as the corresponding policy $\pi^*_z$ is covered by the dataset. This indicates that offline algorithms are robust to reward functions, which allows us to learn diverse behaviors from one *single* dataset.
- Proposition 4.3
We agree with the reviewer that Proposition 4.3 can be coarse. To give a finer analysis of how the random rewards functions can cover the true reward function, we refine our theory and conduct further experiments.
We resort to the random feature theory [2] for the theory part. Specifically, we can show that $\tilde{O}(\sqrt{M})$ random intentions are enough to cover the true intention, where $M$ is the size of the dataset. Please see the general response for more details.
For the empirical part, to show that the set of random intentions does cover the set of true intentions, we calculate the correlation with the true reward and linear projection error for $N=256$ random rewards for each task.
The results are shown in Table 1 in our general response. Random intentions do have a high correlation with the true intention, and a linear combination of random rewards can cover the true reward function.
**W2: Additional baselines and comparison with AWAC.**
**A for W2:** As suggested, we compare with the offline unsupervised behavior extraction methods (OPAL [10] and PARROT [11]) and unsupervised data sharing methods (UDS [6]).
The experimental results in Figure 1 in the General Response show that UBER performs better than these baselines in most tasks.
It is unsurprising because prior methods extract behaviors in a behavior-cloning manner, which lacks diversity and leads to degraded performance for downstream tasks.
**W3: Clear description.**
**A for W3:** We appreciate the detailed and valuable comments.
In the next revision, we will improve the presentation of the paper by providing a detailed explanation of the experimental setup, other related works (PEX, CUP, and RLPD), and figure captions.
**Q1: Unclear y-axis:**
**A for Q1:** The y-axis represents the unnormalized frequency of each return range. We have updated the visuals to present normalized return distributions for clarity. Please refer to the attached PDF.
**Q2: Why is RLPD an oracle?**
**A for Q2:** RLPD is a SOTA offline-to-online method that **uses the reward information** in the offline dataset, while our method focuses on the unsupervised setting where the reward information is unavailable.
**Q3: Exploration bonuses in TD3 and RLPD:**
**A for Q3:** Neither TD3 nor RLPD uses exploration bonuses. However, for environments like Antmaze, common exploration strategies fall short due to the environment's complexity and the sparsity of reward signals. Here, offline datasets are imperative for meaningful results.
**Q4: In Figure 4, UBER isn't clearly learning more diversity?**
**A for Q4:** To provide a clearer view of UBER promoting diversity, we have calculated the entropy of the return distribution, as detailed in Table 2 in our General Response. UBER prominently encourages diversity across all tasks, with the sole exception being walker2d-expert. We hypothesize that expert data dominate the dataset and the task has low sensitivity to reward variations [3]. Also, in this case, the lack of diversity is not a big problem since the optimal behavior is already in the behavior set.
**Q5: Why is the reward-free (vs action-free and reward-free) setting the one we should be interested in?**
**A for Q5:**
- Reward-free settings naturally appear in many real-world problems. 1. In real-world problems like robotic tasks and NLP tasks [4,5], reward labels are expensive to get, while action labels are relatively cheap. 2. The setting also appears in data-sharing problems between different tasks [6,7]. We can remove the reward label for other tasks and reuse them as reward-free data for new tasks.
- Action labels are usually cheap while being crucial for efficient learning.
Their absence hinders our ability to estimate transition models or behaviors, necessitating either additional assumptions [8] or a considerable data volume [9].
**L1: Limitations**
**A for L1:** We thank the reviewer for pointing this out, and we will add discussion for the limitation in the updated manuscript.
Thanks again for the valuable comments.
We hope our response has clarified your concerns.
We are looking forward to further feedback and discussions.
Best,
The Authors
References
[1] Xiong, Wei, et al. "Nearly minimax optimal offline reinforcement learning with linear function approximation: Single-agent mdp and Markov game." arXiv preprint arXiv:2205.15512 (2022).
[2] Rudi, Alessandro, and Lorenzo Rosasco. "Generalization properties of learning with random features." Advances in neural information processing systems 30 (2017).
[3] Li, Anqi, et al. "Survival Instinct in Offline Reinforcement Learning." arXiv preprint arXiv:2306.03286 (2023).
[4] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022): 27730-27744.
[5] Christiano, Paul F., et al. "Deep reinforcement learning from human preferences." Advances in neural information processing systems 30 (2017).
---
Rebuttal Comment 1.1:
Title: Additional Reference
Comment: [6] Yu, Tianhe, et al. "How to leverage unlabeled data in offline reinforcement learning." International Conference on Machine Learning. PMLR, 2022.
[7] Hu, Hao, et al. "The provable benefits of unsupervised data sharing for offline reinforcement learning." arXiv preprint arXiv:2302.13493 (2023).
[8] Torabi, Faraz, Garrett Warnell, and Peter Stone. "Recent advances in imitation learning from observation." arXiv preprint arXiv:1905.13566 (2019).
[9] Baker, Bowen, et al. "Video pretraining (vpt): Learning to act by watching unlabeled online videos." Advances in Neural Information Processing Systems 35 (2022): 24639-24654.
[10] Ajay et al., OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning, 2021.
[11] Singh et al., Parrot: Data-Driven Behavioral Priors for Reinforcement Learning, 2021.
---
Rebuttal Comment 1.2:
Comment: Dear authors,
Thank you for submitting your response to the comments.
Dear Reviewer XkfL,
Were your concerns addressed by the authors?
Best,
AC
---
Rebuttal Comment 1.3:
Comment: I appreciate the author's comprehensive response, additional theoretical contributions, and experimental evaluations. With the understanding that the clarity of writing needs improvement (in terms of explaining prior work, experimental results, and more clearly captioning figures with informative captions) before publication, I would be okay with accepting the paper. Therefore I raise my score to 6.
---
Reply to Comment 1.3.1:
Title: Thanks for raising the score to 6!
Comment: We would like to thank the reviewer for raising the score! We really appreciate the valuable comments and suggestions from the reviewer.
---
Rebuttal 2:
Title: Looking forward to further comments!
Comment: Dear reviewer,
We have updated our supplementary experimental results and provided a more in-depth explanation of UBER, along with enhanced theoretical results. We wonder whether our response and revision have cleared your concerns. We would appreciate it if you could kindly let us know whether you have any other questions. We look forward to comments that can further improve our current manuscript. Thanks!
Best regards,
The Authors | Summary: The paper studies a setting where there is an offline trajectory dataset with no reward information and the goal is to extract effective behaviors from the offline data such that they can be re-used during a separate online phase to accelerate online learning. To extract effective behaviors from the offline data, the authors propose to use an offline RL algorithm (TD3+BC) to pre-train on the offline dataset with random reward functions, resulting a policy for each random reward function. Then, during the online phase, a new discrete-action policy is initialized and being optimized to select the set of pre-trained policies obtained from the offline phase using a standard online RL (TD3). The paper also provides theoretical argument for why using random reward functions for behavior extraction is sufficient and effective. Empirically, the proposed method is able to outperform existing methods on D4RL AntMaze tasks and Locomotion tasks.
Strengths: - The idea of extracting behavior prior and learning a selection policy online has been explored in some prior works (e.g., [1]). The authors should definitely discuss how this is related to the proposed approach here. Despite that, the use of random reward network for extracting the behaviors is novel (along with theoretical analyses that justify the idea)
- Empirical results (especially on AntMazes) are strong, suggesting that the proposed method is effective at extracting behaviors that sufficient for accelerating online learning.
- The paper is well-written and easy to follow.
[1] Singh, Avi, et al. "Parrot: Data-driven behavioral priors for reinforcement learning." arXiv preprint arXiv:2011.10024 (2020).
Weaknesses: - For any behaviors that are not covered in the offline dataset, the proposed method would not be able to capture them well. If the online task requires new unseen behaviors, the learning might fail completely. This limitation should be addressed/discussed in more details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - L190 -- the authors mention that there are also visual tasks but I could not find them in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the first point in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We appreciate the Reviewer for finding our work novel, effective and well-written. We provide clarification to the points the Reviewer raised as follows.
**S1: Discussion and comparison with previous behavior extraction methods.**
**A for S1:** Previous behavior extraction methods can be divided into two categories:
- **Online Skill Extraction**: Notable methods in this category [1, 2] are proposed in the presence of an online interactive environment. Their reliance on an exploration objective (e.g., information gain) to learn diverse skills makes them less applicable in offline scenarios. Directly leveraging these methods with an offline dataset could introduce significant extrapolation errors.
- **Offline Hierarchical RL**: Techniques that employ offline dataset reuse, such as the ones presented in [3, 4], predominantly utilize behavior cloning for skill extraction over extended temporal scales. While effective when the dataset closely matches the test environment, these methods might not consistently generate diverse behaviors, which are imperative for learning novel tasks.
In contrast, our work focuses on **offline unsupervised RL**, where agents extract diverse behaviors in the reward-free offline dataset.
We conducted additional experiments to compare with prior offline unsupervised behavior extraction methods, OPAL and PARROT. Specifically,
- We use a VAE model consistent with OPAL to extract the behavioral policy and reuse it via PEX during the online phase. We name this variant OPAL-PEX.
- We use a Flow model consistent with PARROT to extract the behavioral policy and reuse it via PEX during the online phase. We name this variant PARROT-PEX.
- In addition, we set the reward of the dataset to 0, learn the offline behavioral policy, and reuse it via PEX during the online phase. We name this variant UDS-PEX.
The experimental results in Figure 1 in the General Response show that UBER performs better than these baselines in most tasks.
It is unsurprising because prior methods extract behaviors in a behavior-cloning manner, which lacks diversity and leads to degraded performance for downstream tasks.
**W1: The method would fail if the online task requires new unseen behaviors.**
**A for W1:**
It is worth noting that UBER includes a randomly initialized, learnable policy in the behavior set, so it will not completely fail when the online task requires new behaviors; in that case, performance degrades to that of pure online learning rather than failing outright. We thank the Reviewer for pointing this out, and we will discuss this limitation in the updated manuscript.
**Q1: The visual tasks.**
**A for Q1:** This is a typo: it should read multi-task (i.e., Meta-World) rather than visual tasks, where we have multiple datasets and downstream tasks. Please see Appendix B for more details of the experiments in the multi-task setting.
Thanks again for your supportive comments and suggestions.
We sincerely hope that our response has addressed your concerns. Any further feedback and discussions are highly appreciated.
Best,
The Authors
References
[1] Eysenbach, Benjamin, et al. "Diversity is all you need: Learning skills without a reward function." arXiv preprint arXiv:1802.06070 (2018).
[2] Sharma, Archit, et al. "Dynamics-aware unsupervised discovery of skills." arXiv preprint arXiv:1907.01657 (2019).
[3] Ajay, Anurag, et al. "Opal: Offline primitive discovery for accelerating offline reinforcement learning." arXiv preprint arXiv:2010.13611 (2020).
[4] Singh, Avi, et al. "Parrot: Data-driven behavioral priors for reinforcement learning." arXiv preprint arXiv:2011.10024 (2020).
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your taking the time to respond to the comments.
Dear Reviewer 822o,
After reading the authors' response, do you have any additional thoughts?
Best,
AC | Summary: This paper tackles the problem of unsupervised behavior extraction from reward-free offline data. The main idea is to pre-train multiple policies with random rewards. UBER consists of two phases. It first trains $N$ ($100$ or $256$) policies with random rewards with an offline RL algorithm (TD3+BC), and in the subsequent online phase, it continues training the N policies plus a newly initialized policy with a soft policy selector based on the Q functions. The authors show that the behaviors learned by UBER are helpful to solve downstream tasks in standard offline RL benchmarks.
Strengths: - The proposed method seems novel to me and is relatively easy to implement.
- Despite the simplicity, the behaviors learned by random rewards seem helpful in various downstream tasks in the D4RL benchmark.
- The paper contains several analyses including ablation studies.
- The paper is well-written and easy to understand.
Weaknesses: - The theoretical results do not seem to justify the use of **random** rewards. Theorem 4.2 states a general convergence result in offline RL, and Theorem 4.3 states that if two reward functions are similar, the corresponding optimal value functions are also similar. I'm not convinced how these theorems support the effectiveness of *random* reward functions. Figure 8 in Appendix A.3 states that if random reward functions sufficiently cover the reward function space, any task reward function can be approximated by the closest random reward function. However, it is unclear as to how many random reward functions are needed to enjoy this benefit. For example, we may need exponentially many random reward functions (i.e., $O((1/\epsilon)^{|S||A|})$) to cover the entire reward function space. I would have expected a complexity analysis similar to Theorem 3.1 in Chen et al. [1].
- The reason why random rewards lead to diverse behaviors in Figure 1 may heavily depend on the (strict) early termination condition in Hopper (and the same for Walker2d). How do the behaviors from random reward functions look like in HalfCheetah and AntMaze, which do not have early termination conditions? Could the authors provide videos and/or plots similar to Figure 4 in these environments?
- The paper lacks discussions/comparisons with prior offline unsupervised behavior (or behavioral prior) extraction methods (e.g., OPAL [2], SPiRL [3], and PARROT [4]) and prior unsupervised data sharing methods (e.g., UDS [5] and PDS [6]).
Typos and minor comments
- L117: Missing citation.
- OPAL [2] is cited but not mentioned in the manuscript.
- What does PEX stand for?
[1] Chen et al., Self-Supervised Reinforcement Learning that Transfers using Random Features, 2023.
[2] Ajay et al., OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning, 2021.
[3] Pertsch et al., Accelerating Reinforcement Learning with Learned Skill Priors, 2020.
[4] Singh et al., Parrot: Data-Driven Behavioral Priors for Reinforcement Learning, 2021.
[5] Yu et al., How to Leverage Unlabeled Data in Offline Reinforcement Learning, 2022.
[6] Hu et al., The Provable Benefit of Unsupervised Data Sharing for Offline Reinforcement Learning, 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Based on the weaknesses section above, my two biggest questions are:
- How does UBER compare to previous unsupervised behavior extraction and/or data-sharing methods? I do not expect comparisons with all the above methods, but it would be nicer if the authors could provide empirical comparisons with some of the methods in these categories (or discussions about why UBER is very different from them).
- Why is using **random** reward functions a good idea when extracting behaviors? If this is for purely empirical reasons, I'm fine with that (though it would have required more thorough empirical evaluations in diverse environments), but at least the theorems in the paper do not seem to provide an answer to this important question.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper lacks discussions about the limitations of UBER. One limitation I can imagine is that this random reward strategy may fail in more complex environments and thus may not be scalable (though addressing this limitation may be out of the scope of this work and it does not affect my score).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your constructive feedback. We have provided additional experimental results and explanations to address your concerns, and we hope the following clarifications shed light on the raised points.
**W1: The theoretical results do not seem to justify the use of random rewards.**
**A for W1:**
**(Theorem 4.2)**
The key focus of Theorem 4.2 is that the offline performance is robust to intention $z$, as long as the dataset covers the corresponding policy $\pi^*_z$.
Concurrent work [7] also finds the robustness of the reward function for pessimism algorithms, which aligns with our findings.
This robust result allows us to learn diverse behaviors from a *single* dataset, while prior works either learn one policy from one dataset [5,6] or resort to online interactive environments [8].
**(Theorem 4.3)**
We agree with the Reviewer that Theorem 4.3 can be coarse in showing coverage, and we thank the Reviewer for the reference.
We do not need to cover the whole state-action space but only the dataset distribution. We can then use random feature theory for random intention coverage. Specifically, we can show that $\tilde{O}(\sqrt{M})$ random intentions are enough to cover the true intention, where $M$ is the size of the dataset. Please see the general response for more details. This requires ~1000 random intentions for a dataset of size $M=10^6$, which aligns well with our practical choice of $N=256$.
**W2: How do the behaviors from random reward functions look like in HalfCheetah and AntMaze?**
**A for W2:** We provide plots for the HalfCheetah and AntMaze tasks, analogous to Figure 4 in our submission, in the attached PDF file.
We can see that UBER encourages diverse behaviors regardless of strict termination conditions. This means the diversity does not (only) come from terminating at different timesteps but from diverse intentions.
**W3 \& Q1: Lack discussions/comparisons with previous unsupervised behavior extraction and data-sharing method.**
**A for W3 \& Q1:** As suggested, we compare with the offline unsupervised behavior extraction methods and unsupervised data sharing methods, including OPAL, PARROT, and UDS. The experimental results in Figure 1 in the General Response show that UBER performs better than these baselines in most tasks.
The result is not surprising because prior methods extract behaviors in a behavior cloning manner, which lacks diversity and leads to degraded performance for downstream tasks, especially when the downstream tasks differ from the dataset.
**Q2: Why is using random reward functions a good idea when extracting behaviors?**
**A for Q2:**
The motivation for using random rewards is to provide a simple way to extract *diverse* yet useful behaviors. Previous online behavior extraction methods use an explorative objective (e.g., information gain) to acquire diverse behaviors, which is not applicable in the offline setting since it will lead to extrapolation errors and value explosion. Previous offline methods [1,2] learn temporal-extended skills in a behavior-cloning manner, which lacks diversity and leads to degraded performance for downstream tasks.
Then, we justify the use of random rewards empirically and theoretically:
Empirically, using random reward functions can be justified from two aspects: the coverage of the true reward and the diversity of the behavior set with strong empirical performance.
**1. Coverage:**
- To show that the set of random intentions does cover the set of true intentions, we calculate the correlation with the true reward as well as linear projection error for each task.
The experimental results in Table 1 in the General Response show that random intentions do have a high correlation with the true intention, and a linear combination of random rewards can cover the true reward function well. Note that we use random reward **functions** with $(s,a)$ as input rather than completely random labels; the latter yields near-zero correlation with the true intention and about $40\%$ projection error.
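A toy version of these two diagnostics, under an illustrative linear-reward assumption (the feature dimension, dataset size, and number of intents below are placeholders, not the actual experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, N = 10_000, 32, 256   # dataset size, feature dim, number of random reward functions

# Illustrative stand-in: rewards are linear in shared state-action features,
# mirroring the linear-MDP setting used in the theory.
features = rng.normal(size=(M, d))                   # phi(s, a) for each dataset point
true_reward = features @ rng.normal(size=d)          # unknown true reward
random_rewards = features @ rng.normal(size=(d, N))  # N random reward functions ("intents")

# Diagnostic 1: correlation of each random reward with the true reward.
corrs = np.array([np.corrcoef(random_rewards[:, i], true_reward)[0, 1] for i in range(N)])

# Diagnostic 2: relative error of the least-squares projection of the true reward
# onto the span of the random rewards (small error => random rewards cover it).
coef, *_ = np.linalg.lstsq(random_rewards, true_reward, rcond=None)
proj_err = np.linalg.norm(random_rewards @ coef - true_reward) / np.linalg.norm(true_reward)
```

Because the random rewards here share features with the true reward, the projection error is essentially zero; rewards drawn as pure i.i.d. noise over $(s,a)$ pairs would instead give near-zero correlations and a large projection error, matching the contrast described above.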
**2. Diversity:**
- Figure 4 in our submission clearly shows that UBER is encouraging diversity since it has a wider span of distributions over the returns. To show this more explicitly, we further calculate the entropy of the return distribution, as shown in Table 2 in the General Response.
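The entropy of a return distribution (as reported in Table 2 in the General Response) can be estimated by discretizing returns into bins; a minimal sketch, where the bin count is an arbitrary choice of ours rather than the paper's:

```python
import numpy as np

def return_entropy(returns, bins=20):
    """Shannon entropy (nats) of a discretized return distribution.
    Higher entropy means the behavior set spans a wider range of
    returns, i.e., more diverse behaviors."""
    hist, _ = np.histogram(returns, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log 0 := 0
    return float(-(p * np.log(p)).sum())
```

A point mass of returns (e.g., a single deterministic behavior) yields zero entropy, while behaviors spread across many return levels yield entropy up to $\log(\text{bins})$.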
Theoretically, using random reward functions is justified from two perspectives: robustness and coverage.
**1. Robustness:**
- For robustness, as stated in our answer to W1, Theorem 4.2 in our work and concurrent work [3] both show that pessimism leads to robustness over rewards. This allows us to smooth out the small fluctuations in the reward function and enables learning diverse behaviors from one dataset.
**2. Coverage:**
- For coverage, as stated in our answer to W1, we can show that a reasonable number of random reward functions is enough to cover the true reward. Note that we do not need to cover the whole state-action space, only the support of the dataset, and $\tilde{O}(\sqrt{M})$ random functions are sufficient, where $M$ is the size of the dataset.
**L1: The applicability of the proposed method for complex rewards.**
**A for L1:** We thank the Reviewer for pointing this out, and we will discuss the limitation in the updated manuscript.
Thanks again for the detailed and valuable comments. We sincerely hope our response addresses your concerns, and we look forward to more discussions.
Best,
The Authors
[1] Ajay, Anurag, et al. "Opal: Offline primitive discovery for accelerating offline reinforcement learning." arXiv preprint arXiv:2010.13611 (2020).
[2] Singh, Avi, et al. "Parrot: Data-driven behavioral priors for reinforcement learning." arXiv preprint arXiv:2011.10024 (2020).
[3] Li, Anqi, et al. "Survival Instinct in Offline Reinforcement Learning." arXiv preprint arXiv:2306.03286 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! I believe the new theorem and the qualitative results on HalfCheetah and AntMaze do improve the quality of the paper, and I raised my score from 4 to 6.
---
Reply to Comment 1.1.1:
Title: Thanks for raising the score to 6!
Comment: We would like to thank the reviewer for raising the score! We really appreciate the valuable comments and suggestions from the reviewer. | Summary: The authors propose unsupervised behavior extraction via random intent priors (UBER), an unsupervised method for extracting and learning behaviors from an offline dataset for downstream tasks. Assuming the situations where there are no reward labels for the transitions in the offline dataset, they suggest using a set of random *intentions* and thus intention-induced random reward functions to train a set of policies. For the online downstream task learning, they employ the policy expansion scheme and reuse the learned policies along with a newly learning policy based on the output of the critic. The authors also provide theoretical results on the existence of the corresponding intention given an arbitrary behavior, a bound on the suboptimality of the policy that is trained with some intention in a linear MDP setting, and the robustness of the random reward functions. Empirically, they present the comparison of performance and analyses in MuJoCo, AntMaze, and Meta-World environments.
Strengths: - The proposed method shows good empirical performance in general compared to the baselines on the downstream tasks in MuJoCo, AntMaze, and Meta-World.
- The manuscript is easy to follow and clearly structured. It is also equipped with appropriate conceptual figures, which can help readers' understanding.
- Learning from unlabeled offline behavior data is an important topic in RL, given that much more unlabeled behavior data is available compared to labeled data in the real world.
Weaknesses: - My current major concern is that the use of random intentions doesn't seem well-justified (and motivated) to me given the current state of this submission. While Fig.4 suggests that UBER's learned behaviors cover the distribution of *behaviors* from the original offline dataset in terms of their resulting returns, it may not necessarily mean that the random intentions or the random reward functions used for the training cover/match (or are highly correlated with) the true intentions. There is a fair possibility that the pessimism in the offline RL training is playing an important role in matching the offline dataset's behaviors, which is also suggested by a concurrent work [1]. I imagine that the whole *intention* space might be too large to be covered with $N=100$ or $N=256$, but it may not matter for the online phase in practice as the policy selection is performed at every time step.
- Some writing or editing issues
- A missing reference at L117.
- The definition (or notations) of the loss functions in Algorithms 1 and 2.
- I believe calling RLPD an *oracle* because of the use of the offline dataset with the true rewards for its training can be misleading.
- Behavior size $N$ (from the main manuscript) vs Random reward dim $n$ (from the appendix)?
[1] LI, Anqi, et al. Survival Instinct in Offline Reinforcement Learning. *arXiv preprint arXiv:2306.03286*, 2023.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: - Could you provide analyses to back up the claim that the set of random *intentions* is covering the set of true intentions?
- Regardless of my concern, I suggest the authors to check out the concurrent work [1].
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: Please take a look at the Weaknesses and Questions sections above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your constructive feedback. We've provided additional experimental results and explanations to address your concerns, and we hope the following clarifications shed light on the raised points.
**Q1 \& W1.1: The motivation and justification of using random intentions and their alignment with true intentions.**
**Response for Q1 \& W1.1:** We employ random rewards as a simple yet effective mechanism to distill diverse and useful behaviors from offline data.
In online settings, behavior extraction typically relies on exploration objectives, like information gain, to produce diverse behaviors. Such objectives are ill-suited for offline settings, as they can cause extrapolation errors and result in value explosion. On the other hand, existing offline methods learn temporally-extended skills using behavior cloning, but they may not produce enough behavioral diversity. Lack of diversity leads to suboptimal performance, especially when the downstream task differs from the provided dataset.
To substantiate our claim that random intentions effectively cover true intentions, we highlight the following:
**(Experiments)** We calculate the correlation of $N=256$ random rewards with the true reward and measure the linear projection error. Our results (Table 1 in General Response) indicate that random intentions can have a high correlation with true intentions, and linear combinations of random rewards can approximate the true reward function quite well. Note that we use random reward functions (represented by neural networks) based on $(s, a)$ inputs rather than entirely random ones, which have a near-zero correlation with the true reward and $40\%$ projection error.
**(Theory)** Drawing parallels to the random features theory, we consider each random reward function as a unique dimension of a random feature, enabling us to leverage the random feature theory to affirm our random intention coverage. A detailed elaboration is available in the general response.
**Q2 \& W1.2: Comparing our findings with concurrent work [1] and the role of pessimism in aligning offline behaviors.**
**Response for Q2 \& W1.2:** Concurrent work [1] indeed resonates with some of our observations, especially Theorem 4.2. While the concurrent study emphasizes the robustness of offline algorithms to reward variations, our findings suggest that they can adeptly learn tasks with various intentions $z$ as long as the optimal policy is represented in the dataset. So, Theorem 4.2 is also a kind of "robustness over the reward" statement for pessimistic offline algorithms. This robustness helps smooth the random reward function and makes an implicit trade-off between usefulness and diversity. This is also observed empirically since a reasonable number of behaviors have nontrivial or near-optimal performance.
However, this robustness alone doesn't encapsulate the efficacy of our approach. For instance, behaviors learned with random intentions are diverse and have significant entropy in their return distributions (Figure 4). It would be contradictory if the reward function's robustness were the only driving factor. Additionally, our method consistently outperforms behavior cloning techniques, even when behavior cloning leads to near-optimal policy (e.g., in expert datasets). This further underscores the importance of behavioral diversity over mere reward robustness.
**W2: Concerns about the quality of writing.**
**Response for W2:** Thank you for pointing these out. We've rectified the issues in our revised manuscript to enhance clarity and readability.
We deeply appreciate your thorough feedback and sincerely hope our clarifications address your concerns. Any further feedback and discussions are highly appreciated.
Best regards,
The Authors
---
Rebuttal 2:
Title: Looking forward to further comments!
Comment: Dear reviewer,
We have updated our supplementary experimental results and a more in-depth explanation of UBER. We also updated enhanced theory results. We are wondering if our response and revision have cleared your concerns. We would appreciate it if you could kindly let us know whether you have any other questions. We are looking forward to comments that can further improve our current manuscript. Thanks!
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: Dear authors,
Thank you for submitting your response to the comments.
Dear reviewer wm6d,
Were your concerns addressed by the authors?
Best,
AC | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all the reviewers for their constructive feedback and valuable insights. We are encouraged to learn that many found our work "novel," "interesting," "fairly promising," and "well-written." We genuinely appreciate these positive remarks.
We acknowledge the concerns raised about the lack of comprehensive theoretical results, experimental data, and the clarity of our algorithm's explanation. In response:
- We have included enhanced theoretical results.
- Our supplementary section now showcases expanded experimental results, comparing UBER with additional unsupervised behavior extraction and data-sharing techniques.
- We've also provided a more in-depth explanation of UBER tailored to each reviewer's feedback.
We sincerely hope that these updates and clarifications will address the reviewers' concerns. More discussions and suggestions for further improving the paper are welcomed!
### New Theoretical Results
To give a stronger theoretical guarantee for using random intentions, we resort to random feature theory for a better characterization of the coverage property of random functions. Especially, we have the following theorem.
**Theorem.** Assume the reward function admits an RKHS representation $\psi(s,a)$ with $|\psi(s,a)|\leq \kappa$ almost surely. Then with $N=c_0 \sqrt{M}\log(18\sqrt{M}\kappa^2/\delta)$ random reward functions, a linear combination of the set of random reward functions can approximate the true reward function with error
$$
\epsilon \leq c_1 \log^2(18/\delta)/\sqrt{M},
$$
with probability $1-\delta$.
The key to the proof is noticing that a random reward function can be seen as one random feature for linear regression. We will update the manuscript with the theorem and its full proof. The theorem above allows us to use $N=\tilde{O}(\sqrt{M})$ random rewards to cover the true reward function well, where $M$ is the size of the dataset.
We hope the theorem above addresses the concerns about the theoretical justification of using random intentions.
### Additional Experiments:
- Correlation ratio and linear projection error on various tasks:
| Task | Max Correlation | Min Correlation | Projection Error |
|-----|------|------|-----|
|hopper-medium-v0 | 0.569 |-0.568|0.016 |
|hopper-medium-expert-v0 |0.498 |-0.540|0.015 |
|hopper-expert-v0 | 0.423 |-0.415 | 0.011 |
|halfcheetah-medium-v0 |0.569 |-0.568|0.016 |
|halfcheetah-medium-expert-v0 |0.605 |-0.589|0.047 |
|halfcheetah-expert-v0 |0.370|-0.461|0.021 |
|walker2d-medium-v0 |0.475 |-0.582|0.046 |
|walker2d-medium-expert-v0 |0.495 |-0.472|0.042 |
|walker2d-expert-v0 |0.358 |-0.503|0.019 |
Table 1. Minimum and maximum correlation and linear projection error for the true reward function using random reward functions on various tasks. The projection error $\epsilon$ is defined as $\epsilon = ||r-\hat{r}||/||r||$ , where $\hat{r}$ is the best approximation using linear combination of random rewards.
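The projection error $\epsilon = \|r-\hat{r}\|/\|r\|$ defined above could be computed by an ordinary least-squares fit of the true reward onto the random reward vectors. The following is an illustrative sketch under our own assumptions about the data layout, not necessarily the authors' implementation:

```python
import numpy as np

def projection_error(r_true, random_rewards):
    """epsilon = ||r - r_hat|| / ||r||, where r_hat is the best linear
    combination of the random reward vectors (least-squares fit).

    r_true:         shape (M,), true rewards on the dataset transitions.
    random_rewards: shape (M, N), column j holds the j-th random reward
                    evaluated on the same transitions."""
    coef, *_ = np.linalg.lstsq(random_rewards, r_true, rcond=None)
    r_hat = random_rewards @ coef
    return float(np.linalg.norm(r_true - r_hat) / np.linalg.norm(r_true))
```

If the true reward lies in the span of the random rewards, the error is (numerically) zero; since the zero coefficient vector is always admissible, the error never exceeds 1.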
- Distribution of random intent priors, datasets, and behavior cloning policies:
| Method\Dataset | halfcheetah-medium | halfcheetah-medium-expert | halfcheetah-expert | hopper-medium | hopper-medium-expert | hopper-expert |
|----------------|--------------------|---------------------------|--------------------|------------------------|----------------------|---------------|
| BC | 0 | 0 | 0 | 0 | 0 | 0 |
| Dataset | 1.21 | 1.47 | 0.86 | 0.71 | 1.97 | 0.82 |
| UBER | 2.01 | 1.92 | 2.69 | 2.45 | 2.32 | 1.11 |
| Method\Dataset | walker2d-medium | walker2d-medium-expert | walker2d-expert | antmaze-medium-diverse | antmaze-medium-play | |
|----------------|--------------------|---------------------------|--------------------|------------------------|----------------------|---------------|
| BC | 0 | 0 | 0 | 0 | 0 | |
| Dataset | 1.44 | 1.88 | 2.43 | 0.63 | 0.29 | |
| UBER | 2.67 | 2.61 | 0.41 | 1.77 | 1.69 | |
Table 2. The entropy of the return distribution for each method on different tasks.
Pdf: /pdf/39dd5dc570cccb42e014fe294d68dd09de24aaf5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies the usage of unsupervised (reward-free) data to help RL, which could extract useful behaviors from offline reward-free datasets. The proposed method is called UBER, which generates random intent priors and trains the agents based on them. The procedure generates diverse behaviors which in turn helps RL learn in a sample efficient way.
Strengths: + The proposed method is clearly motivated and presented. The illustration figure and the algorithm description are easy to read;
+ The proposed method has theoretical support and proofs;
+ In the empirical experiments, the proposed method is much more sample efficient than the baselines because it learns useful behaviors that help the actual task.
Weaknesses: I feel that the idea of "unsupervised RL learning a set of diverse behaviors so that, when the actual reward function applies, the RL agents can learn much faster" has appeared in the literature. The paper's innovation seems to lie in the setting, where the unsupervised RL happens in an offline fashion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors justify the novelty part when it compares to the unsupervised RL literature?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for taking the time to review our manuscript and for providing insightful feedback. We appreciate the opportunity to clarify our contributions and address your concerns.
**W1 \& Q1: Justify the novelty part when it compares to the unsupervised RL literature.**
**A for W1 \& Q1:**
The novelty of our work comes from three perspectives: a new setting, a simple yet effective algorithm, and its theoretical justification.
First, we propose a new setting: leveraging unsupervised or reward-free offline datasets to accelerate online learning for novel tasks. This helps reduce exploration costs and enables data reuse from other tasks (e.g., we can discard reward labels in datasets from other tasks). It also enables learning from large datasets, with the potential for behavior emergence, which is impossible for online RL due to its low sample efficiency. Traditional unsupervised RL methods require an online interaction environment [1,2] or oracle reward functions [3,4].
Secondly, we propose a simple yet effective method using random intentions to extract diverse and useful behaviors.
Previous work on online skill extraction relies on an exploration objective (like information gain) to derive diverse skills, which makes them less applicable in offline scenarios. Directly leveraging these methods with an offline dataset could introduce significant extrapolation errors. Previous techniques that employ offline dataset reuse, such as the ones presented in [3,4], predominantly utilize behavior cloning for skill extraction over extended temporal scales. While effective when the dataset closely matches the test environment, these methods might not consistently generate diverse behaviors, which are imperative for learning novel tasks.
Thirdly, we provide a theoretical argument for why using random reward functions for behavior extraction is sufficient and effective. We use offline RL theory and random feature theory to show the robustness and coverability in the reward space of random rewards, which justifies using random intentions.
We genuinely hope that our explanation clarifies the distinctiveness and potential of our work. We are eager to engage in further discourse if needed.
Sincerely,
The Authors
References
[1] Eysenbach, Benjamin, et al. "Diversity is all you need: Learning skills without a reward function." arXiv preprint arXiv:1802.06070 (2018).
[2] Sharma, Archit, et al. "Dynamics-aware unsupervised discovery of skills." arXiv preprint arXiv:1907.01657 (2019).
[3] Ajay, Anurag, et al. "Opal: Offline primitive discovery for accelerating offline reinforcement learning." arXiv preprint arXiv:2010.13611 (2020).
[4] Singh, Avi, et al. "Parrot: Data-driven behavioral priors for reinforcement learning." arXiv preprint arXiv:2011.10024 (2020).
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. My concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thanks for your time and effort in the review process, and we truly appreciate your constructive feedback, which allowed us to improve our work.
We are delighted to know that our rebuttal has resolved your concerns. However, it appears that the overall score has remained unchanged. We kindly request that you re-evaluate the score in light of the resolved concerns, if possible.
We understand that the review process can be quite demanding, and details can sometimes be overlooked. We sincerely hope our note serves only as a gentle reminder and not as a presumption.
Warm regards,
The Authors | null | null | null | null | null | null |
HiBug: On Human-Interpretable Model Debug | Accept (poster) | Summary: This paper proposes a novel model-debugging method for deep learning-based classifiers. Technically, the proposed method first integrates a method for assigning attributes to training data. The method leverages a pre-trained large language model, such as chatGPT, to generate visual attributes based on task descriptions. By performing K-means clustering on the embeddings obtained from a pre-trained vision-language model called BLIP, It selects representative samples, i.e., centroid images, for each cluster. These samples are then queried with BLIP to obtain the specific attribute values. This process is applied iteratively to assign attribute values to all images in the dataset.
It then designs a method to discover samples that are likely to be misclassified by the model, so-called rare case bugs, and spurious correlation bugs. Rare case bugs aim to identify cases where the model's underperformance can be attributed to insufficient training data for a particular attribute value. The authors evaluate the validation accuracy for each group of data with a specific attribute value and flag attributes that exhibit a 0.1% or 0.2% drop in accuracy as rare cases. Spurious correlation bugs address situations where the model learns strong but incorrect associations among attributes. The authors measure linear correlations between attribute values generated by HiBug and compare them with ground truth data to identify such cases. The paper also proposes a model repair mechanism, which involves selecting unlabeled data with attribute values that have high validation error rates. Additionally, they combine class names with attribute values to create prompts for generating new data. Experimental results on benchmark datasets demonstrate HiBug's effectiveness in discovering rare cases and correlation errors. Furthermore, the paper shows the improvements achieved by the repaired model in terms of performance.
Strengths: + This paper studies an interesting problem, i.e., automatically identifying rare samples and spurious correlations that deep learning models may misclassify.
+ The proposed method leverages pretrained models to reduce human intervention.
Weaknesses: First, the paper could provide a clearer distinction between the contributions and novelties of their work compared to the existing work, Domino [1]. Specifically, the rare case and spurious correlation are also discussed and identified in domino. For example, domino introduces artificial associations between features and uses their method to identify the introduced correlations, which is a similar task to HiBug. The key differences and challenges between HiBug and domino can be more clearly stated to stress the paper’s contributions. The difference in human interpretability between this work and domino can be more clearly stressed. In Section 4.1, the paper mentioned that HiBug can identify wrong correlations between features, and it still requires human experts to make the connections between such incorrect correlations and the model’s underperformance. This step can also be performed on domino as a way to identify the bug in the model. The paper could provide further clarification on why and how HiBug has the ability to discover more interpretable and human-friendly bugs compared to other approaches.
Second, some of the definitions and statements are not clearly discussed in the paper. The motivation for choosing BLIP instead of other models that generate cross-modal representation is not clearly discussed. Although Figure 1 mentions that CLIP's embedding space is entangled and unsuitable for representative image selection, a motivation example illustrating the embedding space of BLIP would further clarify the motivation behind this choice. The computation process of linear correlation between the model’s predictions and data attribute is unclear. For example, in Section 4.1, the paper mentioned that the detection threshold for linear correlation is 0.7 and 0.8 without stating the computation method. In line 237, the paper claims that HiBug can discover the model's erroneous correlation with 72.6% accuracy, but the calculation of this accuracy measure is not clearly explained.
Another minor question: Table 3 reports the validation accuracy that is 0.1% smaller than the normal model validation performance. Is it computed on the mean of all data slices’ validation accuracy?
[1] Domino: Discovering Systematic Errors with Cross-Modal Embeddings, ICLR 2022
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not discuss the potential limitations, such as the generalizability and scalability of the proposed method. Another potential limitation might be its reliance on the performance of the pretrained models. The performance of the proposed method might be affected if the pretrained model cannot give accurate results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out our paper's problems. In the following parts, we address your concerns:
1. **First, the paper could provide a clearer distinction between the contributions and novelties of their work compared to the existing work, Domino [1].**
- Thank you for pointing out this problem. A comprehensive description of our contributions is in the top author rebuttal section. In general, HiBug has the following contributions compared to existing works like Domino: (1) HiBug provides a more interpretable and coherent data slice, (2) HiBug requires far less human effort, (3) HiBug generates a meaningful description of the data slice, and (4) HiBug can identify the type of bugs (e.g., rare case or spurious correlation). We will make it clearer in our revised paper.
2. **Some of the definitions and statements are not clearly discussed in the paper. The motivation for choosing BLIP instead of other models that generate cross-modal representation is not clearly discussed. Although Figure 1 mentions that CLIP's embedding space is entangled and unsuitable for representative image selection, a motivation example illustrating the embedding space of BLIP would further clarify the motivation behind this choice.**
- We would like to clarify that we do not use the BLIP as a substitute for CLIP to generate a better cross-modal representation.
- Previous works, such as Domino, use a pre-trained cross-modal model to extract image representations. Based on these representations, they cluster failures in the feature space and find corresponding text to explain the common features of the cluster. As they rely on the embedding space for clustering and explanation, the quality of the embedding space is important. However, the attributes in the embedding space are often entangled, leading to sub-optimal interpretation.
- In contrast, HiBug takes a completely different approach compared to existing methods (e.g., Domino) and does not rely on the embedding space for failure clustering. Instead, HiBug chooses to first group similar data together and then checks if they correspond to bug slices. In this process, BLIP is used for the VQA task, assigning each image with corresponding attribute values. This way, the slicing step is not affected by the quality of the embedding space.
3. **The computation process of linear correlation between the model’s predictions and data attribute is unclear. For example, in Section 4.1, the paper mentioned that the detection threshold for linear correlation is 0.7 and 0.8 without stating the computation method. In line 237, the paper claims that HiBug can discover the model's erroneous correlation with 72.6% accuracy, but the calculation of this accuracy measure is not clearly explained.**
- Computation process of linear correlation
- The linear correlation for a model's prediction $c_i$ and an attribute value $a_i$ is $P\left(c_i' \mid a_i \right)$, i.e., the conditional probability that the model predicts class $c_i$ given attribute value $a_i$. We also refer to our common response (CQ2) for this question.
- Discover the erroneous correlation
- Each problem $p^i$ in the problem set $P$ has a ground-truth erroneous correlation $c_i$-$a_i$, which denotes a correlation between predicted label $c_i$ and attribute value $a_i$. HiBug discovers this correlation if (1) $a_i$ is an attribute value in HiBug and (2) the linear correlation of $a_i$ and $c_i$ exceeds the pre-defined threshold. We calculate the percentage of problems on which HiBug succeeds:
- $\frac{|\{p^i \in P : \text{HiBug discovers } c_i\text{-}a_i\}|}{|P|}$
- We will add these formulas in the revised version.
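The two quantities just defined — the conditional-probability correlation and the discovery check — could be sketched as follows (a hypothetical simplification; the function names and data layout are our assumptions, not HiBug's code):

```python
import numpy as np

def linear_correlation(preds, attrs, cls, attr_val):
    """P(prediction == cls | attribute == attr_val): the 'linear
    correlation' used to flag a spurious class-attribute association."""
    mask = attrs == attr_val
    if mask.sum() == 0:
        return 0.0
    return float((preds[mask] == cls).mean())

def discovers(preds, attrs, cls, attr_val, attr_vocab, threshold=0.7):
    """The injected correlation c-a counts as discovered iff the
    attribute value is in HiBug's vocabulary and the correlation
    exceeds the pre-defined threshold (0.7 or 0.8 in the paper)."""
    return attr_val in attr_vocab and \
        linear_correlation(preds, attrs, cls, attr_val) >= threshold
```

The reported 72.6% accuracy would then be the fraction of ground-truth problems for which `discovers` returns true.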
4. **Another minor question: Table 3 reports the validation accuracy that is 0.1% smaller than the normal model validation performance. Is it computed on the mean of all data slices’ validation accuracy?**
- Yes
---
Rebuttal Comment 1.1:
Comment: Thank the author for the rebuttal.
1. **Clearer Distinction of Contributions:** The detailed description that the authors provided of HiBug’s contributions in comparison to Domino is helpful. The authors might want to elaborate more on these differences in a future version. For example, in what way does HiBug reduce human participation compared to Domino? A more formal description of the difference between the proposed method and existing methods is also appreciated (as promised by the authors).
2. **Motivation for Choosing BLIP:** The distinction that the authors made in how HiBug operates without relying on the embedding space for failure clustering provides a clearer understanding of the proposed method.
3. **Computation Process of Linear Correlation and Erroneous Correlation:** Thank you. This addresses my concern regarding linear correlation and erroneous correlation.
I will update my score to positive.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the recognition of our work after the rebuttal. In the revised version, we shall thoroughly elaborate on our contributions compared to previous work (especially Domino). | Summary: HiBug seeks to identify (useful) NL descriptions of the "slices" of inputs on which some model (say an image classifier) has higher error rate, and perhaps even explain why the error rate is high (e.g., not enough training data in that space, or some data bias towards an unrelated correlation).
Prior work (saliency maps and Spotlight) identifies latent-space regions where bugs are clustered, but humans need to come up with good NL summaries of those latent regions. AdaVision does iterative summarization, again with human assistance. Other lines of work use NL templates to produce summaries that match the buggy latent regions, but those templates sometimes come out incoherent.
The way HiBug does all this is as follows:
1. it identifies auxiliary attribute types that might be relevant to a task, e.g., "size" and "number of legs" are attribute types relevant to a classifier of images as dogs or not; it uses an LLM, prompted with the task (e.g., "does this image contain a dog?") and asked to identify what might be relevant attribute types (e.g., expecting an answer of "size, number of legs, hairiness"). So this step uses an LLM to guess interesting attribute "keys", relevant to a task at hand.
2. it labels dataset examples with their values for these task-specific auxiliary attribute types (they call this "attribute assignment", which basically means finding the value for each attribute), by using a vision LM (e.g., for image 1, size -> large, number of legs -> 3, hairiness -> yes). It doesn't directly prompt the vision LM (visual question-answering model) to get the answers, although that would be trivial, because of high inference cost. Instead, it uses a cheaper, multi-step approximation:
1. Uses a vision LM (BLIP) to map images to their embeddings, does K-means clustering in embedding space, and selects centroids. The idea is that examples clustered around a centroid will probably share attribute values.
1. For each centroid example, it queries the VQA model to get the actual (symbolic) values for the auxiliary attributes.
1. It uses the values from all those cluster-exemplar VQA answers to build a "vocabulary" of attribute values for each chosen auxiliary attribute.
1. It embeds each attribute value (e.g., "size large" or "number of legs 3"), by itself, using the vision LM. This gives an embedding for every attribute value.
1. Finally, for every dataset example (for which it already has an embedding), and for each auxiliary attribute, it finds the attribute-value embedding closest to the example embedding. That yields a value assignment for each attribute of each example, without actually having to query the VQA model for each example.
3. It does inference on the initial task on each example (is it a dog?) and checks against golden labels (yes/no) to identify correct/incorrect predictions. With the correct/incorrect decision per example, and its auxilliary attribute value assignments, it can now slice the incorrect examples per single attribute value, or combinations of values for different attributes. If such a slice has a significantly higher error rate than the whole dataset, it's marked as a "buggy slice", and its corresponding attributes and values are the "description" for that buggy slice.
4. It draws an "explanation" for the buggy slice: if the training set slice has a small size, then the bugginess arises because the buggy set of attributes is rare; if there is a high linear correlation between the golden labels and the buggy slice of the validation data, then the dataset is biased.
5. Finally, to repair the model, HiBug does data selection, by choosing unlabelled data with buggy-slice attributes, labelling them, and augmenting the training dataset; it also does data generation, by creating an LLM prompt with the buggy-slice attributes, and generating synthetic examples.
Evaluation is done on the dcbench set from the Domino paper, which contains "buggy" model checkpoints, datasets, and the buggy correlation, or rare slice. HiBug is tested on identifying the slice description (either just the attribute "values", or also the attribute "keys"). Beyond identification of slices, HiBug was tested on 3 large tasks (e.g., ImageNet10), on which it identified buggy slices in validation data, which were then confirmed by looking at the performance of test data from those slices. Finally, model repair was evaluated and showed that performance can improve for biased models.
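The embedding-based value-assignment shortcut described in step 2 can be sketched roughly as follows. This is a minimal illustration only: random unit vectors stand in for BLIP embeddings, and the attribute names, value vocabularies, and dimensions are all hypothetical, not taken from the paper.

```python
import math
import random

random.seed(0)
DIM = 16

def unit(v):
    """L2-normalise a vector so dot products equal cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rand_vec():
    return unit([random.gauss(0, 1) for _ in range(DIM)])

# Hypothetical stand-ins for BLIP embeddings: one unit vector per image,
# and one unit vector per candidate attribute value.
image_embs = [rand_vec() for _ in range(5)]

value_vocab = {"size": ["small", "large"], "hairiness": ["yes", "no"]}
value_embs = {a: [rand_vec() for _ in vals] for a, vals in value_vocab.items()}

def cosine(u, v):
    # Both inputs are unit vectors, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(u, v))

def assign_values(img_emb):
    """For each attribute, pick the value whose embedding is closest
    (highest cosine similarity) to the image embedding."""
    out = {}
    for attr, vals in value_vocab.items():
        sims = [cosine(e, img_emb) for e in value_embs[attr]]
        out[attr] = vals[sims.index(max(sims))]
    return out

assignments = [assign_values(e) for e in image_embs]
```

The point of the shortcut is cost: only the cluster exemplars are ever sent to the VQA model, and every other image is labelled by a cheap nearest-neighbour lookup in embedding space.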
Strengths: 1. Except for task description, and data labelling during model repair, HiBug is fairly automatic and requires little human involvement, making the approach potentially impactful.
2. The performance on dcbench seems significant, for such a diverse dataset, compared to prior work (Domino), and while also offering qualitatively better feedback (the identification of the bug, rather than a description of a cluster).
3. Although the approach is fairly complex, it is described in sufficient detail to make reproducibility quite plausible. This is a well-written paper.
4. The approach seems fairly original (and non-trivial), and deviates from the style of prior art (e.g., clustering of failures in latent space). Also, it's quite creative to get around cost challenges.
Weaknesses: 1. Some assertions, especially in the design section, are provided without explanation or justification, making it hard to assess their veracity (see question 2).
2. I find Figure 5 a poor use of space. Showing images with increased diversity could be done any number of ways (e.g., randomly synthesizing prompts to an LLM) and doesn't prove anything. I'd move this text and figure to the appendix or remove.
3. I don't understand how you avoid bias inherent in the building blocks used in HiBug (e.g., BLIP or ChatGPT). Especially for model repair with data generation, bias seems problematic (question 1).
4. I wonder if the early parts of the approach (attribute discovery and value assignment) are a bit over-engineered. Given that a human must write down a description for the task, couldn't the same human also describe some attributes that might be relevant, as well as the domain of values for each? Are the numbers of attributes and values so large that human enumeration is prohibitive? (question 3)
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Is using an LLM to do data generation prone to reinforcing bias? If the dataset in the LLM is already biased, how do you prevent model repair from reinforcing the same bias during data generation? [line 205]
2. How do you know which features are or are not causal to the target? [line 194]
3. How large are $|A|$ and $N$ typically, say for dcbench? Would it be practical for a human to enumerate them manually, rather than using HiBug to discover them from LLMs/VQAs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. Since pre-existing, pre-trained models are used as building blocks to construct the domain in which buggy slices are identified, any bias in those building blocks might transfer to the results of the model. This limitation isn't mentioned.
2. A pre-trained vision LM isn't necessarily trained explicitly to separate any plausible attribute in its latent space. It's not clear where the approach of doing value assignment via the HiBug approach might fail in such cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your recognition of our paper. We address your concerns as follows:
1. **Some assertions, especially in the design section, are provided without explanation or justification, making it hard to assess their veracity.**
- We refer to the common response CQ1 to this question.
2. **Find Figure 5 a poor use of space. Showing images with increased diversity could be done any number of ways (e.g., randomly synthesizing prompts to an LLM) and doesn't prove anything. I'd move this text and figure to the appendix or remove.**
- Thanks for pointing out this, we will move it to the appendix in the revised version.
3. **I don't understand how you avoid bias inherent in the building blocks used in HiBug (e.g., BLIP or ChatGPT). Especially for model repair with data generation, bias seems problematic. Is using an LLM to do data generation prone to reinforcing bias? If the dataset in the LLM is already biased, how do you prevent model repair from reinforcing the same bias during data generation?**
- We agree that while these large models have demonstrated strong ability, they can introduce bias. Please note that we rely on these models for attribute name and attribute value proposal and assignment. Even if they have some bias (e.g., they may neglect some kinds of attributes), we can still discover a lot of interpretable bugs (as evidenced by the experimental results). Data generation can then help to repair the discovered bugs. However, due to the bias, some attributes remain uncovered and these bugs may not be found. Corresponding data cannot be generated for repair. To address this problem, we provide a user interface that visualizes the entire process. Users can make slight interventions to modify attributes and attribute values if necessary.
4. **How do you know which features are or are not causal to the target?**
- We compute the linear correlation between the attribute value and the prediction label. Please refer to the common response (CQ2) for this question.
5. **I wonder if the early parts of the approach (attribute discovery and value assignment) are a bit over-engineered. How large are |A| and N typically, say for dcbench? Would it be practical for a human to enumerate them manually, rather than using HiBug to discover them from LLMs/VQAs?**
- In our experiments on dcbench, |A| is less than 10, and N is less than 50. The values in the experiment are relatively small and can be proposed by humans. However, this is not scalable. With HiBug, we can complete this task quickly and scale to more complex tasks.
- Specifically, HiBug can facilitate both the attribute discovery and value assignment processes. For attribute discovery, HiBug automatically proposes many attribute names and values. This is much faster than humans thinking of these words on their own, especially for complex tasks that require fine-grained descriptions of the images. These proposed names and values can also serve as inspiration for humans to choose from, thereby accelerating the attribute proposal step. For value assignment, HiBug automatically assigns attribute values to each image. Manually assigning attributes for each image is impractical, especially for modern tasks that often involve a vast amount of data.
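As a toy illustration of the linear-correlation check mentioned in point 4 above (a sketch under our own assumptions, not the paper's exact implementation): encode the attribute value as a binary indicator per sample and compute its Pearson correlation with the prediction label; a magnitude near 1 flags a possible spurious dependence.

```python
import math

# Toy data (made up for illustration): does each sample carry a given
# attribute value (e.g., "male"), and what label did the model predict?
has_attr = [1, 1, 1, 0, 0, 0, 1, 0]
pred     = [0, 0, 0, 1, 1, 1, 0, 1]

def pearson(xs, ys):
    """Plain Pearson (linear) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(has_attr, pred)  # ≈ -1.0 here: attribute and label move in lockstep
```

Note that this only needs hard predictions, not prediction confidences, which is the applicability argument made in the rebuttal for preferring a linear measure over rank-based ones like Spearman's rho.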
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed rebuttal.
Response to #3: I didn't quite see how using the potentially biased models for value assignment isn't a problem. If, for example, a vision model decides that all people with long hair are female, and assigns the "female" value to the "gender" attribute for a bunch of images while trying to debug problems in the "is this a politician?" task, won't that lead to erroneous debugging results?
I don't think that question is satisfactory, but it's still a hard problem with a good solution that improves automation, so I don't consider this a problem for this submission.
Response to #5: I fully believe you that value assignment, done in an automated fashion, makes a lot of sense, and your engineering of that component seems sensible. I'm questioning whether using an LLM to discover the space of attributes and potential values is required, however. Perhaps an example in which the choice of attributes and their value domain aren't obvious and are error-prone would help in the next iteration of the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for taking valuable time to respond to our rebuttal, and we try to address these issues as follows.
1. Response to #3: I didn't quite see how using the potentially biased models for value assignment isn't a problem. If, for example, a vision model decides that all people with long hair are female, and assigns the "female" value to the "gender" attribute for a bunch of images while trying to debug problems in the "is this a politician?" task, won't that lead to erroneous debugging results? I don't think that question is satisfactory, but it's still a hard problem with a good solution that improves automation, so I don't consider this a problem for this submission.
- We agree with the reviewer that a biased pre-trained model could lead to debug failures with HiBug. For this specific "is this a politician?" task, suppose the low-performing slice for this task is [“male”, ”long hair”], and the biased model assigns the female value to the "gender" attribute of all people with long hair, then HiBug would fail.
- At the same time, we would like to point out that pre-trained large models show exceptional zero-shot capabilities and continue to improve over the years. They will still have some bias, but as long as the bias is not so extreme (e.g., in this example, as long as some long-haired males are still correctly classified as male), HiBug would still be able to identify the low-performing slice.
- Moreover, we have equipped HiBug with a user interface for attribute input and examination. For this particular example, users could choose to display samples with the "long hair" tag and possibly find the error.
2. Response to #5: I fully believe you that value assignment, done in an automated fashion, makes a lot of sense, and your engineering of that component seems sensible. I'm questioning whether using an LLM to discover the space of attributes and potential values is required, however. Perhaps an example in which the choice of attributes and their value domain aren't obvious and are error-prone would help in the next iteration of the paper.
We thank the reviewer for the recognition of our work. In our humble opinion, using an LLM to discover the space of attributes and potential values is essential, because it is much easier for humans to choose from attributes and values proposed by LLMs than to think of all possible attributes and values on their own. Consider the rare case discovery experiment in Sec. 4.1. In this experiment, rare cases are often a subclass of a labeled class; for example, carriage is a subclass of the vehicle class. If we use "subclass" as an attribute name, then to discover potential values we need to list a wide range of vehicle subclasses, and humans are likely to miss items without external help. At the same time, we acknowledge that LLMs are not omnipotent, and we suggest using HiBug as a co-pilot in debugging, together with the intellectual minds of humans.
We shall add more discussions on the above issue in the revised version. | Summary: The authors introduce HiBug, an approach to identify bugs in trained models in an interpretable fashion. At its core, HiBug annotates inputs (images) with a set of attributes and values obtained using an LLM followed by a VQA step, and then identifies those combinations of attributes corresponding to subsets of test examples that underperform. "Bugs" uncovered by this procedure are then fixed by requesting the label of either unlabeled instances or of synthetic instances, prioritized based on the distribution of buggy data subsets identified in the previous step. HiBug is evaluated on a subset of the dcbench dataset (consisting of ~1000 ML problems and model checkpoints).
**Post-rebuttal update**: increased score by one point.
Strengths: + Tackles an important problem - semi-automated debugging of ML models.
+ Potentially very useful.
+ Text is easy to read.
+ Related work is well done.
+ Proposed approach is quite heuristic in nature, but overall sensible.
+ Proposed approach is quite straightforward, which is a plus.
+ Could serve as a basis for less straightforward techniques.
+ Experiments are quite extensive.
Weaknesses: - Proposed approach is quite straightforward, which is also a minus, and specifically...
- ... I am on the fence regarding novelty. While HiBug might be novel on paper, the real question is: how different is it from what developers debugging ML models already do on a day-to-day basis?
- Implicitly assumes LLM and VQA stages produce high-quality outputs. This limitation is briefly mentioned in the conclusion, though.
- No direct comparison with competitors - as there are none. Not a major issue.
All in all, HiBug is not my cup of tea, but if it works and it is useful, I cannot really complain.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. "Recent research indicates that large language models, like ChatGPT, possess extensive general knowledge and are highly effective at answering questions" (p 4) Could you please reference literature in support of this statement?
Q2. Spurious correlations: wouldn't it make more sense to compute correlations using a non-linear measure, like Spearman's rho? As I already mentioned, I am generally positive about the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Briefly discussed in the conclusion. The discussion is fair.
In addition to this, the authors could briefly comment on what they expect to happen when HiBug malfunctions - i.e., it reports non-existent bugs or neglects actual bugs - and how one should go about handling this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your recognition of our paper. We address your concerns as follows:
1. **While HiBug might be novel on paper, how different is it from what developers debugging ML models already do on a day-to-day basis?**
- Before designing HiBug, we investigated the common debugging flow in the industry and discussed it with a few developers. Generally, ML developers manually go through all failure cases and attempt to summarize failure patterns themselves. To reduce the manual effort, existing works such as Domino [Eyuboglu et al., 2022] and Spotlight [d’Eon et al., 2022] try to cluster existing failures in the feature space and summarize common patterns. However, this flow—summarizing common patterns based on discovered failures—is problematic. For example, when the feature space contains entangled features or the discovered failures are sparsely distributed in the feature space, it becomes difficult to identify the true bug-related attributes.
Unlike existing efforts, HiBug follows a different debugging flow. We first group the data according to their attributes and then judge if there are underperforming ones. This way, underperforming slices must share a similar concept, leading to more accurate explanations. For a more detailed illustration, please refer to our clarification of novelty in the top section of the rebuttal.
2. **"Recent research indicates that large language models, like ChatGPT, possess extensive general knowledge and are highly effective at answering questions" (p 4) Could you please reference literature in support of this statement?**
- We shall add corresponding references in our revised paper. Some representative ones are:
- [1] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., ... & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. *Advances in Neural Information Processing Systems*, *35*, 24824-24837.
- [2] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M. A., Lacroix, T., ... & Lample, G. (2023). Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*.
3. **Spurious correlations: wouldn't it make more sense to compute correlations using a non-linear measure, like Spearman's rho?**
- Please refer to the common response (CQ2) for this question. In general, non-linear correlations, such as Spearman correlation, might be inapplicable in some scenarios due to the model’s prediction confidence being inaccessible and untrustworthy. Currently, we use simple linear correlations to demonstrate the effectiveness of the HiBug. We acknowledge that other correlation coefficients may yield better results, and we intend to explore them in future endeavors.
4. **What do they expect to happen when HiBug malfunctions - i.e., it reports non-existent bugs or neglects actual bugs - and how one should go about handling this?**
- First, HiBug can hardly detect non-existent bugs. When identifying interpretable bugs, HiBug groups images with similar attributes into slices for performance evaluation. It's important to note that if the performance of a particular slice is lower than the average accuracy, we consider it a bug and take the corresponding attributes as the failure pattern. For HiBug to flag non-existent bugs, two requirements must be met: the slice must be underperforming, and the concept used for slicing must incorrectly characterize the data in the slice. These two requirements are unlikely to occur simultaneously. This is because, even if some data in the slices are incorrectly assigned attributes, it will not affect the bug discovery process (non-existent bugs will not be flagged). In the worst case, if most data in the slice is wrongly assigned the same attribute, they are likely to behave differently and are unlikely to cause a high level of misclassifications (resulting in an underperforming slice).
- Meanwhile, even if HiBug reports non-existent bugs, users can quickly identify the error. Firstly, bugs reported by HiBug are interpretable. Second, all bugs found by HiBug are presented in the user interface. Users can easily examine the data tagged with bugs to determine if the bug truly exists.
- The contribution of this paper is to automatically identify interpretable bugs, which could provide further repair suggestions for this model. We acknowledge that HiBug may overlook some interpretable bugs, especially when fine-grained attributes cannot be suggested by the attribute name and value proposal steps. However, with the development of vision-language models, we believe that the identification of more fine-grained attributes holds promise for discovering more bugs.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for your detailed reply, and apologies for not engaging in the discussion so far.
1. I see, this is sensible. I will increase my score based on this.
2. Thank you.
3. Great, thank you.
4. Thank you for clarifying this point. I still think it would make sense to briefly discuss possible malfunctions of HiBug in the paper. | Summary: In this paper the authors describe a system that introduces interpretability into ML model debugging. They use an LLM like ChatGPT to reveal interpretable features from data on which ML models don't perform well. With these features they find poorly performing data slices and provide identifiable attribute-based or NL-based descriptions of visual features for these poorly performing data slices. This, according to the authors, helps identify spurious correlations as well as rare (inadequate) datasets in training.
Strengths: The biggest contribution of the paper is in attempting to bring transparency and debugging to cases where ML models have poor performance. They rely on extracting visual attributes and their values from slices of data that show poor performance, and then map these to spurious correlations or to bias through the rareness of certain attributes in the training set.
Weaknesses: While the authors present a framework, the biggest challenge is identifying the attributes that are relevant to a certain task in the dataset. This is a manual process and can be challenging. The authors give one example of how it is done, but that's not convincing enough that it can be easily done.
The second challenge is that they use another model to extract the attribute values of the identified attributes. It is not clear how effective this model is in identifying the values correctly.
For the step of selecting representative images for BLIP, they use k-means clustering. It is not clear how good these clusters are and what coverage and accuracy one gets. This seems like an important step in scaling the image selection.
In the interpretable bug discovery step, every attribute and every combination of attributes has to be tried to group images, and this step can be cumbersome and combinatorially explosive.
In the bug classification step it is not clear how effective the linear correlation step is. No measurement is described here.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: It will be great if the authors can address the questions in the weakness section.
1. How scalable is the attribute identification step across different tasks?
2. It would be good to address the goodness of the model that extracts the attribute values.
3. The representative selection step uses K-means clustering. How did you evaluate its goodness?
4. Please describe the complexity of the bug discovery step, especially for combinations of attributes.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It is not clear if the algorithm will help reduce the bias in the training data by this technique. While the authors do address the problem of possible rarity of certain attributes in the training datasets and spurious correlations to other attributes, it is not clear if their proposed method solves that.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out our paper's problems. We sincerely hope you can read our clarification of HiBug’s novelty in the author rebuttal section above. In the following parts, we address your concerns:
1. It will be great if the authors can address the questions in the weakness section.
1. **Goodness of ChatGPT, BLIP, K-Means.**
- We refer to the response to common question 1 for this question. Generally speaking, the main contribution of HiBug is proposing a novel and effective workflow for model debugging. We choose ChatGPT, BLIP and K-Means mainly because they are mature and common solutions for corresponding tasks. Our experiment evaluating the whole workflow has demonstrated the effectiveness of every component in HiBug. HiBug is highly flexible and modular. Therefore, we can update any component in HiBug with a better counterpart.
- We also provide BLIP's zero-shot classification accuracy on some attributes of CelebA for which ground-truth values are available. Gender classification: 99%; age classification: 89%. For more evaluations, we refer to the original BLIP paper [3].
2. **How scalable is the attribute identification step based on tasks?**
- The attribute identification step of HiBug is powered by ChatGPT, which owns a great knowledge base of reality. However, we acknowledge that ChatGPT can have some limitations in identifying attributes for uncommon tasks.
- To overcome this problem, we provide an easy-to-use user interface, which is shown in supplementary material, to help users go through failure cases of models and propose attributes by themselves. While manual attribute proposal still requires human efforts. We emphasize that when debugging a model, these efforts can be overlooked in comparison to the time saved by HiBug.
- Meanwhile, HiBug is a highly modular and flexible framework, the backbone GPT can be easily updated by any new language model or task-related knowledge graph.
3. **Please describe the complexity of the bug discovery step esp for combination of attributes.**
- Our rare case and correlation bug discovery is limited to one attribute value. For the bug discovery of attribute combinations, we only focus on finding low-performance combinations. As described in the ablation study, its computational complexity of it is O(D), where D is the number of data in the dataset. Specifically, in the bug discovery process, every data has been assigned to an attribute combination. Our bug discovery is as follows:
- We create a dictionary storing, for every attribute combination, the number of correctly predicted data points and the total number of data points.
- We go through all the data in the dataset, accumulating each data point's correctness into the dictionary.
- We calculate the accuracy for each attribute combination in the dictionary and output those with low performance.
Although the number of attribute combinations grows exponentially with the number of attributes, it is upper-bounded by the number of data in the dataset. Therefore, the time cost of this process can be viewed as 2D, which is O(D).
2. **In the bug classification step, it is not clear how effective is the linear correlation step. No measurement is described here.**
- We refer to our response to common question 2 for this question. In general, (1) we choose linear correlation because of its simplicity: it is straightforward and easy to understand; and (2) non-linear correlations, such as Spearman correlation, might not be applicable in some scenarios, since the model's prediction confidence can be inaccessible or untrustworthy. We will implement other correlation coefficients in HiBug as optional choices.
3. **While the authors do address the problem of possible rarity of certain attributes in the training datasets and spurious correlations to other attributes it is not clear if their proposed method solves that.**
- This is an insightful question. In the data selection and data generation process, HiBug finds attribute combinations that have low performance and selects or generates data with those attribute values for model training. This process can inherently fix the bugs found by HiBug. For example, suppose we have attributes ["gender", "age"] and labels {0, 1}.
- If "male" is a low-performing rare case, attribute combinations ["male", x] will naturally become low-performing slices, where x can be {"young", "middle-age", "old"}. Then, in the data selection process, new data of "male" will be selected, and the rare-case problem can be alleviated.
- If "male" is correlated with label 0, then in the data generation process, where the label can be used as an attribute, ["male", x, 1] will become low-performing slices and corresponding data will be generated. The model's dependence on "male" is thereby reduced.
- We also present the bug discovery results before and after fixing in the paper's appendix; the bugs disappear after being fixed via HiBug's data selection. Moreover, our experiments show that HiBug achieves better model performance through data selection and data generation compared with the baselines.
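The O(D) bug-discovery pass described in point 1.3 above can be sketched as follows; the records and attribute names are made up for illustration, and the slice-flagging threshold (below overall accuracy) follows the paper's description.

```python
from collections import defaultdict

# Each record: (attribute-combination key, was the prediction correct?)
data = [
    (("male", "young"), True),   (("male", "young"), False),
    (("male", "old"),   False),  (("male", "old"),   False),
    (("female", "young"), True), (("female", "young"), True),
]

# One pass over the dataset: tally correct / total per attribute combination.
counts = defaultdict(lambda: [0, 0])  # combo -> [num_correct, num_total]
for combo, correct in data:
    counts[combo][0] += int(correct)
    counts[combo][1] += 1

overall_acc = sum(c for c, _ in counts.values()) / len(data)

# Second pass over the dictionary: flag slices below the overall accuracy.
buggy = {combo: c / t for combo, (c, t) in counts.items() if c / t < overall_acc}
```

As the rebuttal notes, the number of distinct attribute combinations is bounded by the number of data points, so both passes together stay linear in D.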
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer GQqq,
Thanks again for your insightful questions. We hope we have addressed your concerns in our rebuttal. Specifically, we address your concerns regarding the goodness of the various components in HiBug in the top rebuttal section. We have also provided detailed answers to the scalability, time complexity, and effectiveness of HiBug's workflow.
Considering the deadline for the discussion period is approaching, we would like to know whether you are satisfied with our rebuttal. We appreciate your time and effort very much and we would be happy to provide further clarifications, if any.
\- Submission14450 Authors | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive comments and recognition of our work's strength.
Before addressing common questions, we clarify the novelty of our paper as follows:
### Novelty
- The problem HiBug focuses on is critical yet under-explored. Before designing HiBug, we investigated the common debug flow in the industry and discussed it with a few developers. The typical flow is to go through all failure cases and try to summarize failure patterns by ML developers with ad-hoc solutions. After that, they collect new data related to failure patterns and re-train the model. This process requires intensive human efforts. Our original idea of HiBug is to build an AI assistant, like a co-pilot, to facilitate this process.
- The main contribution of HiBug is proposing a useful and novel workflow that automates the process of data/model debugging and provides interpretable feedback. Previous attempts in this direction are either not interpretable or require intensive human effort, as discussed in the related work section. HiBug’s novel workflow can overcome these problems.
Thanks to Reviewer Zx32’s constructive suggestion, we elaborate on the difference between HiBug and Domino, a representative work among previous solutions. In short, Domino clusters failure cases of a model in the CLIP embedding space and finds a description for each cluster from a pre-designed large corpus, while HiBug finds attributes and attribute values for data, and uses attribute values to cluster data and discover bugs. Specifically,
1. HiBug identifies low-performing data slices in a more coherent way. Firstly, Domino might produce semantically incoherent data slices. For example, in Figure 1 of our paper, data with similar attributes are not close in the embedding space. This phenomenon is also observed by [1]. HiBug addresses this issue by directly clustering data with attribute values, inherently ensuring the coherence of data in a cluster.
2. HiBug requires far less human effort. Although Domino can automatically find a description for a data slice, it is worth noting that the description comes from a human-designed corpus. In the appendix of Domino's paper, the authors describe how to build the task-related corpus, which involves human proposals and programmatic collection from Wikipedia. In contrast, HiBug does not require a corpus and identifies attributes directly from the dataset. This is powered by recent large language models and requires only a description of the task.
3. HiBug generates a meaningful description of the data slice. Since Domino generates descriptions by combining words in the corpus, it often produces a sentence that contains irrelevant or contradicting words, such as “a photo of setup by banana” and “a photo of skiing at sandal”. HiBug avoids this issue in two ways. First, data can only take one value per attribute. For example, it can only take one value from the set {”male”, “female”}. Therefore, contradicting words cannot appear in HiBug. Second, attribute values are collected from the dataset by the LLM and vision-language models, so only words relevant to the data can appear.
4. HiBug can identify bugs more effectively. A description of a data slice is different from a description of a bug. Domino only finds and describes the low-performance data slices. It does not explicitly tell what kind of bug the data slice is related to. Therefore, human experts need to go through all data in the slice to find if it is a rare case or a spurious correlation. In contrast, HiBug can explicitly show the bugs.
- **This paper focuses on illustrating the novel debug workflow of HiBug. Therefore, we do not provide a detailed evaluation for each component in HiBug.** We select methods such as ChatGPT and BLIP mainly because they are easy to use and provide good solutions. Our experiment evaluating the whole workflow has demonstrated the effectiveness of every component in HiBug. We would also like to point out that HiBug is highly flexible and modular. Therefore, designers can update any component in HiBug with a better counterpart, if needed.
### Common Questions
1. **Lack of explanation or evaluation of components in HiBug, such as choice of LLM, vision-language model, and clustering algorithm.**
- Please refer to the last paragraph in the novelty clarification for this section.
- Since Reviewer GZbC and two Ethics Reviewers also mention the performance evaluation of BLIP, we provide BLIP's zero-shot classification accuracy on CelebA attributes for which ground-truth values are available. Gender classification: 99%; age classification: 89%. For more evaluations, we refer to the original BLIP paper.
2. **Why use linear correlation? / How linear correlation is computed? / Why not use other correlation coefficients?**
- Linear correlation is the same as the conditional probability in our paper. It computes, for data with attribute value $a_i$, the probability that the model predicts label $c_i$. The formula is simply:
- $P\left(c_i \mid a_i \right)$
- We choose linear correlation because of its simplicity, and the results show its effectiveness.
- Non-linear correlation, such as Spearman correlation, might be inapplicable in some scenarios. For a multi-class classification task, one way to apply non-linear correlation is to compute the correlation between the confidence of a label in the model's prediction and the appearance of an attribute value. However, (1) the model's prediction confidence can be inaccessible, and (2) previous studies have shown that the model's over-confident nature makes the confidence value untrustworthy. In contrast, linear correlation only requires the model's prediction results and is always credible.
- Nevertheless, we plan to implement other correlation coefficients in HiBug as optional choices.
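For concreteness, the conditional probability above can be estimated directly from a prediction log; the following is a minimal sketch (all data and names hypothetical, not HiBug's actual implementation):

```python
def conditional_prob(preds, attrs, label, attr_value):
    """Estimate P(model predicts `label` | data has attribute `attr_value`)."""
    matched = [p for p, a in zip(preds, attrs) if a == attr_value]
    if not matched:
        return 0.0
    return sum(p == label for p in matched) / len(matched)

# Toy prediction log (hypothetical): one prediction and one attribute value per sample.
preds = ["cat", "dog", "cat", "cat", "dog", "cat"]
attrs = ["indoor", "indoor", "outdoor", "indoor", "outdoor", "outdoor"]

p = conditional_prob(preds, attrs, "cat", "outdoor")  # 2 of the 3 "outdoor" samples
```

A low value of `p` for some attribute value would flag that slice for inspection.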
[1] Adaptive Testing of Computer Vision Models, arXiv. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback | Accept (poster) | Summary: The work aims to solve the censored diffusion sampling problem, which prevents a diffusion model from generating malign/bad images. The core approach is to train a classifier and apply classifier-guided diffusion generation.
Strengths: 1. Authors present an interesting finding that the classifier-guided diffusion can effectively reduce the probability of generating malign images.
2. Authors present two approaches to learn the classifier: an ensemble-based approach for when the number of malign images is smaller than that of benign images, and imitation learning for when malign images dominate.
3. The proposed methods are lightweight and can be effective with small labeled datasets.
Weaknesses: The core technique employed in the work is not new. And compared with existing works that are based on text2image diffusion models, the authors use relatively simple diffusion models that are either class-conditional or trained on a single-category dataset. I am not sure whether the effectiveness and efficiency of the proposed method can scale to a large diffusion model that handles more complicated settings. The classifiers in the three experiments are relatively simple, which may explain why the method is so effective; this may not hold in large text2image settings.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: It would be more convincing if the authors provided more experiments with more complex diffusion models, such as Stable Diffusion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for constructive comments. As suggested by the reviewer, we added an experiment using the **Stable Diffusion** model. Please refer to Section 2 of the common rebuttal and the attached pdf document for details.
We plan to add this new experiment, with some polishing, to the later version of the paper.
For now, we hope that the reviewer's concern on the applicability of our framework to larger/complex models is addressed, and if that is the case, we kindly ask the reviewer to consider increasing the rating.
In terms of the technical novelty, we acknowledge that the building blocks of our method are not new, but we ask the reviewer to consider that the main contribution of our paper is to **provide a simple, general, and easily adaptable framework that exhibits extreme sample/feedback efficiency.**
A priori, it was not at all obvious that this level of sample efficiency is possible.
Considering that both diffusion models and RLHF are currently actively investigated topics within the literature, we do believe that these findings will be useful to practitioners.
---
Rebuttal Comment 1.1:
Comment: Thanks for updating. The new experiments look promising. I raise my score. | Summary: The authors examine the problem of preventing the generation of certain types of images generated by a diffusion model. To achieve this, the authors propose using a reward model trained on human feedback. The authors demonstrate their approach from examples that require minimal human feedback to achieve sufficient censoring performance.
Strengths: - The authors make a good argument in that 3 minutes of human feedback can be more cost-efficient than retraining models.
- Their discussion and experiments in benign-dominant and malign-dominant scenarios are thorough.
- Their ablation studies in the experiments give convincing arguments that human feedback works well.
Weaknesses: - A discussion of when to use time-dependent or time-independent would have been nice.
- In the LSUN bedroom experiment, it took 15 minutes, so it doesn't always take 3 minutes.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Line 52: there are two consecutive "the".
- Figure 3(b) text: "Comparsion" to "Comparison"
- How far can this method go? I'm sure there are practitioners that want 0% malign images.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - The authors do not explicitly state their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate the constructive comments and the positive evaluation from the reviewer. We made our best efforts to reflect the comments to improve the paper. In the following, we address each of the reviewer's concerns in detail.
### 1. On time-dependent vs. time-independent guidance
Guidance using a time-dependent reward $r(\cdot, t)$ is conceptually simpler (the same as the usual classifier guidance) but requires a time-dependent architecture, and the reward model must be trained on images with different levels of signal-to-noise ratio.
For these reasons, one usually has to train the model from scratch, which is more suitable for simpler/smaller tasks (Sections 5.1 and 5.3).
On the other hand, time-independent reward models require additional techniques such as universal guidance, but the reward model has to be trained only on clean images, so one can exploit various pre-trained classifiers and expedite the reward training through transfer learning.
This makes (time-independent) transfer learning more suitable for complex/large-scale tasks (Sections 5.2, 5.4 and newly added Stable Diffusion experiment).
We found in our preliminary experiments that for these tasks, it was not easy to train time-dependent reward models for successful censoring, while transfer learning converged faster and provided promising results.
We will add discussion on this point in a later version of the paper.
### 2. Regarding the human time for the LSUN bedroom task
Please refer to the discussion on this point in Section 3 of the common response.
We further emphasize that our techniques are applicable to solving a task on the **Stable Diffusion** model (also shown in the common response), and this actually **requires only 3~4 minutes of human feedback, despite the task's complexity.**
### 3. On how far the iterative application of our method could reach
We expect that iterative application of our methods (Algorithm 2 in particular) with progressively more feedback data **will achieve 0% malign images in the limit.**
Algorithm 2 iterates the process of "collecting additional human feedback data $\rightarrow$ reward training $\rightarrow$ (censored) sampling using the reward", and the 3 rounds, without much feedback data, can already reduce the initial malign proportion dramatically (from 68.6% to 2.2% and 1.0%, without and with universal guidance components, in Section 5.3).
During the rebuttal period, we tried **another round (Round 4) of imitation learning and observed another significant reduction of malign proportion to 1.38% and 0.58%, without and with universal guidance.**
Fitting the tendency, it seems that further rounds will steadily reduce the probability of malign generation and eventually converge to zero (Figure 3a in the attached pdf), but this will come at the cost of requiring more compute for data generation and more human feedback due to the high precision of censoring.
Nevertheless, we do see hope in achieving an extreme level of precision at the expense of additional human feedback and computation.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. You have definitely cleared up my questions. | Summary: This paper presents censored diffusion model training by using a reward model trained using human feedback. Towards this, the paper utilizes reward model ensembles (for benign dominant settings) and tools from imitation learning (for malign dominant settings).
Strengths: The human feedback part is pretty interesting and is a topic of increasing interest. The empirical results appear to be pretty compelling, though, I am not an expert with respect to evaluating this in particular, so I cannot comment on this with high confidence.
==> post rebuttal: I increased the score from a 5 to a 6.
Weaknesses: The techniques utilized by the paper are pretty well known and in that sense, this paper's contributions appear incremental.
One other question that appears to not be addressed is - by performing this paper's procedure iteratively (as multiple rounds), would the probability of producing malign images be brought down to zero? What are the other ways of achieving this?
Another weakness of the paper is that it is not clear to me if the proposed approach has been compared against other concept removal methods in the literature (which I agree I am not an expert on), but, this is something the authors must either present clarifications on, or, provide more comparisons on.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a. I wonder if the malign-dominant settings can be handled by flipping the reward to be something resembling $1-\prod_{k=1}^K (1-r_{\psi_k}^{(k)})$?
b. does it help to train every single reward model with some form of hard negative mining? I mean this in the context of algorithm 1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I believe the authors must present a discussion surrounding this in the context of their work - what it seeks to address, what remains to be addressed, and how much to trust the proposed algorithm as to whether it truly achieves what it sets out to. I am not certain about going through an ethics review, and I hope the AC and the authors try to see what is necessary in this direction as it clearly seems like there is potential for fleshing some of these concerns out.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive comments and the positive evaluation from the reviewer.
In the following, we make our best efforts to address each of the concerns and questions.
For the concern regarding the comparison against other methods, please refer to Section 4 within our common response.
### 1. Regarding the contribution of the paper
In terms of the technical novelty, we acknowledge that the building blocks of our method are not new, but we ask the reviewer to consider that the main contribution of our paper is to **provide a simple, general, and easily adaptable framework that exhibits extreme sample/feedback efficiency.** A priori, it was not at all obvious that this level of sample efficiency is possible. Considering that both diffusion models and RLHF are currently actively investigated topics within the literature, we do believe that these findings will be useful to practitioners.
### 2. On how far the iterative application of our method could reach
We expect that iterative application of our methods (Algorithm 2 in particular) with progressively more feedback data **will achieve 0% malign images in the limit.**
Algorithm 2 iterates the process of "collecting additional human feedback data $\rightarrow$ reward training $\rightarrow$ (censored) sampling using the reward", and the 3 rounds, without much feedback data, can already reduce the initial malign proportion dramatically (from 68.6% to 2.2% and 1.0%, without and with universal guidance components, in Section 5.3).
During the rebuttal period, we tried **another round (Round 4) of imitation learning and observed another significant reduction of malign proportion to 1.38% and 0.58%, without and with universal guidance.**
Fitting the tendency, it seems that further rounds will steadily reduce the probability of malign generation and eventually converge to zero (Figure 3a in the attached pdf), but this will come at the cost of requiring more compute for data generation and more human feedback due to the high precision of censoring.
Nevertheless, we do see hope in achieving an extreme level of precision at the expense of additional human feedback and computation.
### 3. On the question of handling malign-dominant tasks
The reviewer's question is whether the reward ensembling (Algorithm 1) can be used for malign-dominant tasks by switching the roles of malign and benign images (that is, fixing a set $\mathcal{B}$ of benign images and then ensembling $K$ reward models trained on $\mathcal{B}$ together with differently subsampled malign images).
In principle, one can train reward models $r_{\psi_k}^{(k)}$ in this way and use the ensembled product $r_\psi = \prod_{k=1}^K r_{\psi_k}^{(k)}$ as usual; there is no need to flip the reward because we still expect the product to encourage unanimous approval.
However, we observe that this is not as effective as imitation learning when the same number of total human feedback data are used; Round 3 of imitation learning that uses total 60 human feedbacks achieves **2.2%** censoring precision without universal guidance, while ensembling 5 reward models trained using 10 fixed benign samples and different sets of 10 malign samples (which also uses 60 feedbacks in total) achieves **6.6%** (please also refer to the discussion related to this point in lines 135--138 on page 6 of the paper).
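The "unanimous approval" behavior of the ensembled product described above can be illustrated with a toy sketch (reward values hypothetical):

```python
import math

def ensemble_reward(rewards):
    """Product of per-model rewards: high only when every model approves."""
    return math.prod(rewards)

# All K = 5 hypothetical reward models approve -> product stays high.
approved = ensemble_reward([0.9, 0.95, 0.9, 0.92, 0.88])
assert approved > 0.6
# A single low reward acts as a veto -> product collapses toward zero.
vetoed = ensemble_reward([0.9, 0.95, 0.05, 0.92, 0.88])
assert vetoed < 0.05
```

This is why the product encourages unanimous approval regardless of whether the constituent models were trained on benign- or malign-subsampled data.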
### 4. Regarding the use of hard negative mining
As far as we understand, hard negative mining in this context would mean training the reward model on its false positives, i.e., the malign images generated even with the censoring technique applied using that reward model. We believe that this process is essentially our Algorithm 2 (imitation learning), and we have already **verified through the experiments in Section 5.3 on ImageNet Tench images that imitation learning is indeed effective, providing improved censoring precision compared to non-imitation learning using the same number of total samples (not using hard negative mining).** Therefore, we expect that refining each individual reward model using Algorithm 2 and then applying Algorithm 1 (ensembling) will certainly result in better censoring performance. We, however, do not pursue this direction in depth to focus on delivering the core ideas.
---
Rebuttal Comment 1.1:
Title: Re. author response
Comment: Thanks to the authors for their clarifications. My only remaining concern is whether it is possible to make a (formal) claim about a certified ability of the learnt model to prevent generating misaligned images. It would be worthwhile hearing the authors' perspective on this, or including a discussion surrounding this in the paper. That said, I will increase my score, and thank the authors for their clarifications.
---
Reply to Comment 1.1.1:
Comment: We highly appreciate the reviewer's positive evaluation on our results and responses.
We assume that there exists a ground-truth function $r(x)$ that defines the likelihood of an image $x$ being benign. In principle, with infinite data and an infinitely expressive neural network, we should be able to learn $r(x)$. However, the data we have access to is highly limited in practice, so our goal is to achieve high censoring precision rather than to have any formally certified guarantees.
However, the idea of using the set of techniques from formal verification certifying certain input-output relationships of neural networks [1] seems intriguing.
We are not aware of analogous verification results regarding generative models, but this is certainly a very interesting direction of future work.
[1] Liu et al., Algorithms for Verifying Deep Neural Networks. Foundations and Trends in Optimization, 2021. | Summary: This paper studies the problem of preventing the generation of unwanted images in diffusion models. It formulates the task of 'censoring' and proposes using reward model trained from human labelling to guide the diffusion model. The method requires no fine-tuning and a few minutes of human feedback, while displaying significant reduction of unwanted images in four experiment setups.
Strengths: - This work identifies and formulates an important problem of misalignment in diffusion probabilistic models, which has received little study in the past. Solutions to the problem can significantly improve the utility of generative models and lead to positive social impact.
- RLHF is an emerging technology which is theoretically sound and empirically useful. The proposed method successfully brings the idea of RLHF into diffusion models while combining the idea of classifier guidance. The method itself is clean and simple to train, and requires few modifications to the sampling process.
- From the experiments, the method also seems quite efficient: (i) it is sample-efficient, requiring at most 100 malign samples; (ii) it does not need fine-tuning; (iii) it works under limited human feedback, taking only a few minutes of human labeling. These properties are desirable in practical implementation.
Weaknesses: - Under the general framework of training reward models and sampling with guidance, the authors use four different methods to train the reward model for the four different tasks in the paper. The methods are quite heuristic. It appears that for any new concept and dataset, it still requires a manual selection of the training algorithms for the reward models to achieve the best effect, which can be costly. However, the paper does not provide a conclusive algorithm or any unifying principles for the selection. In this sense the study seems unfinished.
- There are no large-scale, principled comparisons with baselines. Even if there are limited existing works studying the problem, I think there are still some naive methods such as using natural language prompts, fine-tuning, and post-selection with classifier. It is not straightforward to see why the method proposed in this paper is the best.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Why is it better to use time-independent reward model for censoring watermarks and time-dependent reward model for other tasks? Is there a general reason why time-independent reward models work better for some tasks and not others?
- Any guidance of choosing $\omega$ ?
- It appears from the paper that the training of reward models need large datasets containing images related to a single unwanted concept (for example fish with human faces). If I do not have such high-quality, single-purpose datasets, maybe only random images with all kinds of undesirable concepts, can the method still work well?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate the constructive comments and the positive evaluation from the reviewer.
We made our best efforts to reflect the comments to improve the paper.
In the following, we address each of the reviewer's comments in detail.
### 1. Regarding the principles for selecting the techniques
We believe that our strategy on when to apply which techniques can be summarized into the following **two simple "rules"**:
- **Ensemble vs. Imitation learning.** We classify the censoring tasks into benign-dominant and malign-dominant cases, and apply Algorithm 1 (reward ensembling) to the former and Algorithm 2 (imitation learning) to the latter. Please also refer to our discussion on this point in lines 132--138 on page 6 of the paper.
- **Time-dependent vs. time-independent reward models.** Guidance using a time-dependent reward $r(\cdot, t)$ is conceptually simpler (the same as the usual classifier guidance) but requires a time-dependent architecture, and the reward model must be trained on images with different levels of signal-to-noise ratio. For these reasons, one usually has to train the model from scratch, which is more suitable for simpler/smaller tasks (Sections 5.1 and 5.3). On the other hand, time-independent reward models require additional techniques such as universal guidance, but the reward model has to be trained only on clean images, so one can exploit various pre-trained classifiers and expedite the reward training through transfer learning. This makes (time-independent) transfer learning more suitable for complex/large-scale tasks (Sections 5.2, 5.4 and the newly added Stable Diffusion experiment). We found in our preliminary experiments that for these tasks, it was not easy to train time-dependent reward models for successful censoring, while transfer learning converged faster and provided promising results.
Besides these rules, we also utilize the universal guidance components to boost the censoring performance, but they apply globally to all tasks that we consider.
Additionally, we observe that our framework requires minimal hyperparameter tuning; the typical values $\alpha=0.1$ (parameter used in the reward-training loss $BCE_\alpha$), reward learning rate $3\times 10^{-4}$, batch size 128, and number of recurrence $R=4$ work sufficiently well for most setups.
For the choice of guidance weight $\omega$, we provide a separate discussion below.
We hope that our clarification resolves the reviewer's impression that there is no principled strategy on the choice of methods.
### 2. Regarding comparison with other methods
Please refer to our common response (Section 4 therein) on this point.
We provide comparisons with rejection sampling (post-selection in the reviewer's terminology) using the reward model in Sections 5.3 and 5.4, where it is shown that rejection sampling is not only worse than the guidance method in terms of precision, but also rejects a large proportion of generated samples, which is undesirable.
We use non-text-conditional models in the paper's experiments, so a direct comparison to prompting was not available, but in our additional experiment using the Stable Diffusion model, some basic attempts like adding "without text" or "no text" to the prompts did not produce successful results.
We acknowledge, however, that sophisticated prompt engineering, negative prompting, or adding specific requests regarding the background could also prevent the appearance of texts.
At this stage, we view our experiment on Stable Diffusion as a proof of concept that our simple methodology does work on large-scale text-to-image diffusion models (rather than claiming superiority over all possible alternatives).
Carefully identifying the problems where even dedicated trials using natural language prompts fail, and solving them through human feedback, seems to be an interesting future research direction.
### 3. Regarding the choice of $\omega$
For an ensemble of $K=5$ reward models, we use **$\omega=1.0$** for the simplest toy case.
Based on our experiments, we speculate that **the appropriate scale is doubled (roughly) with 1) significant scale growth in terms of data size and 2) the introduction of new modality (e.g. unconditional or class-conditional $\to$ text-conditional model).**
We use $\omega=2.0$ for Sections 5.2 and 5.4, where the data size grows to 256$\times$256 scale.
For the additional Stable Diffusion experiment, we again double it and use $\omega=4.0$.
Note that for the experiments of Section 5.3 (ImageNet Tench) or the ablation studies with "Single" and "Union" models where we do not use an ensemble, it is conceptually "fair" to use a $K$-times-larger value of $\omega$, in the following sense.
If $r_\psi = \prod_{k=1}^K r_{\psi_k}^{(k)}$, we have $\nabla \log r_\psi = \sum_{k=1}^K \nabla \log r_{\psi_k}^{(k)}$.
So, viewing each $r_{\psi_k}^{(k)}$ as an approximation to $r$ (the "true" reward), $\nabla \log r_\psi \approx K \nabla \log r$ is used in the guidance process.
Thus, to produce a similar effect, one should multiply $K$ when using log gradient from a single reward model.
In all ablation studies, we stick to this rule.
Similarly, in Section 5.3, we use $\omega=5.0$ which is $K$ times the value of $\omega$ used in Section 5.1.
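The identity underlying this rule, $\nabla \log \prod_{k} r_k = \sum_{k} \nabla \log r_k$, can be checked numerically; the sketch below uses hypothetical 1-D sigmoid rewards rather than actual reward networks:

```python
import math

ws = [0.5, -0.3, 1.2, 0.8, -0.1]  # hypothetical parameters of K = 5 reward models

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_prod_reward(x):
    # log of the ensembled product r_psi(x) = prod_k sigmoid(w_k * x)
    return sum(math.log(sigmoid(w * x)) for w in ws)

def grad_log_reward(x, w):
    # d/dx log sigmoid(w * x) = w * (1 - sigmoid(w * x))
    return w * (1.0 - sigmoid(w * x))

x, eps = 0.7, 1e-6
numeric = (log_prod_reward(x + eps) - log_prod_reward(x - eps)) / (2 * eps)
analytic = sum(grad_log_reward(x, w) for w in ws)  # sum of per-model log-gradients
assert abs(numeric - analytic) < 1e-6
```

With $K$ nearly identical models, the summed gradient is roughly $K$ times a single model's gradient, matching the scaling rule above.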
### 4. Regarding censoring multiple concepts
We thank the reviewer for raising this interesting point.
We believe that multiple concepts can be censored simultaneously by combining reward models corresponding to each one.
Suppose we wish to censor two concepts, and for each of them we train a reward model $\hat{r}_1$ and $\hat{r}_2$ where $\hat{r}_i \approx 0$ indicates that the image contains (is malign with respect to) the $i$th concept.
Similar to using the product in reward ensemble, we have $\hat{r}_1 \hat{r}_2 \approx 0$ if at least one of $\hat{r}_1, \hat{r}_2$ is small, so censored sampling guided by $\nabla \log \hat{r}_1 \hat{r}_2 = \nabla \log \hat{r}_1 + \nabla \log \hat{r}_2$ will enforce the generated images to be free of both concepts.
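The combination rule above amounts to summing the two guidance gradients; a toy sketch with hypothetical scalar values:

```python
# Hypothetical per-coordinate guidance gradients grad log r1, grad log r2
# (in practice these come from backprop through each concept's reward model).
g1 = [0.3, -1.2, 0.5]
g2 = [-0.1, 0.4, 0.9]

r1, r2 = 0.9, 0.02            # second concept flags the image as malign
assert r1 * r2 < 0.05         # the product reward is small if either factor is

omega = 2.0
# grad log (r1 * r2) = grad log r1 + grad log r2, scaled by the guidance weight
guidance = [omega * (a + b) for a, b in zip(g1, g2)]
```

Censored sampling with `guidance` would then steer generations away from both concepts at once.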
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. | Rebuttal 1:
Rebuttal: # Common Response (pdf attached)
We thank all reviewers for the extremely detailed and constructive feedbacks. We are delighted that most reviewers found our ideas convincing and the contributions solid. Below, we provide our response to some common concerns.
### 1. List of additional experiments
In the attached, we show 4 additional experiments.
For better readability, we provide the list of the experiments associated to each figure/table.
- Figure 1: A new task using **Stable Diffusion**. We successfully eliminate embedded raw texts that are unexpectedly associated with the prompt "a photo of a human" (Table 1). This experiment addresses the concern (reviewers ADeQ, 6Ndg) that our framework may not apply, or may not be as efficient as we claim, when diffusion models are of larger scale and complexity. Please refer to Section 2 below for details.
- Table 2: Censoring results using **malign images from a secondary source** (instead of model-generated samples) in the LSUN Church task. This demonstrates that external images may substitute for model-generated malign images when the latter are scarce. We thank reviewer W62Q for motivating this experiment.
- Figure 3a: **Round 4 of imitation learning** in the ImageNet task provides further improvement over Round 3. The continued application of imitation learning reduces the malign proportion approximately exponentially ($y$-axis is in log scale) and could lead to extreme precision as reviewers inquired (GFro, iXGj).
- Figure 3b: Plot highlighting the effect of feedback size for reward training (ImageNet task), as suggested by reviewer W62Q.
### 2. Regarding application to complex tasks/large-scale models
To resolve the incorrect impression that our methods are efficient only for simple settings, we introduce a **Stable Diffusion experiment that uses minimal human effort.**
In Stable Diffusion v1.4, **21.9%** of 1,000 images (512$\times$512) generated by the prompt "a photo of a human" contain prominent embedded texts, or even lack any genuine image and display only raw text (Figure 1a).
(We could not find a straightforward modification to the prompt that removes texts without compromising the generality of the prompt; basic attempts like adding "without text" or "no text" are unsuccessful.)
We collect human feedback data until 100 malign images are found **(3~4 mins)**.
We train 5 reward models via transfer learning (batch size 64, no augmentation, $BCE_\alpha$ weight $\alpha=0.1$, lr $10^{-4}$ and 10,000 iterations).
We use $\omega=4.0$ and no backward guidance.
Without/with recurrence ($R=4$), the proportion of malign samples drops to **4.8%**/**1.3%** (out of 1,000 samples).
Figure 1b shows generation results with all censoring techniques applied.
We hope this resolves some reviewers' concern regarding applicability on large-scale text-to-image models.
### 3. Regarding the human time for LSUN bedroom task
We do acknowledge natural criticisms from some reviewers that this task actually requires $>$3 minutes. However, we point out that
- This is a large-scale task: it uses the full 256$\times$256 diffusion model (not the lighter LDM).
- In fact, we design this setup as a **worst case** on the human work our framework requires.
Most censoring tasks in practice are likely to be less delicate than this task, where we deal with an ambiguous concept of malignity that is difficult to characterize and inevitably requires multiple guidelines to minimize subjectivity of the experiments.
- In the end, even the amount of feedback ($<$1k) and human time (15 mins) for this worst case is significantly smaller compared to those in prior works [1, 2, 3] that use 27~137k human feedback to fine-tune diffusion models.
- Nevertheless, we'll update the title to "Censored Sampling ... Using **a Few Minutes** of Human Feedback".
### 4. Comparison to other censoring methods
To clarify, our paper aims to resolve a defect/inconsistency of a model whose occurrence is **easy for humans to identify, but difficult to be mathematically described or fixed**.
There are many issues of this kind, e.g.: NSFW contents, inconsistencies in limbs/fingers, or double-headed humans, persistently appearing even after negative prompting.
Our premise is that **there are cases in which feedback from humans is the only simple or feasible way of encoding the model's defect, and for these types of tasks, any other methodology such as fine-tuning must share the process of collecting human feedback data and learning from these data**.
Training a reward model that fits the human feedback data and then fine-tuning as in [1, 2, 3] involves more complications than our approach, which is arguably one of the simplest things to do with a reward model.
Fine-tuning usually requires multiple trial-and-errors until the best hyperparameters are found, as the diffusion model often overfits to the fine-tuning objective and experiences catastrophic forgetting.
(This entire process is not necessary if we simply use guidance.)
Another possible approach is to apply the policy gradient type fine-tuning as in [4] without explicitly training the reward model but letting humans play the role of the environment (reward oracle).
However, we see less hope in this direction because policy gradient is notoriously sample-inefficient [5], unless a technical endeavor comparable to the one devoted to this whole paper is made.
**Therefore, we argue that other approaches, including those based on fine-tuning, may not be as straightforward or suitably sample-efficient as they might seem at first glance.**
[1] Lee et al., Aligning Text-to-Image Models using Human Feedback. 2023.
[2] Xu et al., ImageReward: Learning and Evaluating Human Preferences ... Generation. 2023.
[3] Fan et al., DPOK: Reinforcement Learning for Fine-tuning ... Diffusion Models. 2023.
[4] Fan & Lee, Optimizing DDPM Sampling with Shortcut Fine-Tuning. ICML, 2023.
[5] Gu et al., Q-Prop: Sample-Efficient Policy Gradient ... Critic. ICLR, 2017.
Pdf: /pdf/9e2064f245a39bc6d66b27659a5241f7314aab22.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors combine pre-trained diffusions with classifiers trained on human feedback about which type of images to omit, and use the classifiers to guide diffusion sampling using the Universal Guidance technique. They observe that they are able to filter out several types of malign images on a variety of datasets, notably removing undesirable artifacts on LSUN and faces on Imagenet.
I would recommend this paper for acceptance.
Strengths: - People care about censoring sampled outputs, and don't like re-training large diffusions
- Together with the appendix, the specifics of how exactly to guide are well explored in this work
- I like that while "bad" images are hard to quantify with some kind of numerical per-sample score, it is easy for a human to recognize instances when they do see them.
- emphasizing that not much feedback is needed for the cases explored
Weaknesses: The main concerns would naturally be:
- how much feedback is enough
- are the good results specific to the type of things being censored in the particular datasets chosen
- for the infrequent malign content case, can a second dataset be used in place of model samples?
See questions below.
Other small comment: change "man hours" to "human work hours"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the first concern, it would be great if you could include something showing the number of model-produced malign samples going down as a function of the number of malign samples used in classifier training, or as a function of the total number of samples used for classifier training, or both. I know this is hard to do since the outputs need to be checked manually. Even doing this with a batch of 128 for a few classifier-training-set sizes would be good, to see, e.g., whether the malign proportion lowers for several feedback dataset sizes.
For the second concern, the crossed 7's are distinct enough in MNIST such that a very small classifier could quickly assign zero weight to them. However, the faces on Imagenet are more convincing since the properties to be censored (faces) do appear in several ways that aren't as clearly disentangled from benign properties. It would be useful to share somewhere in the text a case where the censoring did not work despite what looks like an adequate amount of samples. Did this happen for censoring some other of the Imagenet classes? Are there any cases you explore where the choice of malign samples constitute nearly all of the training set?
Feedback on a second dataset rather than model samples: for the case where a diffusion is trained on a dataset A with low frequency of the malign property (but nevertheless we need to guarantee that samples do not ever contain the property), the model samples will feature malign content infrequently. What if you have a second dataset B where the property is common? If the datasets are the same resolution and are diverse enough, and share some concepts this might work and might be a nice way to get past low malign frequency in dataset A? Consider the case where A only has some faces but B is at least half human faces. So samples from the A-trained-model do not produce malign content often and good feedback is hard to collect. How well does a feedback model trained on a second dataset B (which are not model samples) work for censoring the A-trained-diffusion from generating faces?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted to see that the reviewer empathizes with our problem statement and solutions. We also greatly appreciate the inspiring comments with the positive evaluation. We have devoted our best efforts to provide satisfactory responses to each of the reviewer's concerns below.
### 0. Regarding the expression "man hours"
We have replaced instances of the word "man" with "human".
We thank the reviewer for pointing this out.
### 1. Regarding the effect of feedback data size on malign proportion
Reflecting the comment, we tested the effect of feedback data size used for reward training on the censoring performance in the setup of Section 5.3.
Please refer to Figure 3b within the attached document in the common response.
To clarify, we already had comparisons of the cases using 10, 20 and 30 malign images within the ablation studies, but newly added the cases of 50 and 100 malign images for clearer demonstration, and we do obtain a figure of the desired shape.
We did not have enough capabilities to repeat the analogous procedure for the more complex setups of Section 5.4 and the additional Stable Diffusion experiment within the rebuttal period, but we plan to include them in the revised version of the paper.
We thank the reviewer for the suggestion.
### 2. Regarding whether the success of proposed methods is general
We would like to clarify that **the tasks we consider in the paper are not the ones particularly curated for success.**
That is, we did not extensively explore setups other than those shown in the paper; rather, we fixed the tasks from the beginning and developed general strategies that could consistently deliver the results.
Therefore, we believe that **our approach will be applicable over a wide range of datasets and tasks.**
It is true, though, that there seems to be a minimum number of samples required for positive results; e.g., using 50 malign and 50 benign samples for the LSUN Bedroom task did not yield satisfactory results.
We speculate that if the proportion of malign images from the baseline model is extreme (say, 99%) then it will be very difficult to successfully censor them out, but we have not tried pushing through experiments on such setups.
Our intuition on the censored generation is: it works by encouraging a model's benign behavior while suppressing the malign side, which is possible only when the model already possesses sufficient capability of producing benign images.
Therefore in designing experiments, we only considered the tasks where the baseline model generates a reasonable proportion of benign images, and for all of these cases we managed to achieve good censoring precision.
In the later version of the paper, we will try to include more discussion on this point.
### 3. On using a secondary dataset
We believe **using a secondary dataset is indeed a feasible strategy, and we provide a proof-of-concept experiment** using the LSUN Church setup where we censor the "Shutterstock" watermarks.
Instead of marking samples generated by the model as malign, we visit the Shutterstock website and manually collect 30 images (the same number as in the experiments from Section 5.2) with clearly visible watermarks within the search results for "church", "gothic church" and "cathedral", using the Snipping Tool (please see Figure 2a of the attached document).
We fix this secondary set of malign images $\mathcal{M}$ and train 5 reward models each using $\mathcal{M}$ together with 30 fresh random benign samples generated from the model.
We use the same configurations as described in Section 5.2 and the Appendix I of the main paper for reward training.
We observe that although slightly worse compared to the best ordinary censoring results from the paper **(0.76%)**, censoring via ensemble of the reward models trained using the secondary source is effective (achieving **1.4%** malign proportion) when used with $R=4$ recurrence steps (Table 2 in the attached pdf).
On the other hand, it seems that this alternative approach should be used with more care.
Our censored generation based on the secondary data does produce legitimate images with lowered proportion of malign images in the best case (pdf Figure 2b), but we heuristically observe frequent degradation in the generated images' quality without universal guidance (recurrence).
Additionally, we observe that the guidance procedure becomes relatively less resilient to larger guidance weights when using the secondary data; using $\omega=2.0$ as in the paper seemed to lower the image quality, so we use $\omega=1.0$ for the results of the pdf's Table 2 and Figure 2b.
This is possibly because it becomes much easier for the reward model to discriminate between the malign and benign images when malign images originate from a dataset of distinct distribution, so that it is more likely that the reward overfits and quickly learns to classify with extremely high confidence, causing the gradient to overshoot.
Finally, if the secondary dataset is not sufficiently close to the distribution the diffusion model has learned, then censoring may be unsuccessful.
When we try the same experiment using slightly different search keywords such as "old city", it does not work.
This means that it is difficult to make a precise prediction on whether the strategy will succeed in, e.g., the virtual scenario suggested by the reviewer; it is genuinely case-by-case so the best practice seems to be trying out with hope. | Summary: This paper proposes an approach to censor the sample generation of pre-trained diffusion probabilistic models to better align with human preferences. The authors use minimal human feedback (<3min spent in providing the feedback for basic tasks, and <15min for more complicated tasks that they consider) to train a light-weight reward model, then use the trained reward model to provide either time-dependent or time-independent guidance for the generation process. The authors also use techniques such as ensemble, iterative imitation learning, transfer learning, backward guidance and recurrence to improve censoring performance. Experiments on MNIST, LSU church, ImageNet, and LSUN bedroom showed the effectiveness of their proposed method.
Strengths: The paper studies an interesting and important problem of aligning diffusion models with human preferences. By training a relatively lightweight reward model to guide the generation process instead of fine-tuning the large pre-trained model, their proposed approach could potentially save a lot of computation and human feedback.
Weaknesses: - The paper's structure could be more coherent and consistent. The authors introduce classifier guidance in Section 1.1 but leave its relation to censored sampling unclear until Section 4. This disconnect could confuse readers. Similarly, there is a lack of clarity regarding the reward function $r$ and the reward model $r_\phi$ introduced in Section 2. While initially, it seems that $r_\phi \approx r$, the possibility of $r_\phi$ becoming time-dependent, as mentioned in Line 97 of Section 3, introduces confusion because $r$ is time-independent. This is not resolved until Section 4, when time-dependent guidance is addressed. To improve readability, it would be beneficial for the authors to introduce the underlying notations and mathematical concepts of the entire model—from the diffusion probabilistic models to the reward models and their role in censored sampling—before discussing training and evaluations.
- In section 3.1, the authors choose $r_\phi$ as the product of $K=5$ independently trained reward models. However, they did not have enough discussions or experiments to back up these choices, which makes the resulting model seem arbitrary. In particular, since the authors mentioned $r_\phi(X)\approx r(X)=P(Y=1|X)$, it's unclear why taking the product--as opposed to taking the average--makes sense because each $r_{\psi_k}$ is also approximating $r$. Moreover, if we ignore the mathematical implications for now, the authors explained in line 115 that taking the product is essentially asking for unanimous approval, but readers might still question whether other methods, such as increasing the value of $w$ in the "union" method, could work just as well.
- The paper highlights extreme human feedback efficiency by saying that a few minutes of human feedback are sufficient. However, this claim overlooks the complexity of tasks considered in this paper, which are relatively simple compared to tasks where RLHF is typically employed in large-scale models. Therefore, it would be better to directly compare with previous approaches (such as fine-tuning) under the same task. Additionally, the more complex tasks, like censoring distorted bedrooms, actually require more than the estimated 3 minutes of human feedback. Hence, the title could be seen as potentially misleading or exaggerating the method's efficiency.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - For the training data $X^{(1)},\cdots,X^{(N)}$ in Algorithms 1 and 2, are they images corrupted by the VP SDE or images generated by the pre-trained model $\varepsilon_\theta$? What are the objectives in training each reward model in the ensemble method?
- In Algorithm 1, for each $k$ the learner randomly select with replacement $N_M$ benign samples. Does this step also need human feedback?
- Typo in line 204. "30 malign and 150 malign samples"
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for constructive comments. We made our best efforts to address the concerns and reflect the comments to improve the paper. For the concerns regarding the complexity of the tasks we cover, comparison against other methods, and the longer human time for the LSUN Bedroom task, please refer to the pertaining discussions within our common response. We hope that we have properly addressed the reviewer's concerns, and if that is the case, we kindly ask the reviewer to consider increasing the rating.
### 1. On the presentation of the paper
We highly appreciate these suggestions. We will clarify in Section 1.1 that classifier guidance is the basis of our censoring framework and is therefore relevant to the paper's contents. We will also clarify within Section 2 that the reward function $r(\cdot, t)$ can be time-dependent, and that considering a time-dependent reward helps, in some setups, to properly deal with images at different levels of noise encountered during the sampling process of diffusion models.
### 2. On the choice of reward model
Indeed, there are multiple ways to combine individual reward models $r_{\psi_k}^{(k)}$ to build a refined approximation of the true reward $r$, and in our initial experiments (not included in the paper), we tested each of
- Reward averaging: using $r_\psi = \frac{1}{K} \sum_{k=1}^K r_{\psi_k}^{(k)}$
- Logit averaging: using $r_\psi = \sigma \left( \frac{1}{K} \sum_{k=1}^K h_{\psi_k}^{(k)} \right)$, where $r_{\psi_k}^{(k)}(X) = \sigma \left( h_{\psi_k}^{(k)}(X) \right)$ for each $k=1,\dots,K$ and $\sigma$ is the sigmoid function
- Geometric reward averaging: using $r_\psi = \left( \prod_{k=1}^K r_{\psi_k}^{(k)} \right)^{1/K}$.
While each method seems to make sense in principle, it turned out that methods 1, 2 were not as empirically effective as method 3 in censoring out the unwanted samples.
Note that taking the product of each $r_{\psi_k}^{(k)}$ is equivalent to using method 3 and then increasing the guidance weight $\omega$ by $K$ times.
We would like to clarify that **we already did make a fair comparison, in this context, between the "ensemble" models versus the "union" models** in all of our ablation studies in Sections H.3, I.4 and K.4, where we used $K$ times larger values of $\omega$ for the "union" cases compared to the "ensemble" cases, which are the product models. Our experiments conclude that ensemble models are still always better.
One must use the guidance weight $\omega = 1.0$ to strictly obey the mathematical formalism $p_{\mathrm{censor}}(x) = p_{\mathrm{data}}(x) r(x)$, but we believe it is an everyday practice in the diffusion model literature to use larger $\omega$ for improved consistency with the conditioning label in guided/conditional generation.
In the end, we observed that using the product of $K=5$ reward models, which is equivalent to using $\omega = 5.0$ with respect to the geometric average, already works fairly well in most cases even if it is not a carefully tuned choice. (We do expect further performance gain by ensembling a larger number of reward models, but we do not pursue this minor improvement direction in depth.)
For the interest of space, we omitted these details and only mentioned that we use the product as our reward.
However, we would be happy to further justify this choice by including the above discussion, if we are later provided with additional space.
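To make the relationship between these combination rules concrete, here is a small numerical sketch (hypothetical scalar reward values for one input, not outputs of the paper's models) showing that guiding with the plain product is equivalent to geometric averaging with the guidance weight scaled by $K$:

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

# hypothetical per-model rewards r_k(x) in (0, 1) for a single input x
rewards = np.array([0.9, 0.8, 0.95, 0.85, 0.7])
K = len(rewards)
logits = np.log(rewards / (1.0 - rewards))

r_avg = rewards.mean()                 # method 1: reward averaging
r_logit = sigmoid(logits.mean())       # method 2: logit averaging
r_geo = np.prod(rewards) ** (1.0 / K)  # method 3: geometric averaging
r_prod = np.prod(rewards)              # the product used as the reward

# log r_prod = K * log r_geo, so guidance by the product equals
# geometric-average guidance with the weight omega multiplied by K
assert np.isclose(np.log(r_prod), K * np.log(r_geo))
print(r_avg, r_logit, r_geo, r_prod)
```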
### 3. Question regarding the notation $X^{(1)}, \dots, X^{(N)}$ and reward model training
$X^{(1)}, \dots, X^{(N)}$ are the images generated by the pre-trained model, and in Algorithm 1, we assume that the human feedback (labels for malign/benign) is already given on them.
When using the time-dependent reward model architecture, we also sample $t$ uniformly within $[0, T]$ and noise $X^{(i)}$ up to time $t$ along the VP SDE.
We use the weighted BCE loss function as described in our Appendix, Section F: $BCE_\alpha(r_\psi(x;t), y) = -\alpha \cdot y \log r_\psi(x;t) - (1-y) \log (1-r_\psi(x;t))$ where $\alpha < 1$ is a hyperparameter, determining to which extent the reward model prioritizes classifying malign images as malign over benign images as benign.
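A minimal sketch of this weighted loss (assuming scalar predictions and binary labels as in the formula above; the variable names are ours, not from the paper's code):

```python
import numpy as np

def bce_alpha(r, y, alpha=0.1):
    """Weighted BCE as written above: alpha scales the y = 1 term,
    controlling the trade-off between the two classification errors."""
    r = np.clip(r, 1e-7, 1.0 - 1e-7)  # numerical safety
    return -alpha * y * np.log(r) - (1.0 - y) * np.log(1.0 - r)

# with alpha = 0.1, the y = 1 term is down-weighted ten-fold
loss_pos = bce_alpha(0.5, 1)  # = 0.1 * log 2
loss_neg = bce_alpha(0.5, 0)  # = log 2
```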
### 4. Question on the subsampling of benign samples
In the setup of Algorithm 1, we assume that benign images are more frequently generated compared to malign images.
The human feedback collection stage terminates once the number of malign samples reaches the desired level ($N_M$), and at this moment one already has a pool of abundant ($N_B > N_M$) benign samples (because the feedback provider automatically marks images as benign if they are not malign).
Thus, during the execution of Algorithm 1, no additional human feedback is required.
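A hedged sketch of this subsampling step (pool sizes and item names are hypothetical; `random.choices` samples with replacement, as Algorithm 1 specifies):

```python
import random

# Feedback collection stops once N_M malign images are found; by then an
# abundant pool of benign images (N_B > N_M) already exists as a by-product.
N_M = 30
malign = [f"malign_{i}" for i in range(N_M)]
benign_pool = [f"benign_{i}" for i in range(200)]  # N_B = 200

# Each of the K reward models pairs the fixed malign set with a fresh
# with-replacement subsample of N_M benign images; no extra human labels.
K = 5
training_sets = [
    malign + random.choices(benign_pool, k=N_M) for _ in range(K)
]
```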
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's rebuttal and the additional experiment on stable diffusion. Given that most of my concerns have been addressed, I will raise my score from 4 to 6. | null | null | null | null |
Energy-Based Models for Anomaly Detection: A Manifold Diffusion Recovery Approach | Accept (poster) | Summary: The authors introduce a novel algorithm, Manifold Projection-Diffusion Recovery (MPDR), for training energy-based models (EBMs) that improve the performance of anomaly detection tasks. These tasks are highly relevant in real-world applications like industrial surface inspection, machine fault detection, and particle physics.
Unlike conventional EBM training methods, MPDR harnesses low-dimensional structures in the data to generate more informative negative samples. It works by initially perturbing a data point along a manifold approximating the training dataset and then training the EBM to maximize the probability of recovering the original data.
A significant aspect of this new method is its use of Manifold Projection-Diffusion (MPD), which replaces Gaussian noise with perturbations reflecting the data's low-dimensional structure. This approach provides more meaningful insights into variations within the data.
The authors show that MPDR provides consistent density parameter estimation under standard assumptions, is compatible with any energy function, and can work with multiple autoencoders - an advantage over existing algorithms. Moreover, it demonstrates good results even with lightweight autoencoders, making it computationally efficient.
The paper also presents practical strategies for deploying MPDR, such as two-stage sampling, energy function design, and ensemble techniques using multiple autoencoders.
Through experimental testing on various data types, including images, representation vectors, and acoustic signals, the authors demonstrate that MPDR significantly improves unsupervised anomaly detection tasks, outperforming other deep generative models.
Strengths: **Originality:**
The paper is highly original in its formulation and approach to training energy-based models (EBMs) for anomaly detection. The development of the Manifold Projection-Diffusion Recovery (MPDR) represents a creative combination of existing methodologies, such as the use of autoencoders for efficient training and manifold projections for capturing low-dimensional data structures. The method of perturbing data points along a low-dimensional manifold that approximates the training dataset and then training the EBM to maximize the recovery of the original data is a fundamentally new approach.
**Quality:**
The quality of the paper is commendable. The authors present a clear problem statement, propose a novel solution, and provide empirical evidence supporting their claims. They also delve into the theoretical backing of the proposed method, offering a consistent density parameter estimation under standard assumptions. The paper includes detailed experimental results, highlighting the strength of MPDR across diverse anomaly detection tasks and data types.
**Clarity:**
The exposition in the paper is clear and well-organized. The authors have done an excellent job explaining complex concepts and methodologies, which makes the paper accessible even to readers who may not be experts in the field. The use of illustrative figures and well-explained algorithms further enhances the clarity of the paper.
**Significance:**
The significance of this work is substantial, given the wide range of practical applications of anomaly detection. By providing a more efficient and effective way to train EBMs, MPDR could significantly improve performance in areas like industrial surface inspection, machine fault detection, and particle physics. Furthermore, the ability of MPDR to perform effectively with lightweight autoencoders means it can be used in scenarios where computational resources are limited, making it relevant to a broader audience.
Weaknesses: While the paper presents a compelling new approach, here are a few areas that could be addressed or clarified further:
**Theoretical Analysis:** While the authors provided a theoretical justification for consistent density parameter estimation under standard assumptions, it would be beneficial to include more analysis of the proposed method's convergence properties. Understanding how the choice of manifold affects the convergence and stability of learning would also be useful.
**Comparison with Other Methods:** The paper could benefit from a more comprehensive comparison with other state-of-the-art methods for anomaly detection. Not only should this include direct quantitative comparisons on common datasets, but also qualitative discussions about when and why one might prefer the proposed method over others.
**Parameter Sensitivity:** It is not clear how sensitive the results are to the choice of parameters within the MPDR framework. It would be beneficial for practical applications to know how much tuning is needed to achieve optimal performance and how robust the method is to variations in these parameters.
**Real-World Applications:** While the experiment demonstrates the effectiveness of the proposed algorithm in various data types like images, vectors, and acoustic signals, applying the model to real-world datasets and providing case studies can strengthen the paper. This will help readers understand its practical implications better.
**Computational Complexity:** The paper mentions that MPDR performs well with lightweight autoencoders, which indicates computational efficiency. However, a more explicit discussion or analysis of the computational complexity of the proposed method, including both training time and inference time, would provide valuable information to practitioners considering this method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. **Theoretical Analysis:** Could the authors provide a more rigorous analysis of the convergence properties of their proposed method? Specifically, how does the choice of manifold affect the stability and speed of learning in MPDR?
2. **Comparison with Other Methods:** It would be beneficial to see a wider comparison with other state-of-the-art anomaly detection methods. Could the authors elaborate on why one might choose MPDR over other established methods in specific scenarios?
3. **Parameter Sensitivity:** How sensitive is the MPDR algorithm to the initial choice of parameters? Is there a recommended procedure for parameter tuning, or guidelines that could assist users in achieving optimal performance?
4. **Real-World Applications:** Could the authors possibly demonstrate the application of their model on real-world datasets or provide case studies? This could help showcase the practical implications of the proposed method.
5. **Computational Complexity:** The paper mentions that MPDR can work well even with lightweight autoencoders, but could you please clarify further on its computational complexity, training time, and inference time? Would there be any scalability issues when applying this method to larger datasets?
Looking forward to the authors' response to these points.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: From the information provided, it does not appear that the authors have explicitly discussed the limitations and potential negative societal impacts of their work. Therefore, here are some suggestions for addressing these points:
**Limitations:**
1. **Robustness to Noise:** How well does the MPDR framework handle noise in the data? Real-world data often contain a significant amount of noise, which may distort the underlying manifold structure that MPDR relies on.
2. **Scalability:** The scalability of the model hasn't been addressed. Can the method be applied efficiently to very large datasets? What would be the computational requirements in such cases?
3. **Multimodality:** How effectively can MPDR handle multimodal or high-dimensional distributions? This is a common challenge in many real-world anomaly detection tasks.
**Potential Negative Societal Impacts:**
While this study primarily focuses on improving anomaly detection methods, which generally have positive implications (e.g., defect detection in manufacturing, early detection of diseases, etc.), any technology has the potential to be misused.
1. **Privacy Concerns:** Anomaly detection tools can potentially be used to identify outliers or anomalies in personal behavior or characteristics, leading to potential privacy concerns if misused, especially in contexts like surveillance or social profiling.
2. **False Positives/Negatives:** In critical applications, false positives or negatives can have serious repercussions. For instance, in health care, a false positive might cause unnecessary worry or treatment, whereas a false negative could delay necessary intervention.
It would be beneficial to see the authors address these potential issues and discuss how they might be mitigated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Tcts,
We would like to express our sincere gratitude for taking the time and effort to review our paper. We greatly appreciate your highly detailed and constructive comment and are happy to answer your questions.
**Theoretical Analysis**
> How does the choice of manifold affect the stability and speed of learning in MPDR from a theoretical perspective?
Thank you for the insightful question. Indeed, the theoretical analysis can shed light on the choice of a manifold. According to our analysis of the asymptotic normality of the estimation error, an encoder with a higher compression rate tends to yield better estimations. That is, the more information lost during encoding, the smaller the estimation error becomes. This understanding aligns with our empirical observation that L2 regularization on the encoder weights often yields improved results. We will incorporate this analysis into the updated manuscript.
A more detailed argument is as follows:
In the limit of infinite data, the parameter estimation error from maximum recovery likelihood follows a zero-mean normal distribution as in maximum likelihood estimation. The covariance of this distribution is determined by the inverse of a term $I(\theta)=E_{p(x,\tilde{z})}[-\nabla^2_\theta \log p_\theta (x|\tilde{z}) ]$. Since $\nabla^2_\theta \log p_\theta (x|\tilde{z})= \nabla^2_\theta \log p_\theta(x) - \nabla^2_\theta \log p_\theta(\tilde{z})$, this term is decomposed as follows:
$$ I(\theta) = I_0 (\theta) + E_{p(x,\tilde{z})}[\nabla^2_\theta \log p_\theta(\tilde{z}) ],$$
where $I_0 (\theta)$ is Fisher information. The term $\nabla^2_\theta \log p_\theta(\tilde{z})$ becomes 0 when the encoder compresses all $x$ into a single $z$.
However, with an encoder that compresses everything, we could not leverage the inductive bias of the manifold assumption. Therefore, in practice, we need to use a moderately compressing encoder.
**Comparison with Other Methods**
> It would be beneficial to see a wider comparison with other state-of-the-art anomaly detection methods.
Due to page constraints, we primarily compared the proposed method with the closely related generative anomaly detectors. At the request of other reviewers, we have conducted additional comparison experiments. If there's a specific algorithm you'd like to see compared, please let us know. We are happy to incorporate additional comparative experiments in our updated manuscript.
> Could the authors elaborate on why one might choose MPDR over other established methods in specific scenarios?
Since MPDR explicitly leverages low-dimensional representations, we believe MPDR is a recommendable choice when data is expected to have a pronounced manifold structure.
**Parameter Sensitivity**
> How sensitive is the MPDR algorithm to the initial choice of parameters? Is there a recommended procedure for parameter tuning, or guidelines that could assist users in achieving optimal performance?
Please see Appendix B.2 for a sensitivity analysis of important hyperparameters, such as the noise magnitude and the latent dimensionality. As in other outlier detection algorithms, those hyperparameters can be selected using a separate validation OOD dataset, as proposed in Appendix A of Hendrycks et al., 2018.
Hendrycks et al., Deep Anomaly Detection with Outlier Exposure, 2018.
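As an illustration, the validation-OOD selection procedure from Hendrycks et al., 2018 can be sketched as a minimal, hypothetical Python loop (not the actual experiment code; `train_model` and `score_fn` are placeholder callables): each candidate configuration is scored by AUROC on a held-out validation outlier set, and the best one is kept.

```python
def auroc(inlier_scores, outlier_scores):
    """AUROC via pairwise comparison: P(outlier score > inlier score),
    counting ties as half a win."""
    wins = 0.0
    for o in outlier_scores:
        for i in inlier_scores:
            if o > i:
                wins += 1.0
            elif o == i:
                wins += 0.5
    return wins / (len(inlier_scores) * len(outlier_scores))

def select_hyperparams(candidates, train_model, score_fn,
                       val_inliers, val_outliers):
    """Train one model per candidate config and keep the config with the
    highest validation AUROC (ties resolved in favor of the first)."""
    best_cfg, best_auc = None, -1.0
    for cfg in candidates:
        model = train_model(cfg)
        auc = auroc([score_fn(model, x) for x in val_inliers],
                    [score_fn(model, x) for x in val_outliers])
        if auc > best_auc:
            best_cfg, best_auc = cfg, auc
    return best_cfg, best_auc
```

The same loop applies to any scalar anomaly score, whether it is an energy value or a reconstruction error.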
**Real-World Applications**
> Could the authors possibly demonstrate the application of their model on real-world datasets or provide case studies?
Please see the experiments in Section 4.4, which utilize audio data collected from real-world mechanical devices. MPDR delivers improvements on this dataset as well.
**Scalability & Computational Complexity**
> Could you please clarify further its computational complexity, training time, and inference time? Would there be any scalability issues when applying this method to larger datasets?
In our CIFAR-10 experiment, for example, training MPDR takes about 5 hours on a single V100 GPU, including the training of the autoencoders used in manifold projection-diffusion. Compared to other energy-based models, which are typically trained for more than one day, MPDR requires a shorter training time. Using the same V100 GPU, inference over the whole test set takes 3.5 seconds, which corresponds to ~2,800 images per second. Inference is lightweight because the autoencoders used in the perturbation are only used during training and are not required for making predictions.
With respect to the amount of data, MPDR is as scalable as other deep learning algorithms, as its training is fully based on stochastic gradient descent.
**Multimodality**
> How effectively can MPDR handle multimodal distributions?
This could be a highly interesting future research direction. One possible method is to build a joint latent representation of multiple modalities and run MPDR in that space.
**Potential Negative Impacts**
We believe that the concerns raised regarding negative impacts are not specific to the proposed algorithm but pertain to machine learning algorithms in general. At the very least, issues related to false positives/negatives can be mitigated by developing a more refined algorithm, and MPDR aims to contribute in that direction.
We thank you again for your detailed comment. Please let us know if you have any other unresolved concerns. If your concerns are resolved, please kindly consider raising the evaluation score.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Comment: I acknowledge I have read the rebuttal. | Summary: Paper proposes MPDR, a novel method of using auto-encoders for training EBM.
Some practical techniques are introduced.
Extensive numerical experiments are done.
Strengths: Numerical experiments cover a large scope of benchmarks. And it shows superiority on most benchmarks.
Weaknesses: No theoretical guarantee is provided.
And it doesn't show dominant superiority on some datasets.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Can we have some theoretical analysis on when the proposed method has a significant advantage, and when not? Or is the proposed method better in general, and some losses are just random?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: No negative societal impact is seen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Pi2D
Thank you so much for your comment. We would like to address your concerns in detail.
> No theoretical guarantee is provided.
We are concerned that this statement does not accurately reflect what is presented in the paper. Please refer to the end of Section 3.2 and Appendix A, where we provide the theoretical guarantee that the proposed method offers an asymptotically unbiased estimation of the underlying density. Consistent estimation of density can lead to accurate recovery of the distribution’s support (Cuevas and Fraiman, 1997), enabling successful out-of-distribution detection.
Antonio Cuevas and Ricardo Fraiman, "A plug-in approach to support estimation," Annals of Statistics, 25(6):2300-2312, 1997.
> The method does not show dominant superiority on some datasets. When does the proposed method have a significant advantage?
Qualitatively speaking, the proposed method is designed to leverage the inductive bias that the data possesses a pronounced low-dimensional structure. This manifold assumption is both useful and effective since it approximately holds true for a wide range of data. However, this assumption may not be as effective in complex datasets, such as high-resolution images.
Is there a specific set of results that concerns you? Please let us know. We will try to provide an explanation for the result.
> Can we have some theoretical analysis on when proposed method has significant advantage, when not?
Almost all the generative anomaly detection methods compared in our paper offer consistent density estimation, making them nearly equivalent from a theoretical perspective. The difference in empirical performance originates from the inductive biases of the algorithms and their implementation details. As with many other deep learning algorithms, it is challenging to quantify the contribution of these aspects. However, exploring this could be a highly rewarding avenue for future work.
Again, thank you for your time in reviewing our work. Should you have any additional questions, please leave us a comment. If your concerns have been addressed, we kindly ask if you would consider enhancing the score.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I will keep my original assessment. | Summary: This paper introduces an energy-based model based on the manifold of low-dimensional data. To train the EBMs, this paper takes the idea of maximum recovery likelihood and adds a layer of autoencoder to approximate the low-dimensional manifold representing the data. This introduces perturbation along the low-dim manifold representing the dataset. Simulation results are provided to show the performance.
Strengths: - It is easy to follow.
- Sufficient literature review.
Weaknesses: - The novelty of this paper is moderate at best. It only adds a deterministic encoder/decoder to the original problem of likelihood recovery.
- The accuracy of the manifold approximation is only as good as the autoencoder's approximation of the dataset.
- The generation of negative samples concentrated near the manifold could potentially introduce biases.
- lack of comparison to some recent advancements in the topic such as:
* Anomaly Detection in Networks via Score-Based Generative Models by Gavrilev et al., 2023.
* Enhancing Unsupervised Anomaly Detection with Score-Guided Network by Huang et al., 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How can we mitigate these biases to ensure the trained energy-based model generalizes well to out-of-distribution samples?
- The encoder in this algorithm is assumed to be deterministic. How does the deterministic assumption impact the performance and the ability to capture the full range of variations in the data?
- Could the authors please elaborate more on the conditions under which the consistency of the estimation by maximizing $\log p_\theta(x|\tilde{z})$ holds?
- How does the use of a latent chain and latent space improve the sampling process compared to the visible chain? What are the advantages and disadvantages of using a latent LMC?
- The paper mentions that the perturbation design, including the encoder-decoder pair (fe, fd) and noise magnitude $\sigma$, significantly impacts the algorithm's performance. How can we effectively select the optimal perturbation design for different datasets and anomaly detection tasks?
- The paper mentions that the autoencoder (fe, fd) and the noise magnitude $\sigma$ should be independent of $\theta$ and remain fixed during training. How does the fixed nature of the autoencoder and noise magnitude impact the algorithm's adaptability to different datasets and anomaly types?
- Can you provide more details on how the simultaneous use of multiple perturbations enhances the algorithm's performance? Are there any potential trade-offs or challenges associated with this approach?
- One of the main potential issues is the memory overhead in manifold ensembles. When utilizing multiple autoencoder manifolds in MPD, there is a memory overhead associated with processing multiple groups separately. How can this memory overhead be managed effectively to ensure efficient training while utilizing multiple autoencoders?
- Can you provide more insights into how the choice of $D_z$ affects the algorithm's ability to detect anomalies? How can we determine the optimal combination of autoencoders with varying $D_z$ for different types of data? Specifically, for high-dimensional data such as images, how does the MPDR framework deal with the curse of dimensionality and maintain good performance with relatively small autoencoders?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please check weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer gSqM,
Thank you for dedicating your time to reviewing our work. We sincerely appreciate your detailed feedback. Here, we would love to answer your questions in depth.
**The novelty of the paper**
We would like to highlight that simplicity does not always equate to triviality. MPDR (our method) is the first example of training an energy-based model using the recovery likelihood framework with a non-Gaussian perturbation. Even though MPDR is based on autoencoders and the recovery likelihood, it achieves significantly better out-of-distribution detection performance than both of the base algorithms, as repeatedly shown in our experiments (Tables 1, 3, and 4, where DRL is an algorithm based on the recovery likelihood). Please also consider that the paper provides novel techniques, such as the perturbation ensemble and the latent-space LMC, which deliver stable training and improved performance.
**Accuracy of the manifold approximation is as good as the autoencoder approximation**
We are afraid that this statement does not accurately reflect what is presented in the paper. MPDR can learn the correct data distribution (and thus the correct data manifold) even when an autoencoder provides a crude approximation. In Figure 2, the manifold learned by an autoencoder (the gray line) does not reflect the cluster structure of data, being a crude approximation. However, the resulting energy function correctly captures the underlying clusters, as shown in Figure 3.
MPDR can learn from an imperfect autoencoder because input-space LMC is not confined to the autoencoder’s manifold and can generate off-manifold samples.
**Potential bias due to near-manifold negative samples / How can we reduce bias**
In theory, MPDR provides an asymptotically unbiased estimation of the probability density, as we show in Section 3.2 and Appendix A. This is possible because ideal MCMC sampling can cover the entire input space.
In practice, bias may exist, and its root cause is finite-length MCMC, which does not mix over the entire space. The training techniques we introduce contribute to mitigating this bias. For example, the manifold ensemble provides diverse starting points for MCMC. Also, the use of an autoencoder-based energy function (MPDR-R) automatically assigns high energy to points that are far from the data.
Meanwhile, the inductive bias from the near-manifold negative samples could be useful. Other energy-based models, such as IGEBM, also suffer from biases from finite MCMC. When compared to them, MPDR achieves stronger OOD detection performance, as shown in Table 1, 3, and 4.
**Comparison to other recent advancements**
> Anomaly Detection in Networks via Score-Based Generative Models by Gavrilev et. al, 2023.
Unfortunately, Gavrilev et al., 2023 is not directly comparable with MPDR, because it is designed for a different task, graph node anomaly detection. We will cite and mention this work in the updated manuscript, as extending MPDR to graph data is an interesting future direction.
> Enhancing Unsupervised Anomaly Detection with Score-Guided Network by Huang et. al, 2022.
Thank you for letting us discover an interesting work. We compare MPDR with SG-AE and SG-RSRAE, the methods proposed by Huang et al., 2022. We run them on MNIST, as both papers contain MNIST experiments. We use the digit 4 as inliers and the rest of the digits as outliers. We use the MPDR-R model. All the hyperparameters are the same as in the MNIST experiment presented in the main manuscript. We will cite this work and include this result in the updated manuscript.
| | AUC-ROC| AUC-PR|
|:---|----:|----:|
|SG-AE| 0.939±0.005| 0.563±0.015|
|SG-RSRAE| 0.951±0.010| 0.736±0.062|
| MPDR (ours) | **0.975**±0.004 | **0.997**±0.001|
**Other questions**
> how does the deterministic assumption impact the performance and the ability to capture the full range of variations in the data?
The deterministic assumption does not play a critical role in MPDR but is employed mostly for convenience. An autoencoder, either deterministic or stochastic, usually captures the overall variation of data well, achieving low reconstruction error for the training data. Replacing the deterministic encoder with a stochastic one would require us to solve the integral $\int p(\tilde z | z) p(z|x) dz$ in Eq. 6. This integral is easy when we are using Gaussian distributions in the Euclidean space but can be difficult in other situations.
> Conditions for consistency
The consistency of MPDR requires several assumptions: the infinite number of data, correctly specified and identifiable model, and non-zero recovery probability, i.e., $p(x|\tilde{z})>0$ for all $x$ and $\tilde{z}$. Please find more detail in Appendix A. If you are curious about a particular aspect of the condition, please let us know.
> How does the latent chain improve the sampling process? The advantages and disadvantages of using a latent LMC?
The latent chain facilitates the exploration of MCMC. A small step in the latent space corresponds to a much larger distance in the input space. However, it requires additional computation time. Also, operating only in the latent space, a latent LMC can not explore the off-manifold region in the input space. Therefore, the input space LMC is always necessary for successful training.
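The interplay of the two chains can be sketched in a toy 1D illustration (a hypothetical minimal version, not the paper's implementation; `grad_E_z`, `grad_E_x`, and `decode` are placeholder callables for the latent-space energy gradient, the input-space energy gradient, and the autoencoder's decoder):

```python
import random

def langevin_step(x, grad_E, step, noise_std):
    """One Langevin Monte Carlo update: gradient descent on the energy
    plus Gaussian noise."""
    return x - step * grad_E(x) + noise_std * random.gauss(0.0, 1.0)

def two_stage_sample(z0, decode, grad_E_z, grad_E_x,
                     latent_steps, input_steps, step, noise_std):
    """Hypothetical two-stage sampler: a latent chain explores coarsely
    (a small latent step covers a large input-space distance), then an
    input-space chain refines and may leave the manifold."""
    z = z0
    for _ in range(latent_steps):
        z = langevin_step(z, grad_E_z, step, noise_std)
    x = decode(z)
    for _ in range(input_steps):
        x = langevin_step(x, grad_E_x, step, noise_std)
    return x
```

The sketch makes the trade-off concrete: the latent chain alone can never visit off-manifold points (it only moves through `decode`), which is why the input-space chain remains necessary.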
> How can we select the optimal perturbation design, optimal combination of autoencoders, and Dz?
As in other outlier detection algorithms, those hyperparameters can be selected using a separate validation OOD dataset, as proposed in Appendix A of Hendrycks et al., 2018.
Hendrycks et al., Deep Anomaly Detection with Outlier Exposure, 2018.
Due to the rebuttal's space constraint, we will address the rest of the questions during the discussion period.
Your comprehensive review of our paper is deeply appreciated. Please share any lingering questions or concerns. If they have been resolved, we would be grateful for your consideration in updating the review score.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Title: Additional response
Comment: Due to space limitations, our previous response was unable to address all the questions. Therefore, we would like to offer additional responses to the unanswered questions here.
>How does the fixed nature of the autoencoder and noise magnitude impact the algorithm's adaptability to different datasets?
Please note that the autoencoders are trained on the training data. Although the autoencoders remain 'fixed,' they adapt to the specific dataset and approximate the manifold structure of each dataset.
Additionally, the fixed noise magnitude doesn't imply the application of the same perturbation across different datasets. Since distinct autoencoders are used for various experiments, a Gaussian perturbation in the latent space corresponds to a distinct operation in the input space.
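For illustration, the manifold projection-diffusion perturbation described above can be sketched as follows (a hypothetical minimal version; `encode` and `decode` stand in for the trained, fixed autoencoder pair $(f_e, f_d)$, and the latent point is a plain Python list rather than a tensor):

```python
import random

def mpd_perturb(x, encode, decode, sigma):
    """Project x onto the latent space, add Gaussian noise of magnitude
    sigma there, and decode back; the resulting perturbation follows the
    manifold learned by the autoencoder rather than isotropic input noise."""
    z = encode(x)                                                # projection
    z_tilde = [zi + sigma * random.gauss(0.0, 1.0) for zi in z]  # diffusion
    return decode(z_tilde)                                       # back to input space
```

Because the autoencoder is trained on each dataset, the same `sigma` induces a different input-space perturbation for each dataset, which is the point made above.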
> How does the simultaneous use of multiple perturbations enhance the algorithm’s performance? Any potential trade-offs?
One hypothesis is that ensembling over multiple perturbations increases the MCMC’s coverage of a high-dimensional input space.
We empirically observe that the gain from ensembling usually diminishes as the number of components increases. Therefore, there is usually a trade-off between the marginal improvement in performance and the marginal computational cost.
> how the choice of Dz affects the algorithm's ability to detect anomalies?
$D_z$ is a hyperparameter of the proposed algorithm that has to be tuned, while the performance is generally robust across a wide range of $D_z$. Please see Table 9 in Appendix for the sensitivity analysis with varying $D_z$ value.
> Solution for memory overhead of using multiple autoencoders?
Multiple solutions are possible. We may distill or quantize the autoencoders. Alternatively, we may design the autoencoders so that a significant portion of the parameters is shared across them. We believe this is an exciting direction for future work.
We thank you again for your deep and thorough review. Please let us know if you have additional questions.
Best regards,
Authors.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their response and the extra experiments they provided. I have completely read the authors' rebuttal and other reviews and thus I increase my score. | Summary: This paper introduces an EBM-based model for anomaly detection in the latent manifold space. The proposed model first trains an autoencoder that maps a data point $x$ into a low-dimensional $z$, and then a two-stage sampling strategy is developed to generate the original data via the LMC algorithm. Several notable designs are developed to complete the proposed model, such as manifold projection diffusion, two-stage sampling, and the perturbation ensemble. Experiments on images, vectors, and acoustic signals show the strong performance of the method.
Strengths: (1) The idea of generating negative samples from the manifold space is sound. It ensures the starting points reflect the data structure, resulting in more discriminative generation.
(2) Several practical strategies, such as two-stage sampling and ensembling multiple autoencoders, are developed to benefit the anomaly detection performance.
(3) Extensive empirical results and ablation studies show the efficiency of the model.
Weaknesses: (1) Lack of results on the widely used anomaly detection dataset MVTec-AD
(2) Lack of well-known anomaly detection baselines, such as UniAD [1] and DRAEM [2]
[1] Zhiyuan You et al. A unified model for multi-class anomaly detection.
[2] Vitjan Zavrtanik et al. DRAEM: a discriminatively trained reconstruction embedding for surface anomaly detection.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The method artificially injects Gaussian noise into the latent space. What is the correlation between the simulated abnormal data and the real abnormal data in the test dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer dsqA,
Thank you for taking the time to review our paper. We deeply value your feedback. Below, we will address your concerns and questions.
> MVTec-AD dataset and comparison with UniAD and DRAEM
Thank you for your suggestion. MPDR also demonstrates promising performance on MVTec-AD. We present the anomaly detection performance of MPDR on MVTec-AD, along with an empirical comparison to UniAD and DRAEM. We will incorporate the MVTec-AD results in the updated manuscript and ensure that both papers are cited.
**Table: MVTec-AD detection task in the unified setting. AUROC (percent) scores are computed for each class. UniAD and DRAEM results are adopted from You et al., 2022.**
| | MPDR (ours) | UniAD | DRAEM |
| -------|---------------------|---------------|---------------|
| bottle | **100.0**$\pm$0.00 | 99.7 | 97.5 |
| cable | **95.5**$\pm$0.01 | 95.2 | 57.8|
| capsule | 86.4$\pm$0.01 | **86.9** | 65.3 |
| hazelnut | **99.9**$\pm$0.00 | 99.8 | 93.7 |
| metal_nut | **99.9**$\pm$0.00 | 99.2 | 72.8 |
| pill | **94.0**$\pm$0.01 | 93.7 | 82.2 |
| screw | 85.9$\pm$0.01 | **87.5** | 92.0 |
| toothbrush | 89.6$\pm$0.01 | **94.2** | 90.6 |
| transistor | 98.3$\pm$0.01 | **99.8** | 74.8 |
| zipper | 95.3$\pm$0.01 | 95.8 | **98.8** |
| carpet | **99.9**$\pm$0.00 | 99.8 | 98.0 |
| grid | 97.9$\pm$0.01 | 98.2 | **99.3** |
| leather | **100.0**$\pm$0.00 | **100** | 98.7 |
| tile | **100.0**$\pm$0.00 | 99.3 | 98.7 |
| wood | 97.9$\pm$0.00 | 98.6 | **99.8** |
| mean | 96.0$\pm$0.00 | **96.5** | 88.1 |
**Table: MVTec-AD localization task in the unified setting. AUROC scores (percent) are computed for each class. UniAD and DRAEM results are adopted from You et al., 2022.**
| | MPDR (ours) | UniAD | DRAEM |
| -------|---------------------|---------------|---------------|
| bottle | **98.5**$\pm$0.00 | 98.1 | 87.6|
| cable | 95.6$\pm$0.00 | **97.3** | 71.3|
| capsule | 98.2$\pm$0.00 | **98.5** | 50.5|
| hazelnut | **98.4**$\pm$0.00 | 98.1 | 96.9|
| metal_nut | 94.5$\pm$0.00 | **94.8**| 62.2|
| pill | 94.9$\pm$0.00 | **95.0**| 94.4|
| screw | 98.1$\pm$0.00 | **98.3**| 95.5|
| toothbrush | **98.7**$\pm$0.00 | 98.4| 97.7|
| transistor | 95.4$\pm$0.00 | **97.9**| 65.5|
| zipper | 96.2$\pm$0.00 | 96.8| **98.3**|
| carpet | **98.8**$\pm$0.00 | 98.5| 98.6|
| grid | **96.9**$\pm$0.00 | 96.5| 98.7|
| leather | 98.5$\pm$0.00 | **98.8**| 97.3|
| tile | 94.6$\pm$0.00 |91.8| **98.0**|
| wood | 93.8$\pm$0.00 | 93.2| **96.0**|
| mean | 96.7$\pm$0.00 | **96.8**| 87.2 |
We follow the “unified” experimental setting of the UniAD paper (You et al., 2022). Normal images from 15 classes are used as the training set, and no label information is provided. We use the same feature extraction procedure as UniAD: each image is transformed into a 272x14x14 feature map using EfficientNet-b4. When running MPDR, we treat each spatial position of the feature map as an input vector, turning the task into a 272D density estimation problem. We normalize each 272D vector by its norm and add small white noise with a standard deviation of 0.01 during training. We use the maximum energy value among the 14x14 vectors as the anomaly score of an image. For the localization task, we resize the 14x14 anomaly score map to 224x224 and compute AUROC for each pixel with respect to the ground-truth mask.
We use MPDR-R with a single manifold (i.e., without the manifold ensemble). Both the manifold and the energy function are fully connected autoencoders with a 256D spherical latent space.
For input-space Langevin Monte Carlo, the number of MCMC steps, the step size, and the noise standard deviation are 10, 0.1, and 0.1, respectively. No latent chain is used. The manifold is trained for 200 epochs with Adam of learning rate 1e-3, and the energy is trained for 20 epochs with Adam of learning rate 1e-4.
In both the MVTec-AD detection and localization tasks, MPDR achieves an AUROC score that is very close to that of UniAD, outperforming the other baselines, including DRAEM. Note that MPDR-R and UniAD share the same premise that anomalies are characterized by a large reconstruction error. The two approaches may be combined to improve the results further.
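The per-image scoring described above can be sketched as follows (a hypothetical minimal version; nearest-neighbour upsampling of the score map is assumed here, whereas the actual resizing method is unspecified in the text):

```python
def image_anomaly_scores(feature_energy, grid=14, out=224):
    """Given a grid x grid map of per-position energy values, return the
    image-level anomaly score (the maximum energy) and a nearest-neighbour
    upsampled out x out localization map for per-pixel AUROC."""
    image_score = max(max(row) for row in feature_energy)
    scale = out // grid  # 224 / 14 = 16 pixels per grid cell
    loc_map = [[feature_energy[i // scale][j // scale]
                for j in range(out)] for i in range(out)]
    return image_score, loc_map
```

Taking the maximum over positions makes the image score sensitive to a single anomalous patch, which suits localized defects like those in MVTec-AD.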
> What are the correlations between the simulated abnormal data and the real abnormal data in the test dataset?
Thank you for your question. In principle, the simulated abnormal data, which we refer to as negative samples in the manuscript, are not correlated with the actual test outliers. Firstly, no test outliers are utilized during any stage of the training process, including both the construction of the manifold and the learning of the energy. Moreover, as MPDR reaches convergence, the distribution of these negative samples should be close to the distribution of the training data, as in typical EBMs. We present some examples of these negative samples in Figure 4. These images bear minimal resemblance to test outliers like SVHN digits.
Once again, we appreciate the time and effort you've invested in reviewing our work. Please let us know if you have any further concerns or questions. If your concerns have been addressed satisfactorily, we kindly ask you to consider revising the score upward.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Comment: Thanks for the experiments comparing with UniAD and DRAEM. However, the experimental results are lower than UniAD's. The reasons should be clarified with an analysis of the results.
---
Reply to Comment 1.1.1:
Title: Regarding MVTec-AD Experiment
Comment: Dear Reviewer dsqA,
Thanks for taking the time to read our response. We would love to provide further discussions on the MVTec-AD Experiment.
**MPDR outperforms UniAD in certain classes of MVTec-AD.** Even though UniAD's mean AUROC is higher (with a very small gap), UniAD does not dominate MPDR. In the detection task, MPDR achieves higher or equal scores in 8 out of 15 classes, and in the localization task, MPDR wins in 5 out of 15 classes. This result suggests that MPDR captures the patterns in data that UniAD neglects.
**MPDR and UniAD use different approaches to prevent the reconstruction of anomalies.** UniAD is an algorithm that aims to prevent the reconstruction of anomalies. MPDR, especially MPDR-R, which uses an autoencoder's reconstruction error as energy, shares the same goal, as the energy should be large for anomalies. The difference is that UniAD prevents anomaly reconstruction through a novel neural network design, while MPDR addresses the problem with novel recovery likelihood learning.
Besides, **MPDR has the advantage of being more widely applicable than UniAD.** As a general learning algorithm for energy-based models, MPDR is compatible with a wide range of network architectures and can be applied to diverse data types. MPDR has been tested on 2D data, tabular data, images, audio signals, and feature vectors from a pre-trained network. However, UniAD is specialized for anomaly detection using feature vectors and has only been tested for image data.
Best regards,
Authors. | Summary: This paper proposes a novel anomaly detection algorithm utilizing energy-based models (EBMs). The proposed method, Manifold Projection-Diffusion Recovery (MPDR), is based on recovery likelihood, a framework for learning energy functions by denoising data from artificially injected Gaussian noise. MPDR uses a deterministic encoder and decoder to first project the data onto a latent space, add Gaussian noise in the latent space, and finally decode the noisy latent representation onto the data manifold. This would capture more relevant modes of variation in the data than Gaussian perturbations on the original data. The recovery likelihood of MPDR is derived and is shown to result in consistent estimation of the energy function. Negative samples are generated via Langevin Monte Carlo in the latent space. The paper also introduces additional variations, including different types of energy functions and the use of ensembles to generate diverse negative samples. Experiments demonstrate the effectiveness of MPDR for out-of-distribution detection on MNIST and CIFAR-100 datasets, as well as anomaly detection for acoustic signals.
Strengths: Anomaly detection is a long-standing problem in machine learning with a rich literature, and this paper proposes a novel step in the development of new algorithms. The ideas of using recovery likelihood and of projecting data onto a low-dimensional latent space are not novel on their own, but this paper combines them into an original method. The relevant background and motivation are presented with sufficient clarity, and the proposed method seems logically sound. The toy example presented in Figure 2 empirically verifies that MPDR captures more relevant modes of variation in the data.
Weaknesses: The related works section cites most of the relevant previous works, but they are not covered in sufficient detail, perhaps owing to the page limitations. It is important to cover previous work exhaustively to demonstrate where the proposed method stands in relation to it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The experiments cover two image datasets and acoustic signals, which are relatively high dimensional data. It would be interesting to demonstrate the performance of MPDR on tabular data [1] (both low and high-dimensional) and explore the relevant modes which arise from applying MPDR to such datasets.
[1] Han, S., Hu, X., Huang, H., Jiang, M. and Zhao, Y., 2022. Adbench: Anomaly detection benchmark. *Advances in Neural Information Processing Systems*, *35*, pp.32142-32159.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper covers the limitations of MPDR in sufficient detail, namely the sensitivity to the autoencoders used for latent space projection, questions around application to high-dimensional text or image data, and the fact that MPDR is not optimized to generate samples from simple distributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ekma,
We're truly grateful for your insightful feedback. Your time and effort in evaluating our work is deeply appreciated. Please allow us to address any questions you have.
> Related works are not covered in sufficient detail.
We apologize that we couldn't discuss all related work in sufficient detail due to page constraints. If we are given an opportunity to update the manuscript with an extra page, we will expand the "Related Work" section to provide a deeper discussion. If you have a particular set of literature that you think we should cite and discuss, please let us know. We will include them in the updated manuscript.
> Demonstrating the performance of MPDR on tabular data, such as Adbench.
Thank you for the suggestion. Based on your suggestion, we benchmarked MPDR using Adbench and provided the results below. MPDR was run on 47 tabular datasets provided by Adbench, and we compared MPDR with 13 baselines in Adbench. We reproduced the baseline results using the official repository of Adbench.
For each dataset, we ran each algorithm on three random splits and computed the AUROC on the corresponding test split. We then ranked the algorithms based on the averaged AUROC. In Table A, we present a summary table with the average rank across the 47 datasets.
**Table A: Rank of each algorithm's AUROC score averaged over 47 datasets. The $\pm$ sign indicates the standard error.**
| | Average Rank (The lower the better)|
|:----|-----------:|
| MPDR (ours) | **4.43** $\pm$ 0.50 |
| IForest | 5.28 $\pm$ 0.39 |
| OCSVM | 7.94 $\pm$ 0.47 |
| CBLOF | 5.98 $\pm$ 0.53 |
| COF | 9.23 $\pm$ 0.58 |
| COPOD | 7.10 $\pm$ 0.61 |
| ECOD | 6.97 $\pm$ 0.58 |
| HBOS | 6.53 $\pm$ 0.51 |
| KNN | 6.64 $\pm$ 0.56 |
| LOF | 9.04 $\pm$ 0.62 |
| PCA | 6.45 $\pm$ 0.54 |
| SOD | 7.91 $\pm$ 0.52 |
| DeepSVDD | 11.43 $\pm$ 0.42 |
| DAGMM | 10.09 $\pm$ 0.52 |
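For clarity, the average-rank computation described above can be sketched as follows (toy scores, not our actual ADBench results; ties are broken arbitrarily in this simplified version):

```python
def average_ranks(auroc):
    # auroc: dict mapping algorithm name -> list of per-dataset AUROC scores
    # (each score already averaged over the random splits).
    # Returns each algorithm's rank averaged across datasets
    # (rank 1 = best AUROC on that dataset; ties broken arbitrarily here).
    names = list(auroc)
    n_datasets = len(next(iter(auroc.values())))
    totals = {name: 0.0 for name in names}
    for d in range(n_datasets):
        ordered = sorted(names, key=lambda n: -auroc[n][d])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return {name: total / n_datasets for name, total in totals.items()}

# toy example with made-up scores on two datasets
scores = {"MPDR": [0.90, 0.80], "IForest": [0.70, 0.85], "LOF": [0.60, 0.50]}
avg = average_ranks(scores)
```

In the toy example, MPDR is ranked 1st on the first dataset and 2nd on the second, giving an average rank of 1.5.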
For MPDR, we used MPDR-R, where the energy is an autoencoder. We did not employ the manifold ensemble; instead, a single manifold is used for perturbation. For both the manifold and the energy, the same MLP-based autoencoder architecture was used. The encoder and the decoder were MLPs with two 1024-hidden-neuron layers. If the input space dimensionality $D_x$ is smaller than or equal to 100, we set the latent space dimensionality $D_z = D_x$. If $D_x>100$, we set $D_z$ to 70% of $D_x$. We employed 1 step of Langevin Monte Carlo in the latent space and 5 steps in the input space. The step sizes are 0.1 for the latent space and 10 for the input space. All the hyperparameters except $D_z$ are fixed across the 47 datasets.
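For illustration, the rule above for choosing $D_z$ can be written as the following helper (the function name and the rounding down to an integer are for this sketch only):

```python
def latent_dim(d_x):
    # D_z = D_x when the input dimensionality is at most 100,
    # otherwise 70% of D_x (rounded down here; rounding is a sketch choice).
    return d_x if d_x <= 100 else int(0.7 * d_x)
```

All other hyperparameters are shared across datasets, so this is the only per-dataset quantity.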
As shown in Table A, MPDR achieves highly competitive performance on Adbench, attaining a better average rank than Isolation Forest (although with some overlap of the confidence intervals). This result indicates that MPDR is a general-purpose anomaly detection algorithm capable of handling tabular data.
Once more, we're truly grateful for the time and effort you've dedicated to reviewing our paper. Should you have any additional questions, please don’t hesitate to leave a comment. If your concerns have been addressed well, we kindly ask if you'd consider raising your score.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Title: Review updated
Comment: My major concerns have been addressed, and I am happy to see the good performance on ADBench.
I have updated my recommendation for acceptance. | null | null | null | null | null | null |
Stable Vectorization of Multiparameter Persistent Homology using Signed Barcodes as Measures | Accept (poster) | Summary: In this paper, the authors are addressing a very critical need in topological data analysis (TDA), vectorization of multiparameter persistence (MPH). Persistent homology (PH) is the key method in TDA, but in its current form it allows only a single function to be used in its key process, filtration. By enabling multiple functions, MPH is a natural generalization of PH with much finer information; however, there are several mathematical obstructions to using it effectively in applications.
In this paper, the authors propose an effective way to vectorize the barcode information in the MP module by sacrificing some of it to keep the computation feasible and the process practical. They offer two versions of MP vectorization, one being a kernel and the other a direct vectorization; both can be effective in different settings. The authors conducted extensive experiments in several settings, including point cloud classification, graph classification, and virtual screening, to compare their model with other MP vectorizations and SOTA models. Their model consistently outperforms existing MP vectorizations.
Strengths: The authors use a recent idea of signed barcodes cleverly to obtain practical vectorizations. There is a significant need to employ MPH idea with good and feasible vectorizations, and this paper makes a valuable contribution in this direction.
They propose two versions of their vectorizations, a kernel (MP-SW) and a direct vectorization (MP-C). In our experience with applications of single persistence over the past years, depending on the domain, one can perform better than the other. So, this versatility of outputs is very valuable for ML applications.
Computational time is significantly better than that of existing MP vectorizations. Combined with its good performance, this might be the most important contribution of the paper from an ML perspective.
Their extensive experiments in various settings show that their model consistently outperforms the existing MPH vectorizations.
Weaknesses: The main weakness is that the paper might be too technical for non-experts and ML audience. However, considering the depth of the problem, I can see that the authors did an enormous effort to make this important subject accessible to a wider audience.
While the new vectorizations consistently outperform the existing MP models in point cloud settings, the performance is not as strong in graph classification and, specifically, virtual screening cases (ToDD also uses a version of MPH).
As MPH is an involved process, hyperparameter tuning might require serious TDA expertise in real-life applications.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This is more of a question/remark. A low-hanging fruit might be to combine (concatenate) Hilbert functions (MP-H) with MP-HSM-C (or MP-ESM-C) as you already compute MP-H during the process if I understand the process right. Considering the closeness of the performances in graph classification and virtual screening, this combined vectorization might perform better.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: As mentioned in weaknesses, the main limitations are scalability and hyperparameter tuning which could be very tricky, especially choosing the right grid size. Hence, performance vs. computational feasibility can be a problem in large datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback.
- *Performance on graph data.*
We agree that the results in section 4.3 (graph data) are not as good as those in section 4.2 (point cloud data).
We note, however, that our method's performance on the virtual screening task is very good; ToDD is a supervised method, whereas in this task we use our vectorizations in a completely unsupervised way (we included ToDD to give an idea of how well TDA-based tools can work on this type of data).
With respect to section 4.3, we are using an invariant (the Hilbert function or, equivalently, the Hilbert signed measure) that is strictly weaker than that used by other topological methods (such as the rank invariant), which could be the reason.
In particular, we believe that the issue is not with the vectorizations themselves, and that the gap in performance could be bridged by using signed barcodes/measures coming from the rank invariant.
- *Combine/concatenate MP-H and MP-HSM-C (or MP-ESM-C) to get better performance.*
We think this is an interesting idea which should be experimented with.
In this paper, we have decided to keep the approach as straightforward as possible to try to isolate the performance of the vectorization from the performance of the machine learning model and other application specific choices.
We address the tuning of the approach, including combination of different vectorizations, in ongoing work.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed answers. I have no further questions. Good luck with your submission. | Summary: The paper vectorizes data descriptors coming from multiparameter persistent homology for classifying point clouds and measuring similarity between graphs extracted from databases of times series and molecules.
Strengths: The paper includes rigorous definitions and proves (in the appendices) three theorems from section 4.
Weaknesses: The paper seems to over-advertise the strengths of persistence.
Lines 1-2 claim that "Persistent homology (PH) provides topological descriptors for geometric data, such as weighted graphs". If the authors really meant a topological classification of graphs, this classification has been known to Euler in the 18th century, so there is no need to re-invent topological invariants of graphs by using persistence.
Lines 22-23 claim that "TDA methods usually require the sole knowledge of a metric or dissimilarity measure on the data". In fact, already the definition of persistence requires a choice of filtration of simplicial complexes on the given data points, which seriously affects the resulting persistence, whose further analysis requires many more extra parameters; see lines 351-352: "Our pipelines, including the choice of hyperparameters for our vectorizations, rely on the cross validation of several parameters, which limits the number of possible choices"
Examples 1-3 are definitions or comments, not illustrative examples that could help the readability.
The questions and limitations below include further concerns about experiments and comparisons with past work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Lines 14-15 claim that "The resulting feature vectors are easy to define and to compute". If this is true, could the authors write a full easy definition of these vectors in a few lines?
Concerning the computation, lines 245-256 seem more honest: "We discretize the computation of the sliced Wasserstein kernel by choosing directions {θ1, . . . , θd} ⊆ Sn−1 uniformly at random". Are there any theoretical guarantees for this approximate computation, and what is the asymptotic cost in the size of the input?
The authors correctly highlight the importance of stability (Lipschitz continuity) under noise. However, the practical experiments in section 4.2 use discrete grids, which makes all features discontinuous under perturbations of time series or point clouds. Is it possible to avoid this discretization?
Section 4.3 discusses datasets of graphs coming from social networks, biology, and medicine. How can edges be justified in these experimental graphs?
In social networks, friendships or other links between people have different strengths and are always subjective. In molecules, even the strongest covalent bonds between atoms have no strict definitions and only abstractly represent inter-atomic interactions, so there are no physical sticks or strings between them. That is why most databases of molecules and materials such as QM9 contain only atomic positions but no links between them.
Which of the invariants in Appendix B are provably Lipschitz continuous under perturbations of points, say, in the bottleneck distance on the persistence diagram?
Which of these invariants are faster or stronger than the isometry invariants from Widdowson et al (CVPR 2023)?
Which (dis)similarity functions mentioned in the paper satisfy all metric axioms? The importance of metric axioms including the triangle inequality was theoretically justified in the paper arXiv:2211.03674, which should be cited in every work on clustering and metric distances because any clustering based on non-metrics is not trustworthy.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The main limitation of the 1-parameter persistence is its weakness as an isometry invariant, which should have been clear to all experts in computational geometry many years ago but was demonstrated only recently. The paper by Smith et al (arxiv:2202.00577) explicitly constructs generic families of point clouds in Euclidean and metric spaces that are indistinguishable by persistence and even have empty persistence in dimension 1. These examples can be easily extended to more than one parameter at least for some filtrations.
Though Topological Data Analysis was largely developed by mathematicians, the huge effort over many years was invested into speeding up computations, rather surprisingly, instead of trying to understand the strengths and weaknesses of persistent homology, especially in comparison with the much simpler, faster, and stronger invariants of clouds under isometry.
Persistence in dimension 0 was extended to a strictly stronger invariant mergegram by Elkin et al in MFCS 2020 and Mathematics 2021, which has the same asymptotic time as the classical 0D persistence and is also stable under perturbations of points.
A SoCG 2022 workshop included a frank discussion concluding that there was no high-level problem that persistent homology solves. Is there such an ultimate problem for multi-parameter persistence? In fact, persistence as an isometry invariant essentially tries to distinguish clouds of unlabeled points up to isometry, not up to continuous deformations since even non-uniform scaling changes persistence in a non-controllable way.
On the other hand, the isometry classification problem for point clouds was nearly solved by Boutin and Kemper (2004), who proved that the total distribution of pairwise distances is a generically complete invariant of point clouds in any Euclidean space. The remaining singular cases were recently covered by Widdowson et al in NeurIPS 2022 and CVPR 2023.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback.
We start with two clarifications: Our applications go well beyond classification of point clouds up to isometry, and even in the point cloud application, we do not seek a strong isometry invariant since such invariants are necessarily highly sensitive to, e.g., outliers.
It is also beyond the scope of this rebuttal to discuss the efficacy of PH, which is well-established given the vast amount of existing applications in ML ([Hensel, Moor, Rieck, 2021] and references).
- *"Lines 1-2 [...] using persistence"* We do not understand this comment: the quoted sentence does not mention "topological classification of graphs". All we are saying here is that persistence allows for the use of topology to produce descriptors of geometric data. A weighted graph can be taken as a single graph, but persistence allows one to incorporate much richer information into topological descriptors by using filtrations derived from the weights.
- *"Lines 22-23 [...] possible choices"* The fact that our method has hyperparameters does not prevent it from being applicable in arbitrary metric spaces or weighted graphs. One can resort to default choices as in Section 4.4.
- *"Examples 1-3 [...] readability"* Ex 1 provides a classical example of 1-dimensional simplicial complex; Ex 2 describes a classical family of multifiltered complexes; Ex 3 gives a usual way of getting a persistence module from a filtration. We do not see why these examples should not be helpful to the reader.
Does the reviewer have other examples in mind?
- *"Could the authors [...] few lines"* The Hilbert function of a module is its pointwise dimension (def 2), and the Hilbert signed measure is computed by Mobius inversion of the Hilbert function (amounting to a convolution with a kernel of small support; this is the reference to [41, Rmk. 3] in line 213). The formulas for the vectorizations are explicitly given in defs 6 and 7, and the computation is described in lines 232 and 245, respectively. We will be happy to turn this comment into pseudocode if the reviewer believes this will improve readability.
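For illustration, in the two-parameter case this Mobius inversion reduces to the alternating finite difference below (a simplified sketch, not our actual implementation):

```python
def hilbert_signed_measure(h):
    # h[i][j]: pointwise dimension of the module at grid point (i, j).
    # Mobius inversion over a 2-parameter grid is an alternating difference;
    # summing the result over lower-left quadrants recovers h.
    rows, cols = len(h), len(h[0])
    get = lambda i, j: h[i][j] if i >= 0 and j >= 0 else 0
    return [[get(i, j) - get(i - 1, j) - get(i, j - 1) + get(i - 1, j - 1)
             for j in range(cols)] for i in range(rows)]

# a module supported on an upper-right quadrant of the grid yields a
# single positive mass at its generator
h = [[0, 0, 0],
     [0, 1, 1],
     [0, 1, 1]]
m = hilbert_signed_measure(h)
```

Here the only nonzero mass sits at the grid point (1, 1) where the support begins.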
- *"Are there any [...] input"* There is no approximation: our definition of the Wasserstein kernel takes any measure on the $(n-1)$-sphere, and our stability result holds for any such measure.
This allows us to use a discrete measure so that the integral can be evaluated exactly. One could consider the approximation of SWK with uniform measure on the sphere by a SWK with discrete measures, as in [Carriere et al. 2017] and the analysis would be almost identical. For space concerns, we decided to have the theory of the paper focus on stability results and not on injectivity/inverse results.
The time complexity of the computation of the SWK is linear in the number of point masses of the discrete measure on the $(n-1)$-sphere.
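As a simplified illustration of this linear cost, a sliced distance between two planar point measures of equal mass (unsigned and uniformly weighted, for brevity; our kernel additionally handles signed measures) can be computed as:

```python
import math, random

def sliced_w1(pts_a, pts_b, n_dirs=50, seed=0):
    # Sliced 1-Wasserstein distance between two equal-size planar point sets.
    # Each direction gives a 1D projection, where the Wasserstein-1 distance
    # is an L1 norm between sorted projections; the cost per direction is
    # linear in the number of points (up to the sort).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        t = rng.uniform(0.0, math.pi)
        cx, cy = math.cos(t), math.sin(t)
        pa = sorted(x * cx + y * cy for x, y in pts_a)
        pb = sorted(x * cx + y * cy for x, y in pts_b)
        total += sum(abs(a - b) for a, b in zip(pa, pb)) / len(pa)
    return total / n_dirs

pts = [(0.0, 0.0), (1.0, 0.0)]
shifted = [(x + 0.5, y + 0.5) for x, y in pts]
```

With a discrete measure on the sphere, as in the submission, the random directions above are simply replaced by the point masses of that measure, and the integral becomes exact.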
- *"Is it possible to avoid this discretization?"* Discretizations make the vectors discontinuous, but, crucially, Lipschitz stability theorems still hold up to the discretization size (on grids of step size $\epsilon$, stability holds up to additive term of $O(\epsilon)$), so near-by datasets still get near-by vectors, and approximation can be controlled by the user.
It is possible to avoid discretizations: the computation of the Hilbert and Euler signed measures can be done exactly since fp MP modules can be described exactly over some data-dependent finite grid.
So the SWK can be computed exactly on any finite point measure.
The convolution vectorization does require a grid if one seeks a vectorization (i.e., a Euclidean vector for each sample); however, if a kernel method is sufficient, one can dispense with discretizations because the $L^2$ inner product between Gaussian mixtures can be computed exactly (see, e.g., the vectorization of [Reininghaus et al. 2015]).
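For reference, in the plane the exact $L^2$ inner product between two such Gaussian mixtures reduces to a pairwise sum with a closed-form Gaussian integral (a sketch with unit weights; the constant below is the 2D case of the standard product-of-Gaussians formula):

```python
import math

def gaussian_mixture_inner(xs, ys, sigma):
    # Exact L2 inner product of sums of isotropic 2D Gaussians N(., sigma^2 I)
    # centered at the points of xs and ys. Each cross term integrates to
    # (4*pi*sigma^2)^(-1) * exp(-||x - y||^2 / (4*sigma^2)).
    c = 1.0 / (4.0 * math.pi * sigma * sigma)
    return sum(c * math.exp(-((x1 - y1) ** 2 + (x2 - y2) ** 2)
                            / (4.0 * sigma * sigma))
               for (x1, x2) in xs for (y1, y2) in ys)
```

Squared distances between mixtures then follow by expanding $\|f-g\|^2 = \langle f,f\rangle - 2\langle f,g\rangle + \langle g,g\rangle$, with no grid involved.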
- *"How can edges [...] graphs"* We seek to understand the performance of our vectorizations for graph classification, so we take the input datasets at face value like previous papers.
While interesting, the discussion about the validity of edges is outside the scope of the paper.
- *"Which of [...] persistence diagram?"* We are not sure we understand this question: since the invariants being referred to are invariants of multiparameter persistence modules, there is, a priori, no persistence diagram. Could you please clarify this question?
- *"Which of these [...] (CVPR 2023)?"* In Section 4.2, we do not seek to use strong isometric invariants for point clouds such as the ones in (CVPR 2023) because we want stability under, e.g., outliers.
We rather see the point clouds as empirical measures, and use density-geometric bifiltrations whose one-parameter slices are stable under Wasserstein perturbations [Chazal, Cohen-Steiner, Mérigot (2011)].
- *"Which (dis)similarity [...] axioms?"* Our vectorizations have the space of finite signed measures with total mass zero as domain, equipped with the Kantorovitch norm, which is an honest norm; thus, its induced metric satisfies all metric axioms. The codomain of our vectorizations are Hilbert spaces, where the metric is derived from the inner product and therefore is an honest metric.
- *"About an ultimate problem/goal for multiparameter persistence"* Besides recent applications of multiparameter persistence (see motivating paragraph for MPH, for now in the response to reviewer 61ca), various theoretical papers show that TDA with several parameters/directions can address difficult practical problems: [Blumberg, Lesnick. Found Comp Math, 2022] shows that two-parameter persistence provides Gromov-Prokhorov Lipschitz-stable invariants (i.e., robust to outliers) of metric probability spaces that do not require user-chosen parameters, such as a KDE bandwidth. [Curry, Mukherjee, Turner. Trans Amer Math Soc Ser B, 2022] shows that several one-parameter filtrations, taken together, can uniquely characterize simplicial complexes in Euclidean space up to action of the orthogonal group.
---
Rebuttal Comment 1.1:
Title: remaining questions
Comment: Thank you for the reply.
>efficacy of PH, which is well-established given the vast amount of existing applications in ML ([Hensel, Moor, Rieck, 2021] and references)
The strong claim about efficacy requires serious justifications not by including a vast amount of papers but by quoting rigorously proved theorems. The widely cited stability of persistence gives only an upper bound under perturbations. Such a Lipschitz upper bound is straightforward for many easier and faster invariants, for example, the total distribution of pairwise distances.
The key ignored difficulty was the lack (actually, impossibility) of a lower bound, because the persistence doesn't change or even remains trivial (fully empty) under all perturbations for a generic family of point clouds, even in the plane.
While Theorems 1-3 prove upper bounds, can the left hand sides of these inequalities be zeros for different functions and measures?
Could the authors please specify exact places, where the paper [Hensel, Moor, Rieck, 2021] discusses efficacy of PH? The Frontiers Media is actually known for the facts of controversial reviewing https://en.wikipedia.org/wiki/Frontiers_Media#Controversies
> persistence allows one to incorporate much richer information into topological descriptors by using filtrations derived from the weights.
How does persistence incorporate topological descriptors if the persistence changes even under non-uniform scaling, much worse under more flexible topological deformations?
>The fact that our method has hyperparameters does not prevent it from being applicable in arbitrary metric spaces or weighted graphs. One can resort to default choices as in Section 4.4.
What choices were called default here: "grid size of k = 1000 for all methods" in line 336? How is this number 1000 justified?
>"Which of [...] persistence diagram?" We are not sure to understand this question: since the invariants being referred to are invariants of multiparameter persistence modules, there is, a priori, no persistence diagram.
Which of the invariants in Appendix B are provably Lipschitz continuous under perturbations of given points?
>[Curry, Mukherjee, Turner. Trans Amer Math Soc Ser B, 2022] shows that several one-parameter filtrations, taken together, can uniquely characterize simplicial complexes in Euclidean space up to action of the orthogonal group.
Main Theorem 7.14 in this paper is proved only for complexes in general position. A reconstruction in general position was proved for the harder case of point clouds (without any edges or simplices) by Boutin and Kemper "On reconstructing n-point configurations from the distribution of distances or areas" in Advances in Applied Mathematics (2004) by using the much simpler distribution of pairwise distances without any slower persistence.
Do the words "up to action of the orthogonal group" mean that the persistence-based objects from the paper above should be compared by rotations?
To check if original complexes are rigidly equivalent, it's easier to fix their centers of mass at the origin and rotate given vertices, see Alt et al "Congruence, similarity, and symmetries of geometric objects" in Discrete and Computational Geometry (1988) and Brass et al "Testing congruence and symmetry for general 3-dimensional objects" in Computational Geometry (2004).
---
Reply to Comment 1.1.1:
Comment: We thank you again for the discussion.
- *"The strong claim about efficacy requires [...]"*
[Hensel, Moor, Rieck, 2021] is about the diversity of "existing applications in ML".
Efficacy is not only about theoretical results but also about practical performance, especially in an ML venue such as this one. In this regard, a number of TDA works have been published every year at NeurIPS since 2019
(e.g.,
[Zhao, Wang. NeurIPS 2019],
[Hu, Li, Samaras, Chen. NeurIPS 2019],
[Kim, Kim, Zaheer, Kim, Chazal, Wasserman. NeurIPS 2020],
[Carrière, Blumberg. NeurIPS 2020],
[Vishwanath, Fukumizu, Kuriki, Sriperumbudur. NeurIPS 2020],
[Chen, Coskunuzer, Gel. NeurIPS 2021],
[Birdal, Lou, Guibas, Simsekli. NeurIPS 2021],
[Zheng, Zhang, Wagner, Goswami, Chen. NeurIPS 2021],
[Demir, Coskunuzer, Gel, Segovia-Dominguez, Chen, Kiziltan. NeurIPS 2022],
[Akcora, Kantarcioglu, Gel, Coskunuzer. NeurIPS 2022],
[Turkes, Montufar, Otter. NeurIPS 2022]).
- *"While Theorems 1-3 prove upper bounds, can the left hand sides of these inequalities be zeros for different functions and measures?"*
We believe that there may be lower bounds for theorem 2 for point measures with bounded number of masses, and for theorem 3 only locally and also for point measures with a bounded number of masses.
A lower bound for theorem 1 is probably not possible in full generality.
Yet, the experiments in our paper show that the invariants are still expressive enough in a variety of contexts.
- *"How does persistence incorporate topological descriptors [...] deformations?"*
Persistence is known to be a homeomorphism invariant of filtered topological spaces, in that the composite of a homeomorphism with a filtering function has the same persistence barcode as the filtering function.
How this general invariance translates to transformations of the data depends on the (application dependent) construction that turns data into filtered spaces.
In any case, we don't mean that persistence "incorporate topological descriptors" as you write, but rather that it turns a topological descriptor, such as homology, into a richer descriptor that encodes additional information captured through the filter function.
Note also that multiparameter persistence is what naturally arises when considering multiple filter functions.
- *"What choices were called default here [...]"*
To avoid the parameter $k$, one can proceed as outlined in the response to one of your previous questions, by noticing that the distance between the vectors returned by our vectorizations has a closed form solution.
To simplify the implementation, we simply chose a grid that is as fine as possible, given our computing resources.
- *"Which of the invariants in Appendix B [...]"*
The invariants in Appendix B are for persistence modules.
How the persistence modules are constructed from the data, and thus, how they vary with respect to perturbations of the data, is application dependent.
We note, however, that the papers that introduce some of the invariants of Appendix B (e.g., persistence landscape and its multiparameter versions, and multiparameter persistence kernel) prove stability results, usually in terms of the interleaving or matching distance between multiparameter persistence modules.
- *"[Curry, Mukherjee, Turner. [...]"*
We mentioned this paper as one particular example of a difficult practical problem that can be addressed using multiple parameters/directions.
The references you provided, and the discussion about their relationship to the paper we mentioned, are very interesting.
However, we believe this discussion is outside the scope of this rebuttal, since it is only tangentially related to our submission.
Based on your feedback and the references you have provided so far, we propose to add a clarification in Section 4.2, explaining that, in that application, we are not trying to do point cloud classification up to isometry (because of the possibility of outliers), a problem for which well-developed methods exist as you have pointed out. | Summary: The authors introduce first vectorizations of multiparameter persistent homology (MPH) via signed barcodes, that are easy to compute and shown to be stable. The two proposed vectorizations often outperform the state of the art MPH methods on a variety of data sets.
Strengths: (S1) The paper is clearly organized and written, it reads really well.
(S2) The experiments are rather extensive and the results seem convincing.
Weaknesses: I did not identify major weaknesses in this work, besides some confusion about the experimental settings addressed in question (Q2) below. I am open to raising my score if this issue is properly addressed.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (Q1) The proposed approach is theoretically weaker than some others (Appendix A, Proposition 1), but it outperforms them. This should be at least briefly commented on/discussed.
(Q2) It seems that the two and three filtrations used respectively in Sections 4.2 and 4.3 for the calculation of proposed MPH featurizations are not the same as the other considered MPH approaches from the literature? How is the comparison fair/reasonable then?
(Q3) Can you comment on why MP-HSM-SW is much better for some, whereas MP-ESM-C performs significantly better for other data sets?
Other minor comments:
- Definition 6: Name the introduced notion K * \mu.
- Line 81: Briefly summarize the conclusions of the runtime assessment, i.e. that your pipeline is much faster than the other topological baselines, by several orders of magnitude (Appendix D.2, Table 3).
- Theorems and Propositions: For better readability, it would be useful to name the theoretical results.
- Section 4.1: Describe the data and filtration.
- Line 301: “As one can see from the results in Table 2, MP-HSM-SW is quite efficient”. Rephrase, since Table 2 does not show the computation times, but accuracy.
- Table 4 caption: MP-HSM-C -> MP-ESM-C?
- References: Be consistent between capital case vs. lower case for journal names, and also between full and shortened names, e.g. Journal of Machine Learning Research or J. Mach. Learn. Res. (similarly, Discrete Comput. Geom, Found. Comput. Math, …). Correct 2d, Xgboost, ucr, betti, Tudaset.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations and future work are discussed in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback.
- *Comment on/discuss the fact that the proposed approach is theoretically weaker than other approaches (Appendix A, Proposition 1), but it outperforms them.*
We believe this is due to the fact that, despite being weaker than the rank invariant, using the Hilbert/Euler functions allows us to leverage the vectorization techniques that have worked best for one-parameter persistence (in our case, Gaussian convolution and the sliced Wasserstein kernel).
In other words, while the rank invariant is richer, the ability to use it in machine learning pipelines is (currently, and to the best of our knowledge) limited in the sense that it forces one to use more complicated vectorizations.
In the original submission, we have commented on this briefly in line 349 of the Conclusions.
We agree that this is too succinct, and we will add the following more detailed comment:
>[...] strictly weaker descriptors.
>We believe that this is due to the fact that using the Hilbert and Euler signed measures allows us to leverage well-developed vectorization techniques shown to have good performance in one-parameter persistence.
>We conclude from this that the way topological descriptors are encoded is as important as the discriminative power of the descriptors themselves.
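To make the "well-developed vectorization techniques" concrete, here is a minimal 1-D sketch of the Gaussian-convolution idea applied to a signed point measure. The function name, grid, and bandwidth are our own illustrative choices (the paper's pipeline works on multi-dimensional grids, with the sliced Wasserstein kernel as an alternative), so this is a hedged sketch rather than the authors' implementation.

```python
import math

def gaussian_vectorization(points, signs, grid, sigma=0.1):
    """Evaluate the Gaussian convolution of a signed point measure
    sum_k signs[k] * delta_{points[k]} at each grid location.

    Returns a fixed-length vector usable by any standard classifier.
    """
    return [
        sum(s * math.exp(-((g - p) ** 2) / (2.0 * sigma ** 2))
            for p, s in zip(points, signs))
        for g in grid
    ]

# A signed measure with one positive and one negative point mass,
# vectorized on an 11-point grid over [0, 1]:
vec = gaussian_vectorization([0.2, 0.8], [+1, -1], [i / 10 for i in range(11)])
```

Note that a far-away point mass only contributes a bounded bump to such a vector, which is the robustness property the authors contrast with the sliced Wasserstein kernel later in this rebuttal.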
- *Filtrations of Sections 4.2 and 4.3 are not the same as the other considered MPH approaches from literature.*
We agree and thank you for pointing this out.
We have rerun the experiments of sections 4.2 and 4.3.
For section 4.2, we now used alpha complex + DTM filtration as in [Carrière, Blumberg. NeurIPS 2022].
We are including the new results for the first 6 UCR time series datasets of the original submission in Table 1 of the rebuttal PDF we submitted.
For section 4.3, we now used Heat Kernel Signature (with time 10) + Ricci curvature as in [Carrière, Blumberg. NeurIPS 2022].
We are including the new results for the graph datasets of the original submission in Table 2 of the rebuttal PDF we submitted.
The takeaway for both tables is the same: our methods are outperforming previous MPH vectorization methods for these tasks.
We note two interesting things: SWK does slightly worse than with Rips+KDE as in the original submission (but still better than previous MPH vectorizations), while the convolution-based vectorization does better; we believe that the improved performance of the convolution-based vectorization could be due to some implementation improvements having to do with how measures are snapped to a grid.
If the paper is accepted, we will include the full table as well as the Hilbert and Euler function columns (not part of previous work, but included in the original submission for the sake of comparison), which are now missing due to time constraints.
- *Comment on why MP-HSM-SW is much better for some, whereas MP-ESM-C performs better for other data sets.*
Both vectorizations have advantages and drawbacks.
The convolution-based vectorization requires the choice of a relatively small grid in order to make training practical, whereas SWK can take a much finer grid since it does not result in a more costly vectorization.
However, the SWK can be sensitive to outliers in the form of far-away point masses, since transporting them can be costly in terms of Wasserstein distance; for the convolution-based vectorization, by contrast, a far-away point mass just results in a bump whose distance to zero is bounded regardless of how far away it is.
We believe that the best method depends highly on the data type.
- *Other minor comments.*
Thank you for spotting these; we will address them.
One question: when you say "For better readability, it would be useful to name the theoretical results", do you mean giving a short description of the result like "Theorem 3 (Stability of sliced Wasserstein)"?
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. Some final comments:
- Thanks for running the experiments for the filtrations considered in the literature. In the final version of the paper, I suggest including both these filtrations and your original choice, since I assume there is a reason you opted for the latter. If you include at least a brief discussion on the choice of filtrations and some insights on how they influence the results, as suggested, I will increase my rating of the paper.
- Indeed, you understood my comment about the naming of definitions and theoretical results, I think it helps the readability, but this is a matter of style so I leave the final decision to you.
---
Reply to Comment 1.1.1:
Title: Answer to reviewer cn3Z
Comment: Thank you for the suggestions, which we will follow.
We definitely agree with you that using the exact same filtrations is the way to go for a fair comparison and these are the results we will present to make the point that, in these applications, our vectorizations lead to improved performance compared to previous methods. The set of results that appears in our original submission will also be included (and they will be rerun for the convolution vectorization due to the implementation improvements mentioned in our previous response to you). Here we will include a discussion comparing the different choices, saying in particular that the performance gain could be attributed to the better stability properties of Rips, in the case of the UCR data, and to the larger number of parameters, in the case of the graph data.
Specifically, we will include the two sets of experiments (improved original + rebuttal) in the paper, as well as a discussion like the following about how the choices influence the results (which in particular contains the motivation for the original choice of filtrations).
"Note that for the UCR data, previous work in the literature used bifiltrations obtained with Alpha and DTM. In addition to these, in this article we also run experiments with (a) Rips instead of Alpha because Rips scales better with the embedding dimension of the point cloud (and thus allows for better generalization of the application of our method to point clouds with outliers), and (b) with KDE instead of DTM because KDE behaves similarly to DTM while being easier to fine tune as it is more standard and has been more studied theoretically. Note also that for the graph data, we also decided to experiment with using more than two parameters since our methods apply directly in this case, in contrast to methods from the literature (such as multiparameter persistence images and GRIL)."
We will be happy to discuss any further questions or comments. | Summary: This work promotes the use of signed barcodes for feature generation and proposes a feature generation pipeline based on signed barcodes:
a. the work introduces two general vectorization techniques for signed barcodes;
b. the authors prove Lipschitz-continuity results that ensure the robustness of the entire proposed feature generation pipeline;
c. the practical performance of the proposed pipeline is compared to other baselines in various supervised and unsupervised learning tasks.
All of my questions are fully addressed by the rebuttal. I am satisfied with the explanations. I think the work is solid and promising.
Strengths: This work generalizes single-parameter barcodes to multi-parameter barcodes, introduces general vectorization techniques for signed barcodes, and proves Lipschitz-continuity results that ensure the robustness of the proposed pipeline. The theoretical results are important and convincing.
The paper is well written, the key concepts are explained thoroughly, and the proofs of the main theorems are clear and detailed. The experimental results are analyzed in detail.
Weaknesses: The motivation for multi-parameter persistent homology could be further emphasized. From a theoretical point of view, it is unclear what extra information multi-parameter PH brings compared to single-parameter PH.
It would be helpful to add some experiments to validate the stability theorems and estimate the Lipschitz constant.
The definition of the Kantorovich-Rubinstein norm for point measures in Proposition 1 is inconsistent with that in Definition 3. Definition 3 allows point-mass splitting, namely one source point can be mapped to multiple target points, hence $\psi$ is a transportation scheme; but in Proposition 1, each source point is mapped to a single target point, so $\gamma$ is a transportation map. This needs to be clarified.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Is single-parameter persistent homology equivalent to multi-parameter persistent homology? What extra information does multi-parameter PH contribute?
2. The Kantorovich-Rubinstein norm in Definition 3 is inconsistent with that in Proposition 1; under what conditions are they equivalent in general?
3. The generalization of Theorem 1 to higher dimensions is open; is the difficulty intrinsic or just technical?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations, which are helpful for practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback.
- *Motivation for multiparameter persistence and extra info compared to one-parameter.*
Thank you very much for pointing out the lack of references motivating multiparameter persistence and its relationship to one-parameter persistence.
To address this, we will add the following short paragraph right after line 47:
>There are many applications of TDA where multiparameter persistence modules are more natural than, and lead to improved performance when compared to, one-parameter persistence modules.
>These include
>noisy point cloud data
>[Vipond, Bull, Macklin, Tillmann, Pugh, Byrne, Harrington. Proc Natl Acad Sci USA 2021],
>where one parameter accounts for the geometry of the data and another filters the data by density;
>multifiltered graphs
>[Demir, Coskunuzer, Segovia-Dominguez, Chen, Gel, Kiziltan. NeurIPS 2022],
>where different parameters account for different filtering functions;
>and time-varying data
>[Chen, Segovia-Dominguez, Coskunuzer, Gel. ICLR 2022],
>where an extra parameter accounts for time-dependence.
- *Definition 3 vs Proposition 1.*
Proposition 1 is stating a property of the Kantorovich-Rubinstein norm, namely that, when evaluated on a point measure (i.e., a measure given by an integer linear combination of Dirac deltas), there always exists an optimal coupling that does not split masses.
We gave details in the originally submitted appendix, but we agree that the fact that this is a property and not a different definition should be further emphasized to avoid confusion.
We will add the following sentence right before Proposition 1 with that goal:
> The following result says that, for point measures, the computation of the Kantorovich--Rubinstein norm reduces to an assignment problem, and that, in the case of point measures on the real line, the optimal assignment has a particularly simple form.
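For intuition, the "particularly simple form" on the real line corresponds to the monotone (sorted-order) matching of 1-D optimal transport. The following is a hypothetical helper (name and interface are ours), sketching the case of equal numbers of unit point masses with pure transport, i.e., no creation or destruction of mass:

```python
def kr_norm_1d(pos, neg):
    """Kantorovich-Rubinstein norm of sum(delta_p for p in pos) - sum(delta_n for n in neg)
    for equal numbers of unit point masses on the real line.

    An optimal coupling matches the k-th smallest positive-mass point to
    the k-th smallest negative-mass point, so no mass splitting is needed.
    """
    assert len(pos) == len(neg)
    return sum(abs(p - n) for p, n in zip(sorted(pos), sorted(neg)))
```

For example, `kr_norm_1d([0.0, 2.0], [1.0, 3.0])` matches 0 to 1 and 2 to 3 at total cost 2.0, whereas the crossing matching (0 to 3, 2 to 1) would cost 4.0, illustrating why the sorted assignment is optimal.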
- *Theorem 1 for higher $n$.*
This is a very interesting question.
We know of an extension for arbitrary $n$ but with a Lipschitz constant that depends on the simplicial complex $K$; although not as strong as the cases $n = 1,2$, this result does provide justification for the method for higher parameters.
However, the proof is quite technical and, unlike the cases $n=1,2$, it does not follow easily from previous work and requires several new definitions; for this reason we decided to omit it.
An extension for arbitrary $n$ with a Lipschitz constant that is independent of the simplicial complex $K$ seems plausible and is the subject of current work.
- *Experimental validation of stability.*
Thank you for suggesting this.
Please see the rebuttal PDF we have submitted where such an analysis is carried out for the pipeline (multifiltered complex) -> (Hilbert signed measure) -> (convolution vectorization), showing that our vectors are indeed stable and also provide a quite strong lower bound.
If the paper is accepted, in the final version we will include an analysis for the three constructions: (multifiltered complex) -> (Hilbert signed measure), (Hilbert signed measure) -> (convolution vectorization), and (Hilbert signed measure) -> (sliced Wasserstein kernel).
---
Rebuttal 2:
Comment: Dear reviewer,
Please **briefly acknowledge the rebuttal** by the authors and consider updating your score—we want to avoid borderline scores for reviews, and the discussion phase will close soon. If you have any additional questions to the authors please ask them **now**.
Thanks,\
Your AC | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their time and feedback.
We have responded to their questions, and we will be happy to provide further clarifications, if required.
Only a couple of short paragraphs (included in the responses to specific reviewers, below) are required in the main body of the paper to address the reviewers' feedback.
With respect to the appendix, an experiment will be added following a suggestion by reviewer 61ca (a short version of the experiment is in Fig. 1 of the rebuttal PDF).
Pdf: /pdf/eb4ef75e8062c99925af0b100253e0a8bd5ef3fb.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Online Pricing for Multi-User Multi-Item Markets | Accept (poster) | Summary: The paper proposes online pricing and allocation strategies for the multi-user multi-item markets model under three different valuation models.
The main contribution is to extend dynamic pricing strategies from the literature to the case of more than one item and more than one user.
The setting goes as follows:
- At every iteration, each user $u$ presents a maximum demand for items $d_u^t$, which is observed.
- the algorithm allocates items and sets prices.
- a user accepts the items for which his valuation is higher than the offered price, accepting at most $d_u^t$ items.
They study the fixed valuation model, in which the valuation of users for each item is fixed and deterministic; the random experience model, where a user's valuation for an item is the mean of his past experiences; and the random valuation model, where the valuations are drawn from fixed but unknown distributions.
The presented algorithms construct confidence intervals for the user-item valuations. At every iteration, the items are allocated according to the so-called optimism principle, meaning the optimal allocation for the upper bounds on the user-item valuations is chosen.
The price set for the item depends on the valuation model.
Strengths: The paper is clear and the algorithms well presented for the most part. The study of larger markets than one item/one user certainly is interesting and relevant for applications.
Weaknesses: My main criticism concerns the way the paper is written. A lot of emphasis is put on the pricing strategies. However, at least two of the methods for pricing, for the fixed valuation model and the random valuation model, already exist in the literature (reference 3), and this is not discussed in the paper.
More precisely, the methods of geometrically decreasing intervals and discretization of the price interval are exactly the methods used in ref 3.
There are some unclarities in the proofs, as listed below.
1. The proof of Theorem 1 is not clear at all. The parameter $\epsilon$ is set to $1/LT$ without explanation. This can only be done because exploration stops when the interval for the valuation becomes small enough. But although it appears in the pseudo-code of the algorithm, this is not mentioned in the main text or in the proof. However, this feature of stopping exploration after some time is crucial in avoiding linear regret. This is clear for a reader familiar with ref 3, but I do not believe it would be clear otherwise.
2. In Appendix C, $y_{ui}$ is used without introduction. In Figure 2, notations are also used before they are introduced.
3. Line 494, the lists are defined too informally.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Overall, the paper has interesting elements, but a good discussion of the problem's dependence on the parameters $L$, $M$, and $N$ is missing, as this is the main contribution of the paper.
Precise references to where previously existing methods were used are missing. Adding them would help the reader better understand the contribution of the paper.
On a side note, the assumption that $d_u^t$ is observed seems rather strong, if the authors have a good argument as to why it is necessary (or not), it would be interesting to add.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and thoughtful questions.
In this paper, our main contribution is the analysis of revenue-maximizing algorithms that allocate and price multiple items to multiple users. As stated in our related work section, Kleinberg and Leighton [3] developed a widely-appreciated framework to maximize revenue from selling copies of a single good to sequentially arriving users. Their scenario corresponds to a special case of our problem where the provider interacts with only one user at each round and sells a single type of item. In contrast to this setting, a provider that simultaneously interacts with multiple users faces additional challenges in offering and pricing the items; such challenges are addressed in our work.
The hardest challenge is to decide on simultaneous allocation and pricing actions that can balance exploration with exploitation. At a high level, we tackle this challenge by combining ideas from the dynamic pricing and combinatorial bandits literature. The offers are chosen using the OFU principle as it is often done in combinatorial bandits literature. On the other hand, our solutions for pricing the items are based on the foundational techniques presented in [3]. Regarding related comments, we will provide more precise references to [3] in Sections 4.1 and 4.3 with the goal of improving clarity.
Please find our responses to your other questions below:
* In Section 1.1 and in lines 166-168, we already have some discussion relating to the problem's dependency on the parameters $N$, $M$, and $L$. However, in response to related comments, we want to expand on that discussion by including the following paragraphs. \
Under all three valuation models, we observe that optimal regret rates depend on two central quantities: $NM$ and $LT$. The $NM$ term in these expressions corresponds to the number of user-item pairs for which we need to learn the valuations. On the other hand, the $LT$ factor in the optimal regret rate corresponds to the maximum total number of offers that we make. \
For the specific case of the random valuations model, it is possible to draw further conclusions about the complexity of the problem. Recall that we achieve an almost optimal regret rate of $\widetilde{O}(\sqrt{NMLT})$. In comparison, consider a social-welfare maximization problem where we can make real-valued observations of the random valuations $v_{ui}^t$ for each allocated user-item pair and our goal is to maximize the sum of expected valuations for our allocations. As this problem can be easily modeled as a semi-bandit problem with $NM$ arms of which at most $L$ can be chosen simultaneously at each of the $T$ rounds, the optimum regret rate would be $O(\sqrt{NMLT})$ (Kveton et al., 2015). Hence, we can conclude that our revenue maximization problem with accept/reject feedback is no harder than the social welfare maximization problem with real-valued random reward feedback.
* Our focus is on the problem of allocation and pricing to maximize long-term revenue when user preferences are unknown. Applications relevant to our study include online delivery, hotel bookings, ride-shares, etc., where users come to the platform in search of products or services. In these examples, we can readily consider that users demand only one item ($d^t_u = 1$) when they are active and they have no demand ($d^t_u = 0$) when inactive. While this assumption might be too strong for some applications, our work serves as a valuable starting point for investigation.
Please find our responses regarding other unclarities below:
* We will include the following paragraph after line 227 to elaborate on the need to stop exploration.\
Due to this update rule for $\beta_{ui}$, the length of the corresponding search interval $b_{ui} - a_{ui}$ is at most $\sqrt{\beta_{ui}}$ at any given time. Let an epoch for a user-item pair $(u, i)$ refer to the set of all time steps in which user $u$ is offered item $i$ using the same price increment $\beta_{ui}$. Since the algorithm operates by offering item $i$ to user $u$ at prices increasing by $\beta_{ui}$, the number of queries in an epoch can be anywhere between 1 and $1/\sqrt{\beta_{ui}}$. In the proof of Theorem 1, we show that the regret we incur over a single epoch for a user-item pair $(u, i)$ is $O(1)$, and the number of exploratory epochs can be as large as $T$ in the worst case. Therefore, continuing to explore indefinitely would result in linear regret. To avoid this issue, the algorithm should stop exploration after achieving a certain level of precision and offer item $i$ to user $u$ at the maximum price that is certainly acceptable, namely $a_{ui}$.
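The epoch structure described in this paragraph can be sketched for a single user-item pair with an unknown fixed valuation $v \in [0, 1)$. The function and variable names below are ours, and this is a simplified rendering of the search idea from [3] rather than the paper's exact algorithm; note how a rejection caps the search interval at the rejected price and squares the increment, keeping the interval length at most $\sqrt{\beta}$.

```python
def search_valuation(v, eps):
    """Locate an unknown fixed valuation v in [0, 1) by price probes.

    Within an epoch, offered prices increase by the current increment beta;
    a rejection caps the interval at the rejected price and squares the
    increment, so the interval length stays at most sqrt(beta).
    Exploration stops once the interval is shorter than eps, after which
    one would keep offering the certainly-acceptable price a.
    """
    a, b, beta = 0.0, 1.0, 0.5
    probes = 0
    while b - a > eps:
        p = min(a + beta, b)
        probes += 1
        if p <= v:            # offer accepted: raise the lower end
            a = p
        else:                 # offer rejected: cap interval, square increment
            b = p
            beta = beta * beta
    return a, probes
```

As the rebuttal explains, the squaring stops after reaching precision $\epsilon$; continuing to probe forever would accumulate linear regret, while stopping and charging $a$ caps the per-pair loss.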
* To address the weakness (2), we will use $v_{ui}^t$ instead of $y_{ui}$ throughout the proof.
* Regarding weakness (3), we will revise the rest of the proof after line 494 as follows. \
\begin{align}
\sum_{u\in\mathcal{N}}\sum_{i \in \mathcal{I}}\sum_{t:x_{ui}^t = 1}\sqrt{\frac{1}{m_{ui}^t}}\leq \sum_{u\in\mathcal{N}}\sum_{i\in\mathcal{I}}\left(\sum_{t:x_{ui}^t=1\text{ and }\nexists j\in\mathbb{N}:n_{ui}^t=2^j}\sqrt{\frac{1}{m_{ui}^t}}+\sum_{t:x_{ui}^t=1\text{ and }\exists j\in\mathbb{N}:n_{ui}^t=2^j}\sqrt{\frac{1}{m_{ui}^t}}\right)\leq\sum_{u\in\mathcal{N}}\sum_{i\in\mathcal{I}}\left(2\sqrt{m_{ui}^T}+\log T\right)\end{align}
where we upper bound the first term using $\sum_{k=1}^{n}\frac{1}{\sqrt{k}}\leq\int_{0}^{n}\frac{\mathrm{d}x}{\sqrt{x}}=2\sqrt{n}$ and the second term by noting that it can have at most $\log T$ terms. Also, note that the total number of offers over all time intervals is upper bounded by $\sum_{u\in\mathcal{N}}\sum_{i\in\mathcal{I}}m_{ui}^T\leq LT$. Then, using the Cauchy-Schwarz inequality,
\begin{align}
\sum_{u\in\mathcal{N}}\sum_{i \in \mathcal{I}}\sum_{t:x_{ui}^t = 1}\sqrt{\frac{1}{m_{ui}^t}}\leq 2\sqrt{NM\sum_{u\in\mathcal{N}}\sum_{i\in\mathcal{I}}m_{ui}^T}+NM\log T\leq O(\sqrt{NMLT}+NM\log T).
\end{align}
Putting all together, we conclude the proof of the lemma.
---
Rebuttal Comment 1.1:
Comment: Thank you for those clarifications. I have decided to slightly improve my score, as my main criticism concerned the presentation of the paper and the inclusion of the contributions within the literature, and those paragraphs resolve that concern. | Summary: The paper considers the online pricing problem where there are several users and items. Each user has a (private) valuation for each item and a (public) upper bound for the number of items desired, both of which could vary at different rounds. The provider has an available item set at each round, and the goal is to distribute them to the users such that the total revenue is maximized. The paper considers three settings for the uses' valuation functions: (1) fixed valuations, (2) random experiences, and (3) random valuations. For each setting, the corresponding algorithm is proposed and analyzed, along the theoretical lower bound. For the first two settings, the basic algorithmic idea is trying to learn an accurate valuation interval for each item-user pair, and then computing the assignment and the prices according to the estimated intervals; while for the last setting, the authors quantize the set of possible prices and then borrow the idea from the multi-armed bandit problem to give an algorithm. Finally, the proposed algorithms are evaluated empirically.
Strengths: - The paper makes theoretical contributions. The multi-user multi-item online pricing problem introduced makes sense. The authors consider different valuation settings of this problem and obtain small gaps between the upper bounds and the lower bounds. The analysis is non-trivial.
- The paper is well-written. For each algorithm, the authors always provide the basic algorithmic intuition, which can help the reader understand the algorithm.
Weaknesses: - A discussion of the algorithms' running time is missing. Further, in the current statement, the algorithms need to solve linear programs at each round, but essentially, this can be formulated as a bipartite matching instance, and thus classical matching algorithms can be applied to improve the running time.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Minor comments:
- It might be better to point out that $\lor$ and $\land$ represent "taking max" and "taking min" respectively in the paper.
- In the statement of Algorithm 3, $n_{uik}$ is initialized to 0, but in the first iteration, we compute the term $\sqrt{8\log (NMKT) / n_{uik}}$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I didn't see any negative societal impact of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and questions. Please find our responses to the questions below:
* Based on your comments, we agree to include a discussion on the computational complexity and resource requirements of the algorithms. As you noted, the integer linear program in (7) can be written as an instance of maximum weight bipartite matching. Then, using a variant of the Hungarian algorithm for unbalanced bipartite graphs (Ramshaw and Tarjan, 2012), this problem can be solved in space $O(NM)$ and time $O(NML)$ in the worst case. As this result shows, for a fixed number of items, space and time complexities both have a linear dependency on the number of users.\
**Details of the construction:** In this graph, we represent each user $u$ with $d_u^t$ left nodes and we represent each endowed item $i$ with a right node. Then, we construct the complete weighted bipartite graph where the weight of an edge between a node for user $u$ and a node for item $i$ is given as $v_{ui}$. In total, this graph has $D_t$ left vertices, $E_t$ right vertices, and $D_t E_t$ weighted edges where $D_t$ and $E_t$ correspond to total demand and endowment at time $t$ (as given in Definition 2). Since we can upper bound $D_t \leq N$, $E_t \leq M$, and $\min \\{D_t, E_t\\} \leq L$, the results in the previous paragraph follow.\
**Reference:** Ramshaw, L., & Tarjan, R. E. (2012). On Minimum-Cost Assignments in Unbalanced Bipartite Graphs.
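To make the construction above concrete, here is a tiny brute-force illustration of the reduction (helper names are ours, standard library only). A real implementation would of course use the cited Hungarian-algorithm variant rather than enumeration, which is exponential.

```python
from itertools import permutations

def best_allocation(vals, demands, items):
    """Max-weight matching on the bipartite graph from the reduction above.

    Each user u contributes demands[u] left nodes; each endowed item is a
    right node; the edge weight between (a copy of) user u and item i is
    vals[u][i]. Brute force over assignments -- for illustration only.
    """
    left = [u for u, d in enumerate(demands) for _ in range(d)]
    # pad with None so left nodes may also receive no item
    slots = items + [None] * max(0, len(left) - len(items))
    best_val, best = -1.0, None
    for perm in set(permutations(slots, len(left))):
        pairs = [(u, i) for u, i in zip(left, perm) if i is not None]
        total = sum(vals[u][i] for u, i in pairs)
        if total > best_val:
            best_val, best = total, pairs
    return best_val, best

# Two users (demands 1 and 2) and two items 0 and 1:
value, alloc = best_allocation([[0.9, 0.2], [0.5, 0.6]], [1, 2], [0, 1])
```

In this hypothetical instance, the optimum gives item 0 to user 0 and item 1 to user 1, matching the claim that the integer program in (7) reduces to an assignment problem over at most $N$ left and $M$ right nodes.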
* We agree that it will be better to clearly define notations for “taking min” and “taking max” in our notation section at line 120.
* The division by zero in Algorithm 3 can be interpreted as resulting in infinity. Since we set $b_{uik}$ as the minimum of $( \psi_{uik} + \sqrt{8 \log (NMKT)/n_{uik}} )$ and $1$, we always set $b_{uik} = 1$ when $n_{uik} = 0$. For clarity, we will include a footnote that briefly explains how to interpret $\sqrt{8 \log (NMKT)/n_{uik}}$ when $n_{uik} = 0$.
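The interpretation in that planned footnote amounts to the following index computation (a sketch with our own function name, mirroring the formula for $b_{uik}$ given above):

```python
import math

def upper_confidence_bound(psi, n, N, M, K, T):
    """Upper confidence bound b_uik as described in the rebuttal.

    When a (user, item, price) triple has never been offered (n == 0),
    the exploration bonus is taken to be infinite, so the bound clips
    to its maximum possible value of 1.
    """
    bonus = math.inf if n == 0 else math.sqrt(8.0 * math.log(N * M * K * T) / n)
    return min(psi + bonus, 1.0)
```

With `n == 0` the returned bound is exactly 1, so the clipping makes the division-by-zero case well defined without a special branch elsewhere in the algorithm.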
---
Rebuttal Comment 1.1:
Comment: I have gone through the rebuttal. The reduction stated makes sense to me. | Summary: The paper studies the problem of dynamic pricing in which multiple items are offered to multiple users at each round. The authors propose novel algorithms for maximizing revenue under three user valuation models: fixed valuations, random experiences, and random valuations. The authors provide theoretical guarantees and simulation results supporting the theoretical claims.
Strengths: **Strengths**
1.) The paper is well written and easy to follow. Authors motivate the problem formulation well.
2.) The authors provide a detailed related work section, and the contributions are well positioned in the relevant literature.
3.) The theoretical results appear to be sound, and the paper also provides novel algorithmic contributions.
Weaknesses: **Weaknesses**
1.) The main weakness of the paper is the lack of comparison with existing methods. The authors do not provide a discussion comparing their theoretical or simulation results with existing work.
2.) Including a proof sketch in the main paper would improve the quality of the paper.
3.) Including additional simulation results would improve the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.) Authors assume that items that are not sold in one round can not be stored to be sold in future rounds. What are technical challenges associated with relaxing this assumption?
2.) Can these results be extended to the case where users only accept an item if its price is lower than the valuation by some fixed (but unknown) amount?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations of the method and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and questions. Please find our responses to the questions below:
* As discussed in our related works section, the dynamic pricing literature has only considered settings where a single user interacts with the provider per time step. Therefore, even if their valuation models are similar, those theoretical or empirical results are not directly comparable with ours. To best position our work within the literature, in our related works section, we include bodies of work that are most relevant to the framework that we are proposing. Throughout that section, we explain the similarities and differences between our work and cited papers. \
Though it’s not a direct comparison, it is possible to compare our work with Kleinberg and Leighton [3] since our results can be interpreted as a generalization of their theoretical findings. This is because their market model is a special case of ours where the number of users is $N=1$, the number of items is $M=1$, and the load is $L=1$. For fixed valuations, we achieve almost optimal regret of order $O(NM\log\log(LT))$ which reduces to $O(\log \log T)$ in the single-user single-item case. Similarly, for the random valuations model, we achieve regret of order $O(\sqrt{NMLT})$ which reduces to $O(\sqrt{T})$ in the single-user single-item case. For both cases, these results match the results of Kleinberg and Leighton [3].
* We provide discussions on the reasoning behind our allocation and pricing mechanisms throughout Sections 4.1, 4.2, and 4.3 within the limitations of space constraints. Based on your comment, we will extend these discussions to speak about the high-level details of the proof as well.
* In our current framework, the demand and endowment information for time $t$ is only made available to the provider at time $t$. However, if the provider can store the items to sell in future rounds, it also needs to have some idea about the demand for future rounds to be able to determine how much it needs to store. To formulate a no-regret algorithm under this scenario, one would need to analyze the tradeoff between offering available items to learn user valuations faster and storing the items to sell in future rounds at possibly higher prices. We believe that the combination of learning user preferences and estimating future demand/endowment is an important direction and we believe that the current work serves as a foundation for exploring this topic.
* If we were to consider a case where users only accept an item if its price is lower than the valuation by some fixed (but unknown) amount $c_{ui}$, then our results would be still valid for all valuation models. Letting $b_{ui}^t = v_{ui}^t - c_{ui}$ to be the acceptance/rejection threshold and replacing $v_{ui}^t$ with $b_{ui}^t$ in the definitions of optimality and regret, all three algorithms would achieve the same order of regret without any modification. | Summary: The paper proposes algorithms for optimizing the sale of multiple goods to multiple users, taking into account their time-varying valuations throughout repeated rounds. These algorithms efficiently learn from users' accept or reject feedback and utilize this information to make optimal offers and prices based on three user valuations: fixed valuation, random experience, and random valuation. To evaluate the effectiveness of the proposed algorithms, the paper provides revenue regret guarantees. Notably, the significant contribution of this paper lies in the introduction of the concept of user's random experience as a crucial valuation to consider in dynamic pricing algorithms.
Strengths: The paper addresses the problem of revenue maximization in a dynamic market with time-varying valuations. Specifically, the paper introduces the concept of "random experience", which is a novel and unexplored idea in prior literature.
The paper offers a rigorous theoretical analysis of the proposed algorithms and assesses their effectiveness and efficiency in practice by evaluating nearly-optimal revenue regret guarantees for each algorithm.
The paper exhibits a well-organized structure, presenting the problem formulation, proposed algorithms, theoretical analysis, and experimental results in a clear and coherent manner.
Weaknesses: Some statements are general and lack support or explicit example, such as in line 17, which states, “The ability to design algorithms that can achieve the optimal sale of goods to multiple users having time-varying valuations for each of the goods is both timely and relevant, given the explosion in the use of data in large-scale systems.” How does the author come to this statement? I would suggest briefly citing more literature to support the argument.
Figure 1 is unclear, and it is challenging to understand how the provider learns user valuation from the current depiction. Additionally, the meaning of the gray user representative is not apparent. To improve the figure's clarity, I recommend providing a more detailed description.
The paper may lack comparisons with relevant literature. The author mentions that fixed valuation and random valuation have been used as standard models for dynamic pricing in previous research. Therefore, I would suggest that the author could further elaborate on and compare this paper’s results with previous work. In comparison with previous works, the author can emphasize the advantages and differences of their proposed method, such as differences in experimental design or interpretation of results. This will help clarify the innovation and contribution of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In line 56, why are the algorithms focused on offering each item to only one user during each round? The framework of "multi-user multi-item" proposed in the paper suggests that interactions happen between multiple users and multiple items over repeated rounds, indicating a dynamic market setting. While the constraint of offering one item to one user during each round simplifies the problem, it may not fully reflect the dynamic market. Therefore, I would like to see the algorithms extended to offer more items to one user, one item to more users, or more items to more users, which aligns better with the multi-user multi-item market framework proposed in the paper.
In Section 5, why are two different datasets used for numerical experiments? Figure 3 is based on 20 sets of experiments with N=150 users and M=100 items, while Figure 4 is based on 5 sets of experiments with N=30 users and M=20 items.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and questions. Please find our responses to the questions below:
* As discussed in our related works section, the dynamic pricing literature has only considered settings where a single user interacts with the provider per time step. Therefore, even if their valuation models are similar, those algorithms and theoretical results are not directly comparable with ours. To best position our work within the literature, the related works section includes bodies of work that are most relevant to the framework that we are proposing. Throughout that section, we explain the similarities and differences between our work and cited papers. \
Though it’s not a direct comparison, it is possible to compare our work with Kleinberg and Leighton [3] since our results can be interpreted as a generalization of their theoretical findings. This is because their market model is a special case of ours where the number of users is $N=1$, the number of items is $M=1$, and the load is $L=1$. For fixed valuations, we achieve almost optimal regret of order $O(NM\log\log(LT))$ which reduces to $O(\log \log T)$ in the single-user single-item case. Similarly, for the random valuations model, we achieve regret of order $O(\sqrt{NMLT})$ which reduces to $O(\sqrt{T})$ in the single-user single-item case. For both cases, these results match the results of Kleinberg and Leighton [3].
* We do have multiple active users in each round and we allocate multiple items across them simultaneously. Our assumption is only about not assigning the same item to multiple users in any round. Nevertheless, our framework can also be readily extended to capture settings in which multiple users are offered the same item. We could achieve this by replicating each item according to its number of available copies. Note that this would correspond to an increase in the number of items as we will need to count each copy as a separate item. For all three valuation models, the corresponding algorithms would achieve regret upper bounds that hold with this modification to the number of items. To not complicate the market model and its analysis in this work, we only consider the case where we are given a single copy of each item.
* We chose relatively large numbers of users and items for experiments in Figure 3 to show that our algorithms can achieve diminishing regret over time for larger-scale problems. On the other hand, for the experiments in Figure 4, we opted for a smaller problem because we needed to run many more experiments corresponding to each different time horizon. For instance, we run 20 experiments with $T=2^{10}$ for the first plot in Figure 3 while we run 50 experiments with up to $T=2^{14}$ for the first plot in Figure 4. \
Nevertheless, we have improved the time complexity of our simulations using a maximum weight matching algorithm instead of an LP solver as suggested by reviewers 2rbX and ABaL. Therefore, we are now able to run the experiments in a shorter time and generate a figure showing “regret as a function of time horizon T under different valuation models with N = 150 users and M = 100 items”. (Please see the pdf document in our global response for this figure.) In our revised manuscript, we are committed to repeating this experiment across multiple runs and replacing Figure 4 with those results.
* We appreciate your feedback regarding the motivating statements and their need for explicit support. We are fully committed to addressing this concern by incorporating relevant citations that strengthen the rationale behind these statements. This effort will significantly enhance the credibility and context of our arguments.
* In relation to Figure 1, we recognize your observation regarding its clarity. We deeply value your input and are committed to incorporating your feedback as we revise the figure. This will involve providing a more detailed description that better elucidates the process and ensures a clearer understanding for our readers. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive feedback and thoughtful questions. Please see our separate responses below each review.
In response to a question from reviewer Hkdu regarding the experiments, we provide the new results in the PDF file. In our initial submission, Figure 4 was generated for experiments with $N = 30$ users and $M = 20$ items. Now, we have improved the time complexity of our simulations using a maximum weight matching algorithm instead of an LP solver as suggested by reviewers 2rbX and ABaL. Therefore, we are now able to run the experiments in a shorter time and generate results for larger experiments. In our revised manuscript, we are committed to repeating this experiment across multiple runs and replacing Figure 4 with those results.
Pdf: /pdf/8b3389b9450b5e6b4c29057d493299b4473bdbba.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of online pricing in multi-user multi-item markets. The main objective is to maximize revenue by selling multiple items to multiple users in each round. The paper proposes algorithms that efficiently offer and price items while learning user valuations from accept/reject feedback. It considers three user valuation models and provides algorithms with nearly-optimal revenue regret guarantees. Additionally, the paper introduces a problem-dependent load parameter and designs regret-optimal algorithms for different market settings. Overall, the paper makes contributions in terms of user valuation models, load parameter analysis, and algorithm design, being the first to address this problem.
Strengths: This paper introduces a novel problem of online pricing in multi-user multi-item markets and addresses the specific challenge of maximizing revenue by selling multiple items to multiple users in each round. The quality of the paper is commendable, with a comprehensive analysis of different user valuation models, well-explained methodology, and rigorously analyzed algorithms. The paper is also clearly written and well-structured, making it easy for readers to follow the flow of ideas. Overall, the paper demonstrates originality in problem formulation, high-quality research methodology, clear presentation of ideas, and significant contributions to the field.
Weaknesses: 1. The paper makes a simplifying assumption that each item is offered to only one user during each round. However, this assumption may not capture the realistic settings in which multiple users may have overlapping preferences for the same item and providers may have incentives to offer the same item to multiple users to increase their revenue.
2. The paper does not investigate the scalability of the proposed algorithms. It would be important to evaluate how the algorithms scale with respect to the number of users, items, and rounds. This evaluation would reveal the computational complexity and resource requirements of the algorithms in large-scale multi-user multi-item markets.
3. The paper does not discuss practical considerations that may arise in real-world implementations, such as privacy concerns, fairness considerations, or strategic user behavior. It would be beneficial to address these practical considerations and discuss how the proposed algorithms can cope with such challenges or potential extensions to handle them.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to weaknesses 1, 2, and 3
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper presents several limitations that could be addressed in future work. Firstly, the assumption that each item is offered to only one user during each round may not reflect realistic settings where multiple users may have overlapping preferences for the same item. Secondly, the paper does not address practical considerations such as privacy concerns, fairness considerations, or strategic user behavior. Lastly, the exploration of different market settings is limited, and it would be interesting to analyze the performance of the proposed algorithms in diverse market scenarios. Addressing these limitations would provide a more comprehensive understanding of the applicability and robustness of the algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and questions. Please find our responses to the questions below:
* Our framework can be readily extended to capture settings in which multiple users are offered the same item. We could achieve this by replicating each item according to its number of available copies. Note that this would correspond to an increase in the number of items as we will need to count each copy as a separate item. For all three valuation models, the corresponding algorithms would achieve regret upper bounds that hold with this modification to the number of items. To not complicate the market model and its analysis in this work, we only consider the case where we are given a single copy of each item. \
On the other hand, another possibility is to offer an item to more users than the number of copies we have. However, this also brings with it the risk of demand exceeding capacity. If we want to allow algorithms that take such risks, we would need to (1) estimate the chances of overselling and (2) make quantifying assumptions about the consequences of not being able to satisfy users' requests. We agree that relaxing this assumption is an intriguing and important area for future work, and we believe that the current work serves as a foundation for exploring this topic.
* Based on your comments, we agree to include a discussion on the computational complexity and resource requirements of the algorithms. The integer linear program in (7) can be written as an instance of maximum weight bipartite matching. Then, using a variant of the Hungarian algorithm for unbalanced bipartite graphs (Ramshaw and Tarjan, 2012), this problem can be solved in space $O(NM)$ and time $O(NML)$ in the worst case. As this result shows, for a fixed number of items, space and time complexities both have a linear dependency on the number of users. \
**Details of the construction:** In this graph, we represent each user $u$ with $d_u^t$ left nodes and we represent each endowed item $i$ with a right node. Then, we construct the complete weighted bipartite graph where the weight of an edge between a node for user $u$ and a node for item $i$ is given as $v_{ui}$. In total, this graph has $D_t$ left vertices, $E_t$ right vertices, and $D_t E_t$ weighted edges where $D_t$ and $E_t$ correspond to total demand and endowment at time $t$ (as given in Definition 2). Since we can upper bound $D_t \leq N$, $E_t \leq M$, and $\min\{D_t, E_t\} \leq L$, the results in the previous paragraph follow. \
**Reference:** Ramshaw, L., & Tarjan, R. E. (2012). On Minimum-Cost Assignments in Unbalanced Bipartite Graphs.
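As an illustration of the construction above, the following toy sketch (hypothetical valuations and demands; brute force stands in for the Hungarian-style algorithm of Ramshaw and Tarjan, which would replace it at scale) replicates each user $u$ into $d_u^t$ left nodes and finds the maximum weight matching against the endowed items:

```python
from itertools import permutations

# Hypothetical toy instance (not from the paper): valuations v_ui and
# per-user demands d_u for a single round.
valuations = {
    0: [0.9, 0.4, 0.2],   # user 0's valuations for items 0, 1, 2
    1: [0.5, 0.8, 0.3],   # user 1's valuations for items 0, 1, 2
}
demand = {0: 2, 1: 1}     # user 0 may receive up to two items this round

# Replicate each user u into d_u left nodes, as in the construction above.
left = [u for u, d in demand.items() for _ in range(d)]
items = [0, 1, 2]

def best_matching(left, items, valuations):
    """Brute-force maximum weight bipartite matching (fine at toy sizes)."""
    best_weight, best_assign = -1.0, None
    for perm in permutations(items, len(left)):
        weight = sum(valuations[u][i] for u, i in zip(left, perm))
        if weight > best_weight:
            best_weight, best_assign = weight, list(zip(left, perm))
    return best_weight, best_assign

weight, assignment = best_matching(left, items, valuations)
```

Here user 0's two replicas and user 1's single replica are matched to distinct items, so no item is assigned twice within the round, which is exactly the constraint encoded by the integer linear program in (7).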
* We recognize the importance of the practical considerations you've emphasized, such as privacy concerns, fairness, and strategic user behavior, within real-world applications. While these aspects extend beyond the paper's present focus, we intend to highlight their significance in our revised manuscript. There is an opportunity to delve into these areas more extensively in subsequent research, building on the groundwork we've established. \
To further address the inquiry regarding strategic user behavior, we wish to offer the following insight: If each user interacts with the system only once, it would be in their best interest to behave myopically. In particular, our frameworks of fixed valuations and random valuations can be used to model settings where each user interacts with the system only once (at a single time step) if each user is associated with a type that determines their valuations. In this case, the set $\mathcal{N}$ corresponds to the set of all user types and each demand parameter $d^t_u$ represents the total demand of users of type $u$ in round $t$. Under the fixed valuations model, all users of type $u$ have valuation $v_{ui}^t = v_{ui}$ for item $i$ at all rounds $t$. Under the random valuations model, each user of type $u$ has a random valuation with distribution $F_{ui}$ for item $i$ independently for each user at each time $t$. Since at most one user receives any item $i$ in any round, it is sufficient to consider a single random valuation $v_{ui}^t$ for each type $u$ and item $i$ at time $t$. Crucially, since each user interacts with the system only once, it is in each user’s best interest to behave myopically.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have thoroughly reviewed your response as well as the comments provided by other reviewers. After careful consideration, I have decided to maintain my current score. | null | null | null | null | null | null |
Multi-Modal Inverse Constrained Reinforcement Learning from a Mixture of Demonstrations | Accept (poster) | Summary: The paper proposes the algorithm Multi-Modal Inverse Constrained Reinforcement Learning (MMICRL) for imitation learning mixture of expert demonstrations with various constraints. The algorithm includes agent identification, agent-specific constraint inference and multi-modal policy optimization. The problem is challenging, the proposed method seems to be novel, and the simulation experiments show the effectiveness of the proposed method.
Strengths: I think the problem of multi-modal learning from a mixture of experts with constraints is indeed a challenging one, and the proposed algorithm has necessary components for handling several difficulties.
The paper has good writing, and the method is described clearly with necessary derivation details. Although lots of mathematical notations are used, it’s good to see intuitive explanations for most terms.
The experiments are evaluated on both discrete and continuous environments. The improvement of the proposed algorithm over baseline methods is significant, although not consistent across tasks.
Weaknesses: Overall the paper has a clear description of the reasons for different design choices, but there are still several places confusing for me, detailed in the Questions section.
For proposition 4.2, please provide the complete conditions for the statement. For the case with a limited number of samples and estimated occupancy measure, what is the change for this statement?
In Table 2, it could be better to provide the constraint violation rates in the expert demonstration data as a reference. Does the demonstration data have zero violation rate? It may be mentioned somewhere in the paper that I missed.
For Fig. 4, is it possible to have zero or lower constraint violation while remaining high rewards for tasks Blocked WalkerWithPos and Blocked SwimmerWithPos.
It could be better to see the method tested on some more realistic cases like vehicle driving dataset, etc.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: From Eq. (9) to (10), why does using the contrastive estimation method help with handling the trade-off between reward maximization and density maximization? The contrastive estimation loss also induces an additional term from the reward.
In Eq. (10), how is the reward of the trajectory estimated conditioned on the current policy? Is there any value function estimation in the policy optimization process?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the work are well-discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. Your insights have been valuable to our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns.
1. *"For proposition 4.2, please provide the complete conditions for the statement. For the case with a limited number of samples and estimated occupancy measure, what is the change for this statement?"*
**Our response.** Thanks for your suggestions. We agree that a thorough explanation of Proposition 4.2 is necessary. This proposition in fact follows from Theorem 2 of [15]. The occupancy measures $\rho$ should satisfy the Bellman flow constraints. That is to say:
\begin{align}
\sum_a{\rho(s,a)}=P_0(s)+\gamma\sum_{s^{\prime},a}\rho(s^{\prime},a)P_{\mathcal{T}}(s|s^{\prime},a) \text{ and } \rho(s,a)\geq0
\end{align}
This constraint is often applied in the dual problem for solving MDP. Since our expert trajectories are assumed to be optimal in solving the primal control problem in MDP, their occupancy measures $\rho^\pi(s,a)$ should generally satisfy the constraints.
[15] Umar Syed, Michael H. Bowling, and Robert E. Schapire. Apprenticeship learning using linear programming. In International Conference on Machine Learning (ICML), pages 1032–1039, 2008.
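To make the constraint concrete, a toy check (a hypothetical two-state MDP with one deterministic action per state, not an example from the paper) iterates the flow equation to its fixed point and verifies that the resulting occupancy measure satisfies it:

```python
gamma = 0.9
P0 = [1.0, 0.0]          # initial state distribution
nxt = {0: 1, 1: 0}       # deterministic cycle: s0 -> s1 -> s0

# Fixed-point iteration of the Bellman flow equation; with one action per
# state, rho(s, a) collapses to rho(s).
rho = [0.0, 0.0]
for _ in range(5000):
    rho = [P0[s] + gamma * sum(rho[sp] for sp in (0, 1) if nxt[sp] == s)
           for s in (0, 1)]

# Verify the flow constraint and nonnegativity.
for s in (0, 1):
    inflow = sum(rho[sp] for sp in (0, 1) if nxt[sp] == s)
    assert abs(rho[s] - (P0[s] + gamma * inflow)) < 1e-9
    assert rho[s] >= 0
```

As expected for a valid occupancy measure, the total mass sums to $1/(1-\gamma)$, which is what makes the dual (occupancy-measure) formulation of the MDP equivalent to the primal control problem.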
2. *" In Table 2, it could be better to provide the constraint violation rates in the expert demonstration data as a reference. Does the demonstration data have zero violation rate? It may be mentioned somewhere in the paper that I missed."*
**Our response.** That's a very good point. We agree that more detailed clarification is needed. In fact, ICRL algorithms typically focus on 1) soft constraints, which demand constraint satisfaction in expectation (i.e., violation rate > 0), and 2) hard constraints requiring absolute constraint satisfaction (i.e., violation rate = 0). While soft constraints are particularly useful in stochastic environments, they're challenging to estimate. For simplicity, we've chosen to focus on hard constraints, which exhibit a zero violation rate in the expert demonstration, in this study.
3. *"For Fig. 4, is it possible to have zero or lower constraint violation while remaining high rewards for tasks Blocked WalkerWithPos and Blocked SwimmerWithPos."*
**Our response.** In our experiments, we managed to attain a low constraint violation in **certain** random seeds, but not in **all** of them. The reported results represent **averages** across all tested seeds. Consistently achieving a low constraint violation rate is challenging in both environments. Similar findings are illustrated in Figure 2 of [7], where the MECL (ICRL) algorithm fails to uphold a zero constraint violation rate in the Walker and Swimmer environments. Notably, their experiments focus on a single type of agent, and our task is even more challenging. Future research will be required to make further progress in this regard.
[7] Guiliang Liu, Yudong Luo, Ashish Gaurav, Kasra Rezaee, and Pascal Poupart. Benchmarking constraint inference in inverse reinforcement learning. In International Conference on Learning Representations (ICLR), 2023.
4. *"It could be better to see the method tested on some more realistic cases like vehicle driving dataset, etc."*
**Our response.** In Section 5.4 (Limitations), we acknowledged the absence of a realistic benchmark to test our MMICRL algorithm. We notice that a recent study [7] provides an environment tailored for autonomous driving tasks, albeit **limited to a single agent type**. A possible route is extending this environment to include multiple agent types and deriving corresponding constraints. Our methodology and findings are elaborated in the attached PDF.
5. *"From Eq. (9) to (10), why does using the contrastive estimation method help with handling the trade-off between reward maximization and density maximization? The contrastive estimation loss also induces an additional term from the reward."*
**Our response.** This is an excellent question. Empirically, we conducted an **ablation study** in our experiment (see MMICRL-LD) by omitting our contrastive estimation approach and using objective (9) directly for policy updates. We noticed sub-optimal control performance in our early experiments, which **motivated** us to explore alternative methods. The contrastive estimation technique is a straightforward, effective method for learning feature representations. Compared to directly applying the log density, the information-theoretic InfoNCE loss, which transforms **multi-label classification** into **positive-versus-negative sample discrimination**, serves as a more applicable loss for learning differentiable policies. This method has already shown stable performance in more complex applications.
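For concreteness, a minimal numerical sketch of the InfoNCE idea (hypothetical similarity scores, not the paper's implementation): the loss is the negative log-softmax of the positive pair's similarity against the negatives, so pulling a trajectory toward its own agent's representation while pushing it away from the other agents' representations lowers the loss.

```python
import math

def info_nce(sim_pos, sims_neg, temperature=0.1):
    """-log( exp(s+/t) / (exp(s+/t) + sum_j exp(s-_j/t)) ), computed with the
    standard max-shift for numerical stability."""
    logits = [sim_pos / temperature] + [s / temperature for s in sims_neg]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

# A trajectory well aligned with its own agent's embedding is cheap ...
loss_aligned = info_nce(1.0, [0.0, 0.0])
# ... while an undiscriminative embedding pays -log(1/K) = log(K).
loss_uniform = info_nce(0.0, [0.0, 0.0])
```

The loss collapses to an ordinary K-way cross-entropy over one positive and K-1 negatives, which is why it behaves like the multi-label classification it replaces while remaining differentiable in the similarity scores.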
6. *"In Eq. (10), how is the reward of the trajectory estimated conditioned on the current policy? Is there any value function estimation in the policy optimization process?"*
**Our response.** ICRL typically assumes a known reward function [4]. In our MMICRL, we assume all experts share the same reward, and explain their distinct behaviors by different constraints. To estimate the cumulative rewards in a trajectory, we follow [4] and utilize PPO-Lag (Proximal Policy Optimization Lagrange) for policy optimization.
In PPO-Lag, we have an estimation of the value function. The algorithm learns both a reward-based value function and a cost-based value function for calculating expected rewards and costs for a policy.
[4] Shehryar Malik, Usman Anwar, Alireza Aghasi, and Ali Ahmed. Inverse constrained reinforcement learning. In International Conference on Machine Learning (ICML), pages 7390–7399, 2021.
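As a sketch of the dual mechanics in PPO-Lag (toy numbers, hypothetical function name, not the paper's hyperparameters): the cost-based value function feeds a gradient-ascent update on the Lagrange multiplier, which in turn scales the cost penalty inside the PPO objective.

```python
def update_multiplier(lam, expected_cost, cost_limit, lr=0.05):
    """Dual ascent: raise lambda when the cost critic's estimate exceeds the
    budget, relax it (but never below zero) when the constraint is slack."""
    return max(lam + lr * (expected_cost - cost_limit), 0.0)

lam = 0.0
# Hypothetical cost-critic estimates over four policy updates.
for expected_cost in [2.0, 2.0, 0.5, 0.5]:
    lam = update_multiplier(lam, expected_cost, cost_limit=1.0)
    # The PPO step then maximizes: reward_advantage - lam * cost_advantage
```

This is why both a reward-based and a cost-based value function are learned: the former supplies the reward advantage, the latter both the cost advantage and the constraint-violation signal driving the multiplier.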
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for providing detailed explanation. It answers my questions and I have no further problem at present. I would suggest the authors to add these clarification details to the modified paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer. Thanks for your comments. We appreciate your time and effort, and we will make sure all these clarification details are included in the modified paper. | Summary: The paper introduces a new algorithm called Multi-Modal Inverse Constrained Reinforcement Learning (MMICRL) to address the challenge of recovering multiple underlying constraints from a mixture of trajectories demonstrated by different types of expert agents. The algorithm utilizes a flow-based density estimator to identify experts from the demonstration data and infer agent-specific constraints. MMICRL then employs a novel multi-modal constrained policy optimization objective to imitate expert policies, considering both conditioned and unconditioned policy entropy. The approach is further enhanced using contrastive learning to improve robustness. Experimental results in discrete and continuous environments demonstrate that MMICRL outperforms other baseline algorithms in terms of constraint recovery and control performance.
Strengths: The paper introduces a new algorithm, MMICRL, which addresses the challenge of recovering multiple constraints from a mixture of trajectories demonstrated by different types of expert agents. This is a novel and important problem in the field of Inverse Reinforcement Learning (IRL). MMICRL utilizes a flow-based density estimator to identify expert agents from demonstration data. This approach allows for unsupervised expert identification and contributes to the ability to infer agent-specific constraints accurately. The incorporation of the policy optimization objective into the contrastive learning framework enhances the robustness of the algorithm. This addition strengthens the learning process and improves the overall performance of MMICRL.
The paper presents extensive experiments conducted in both discrete and continuous environments to evaluate the performance of MMICRL. The experimental results demonstrate that MMICRL outperforms other baseline algorithms in terms of constraint recovery and control performance, indicating the effectiveness and superiority of the proposed approach.
Weaknesses: I am not an expert in IRL, but I know some RL algorithms [1] make use of mixture models, and their modelling methods can even be applied in other fields, like computer vision [2]. I am wondering whether it is possible to apply these modelling methods in IRL, and what the advantage of MMICRL is compared with these methods.
[1] Ren, Jie, et al. "Probabilistic mixture-of-experts for efficient deep reinforcement learning." arXiv preprint arXiv:2104.09122 (2021).
[2] Xia, Xiaobo, et al. "Pluralistic image completion with gaussian mixture models." Advances in Neural Information Processing Systems 35 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value the time and effort you have devoted to evaluating our work. To address each point you raised, we have prepared comprehensive responses and clarifications. We hope these responses can resolve your concerns.
1. *"I am not an expert in IRL, but I know some RL algorithms [1] make use of mixture models and their modeling methods can even be applied in other fields, like computer vision[2]. I am wondering if is it possible to make these modeling methods in IRL."*
**Our response.** We thank the reviewer for providing us with these methods for reference and comparison. Although these works are relevant related work (**we will make sure our related-work section reflects this relevance**), they are not directly applicable to solving ICRL problems. In the following, we elaborate on their differences from our MMICRL and their potential extension to ICRL.
a) In the study, "Probabilistic Mixture-of-Experts for Efficient Deep Reinforcement Learning", the authors examined the **unconstrained** forward control problem that learns a **control policy** from **observed** environmental dynamics, such as states and rewards. Their approach employs a Gaussian Mixture Model (GMM) to model a **multi-modal** policy, which they term a mixture of experts.
Several key differences distinguish their approach from ours:
- Our method, MMICRL, addresses an **inverse** optimal control problem, learning **missing** dynamics signals, notably constraint signals, from provided **expert demonstration**.
- MMICRL operates within a **constrained** Markov Decision Process (MDP), in contrast to their unconstrained model.
- Instead of modeling multi-modal expert behavior with a single unified policy, MMICRL **differentiates** expert trajectories through agent identification and learns imitation policies **based on the expert type**.
Substituting our PPO-Lag policy representation with a GMM does not account for the contradictory behavior seen in expert demonstrations, particularly in safety-critical situations.
b) In the research, "Pluralistic Image Completion with Gaussian Mixture Models", the primary focus is on constrained pluralistic image completion. The relationship between the sub-procedure and pluralistic results is modeled using a Gaussian Mixture Model (GMM).
Their methodology differs from ours in two main ways:
- MMICRL models the control problem as sequential decisions within a Markov Decision Process (MDP). It's unclear if the task of pluralistic image completion can be mapped similarly to a control problem within an MDP.
- MMICRL learns constraints from expert demonstrations. In contrast, the constraints in pluralistic image completion are either manually provided or derived from the dataset.
2. *"what is the advantage of MMICRL compared with these methods."*
**Our response.** Since the methods primarily target different problems and settings than MMICRL, direct comparisons may be challenging. However, we argue the necessity of MMICRL based on the following points:
- MMICRL can be concurrently applied to both discrete and continuous environments and various inverse control tasks.
- MMICRL can estimate agent-specific constraints from a mixture of expert demonstrations, providing an explanation for their contradictory behavior in game contexts.
- In comparison to other baselines, MMICRL stands out in maintaining a low constraint violation rate while achieving high rewards, demonstrating its effectiveness.
- Using a GMM policy representation is not a direct substitute for our method. Our approach distinctly excels in differentiating between various types of agents.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
We appreciate the time and effort you've devoted to reviewing our paper. We would like to check whether our responses have adequately addressed your concerns. Your feedback is invaluable to us, and we will carefully revise the manuscript and integrate your suggestions to improve its overall quality. Thank you once again for your significant contribution and the time you've invested.
Best Regards,
Authors of Paper 6041 | Summary: This paper considers the problem of estimating constraints from a dataset consisting of a mixture of expert trajectories, and proposes an algorithm (MMICRL) for solving that problem.
MMICRL proceeds iteratively, first estimating which trajectories belong to which agent class using a density-estimation approach (over state-action occupancy), and second inferring constraints on a per-agent-class basis (using an MLE constraint-inference procedure).
The policy is learned concurrently using an agent-unconditional max entropy, agent-conditional min entropy objective.
Experiments in gridworld and mujoco environments show that MMICRL outperforms baselines as measured by permissible expected return and constraint violation rate.
Strengths: ### Originality
- The paper proposes what seems to be a novel setting of inverse constrained RL with a mixture of experts
- The combination of methods (unsupervised agent clustering, constraint inference, policy learning) used in MMICRL appears novel, the application of certain methods (state-action occupancy density estimation, contrastive objective) to ICRL seems novel, and some aspects seem entirely novel (in-class min entropy, out-of-class max entropy).
### Quality
Evaluation
- The evaluation of the proposed algorithm is quite strong. Appropriate baselines are considered (given the novel setting they double as ablations), and experiments are well-designed to illustrate the advantages of the approach - the grid world demonstrates the behavior in an interpretable manner, and the mujoco experiments demonstrate the usefulness of function approximation in the setting.
- Experiments are run multiple times over different random seeds, with aggregate performance reported.
- Experiments investigating susceptibility of the algorithm to initial poor performance do a good job of conveying how robust the algorithm is in practice
### Clarity
- The algorithm and experiments are presented fairly clearly, with generally useful / clear figures, a specification of the algorithm, and results clearly summarized / visualized.
Weaknesses: ### Significance
- Does constraint inference in the context of a mixture of expert demonstrations make sense as a problem formulation? If different experts disagree on what is or is not a constraint, then does it make sense to formulate the problem as constraint inference (or would rewards be more appropriate)? The motivation for ICRL is typically that the behavior of experts is often best explained by a simple reward function + a set of constraints [1], but that does not seem to be the case here. The actionable critiques are (1) the paper does not apply the method in a realistic setting and (2) the examples given in the paper of real applications (driving datasets with unlabeled car types) are unrealistic (and rely on partial observability (of the object type) more than a mixture of experts).
### Originality
- Constraint inference in the context of a mixture of expert behavior does seem novel, but the analogous extension for reward learning is not (e.g., [2,3,4])
- MMICRL largely combines existing methods (density estimation, constraint inference, policy optimization) adapting for the multi-modal case (with some novel contributions noted in the strengths)
### Quality
Algorithm
- What can be shown theoretically about the algorithm? For example, does it converge under some assumptions? Discussion of such topics seems missing from the paper
Evaluation
- Related to the point about significance: the evaluation settings are unrealistic. An example with real world data would dramatically improve the quality of the evaluation.
Clarity
- It’s not clear until later in section 4 that the algorithm iteratively performs agent identification / clustering, which makes section 4.1 confusing when first reading through
- Figure 4 could be improved with respect to clarity. Perhaps aggregating over environments would help.
[1] Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning \
[2] LiMIIRL: Lightweight Multiple-Intent Inverse Reinforcement Learning \
[3] Dealing with multiple experts and non‑stationarity in inverse reinforcement learning: an application to real‑life problems \
[4] Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How is evaluation performed given that the agent-class is identified in an unsupervised manner? It seems like the relevant z value must be inferred either heuristically or manually in order to evaluate e.g., constraint violations. If so, this seems like a limitation of the approach, the reason being that identifying which z value to use in a practical setting would be challenging (given the use of CIRL in the first place implies complex / high-dimensional constraints).
2. Is it correct that the method is limited to a setting with a fixed number of agent classes? This appears to be the case based on the algorithm. If so, (1) how robust is the algorithm to the number of classes chosen being incorrect? And (2) this also seems like a major limitation in that often real world data doesn’t cluster into a discrete set (but rather exists along a continuous spectrum).
3. Section 3 has this line: “CMA-MDP assumes the agents are differentiable by examining their policies, implying that different agent types cannot share identical policies”. Would you elaborate on both the differentiability claim and the implication?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weaknesses and questions sections
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value the time and effort you have devoted to evaluating our work. To address each point you raised, we have prepared comprehensive responses. We hope these responses can resolve your concerns.
1. *"How is evaluation performed given that the agent class is identified in an unsupervised manner? ... identifying which z value to use in a practical setting would be challenging"*
**Our Response.** During the evaluation process, **we must assign the identified agents to a specific type of expert agent.** This is achieved by a majority-voting system. To illustrate, consider a classified expert dataset, denoted as $D_{z}$ (line 182 of our paper), where 90\% of the trajectories are generated by agent type 1 (this information is available during evaluation), so we would label this class as type 1. Subsequently, the corresponding ground-truth constraint related to type 1 would be used to compute the constraint violation rate.
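The majority-voting assignment described above can be sketched in a few lines (a hypothetical illustration, not the authors' code; function and variable names are invented): each unsupervised cluster is labeled with the most frequent ground-truth agent type among its trajectories.

```python
from collections import Counter

def label_clusters_by_majority(cluster_ids, true_types):
    """Map each unsupervised cluster id to the most frequent
    ground-truth agent type among its trajectories."""
    buckets = {}
    for c, t in zip(cluster_ids, true_types):
        buckets.setdefault(c, []).append(t)
    return {c: Counter(ts).most_common(1)[0][0] for c, ts in buckets.items()}

# Example: cluster 0 is 90% type-1 trajectories, so it is labeled type 1.
clusters = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
types    = [1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2]
print(label_clusters_by_majority(clusters, types))  # {0: 1, 1: 2}
```

Once each cluster is labeled this way, the ground-truth constraint for that label can be used to score the constraint violation rate.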
In a **practical application**, such as autonomous driving, we can run our MMICRL algorithm to derive constraints for different agents. To apply these constraints, we must associate the identified agents with actual vehicles, which is done by matching the distribution of the identified expert dataset $\mathcal{D}_{z}$ to the distribution observed in the target vehicle. It's crucial to note that this matching wouldn't be feasible without the expert dataset identified by our MMICRL.
2. *"Is it correct that the method is limited to a setting with a fixed number of agent classes?"*
**Our Response.** In our MMICRL algorithm, the count of agent types of interest, denoted as $|\mathcal{Z}|$, is a hyperparameter. However, it doesn't necessarily match the precise number of expert types but rather serves as an upper limit. Therefore, if the actual number of expert types is smaller than $|\mathcal{Z}|$, MMICRL will either 1) create additional subclasses for an actual expert type, or 2) assign no expert trajectory to a class, rendering $\mathcal{D}_{z''}=\emptyset$, which can then be discarded in the subsequent iteration. This behavior doesn't compromise the algorithm's effectiveness. However, if the actual number of expert types exceeds $|\mathcal{Z}|$, implying $|\mathcal{Z}|$ no longer serves as an upper limit, it could lead to sub-optimal performance.
3. *"How robust is the algorithm to the number of classes chosen being incorrect?"*
**Our Response.** The robustness of MMICRL is demonstrated experimentally. Please check our additional results in the attached PDF file.
4. *"This also seems like a major limitation in that often real-world data doesn’t cluster into a discrete set ..."*
**Our Response.** In real life, we believe there are many examples where agents can be classified into discrete sets. For instance, vehicles of different types, and sports players in different positions. For cases where clear distinctions are difficult to make, we can **discretize the continuous sets** and then use MMICRL to infer the constraints.
5.*"Section 3 has this line: "CMA-MDP assumes the agents are differentiable by examining their policies ...". Would you elaborate on both the differentiability claim and the implication?"*
**Our Response.** The assumption is consistent with the design of our MMICRL algorithm. If two agent policies are identical, our unsupervised agent identifier would not differentiate them. Consequently, there's no requirement or benefit to regard them as distinct agent types or account for their behaviors using separate constraints.
6. *"What can be shown theoretically about the algorithm? For example, does it converge under some assumptions? ...".*
**Our response.** While we acknowledge the **significance of such understanding for ICRL**, theoretical results for ICRL are, to our knowledge, currently **lacking**. A potential route is extending theoretical findings from IRL to ICRL, but enabling this extension remains a complex issue because:
- The advancements in IRL theory depend on defining feasible sets of rewards. However, defining a feasible set of **constraints** is challenging due to the variability of constraints (e.g., hard, soft, and probabilistic constraints) and the constrained optimization method (e.g., Lagrange method, Interior points). These issues are notably **absent in IRL**, where the forward problem is unconstrained.
- IRL theories usually deal with a simplified setting involving bounded rewards and a tabular environment. In contrast, our work explores environments with a continuous state space and multiple expert policies and demonstrations. The gap between rigorous theoretical understanding and our ICRL setting is substantial.
7. "If different experts disagree on what is or is not a constraint, then does it make sense to formulate the problem as constraint inference (or would rewards be more appropriate)?"
**Our response.** It is challenging to explain the conflicting behaviors among expert trajectories through reward inference with IRL. For instance, in a situation where 50\% of expert agents move left and the other half move right, IRL would assign near-zero rewards to both actions, considering that each action is deemed sub-optimal 50\% of the time.
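The left/right example above can be illustrated numerically. The toy sketch below (not the paper's algorithm; it assumes a single-state model with a softmax policy $\pi(a)\propto\exp(r_a)$) fits action rewards by maximum likelihood to demonstrations that split 50/50 between the two actions: both rewards converge to the same near-zero value, so reward inference cannot prefer either action.

```python
import math

# Toy maximum-likelihood reward inference in a single state with two
# actions ("left" = 0, "right" = 1) and a softmax policy pi(a) ∝ exp(r_a).
demo_freq = [0.5, 0.5]   # half of the expert agents go left, half go right
r = [1.0, -1.0]          # arbitrary initial rewards

for _ in range(1000):
    z = sum(math.exp(v) for v in r)
    pi = [math.exp(v) / z for v in r]
    # MLE gradient: empirical action frequency minus model probability
    r = [v + 0.1 * (f - p) for v, f, p in zip(r, demo_freq, pi)]

# Both rewards converge to the same (near-zero) value: the conflicting
# demonstrations make each action look sub-optimal half of the time.
print(r)
```

This is the ambiguity that MMICRL sidesteps by keeping the reward fixed and explaining the disagreement through agent-specific constraints instead.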
Our approach indeed aligns with the motivation for ICRL, as we assume all expert agents adhere to the same known reward function. We account for their disagreements by learning a series of distinct constraints.
8. *"Concerns about the unrealistic evaluation."*
**Our response.** Section 5.4 (Limitations) has acknowledged the absence of a realistic benchmark. A recent study ([Ref.7]) describes an environment tailored for autonomous driving tasks, albeit **limited to a single agent type**. A possible route is extending this environment to include multiple agent types. Our methodologies are elaborated on in the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
We appreciate the time and effort you've devoted to reviewing our paper. We would like to check whether our responses have adequately addressed your concerns. Your feedback is invaluable to us, and we will carefully revise the manuscript and integrate your suggestions to improve its overall quality. Thank you once again for your significant contribution and the time you've invested.
Best Regards,
Authors of Paper 6041 | Summary: # Problem Statement
The paper addresses a significant problem in Inverse Constraint Reinforcement Learning (ICRL), which is the assumption that all expert demonstrations follow the same constraints. This assumption is problematic because in real-world scenarios, demonstration data may come from various agents who follow different or even conflicting constraints. Therefore, using a single constraint model to explain the behaviors of diverse agents can lead to inaccuracies.
# Main Contribution
To tackle this issue, the paper introduces the Multi-Modal Inverse Constrained Reinforcement Learning (MMICRL) algorithm. This approach allows imitation policies to capture the diversity of behaviors among expert agents. The paper demonstrates that MMICRL outperforms other baselines in terms of constraint recovery and control performance in both discrete and continuous environments.
# Methodology
The proposed MMICRL estimates multiple constraints corresponding to different types of experts, allowing for a more accurate representation of diverse agent behaviors. It uses a flow-based density estimator for unsupervised expert identification from demonstrations, infers agent-specific constraints, and optimizes a multi-modal policy that minimizes the agent-conditioned policy entropy and maximizes the unconditioned one.
The key steps of MMICRL include unsupervised agent identification, conditional inverse constraint inference, and multi-modal policy update.
1. Unsupervised Agent Identification: MMICRL identifies expert trajectories in an unsupervised manner. It performs trajectory-level identification by estimating an agent-specific density, using a Conditional Flow-based Density Estimator (CFDE) over state-action pairs. The agent identifier is then obtained via a softmax over these densities.
2. Agent-Specific Constraint Inference: Based on the identified expert dataset, the likelihood function can be simplified. The algorithm parameterizes the instantaneous permissibility function with $ω$ and updates the parameters by computing the gradient of the above likelihood function.
3. Multi-Modal Policy Optimization: The policy is trained to maximize cumulative rewards subject to constraint. The objective expands the reward signals with a log-probability term, which encourages the policy to generate trajectories from high-density regions for a specific agent type. The algorithm integrates contrastive learning into policy optimization, helping it to better understand the relationships between agents, their behaviors, and the corresponding expert trajectories.
The algorithm alternates between these steps until the imitation policies reproduce expert trajectories, signifying that the inferred constraints align with the ground-truth constraints.
# Experiments
The paper conducts experiments on the MMICRL algorithm in both discrete and continuous environments, using Constraint Violation Rate and Feasible Cumulative Rewards as evaluation metrics. In discrete environments based on Gridworld, MMICRL successfully identifies various agent types and generates accurate trajectories. In continuous environments based on MuJoCo, MMICRL consistently exhibits lower constraint violation rates and higher cumulative rewards, outperforming other baseline methods. A robustness test shows that MMICRL can recover from initial incorrect agent identifications, demonstrating its ability to accurately model diverse agent preferences and its robustness in handling errors.
Strengths: # Originality and significance
This work poses a unique problem setup of multi-modal ICRL, which nevertheless has substantial real-world impact for applications such as autonomous driving or human-robot interaction. The proposed method is sophisticated and innovative, requiring no supervision other than expert demonstrations.
# Quality
A comprehensive analysis, along with an ablation study, is conducted in comparison to various baselines, effectively validating the efficacy of the proposed method.
Weaknesses: - The number of agent types needs to be preset, and the numbers tested in the paper are limited (mostly 2 or 3). The results would be more significant if the effectiveness of the method could be demonstrated on more complex data at a larger scale.
- Please enlarge the font size in figure 3, and also adjust the opacity of the bars so that the overlapping distributions are both visible.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Should the summation in line 178 be a product instead?
- Why does the equation in line 181,
$p_{\psi}(z|\tau) = \frac{\exp \left(\prod_{(s,a) \in \tau} p_{\psi}(s,a|z)\right)}{\sum_{z'} \exp \left(\prod_{(s,a) \in \tau} p_{\psi}(s,a|z')\right)}$ hold? It would be great if the authors could elaborate more on the derivation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors acknowledge two valid limitations in their work. Firstly, their MMICRL method doesn't consider potential collaboration or competition between agents to meet joint constraints. Secondly, their evaluations are conducted in virtual environments, not real-world applications, due to the absence of a suitable ICRL benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. Your insights have been valuable to our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns.
1. *"The number of agent types needs to be preset, and such number tested in the paper has been limited (mostly 2 or 3). The results would be more significant if the effectiveness of the method could be demonstrated with more complex data of larger scale."*
**Our Response**. We have supplemented our findings with further experimental results, enclosed in the attached PDF. These include an increase in the number of ground-truth constraints and a study examining the model's performance when the number of ground-truth constraints and the preset number of agent types are mismatched. These additional results provide clearer insight into our MMICRL's performance.
2. *"Please enlarge the font size in Figure 3, and also adjust the opacity of the bars so that the overlapping distributions are both visible."*
**Our response.** We appreciate your kind suggestions and we have revised the manuscript accordingly.
3. *"Should the summation in line 178 be a production instead?"*
**Our response.** Thank you for your careful reading of our manuscript. It should be a production in line 178. We apologize for any confusion caused and appreciate the valuable suggestions.
4. *"Why does the equation in line 181, hold? It would be great if the authors could elaborate more on the derivation."*
**Our response.** We're thankful for your suggestions. Indeed, this equation utilizes the softmax representation. We've developed an agent-conditioned density model, $p(\tau|z)=\prod_{(s,a)\in\tau} p(s,a|z)$, to calculate the density of state-action pairs. However, the agent identifier in MMICRL uses expert trajectories as input to predict the most likely agent generating these trajectories. We need to convert the density estimate to the agent identifier $p_\phi(z|\tau)$, a transformation commonly achieved via the softmax function.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for their response.
For Q1, the supplemented materials do not address my concern that the algorithm is only demonstrated with a very limited number of agent types (fewer than 4). In realistic scenarios, this number could be substantially larger, in which case it is not clear whether the algorithm can perform well.
For Q4, given the likelihood $p(\tau|z)=\prod_{(s,a)\in\tau} p(s,a|z)$, the posterior $p_\phi(z|\tau)$ should be given by Bayes' Theorem, which does not involve the exponential, considering that $p(\tau|z)$ is not a logit score but already a probability. I'd suggest that the authors further clarify this in the article, or change the notation and not call the softmax quantity the posterior $p_\phi(z|\tau)$.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer, thanks again for taking the time to review our paper and providing valuable feedback. We greatly appreciate your important concerns and suggestions. We hope these concerns are addressable with the following points:
1. "*In realistic scenarios, this number could be substantially larger, in which case it is not clear whether the algorithm can perform well*"
- **Response**. We would like to clarify that the additional experiments in the realistic environment were conducted primarily to address the concerns raised by other reviewers, who requested more real-world studies. These were not intended to respond to Q1. We apologize for any confusion our previous communication may have caused.
In fact, real-world environments pose more challenges, requiring longer training times. Due to the constraint of providing a response within a week, we only considered distance constraints of 20m and 40m to validate the algorithm's applicability in autonomous driving scenarios. Nonetheless, our algorithm has the capability to handle a wider range of agent types (see "Experiments with a Larger Variety of Agent Types.").
2. "*For Q1, the supplemented materials do not address my concern that the algorithm is only demonstrated with a very limited number of agent types (fewer than 4).*"
- **Response**. In the paper and the additional experiments in the attached PDF, we have investigated the performance of MMICRL in environments featuring 2, 3, and 4 distinct agent types. While we acknowledge the potential benefits of expanding this scope to encompass 5, 6, 7, and so forth, it's worth noting that the primary objective of this paper is to introduce a framework that facilitates ICRL from multiple expert types. In similar contexts, earlier research on IRL or imitation learning [10] typically involved **a maximum of 3 agent types** in their experiments.
Considering that increasing the number of agent types in the demonstration dataset will increase the difficulty of "unsupervised agent identification", it is important to **explore the scalability limits** of the algorithm by increasing the number of agent types. Although we **have initiated this study and anticipate including additional results**, it is important to clarify that this investigation is not the **primary objective** of our current work.
[10] Li, Yunzhu, Jiaming Song, and Stefano Ermon. "Infogail: Interpretable imitation learning from visual demonstrations." Advances in neural information processing systems 30 (2017).
3. "*For Q4, given the likelihood $p(\tau|z)=\prod_{(s,a)\in\tau} p(s,a|z)$, the posterior $p_\phi(z|\tau)$ should be given by Bayes' Theorem, which does not involve the exponential, considering that is not a logit score but already a probability.*"
- **Response**. Thank you for your insightful suggestion. In fact, we can employ Bayes' Theorem in our formulation, expressed as:
$$p_\phi (z|\tau) = \frac{p(z)\cdot p_\phi(\tau|z)}{\sum_z p(z)\cdot p_\phi(\tau|z)}$$
where $p(z) = \frac{1}{|\mathcal{Z}|}$ denotes a uniform prior. This representation is similar to our "softmax", but without the exponential term in both numerator and denominator. It's important to note that in our formula, we leverage the probability as the logit for the softmax function. This is done to ensure consistency with the max operator we utilize in line 184, with which we construct the identified dataset.
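The numeric difference between the two representations can be seen with a small sketch (hypothetical likelihood values, purely illustrative): under a uniform prior, Bayes' theorem reduces to normalizing the likelihoods, while using the probabilities themselves as softmax logits yields a flatter distribution with the same argmax — which is all the max operator in line 184 requires.

```python
import math

# Hypothetical per-trajectory likelihoods p(tau|z) for |Z| = 3 agent types.
lik = [0.6, 0.3, 0.1]

# Bayes' theorem with a uniform prior p(z) = 1/|Z|: the prior cancels,
# leaving a simple normalization of the likelihoods.
bayes = [l / sum(lik) for l in lik]

# The "softmax" form discussed above instead treats the probabilities as logits.
exps = [math.exp(l) for l in lik]
softmax = [e / sum(exps) for e in exps]

print(bayes)    # approximately [0.6, 0.3, 0.1]
print(softmax)  # flatter distribution, but the argmax agrees
```

Because both forms rank the agent types identically here, the identified dataset built via the max operator is unchanged; only the calibration of the posterior differs.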
In response to your feedback, we'll enhance our paper by **incorporating the Bayesian representation**. Additionally, we will try to redefine the term 'softmax' to avoid any potential confusion. | Rebuttal 1:
Rebuttal: Dear Reviewers, Area Chairs, and Program Chairs,
We sincerely appreciate the valuable comments and suggestions provided, as they have been instrumental in enhancing our work. In light of your feedback, we have improved our work with detailed clarifications, comprehensive explanations, and supplementary experimental results. To facilitate understanding and mitigate any potential ambiguity, we provide a summary of the major updates below:
1. **Validating the Robustness with Additional Experiments.** To address the concerns raised by reviewer ZL9H, we evaluate our model's performance across different numbers of agent types. The algorithm maintains its efficiency even when the preset number of agent types surpasses the actual number. However, a preset number smaller than the actual count may affect the performance of constraint inference. In response to reviewer KNkP's concerns, we test in a Gridworld environment with a larger number of agent types, where the algorithm consistently exhibits satisfying performance. The results can be found in the attached one-page PDF.
2. **Clarifying the Advantage of MMICRL.** As requested by reviewer ieEm, we clarify the distinctions between MMICRL and other methods, given that they were proposed for different tasks. Additionally, we provide a concise summary of the unique advantages offered by the MMICRL algorithm.
3. **Justifying Various Design Decisions**. We provide the complete conditions for Proposition 4.2. Additionally, we clarify our motivation for adopting the contrastive estimation method, which effectively balances reward maximization and density maximization.
4. **Responding to Queries About the Evaluation Process and Problem Formulation**. We detail our approach of allocating the identified agents to distinct expert agent types as part of the evaluation process. Besides, we explain why accounting for conflicting behaviors among expert trajectories through reward inference with IRL is a challenging task, which is why we believe MMICRL is an effective instrument for modeling the behaviors of multiple agents.
5. **Adding Results in Realistic Environments**. As suggested by reviewers ZL9H and Gwnr, we have added the experiment results of our MMICRL method in the realistic environment of autonomous driving on highways. We mainly study the car distance constraints (check results in the attached one-page PDF).
As far as we know, we are the first to differentiate multiple constraints corresponding to various types of agents. To demonstrate the generality of our method compared to other baselines, we examine the capability of MMICRL to accurately perform multi-modal constraint inference in both discrete and continuous environments. We believe that our benchmark can facilitate the development of more mature multi-modal ICRL algorithms in other meaningful settings.
Pdf: /pdf/000d4ae05edd0ce7da51d5d6f78c222b0999ca34.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Operator Learning with Neural Fields: Tackling PDEs on General Geometries | Accept (poster) | Summary: This paper creatively employs Implicit Neural Representations (INRs) for operator learning on irregular domains. This novel method benefits from INRs' capability to adaptively handle irregular grid distributions or irregular geometric areas, making it natural to learn the mapping from input to output in the INR's latent space. The primary experiments of the paper are on initial value problems, where the method shows a significant improvement. However, a notable challenge is that the introduction of INRs necessitates bi-level optimization, which potentially introduces instability and difficulty into the training process.
Strengths: 1. The proposed method is highly innovative.
2. The quality of writing is excellent, and the paper is easy to read.
3. As an empirical paper, the experimental results surpass the baselines.
Weaknesses: 1. The paper lacks a discussion of related work. Besides Geo-FNO and FFNO, transformer-based methods can also naturally handle irregular geometric areas. The authors should at least mention these works. Here are some references for consideration [1,2,3].
2. Operator learning tasks are not computationally demanding, so the paper should explore the method's stability and report error bars (at least for some cases).
3. More explanation or supplementation is needed concerning the stability of the bi-level optimization, the difficulty of training, and why second-order meta-learning is adopted.
Overall, this is a good paper. If the authors can clarify the issues mentioned, I am inclined to accept it.
Reference
1. Transformer for Partial Differential Equations' Operator Learning (https://arxiv.org/abs/2205.13671)
2. GNOT: A General Neural Operator Transformer for Operator Learning (https://arxiv.org/abs/2302.14376)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and have addressed the raised concerns below.
## Transformer-based methods
> [...] Besides Geo-FNO and FFNO, transformer-based methods can also naturally handle irregular geometric areas. The authors should at least mention these works.
We thank the reviewer for the references; we will add a discussion of these recent contributions and how they relate to other methods.
## Stability
> Operator learning tasks are not computationally demanding, so the paper should explore the method’s stability and report error bars (at least for some cases).
We have provided error bars in Table 1 of the PDF rebuttal page for the *Navier-Stokes* dataset, for CORAL and for the baselines. We compute the mean and standard deviation of the test MSE out of three runs with different seeds. Each seed induces a different parameter initialization for all the models and a different pair of train and test grids $X_{tr}$, $X_{te}$ for the subsampling experiments.
Note that the variance of all models increases when the subsampling ratio decreases, since the independent runs sample different training and test grids.
Additionally, training times for CORAL have been provided in Table 2 of the PDF rebuttal page.
## Optimization procedure
> More explanation or supplementation is needed concerning the stability of the bi-level optimization, the difficulty of training, and why second-order meta-learning is adopted.
The bi-level optimization, referred to throughout the paper as the inner loop and outer loop, is a crucial aspect of our method's success. In Figure 1 of the PDF rebuttal page we show a comparison between 1st-order and 2nd-order meta-learning for training the modulated INRs.
As can be seen, the 1st-order method struggles to find a good local optimum for the shared parameters of the modulated INR. In contrast, 2nd-order meta-learning is more stable and converges consistently to similarly-performing local optima across different runs. The intuition for stabilizing the meta-learning training is to use a large learning rate for the inner-loop update (1e-2) and a much smaller one for the outer loop (from 1e-4 to 1e-6). Increasing the batch size also helps stabilize the training. Overall, this increases the number of epochs required to achieve a good reconstruction of the fields but favors smooth training.
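A scalar toy example can illustrate why 1st- and 2nd-order meta-gradients differ: the 2nd-order gradient differentiates through the inner-loop adaptation step, while the 1st-order one treats the adapted code as a constant. This is a hypothetical sketch — the quadratic loss, values, and step sizes are made up for illustration — not the paper's training code:

```python
import numpy as np

def inner_loss(theta, z, y):
    # toy "reconstruction" loss of a linear decoder with shared weight theta
    # and per-sample code z
    return 0.5 * (theta * z - y) ** 2

def adapt(theta, z0, y, inner_lr=0.1):
    # one inner-loop gradient step on the code (the auto-decoding step)
    grad_z = theta * (theta * z0 - y)
    return z0 - inner_lr * grad_z

def outer_loss(theta, z0, y):
    # outer objective: loss evaluated at the adapted code
    return inner_loss(theta, adapt(theta, z0, y), y)

theta, z0, y = 1.5, 0.0, 2.0
z1 = adapt(theta, z0, y)

# 1st-order meta-gradient: ignore how the adapted code depends on theta
g_first = z1 * (theta * z1 - y)

# 2nd-order meta-gradient: differentiate through the inner step
# (central finite differences stand in for autograd)
eps = 1e-6
g_second = (outer_loss(theta + eps, z0, y) - outer_loss(theta - eps, z0, y)) / (2 * eps)
# the two estimates disagree noticeably, which is why the choice matters
```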
The intuition to stabilize the meta-learning training is to have a large learning rate for the inner-loop update (1e-2), and a much smaller one for the outer-loop (from 1e-4 to 1e-6). Increasing the batch size also helps stabilizing the training. Overall, this increases the number of epochs required to achieve a good reconstruction of the fields but favors a smooth training. | Summary: This paper proposes a method for solving PDEs with neural networks in continuous space (and optionally time as well). The authors leverage the success of coordinate-based neural networks (or implicit neural representations – INRs) and formulate the problem as an operator learning one, akin to the Neural Operator family, bypassing the need for discretisation. In particular, the method is based on a typical encode-process-decode strategy:
1. *Encode*: the input condition of the PDE (i.e. the initial value in an initial value problem – IVP) is projected to the input latent vector (via gradient-based optimisation), which is then transformed (via an MLP) to a latent modulation of the input INR (SIREN), such that the INR is fitted to the input condition (auto-decoding).
2. *Process*: the input latent vector is transformed (via an MLP when projecting to a single output or via a neural ODE when solving over continuous time) to the output one.
3. *Decode*: the output latent vector is transformed (via an MLP) to a latent modulation of the output INR, which encodes the output signal and can then be queried at arbitrary spatial positions. To promote training stability, the authors employ a two-step process: first, the parameters of the input/output INRs are fitted via a meta-learning strategy, similar to Dupont et al., ICML’22, using a shared INR across all input conditions and learning both the parameters of the INR and the latent vectors simultaneously and efficiently. Then, the parameters of the encode-process-decode pipeline are learnt by optimisation in the latent space.
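The encode-process-decode pipeline summarized above can be caricatured in a few lines. Here a fixed linear feature map stands in for the SIREN INR, and plain gradient descent stands in for the meta-learned auto-decoding — every component below is a simplification for illustration, not CORAL's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # fixed coordinate features standing in for a (much richer) SIREN INR
    return np.stack([np.ones_like(x), x, np.sin(3 * x)], axis=-1)

def decode(W, z, x):
    # query the "INR" at arbitrary coordinates x: W is shared across samples,
    # z is the per-sample latent code (modulation)
    return features(x) @ W @ z

def encode(W, x_obs, u_obs, steps=500):
    # auto-decoding: fit the code z to the observed signal by gradient descent
    M = features(x_obs) @ W
    lr = len(x_obs) / np.linalg.norm(M, 2) ** 2  # safe step for this quadratic loss
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        z -= lr * M.T @ (M @ z - u_obs) / len(x_obs)
    return z

W = rng.normal(size=(3, 4))          # shared INR weights (meta-learned in CORAL)
A = 0.1 * rng.normal(size=(4, 4))    # latent "processor" (learned in CORAL)

x_obs = rng.uniform(-1, 1, size=32)  # irregular observation grid
u_obs = np.sin(3 * x_obs)            # input condition sampled on that grid

z_in = encode(W, x_obs, u_obs)                     # 1. encode
z_out = A @ z_in                                   # 2. process in latent space
u_query = decode(W, z_out, np.linspace(-1, 1, 7))  # 3. decode at any coordinates
```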
The method is evaluated on several PDE tasks, showing competitive results against other continuous neural solvers.
Strengths: - **Clarity and presentation quality**. The paper is well-presented, has a good flow and guides the reader smoothly throughout the important ideas. Also, it is mathematically clean and concise. Finally, it is mostly well-contextualised with the relevant literature.
- **Simplicity and Scalability (inference)**. Although the method is less theoretically motivated compared to Neural Operators, it builds on top of existing tools in the INR literature, making it easy to implement. In addition, forecasting happens in a compressed version of the physical process (a latent vector), which improves scalability.
- **Novelty and Significance**. The paper proposes a new framework for continuous simulations, departing from the Neural Operator family, advancing the field further, in terms of efficiency and performance. The claims are empirically evaluated on a broad spectrum of PDE tasks (from IVPs and dynamics modelling to geometric design and optimisation) showcasing competitive generalisation performance across input conditions, resolutions and time horizons (for trajectory unrolls).
Weaknesses: - **Connections with Dupont et al., ICML’22**. First off, it is important to acknowledge that the method, from a technical point of view, is largely based on the “functa” concept of Dupont et al., ICML’22. The authors do cite this work, but not as prominently as one would expect. Although the aforementioned paper focuses more abstractly on function representation learning, and less on PDE solvers, the current work is a natural follow-up, so I believe that the connections between the two should be stated more clearly.
- **Efficiency and ease of training**. The training process might be slow since it happens in two steps. Although the second step might be relatively fast since it happens in the latent space, the first one requires fitting INRs to all possible inputs and outputs in the training set. The meta-learning strategy might potentially alleviate this problem, but the authors have not discussed this matter. Could the authors provide more details on this step? How long does it take? Does it require extensive hyperparameter tuning? How sensitive are the results to different hyperparameters?
- **Potential risk of overfitting (w.r.t. input/output signals not coordinates)**: Experimentally, it seems from the paper that the method generalises well across different input conditions, including the particularly challenging problem of trajectory unrolls. Let me explain why I am a bit surprised by this:
- I am puzzled about the inductive biases of the encoding process (Eq. (1)). In particular, it is unclear what are the hidden assumptions regarding the input/output signals. Think of a CNN encoder for example – in that case, we would silently assume signal stationarity and shift invariance (due to the application of the same filter across different regions), locality, etc. On the other hand, the corresponding assumptions of the auto-decoder are unclear, even in an intuitive manner. I would appreciate it if the authors could comment on this, and it might be also useful for future readers.
- Given the above, I am wondering how well the encoding process, and more importantly the entire pipeline, generalises across different input signals. In particular, in case the encoding process overfits, then small errors in the predictions of the latent codes (especially when unrolling trajectories where errors accumulate), might result in regions of the latent space “unknown” to the decoder, and therefore to unpredictable outcomes. To clarify this, it might be useful to discuss the distributions of the input conditions used. Are they challenging enough? How important is the Z-score normalisation (Appendix B.1.2.)?
- My concern about this also comes from the fact that only a latent-space objective is minimised, which in principle does not rule out the possibility of small errors in the latent space to translate to large errors in the signal space (e.g. assume that the INR decoder has a large Lipschitz constant).
- **Empirical evaluation and comparison against relevant methods**: I think there is room for improvement in the experimental section (extra discussion and comparisons) in order to make the claims more convincing. In particular,
- It would be helpful to also include discrete neural solver baselines (e.g. based on CNNs, as in Stachenfeld, ICLR’22, or mesh GNNs, as in Pfaff et al., ICLR’21).
- In addition, it would be nice to see a more thorough discussion regarding the comparison between continuous neural solvers (MPPDE, Neural Operators etc.). It is unclear to me why and when the current method performs better. For example, intuitively I don’t understand why MPPDE is dependent/overfits on the training grid. Have the authors considered comparing against the Graph/Multipole Graph Neural Operator methods (NeurIPS’20)? Does CORAL have any stronger inductive biases compared to Neural Operators? Could the authors elaborate on the above?
- Regarding the dynamics modelling experiments, I have a few questions: (1) Have the authors compared against autoregressive one-step predictors (e.g. as in the MPPDE paper, or in Sanchez-Gonzalez et al., ICML’20)? Obviously, the NeuralODE formulation allows for evaluation across arbitrary timestamps, but I am wondering if there is any disadvantage in this approach. (2) Have the authors considered comparing to discrete autoregressive solvers, such as the LE-PDE paper (NeurIPS’22)? (3) In certain papers (e.g. MPPDE and Sanchez-Gonzalez et al., ICML’20) the authors use a noise trick to stabilise trajectory unrolls and mitigate error accumulation. I am curious if the authors used it here as well, and if not why?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The major questions that I would like to be clarified and discussed are included in the "Weaknesses" section. Below, I enlist a few extra minor comments:
**Minor comments**:
- Could the authors explain their inference-time findings? Why is CORAL faster than DINO and MPPDE and slower than DeepONet and FNO? Why does CORAL scale better (almost constant time regardless of the resolution)? Why didn’t the authors include results for FNO across all resolution setups?
- Perhaps it would have been beneficial to also test on more challenging setups such as the turbulent regime of Navier-Stokes (potentially to identify limitations and failure cases).
- A recent paper that improves upon prior work and might be useful to also include as a baseline is the “Clifford neural layers for PDE modelling”, Brandstetter et al., ICLR’23.
- L72: “infinite-dimensional functions“: This should be rephrased - maybe mappings between functions (infinite-dimensional vectors)?
- L76: Regarding DeepONet, if I am not mistaken it is required to have the same observation grid only for training and not for testing.
- L265: RK4 – maybe explain that you are referring to fourth-order Runge-Kutta.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have included a section discussing the limitations of their work and were upfront about them. A few things remain to be clarified (e.g. time and tuning required for training, generalisation across input conditions, and more challenging experimental PDE setups). No foreseeable negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We value the reviewer's extensive and detailed feedback and address the raised concerns below.
>Connections with Dupont et al., ICML’22.
The work by Dupont indeed inspired the INR part of our contribution, and will be better acknowledged in the final version.
> Efficiency and ease of training.
Certainly, training the INR is the most time-consuming step, while time integration by the solver is fast. We provide additional details on the training time in Table 2 of the PDF rebuttal page. Note that accelerating INR training is an active research topic. Besides, we found that training is relatively robust to the choice of hyperparameters. Key parameters include the SIREN frequency $\omega_0$, the code size $d_z$, and the SIREN depth.
> Potential risk of overfitting
This is a great question which deserves further investigation. For now we can only answer intuitively. INRs encode global image representations in a spectral form [1] so that the role of the code is to modulate this frequential representation. [2] perform some experiments showing that each dimension of the code induces some frequential pattern in the reconstructed image.
In a simplifying view, we could think of the code features corresponding to coefficients in a spectral basis ([2]). Then in CORAL the solver can be thought of as operating on this coordinate space. If the flow is smooth and regular the associated trajectory could be more easily learned.
How is this achieved in CORAL? CORAL is constrained (i) to operate in low-dimensional code spaces, and (ii) by meta-learning, which for each image adapts the codes from a shared initial value to a final value in only a few (3 in our experiments) gradient steps. This avoids overfitting and constrains the code manifold to be smooth and low-dimensional. For example, in a forecasting experiment, two successive images will have close representations in the code space, and the code trajectories are expected to be regular. This strategy enforces the robustness of the trajectories to small variations, which helps train the neural ODE and could explain the good behavior in extrapolation.
[1] A Structured Dictionary Perspective on Implicit Neural Representations. Yüce et al. CVPR 2022
[2] From data to functa: Your data point is a function and you can treat it like one. Dupont et al. ICML 2022.
## Baselines
> Inclusion of discrete neural solver baselines.
We would like to clarify that we did experiment with discrete solvers. The graph-based encode-process-decode architecture was proposed in Pfaff 2021 and re-used in an updated version in MP-PDE, which is one of our baselines. These methods are discrete in both the spatial and time dimensions. We found through preliminary experiments that MP-PDE was SOTA among GNN solvers, which motivated our choice. CNNs were not compared because they require regular grids.
> Thorough discussion on continuous neural solvers comparison.
MP-PDE, FNO and CORAL all perform well in many settings and have their respective strengths and weaknesses. Our point is that CORAL is a general framework that (i) handles multiple tasks, (ii) in a mesh-free setting, (iii) in a reduced latent space, and generalizes well across multiple conditions. MP-PDE's mesh dependence hinders generalization under drastic mesh changes or sparse sub-sampling. As for FNO, being based on the FFT, it is limited to regular grids. The Graph/Multipole Graph Neural Operator was unstable in our runs and was excluded from the baselines.
> (1) Comparison against autoregressive one-step predictors.
Yes, all the experiments involving dynamics modeling are performed with autoregressive one-step predictors for all the models.
> (2) Comparison to discrete autoregressive solvers.
Maybe this was not clear enough. There are two dynamics involved in forecasting: one is autoregressive, $u_t = g(u_{t-1})$, and the second is how we proceed from $u_{t-1}$ to $u_t$. The solver is involved in the latter computation: a 4-step method like RK4 requires 4 iterations, whereas a simple forward Euler step requires only 1. In other words, the Neural ODE generalizes the basic AR formulation.
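The distinction can be made concrete with a toy latent dynamics: the autoregressive map $g$ is one solver step, and RK4 simply spends four evaluations of the vector field per step where Euler spends one. The dynamics below (a harmonic oscillator) is an arbitrary stand-in for the learned network:

```python
import numpy as np

def f(z):
    # illustrative latent dynamics (in practice a learned network)
    return np.array([z[1], -z[0]])  # harmonic oscillator in code space

def euler_step(z, dt):
    # 1 evaluation of f per autoregressive step
    return z + dt * f(z)

def rk4_step(z, dt):
    # 4 evaluations of f per autoregressive step
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def unroll(step, z0, dt, n):
    # the autoregressive dynamics z_t = g(z_{t-1}), with g one solver step
    z = z0
    for _ in range(n):
        z = step(z, dt)
    return z

z0 = np.array([1.0, 0.0])
dt, n = 0.1, 100                      # integrate to t = 10
z_euler = unroll(euler_step, z0, dt, n)
z_rk4 = unroll(rk4_step, z0, dt, n)
exact = np.array([np.cos(10.0), -np.sin(10.0)])
```

On this example, RK4 tracks the exact trajectory closely while forward Euler visibly drifts, at four times the cost per step.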
> (3) Noise trick for trajectory unrolls.
Our NeuralODE training employed exponential sampling, initially using ground truth codes as inputs. We didn't use additional tricks.
## Minor comments
> Inference-time analysis.
At test time, DINO's adaptation requires 100x more optimization steps. MP-PDE's computation is slow due to the KNN graph creation and slower message passing. DeepONet and FNO are faster since they only require a forward computation. CORAL's encoding/decoding is relatively resolution-insensitive and is performed in parallel across all sensors. The processor operates on a fixed dimension independent of the resolution.
FNO is fast thanks to the FFT but cannot be used on irregular grids, hence its previous omission. We have included additional linear-interpolation + FNO results in Table 1, with the corresponding inference times in Figure 2 of the PDF rebuttal page.
> Turbulent regime of NS.
Yes, we agree, but this deserves a specific treatment and probably alternative models. Initial tests with large Reynolds numbers have shown an order-of-magnitude performance decrease for all the models.
> Clifford neural layers for PDE modeling.
This paper focuses on modeling the interactions between the physical fields and the variables of the model when they are intrinsically connected. We focused solely on model comparisons without Clifford algebra. But we agree that this would be an interesting extension for several PDE families.
> L76: DeepONet's observation grid.
At test time, DeepONet requires the same number of sensors in their input to the Branch Net, preferably matching the training sensors. However, DeepONet can infer fields at any query coordinate through its trunk-net, akin to an INR.
> L72: Infinite-dimensional rephrasing and L265: Clarification of RK4.
Adjusted accordingly; your feedback is appreciated.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Dear Authors,
Thank you for your reply. Most of my concerns have been addressed. I will keep my score unchanged and vote for acceptance, as per my initial review (well-presented work, simple to implement, works well in practice, a new framework for continuous simulations).
However, I would like to point out that the underlying reasons that lead to the success of this method, as well as its limitations, are still unclear to me. For example, the authors do mention that in the turbulent regime of NS the performance deteriorates substantially, but they have not discussed what are the input condition distributions that are currently tested, so as to understand the complexity of the tasks to be solved. Moreover, since there is no theoretical evidence and the inductive biases of the INR auto-decoding are hard to grasp, I think that there is a considerable gap in our understanding. I would encourage the authors to try to provide more intuition in an updated version (e.g. including the intuitive explanation given in the “Potential risk of overfitting” paragraph of their rebuttal) and investigate this further in the future.
Some minor concerns that were not addressed and would be good to discuss in an updated version:
- Looking at Table 2 it is apparent that the INR training creates a large computational burden in the overall pipeline, with the processing in the latent space being only a small fraction of the overall training time. Could you compare the overall training times with those of the baselines?
- It is claimed that training is relatively robust to the hyperparameters. I would suggest adding some quantitative results in an updated version and discussing how the hyperparameters are chosen (is it by looking at the training error?).
- The authors have not discussed the following questions:
- “To clarify this, it might be useful to discuss the distributions of the input conditions used. Are they challenging enough? How important is the Z-score normalisation (Appendix B.1.2.)?”
- “My concern about this also comes from the fact that only a latent-space objective is minimised, which in principle does not rule out the possibility of small errors in the latent space to translate to large errors in the signal space (e.g. assume that the INR decoder has a large Lipschitz constant)”
- Could the authors clarify what “exponential sampling” is?
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: Dear Reviewer,
Thank you for your insightful feedback and your careful consideration of our work. We're delighted to learn that most of your concerns have been addressed and that you are inclined to maintain your initial score while supporting acceptance. We will aim to offer more intuition behind the success of our method in the final version and reserve the exploration for more theoretical explanations for further investigation.
## Minor Comments:
* **Training time comparison**: In the final version, we will include a comparison of training times for all the baselines used in the dynamics modeling experiment. CORAL took slightly more time to train than DINO and MPPDE, with similar orders of magnitude in the context of *Navier-Stokes* and *Shallow-Water*. On the other hand, FNO and DeepONet benefited from faster training.
* **Selecting hyperparameters**: Recognizing the relevance of this aspect for potential users, we will incorporate a quantitative section that separately studies the impact of key hyperparameters on training and test errors. Initially, hyperparameters are determined based on training error, while validation utilizes extrapolation loss on the training set.
* **Challenges with input distributions**: The input distributions originate from the PDE-pushforward of a certain distribution (e.g. gaussian for NS), i.e. rather than utilizing directly sampled random realizations as inputs at time $t=0$, we leverage solutions from numerical solvers at time $t=T_1$. We describe the conditions used to generate the data in Appendix A.
* **Z-score normalization**: Due to codes being obtained from only a few gradient descent steps, their standard deviation is generally quite small. Consequently, they cannot be used as such for training Neural ODE / MLP. Normalizing the code is essential to obtain a meaningful input for the processor.
* **Training in latent space**: Indeed, concerns can arise with latent-space architectures. Empirically, we observed that the above-mentioned code normalization effectively mitigates this concern, as a small loss over the normalized codes led to a small forecasting error.
* **Term clarification**: Apologies for any confusion; the correct term is "scheduled sampling with exponential decay." Appendix B.1.4 outlines its behavior.
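For readers unfamiliar with the term, scheduled sampling with exponential decay feeds the ground-truth code with a probability that decays over training, gradually exposing the model to its own predictions. A hedged sketch — the decay rate `k` and the `model_step` interface are illustrative choices, not the paper's values:

```python
import random

def teacher_forcing_prob(epoch, k=0.99):
    # exponentially decaying probability of feeding the ground-truth code
    # (k is an illustrative decay rate)
    return k ** epoch

def unroll_with_scheduled_sampling(model_step, gt_codes, epoch, rng=random.random):
    # autoregressive unroll: at each step, use the ground-truth previous code
    # with probability teacher_forcing_prob(epoch), else the model's own output
    preds = [gt_codes[0]]
    for t in range(1, len(gt_codes)):
        use_gt = rng() < teacher_forcing_prob(epoch)
        prev = gt_codes[t - 1] if use_gt else preds[-1]
        preds.append(model_step(prev))
    return preds
```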
We appreciate your meticulous review and the constructive suggestions you've provided. We are dedicated to incorporating these refinements in the updated version. | Summary: This work tried to introduce an implicit neural field-based framework for solving PDEs for different applications on general geometries. The method represents the input and output function spaces as an implicit neural representation. The method consists of a two-stage pipeline: first, the authors train two auto-decoders over the input and output function domains. Then, using paired data, they learn a mapping between the spatial latent code between domains. The paper shows results in initial value problem, dynamics, and predicting PDE solution from different geometry.
I think the paper is tackling a very important problem - generalizing operator and neural field methods to solving PDEs with different geometry (in the boundary/shape). And I also like the approach to this problem, as they leveraged neural field techniques to achieve a discretization-free effect and operator techniques to allow relatively fast inference time. However, the presentation of the paper confuses me in several places (see weakness section), and that holds me back from full acceptance on the first read.
Strengths: - The method is discretization-free. It doesn't require the user to discretize the domain, which can affect the simulation quality. The method only needs to run optimization with an auto-decoder, which is capable of taking partial observations and predicting a continuous function that satisfies them.
- This method has shown advantages compared to many operator methods in that it doesn't require operations such as the Fourier transform, which requires regular samples in the geometry. This allows the method to potentially work with different domain geometries, as well as different observation densities.
Weaknesses: 1. Can this method be used for inverse design? The experiment named "geometry design" is a bit misleading, as it hints at designing the geometry (the domain). But to the best of my reading, the geometric design experiment is set up to predict the PDE solution for different geometries, rather than to design the domain geometry. It would be great if the authors could add a note on this in the supplementary.
2. It's not clear to me how the method actually generalizes to different geometry - I might be missing something here. For example, how does the method encode where the boundary of the geometry is? It occurs to me that all functions are defined in the ambient space of the physics domain. This makes it unclear to me how the effect of the domain boundary on the solution gets encoded. For example, suppose we have a bunch of samples (x_i, u_0(x_i)) that indicate the initial condition of an IVP, but we set two different boundary conditions - one circles these samples with a sphere and the other with a rectangular box. The current method seems to make the same prediction, as the latent code is inferred from the set of samples (x_i, u_0(x_i)), but I can't believe that's the case in the real world - i.e., that the boundary has no effect on the solution.
3. Justification for the requirement of simulated data. The system still requires paired data for training, similar to neural operators, which is more supervision than required by PINN or neural field methods (which only require the PDE definition). I wonder how to justify this additional data?
4. Risk of overclaiming. I think "general geometries" has become very vaguely defined throughout the paper. I'm under the impression that the method is able to solve equations with different boundary conditions and different geometry of the boundary. But it's not clear to me how the current method is able to handle different boundary conditions; it's also not clear to me how the method takes into account the different geometry of the simulation domain. For example, if we have the same initialization function but a different shape, would the latent code obtained from test-time optimization of the auto-decoder be the same or different? If at test time we want to change the boundary condition of the problem, how do I specify this? In general, I think the proper claim of the method is that it allows the neural operator method to work with irregular samples, but every trained model might still be restricted to solving the same set of problems (maybe the same boundary geometry) but with different initializations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Section 5 discussed limitation
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're thankful for the reviewer's helpful feedback and have addressed the raised concerns below.
## Application to inverse design
> Can this method be used for Inverse design?
We agree; this will be clarified in the final version. As you mention, the geometric design section in the core paper amounts to predicting PDE solutions under diverse geometric settings. However, once trained, the model could be used for "true" inverse design. This is indicated in l.358 of the paper, with preliminary experiments on the design of airfoil shapes in Appendix D and Fig. 8.
## Operator Learning setting
> Justification for the requirement of simulated data.
These are two different settings targeting two different problems, considered separately in the literature. Ours is a purely data-based approach (as for FNO, DeepONet, MP-PDE, etc.); the assumption is that no prior explicit knowledge of the physical phenomenon is available (hence no PDE equation, no explicit Boundary Condition (BC), etc.) and only data is available for training. The model is trained on a number of contexts (initial values, etc.) and is expected to generalize to different but similar contexts.
PINNs aim at replacing traditional solvers for explicitly known PDEs and are considered a data-free approach. Moreover, a vanilla PINN is trained for one single condition (initial value and boundary conditions) and does not generalize to different conditions.
## Geometries and meshes
> It's not clear to me how the method actually generalizes to different geometries.
This is a purely data-based approach where no prior knowledge of the PDE is incorporated into the framework. Therefore, the PDE conditions (initial or boundary) are not stated analytically and are never used as such. Consider an Initial Value Problem (IVP) as exemplified by the cylinder scenario (Figure 1). When presented with two different obstacles, our method uses the initial condition on two distinct meshes, which differ at the obstacle boundary. This leads to a different sampling for the INR encoding, and one obtains a different code in each situation. The processor operating on the corresponding representation will output two different values. Our approach does not presume that the boundary has no influence on the solution. Instead, it captures these distinctions through the mesh differences and, as a result, in the INR codes. This means that for each situation, a unique code is acquired due to the distinct mesh configurations at the obstacle boundary. Regarding the question of whether a model trained on a certain geometry distribution can generalize to entirely different geometries (e.g., circles vs. squares), we acknowledge that this aspect was not explored within our current investigation. It remains an interesting avenue for future exploration.
> Risk of overclaiming.
We will clarify the claim in the paper. As indicated above, the boundary conditions are provided by the mesh definition and the corresponding sampling for learning the INR. Consider, for example, the airfoil (NACA-Euler) experiment in Section 4.3 and Figs. 1 and 13 (the same holds for Figs. 14 and 15). The model is trained on multiple shapes (BCs) and evaluated at test time on similar but different airfoil shapes. In this sense, the model is able to handle different geometries at test time. For any new example (airfoil shape), the learned code will be different, thus encoding the geometry. Note that this is also what allows us to perform inverse design (Appendix D.1).
---
Rebuttal Comment 1.1:
Title: Thanks for the response!
Comment: Thanks authors for the response! The replies address some of my confusion.
In general, I think the paper proposes a system that combines the advantages of neural field representations with the advantages of neural operators. This can be of interest to the NeurIPS audience. I will keep my acceptance rating.
---
Reply to Comment 1.1.1:
Title: Thank you for the comments!
Comment: We appreciate your positive feedback. Please don't hesitate to reach out if you have further suggestions or questions. | Summary: In this paper, the authors propose to use neural fields in operator learning. The CORAL model first encodes the input into codes, applies an MLP on the codes, and then decodes into the output function. Since the neural field can be continuously evaluated, the CORAL model can be applied to general geometries.
Strengths: The paper uses neural representations in operator learning, which is a very natural choice and worth investigating. The proposed method shows promising results on time extrapolation and geometric design problems.
Weaknesses: The overall framework is quite simple. The encode-decode structure has been studied with PCA bases [1] and Fourier bases [2], which are also applicable to general geometries. Compared to these well-studied conventional bases, neural representations usually use a higher number of parameters, which, in the end, makes the encoding unstable.
In the CORAL framework, the latent code is obtained using gradient descent. However, it is possible to get very different latent codes given the same input. Would such non-uniqueness cause any issues?
[1] Bhattacharya, Kaushik, et al. "Model reduction and neural networks for parametric PDEs." The SMAI journal of computational mathematics 7 (2021): 121-157.
[2] Fanaskov, Vladimir, and Ivan Oseledets. "Spectral neural operators." arXiv preprint arXiv:2205.10573 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the dimension/size of the latent code?
One of the major advantages of CORAL is time extrapolation. Do the authors have some insight into why neural representations do better on this task?
In the experiments, the authors compared CORAL with another neural-representation-based method, DINO. How does CORAL compare to DINO in model design? Does it also use an encoder and decoder? It would be great to address DINO in the related work section as well.
In neural representation, many recent works use context grids and voxels to enhance the representation. Is it possible to complement CORAL with a grid-based representation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper did not address its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and have addressed the raised concerns below.
## CORAL framework
> The overall framework is quite simple.
CORAL adopts a classical encode-process-decode paradigm for learning operators. However, training an auto-decoder presents challenges, necessitating robust regularization techniques. The principal strength of CORAL lies in its auto-decoding mechanism, which is stabilized through careful optimization design: (i) CORAL encodes fields within a compact, low-dimensional code space, and (ii) it leverages meta-learning optimization to adapt codes within a limited number of steps (3 in the conducted experiments). This strategy averts overfitting and ensures the development of a smooth, low-dimensional code manifold. We performed additional experiments to illustrate the stability of the learned encoding. Results provided in Fig. 1 of the rebuttal PDF page show that the encoding is indeed extremely stable.
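To make the auto-decoding idea concrete, here is a minimal illustrative sketch (an editorial simplification, not the authors' code): a latent code is adapted to a new field by a few gradient steps on a reconstruction loss while the decoder stays frozen. The linear "decoder", learning rate, and seed are all hypothetical stand-ins for the actual INR setup.

```python
import numpy as np

# Illustrative sketch of auto-decoding (not the paper's code): a latent
# code z is fitted to a new input field by a few gradient steps on the
# reconstruction loss, with the decoder weights frozen.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # frozen linear "decoder" standing in for an INR
target = rng.normal(size=8)      # samples of the field to encode

z = np.zeros(4)                  # code starts from a shared initialization
lr = 0.02
for _ in range(3):               # only a few inner steps, as in the rebuttal
    residual = W @ z - target
    z -= lr * (W.T @ residual)   # gradient of 0.5 * ||W z - target||^2

# The reconstruction error decreases relative to the zero-code start.
print(np.linalg.norm(W @ z - target) < np.linalg.norm(target))  # True
```

The point of the sketch is only that a handful of inner gradient steps from a shared initialization already improve the fit, which is why a consistent initialization tends to yield consistent codes.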
> In the CORAL framework, the latent code is obtained using gradient descent. However, it is possible to get very different latent codes given the same input. Would such non-uniqueness cause any issues?
As with any non-convex optimization problem, the solution obtained by the INR method could correspond to different local optima depending on the initial conditions and the optimization path taken.
In all of our conducted experiments, we use a shared encoder for the fields associated with a particular PDE, along with consistent initialization and optimization techniques. As a result, for a given input, a unique solution code is obtained, and similar inputs tend to yield close codes within the latent space. We provide the inference pseudo-code in Algorithm 4 in Appendix B.1.3.
> What is the dimension/size of the latent code?
The dimension of the latent code is detailed in Appendix B.1.4 in Tables 4 and 5. We use 256 for *Shallow-Water* and 128 for all the other datasets.
> One of the major advantages of CORAL is time extrapolation. Do the authors have some insight into why neural representations do better on this task?
Across all models, time extrapolation is realized through an auto-regressive framework. In contrast to models like DeepONet, FNO, and MP-PDE, CORAL operates within a reduced representation space. DINO also operates in a reduced space, but we hypothesize that the CORAL optimization methodology contributes to the smoothing of the modulation space, thereby facilitating the training of the NeuralODE component. Within CORAL, the dynamics' representation space is deliberately constrained, leading to a concentration of codes within a compact, low-dimensional manifold. This design choice enables comprehensive exploration of potential trajectories during training of the NeuralODE, resulting in enhanced generalization beyond the training horizon.
## CORAL vs DINO
> How does CORAL compare to DINO in model design?
DINO also operates within an encode-process-decode framework. However, our model diverges from DINO in several aspects encompassing design, training, and objectives. In terms of objective, while DINO is tailored for dynamic spatio-temporal forecasting, requiring inputs and outputs to share the same space, CORAL functions as an operator capable of mapping input functional spaces to distinct output functional spaces. This flexibility is demonstrated experimentally in Section 4.1 (Initial Value Problem) and Section 4.3 (Geometric Design).
Regarding design and training, our model exhibits key variations driven by preliminary trials. DINO employs a Multiplicative Fourier Network as its INR backbone, whereas CORAL relies on a SIREN architecture. Moreover, the optimization approach in CORAL is second-order, enabling code adaptation within a few gradient descent iterations (3 steps in contrast to the roughly 100 steps in DINO).
## Possible extensions with context-grid / voxels
> Is it possible to complement CORAL with a grid-based representation?
While it is possible to enhance CORAL's training efficiency on 3D data using grid-based representations, such as those seen in [1] or [2], it remains a challenge to derive an effective grid-based representation optimized for dynamics modeling within the CORAL framework. An alternative grid- or voxel-based enhancement involves segmenting the input space and fitting distinct neural fields for each subspace, effectively reducing the parameter count while preserving finer representation nuances [3, 4]. Incorporating these strategies into CORAL could involve creating a new latent representation by concatenating the latent codes obtained from the individual smaller neural fields.
References:
[1] Variable Bitrate Neural Fields. Takikawa et al. SIGGRAPH 2022.
[2] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. Müller et al. SIGGRAPH 2022.
[3] Neural Sparse Voxel Fields. Liu et al. NeurIPS 2020.
[4] MINER: Multiscale Implicit Neural Representations. Saragadam et al. ECCV 2022.
## Limitations
> The paper did not address its limitations.
Limitations are discussed in Section 5 of the paper. Thanks to the reviewers' feedback, we will also discuss the potential risks of overfitting. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their comments and suggestions. We carefully answered all the questions raised by each reviewer. We have also added new experimental results following the suggestions of the reviewers concerning:
* Consolidated results with error bars on *Navier-Stokes* (Reviewer K5kw)
* Stability of the training of the modulated INR with a 1st order vs 2nd order meta-learning on *Navier-Stokes*. (Reviewer K5kw)
* Additional FNO + linear interpolation results with irregular grids on *Navier-Stokes*. (Reviewer 5RAL)
* Additional inference time on FNO + linear interpolation (Reviewer 5RAL)
* Additional information on the training time of CORAL (Reviewer 5RAL)
These new results are summarized in the PDF rebuttal page.
Pdf: /pdf/4f5b4a509cb6ee23031064c19d13301bc233d4cd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Train 'n Trade: Foundations of Parameter Markets | Accept (poster) | Summary: This paper introduces a novel concept called *parameter markets*, which serves as a platform for exchanging parameters learned by machine learning models. In this framework, agents have the option to engage in parameter trading, to achieve (1) mutual benefits through collaboration or (2) monetary gains by simply selling these parameters. The key contributions of this paper involve the formulation and design of these parameter markets. This encompasses defining the value of parameters, establishing pricing mechanisms, and outlining how transactions can occur. The authors explore both competitive and collaborative scenarios and illustrate the advantages agents can obtain by opting in to trade through the marketplace.
Strengths: 1. The idea to trade parameters is novel (to my best knowledge) and is quite different from data markets. The key benefit from parameter markets is that agents can train models for their own tasks and can yet benefit from training runs of each other. This is a very relevant problem to study.
2. The formulation is thorough and the empirical experiments are adequate to demonstrate the claims made in the paper
3. The paper is well written and easy to follow.
Weaknesses: 1. This work is a bit existential - this framework works if *model privacy and alignment are assured*.
2. Theorem 4.1 relies on the broker having access to $\theta^*$ which is a strong assumption.
3. There is no discussion relating to incentives (i.e., agents misreporting their valuations to benefit from the trades). Perhaps this paper could benefit from adding a mechanism design angle to this framework.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Refer to Weaknesses.
2. It is unclear why the Cobb-Douglas utility function is used.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: To my best knowledge, there isn't an immediate potential negative societal impact of this work. The paper, however, doesn't have an explicit weakness section (relating to its framework).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer Mu3U
We are grateful for the review, the kind words, and the positive assessment. We address your questions and concerns below.
* **On assurance of model privacy and alignment.**
* This is true! We do not view this as a limitation, however. Vast research resources are being expended on assuring privacy [1, 2], and alignment can be performed [3, 4, 5, 6]. The motivation of our work is to discover, _if_ these concerns can be resolved, whether parameter marketplaces are viable at all. We believe that an answer to this question, whether positive or negative, is extremely valuable.
* We note, in addition, that the progress required in alignment and privacy may not be out of reach. According to the findings presented in Sec. 5.1, while trading with model alignment yields optimal outcomes, basic convex interpolation is also sufficient to achieve quicker convergence and enhancements. Furthermore, in our trading system, parameters are secured and only exchanged if both parties settle on an agreed-upon price, thus ensuring a form of privacy in the market.
* **On accessing true parameters.**
* We agree! Having knowledge of the true parameter $\theta^*$ to reveal the gain-from-trade $\Delta^t_u = ||\dot\theta^t_u - \theta^*||^2_2 / ||\bar\theta^t_u - \theta^*||^2_2$ is a strong assumption. However, the result using it is only a specific example intended to assist sellers in assessing the transaction and to provide theoretical insights (Theorem 4.2). In practice, **there are a variety of ways to determine the value of a trade** based on factors such as the agent's preference, resource budget, and relative performance improvement. Furthermore, for more practical purposes, we present a generalized convergence analysis in Appendix D.2, where the gain-from-trade is calculated by subtracting the empirical loss after a trade from the loss before it, $\Delta^t_u = \hat{L}(\dot\theta^t_u) - \hat{L}(\bar\theta^t_u)$, removing the need for the broker to possess knowledge of the true parameter $\theta^*$.
* **On agent’s incentives and honesty.**
* Thank you! We note another finding in Sec. 5.2, where the seller estimates the value solely using the lower bound found according to Theorem 4.1. The resulting negotiated price creates a discrepancy with the price in the optimal scenario, where both parties report their valuations truthfully without any estimation, **highlighting the significance of revealing accurate parameter values and justifying the need for incentives.**
* **On the perspective of mechanism design.**
* Thank you for your suggestion! This is an interesting approach to strengthen our work, and we will include this in our updated draft.
* **On the clarity of Cobb-Douglas utility function.**
* The idea behind price negotiation is to split a fixed joint surplus ($U_a \times U_b$) while maximizing the revenue for both parties equally (since $U_a$ and $U_b$ are raised to the same power). The Cobb-Douglas utility function is ideal for this purpose. In addition, it has several convenient mathematical properties. For instance, it is strictly quasiconcave, which means it has a unique maximizer and an optimal solution. Moreover, it exhibits a diminishing marginal rate of substitution as the negotiated price changes. Lastly, when expressed in log form, it serves as a first-order Taylor approximation of a convex production function, thereby eliminating the need for extensive knowledge of the underlying production function.
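As a numerical illustration of this point (an editorial toy example with hypothetical valuations, not taken from the paper), a symmetric Cobb-Douglas objective $U_a \times U_b$ with simple linear utilities splits the joint surplus equally between the two parties:

```python
import numpy as np

# Toy sketch of Nash bargaining with a symmetric Cobb-Douglas objective.
# Buyer utility U_b = v_b - p, seller utility U_s = p - v_s; the values
# v_b and v_s are hypothetical.
v_b, v_s = 10.0, 4.0

prices = np.linspace(v_s, v_b, 601)        # candidate prices, step 0.01
surplus = (v_b - prices) * (prices - v_s)  # joint surplus U_b * U_s
p_star = prices[np.argmax(surplus)]

# The symmetric objective is maximized at p* = (v_b + v_s) / 2,
# i.e. both parties receive an equal share of the surplus.
print(p_star)  # 7.0
```

Because both utilities carry the same exponent, the maximizer sits at the midpoint of the two valuations, which is the "equal split" behavior the rebuttal describes.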
[1] Zhu et al, "Privacy-preserving Decentralized Federated Deep Learning", ACM TURC 2021. \
[2] Pasquini et al, "On the (In)security of Peer-to-Peer Decentralized Machine Learning", IEEE Symposium on Security and Privacy (SP), 2023. \
[3] Stoica et al, "ZipIt! Merging Models from Different Tasks without Training", arXiv:2305.03053. \
[4] Wortsman et al, "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time", ICML 2022. \
[5] Singh and Jaggi, "Model Fusion via Optimal Transport", NeurIPS 2020. \
[6] Qin et al, "Exploring Mode Connectivity for Pre-trained Language Models", EMNLP 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response!
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal! We are excited to incorporate the finding regarding agent's incentives and clarity of utility function into the paper. We are eager to provide any further clarifications or address any additional questions. Thank you! | Summary: * This paper proposes an economic framework for trading parameters of prediction models.
* Interaction is modeled as a brokered marketplace. Each agent $u$ trains a model characterized by parameters $\theta_u$ using gradient descent. At each time-step $t$:
* Each agent performs a gradient descent step on the previous parameters $\theta_u^{t-1}$, to obtain $\dot{\theta}_u^t$.
* After the GD step, agents relay their model parameters $\dot{\theta}_u^t$ to a trusted broker, which calculates the optimal linear interpolation between them, yielding a possibly improved combination $\bar{\theta}_u^t$ for each agent. The broker indicates the gain-from-trade $\Delta_u^t$ to each agent.
* The agents participate in Nash bargaining to decide on the value exchange and set trading prices, and $\theta^t_u$ is decided. Each agent has a valuation function $v_u$ for their own parameters and for the other agent's parameters. The utility function is assumed to be fully known by the broker.
* Analysis:
* Analysis is restricted to the two-agent setting $u\in\{A,B\}$, and a specific form for gain-from-trade $\Delta_u^t$ is assumed. It is assumed that each seller has complete knowledge of the valuation prior, such that Myerson’s revenue-maximizing pricing mechanism (virtual valuations) can be applied based on the information relayed from the broker ($\alpha$, $\beta$, $\Delta_a^t$ - By Theorem 4.1).
* It is assumed that the broker has complete knowledge of the optimal model parameters in advance ($\theta^*$). Combining with the assumptions about the agents’ utility function, a closed-form expression is obtained for the monetary transfer after Nash bargaining.
* In Section 4.1, two upper bounds on convergence rates are presented - One for a scenario where agent $B$ always trades (Theorem 4.2), and one for a scenario where both agents don’t trade (Proposition 4.3). The “always-trade” bound (Thm 4.2) is lower, and the authors claim that this explains why trading is sometimes beneficial.
* Two sets of experiments are presented:
* The first set of experiments is an evaluation of the data-sharing setting, without taking parameters into account (Section 5.1.1 - complete networks, Section 5.1.2 - subsets of networks), on real-world vision datasets (MNIST, CIFAR10, TinyImageNet).
* Finally, the value of trading is evaluated using a synthetic model, and positive results are reported for both cooperative and competitive scenarios.
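The broker's interpolation step summarized above can be sketched as follows (an illustrative stand-in, not the authors' implementation; the quadratic loss and parameter values are hypothetical):

```python
import numpy as np

# Sketch of the broker's role: search for the convex combination of two
# agents' parameters that minimizes an agent's empirical loss.
def best_interpolation(theta_a, theta_b, loss, grid=101):
    lams = np.linspace(0.0, 1.0, grid)
    candidates = [(1 - lam) * theta_a + lam * theta_b for lam in lams]
    losses = [loss(c) for c in candidates]
    best = int(np.argmin(losses))
    return lams[best], candidates[best]

theta_a = np.array([0.0, 0.0])                 # agent A's parameters
theta_b = np.array([2.0, 2.0])                 # agent B's parameters
optimum = np.array([1.0, 1.0])                 # hypothetical loss minimizer
loss_a = lambda th: float(np.sum((th - optimum) ** 2))

lam, theta_bar = best_interpolation(theta_a, theta_b, loss_a)
print(lam)  # 0.5
```

In this toy case the best merge is the midpoint; in general the broker would evaluate the merged candidates on held-out data for each agent separately.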
Strengths: * Problem is well-motivated.
* Leverages recent deep insights about the structure of contemporary learning problems.
* The breadth of work is substantial, and it contains a number of new definitions, theoretical analyses, and empirical evaluations.
* Code is provided.
Weaknesses: * Some key assumptions are not well supported:
* Valuation functions $v(\theta)$ are assumed to be fully known by the broker. For contemporary neural nets, the dimensionality of $\theta$ could be on the order of billions (or even trillions, as in GPT-4), and therefore the parameter valuation would be a function $v:\mathbb{R}^{10^9}\to\mathbb{R}$, which may be prohibitively expensive to represent, store, and compute.
* The broker is assumed to know the true model parameters $\theta^*$ in advance, which in practice may render the whole learning process unnecessary. In L214, it is claimed that knowing $\theta^*$ "is not necessary in practice", but I did not find further support for this claim.
* Very specific functional forms are assumed for gain-from-trade but are not sufficiently justified.
* A common prior on agent valuations is assumed (L208); however, it is not discussed whether having such a prior is realistic or how it would be implemented.
* Mathematical soundness concern regarding one of the claims: In Section 4.1, the authors claim that trading is beneficial by comparing two performance upper bounds (Theorem 4.2, Proposition 4.3). However, the difference between two upper bounds does not indicate the relation between the true quantities unless the bounds are tight (in other words, proving that $a<100$ and $b<50$ does not indicate that $a>b$ unless there are matching lower bounds for $a$ and $b$). Therefore, if this observation is correct, the analysis in Section 4.1 does not address the posed question (whether participation leads to better convergence).
* The first set of experiments (Sec. 5.1.1, 5.1.2) only seems to demonstrate the ability to interpolate model parameters, but does not contain a simulation of parameter trading. Economic behavior is only demonstrated on the synthetic dataset. As the benefit of parameter interpolation has already been demonstrated in the literature to some extent, I did not understand the contribution of the results in Sections 5.1.1 and 5.1.2 to the understanding of parameter-trading markets.
* The paper only analyzes the case of two agents $\{A,B\}$. It is unclear how the results extend to multiple agents.
* Trading parameters during the gradient descent process would require agents to train models simultaneously, or suffer significant delays due to trading. This may make such trading impractical.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * How would the model extend to more than 2 agents? Which results can be used as-is, and which need to be generalized? Would the system behave qualitatively differently in any way when the number of agents is increased?
* What are the computational requirements from the broker?
* Is it possible to quantify the economic implications of the broker not being able to interpolate successfully?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I feel that limitations could be discussed at more depth. The key assumptions are added gradually throughout the paper, and it is hard to keep track. I feel that the paper can greatly benefit from a thorough discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer Tv8J
Thank you for your clear summary and thoughtful review! We appreciate the kind words. We answer your questions below and include two new experimental results: multi-agent market and asynchronous parameter trading.
* **On valuation function.**
* Thanks for pointing this out. In fact, the valuation function is not exposed to anyone in the market (see Ln 173 - Ln 175); only the valuation of a trade (a scalar) is disclosed to the broker for negotiation purposes. Additionally, the valuation function quantifies the value of parameter sets based on the gain-from-trade, which is written in terms of losses, for example in Ln 161. Hence, agents do not need to plug in high-dimensional parameter sets to compute the value. They value a potential trade by knowing the gain-from-trade.
* **On knowing the true parameter.**
* We agree that knowing the true parameter $\theta^*$ is a strong assumption. However, it is only used for the sake of offering a concrete example of assisting sellers in assessing the transaction and providing theoretical insights (Theorem 4.2). In practice, **there are a variety of ways to determine the value of a trade** based on factors such as preference, budget, and relative returns. For more practical purposes, we present a generalized convergence analysis in Appendix D.2, where we remove the need for the broker to possess knowledge of $\theta^*$.
* **On the form of gain-from-trade.**
* The gain-from-trade in our case study can be seen as the relative improvement if a trade is made. It should be noted that the gain-from-trade can take different forms and **is not limited to a specific definition**. For instance, it can be defined as the difference between an agent's loss before and after trading (as seen in Ln 161). Excitingly, **any method that quantifies the benefit of a trade by comparing performance before and after** can be generalized into our pricing mechanism.
* **On common prior.**
* The common prior (Ln 205): the buyer sets a price arising from the gain-from-trade, and this price is drawn from a probability distribution. **Both assumptions are inspired by Myerson's work on auctions and are designed to help sellers estimate the values of trades**. The first is reasonable, since all agents consider the gain-from-trade when assessing a trade. Second, the notion that the buyer's price is drawn from a probability distribution is standard in economic modeling [1]. We clarify this in our updated draft.
* **On the upper bound of convergence rate.**
* We appreciate the comment. We have reviewed and revised our draft accordingly. While a smaller upper bound at the end round $T$ may not necessarily imply faster convergence, participating in the market provides agents with sufficient incentive and the assurance of achieving a smaller bound in the worst-case scenario. The difference arises from the presence of the gain-from-trade. We note that this is not unusual: comparing upper bounds on convergence rates (worst cases) is standard in optimization.
* **On the contribution of Sec. 5.1.1 and Sec. 5.1.2.**
* To the best of our knowledge, previous research has mainly focused on merging entire trained models at the end of the training process [2, 3], but there has been limited investigation into aligning parameters or subsets of parameters during training. This is a crucial factor in establishing a functional marketplace, and it is worthwhile to investigate its potential benefits. Hence, we demonstrate the feasibility of parameter trading while training in Sec. 5.1.1 and 5.1.2.
* **On extension to multiple agents.**
* For the sake of simplicity and to demonstrate viability, we presented a two-agent market, which is a commonly used economic model for studying many settings. Nevertheless, it is straightforward to include multiple agents. The trading logic can be generalized to look for trades that enable agents to make the largest advancements. To validate this, **we implemented a three-agent market and present detailed experimental setups** in our general response. Results show that the generalization functions as expected.
* **On asynchronous trading.**
* Thank you for bringing up this important point. Indeed, this area has been extensively studied in the distributed training literature [4]. The typical solution is to permit asynchrony. **We conducted asynchronous parameter trading and offer detailed experimental setups** in the general response. Results show that the parameter market allows delayed agent actions to enhance performance as well. Since the presence of the broker helps agents align parameters and optimize purchase weights, **the potential issues caused by asynchronous training can be mitigated**.
* **On computational requirements for broker.**
* The broker has two main computational needs. First, the broker needs to validate a model's performance through inference, which is generally less intensive than training. Second, the broker assists in aligning parameters for both parties using an approximation algorithm, which has been shown to perform efficiently in experiments. In other words, the **computational workload for the broker is not a significant issue and can be managed**.
* **On quantifying interpolation failure.**
* This is a great question. The weights purchased may not be useful for agents if they are built for very unrelated tasks, which can result in a negative gain-from-trade and lead to agents stopping their purchases, ultimately reducing market efficiency. To measure such failures, one can establish prior knowledge of task interdependence. As mentioned in Section 5.1.4, the advantages of trading diminish when the tasks are unrelated.
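Several of the points above concern the gain-from-trade. As a small numeric illustration (an editorial toy example; the parameter vectors and quadratic loss are hypothetical), both forms mentioned in the rebuttal, the distance-ratio form requiring $\theta^*$ and the practical loss-difference form, report a positive gain when the merged parameters are better:

```python
import numpy as np

# Toy illustration of two ways to quantify the gain-from-trade.
theta_star = np.array([1.0, 2.0])   # true parameters (idealized setting)
theta_pre  = np.array([0.0, 0.0])   # agent's parameters before the trade
theta_post = np.array([0.5, 1.0])   # parameters after merging a purchase

# (a) Distance-ratio form, which requires knowing theta_star:
gain_ratio = (np.linalg.norm(theta_pre - theta_star) ** 2
              / np.linalg.norm(theta_post - theta_star) ** 2)

# (b) Loss-difference form, needing only an empirical loss:
def loss(theta):  # hypothetical quadratic loss around theta_star
    return float(np.sum((theta - theta_star) ** 2))

gain_diff = loss(theta_pre) - loss(theta_post)

# A ratio above 1 / a positive difference both indicate a beneficial trade.
print(gain_ratio > 1, gain_diff > 0)  # True True
```

With a quadratic loss the two forms agree on the sign of the gain; the loss-difference form is the one that avoids assuming access to the true parameters.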
[1] Chawla et al, "Algorithmic Pricing via Virtual Valuations", EC 2007. \
[2] Stoica et al, "ZipIt! Merging Models from Different Tasks without Training", arXiv:2305.03053. \
[3] Qin et al, "Exploring Mode Connectivity for Pre-trained Language Models", EMNLP 2022. \
[4] Wang et al, "Asynchronous Training Schemes in Distributed Learning with Time Delay", arXiv:2208.13154.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I appreciate the discussion, and the additional empirical results. I’m increasing my rating to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for increasing your score and engaging with our rebuttal. We are excited to include the new experimental results in the updated version of our paper. We would love to answer any additional questions! Thank you! | Summary: The paper investigates how to design a marketplace for model parameters. The marketplace consists of agents training models for potentially different objectives and a trusted third-party broker. The broker receives the model parameters from each agent, assesses (and informs each agent of) the loss achieved by the merged model, and helps determine prices using Nash bargaining.
The paper provides a theoretical and empirical analysis of this framework. In particular, they prove a bound on the gain from trade for linear models and they analyze the effectiveness of buying parameters in terms of improving training convergence. They empirically assess their framework on an image classification task. They validate that the merging method of the broker (Ainsworth et al., 2022) improves loss, and they evaluate the effectiveness of their pricing strategy.
Strengths: The idea of trading subsets of parameters in a marketplace seems novel and interesting. Parameter marketplaces seem to contrast with classical approaches of data marketplaces, where agents buy data from each other, or model marketplaces, where agents directly purchase models or model access.
Weaknesses: The proposed framework seems to rely on there being a trusted broker facilitating the trades and assessing and reporting the gains of merging parameters to the agents. In reality, this seems to be a strong assumption, as it is not clear if (a) a broker can be trusted with all of the parameters, and (b) if a broker will be able to assess the gain of merging models without significant data and infrastructure. See question below.
The technical contribution of the paper seems limited. The paper combines existing work on merging and aligning models (e.g. Ainsworth et al., 2022) and classical approaches for Nash bargaining. However, the paper seems to use these approaches out-of-box and does not seem to present significant new technical contributions.
The paper is a bit difficult to follow and the exposition could be improved. For example, the results in Section 4 use a lot of notation and the qualitative insights are a bit difficult to infer.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How might the framework be modified to avoid the reliance on a trusted broker, e.g., via a more decentralized approach?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer iN1r
We are grateful for your review and for describing our framework as novel and interesting. We address your questions in the response below and have updated our paper!
* **On trusting brokers.**
* Indeed, having a reliable broker is **essential for any trading market**, just as it is in real-life scenarios (for example, stockbrokers perform huge numbers of trades, and large-scale escrow organizations facilitate billion-dollar transactions). One area of interest is how to motivate third parties to play this role, a question frequently studied by economists and business researchers [1]; examples include research into how to set broker fees, what other types of incentives can be offered, and so on.
* We view this question as orthogonal to this work and instead simply presume that such brokers exist. This supposition is commonly employed and has been substantiated by a number of prior research papers on trading markets [2, 3, 4]. We note, in addition, that our market permits the trading of subsets of parameters and transfers parameters only if a trade is made, thereby safeguarding a certain degree of model confidentiality.
* **On broker infrastructure.**
* This is a great question! We agree that the broker's access to computational resources is an essential concern, but we believe it is not a major challenge. The reason is that **brokers can be paid from brokerage fees** to an extent sufficient to meet their computational needs. Note that these requirements need not be prohibitive: our trading framework does not ask the broker to train models, only to validate model performance through inference, which typically requires fewer resources [5]. Additionally, the amount of validation data needed need not be large [6]. Moreover, the alignment approach we employ is computationally efficient in general. As a result, the computational requirements for the broker are manageable.
* **On contributions.**
* Our focus is on developing a novel trading framework that contributes to creating parameter markets and demonstrating the feasibility of trading sets of parameters. The techniques used in this framework are flexible and can include more alignment approaches [7, 8, 9, 10] and be generalized to any other utility functions for price negotiation [11]. Our findings are important: they suggest that **the entire approach of parameter trading is viable**. Indeed, this is the major contribution of our work.
* In addition, through empirical evidence, we have found that trading parameter sets can also lead to faster convergence and improved model performance (refer to Sec. 5.1.2)---a further contribution. Finally, as far as we are aware, this is the first work that successfully monetizes parameter sets and treats them as commodities to be traded. Such a market can be used in the current large language model community: by engaging in parameter trading, individuals or start-ups are able to purchase pre-trained parameter sets instead of building from scratch.
* **On clarifying results and takeaways.**
* Sec. 4 offers concrete examples of market instances. We highlight two main results and have clarified this further in our draft.
* **How does a seller value a trade?** We study the case of linear models and help the seller provide a valuation to the broker by using a virtual valuation. Theorem 4.1 reveals that the seller can estimate their own valuation by bounding the buyer's valuation based on information disclosed by the broker. We empirically validate the success of the price-estimation procedure for monetizing parameters in Sec. 5.2.
* **How effective is buying parameters from a theoretical perspective?** To investigate the effectiveness of parameter trading, we offer a convergence analysis. Theorem 4.2 states that more gain-from-trade in the market results in faster convergence, which supports agents' incentives to participate in the market. Additionally, we offer a generalized convergence analysis with L-smooth functions in Appendix D.2, which likewise shows that more gain-from-trade leads to quicker convergence. These results are corroborated by the empirical evidence in Sec. 5.1.
* **On avoiding a trusted broker through decentralization.**
* We agree! A decentralized approach can indeed eliminate the need for a centralized broker. However, there are downsides to this approach as well: we would require new ways to motivate decentralized agents to perform the work, which may cut against our goal of minimizing expenses for individuals. Nevertheless, we think this is an interesting question and leave it for future investigation.
[1] Yavas, "Economics of Brokerage: An Overview", Journal of Real Estate Literature, Vol. 2, No. 2, 1994. \
[2] Liu et al, "Dealer: An End-to-End Model Marketplace with Differential Privacy", Proceedings of the VLDB Endowment, Vol. 14, No. 6, 2021. \
[3] Azcoitia and Laoutaris, "Try Before You Buy: a Practical Data Purchasing Algorithm for Real-World Data Marketplaces", DE 2022. \
[4] Chen et al, "Towards Model-based Pricing for Machine Learning in a Data Marketplace", SIGMOD 2019. \
[5] Aminabadi et al, "DeepSpeed- Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale", SC 2022. \
[6] Kossen et al, "Active Testing: Sample–Efficient Model Evaluation", ICML 2021. \
[7] Stoica et al, "ZipIt! Merging Models from Different Tasks without Training", arXiv:2305.03053. \
[8] Wortsman et al, "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time", ICML 2022. \
[9] Singh and Jaggi, "Model Fusion via Optimal Transport", NeurIPS 2022. \
[10] Qin et al, "Exploring Mode Connectivity for Pre-trained Language Models", EMNLP 2022. \
[11] McFadden, "Constant Elasticity of Substitution Production Functions", The Review of Economic Studies, Vol. 30, No. 2, 1963.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for responding to my questions. I appreciate the response and its discussion of the trusted broker assumption and the broker infrastructure. I have raised my score from a 3 (reject) to a 4 (borderline reject).
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your rating---we appreciate your engagement with our rebuttal! We have included new experimental results and added a discussion section on the study of brokers, trust, incentives, and infrastructure in our updated draft. We would like to ask whether there are any further questions we can answer. Thanks you! | Summary: The paper proposes a framework for collaborative and competitive parameter trading among deep learning agents. The authors conduct experiments to validate the effectiveness of the proposed framework in improving the performance of the agents. The experiments show that even when the agents are training on different tasks, they can still benefit from trading parameters. The authors also validate the proposed pricing mechanism in a competitive scenario. The paper makes contributions in demonstrating the potential of parameter trading to improve the performance of deep learning agents and providing a framework for collaborative and competitive trading.
Strengths: - The paper proposes a novel framework for collaborative and competitive parameter trading
- The experimental results are promising.
- The experiments show that even when the agents are training on different tasks, they can still benefit from trading parameters.
Weaknesses: - The authors do not provide an analysis of the limitations of the proposed framework, which could help identify potential issues in real-world scenarios.
- The experiments could benefit from a more extensive evaluation of the proposed framework's performance by comparing it with more baselines.
- The paper could benefit from a more detailed explanation of the proposed pricing mechanism and its implementation.
- The authors do not discuss the potential ethical implications of parameter trading, which could be relevant in real-world applications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why is parameter trading necessary?
- How to resolve the legal concerns on copyrights?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More limitations should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer 91tn
Thank you for finding our framework novel and the results to be encouraging, particularly in terms of trading parameters with different purposes. We appreciate your thoughtful review!
* **On limitations.**
* There are two primary limitations to building viable parameter markets. We discuss these in the paper and have included additional clarifications. First, the reliability of the broker is crucial to the success of the market. The broker must report trade gains/losses without any bias to ensure the market remains efficient. One limitation of our work is the assumption that some entity is willing to serve as a broker with these properties.
* On the other hand, such a limitation may not be substantial. In fact, a similar requirement is accepted and used in practice in other settings, such as (non-machine learning) markets [1].
* An additional limitation is that while agents are allowed to train models using various model architectures for a range of downstream tasks, the parameter sets to be traded require a certain alignment. We believe this limitation is not too severe: it can be overcome, for example, by projecting the parameters onto the same space for merging [2].
* **On a new baseline.**
* Thank you for your suggestion. We have conducted another baseline test (FedAvg) and describe the setup in the general response. Results are presented in Table 1 in our attachment, and they confirm the significance of having a trusted broker in parameter trading. Without an intermediary broker to facilitate the trade, the performance of purchased weights is negatively impacted, as evidenced by the CIFAR10 + ResNet20 results.
* **On further explanation of the pricing mechanism.**
* The discussion of pricing mechanisms is in Sec. 3.4, along with Sec. 4, where we provide a practical example to instantiate the market. We include additional explanations here and have updated our draft.
* The steps are as follows. Once agents become aware of the gain-from-trade information disclosed by the broker, they assess the value of the parameters and negotiate prices accordingly. To negotiate prices, the broker solves a type of Nash bargaining problem. There are several ways to assess parameters, such as according to the agent's preference, training budget, and relative improvements. To provide a concrete study of how a seller should value a trade, we study trading in linear models and introduce a virtual valuation for sellers. Finally, the buyer's valuation can be bounded, enabling the seller to provide an estimated asking price for negotiation purposes.
* **On potential ethical implications.**
* Under the conditions we require for our framework, we do not foresee ethical concerns around this work. However, if some of the assumptions we make are violated, we could envision certain ethical implications. For example, suppose that the broker acts adversarially, rather than being a neutral third party as intended. In this case, the broker could potentially use models from both parties, could misinform one party of their potential benefit and engage in front-running, etc. We have added a discussion of these possibilities to our updated draft.
* **On why parameter trading is necessary.**
* Our motivation follows from the fact that the costs of building high-quality ML models have become prohibitively high, leaving only a few well-capitalized organizations in a position to build such models. Indeed, we see this in the space of large pre-trained models, where all other individuals can at best fine-tune or adapt these base models. **Our goal is to take a step towards reducing such costs**. The most intuitive approach is to perform distributed training. However, this would require fully open-sourced models, a requirement many users may be unwilling to accept.
* How can we reduce costs, even when models aren't open-sourced and organizations may even be competing with each other? **Our solution is a type of parameter trading that can benefit all parties**. Buyers can purchase pre-trained parameter sets instead of building from scratch, reducing training expenses and leveraging the expertise of others. Sellers can profit from selling their parameters as a secondary source of income. The most exciting aspect of this work is demonstrating that this simple solution works: we prove the effectiveness of parameter trading, showing that buyers enjoy faster convergence of their losses. Sec. 5.2 demonstrates the success of our pricing mechanism and the opportunities for sellers to earn a profit.
* **On legal concerns with copyright.**
* This is a great question! In fact, one of our motivations is precisely _the fact that trading data is challenging due to the state of digital property rights_. **Parameter trading, on the other hand, does not suffer from the same limitations**, at least under many present legal frameworks. That is, parameters are typically owned by the organizations or individuals training their models. Because of this, voluntarily trading parameters does not require breaking any type of copyright law. The broker is the only additional party with access to the parameters.
* One potential additional concern is how to prevent out-of-bounds parameter usage (i.e., how to ensure a buyer does not publicly release the purchased parameters or sell them to another unauthorized party). There is a wide array of existing research on this question. Applicable tools enable users to track the purchased weights by creating transaction hashes [3] and watermarks [4]. We discuss these in our updated draft.
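The broker-mediated Nash bargaining step described in this rebuttal can be illustrated with a minimal sketch. The scalar valuations, grid search, and function name below are hypothetical stand-ins for exposition, not the paper's virtual-valuation machinery:

```python
# Sketch of Nash bargaining over a trade price: the broker picks the price
# maximizing the product of the two agents' gains over their disagreement
# points (here, zero gain if no trade happens). All values are illustrative.

def nash_bargaining_price(buyer_value, seller_value, grid_size=10_001):
    """Grid-search the price p in [seller_value, buyer_value] maximizing
    the Nash product (buyer_value - p) * (p - seller_value)."""
    assert buyer_value >= seller_value, "no gain-from-trade"
    lo, hi = seller_value, buyer_value
    best_p, best_obj = lo, -1.0
    for k in range(grid_size):
        p = lo + (hi - lo) * k / (grid_size - 1)
        obj = (buyer_value - p) * (p - seller_value)
        if obj > best_obj:
            best_p, best_obj = p, obj
    return best_p

# With symmetric bargaining power the solution is the midpoint valuation.
p = nash_bargaining_price(buyer_value=10.0, seller_value=4.0)
assert abs(p - 7.0) < 1e-6
```

With equal bargaining weights, the Nash product is a downward parabola in the price, so the optimum splits the surplus evenly; asymmetric weights would shift the split accordingly.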
[1] Yavas, "Economics of Brokerage: An Overview", Journal of Real Estate Literature, Vol. 2, No. 2, 1994. \
[2] Khanuja et al, "MergeDistill: Merging Language Models using Pre-trained Distillation", ACL-IJCNLP, 2021. \
[3] Lawrenz et al, "Blockchain Technology as an Approach for Data Marketplaces", ICBCT 2019. \
[4] Boenisch, "A Systematic Review on Model Watermarking for Neural Networks", arXiv:2009.12153.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful review and suggestions regarding framework limitations and baselines. Our response discusses limitations and we have included new baselines. We are eager to provide any further clarifications or address any additional questions. | Rebuttal 1:
Rebuttal: ### General Response
We are grateful for all the comments and constructive feedback on our work. Reviewers consistently commented that our proposed trading framework is **novel and well-motivated**. Reviewers 91tn, Tv8J, and Mu3U described our experimental results as **promising and substantial**. In our revision, we adopted several reviewer-suggested clarifications and performed additional experiments, leading to a much stronger draft.
Additional experiments include:
* **A new baseline** [Reviewer 91tn]: we provide another baseline test (FedAvg [1]), which assumes that no broker is involved in a trade to help agents align parameters and optimize their purchased weights. In the FedAvg method, the interpolation weight is determined by the portion of data assets an agent is endowed with, which is 0.5 in the case of our two-agent trading example. The results are presented in Table 1 in our one-page response and confirm **the significance of having a trusted broker** in parameter trading: without an intermediary broker to facilitate the trade, the performance of purchased weights can be negatively impacted, as evidenced by the CIFAR10 + ResNet20 results.
* **Multi-agent markets** [Reviewer Tv8J]: we generalize our setting to involve more agents, where the broker helps agents seek the parameters whose purchase brings the largest gain-from-trade. We implement a three-agent market by reusing the same data endowments for Agent $A$ and Agent $B$ and display performance results in Table 2. Agent $C$ collects 10% of the data points from classes 3–7 in MNIST and CIFAR10, and 10% of the data points from classes 50–149. In the three-agent market, the results also match expectations: the proposed parameter trading helps agents achieve higher accuracy than conventional model training without trading (out-of-market).
* **Asynchronous parameter trading** [Reviewer Tv8J]: we conduct a new experiment on asynchronous parameter trading. Both agents train the model for 60 epochs in total. Agent $B$ trains for 5 epochs, trades in the market for 50 epochs, and then trains for the remaining 5 epochs. Agent $A$, on the other hand, is instructed to delay trading: Agent $A$ trains for 10 epochs and then trades in the market for 50 epochs. In Table 3, our results demonstrate that the parameter market allows both agents to enhance their performance, with trading in alignment yielding optimal accuracy as expected. This emphasizes the crucial role of brokers in aligning parameters and optimizing purchased weights to eliminate differences **not only in synchronous but also in asynchronous parameter trading**. Note as well that in the paper, agents train for 5 epochs and then trade synchronously for 55 epochs, which provides an additional 5 epochs of training and trading; this explains the degraded performance of Agent $B$.
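The FedAvg-style merging used in the new baseline above can be sketched as follows; the parameter-dictionary format and function name are illustrative assumptions, not the experiment's actual implementation:

```python
# Minimal sketch of the FedAvg baseline: with no broker to align parameters,
# two agents simply interpolate their weights coordinate-wise, with the
# interpolation coefficient set by data shares (0.5 for two equal agents).
# Tensor names and shapes are toy examples, not from the paper.

def fedavg_merge(params_a, params_b, share_a=0.5):
    """Coordinate-wise interpolation of two parameter dictionaries."""
    assert set(params_a) == set(params_b), "architectures must match"
    share_b = 1.0 - share_a
    return {
        name: [share_a * wa + share_b * wb
               for wa, wb in zip(params_a[name], params_b[name])]
        for name in params_a
    }

# Toy example: two one-layer "models" with a weight vector and a bias.
agent_a = {"w": [1.0, 2.0], "b": [0.0]}
agent_b = {"w": [3.0, 4.0], "b": [1.0]}
merged = fedavg_merge(agent_a, agent_b)  # -> {"w": [2.0, 3.0], "b": [0.5]}
```

Because this averaging ignores permutation symmetries of the networks, it can degrade the merged model when the agents' weights are misaligned, which is consistent with the broker's alignment role stressed above.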
We have addressed reviewers' concerns and placed our comments in their respective threads below. Thank you for your questions and thoughtful reviews!
[1] McMahan et al, "Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS 2017.
Pdf: /pdf/b60abeb1e9d9c9eb4b2614e07c9e7354b167c676.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness | Accept (poster) | Summary: The paper considers the online portfolio selection problem. In this problem, one must allocate funds between d possible investment choices, with the goal of maximizing the total amount. In each round, the "success" of each choice is revealed, in the form of a ratio between new and old price, called price relative; these price relatives are adversarially chosen. The goal is to have low regret, in terms of the logarithm of total wealth, against any constant, fractional allocation of funds between the investment choices.
In general, where the number of rounds is T, existing work for this problem exhibits either polynomial running time in T and poly-logarithmic regret in T, or running time independent of T and regret polynomial in T (specifically, square root).
The paper considers a different approach, providing data-dependent bounds for the problem.
The paper presents three bounds w.r.t. the input: small-loss bound, w.r.t. the total loss of the optimal allocation; gradual-variation bound, w.r.t. the volatility of the gradients of the loss functions; and a second-order bound, w.r.t. some second-order statistics of the loss function. The first two appear in the body of the paper.
The paper presents an algorithm based on optimistic FTRL w.r.t. a log-barrier regularizer; the optimism refers to using an estimate for the upcoming loss function in choosing the allocation for the next time step. To my understanding, the algorithmic contribution of the paper is in the method for choosing the loss estimate.
Comments:
Line 33: Maybe "lower" instead of "faster"
Line 283: In the displayed equation, should be a_t instead of a
Strengths: The online portfolio selection is an interesting problem, and obtaining data-dependent bounds for this problem seems natural.
In that sense, the paper does a comprehensive job and provides bounds w.r.t. three parameters.
Weaknesses: I have a concern regarding the small-loss bound, addressed in the questions for rebuttal.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Regarding Assumption 1, the paper correctly states that this could be ensured by normalizing the price-relatives vector, without affecting regret. However, this does affect the total loss of the adversary, which comes into play in the small-loss bound in Theorem 6.2. If I understand correctly, the assumption is thus nontrivial to make, which weakens Theorem 6.2. Is this the case?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful review.
1. **Typos:** We will correct them. Thank you.
2. **Concern on the small-loss bound:** If the price relatives are not upper-bounded by 1, then the cumulative loss in the small-loss bound is defined with respect to the normalized price relatives. Hence, the assumption that the price relatives are upper-bounded by 1 does not weaken the regret bound. Thank you for pointing out this potential confusion. Because of the notation, we did not notice that the remark following Assumption 1 is insufficient. We will clarify this after Theorem 6.2 in the revision.
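The invariance invoked above admits a one-line check (a sketch of the standard argument; the notation here follows this discussion rather than the paper's exact symbols). Scaling the price relatives $a_t$ by any $c_t > 0$ shifts the log-wealth of the learner's portfolio $x_t$ and of the comparator $x^\star$ by the same $\sum_t \log c_t$, so the regret

$$
\sum_{t=1}^{T} \Big( \log\langle x^\star, c_t a_t \rangle - \log\langle x_t, c_t a_t \rangle \Big)
= \sum_{t=1}^{T} \Big( \log\langle x^\star, a_t \rangle - \log\langle x_t, a_t \rangle \Big)
$$

is unchanged, while the small-loss quantity $L_T^\star$ is then measured with respect to the normalized relatives.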
---
Rebuttal Comment 1.1:
Comment: Okay, thank you for clearing up this issue. I am raising my score from 6 to 7. | Summary: This work studies online portfolio selection (OPS) problem and establishes regret bounds that is square-root dependent on some data-dependent quantities, namely, the cumulative loss of the best action or the variation of the gradients respectively, without lipschitzness or smoothness assumption. Although previous studies obtain regret bounds with logarithmic dependence on the same data-dependent factors, they require the lipschitzness assumptions, which this work circumvents by investigating the problem carefully and proposing alternative local-norm lemmas as substitutes for lipschitzness and smoothness conditions. Furthermore, this work designs a novel optimistic FTRL algorithm with a self-concordant function to cooperate with the local-norm techniques and validates that the prediction can be resolved in $\tilde{O}(d)$ time pre round.
Strengths: 1. This paper leverages the inherent structure of the problem and derives the first data-dependent regret bounds without lipschitzness or smoothness assumptions for the OPS setting.
2. The proposed optimistic FTRL algorithm is novel and interesting, which facilitates the application of local-norm techniques with optimism for this problem.
3. This paper is clearly written and presented well.
Weaknesses: The OPS problem is fundamentally challenging, and I am glad to witness advancements in this field even if the results have not yet reached an optimal level. Below are some questions that emerged while I was reviewing the paper, and I would appreciate it if the authors could provide some feedback.
1. Ordinarily, one might anticipate that data-dependent bounds would demonstrate a marked advantage over minimax optimal results in best-case scenarios, such as when $V_T = 0$ or $L_T^\star = 0$. However, by Theorem 6.1, when $V_T = 0$ the algorithm can only assure an $O(d\log T)$ regret bound, merely matching the minimax optimal bound; by Theorem 6.2, $L_T^\star = 0$ implies an even less optimal bound than the minimax result. Still, these results do attain Pareto-frontier optimality when balancing optimality and efficiency in the best cases. I advise the authors to elaborate further on the motivation and highlight the efficiency of the proposed algorithms to make the contributions clearer.
2. As the authors noted in line 242, the term "implicit" might potentially mislead readers since it is commonly associated with another class of algorithms [Kulis and Bartlett, 2010]. It may be beneficial to consider a different nomenclature for the algorithm.
3. I'm interested in the relationship between the two data-dependent bounds. Indeed, an earlier study (see Zhao et al., 2021, Theorem 6) posits that a gradual-variation dynamic regret bound implies a small-loss dynamic regret bound in the analysis, for general convex, non-negative, and smooth functions. This result also holds for static regret. A natural question that arises is whether it is feasible for one algorithm to concurrently secure both the gradual-variation bound and the small-loss bound.
References:
Brian Kulis and Peter L. Bartlett. Implicit Online Learning. ICML 2010.
Peng Zhao, Yu-Jie Zhang, Lijun Zhang, and Zhi-Hua Zhou. Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization. ArXiv:2112.14368, 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See comments above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the online portfolio selection problem and our work.
1. **Benefit of data-dependent bounds:** This work is motivated by the high computational complexities of existing logarithmic-regret algorithms. Given that the optimal tradeoff between regret and efficiency remains unclear, we seek the opportunity to attain the optimal regret rate with moderate computational complexity. The results in this paper show there are indeed algorithms that automatically exploit such opportunities. If you agree with this argument, we will make this clearer in the revision.
2. **Another name for Algorithm 2:** We agree with the comment. We will rename Algorithm 2 as “LB-FTRL with Multiplicative-Gradient Optimism”. The name is motivated by the fact that, unlike standard optimistic algorithms, we predict $x_t \odot g_t$ instead of the gradient $g_t$ in each round.
3. **Possibility of a small-loss gradual-variation bound:** Thanks for pointing out the reference. We had also noticed this work.
- Yes, we can construct an algorithm that satisfies a unified small-loss gradual-variation bound. It suffices to aggregate the outputs of Algorithm 3 and Algorithm 4 by Vovk's aggregating algorithm. Since the losses are mixable and there are only two experts, the regret of the resulting algorithm is bounded by $\min \lbrace R_{3, T}, R_{4, T} \rbrace + \log 2$, where $R_{i, T}$ denotes the regret of Algorithm $i$.
- The reference by Zhao et al. (2021) that you pointed out provides a more efficient approach for standard smooth losses. Unfortunately, the techniques therein do not directly apply to our case, because our "self-bounding property" (Lemma 4.7) includes an additional $\alpha^\star (x) e$ term. We will mention generalizing Zhao et al. (2021) as a future research direction.
Reference:
- P. Zhao et al., Adaptivity and non-stationarity: Problem-dependent dynamic regret for online convex optimization, 2021.
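The two-expert aggregation proposed above can be sketched numerically. The two fixed portfolios below are hypothetical stand-ins for Algorithms 3 and 4, and the final assertion checks the $\log 2$ aggregation overhead, which holds because the log loss is 1-mixable (the mixture portfolio's loss equals the mix loss exactly):

```python
import math
import random

random.seed(0)
T, d = 200, 3

# Price-relative vectors for each round, bounded in (0, 1].
returns = [[random.uniform(0.5, 1.0) for _ in range(d)] for _ in range(T)]

def loss(x, r):
    # Negative log wealth ratio of portfolio x on price relatives r.
    return -math.log(sum(xi * ri for xi, ri in zip(x, r)))

# Two fixed "expert" portfolios standing in for the paper's algorithms.
experts = [[0.8, 0.1, 0.1], [1 / 3, 1 / 3, 1 / 3]]

w = [0.5, 0.5]              # uniform prior over the two experts
agg_loss = 0.0
exp_loss = [0.0, 0.0]
for r in returns:
    s = sum(w)
    mix = [(w[0] * a + w[1] * b) / s for a, b in zip(*experts)]
    agg_loss += loss(mix, r)
    for i in range(2):
        li = loss(experts[i], r)
        exp_loss[i] += li
        w[i] *= math.exp(-li)   # aggregating algorithm with eta = 1

# Mixability of the log loss: regret to the better expert is at most log 2.
assert agg_loss <= min(exp_loss) + math.log(2) + 1e-9
```

Since the wealth $\langle x, r\rangle$ is linear in $x$, the weighted-average portfolio incurs exactly the mix loss each round, and telescoping gives cumulative loss $-\log\big(\tfrac12 e^{-L_1} + \tfrac12 e^{-L_2}\big) \le \min(L_1, L_2) + \log 2$, matching the bound stated in the rebuttal.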
---
Rebuttal 2:
Title: Please acknowledge rebuttal
Comment: Dear reviewer,
Please acknowledge that you have read the rebuttal and indicate whether it adequately addresses your comments. The author-reviewer discussion period ends Aug 21; please engage with the authors before that if needed.
Thanks,
AC | Summary: The paper presents beyond-the-worst-case regret bounds for Online Portfolio Selection (OPS). In general online learning, beyond-the-worst-case bounds are established using structural assumptions on the loss functions, such as Lipschitzness and smoothness, but the loss functions in OPS are neither Lipschitz nor smooth, which is the main technical difficulty the paper addresses. To this end, local norm analogues of Lipschitzness and smoothness are established for OPS, and a generic optimistic FTRL algorithm with the log-barrier regularizer is proposed. Specializations of this algorithm achieve a gradient-variation-dependent bound and a $L^*$ bound, which are the first without the additional no junk bond assumption. In the worst case, these bounds match a type of classical regret-computation tradeoff. In better cases, these bounds are logarithmic in $T$, which is a substantial acceleration.
Strengths: - Online portfolio selection is an iconic problem in online learning, with plenty of recent progress in its worst-case characterization. The paper introduces classical types of data-dependent bounds to this problem, which has a very clear and natural motivation.
- The challenge of non-Lipschitzness and non-smoothness is technically nontrivial. The solution relies on establishing local counterparts of these properties, which is interesting, and could be of broader applicability.
- The specific data-dependent bounds are novel without the additional no junk bond assumption.
- Related works are discussed in depth, which gives the new contributions a nice context.
Weaknesses: Overall this is a good paper, and there isn't any major criticism I'd like to make. On less important issues,
- The technical presentation of the paper could be improved. Currently there are quite a few typos and unclear notations. For example, there is an Algorithm 1 in the main paper and another Algorithm 1 in the appendix; $\omega$ in Theorem 3.2 is not defined in the main paper; the notation $x(i)$ is used in line 201 but only defined in line 227, ...
- I think that compared to data-dependent bounds in general online learning, the benefit of data-dependent bounds in OPS is a little less clear, particularly due to the existence of a computationally inefficient logarithmic-regret algorithm. I would appreciate more discussion of computation, since the paper essentially considers a particular regret-computation tradeoff. Are $\tilde O(d)$ runtime and $\tilde O (\sqrt{dT})$ regret Pareto-optimal for OPS in some sense? A numerical example would also be helpful, as it would make the computational savings over universal portfolio (and lines 33 to 36) clearer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could the authors comment a bit more on the novelty of the smoothness characterizations (Lemma 4.6 and 4.7)?
- I'm also generally wondering about the regret computation tradeoff in OPS. Are there lower bounds?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work.
1. **Novelty of the Smoothness Characterizations:**
We are not aware of any similar results in the literature. The closest is perhaps the well-known local smoothness property of self-concordant functions.
2. **Optimal regret-efficiency tradeoff:**
To the best of our knowledge, there is no characterization of the optimal tradeoff between regret and computational efficiency for online portfolio selection. Zimmert et al. (2022) consider the currently best tradeoff for *existing* algorithms.
3. **Typos:** Thanks for the careful review. We will correct the typos.
4. **Benefit of data-dependent bounds:** This work is motivated by the high computational complexities of existing logarithmic-regret algorithms. Given that the optimal tradeoff between regret and efficiency remains unclear, we seek the opportunity to attain the optimal regret rate with moderate computational complexity. The results in this paper show there are indeed algorithms that automatically exploit such opportunities. If you agree with this argument, we will make this clearer in the revision.
Reference:
- J. Zimmert et al., Pushing the efficiency-regret Pareto frontier for online learning of portfolios and quantum states, 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate your rebuttal. It adequately addressed my comments. Although the benefit compared to the computationally inefficient log regret algorithm is still a bit murky, I agree that further studies are beyond the scope of this paper. Within the computationally tractable paradigm, the results are good contributions to the field.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for the reply and appreciation of our work. | Summary: The paper studies the follow-the-regularized-leader (FTRL) algorithm on the online portfolio selection problem without the assumption of no junk bonds. This makes the resulting loss function non-Lipschitz and non-smooth, and makes the FTRL algorithm hard to analyze. The paper proposes using a self-concordant regularizer for the FTRL algorithm and proves regret bounds for the algorithm for a generic online convex optimization problem. They then use the result to propose two novel algorithms that have small-loss and gradual-variation regret bounds, respectively, for the online portfolio selection problem, which to the best of the authors' knowledge are the first such regret bounds for non-Lipschitz and non-smooth losses.
Strengths: The paper seems significant as it provides a new result towards bounding regret for non-Lipschitz and non-smooth losses. The main contribution seems to originate from being able to prove that FTRL with a self-concordant regularizer without the barrier requirement obtains a regret bound similar to the setting with the barrier requirement. They then show how to apply the result to the online portfolio selection problem which is a canonical online convex optimization problem. The paper provides clear background for the problem to help motivate the goal and significance of the result.
Weaknesses: The paper lacks clarity in how exactly Theorem 3.2 is applied to the online portfolio selection problem. Specifically, it is unclear what conditions are needed to apply Theorem 3.2, as Algorithm 1 is applied to a very specific online convex optimization problem. I believe Section 4 tries to highlight these conditions via Lemmas 4.4, 4.5, 4.6, and 4.7, but without looking at the appendix, it is unclear where the conditions factor into proving the regret bounds.
Moreover, in Section 5, where Theorem 3.2 is applied to the online portfolio selection problem, it is very unclear why the implicit optimistic LB-FTRL Algorithm 2 solves it. The first challenge in understanding how Algorithm 2 maps the OPS problem to Algorithm 1 is the limited explanation of the mapping. From my reading, it seems that the only explanation in the main body is in the second sentence of the first paragraph of Section 5: “By the convexity of the loss functions, OPS can be reduced to an online linear optimization problem described in Section 3 with $v_t = g_t$ and $\mathcal{X}$ being the probability simplex $\Delta$.” Writing the optimization problem out explicitly would be clearer.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Questions
1. Is it possible to state Theorem 3.2 for a generic loss that satisfies some set of specific conditions? This might streamline Sections 3 and 4 by highlighting the necessary conditions ahead of time. Additionally, it might reduce the repetition in redefining the problem, since fundamentally the only thing that changes is the loss function.
2. Is there intuition for how the newly proposed algorithms get around the non-Lipschitz loss? My intuition is that it depends on choosing the learning rate properly, so that when the Lipschitz constant is large, the correct learning rate mitigates it. Is this intuition correct? This seems to imply that the learning rate should change depending on the location of the data point for FTL-type algorithms.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper is generally more theory-oriented, and the authors imply that it is unclear how the analysis can be generalized to other settings, thus adequately addressing the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments.
1. **On stating Theorem 3.2 for generic losses:** Yes, it is possible to state Theorem 3.2, which is currently stated for online linear optimization, for generic convex losses. Denote the loss function in the $t$-th round by $f_t$. The only modification required is to replace the loss vectors $v_t$ in Algorithm 1 with the gradients $\nabla f_t ( x_t )$. This follows from the fact that $f_t ( x_t ) - f_t ( x ) \leq \langle \nabla f_t ( x_t ), x_t \rangle - \langle \nabla f_t ( x_t ), x \rangle$ by the convexity of $f_t$, showing that it suffices to solve the online linear optimization problem with $v_t = \nabla f_t ( x_t )$. "Linearizing" the losses is a standard approach. See, e.g., Section 2.4 of Shalev-Shwartz (2012) and Section 2.3 of Orabona (2023).
We chose to state Theorem 3.2 in the current way to follow the style of Shalev-Shwartz (2012) and Orabona (2023), whereas Hazan (2022) does not explicitly mention linearization and hides that in the proofs. Please let us know if you think it necessary to restate Theorem 3.2 for general loss functions. The modification is easy to do.
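The linearization argument above admits a quick numerical sanity check. The sketch below is ours (not from the paper), using the OPS loss $f(x) = -\log \langle x, r \rangle$, whose gradient is $\nabla f(x) = -r / \langle x, r \rangle$, with made-up price relatives:

```python
import math

def ops_loss(x, r):
    # OPS loss: f(x) = -log<x, r>
    return -math.log(sum(xi * ri for xi, ri in zip(x, r)))

def ops_grad(x, r):
    # Gradient: -r / <x, r>
    s = sum(xi * ri for xi, ri in zip(x, r))
    return [-ri / s for ri in r]

# Convexity gives f(x_t) - f(x) <= <grad f(x_t), x_t - x>, so solving the
# online *linear* problem with v_t = grad f_t(x_t) upper-bounds the regret
# on the original losses.
r = [1.2, 0.7, 1.05]        # hypothetical price relatives
x_t = [0.5, 0.3, 0.2]       # current iterate
for x in ([1 / 3] * 3, [0.8, 0.1, 0.1], [0.1, 0.1, 0.8]):
    g = ops_grad(x_t, r)
    lin = sum(gi * (a - b) for gi, a, b in zip(g, x_t, x))
    assert ops_loss(x_t, r) - ops_loss(x, r) <= lin + 1e-12
```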
2. **Intuition on how the non-Lipschitzness issue is solved:** Indeed, as we focus on deriving data-dependent bounds, the issue we face is lack of smoothness rather than lack of Lipschitzness. The lack-of-smoothness issue is addressed by Lemma 4.6 and Lemma 4.7, which provide local-norm analogues of the definition of smoothness and the self-bounding property, respectively.
Based on our results, one can also obtain a worst-case $\tilde{O} ( \sqrt{ dT } )$ regret bound for online portfolio selection by FTRL with the log-barrier, with a constant learning rate, and without optimism (a very simple special case of Algorithm 2). The regret bound then follows from Corollary 5.2 and Lemma 4.3. The non-Lipschitzness issue is solved by Lemma 4.3, which shows that the gradients are bounded in dual local norms.
References:
- S. Shalev-Shwartz, Online Learning and Online Convex Optimization, 2012.
- E. Hazan, Introduction to Online Convex Optimization, 2022.
- F. Orabona, A Modern Introduction to Online Learning, 2023. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies how to achieve adaptive regret bounds, including gradient-variation bounds and small-loss bounds, for the online portfolio management problem without the classical no-junk-bond assumption. The authors successfully achieve this goal by observing a new kind of smoothness of the function $-\log \langle w, x \rangle$.
Strengths: The finding of a kind of smoothness of $-\log \langle w, x \rangle$ is the major contribution of the paper. I think this is a novel and significant observation, and it leads to multiple new conclusions. Specifically, for the small-loss bound, one can easily show an upper bound related to root{sum of gradients at x_t} by using a step size inversely proportional to root{sum of norms of gradients}. However, to get a small-loss bound, the next step is to assume smoothness to create a relationship between root{sum of gradients at x_t} and root{sum of function values at x_t} (using the classical Lemma 2.1 of [29]). In this paper, the authors show that, in the OPS problem, for the loss $-\log \langle w, x \rangle$, we do not need smoothness to construct this relationship. The key observation, to me, is Lemma 4.7, which shows a similar result to Lemma 2.1 of [29] with a “surrogate gradient”. In this way, the paper achieves the first small-loss bound for non-smooth functions, which is novel and interesting. For the gradient-variation bound, similar arguments also hold.
Note that, with this observation, one can directly combine it with existing adaptive methods to achieve adaptive bounds. However, it does not mean the proposed methods are less novel.
This paper is generally well-written and easy to follow. The key finding, section 4.3, is easy to understand.
Weaknesses: For exp-concave functions with bounded gradients (e.g., with the no-junk-bond assumption), algorithms such as variants of ONS can get an O(log L^*) bound, while the algorithm in this paper yields an O(root{L^*} + log T) bound, which is sub-optimal. It would be great if the authors could consider how to get a best-of-both-worlds bound, which enjoys O(root{L^*} + log T) for non-smooth functions and O(log L^*) for smooth functions.
To obtain the optimal gradient-variation bound, the proposed algorithms need to know V_T in advance. In the appendix, the authors propose an algorithm that can learn the step size adaptively. However, it somewhat loses the good property of the gradient-variation bound when V_T is small.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for correctly pointing out our main contribution and emphasizing the novelty in the proposed methods.
1. **On achieving optimal small-loss bounds for both smooth and non-smooth functions simultaneously:** This is an interesting direction. Such generalization requires non-trivial work and seems to be not directly related to the topic of this paper. We will consider this as a future research direction.
2. **On the gradual variation bound:**
- Our algorithm for the gradual-variation bound does not need $V_T$ (see Theorem 6.1), and there is no need to learn the learning rate. It is Algorithm 5, which is for the second-order bound, that needs to learn the learning rate. As stated in the paper, we have not found a good interpretation of the second-order bound and hence provide the second-order result only in the appendix.
- We do not understand the statement about losing the good property when $V_T$ is small. We would appreciate it if you could elaborate.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thank you for the detailed response, and I do not have further questions.
For the prior knowledge of V_T, I made a mistake and thought that the step size of Theorem 6.1 is related to V_T (but it is actually related to V_t). It would be great if the authors could emphasize this point in the revised version after Theorem 6.1.
For the first point, I would like to clarify that my **main** point is to discuss the limitation of the current results, instead of suggesting a new direction, so I think it is related to the topic of this paper.
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: Thank you for the reply. We will emphasize that the step size in Theorem 6.1 does not need $V_T$ and discuss the possibility of obtaining a $O ( \log L_T^\star )$ regret bound in the revision.
If we still have any misunderstandings, then please let us know. Thank you again for your appreciation of this work! | null | null | null | null | null | null |
Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification | Accept (poster) | Summary: The authors in this paper propose EMBROID, which aims to improve language models (LMs) (such as GPT-3.5) without additional labeled data. Unlike prompt-based methods that focus on prompt design, this paper attempts to modify the prediction of a data point $x$ by considering its neighboring samples. First, the proposed model applies a set of embedding encoders (such as BERT and SentenceBERT) to construct a neighborhood graph from the output embeddings, and then a latent variable model is used to generate the final prediction. This paper also provides a theoretical analysis of the generalization error and information gain. Finally, experiments on sentence classification datasets and ablation studies on output embeddings and dataset size show the improvements of the proposed model.
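To make the neighborhood-smoothing idea in the summary concrete, here is a minimal sketch. It is not the paper's actual implementation: Embroid learns a latent variable graphical model to combine the votes, whereas this toy uses a plain majority vote, and the encoder embeddings are stand-in arrays rather than real BERT/SentenceBERT outputs.

```python
import numpy as np

def neighborhood_votes(embeddings, preds, k=5):
    """Majority vote among each point's k nearest neighbors' predictions.

    embeddings: (n, d) array from one encoder; preds: (n,) array in {-1, +1}.
    Brute-force Euclidean kNN, fine for small n."""
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude the point itself
    nbrs = np.argsort(d2, axis=1)[:, :k]
    return np.sign(preds[nbrs].sum(axis=1))

def smooth_predictions(preds, embedding_spaces, k=5):
    """Combine the original predictions with one neighborhood vote per
    embedding space; ties fall back to the original prediction."""
    votes = [preds] + [neighborhood_votes(E, preds, k) for E in embedding_spaces]
    s = np.sum(votes, axis=0)
    return np.where(s == 0, preds, np.sign(s)).astype(int)
```

With two well-separated clusters and one corrupted prediction, the corrupted vote gets flipped back by agreement with its neighbors in each embedding space.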
Strengths: (1) The motivation of improving LM predictions with unlabeled data and no expert supervision is interesting to the recent NLP community. Based on ensemble learning, this paper uses a set of textual encoders to define a neighborhood prediction vector for each data point and combines the original prediction with its neighborhood prediction to correct the final prediction. The idea is simple and reasonable.
(2) This paper also provides theoretical analysis by deriving a bound on the generalization error and calculating the pointwise information gain, which suggests the effectiveness of the proposed modules mathematically under several assumptions.
(3) Extensive empirical results with several baselines on sentence classification tasks show consistent improvements.
Weaknesses: (1) The novelty of this paper appears limited. Building upon previous works on weak supervision [1][2], the authors employ a latent variable model in the context of prompt-based language models. The combination of these two approaches is intuitive, and there are only a few significant technical challenges to address in solving the combined equation (Eq. 3).
(2) This paper limits the proposed model to the classification task (judging from the method description and empirical results). Given that the generation ability (e.g., question answering) of a language model is also important, the proposed model seems to work only in the classification case.
(3) General textual benchmarks such as SuperGLUE would enhance the convincingness of the experiments.
[1] Mayee F. Chen et.al. Shoring up the foundations: Fusing model embeddings and weak supervision.
[2] Daniel Fu et.al. Fast and three-rious: Speeding up weak supervision with triplet methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) One of the main technical challenges is how to fuse the original prediction with the neighboring prediction, and this paper employs the recent latent variable graphical model to the end. Please explain the core difference from [1] and [2] theoretically.
(2) The proposed model consists of several large models, such as three embedding functions, GPT-3.5, and the latent graphical model. It would be better to provide a space and time complexity analysis.
(3) Ablation studies on $\alpha$ in Eq.(3) are needed to evaluate the proposed module.
(4) Theorem 5.2 assumes a smooth embedding function E. I wonder how to select an optimal E. Are BERT-like encoders smooth functions?
[1] Mayee F. Chen et.al. Shoring up the foundations: Fusing model embeddings and weak supervision.
[2] Daniel Fu et.al. Fast and three-rious: Speeding up weak supervision with triplet methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank XCbh for their review! We are glad to hear they appreciated Embroid’s motivation, the theoretical analysis, and the extensive validation.
**Limited number of technical problems solved.** We would like to clarify the novelty of our work. Specifically, our contribution is synthesizing distinct technical insights from different areas of study (e.g., weak supervision and embedding-based approaches) and showing how they combine into a simple yet powerful method. We note that while both fields of study are well established, our work is the first to (1) show how weak-supervision approaches can be applied to settings where we only have one set of predictions, and (2) ensemble embeddings to account for potential noise in individual embedding spaces.
**Evaluation on multi-class.** We found that Embroid can be applied to the multiclass setting through a one-vs-all approach. Please see the general response for more details.
**General textual benchmarks.** Embroid is applicable for tasks where practitioners can identify pre-existing embeddings which are smooth for the task (see §5). The tasks contained in the SuperGLUE benchmark fall outside of this scope, because these tasks are natural language inference tasks, where embedding proximity between samples is unrelated to the task labels.
**Explaining difference from Chen et al. (2022).** Chen et al. (2022) introduce Liger. Liger differs from Embroid in several ways.
1. Liger requires the user to specify a single embedding space, while Embroid incorporates multiple embedding spaces. As a result, Liger requires that the chosen embedding space be high quality. In contrast, Embroid is more robust to lower-quality embedding spaces, because these embedding spaces are aggregated.
2. Moreover, Liger requires multiple predictions for each sample, each of which must come from a different hand-written labeling function (LF). In contrast, Embroid requires only a single prediction for each sample–generated from a few-shot LM–and relies on embedding spaces to manufacture additional predictions which can be combined.
3. Liger operates on label functions, and not the predictions of few-shot LMs. These label functions are written by hand, and have fundamentally different properties from few-shot LMs. For instance, they abstain, usually have high precision, and lower recall.
4. Liger uses embedding spaces to (1) propagate votes for each LF from labeled samples onto its neighbors, and (2) uses these embedding spaces to learn clusterings over the data, on each of which a different graphical model is learned. In contrast, Embroid uses embedding spaces to evaluate disagreement between neighboring predictions, and only learns a single graphical model for the entire dataset.
5. Finally, we note that Embroid, when run in the same setting as Liger (i.e., with access to multiple predictions per sample), exceeds Liger’s performance (Table 2).
**Explaining difference from Fu et al. (2020).** Fu et al. (2020) introduce FlyingSquid. While Embroid uses FlyingSquid to solve the graphical model posed, Embroid introduces several innovations beyond the contributions made in Fu et al. First, Embroid incorporates information from embedding spaces, which Fu et al. do not consider. Second, Embroid requires only one noisy vote per sample (bootstrapping additional votes from the embedding spaces), while FlyingSquid requires multiple. Finally, Fu et al. evaluate FlyingSquid in the classical hand-written LF regime.
**Space and time complexity.** We provide additional information on the space and time complexity of Embroid. First, we clarify that Embroid requires only one large model. This model can be on the order of GPT-3 (175B parameters), or a much smaller open-source equivalent like GPT-JT (6B parameters). Next, Embroid requires embeddings from 2-3 additional models. However, as our results show, these models can be relatively small. For instance, we rely on BERT-base and SentenceBERT models, both of which have approximately 110 million parameters. Finally, the latent variable graphical model has 2 parameters for the class prior and 1+N parameters per LLM prediction, where N is the number of embedding spaces. So, when Embroid is run on a single LLM's predictions with two embedding spaces, the total number of parameters is 2 + (1+2)*1 = 5.
In terms of time complexity, we observe that computing LLM predictions is the largest bottleneck. On a dataset of ~1800 samples for instance:
- Computing predictions on GPT-3.5 takes 1440 seconds (24 minutes).
- Computing embeddings using BERT takes 5 seconds.
- Computing nearest-neighbors using the (unoptimized) scikit-learn library takes ~3 seconds.
- Solving the Embroid graphical model takes less than a second.
We will add this analysis to our Appendix.
**Ablation on Eq 3.** We note that ablating Eq. 3 to isolate the impact of adding the last term (which incorporates embedding information) is essentially equivalent to either (1) comparing the original predictions to Embroid, or (2) comparing the FlyingSquid model over three sets of predictions (per sample) to Embroid. (1) is provided for §6.1 (Table 1), where we observe that Embroid improves upon the original predictions for multiple models, by a significant margin. (2) is provided by §6.2 (Table 2), where we find that Embroid improves upon Flying Squid by between 4 to 20 points, depending on the LM used.
**How should embedding spaces be selected?** Please see the general response.
**Are BERT embeddings smooth?** Figure 2 (bottom left) plots average embedding smoothness for each task, as computed using Eq 4. We find that for a number of tasks, BERT-variant embeddings are smooth.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications, which address most of my concerns. I would like to see this paper at the conference. | Summary: The paper aims to improve prompt performance through a technique of prompt patching, where multiple neighborhood instances of a given datapoint (retrieved through the embedding space of a BERT- or RoBERTa-like model) are used in prediction from the LLM along with the original instance. Finally, a majority vote is incorporated. Empirically, the method performs well on binary classification tasks and outperforms the chain-of-thought prompting technique.
Extra note: I want to highlight that I have not published in the area of prompting for LLMs and my review is based on the recent readings in the area.
Strengths: - The main strength of the paper lies in the simplicity of the method and also that it works well in practice (empirically validated).
- The theoretical analysis well supports the empirical analysis which is a strong point.
- Combination of Chain-of-thought and prompt patching results in strong improvements.
- Strong ablations and baselines are considered!
Weaknesses: Overall, I believe the paper is empirically strong, with good theoretical intuition. The writing in Sec. 6.2 could be improved to reflect the workings of the baselines. I believe the reader might get confused by the Embroid-1 / Embroid-3 notation when comparing with the baselines. It would be better to highlight exactly how Embroid-1 / Embroid-3 differ from the baselines in terms of the exact number of prompts used.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See Weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: While I don't see any significant limitations, the final version of the paper can be updated with results on newer instruction-tuned models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank XCbh for their review! We are glad to hear they appreciated Embroid’s simplicity, its theoretical analysis, and the empirical results.
We will update the paper with results on new instruction-tuned models. We will also update the writing in Section 6.2 to clarify the workings of the different baselines, so that readers are not confused as to how many prompts are used.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for the response; I maintain my rating! | Summary: This paper presents EMBROID, a prompt-patching method that corrects erroneous predictions for a prompt via agreements over KNN examples. For this, it employs N different embedding models to get smoothed neighborhood prediction vector. The smoothed information is integrated to combine the voting with quality parameters. They conduct a formal theoretical analysis for EMBROID to get its behaviors and corresponding results. Experimental results demonstrate the efficacy of the proposed method, improving baseline predictions. It boosts the performance when multiple prompts are used and can be complementary used with other prompting methods such as chain-of-thought. Moreover, they show the proposed method works well even with the dataset size is relatively small.
Strengths: 1. This paper introduces a novel method to improve the prompt-based learning paradigm, introducing a broad agreement procedure over the predictions.
2. The intuition of the idea and corresponding analyses are good.
3. Strong empirical results
Weaknesses: - The method is evaluated only on binary classification tasks.
- Multi-class classification or generative tasks are not tested at all.
- It requires multiple embedding models for each correction.
- The proposed method always assumes unlabeled datasets.
- The paper does not include a discussion of the self-consistency method [1], another voting method to improve prompt accuracy.
[1] Self-Consistency Improves Chain of Thought Reasoning in Language Models (Wang et al.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please position the proposed method relative to the self-consistency method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It would be good to show that the proposed method can be easily adapted to multi-class classification or even generative tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Bw2X for their review! We are glad to hear they appreciated Embroid’s novelty, the analysis, and our empirical evaluation.
**Evaluation on multi-class.** We found that Embroid can be applied to the multiclass setting through a one-vs-all approach. Please see the global review for more details.
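One plausible reading of the one-vs-all reduction mentioned above (our sketch, not necessarily the exact procedure in the global review): build one binary problem per class, run the binary corrector on each, and take the argmax of the corrected scores. In practice the corrector's posterior probabilities would make better scores than hard ±1 votes.

```python
import numpy as np

def one_vs_all_correct(preds, correct_binary):
    """preds: (n,) array of class ids. correct_binary: a function mapping a
    {-1, +1} vote vector to a corrected score vector (e.g., a binary
    prediction smoother). Returns corrected multiclass predictions."""
    classes = np.unique(preds)
    # One binary "class c vs. rest" problem per class, corrected independently.
    scores = np.stack([correct_binary(np.where(preds == c, 1, -1))
                       for c in classes])
    return classes[np.argmax(scores, axis=0)]
```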
**Comparison to self-consistency.** We evaluate Embroid against self-consistency and find (1) that it can provide competitive performance to self-consistency, and (2) that it can improve predictions generated from self-consistency. We study a subset of five tasks, and will add an expanded study of additional tasks to the paper. For self-consistency, we generate 5 predictions per sample, using a temperature of 0.7. For each task, we consider a “base” prompt which does not use chain-of-thought, but otherwise uses the same task instructions and in-context demonstrations. For 4/5 tasks, Embroid applied to the output of the base prompt outperforms self-consistency. On all tasks, Embroid applied to the predictions generated from self-consistency improves performance, by a median of six points.
We additionally note one important distinction between self-consistency and Embroid. While self-consistency requires multiple predictions from an LM for a single sample (thus accruing additional cost in hardware usage or API calls), Embroid requires only one prediction per-sample.
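As a concrete illustration (our sketch, not code from the paper), self-consistency aggregates the k sampled predictions for each input by majority vote, which is where the extra per-sample cost comes from:

```python
from collections import Counter

def self_consistency_vote(predictions):
    """Aggregate multiple sampled predictions for one input by majority vote.

    `predictions` is a list of labels sampled from the LM (e.g. 5 samples
    at temperature 0.7); ties are broken by first occurrence.
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Self-consistency needs k LM calls per sample; a method that corrects a
# single prediction needs only one call per sample.
votes = ["yes", "no", "yes", "yes", "no"]
print(self_consistency_vote(votes))  # -> "yes"
```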
**Embroid requires multiple models for each correction and assumes unlabeled datasets.** We agree with the reviewer that these requirements are essential to Embroid. However, we believe that for a substantial number of applications, both requirements are easy to satisfy.
First, repositories like Huggingface contain a number of BERT-variants for specialized domains like law or medicine. Computing embeddings from BERT-models for Embroid is relatively cheap: on an NVIDIA T4 (16GB), computing embeddings on a dataset of ~1800 examples takes 5 seconds.
Second, we observe that for many applications in medicine and law, unlabeled data is readily accessible. For instance, the CUAD dataset contains only 500 annotations per-clause type, yet nearly 34GB of unlabeled contractual data.
We will update our discussion in the paper to clarify the setting we focus on.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I'm glad to see the updated evaluation and discussion. I will raise my score accordingly. | Summary: The authors propose Embroid, a promising method that exploits availability of diverse pre-trained LLMs to create something akin to (but better than) an ensemble approach for prompt-patching.
Strengths: - a promising and easy-to-use method for prompt-patching
- robust theoretical analysis
- extensive empirical evaluation
Weaknesses: - Evaluation was performed on models which are non-SOTA and the improvement may possibly be explained by gaps in their training that are filled by the domain-specific pre-trained models. It would be helpful to evaluate Embroid with a stronger general model (e.g. GPT4) to see if performance gains decrease significantly. It could also be helpful to evaluate Embroid on models that have been pre-trained on the particular domains of interest, to see if lack of domain-knowledge for the general model is what enables improvement when leveraging domain-specific pre-trained embedding models.
- Evaluation was performed on only binary sentence classification tasks. It seems like extending to multi-class classification should be doable (at a minimum this can be done by doing multiple binary one-vs-all comparisons, though a cleaner extension would of course be better) and would go a long way in validating Embroid on more realistic settings.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors address limitations like dependence on domain-specific embeddings, the quality of embeddings, and the availability of at least a moderate number of data points (in the hundreds).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank YSHE for their review! We are glad to hear they appreciated the ease of use, our theoretical analysis, and our empirical evaluation.
**Evaluating on stronger general models.** Our original work found Embroid had a substantial win rate (80.6%) and average improvement (4.9 points F1) on GPT-3.5, a highly performant model which, at the time of paper submission, was SoTA amongst publicly available API models.
We evaluated Embroid+GPT-4 on a subset of 14 tasks. The tasks chosen are identical to those listed in Appendix H.1 of the original submission. We note that overall, GPT-4 appears to saturate our evaluated tasks, scoring greater than 95% on 5 of them. Given that we evaluate on publicly accessible benchmarks, this could be explained by leakage into GPT-4’s pre-training data.
We find that Embroid is strictly better than the base prompt on 42% of tasks, and effectively equivalent or better (i.e., no worse than 1 point F1) on 73% of tasks. On two tasks Embroid improves by 4+ points. We will add the above analysis to the Appendix.
**Evaluation on domain specific models.** Surprisingly, there is (to the best of our knowledge) no publicly available few-shot language model for law that could be used to generate predictions. This was, in fact, a motivation for exploring Embroid’s ability to offer domain-specialization capabilities. We will add experiments to our paper evaluating Embroid’s performance improvements for domain-specific medical models.
**Importance of open-source models.** Though Embroid’s absolute gains over base prompts may diminish for stronger base LMs like GPT-4, our evaluation and findings focused on open-source models because of their practical feasibility. Such models are cheaper and better suited for sensitive domains, as they allow data to remain on-premises. This is a particularly salient concern for applications in medicine, law, or government. On these models, we found that Embroid offered substantial gains, with a win-rate > 88% and an average improvement > 7 points F1.
**Evaluation on multi-class.** We found that Embroid can be applied to the multiclass setting through a one-vs-all approach. Please see the general response for more details.
---
Rebuttal Comment 1.1:
Comment: Thank you for the excellent response! My concerns have been addressed and I have raised my score accordingly. I'm excited to see the additional multi-way classification results when they are ready! | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback. We are glad reviewers recognized and appreciated the novelty/simplicity of our method [Rd7v, Bw2X, YSHE, XCbh, bLRH], our theoretical analysis [Rd7v, YSHE, XCbh, bLRH], and our empirical validation [Rd7v, YSHE, Bw2X, XCbh, BLRH].
We have made a number of changes in response to the reviewers’ comments and questions:
1. We have added an evaluation in the multi-class setting, where we find Embroid offers improvements over the base prompt (by up to 3.5 points on balanced-accuracy).
2. We have added exposition to clarify questions related to embedding smoothness and the choice of embeddings.
3. We have added experiments evaluating Embroid on GPT-4 and with self-consistency. We find that Embroid can still improve upon GPT-4 (by up to 8 points for some tasks), and that Embroid improves self-consistency for all studied tasks.
**Multi-class evaluation.** Multiple reviewers [YSHE, Bw2X, bLRh] asked whether Embroid could be applied to multi-class problems in a one-vs-all setting. We provide initial results on a multi-class version of the contractual classification task, involving five unbalanced classes (where the largest class is 3.5x larger than the smallest class). Overall, we find that the one-vs-all variant of Embroid improves the quality of the base prompt (by 4 points on balanced-accuracy and 1.6 points on macro-F1). We will add a more comprehensive study with additional datasets to our paper.
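A minimal sketch of the one-vs-all reduction, assuming a hypothetical per-class binary confidence score from the corrector is available (the function and class names below are illustrative, not from the paper):

```python
def one_vs_all_predict(scores_per_class):
    """Combine per-class binary scores into a multi-class prediction.

    `scores_per_class` maps each class name to the corrected confidence
    that the sample belongs to that class (from a hypothetical binary
    corrector run once per class); the highest-scoring class wins.
    """
    return max(scores_per_class, key=scores_per_class.get)

scores = {"termination": 0.2, "liability": 0.7, "warranty": 0.4}
print(one_vs_all_predict(scores))  # -> "liability"
```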
**Embedding smoothness.** Several reviewers [RD7v, bLRH] raised questions regarding properties of the embedding spaces and how they should be selected for Embroid. In the revised draft, we will expand our discussion of hyperparameters in the Appendix (Appendix G) to include an overview of embedding selection. We offer more concise responses to reviewer questions below.
At the outset, we note that selecting/evaluating embeddings is particularly difficult in our regime, where practitioners do not have access to labeled data. Importantly, this motivates Embroid’s “mixture of embeddings” approach. Previous methods (e.g., Liger [7]) operate on only a single embedding space. Thus, practitioners must be certain that the embedding space chosen is good for a task, and a poorly chosen embedding space will reduce performance. Because Embroid ensembles different embedding spaces, practitioners can be less certain in the quality of any individual embedding space.
*Is there a simple method to quantify smoothness for an embedding?*
Smoothness for an embedding is typically calculated using Eq. 4 (L188). This requires knowing the true label $y$ for each point. When practitioners have access to a dev set, smoothness can be estimated using this set of points. Our work assumes a pure unlabeled setting (also known as “true few-shot”) where no dev set is accessible. Thus, computing Eq. 4 is impossible. We will clarify this fact in §5.
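For intuition, here is a rough sketch of the kind of neighbor-label agreement that smoothness measures. This is an illustrative proxy of our own construction, not Eq. 4 itself, and it requires the true labels, which is exactly why it cannot be computed in the unlabeled setting we assume:

```python
import numpy as np

def smoothness_estimate(embeddings, labels, k=5):
    """Fraction of each point's k nearest neighbors (by cosine similarity)
    that share its label, averaged over a labeled dev set.

    A rough proxy for embedding smoothness; it requires true labels, so it
    is only computable when a labeled dev set exists.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-matches
    nn = np.argsort(-sims, axis=1)[:, :k]    # indices of k nearest neighbors
    agree = (labels[nn] == labels[:, None]).mean()
    return float(agree)
```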
*Are there methods/selection strategies to drop out embeddings from a given task?*
In the expanded discussion in our paper, we will discuss potential methods for selecting possible embedding spaces to include in Embroid. These include:
- Looking at the MLM loss of the embedding model (e.g. BERT) on the task data. Prior literature on domain specificity has suggested that this may be a promising heuristic [27].
- Looking to the performance of embedding models on other related tasks may also be helpful [27].
- Looking at the extent to which a potential embedding space generates embeddings which are geometrically similar to embeddings already included in Embroid. If the embedding space is similar, then the additional value of incorporation is low.
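The third heuristic could be approximated by comparing nearest-neighbor structure across spaces. The following is an illustrative sketch of such a redundancy check, not a method from the paper:

```python
import numpy as np

def knn_overlap(emb_a, emb_b, k=10):
    """Average overlap between each point's k-nearest-neighbor sets in two
    embedding spaces over the same dataset. Values near 1 suggest the
    candidate space is geometrically redundant with an existing one.
    """
    def knn(emb):
        X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = X @ X.T
        np.fill_diagonal(sims, -np.inf)      # exclude self-matches
        return np.argsort(-sims, axis=1)[:, :k]
    na, nb = knn(emb_a), knn(emb_b)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(na, nb)]
    return float(np.mean(overlaps))
```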
*Is there a theoretical result for the optimal number of selection of embeddings used in Embroid?*
The number of embeddings used presents a bias-variance tradeoff. Under Proposition 5.1, increasing the number of embedding spaces (1) increases the variance due to estimation error, but (2) lowers the conditional entropy $H(y \mid \lambda)$. Precisely characterizing this tradeoff is challenging because the variance is an upper bound, and $H(y \mid \lambda)$ can only be estimated. However, these bounds/estimates can be used to derive heuristic stopping rules: for instance, add embedding spaces until the marginal decrease in the conditional entropy is less than the upper bound on the marginal increase in variance. We will add this discussion to Section 5. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors focus on few-shot prompted classification using language models. In this setting, they focus on the challenge of developing optimal few-shot (in-context) prompts for language models in domains where data collection is prohibitively expensive. Rather than engineering the prompts themselves, the proposed Embroid method aims to *correct* prompted language model predictions using semantic similarity predictions of similar samples using other pre-trained language models. Embroid operates as a form of mixture-of-experts involving both the core language model as well as a family of other embedding models to edit LM annotations and enforce consistency. The authors perform a theoretical analysis of how/why Embroid works re: embedding smoothness, and demonstrate how Embroid improves accuracy across a range of language models and tasks.
Strengths: - To my knowledge, this paper proposes a novel way of using noisy similarity heuristics as a "vote of confidence" to regularize language model predictions
- The methods sections (3, 4) are clearly written and equation intuitions are well-explained. This helps greatly in understanding why the Embroid method may work and helps in understanding the theoretical analysis in Section 5.
- The authors report a significant improvement overall (win rate) and average across tasks over a variety of models after applying Embroid across their extensive task suite.
- The experiments section is structured well, and the research questions naturally flow from one to another. It is encouraging to see that Embroid seems to work well together with chain-of-thought prompting, prompt ensembling, and other methods of improving the prompt (selective annotation).
Weaknesses: - There is a bit of confusion in the problem setting description about what constitutes the "prompt". (108/109) suggests that the prompt includes the description and in-context examples. Is the main challenge the authors target the task description engineering, choice of in-context examples, or both?
- Regarding the smoothness observation in Section 5: is there a simple method to quantify smoothness of embedding models on a given dataset? And does this give rise to selection strategies or methods to drop out embeddings from a given task? Essentially I'm wondering whether there is a theoretical result for the optimal number or selection of embeddings used in Embroid.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank RD7v for their review! We are glad to hear they appreciated the novelty of our method, its intuitive appeal, and the structure of our evaluation. Our response primarily serves to answer the questions they raised.
**Confusion regarding the definition of “prompt”.** We consider the “prompt” to include both the task description and the in-context samples, consistent with Zhao et al. (2021). Because Embroid corrects the predictions of the prompt, and is thus agnostic to the content of the prompt itself, we believe it can address challenges in both the choice of in-context samples and in task description engineering. We will add a clarification in §2.3 of our draft to make this clearer.
**Embedding smoothness.** Please see the general response above.
*References*
Zhao, Zihao, et al. "Calibrate before use: Improving few-shot performance of language models." International Conference on Machine Learning. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough responses to / discussion of my questions! | null | null | null | null | null | null |
AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback | Accept (spotlight) | Summary: This paper presents a simulation framework, AlpacaFarm, for developing LLMs with human feedback.
AlpacaFarm adopts LLMs (e.g., GPT-4) to generate feedback (i.e., the ranking of candidate responses given the query), and evaluate the performance by calculating the win-rate against the baseline.
This framework can obtain synthetic feedback data cheaply compared with human annotators.
The major contribution is the system design and validation of the AlpacaFarm framework.
System Design:
- i) the framework constructs a relatively comprehensive evaluation dataset.
- ii) multiple LLMs are adopted as annotators proxy (GPT-4, ChatGPT and Davinci003)
- iii) random noise is injected to mimic the variability of human annotators.
Validation: Simulated evaluation (i.e., the model performance ranking) results of learning with the synthetic feedback dataset match those obtained by training models with human feedback and evaluating with real annotators. Besides, the pairwise ranking and variability also correlate well with the human results.
Finally, the paper benchmarks the reference methods in the framework on LLaMA-7B, suggesting that SFT plays a significant role and highlighting the superiority of PPO.
Strengths: - Clarity: The overall presentation is good, clearly conveying the key ideas of this paper.
- Significance: This framework would potentially become a useful resource for building, and evaluating the LLMs with feedback datasets when budgets are limited.
- Originality: The novelty of the framework is somewhat limited, as generating synthetic feedback datasets and evaluating with powerful LLMs have already been explored recently. Yet, it is still worthwhile to explore connecting these dots and justify its effectiveness.
Weaknesses: - Some hyper-parameter choices are unjustified (such as the ratio in Question 1).
- The effect of this framework is under-explored when the baseline model is close to the annotators LLMs.
---
After Rebuttal:
Thanks for the response, which addressed my questions well. I have raised my score to Accept.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The paper mentioned that 25% of the simulated preferences are randomly flipped. Any explanation on the ratio? Besides, while the results are consistent with the human results, I think this randomness may not faithfully reflect the inherent variance, as human variability depends on the background and geography of the annotators.
- What would the results be when the base LLM is close to the annotator LLMs? Will this framework still be effective when we adopt a powerful LLM itself as the annotator?
- What prompt is used to generate feedback and evaluate the candidate responses? Did you use some stabilization techniques such as self-consistency in [1] to stabilize the evaluation results?
- Typo: Captions Table 4 and Table 5 of the Appendix: longer outputs.s -> longer outputs.
[1] Large Language Models are not Fair Evaluators
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful review, which we have incorporated to improve our paper.
# Simulated annotators
> *The paper mentioned that 25% of the simulated preferences are randomly flipped. Any explanation on the ratio?*
We have expanded our explanation in Appendix B.1 about the 25% ratio. We refer the reviewer to the [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&noteId=Hq0kpzSHSw).
----
>*randomly flipped [...] while the results are consistent with the human results, I think this randomness may not faithfully reflect the inherent variance as human variability depends on the background and geography of the annotators.*
As discussed in section 3.2 (text quoted below from line 153), there are two important sources of variability for human annotators: intra- and inter-annotator.
>> To more completely emulate human annotators, we first emulate inter-annotator variability into the simulated pairwise preference by mimicking a pool of annotators. We design different annotators by querying different API LLMs and varying the prompts with different formats, batch sizes, and in-context examples.[...]. To emulate intra-annotator variability, we directly inject random noise and flip the simulated preference 25% of the time.
The random flipping concerns intra-annotator variability, while using different API LLMs with varying in-context examples addresses the inter-annotator variability that the reviewer seems to be referencing. That said, we completely agree with the reviewer that our approach to emulating inter-annotator variability does not capture complex human variability, such as that due to background and geography. Our work is only a first step in this direction, and this important and challenging problem will require much future work to tackle. We have added this to our new limitations section. We refer the reviewer to the [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&noteId=Hq0kpzSHSw).
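A minimal sketch of the intra-annotator noise injection described above (the function name and its defaults are illustrative; the pipeline flips 25% of simulated preferences):

```python
import random

def inject_intra_annotator_noise(preferences, flip_rate=0.25, seed=0):
    """Flip each simulated binary preference (0 or 1) independently with
    probability `flip_rate`, mimicking intra-annotator inconsistency.
    """
    rng = random.Random(seed)
    return [1 - p if rng.random() < flip_rate else p for p in preferences]
```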
---
> *What prompt is used to generate feedback and evaluate the candidate responses? Did you do some stabilization techniques such as self-consistency in [1] to stable the evaluation results?*
We thank the reviewer for their insightful comments. We detail our prompts in appendix G.1 and are releasing them as part of our code release. With respect to paper [1], we concur that the order of examples is crucial, as emphasized in the following excerpt from Appendix G.1:
>>For each annotator, we randomize the ordering between the two outputs to annotate, i.e., we randomly choose which output is the first and which is the second. We found randomization to be important given that the first output is often preferred by simulated annotators
Given this randomization, we do not require the stabilization techniques from [1].
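A sketch of this order randomization, with `annotator` as a stand-in for an API LM call (our illustration, not the released code):

```python
import random

def annotate_with_random_order(output_a, output_b, annotator, rng=random):
    """Present two outputs to an annotator in random order, then map the
    annotator's choice back to the original (a, b) labels.

    `annotator` is a stand-in for a call to an API LM that returns the
    index (0 or 1) of the preferred output as shown.
    """
    if rng.random() < 0.5:
        first, second, swapped = output_a, output_b, False
    else:
        first, second, swapped = output_b, output_a, True
    choice = annotator(first, second)               # 0 = prefers first shown
    return choice if not swapped else 1 - choice    # 0 = prefers output_a
```

Averaged over many annotations, this cancels out any systematic preference an annotator has for whichever output appears first.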
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and my questions are well-addressed. I will raise my score to Accept. | Summary: This paper introduces AlpacaFarm, a simulator that enables faster and cheaper research and development of fine-tuning LLMs with human feedback. The authors propose to use an LLM to simulate human feedback, which is 45x cheaper than using crowdworkers and displays high agreement with humans. They also identify an evaluation dataset which is representative of real-world instructions from interactions between humans and chatbots or other LLMs. The authors also plan to open-source reference implementations for multiple methods which are widely used in the research and applications such as PPO, Best-of-N (BoN), and Expert Iteration (EI). The authors validate their simulator and evaluation protocols by showing that rankings of models trained in AlpacaFarm match those of models trained on human data. The results indicate that leveraging a reward model can improve performance over supervised fine-tuning, which is in line with prior work.
Strengths: ## Clarity
The paper is well-written and clear overall. I also appreciated that the authors didn't only consider the agreement between LLMs and humans but also the variability in the answers which is an important aspect of human evaluation and feedback.
## Impact
The paper addresses a timely topic of great importance, namely understanding how to best fine-tune LLMs with human feedback. Although it is more of an engineering / datasets / benchmarks paper, I believe it fills a gap in the literature and could lower the entry barrier for doing research on RLHF and related methods which are rather complex and expensive to train well.
## Novelty
Although the paper does not propose a novel approach, it open-sources reliable implementations of popular approaches for fine-tuning LLMs with human feedback and proposes an automatic (fast and cheap) way of evaluating such methods. I believe these could be quite impactful and valuable for the community. I expect this can enable research and development of new methods, as well as a platform for fairly comparing them in order to make faster progress on these problems.
Weaknesses: ## Generality
One of the main weaknesses of this paper is the fact that only LLaMA-based models are considered. However, the LLM simulator is advertised as a general platform for researching, developing, and evaluating human feedback fine-tuning of LLMs. In particular, one open question is whether the models you use for evaluation and simulating human feedback transfer to other base models such as OPT, Pythia etc.
1) Thus, it would be useful to include experiments validating this hypothesis to demonstrate your protocol is robust to the base model used.
2) In addition, it would also be good to include different models for evaluating and even for simulating human feedback to see how much variance there is and increase the robustness of the results obtained with this platform. In particular, I expect that the results won't perfectly match human evaluation and feedback (particularly for the absolute performance), but adding different ways of simulating human feedback and evaluating LLMs can at least decrease the variance in the final results.
3) It would also be good if the authors can include human evaluations in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In section 3.2, you claim that the agreement between your model and human annotators is high and quote 65%, which seems rather low to me. Can you provide more details on how this number was computed and what it means in practice? How does it compare with the agreement across humans?
2. Also in section 3.2, you mention that to emulate intra-annotator variability you inject random noise flipping the simulated preference 25% of the time. How did you come up with this number, did you run experiments to find out the intra-human variability? How did you validate that the resulting models match human variability?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors mention some of the assumptions and limitations of the paper but I strongly suggest having a separate section that discusses these in greater depth. Given the potential impact of this simulator and relevance to real-world applications, it is essential for readers to understand that this is just a simulator which may be inaccurate so it should only be used for research and preliminary experiments; human evaluations and potentially feedback should always be used before deployment or to make stronger claims about a model's capabilities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their questions and will address their concerns in the updated manuscript.
## Limitations
> *The authors mention some of the assumptions and limitations of the paper but I strongly suggest having a separate section that discusses these in greater depth.*
We agree with the reviewer’s suggestion and have incorporated the feedback. We refer the reviewer to the [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&noteId=Hq0kpzSHSw).
---
> *one open question is whether the models you use for evaluation and simulating human feedback transfer to other base models such as OPT, Pythia etc.*
This is an interesting and important question, which we now discuss in the new limitations section. During earlier stages of development, we experimented with OPT and Flan-T5 but found that outputs from these models were often completely wrong, making it challenging to obtain enough signal from human feedback. Since submitting our paper, many strong base models have been released (e.g. MPT, Falcon, LLaMA-2) and we think that it’s an important avenue for future work to consider the impact of the choice of base model.
## Simulated annotators
> *you inject random noise flipping the simulated preference 25% of the time. How did you come up with this number, did you run experiments to find out the intra-human variability? How did you validate that the resulting models match human variability?*
We thank the reviewer for their important questions, which we have now expanded on in the paper. We refer the reviewer to the [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&noteId=Hq0kpzSHSw).
---
> *In section 3.2, you claim that the agreement between your model and human annotators is high and quote 65%, which seems rather low to me. Can you provide more details on how this number was computed and what it means in practice? How does it compare with the agreement across humans*
As elaborated in section 4.3, the 65% is comparable to the human-human agreement rate at 66%. Quote from line 259:
>> We begin computing agreement levels between our simulated annotator and a majority vote of 3 human annotators, comparing this to the agreement level of a held-out human annotator. We find that our evaluator $p_{eval}^{sim}$ has a 65% agreement rate with the human majority vote, which is similar to the held-out human agreement rate (66%) [...]
We agree that this number seems low and see two reasons for that. First, samples from the same SFT model are often quite similar, making it hard and often very subjective to select a preferred sample. Second, we use a pool of human annotators that may have different preferences. We emphasize that the disparity is not merely due to the crowd-workers; even when we (the paper's authors) annotated >200 examples, we observed just a 64% agreement rate.
The agreement here estimates the frequency with which an annotator aligns with the majority of humans. Concretely, we collected 650 examples each annotated by 4 different crowdworkers (i.e., 2600 in total). For human-human agreement, each example was evaluated by one human and then cross-checked against the majority preference of the remaining three. The resulting agreement was then averaged over all four humans and 650 examples. For model-human agreement, we performed the same computation: comparing the model's response to the majority agreement of the 3 humans, then averaging over samples and human participants.
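The agreement computation described above can be sketched as a leave-one-out majority-vote comparison (illustrative code of our own, not from our release):

```python
from collections import Counter

def leave_one_out_agreement(annotations):
    """Average rate at which each annotator matches the majority vote of
    the remaining annotators, over all examples.

    `annotations` is a list of examples, each a list of labels for that
    example from a fixed pool of annotators (e.g. 4 crowdworkers).
    """
    hits, total = 0, 0
    for labels in annotations:
        for i, label in enumerate(labels):
            rest = labels[:i] + labels[i + 1:]
            majority = Counter(rest).most_common(1)[0][0]
            hits += int(label == majority)
            total += 1
    return hits / total
```

The model-human agreement is computed the same way, substituting the model's prediction for the held-out annotator's label.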
We thank the reviewer for those questions, which we now expanded on in the paper.
----
> *it would also be good to include different models for evaluating and even for simulating human feedback to see how much variance there is and increase the robustness of the results obtained with this platform.*
We agree that understanding the robustness of the simulator to the oracle model is important. Figures 9, 10, and 11 in the appendices investigate the use of other models and prompts for simulation. Here is a table with a more in-depth analysis of the use of different simulated evaluators, their agreement with humans, their variance, and their biases. Our annotators are bolded. The last 4 columns in the following table refer to the metrics from figures 9 (bias, variance) and figure 11 (preference for list and for longer outputs) in appendices E.3. of the submitted manuscript.
| | Human agreement [%] | Price [$/1000 examples] | Time [seconds/1000 examples] | Bias | Variance | Proba. prefer longer | Proba. prefer lists |
|---|---|---|---|---|---|---|---|
| alpaca_eval_gpt4_fn | 71.0 | 14.5 | 5046 | 27.6 | 11.1 | 0.75 | 0.63 |
| alpaca_farm_greedy_gpt4 | 66.4 | 15.3 | 878 | 30.2 | 19.3 | 0.60 | 0.59 |
| humans | 65.7 | 300.0 | 36800 | 0.0 | 34.3 | 0.64 | 0.61 |
| claude | 65.5 | 11.1 | 173 | 31.9 | 18.0 | 0.62 | 0.58 |
| text_davinci_003 | 64.1 | 8.7 | 121 | 33.8 | 22.7 | 0.70 | 0.64 |
| lmsys_gpt4 | 63.2 | 13.9 | 17982 | 34.7 | 16.1 | 0.74 | 0.64 |
| alpaca_farm | 60.0 | 11.5 | 820 | | | 0.60 | 0.63 |
| chatgpt_fn | 60.0 | 1.0 | 530 | 36.9 | 27.7 | 0.62 | 0.65 |
| chatgpt | 57.2 | 0.8 | 285 | 39.4 | 34.1 | 0.59 | 0.56 |
| cohere | 53.4 | 3.5 | 217 | | | 0.50 | 0.51 |
## Other
> *It would also be good if the authors can include human evaluations in the paper.*
All validation of our pipeline is done using human evaluations. Is the reviewer asking about releasing those annotations? If so, we have released them as part of our code release but have not linked to them due to the anonymity guidelines.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their thorough responses and for going the extra mile to further improve the paper by running additional experiments and including an extensive discussion of its limitations. I also appreciate the emphasis of some details and results I might have missed in the Appendix, for using a wide range of models to perform evaluations, and for releasing human annotations.
I'm happy to say the rebuttal has addressed my main concerns.
I believe this paper would be a very valuable contribution to the community by democratizing research on an important topic (RLHF) which has a relatively high entry barrier. I also find the experiments to be very thorough. In light of this, I have increased my score to 8 to reflect my strong support of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our rebuttal and for the encouraging words! | Summary: The authors provide a simulator for experiments with LLMs that aim to learn from human feedback, in particular, human binary comparisons. This allows researchers to run exploratory experiments with, e.g., RLHF, quickly and cheaply, without having to collect human data. The main contribution is the open-source library that includes the various components needed for this type of experiment. In the paper, the authors evaluate their simulator: using end-to-end validation, validating subparts of the library, and showcasing how it can be used to produce findings that match those found when using human feedback.
Strengths: Learning from human feedback is an important topic that has long been hard to study for any but the most well-resourced labs. Alpaca Farm will enable more research on this topic.
Open source software that enables other researchers is always great.
The paper is well written.
Thorough validation.
The validation results, especially Figure 2, are impressive.
The finding in Figure 3 (reward model over-optimisation only happens with strong inter-rater disagreement) is intriguing (and seems worth further study).
Weaknesses: From more to less relevant concerns. I will raise my score if these are addressed sufficiently in the rebuttal.
The paper could explore the limitations of Alpaca Farm more. I don't actually believe that Llama-7B with some instruction fine-tuning and a bit of PPO is comparable to GPT-3.5; you probably don't believe this either. However, Table 1 shows a winrate of 55% for "PPO". This is probably because your evaluation instructions are relatively easy and only single-turn, you don't factor in adversarial inputs, and so on. This is fine, but it should be discussed more in the paper. In general, the paper would be better if it elucidated the limitations of your simulator a bit more, as there clearly are some. Try to break it; show not only where it works, but where it stops working.
There has been recent work showing that MTurk crowdworkers use AI a lot: https://arxiv.org/abs/2306.07899 . So it's no wonder that MTurk evaluations agree with the evaluations from your GPT evaluator models. How do you expect this to affect your results?
Figure 4: this experiment would be better if you used humans to evaluate the winrate on the demo. The x-axis is the evaluation in Alpaca Farm. The y-axis should be the "realistic evaluation", which means realistic data AND human evaluation.
Why does Figure 2 have 10 data points? What are these data points? Comparing to Table 1, it seems like you included GPT-4 on Figure 2; this seems misleading, as this only measures the evaluation part of your simulator, and not the training part. You should show a Figure where you only include models that were actually trained on the simulated preferences.
I have some worries about the training signal (pairwise comparisons) and the evaluators (pairwise comparisons) coming from the same API models. This might lead to overfitting to these particular evaluators. However, in practice, this doesn't seem to be an issue (see Figure 2). Of course, if you work with humans, you may also use the same humans for the training signal and eval (although best practice would probably be to use different humans).
The paper could be shorter; there is a bit too much repetition.
-------
EDIT: some of these points have been convincingly addressed in the rebuttal, so I raise my score
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Why does Figure 2 have 10 data points? What are these data points? Comparing to Table 1, it seems like you included GPT-4 on Figure 2; this seems misleading, as this only measures the evaluation part of your simulator, and not the training part. You should show a Figure where you only include models that were actually trained on the simulated preferences.
There has been recent work showing that MTurk crowdworkers use AI a lot: https://arxiv.org/abs/2306.07899 . So it's no wonder that MTurk evaluations agree with the evaluations from your GPT evaluator models. How do you expect this to affect your results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper could explore the limitations of Alpaca Farm more. I don't actually believe that Llama-7B with some instruction fine-tuning and a bit of PPO is comparable to GPT-3.5; you probably don't believe this either. However, Table 1 shows a winrate of 55% for "PPO". This is probably because your evaluation instructions are relatively easy and only single-turn, you don't factor in adversarial inputs, and so on. This is fine, but it should be discussed more in the paper. In general, the paper would be better if it elucidated the limitations of your simulator a bit more, as there clearly are some. Try to break it; show not only where it works, but where it stops working.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and insightful feedback, which we incorporated in the updated manuscript.
## Limitations
>*The paper could explore the limitations of Alpaca Farm more.*
We agree with the reviewer’s suggestion and have incorporated the feedback. Please see the [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2023%2FConference%2FAuthors%23your-submissions))
---
> *I don't actually believe that Llama-7B with some instruction fine-tuning and a bit of PPO is comparable to GPT-3.5. [...] your evaluation instructions are relatively easy, only single turn, you don't factor in adversarial inputs, and so on*
We agree with the reviewer’s concerns and have added all the necessary disclaimers for that result in the revised manuscript. Furthermore, we have detailed the mentioned limitations in a new section (refer to [general rebuttal](https://openreview.net/forum?id=4hturzLcKX&noteId=Hq0kpzSHSw)).
Evaluation of instruction following models is an active area of research. Our team is actively working on that domain, and we intend to incorporate recent and upcoming evaluation improvements into AlpacaFarm.
---
> *There has been recent work that shows that MTurk crowdworkers use AI a lot. https://arxiv.org/abs/2306.07899 [...] How do you expect this to affect your results?*
This is a significant potential concern that we were aware of. We undertook two measures to address this during the project:
1. We manually annotated over 200 examples without any language model and assessed our agreement with the auto annotator. The agreement was 64%, akin to the agreement between crowd-workers and the auto annotator and among different authors.
2. We continuously monitored the agreement between crowd-workers and GPT4, as well as the time taken for annotations. This monitoring led us to exclude 3 crowd-workers and their annotations, who were annotating particularly fast with a questionable agreement with GPT4.
Though there might still be instances of LLM use we didn't detect, given these precautions, we're confident it hasn't majorly affected our results. We did not include a discussion of that paper since it was published after our submission, but we will include it in the updated manuscript.
---
>*I have some worries thinking about that the training signal (pairwise comparisons) and the evaluators (pairwise comparisons) come from the same API-models. This might lead to overfitting to these particular evaluators. However, in practice, this doesn't seem to be an issue (see Figure 2).*
We agree with everything that the reviewer said. We now address this as a potential limitation, but as the reviewer pointed out, it doesn't appear to be a major concern in practice. This potential issue might be partly alleviated given that we utilize a diverse pool of annotators (simulated or crowd-workers).
## Other
> *You should show a Figure where you only include models that were actually trained on the simulated preferences.*
We agree and have updated the figure in the main paper. We also clarified in Table 1 which models were only evaluated (and not trained) in both human and simulated settings.
---
Rebuttal Comment 1.1:
Title: Updated Figure
Comment: > We agree and have updated the figure in the main paper
Could I see the updated Figure please?
---
Reply to Comment 1.1.1:
Title: Link to figure & table
Comment: Please refer to this [anonymized link]( https://anonymfile.com/g3xjN/alpacafarm-graypoints.pdf) for a PDF containing the updated figure and table from the main manuscript. We apologize for the oversight in not providing any proof of our claim. | Summary: The paper identifies three major challenges in training models with human feedback: (a) the cost of *preference* data collection, (b) the lack of trustworthy eval, and (c) the absence of implementations for reference methods. I completely agree with the fact that the process of training LLMs with human feedback is less understood due to the lack of published information and tools on them. The paper does a great job at addressing these dimensions in great detail with solid experimental results and thought-provoking findings. The paper establishes AlpacaFarm as a framework whose feedback and evaluation is synergetic with training on human feedback and human evaluation. Overall, the paper is well-written and clear to follow.
*Cost of preference data collection*
- The paper’s contribution in creating prompts for API LLMs that follow high human agreement and replicate human variability is very novel!
- I do not fully understand why the authors want the practitioners to collect high quality human feedback (post-AlpacaFarm simulations) if they already have AlpacaFarm API LLM annotators. It makes it difficult for me to digest how Alpaca Farm is reducing the cost of collecting preference data if eventually we do need to collect actual human feedback.
- I will be interested in understanding the gap between the performance of the methods trained with API LLM feedback vs human feedback by fixing the evaluator as Alpacafarm eval in the first case and humans in the other. Currently, the presented results focus on the gap between training with the simulated feedback + simulated evaluation and human feedback + human evaluation.
*Trustworthy eval*
- The experiments establish the high correlation between the simulated win-rates and human win-rates. It was interesting to observe that the rankings of the methods match well under the simulated environment and the real-world environment.
- I feel the paper lacks a discussion on evaluating the LLMs on the existing NLP datasets such as MMLU, SuperGLUE and benchmarks such as BigBench. I understand that 805 instructions may be a set of instructions that humans care about but I still feel that the prior works on creating datasets for model eval deserve credit in the main text.
*Reference Implementations*
- I agree that there are not a lot of implementations for reference methods, and really appreciate the authors for providing them.
- Given that PPO is known to be finicky and hard to stabilize, it would have been good to get some more details about the hyperparameter search in the main/supplementary material.
- Minor comment: It would be good to mention that Best-of-n policy uses n = 1024 more frequently in the plots.
Strengths: Mentioned in the main comment.
Weaknesses: No major issues with the contribution and experiments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: More details on the reference method implementations in the main/appendix would be great.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No major issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging and thorough review. We will clarify and emphasize the answers to all their questions in the updated manuscript.
## Training in simulating vs with humans
> I will be interested in understanding the gap between the performance of the methods trained with API LLM feedback vs human feedback by fixing the evaluator as Alpacafarm eval in the first case and humans in the other.
The main validation of the paper involves training and evaluation with the oracle LLM versus training and evaluation with humans. As the reviewer suggests, understanding the setting of training in simulation and evaluating with humans is also useful. We investigated this setting in Appendix B.2, where we show how to modify AlpacaFarm’s simulated annotators if one wants to use them as a source of supervision rather than as a simulator.
In light of the reviewer's feedback, we have now incorporated a summary of those results into the main text. In essence, the PPO model trained on AlpacaFarm's simulator achieved a 43% human-evaluated win-rate, which is significantly worse than the 55% win-rate of the PPO model trained on human feedback. However, we show that the PPO model trained using feedback from a low-variance GPT4 annotator achieves a 50% win-rate.
---
> I do not fully understand why the authors want the practitioners to collect high-quality human feedback (post-AlpacaFarm simulations) if they already have AlpacaFarm API LLM annotators. [...] how Alpaca Farm is reducing the cost of collecting preference data [...]
We thank the reviewer for pointing this out and will make the distinction clearer in the manuscript. The primary cost-saving benefit of introducing a simulator before human engagement is to facilitate an affordable method development, e.g., when developing a new RLHF algorithm. By refining the model pipeline in simulation (often entailing multiple rounds of training), we need only run the final training with human feedback.
As discussed above, Appendix B.2 shows how to repurpose AlpacaFarm as a source of supervision that does not require a final human round, further decreasing the cost. However, we find that PPO trained with this automated source of supervision still lags behind human PPO. We see narrowing this gap as a promising direction for future work.
## Other
> I feel the paper lacks a discussion on evaluating the LLMs on the existing NLP datasets such as MMLU, SuperGLUE and benchmarks such as BigBench.
>get some more details about the hyperparameter search in the main/supplementary material.
> mention that Best-of-n policy uses n = 1024 more frequently.
We thank the reviewer for their feedback, which we have incorporated in the updated manuscript.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Hi, I thank the authors for the rebuttal.
- It is interesting to see that the optimal LLM feedback setups are different if you want to use them for supervision versus simulators.
On "affordable method development with AlpacaFarm as a simulator":
- I believe that the AlpacaFarm's "cost-effectiveness" would lie in their faithful automatic evaluation that achieves high agreement with the human judgments, instead of AlpacaFarm's feedback data itself.
- Let us say, a practitioner has N plausible RL algorithms for aligning their LLM. Without any doubt, collecting feedback data (pairwise judgments) from AlpacaFarm will be cheaper than human feedback.
- **Scenario 1**: Under the cost-effective argument in the paper, the practitioner would test their RL algorithms on AlpacaFarm feedback data, and use the best method (on AlpacaFarm eval) to train on human feedback data. Finally, they would perform a human eval of the algorithm trained with human feedback.
- **Scenario 2**: The practitioners could collect human feedback data and select the algorithm that works best on AlpacaFarm eval. Finally, they can perform a human eval of the best algorithm thus found.
- In my opinion, **Scenario 2** is more straightforward and cheaper than **Scenario 1**.
Overall, I am satisfied with the paper's findings and experiments. Good luck to the authors. I will keep my scores unchanged since there were not any major concerns in the original rebuttal anyway.
---
Reply to Comment 1.1.1:
Comment: Thank you for the detailed answer. This is a great point! We agree that Scenario 2 is cheaper in the case of a single-round RLHF, which is the experimental setting we consider!
In the case of multiple RLHF rounds, scenario 2 would likely become more expensive as a different set of human preferences would need to be collected for every considered model! We hope that future work will consider AlpacaFarm in multi-round settings. | Rebuttal 1:
Rebuttal: # General
We thank the reviewers for their insightful and constructive feedback.
We are pleased that the reviewers found our paper well-written [goj1, pSDY, JtTg, NPeD], thorough [goj1, pSDY], and believe that it may be an impactful and valuable contribution to the community [goj1, pSDY, JtTg, NPeD].
Two pieces of feedback were shared by multiple reviewers: the need for a limitations section [pSDY, JtTg, NPeD] and how we came up with the 25% label noise [JtTg, NPeD]. We address this general feedback here and will answer questions specific to each reviewer separately.
We will upload the updated manuscript as soon as OpenReview allows us to. We appreciate the reviewers' feedback, which we believe has improved the quality of our work.
## Limitation section
We agree with the reviewers' suggestions about highlighting our limitations [pSDY, JtTg, NPeD]. We have taken advantage of the additional page in the updated manuscript to include a limitation section, which consolidates and elaborates on the limitations discussed throughout the paper and appendices. We briefly highlight some of these here:
- **Validation**: Although we provided strong validation results of AlpacaFarm in Section 4, there are limitations in how we perform such validation. First, instructions are relatively simple and single-turn (even those from the real-world demo). Second, we only consider LLaMA 7B as the base model, as this was the only model powerful enough for learning from human feedback at the time of submission. Finally, human validation is based on feedback from 13 crowd-workers, which may not reflect broader human preferences.
- **Assumption**: We assume access to an “oracle” LLM, which is more powerful than the ones we are training. While this may be true in research settings, it's not always the case in practice.
- **Generalization of Hyperparameters**: Specific hyperparameters, like the KL regularization coefficient, might not translate seamlessly from simulated environments to training with human preferences. The AlpacaFarm simulator is thus likely more useful for method development than hyperparameter tuning (and we only validated it in that setting).
## 25% noise
In response to reviewers [JtTg, NPeD]'s question about how we arrived at a 25% label flipping rate, we have highlighted and expanded our discussion about it in the updated manuscript.
In summary, we selected 25% based on two factors: estimated human (intra- and inter-annotator) variability (\~0.35) and overoptimization. Injecting a 25% label flip brings our annotator variability closer to the human one (\~0.43), and it was also the point at which we began to observe overoptimization (see rebuttal figure).
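To make the injection step concrete, here is a minimal sketch (not the authors' code; the function name and setup are invented for illustration) of flipping a fraction of binary preference labels so that a low-variance simulated annotator disagrees with itself at roughly the rate observed among humans:

```python
import random

def flip_labels(prefs, flip_rate=0.25, seed=0):
    """Flip each binary preference label (0/1) independently with probability flip_rate."""
    rng = random.Random(seed)
    return [1 - p if rng.random() < flip_rate else p for p in prefs]

# A deterministic simulated annotator (variability ~0) gets noise injected so that
# repeated annotations disagree at a human-like rate.
prefs = [0] * 1000
noisy = flip_labels(prefs)
print(sum(noisy) / len(noisy))  # close to the 0.25 flip rate
```

The specific 25% value was chosen by the authors to match estimated human intra- and inter-annotator variability and the onset of overoptimization; this snippet only illustrates the injection mechanism itself.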
For a more in-depth discussion about annotator variability, including the variability of standard simulated annotators (<0.1), please refer to Appendix B.1 of the original manuscript.
Pdf: /pdf/e8d1f34b61cc523600ba024b4cc9a160e93eb1d4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Tackling Unconditional Generation for Highly Multimodal Distributions with Hat Diffusion EBM | Reject | Summary: This work tries to improve the unconditional generation performance of Energy-Based Models (EBMs) by combining several techniques. It includes a pretrained diffusion model as part of the generator and trains the energy function and generator through cooperative learning. The performance of HDEBM outperforms many strong EBM baselines.
Strengths: The authors highlighted several strong techniques to improve the generative performance of EBMs, including the introduction of a pretrained diffusion model as part of the generator, training the EBM under a cooperative framework, sampling in both latent and noise spaces, and 2-stage training. Some of these techniques are pre-existing and some are new. The final performance shows good results among EBMs.
Weaknesses: Currently, I have the following questions and concerns:
1. The proposed method appears to heavily depend on a well-pretrained (and distilled) diffusion model. While pretrained diffusion models exist for standard benchmarks like CelebA or ImageNet, pretraining and distilling such a model, especially on larger datasets or other specific applications, may pose greater challenges. This limitation could potentially restrict the application of the proposed method.
2. Additionally, how does the generative performance of the pretrained diffusion model used in this work compare? Does the Hat Diffusion Energy-Based Model (HDEBM) achieve superior results compared to the pretrained diffusion model used for training?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Here are my questions:
1. The impact of each step in the generative process: From my understanding, the image generation process involves three steps: the initial proposal of G1, the modification of G2 through the diffusion process, and the modification of EBM. Could the authors provide demonstrations of each generative step by showcasing a few samples and reporting the Fréchet Inception Distance (FID) for each step?
2. The understanding of pretrained diffusion model as MCMC sampling: Employing a diffusion model as MCMC sampling can alter the generative sample distribution to align it with the data distribution. However, as per the definition, negative samples from the Energy-Based Model (EBM) should originate from the EBM's own distribution. Although a well-trained EBM may generate samples from the data distribution, there can be a substantial gap between the EBM distribution and the data distribution during the training process. Hence, it begs the question of whether shifting the samples using a pretrained diffusion model could potentially introduce incorrect negative samples or shift the negative samples in an erroneous direction. Additionally, while the expression $D(\alpha x + \sigma z)$ holds true when $x$ follows the data distribution, it might not be valid when the initially generated $x$ deviates significantly from real data. How can we theoretically understand these two questions?
3. Missing reference:
It seems that the results of [1] are included in Table 1, but the paper is not in the reference list;
Also, the authors may consider including the results of [2, 3] in Table 1;
[1] A tale of two flows: Cooperative learning of langevin flow and normalizing flow toward energy-based model;
[2] Learning energy-based generative models via coarse-to-fine expanding and sampling.
[3] Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review; we have addressed your comments as follows:
* *Limitations of training the diffusion model:* While training diffusion models on complex datasets can require significant computational resources, a major benefit of our method is that one only needs to learn a truncated diffusion which focuses on the parts of the diffusion trajectory which are easier to learn. Thus, our method could potentially be more amenable to scaling than standard diffusion models for increasingly complex data.
* *Generation results compared to the pretrained diffusion:* We note that truncated diffusions cannot be used to generate model samples directly since they require initialization from noisy data. Like previous works TDPM and ES-DDPM, we investigate ways in which a truncated diffusion can be incorporated into a larger generative model. In the global response, we replicate ADM on unconditional ImageNet at 128x128 and find HDEBM has superior performance compared to the standard diffusion model, and therefore also to accelerated variants.
* *Investigation of different phases of sampling:* Thank you for the very useful suggestion. We present FID scores and sample visualizations for each phase of our Stage 1 ImageNet HDEBM in the global response.
* *Possible mismatch between EBM and data samples:* This is certainly a valid concern. In general, it is difficult to robustly gauge the degree of separation from the data distribution that is still compatible with successful refinement by the truncated diffusion. Proposition 3.2.1 sheds some light on your question. Since a perfect truncated diffusion defines an MCMC trajectory with the data distribution as its steady-state, applying $G_2$ should still push samples towards the data distribution even when the initial samples differ from the data distribution (e.g. biased samples from imperfect EBM learning). Applying $G_2$ to out-of-distribution states corresponds to the MCMC burn-in phase, while applying $G_2$ to in-distribution states corresponds to the MCMC steady-state phase. The empirical results show that HDEBM can eventually teach $G_1$ to generate samples which can be effectively refined by a truncated diffusion.
* *Missing reference:* Thank you for pointing out that [1] is missing from the bibliography and for the suggestion to include [2,3]. We will make sure these works are properly cited in future revisions. | Summary: The paper proposes *Hat Diffusion Energy-Based Model (HDEBM)*, a hybrid model with a generator and an EBM component that can be primarily applied for unconditional image generation tasks. It is built upon the framework of Hat EBM, which produces the final image sample $X$ by combining (through addition) a raw image output from a generic generator network, $G(Z)$, with a further refinement step parameterized by a residual random variable, $Y$, which is obtained through Langevin sampling via an energy network, $H$; $i.e.$, $X=G(Z)+Y$. *HDEBM* applies an alternative parameterization of the generator component, $G$, by coupling the original Hat EBM generator $G_1$ with a truncated and distilled diffusion model $G_2$, thus produces the final image by $X=G_2(G_1(Z_1), Z_2)+Y$. The truncated and distilled diffusion model component $G_2$ is added to the original framework with the goal of driving the image output from $G_1$ closer to the true data distribution, thus can be viewed as an additional refinement step before the addition of $Y$. The paper demonstrates experiments results mainly in unconditional image generation on CIFAR-10, Celeb-A $64\times 64$, and ImageNet $128\times 128$, including *HDEBM* achieving a SOTA FID score of $21.82$ on the ImageNet $128\times 128$ dataset.
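For concreteness, the composition described in the summary, $X = G_2(G_1(Z_1), Z_2) + Y$ with the residual $Y$ obtained by Langevin sampling under the energy $H$, can be sketched with toy 1-D stand-ins (all functions below are invented placeholders for illustration, not the paper's networks):

```python
import math
import random

def G1(z1):
    """Toy generator proposal (stand-in for the Hat EBM generator)."""
    return math.tanh(z1)

def G2(x, z2):
    """Toy one-step refinement (stand-in for the distilled truncated diffusion)."""
    return x + 0.1 * z2

def grad_energy(x):
    """Gradient of a toy quadratic energy H(x) = x^2 / 2."""
    return x

def sample_hdebm(rng, langevin_steps=50, step_size=0.1):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x0 = G2(G1(z1), z2)                # generator output refined by truncated diffusion
    y = 0.0
    for _ in range(langevin_steps):    # Langevin sampling of the residual Y under H
        y += (-0.5 * step_size ** 2 * grad_energy(x0 + y)
              + step_size * rng.gauss(0, 1))
    return x0 + y                      # X = G2(G1(Z1), Z2) + Y

x = sample_hdebm(random.Random(0))
print(math.isfinite(x))  # True
```

The sketch only mirrors the sampling pipeline's structure; the actual model uses image-sized tensors, learned networks, and automatic differentiation for the energy gradient.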
Strengths: * The introduction and the related work section are well-organized in general, in terms of the clarity of the *HDEBM* framework overview and its connection with other works.
* The limitations of the work from a technical perspective and in terms of potential social impact are discussed in detail via Appendix A and Appendix B.
* The choice of diffusion-based generative models to improve the generator component of the Hat EBM framework for image generation tasks is a reasonable one, due to their strong performance in modeling multimodal distributions.
Weaknesses: * It’s hard to gauge the novelty of this work, since the whole *HDEBM* framework from objective functions to engineering details resembles Hat EBM closely. It essentially can be viewed as one specific parameterization of the Hat EBM framework, by substituting the original generator with a different one that includes a truncated and distilled diffusion model.
* The technical details of the diffusion model component are not very sound:
* The terms “forward/reverse” and “forward and reverse” have been used multiple times to describe the truncated diffusion process $G_2$; however, since the goal of $G_2$ is to refine the output of $G_1$, it’s expected to only run the reverse process of the diffusion model to “denoise” the image towards the true data distribution. Therefore, it’s not very clear why the processes in both directions are run, especially since there is only one step after distillation.
* The term “approximate MCMC step/sampling” has been used to describe the role of the truncated diffusion model, with a theoretical development via Proposition 3.2.1. However, the wording of the proof is quite concise and vague (directly jumps to stating “the process is aperiodic and irreducible” in Line 146 after assuming the diffusion model is “perfectly trained” in Line 145, as well as concluding that $q_0$ is “a unique steady-state” in Line 150). It is already a well-established result that a diffusion probabilistic model is a Markov chain that aims to approximate the true data distribution, and the stationarity of such a chain shall be of less importance because we are not sampling as many timesteps as we want, but the same number during the reverse process as during the forward process (Ho et al., 2020). Therefore, the role of the authors' "noting that an ideal truncated diffusion defines an approximate MCMC process with the data distribution as its steady-state" (Line 96-98) is confusing.
* There are some factual errors in the authors’ claims about several related works:
* In Line 152-153, the authors claim that “SDEdit [33] empirically observes that a truncated diffusion process can add realism to naive edits or rough user-defined templates.” However, truncated diffusion models were not mentioned in that work.
* In Line 157-158, the authors claim that *DiffPure* “uses truncated diffusion to remove adversarial signals while preserving most of the original image appearance.” However, it appears that only conventional score-based diffusion models were used in that work.
* In Line 302-303, the authors write, “As described in [48] the entire distillation takes about the same time as training the initial truncated diffusion.” But truncated diffusion was not mentioned in that work.
* In Line 169-170, the authors write, “As noted in [48], a challenging aspect of learning a distilled diffusion is that the diffusion network output for a noise image at $t = T’$ provides essentially no information about the final state before distillation”, which appears counterintuitive, since any intermediate diffusion step should be viewed as a noisy version of the final state rather than white noise. It is not clear where in citation [48] such a statement was made.
* The design of combining two different generator networks appears to be a bit overcomplicated: there seems to be no obstacle in directly substituting the original generator with a model trained with the progressive distillation procedure, or consistency models (Song et al., 2023), if modeling multimodality is a desired property of this generator component under the Hat EBM framework.
* The content/structure of **EBM Learning** under Section 3.1 is very similar to Section 3.1 of (Hill et al., 2022), including the citations. Similarly, the background of diffusion models in Line 117-125 resembles Section 2 before Eq. (2) in (Salimans & Ho, 2022), and Line 233-234 resembles the text at the end of Sec. 3.2 of (Hill et al., 2022). This borders on text recycling and needs significant revision.
> Minor Issues
* Line 236 notation typo of $G(z)$: shall instead be “a fixed generator $G_2(x, z_2)$”?
* Repeated citation: [55] and [56].
* The pdf documents were uploaded as flat images, thus making them unsearchable and harder to read. It would be nice to upload a searchable and clickable pdf document in the future for reviewing.
[a] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency Models. In *Proceedings of the 40th International Conference on Machine Learning*, 2023.
[b] Mitch Hill, Erik Nijkamp, Jonathan Mitchell, Bo Pang, and Song-Chun Zhu. Learning Probabilistic Models from Generator Latent Spaces with Hat EBM. In *Proceedings of the 36th Conference on Neural Information Processing Systems*, 2022.
[c] Tim Salimans and Jonathan Ho. Progressive Distillation for Fast Sampling of Diffusion Models. In *Proceedings of the 10th International Conference on Learning Representations*, 2022.
[d] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *Proceedings of the 34th Conference on Neural Information Processing Systems*, 2020.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Could the authors provide experimental results of unconditional generation with ImageNet $128\times 128$ by only a truncated and distilled diffusion model?
* Would the authors provide a reference for the challenge of diffusion models mentioned in Line 31-35?
* Why does the truncated and distilled diffusion model have both the forward and the reverse process, instead of just the reverse process that directly maps a noisy image to a sample closer to the data distribution?
* Eq. (7) the equation on the right: is $x$ a real data sample, or one generated by $G_1$?
* Can the authors provide a little more explanation on the sentence in Line 264-265, “The function of this loss term can be interpreted as training $G_1$ to invert $G_2$ given forward noise $Z_2$ and target image $X$.”? Specifically, what does “invert $G_2$” mean?
* In Stage 2 from Figure 2, there are multiple MCMC steps; why are the mapping from sample $Z_1$ and that from $Z_2$ marked as red arrows as well?
* In Line 289-290, the authors write, “we can perform MCMC on $z$ for the density (11) but not for (8)” – could the authors provide more explanations for it?
* How is the difference between generated images computed, $e.g.$, in Eq. (10) – are they the Euclidean distances in the original image space?
---
Update on 08/28/2023: I’ve increased my overall rating from 3 to 4:
* As the authors pointed out, coordinating the different moving parts of the framework is non-trivial, and their design choices for combining EBMs with diffusion models under the same framework could inform other researchers. The framework achieves a SOTA result on unconditional ImageNet at $128\times 128$ resolution, while Hat EBM did not, demonstrating the effectiveness of these design choices.
* The authors’ claim as their first main contribution that $G_2$ “defines an MCMC trajectory whose steady-state is the data distribution” is not very solid: in practice, it is not clear where the forward process of $G_2$ starts ($X$, as the output of $G_1$, shall have a distribution quite different from $q_0$; otherwise there would be no need for $G_2$), or where the reverse process of $G_2$ ends up (no longer $q_0$, even with a perfectly trained $D$). Reviewer iE1P expressed a similar concern in their second question, but I am not convinced by the authors' response. Although the operation of adding noise and then denoising with $G_2$ is an interesting design, it lacks clear motivation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As mentioned before, the authors have sufficiently addressed the limitations from a technical perspective in Appendix B. Meanwhile, potential negative social impact has been discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're grateful for the effort in reviewing our work and for your valuable suggestions. Our clarifications and responses follow below.
* *Regarding novelty:* Like the related works TDPM and ES-DDPM, the primary novelty of our work comes from the design choices that we make to incorporate a truncated diffusion as part of a larger generative model. Our major technical novelty is using backpropagation through the truncated distilled diffusion both to train the generator network and to refine latent states to improve image quality in the Stage 2 HDEBM. Although HDEBM can be viewed as a special case of Hat EBM, integrating diffusion and EBM models in an effective way involves non-trivial design choices and insight into the learning process.
* *Forward/Reverse used to describe $G_2$:* As shown in (7), the network $G_2$ will first add noise to the output from $G_1$ to create a noisy sample, and then denoise the noisy sample with $D$. Thus, $G_2$ performs both the forward and reverse process. This use of a truncated diffusion matches ES-DDPM but differs from TDPM, which predicts noisy images directly. We used the approach from ES-DDPM rather than the approach from TDPM because we found that the TDPM approach was ineffective when trying to model samples with large magnitudes of noise added, while the TDPM approach yielded results very similar to Hat EBM for small noise magnitudes. In our work, $G_1$ can learn to output any distribution such that samples from $G_1$ plus noise match noisy data samples. We find this is much easier than teaching $G_1$ to directly match true data (as in ES-DDPM) or to directly match noisy data (as in TDPM).
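To make the order of operations concrete, the forward/reverse behavior of $G_2$ described here can be sketched in a few lines. This is a minimal illustrative sketch under our own assumptions (a DDPM-style noising formula with a cumulative schedule `alpha_bar` and a one-step distilled `denoiser`), not the paper's exact parameterization:

```python
import numpy as np

def g2(x, z2, denoiser, t_trunc, alpha_bar):
    """Truncated forward/reverse step: noise the G_1 output, then denoise it.

    x         -- output image of G_1
    z2        -- standard Gaussian noise with the same shape as x
    denoiser  -- one-step distilled denoiser D(x_noisy, t)
    t_trunc   -- truncation timestep T'
    alpha_bar -- cumulative noise schedule with entries in (0, 1)
    """
    a = alpha_bar[t_trunc]
    # forward: diffuse the G_1 output to timestep T' using the noise z2
    x_noisy = np.sqrt(a) * x + np.sqrt(1.0 - a) * z2
    # reverse: a single distilled denoising step back toward the data manifold
    return denoiser(x_noisy, t_trunc)
```

With an identity `denoiser` this simply returns the noised image; in the actual model, $D$ maps the noised $G_1$ output toward the data distribution in one distilled step.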
* *Regarding Proposition 3.2.1:* Our claim about truncated diffusion is distinct from the observations in Ho et al., 2020. In particular, Proposition 3.2.1 describes the image space trajectory of a sample after repeated forward/reverse diffusion processes are applied, which defines an MCMC trajectory whose steady state is the true data distribution under the assumption of perfect modeling. While diffusion updates for small timesteps $t$ of a standard diffusion model can also be viewed as approximate MCMC steps on the data distribution (since $q_t \approx q_0$ for small $t$), in practice the diffusion steps for small $t$ provide only a very minor change to the image. Applying a forward/reverse process of a truncated diffusion can yield much larger movement in the image space while still following the image manifold, as done in SDEdit and related works.
* *Regarding factual errors:* We respectfully disagree that these are factual errors. SDEdit and DiffPure both use truncated forward/reverse processes which are directly analogous to our generator $G_2$, even though they do not use the term "truncated diffusion" explicitly. Although [48] distills full diffusion models, we find their observation about the cost of learning a distilled full diffusion also applies when learning truncated diffusion models (total cost of distillation stages is about the same cost as the base stage, whether the base stage is full or truncated). We will clarify the wording of line 302-303 in future revisions. Lines 169-170 refer to the paragraph on Page 5 of [48] which begins "Although this standard specification..." and describes the need to use alternate parameterizations for learning distilled diffusions.
* *Why not just use Progressive Distillation?:* We find that our framework has superior performance compared to standard diffusion models on unconditional ImageNet 128x128 (see global response Table 1), in which case HDEBM would also outperform accelerated variants like Progressive Distillation.
* *Similarities in Background Section:* We indeed adapt the background section notation and presentation from existing work to give a concise yet relatively complete description of previous works. In future versions, we will note that the background sections are generally based on the notations and presentation from Hat EBM and Progressive Distillation.
* *Minor Issues:* In Line 236, $G(z)$ refers to the whole generator $G_2(G_1(z_1), z_2)$ where $z = (z_1, z_2)$. We will fix the repeated citation. We sincerely apologize for the unsearchable format, this occurred when we manually separated the main paper and appendix. Future revisions will be fully searchable.
* *Generation with only truncated diffusion:* A truncated diffusion is not capable of serving as a generative model in its own right because it requires initializations which match noisy data in order to generate realistic samples. Like TDPM and ES-DDPM, our work investigates ways of incorporating a truncated diffusion into a larger generative model.
* *Challenges of unconditional diffusion modeling:* This refers to the gap in sample quality between unconditional and conditional diffusion models for high-resolution and highly multimodal data, which is widely known in the literature. Please refer to our response to a similar question from Reviewer HLKM.
* *Eq. 7:* $x$ is a sample generated by $G_1$.
* *Meaning of inverting $G_2$:* Minimizing the loss term $\| X - G(Z_1, Z_2 ; \phi) \|^2 = \| X - G_2(G_1(Z_1; \phi), Z_2) \|^2$ can be accomplished by tuning $\phi$ so that $G_2(G_1(Z_1; \phi), Z_2) \approx X$. Then one can view the update of $\phi$ as solving $G_1(Z_1; \phi) \approx X^{-}$ for some image $X^{-}$ such that $G_2 (X^{-}, Z_2) \approx X$.
* *Red arrows in Stage 2 diagram:* These arrows indicate that we updated $Z_1$ and $Z_2$ using Langevin dynamics in Stage 2, in addition to the residual image $Y$.
* *First Stage vs. Second Stage density:* Since $z$ appears in the normalization constant of (8), one cannot use standard MCMC techniques because this would require calculating the intractable normalizer for each update of $z$. The intractable normalizer of (11) does not depend on $z$ and one is free to use any MCMC technique which only requires the energy.
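For readers unfamiliar with the distinction, the MCMC technique at issue only requires gradients of an unnormalized energy. A generic Langevin update in the convention of Eq. (3) (step size $\epsilon$, noise sample $V_k$) can be sketched as follows; this is our illustrative sketch on a toy energy, not the paper's sampler:

```python
import numpy as np

def langevin_step(z, grad_U, eps, rng):
    """One Langevin update: z' = z - (eps^2 / 2) * grad U(z) + eps * V,
    with V ~ N(0, I). Only the energy gradient is needed, never the
    normalizing constant -- which is why MCMC applies to (11) but not (8)."""
    v = rng.standard_normal(z.shape)
    return z - 0.5 * eps**2 * grad_U(z) + eps * v

# Toy check: for the quadratic energy U(z) = ||z||^2 / 2 (so grad U(z) = z),
# the chain should equilibrate near a standard normal distribution.
rng = np.random.default_rng(0)
z = np.zeros(1000)
for _ in range(5000):
    z = langevin_step(z, lambda u: u, 0.1, rng)
```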
* *Distance metric for (10):* Yes, this is Euclidean distance in the image space.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Dear authors,
Thank you very much for your detailed response; it has helped me to better understand the paper and has clarified some of my questions. After incorporating other reviewers’ comments and rebuttal responses, I plan to keep my current score for now. I am open to making further adjustments during the second-phase discussion period.
---
Reply to Comment 1.1.1:
Title: Thanks for your review
Comment: We greatly appreciate your thoughtful review and discussion. Please let us know if there are any points which we could address or improve upon which would assist with your assessment of our work. | Summary: The authors propose the Hat Diffusion Energy-Based Model (HDEBM), which incorporates a distilled truncated diffusion model as a generator network for a Hat EBM. They note that a perfectly-trained truncated diffusion model can be used to define an MCMC process whose steady-state distribution is the data distribution. A two-stage training procedure is proposed. In the first stage, the energy network and generator networks are trained by performing MCMC on residual images conditioned on frozen latents. In the second stage, the energy network is finetuned so that both latents and residuals can be updated using MCMC. Empirically, HDEBM outperforms existing EBMs on ImageNet 128x128 and performs comparably to GANs and diffusion models on smaller resolutions.
Strengths: - The work seems quite original. The idea of combining the strengths of diffusion models and EBMs for multimodal unconditional generation is a good one, and the procedure for unifying them into a single framework is nontrivial.
- Although proper comparisons are difficult to make, many experiments are run and results show HDEBM outperforms previous EBMs and achieves comparable results in image quality and sampling cost to GANs and some diffusion models.
Weaknesses: - I found the presentation of the methodology section to be a bit confusing, and found myself referring back and forth between the main text and the appendix. It may be helpful if possible to bring several details from the appendix, such as the Stage 1 training algorithm, to the main text.
- The experiment section could be strengthened a bit. It may be compelling to compare with more recent diffusion models on CIFAR10 and CelebA, such as EDM [1]. Comparison with Hat EBM on CIFAR10 and CelebA would also be enlightening.
[1] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems, 2022
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The authors mention that diffusion models have drawbacks when it comes to highly multi-modal unconditional modeling (lines 27-28). I may be misunderstanding this point, but this seems separate from the issue of there being few diffusion models to benchmark against at higher resolutions. What is the rationale for why multi-modal modeling is challenging for diffusion, and is there some basic experiment that could be included to demonstrate this?
- The proposition (Prop 3.2.1) regarding the connection between truncated diffusion and MCMC seems to be known and mentioned in previous works, as the authors mention, so characterizing this as a novel perspective may be too strong.
- The claim that HDEBM sampling costs are "significantly lower than diffusion models" (lines 13-14) seems unsubstantiated by the experimental results.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful insights and suggestions to better our work. We have addressed each of your points below.
* *Reorganization to improve clarity:* We agree that the clarity of our presentation could be improved by bringing details from the appendix into the main text. Future revisions of the text will focus on improving clarity and making sure that the main text is as self-contained as possible.
* *Improving the experiment section:* We will include EDM results for CIFAR-10 and CelebA in Table 1 to provide additional context. Table 1 of the original submissions compares HDEBM and Hat EBM on CIFAR-10, CelebA, and ImageNet 128x128. Additional interpolation, reconstruction, and inpainting experiments can be found in the global response.
* *Challenges of Diffusion for unconditional modeling:* Thank you for bringing up this point. We agree that the wording is somewhat unclear. For low-resolution data such as CIFAR-10, the gap in image quality between unconditional and conditional diffusion models is quite low. For high-resolution and highly multimodal data, there remains a large gap in quality between conditional and unconditional diffusion models. The gap between conditional and unconditional models is, in our view, evidence that unconditional modeling remains challenging for diffusion models. It is probably more clear to say that all current generative models, including diffusion models, face challenges for high resolution and highly multi-modal data. We will rephrase this section of the introduction to present this more clearly.
* *Proposition 3.2.1 known in prior works?:* Although many prior works empirically observe that performing a partial forward and reverse diffusion process can add realism to an image, to our knowledge no work has explicitly observed that repeated forward/reverse processes of a perfectly trained diffusion defines an MCMC trajectory in the image space whose steady-state is the data distribution. We carefully checked SDEdit, DiffPure, TDPM, ES-DDPM, and others for such an observation but could not find it. Nonetheless, the diffusion literature is rapidly growing and it is difficult to ensure that we did not miss this observation in another work. If we are missing a reference which makes the same observation, we will gladly include it in future revisions and rephrase our contributions accordingly.
* *Sampling costs compared to diffusion models:* Thank you for pointing this out. We meant that our method has significantly lower costs than standard (unaccelerated) diffusion models. In future revisions, we will rephrase this sentence to state that our work has comparable cost to accelerated diffusion models.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed responses. I feel my questions/concerns are addressed and I will retain my score of weak accept. | Summary: Hat EBM introduced a framework to incorporate an arbitrary generator network $G : \mathcal{Z} \to \mathcal{X}$ (for example a GAN generator or a VAE) into an EBM by defining a joint energy function over the generator latent space and a residual image space that bridges between the generator output and the ground-truth data distribution. In particular, in HEBM an image is generated as $X = G(Z) + Y$, where $Z$ is a latent vector passed through a generator $G$, and $Y$ is a residual image. The joint energy function is $U(Y, Z ; \theta) = H(G(Z) + Y ; \theta)$, where $H(x; \theta)$ is a neural net that maps from images to scalars. Hat EBM models the joint distribution of the generator's latent space and the residual image, and can either use a pre-trained generator $G$ or learn $G$ in tandem with the energy function.
This paper introduces a variant of Hat EBM called Hat Diffusion EBM (HDEBM), that incorporates a diffusion model to partially denoise the output of the generator $G$. In particular, the authors augment HEBM with a diffusion model such that the generative process first samples a latent $Z_1$ and passes it through the generator function $G_1$ to produce an initial output image $G_1(Z_1)$. Then, as an additional step compared to HEBM, HDEBM adds noise $Z_2$ and runs one step of denoising using a pre-trained and distilled diffusion model, which yields $G_2(G_1(Z_1), Z_2)$. The application of the initial generator followed by a denoising step can be interpreted as a more complicated generator function $G(Z_1, Z_2)$. Finally, following Hat EBM, the final output image is formed by adding a residual, $X = G(Z_1, Z_2) + Y$.
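The generative process just summarized can be sketched end-to-end. Below, `g1`, `g2`, and `sample_residual` are hypothetical stand-ins for the trained generator, the distilled truncated diffusion, and the Hat EBM residual sampler, respectively; this is an illustrative sketch of the composition, not the paper's implementation:

```python
import numpy as np

def hdebm_sample(g1, g2, sample_residual, rng, z_dim, img_shape):
    """Sketch of the HDEBM generative process: X = G_2(G_1(Z_1), Z_2) + Y."""
    z1 = rng.standard_normal(z_dim)      # latent for the initial generator
    x0 = g1(z1)                          # initial generator output G_1(Z_1)
    z2 = rng.standard_normal(img_shape)  # forward noise for the truncated diffusion
    x1 = g2(x0, z2)                      # one distilled forward/reverse step
    y = sample_residual(x1)              # residual image from MCMC on the energy
    return x1 + y
```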
For the diffusion component, the authors first train a truncated diffusion model from scratch, focusing on the less-noisy half of the denoising trajectory; they then use progressive distillation to distill the model into a single-step denoiser.
The authors propose a two-stage pipeline to train the HDEBM: the first stage assumes that $Z \sim \mathcal{N}(0, I)$ and learns the distribution of the residual images $Y$ conditioned on $Z$; the second stage learns the joint distribution of the generator latents $Z$ and the residuals $Y$. They show that the second stage leads to slightly improved results compared to only using the first stage.
Empirically, they use HDEBMs for unconditional image generation on CIFAR-10, CelebA, and ImageNet, and they compare FID scores to other classes of generative models (including other EBMs and diffusion models). They obtain competitive performance.
Strengths: * Overall, the paper is well-written, clearly introducing the approach and discussing how it differs from Hat EBM.
* The proposed approach significantly outperforms Hat EBM in terms of FID scores.
* The idea is nice: HDEBM is an interesting way to incorporate ideas from diffusion models into an EBM, which could be useful inspiration for future work.
* Overall, the unconditional image generation experiments show very good performance, in particular compared to explicit EBMs. A good set of comparisons is provided, including other EBMs (Hat EBM, VAEBM, diffusion recovery likelihood EBM, VERA) and non-EBM methods like BigGAN.
Weaknesses: * The paper does not clearly motivate the HDEBM approach. Why would one wish to use this complicated framework with several moving pieces (that need to be trained in stages) rather than simply using another class of generative model, in particular a diffusion model that also yields high sample quality without dropping modes?
* I think that the HDEBM approach should be further compared to the progressive distillation component that forms the denoiser $G_2$, in isolation. If Prog. Distillation is well-trained, then it has fast sample times and good FID scores, so what does HDEBM add to that?
* In Table 3, Prog. Distill significantly outperforms HDEBM both in terms of FID and sampling speed.
* The overall framework is fairly complicated, as it requires several components: the generator function $G_1$, the pre-trained, truncated and distilled diffusion model $G_2$, and the residual for the energy function. In particular, this leads to a complicated three-stage training pipeline, where first the diffusion model is trained and distilled, followed by the two stages of learning the energy function.
* The background in Section 3.1 should not be part of the method section, as it is not novel; this should be a new section between the Related Work and Method sections.
* In Eq. 3, it is strange to use $\epsilon$ to denote the step size and $V_k$ to denote the noise sample; usually $\epsilon$ would denote a noise sample, and the step size would be $\eta$ or $\alpha$.
* There seems to be inconsistent use of lowercase and capital $x$ to denote data samples (e.g., $x_t$ in Eq. 4 and $X_t$ in Eq. 5). Similarly, what is the difference between $z$ and $Z$ used in different parts of the paper (e.g., $G(z)$ in Eq. 8 and $G(Z)$ in the sentence just before Eq. 8)?
* I do not think that the discussion of Proposition 3.2.1 is clear. Among other things, the proposition should clarify what is meant by a "perfect reverse process $D$", and clarify why the assumption that $D$ is perfect implies that there is always an image in $q_t$ that will map to a given image in the support of $q_0$.
* It feels like Section 3.2 is present to add math and formality to the paper, which does not seem necessary or helpful. This section does not inform the design decision of the HDEBM, it only serves as post-hoc justification for plugging in the diffusion model.
* I think that the paper should further clarify why two training stages are necessary rather than a single fused stage.
* Why are there no diffusion models listed in Table 2?
* Why is progressive distillation not reported in the CelebA part of Table 3?
* Can the diffusion model be fine-tuned during training of the HDEBM? If so, that would be an interesting ablation to see how much it helps.
* Figure 1 in this paper is almost identical to Figure 1 in Hat EBM; the only change is the addition of the "approximate MCMC" component $G_2$ and the corresponding random sample $Z_2$. I think this difference could be made clearer if the new components were highlighted, and the caption stated that the diagram was inspired by HEBM.
* The left side of Figure 2 is almost identical to Figure 2 in the Hat EBM paper; this should be mentioned in the caption, and the parts that are exactly the same or different should be highlighted somehow.
**Minor**
* L33 "network changes" --> "changes in the network weights (during training)"?
* L64 "curated samples" --> It would be nicer to see un-curated samples.
* Usually, $\mathcal{N}$ is used to denote a normal distribution rather than $N$. Also, typically $\mathbb{E}$ is the standard notation for expectation, not $E$ as used in the paper.
* L130 typo: "our works utilizes" --> "our work utilizes"
* L133 typo: "the full diffusion" --> "the full diffusion model"
* L161 typo: "values of $t$ greatly changes" --> "values of $t$ greatly change"
* Stages 1 and 2 should be more clearly delineated in Figure 2, for example with larger labels on top of the left- and right-hand sides.
* L266 typo: "match EBM" --> "match the EBM"
* In the caption of Figure 1: "passed through a forward/reverse truncated diffusion in $G_2$. $G_2$ then performs approximate MCMC on the data distribution."
* This wording is unclear: it sounds like first we perform forward/reverse diffusion and then the diffusion model performs approximate MCMC, but this is intended to say that the truncated diffusion is essentially doing approximate MCMC.
* L267: "We view this term as a regularizer." --> I think that this should be stated earlier, otherwise it sounds like both terms are equally important.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is truncated diffusion used primarily to reduce the training time by considering only half the number of denoising steps? Have the authors investigated whether using non-truncated diffusion helps, at the cost of more compute?
* While empirically it seems to work, in principle wouldn't the output of $G_1$ potentially be out-of-distribution for the denoiser $G_2$? That is, $G_2$ is trained to denoise data that has been corrupted in a particular way (additive Gaussian noise), while the "corruptions" output by $G_1$ may be quite different. It might be interesting to see an ablation with respect to the amount of noise $Z_2$ added to $G_1(Z_1)$ before denoising. It seems useful to add $Z_2$ to bridge the OOD gap between the generated $G_1(Z_1)$ and the images $G_2$ was trained on.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * It is not clear why the paper focuses entirely on unconditional generation. Is it possible to extend HDEBM to the conditional setting? The original Hat EBM paper was applied in both conditional and unconditional settings.
* The experimental evaluation is somewhat limited, as only three main experiments are performed: unconditional image generation on CIFAR-10, CelebA, and ImageNet. The metrics focus on FID scores and compute/memory costs. But EBMs can do many other tasks, such as inpainting, OOD detection, etc. It might provide a more complete picture if the paper had more diverse experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough and positive review. Responses to your main comments are below.
* *Choice of EBM model family:* Please see our response to a similar question from Reviewer 88Mj.
* *Comparison to the progressive distillation $G_2$ in isolation:* The truncated diffusion model $G_2$ can create realistic samples given noisy data samples, but it is not capable of serving as a generative model on its own. As in related works TDPM and ES-DDPM, the truncated diffusion in HDEBM must be a component of a larger generative model. Our work builds on TDPM and ES-DDPM to investigate ways in which a truncated diffusion model can be incorporated into a larger generative model.
* *Better performance of Progressive Distillation:* Although we expected Progressive Distillation to outperform HDEBM on lower-resolution datasets, our experiments on higher-resolution unconditional datasets show that HDEBM can outperform standard diffusion models (and therefore accelerated variants like Progressive Distillation) in some scenarios.
* *Complexity of the Pipeline:* We acknowledge that our pipeline is more complex than standard diffusion training. Our use of multi-stage training is consistent with current truncated diffusion works TDPM and ES-DDPM, which also train the final model in multiple stages.
* *Background in Section 3.1 should not be part of the method section:* Thank you for pointing this out. We will move Section 3.1 into Section 2.
* *Notation issues:* The confusion in notation in (3) comes from mixing notation common in the EBM literature with notation common in the diffusion literature. We aim to use lower-case notation where a variable plays the role of a constant and upper-case notation where it is a random variable. We will improve the notation in future revisions.
* *Regarding Proposition 3.2.1:* Reasoning about the properties of a perfectly trained generative model can yield insights into the learning framework. For GAN models, the works [a, b] both reason about the properties of perfectly trained GANs and develop probabilistic interpretations (i.e. that ideal GANs minimize JSD and that composing the generator and discriminator of a perfect GAN defines an EBM). We view Proposition 3.2.1 in a similar vein. A perfectly trained diffusion model $D$ has the property that, if $X_t \sim q_t$, then $D(X_t, t) \sim q_0$ (see the "ODE Formulation" Section of EDM [c] for a similar discussion). Although truncated diffusions are used to add realism to initial images in several prior works (notably SDEdit), these works lack a theoretical perspective for why truncated diffusion increases realism. Our work provides an explanation for this phenomenon from the perspective of MCMC sampling, which is known to push out-of-distribution states towards the data distribution. We believe Proposition 3.2.1 adds important context for understanding truncated diffusions (in our own work and beyond) and could perhaps motivate development of MCMC samplers based on truncated diffusions.
* *Reason for using 2 stages:* The first stage draws initial latent vectors from random noise, then adds a residual image refinement while the latent vectors that define the generator output are fixed. The second stage allows refinement of the latent vectors as well as the residual image. Performing MCMC in the latent space allows the model to find nearby latents which lead to significantly better quality generated images $G(z)$.
* *Diffusion Comparison in Table 2:* To our knowledge, there is no publicly available high-quality unconditional diffusion model trained on ImageNet at 128x128 resolution. To provide essential context for our results, we train an ADM model on unconditional ImageNet at 128x128 resolution. HDEBM generally has stronger performance.
* *Similarity with Hat EBM Figures:* The similarity is intentional, as our work is closely related to Hat EBM. The suggestion to highlight the differences is very helpful and we will follow it in future revisions.
* *Minor Issues:* Thank you for pointing out these issues, we will follow your suggestions. Uncurated samples for each model can be found in Appendix D.6.
* *Motivation for use of Truncated Diffusion:* Yes, the primary reason for using a truncated diffusion is to alleviate the computational cost of the diffusion and shift resources towards initializing and refining the truncated diffusion. We also expected that truncating a full diffusion, as well as increasing the scale of the truncated diffusion, could improve quality.
* *$G_1$ output might not match true corruptions:* This is a very valid concern and a critical part of our design. In our framework, $G_2$ will add noise to the output of $G_1$ before applying the denoiser $D$. Therefore the corruptions of the output of $G_1$ before denoising match ground-truth noise corruption. Unlike TDPM, $G_1$ does not predict noisy images directly. Instead, $G_1$ can learn any distribution such that adding noise to the output of $G_1$ matches the distribution of noisy data. We experimented with directly predicting noisy samples with $G_1$ as in TDPM but found poor results for large noise magnitudes.
* *Lack of Conditional Models:* In our experience, conditional EBMs often come with additional instability beyond their unconditional counterparts. We focus our efforts in a direction where we believe our method has the most potential compared with existing methods. Although Hat EBM uses the term "conditional Hat EBM" for one variant of the model, all experiments in Hat EBM are performed on unconditional data.
* *Additional Experiments:* Additional interpolation, reconstruction, and inpainting experiments are included in Figure 1 of the global response.
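To make the Stage 2 latent refinement discussed above concrete, here is a minimal, purely illustrative sketch of Langevin dynamics in latent space on a toy quadratic energy. The generator `G`, energy `U`, and all constants are hypothetical stand-ins, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "generator" and a quadratic "energy".
W = rng.standard_normal((8, 4))      # maps a 4-dim latent to an 8-dim "image"
G = lambda z: W @ z
U = lambda x: 0.5 * np.sum(x ** 2)   # lower energy = more realistic image

def grad_U_latent(z):
    # Chain rule: d/dz U(G(z)) = W^T G(z) for this toy quadratic energy.
    return W.T @ G(z)

def refine_latent(z, n_steps=200, eps=0.1):
    """Langevin refinement of the latent so that G(z) moves to low energy."""
    for _ in range(n_steps):
        z = (z - 0.5 * eps ** 2 * grad_U_latent(z)
             + eps * rng.standard_normal(z.shape))
    return z

z0 = rng.standard_normal(4)
z1 = refine_latent(z0)  # G(z1) typically has much lower energy than G(z0)
```

The refinement only moves the latent, so the output always stays on the generator manifold, which is the intuition behind "finding nearby latents" in the response above.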
[a] Goodfellow et al., Generative Adversarial Nets.
[b] Che et al., Your GAN is secretly an energy-based model..., 2020.
[c] Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, 2022.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I have read the other reviews and the authors' rebuttal. I agree with other reviewers that the proposed method is fairly incremental compared to Hat EBM, but I think that HDEBM is a valid contribution that improves performance compared to other EBMs, though not necessarily compared to other diffusion models. I thank the authors for their responses, and for performing additional experiments in the rebuttal PDF. The authors have addressed most of my concerns.
HDEBM is related to TDPM and ES-DDPM, and it would be good to add more discussion of these related works in the paper.
I raised my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your time and guidance
Comment: Thanks again for taking the time and effort to provide a thorough and constructive review. We sincerely believe our paper will be significantly improved by incorporating feedback from yourself and other reviewers. We are very glad to hear that we have addressed most of your questions and that you have decided to raise your score. Future versions of the paper will provide more discussion of and comparison with the related works TDPM and ES-DDPM. | Rebuttal 1:
Rebuttal: Thanks to all reviewers for their time and insightful comments and suggestions. Our paper will certainly benefit from incorporating reviewer feedback in future revisions. Our global response includes:
* a larger scale HDEBM experiment
* results of ADM for unconditional ImageNet 128x128
* additional reconstruction, image completion, and interpolation experiments
* further investigation of the samples from different steps of our model
Pdf: /pdf/c5aa920ade6a1cb33076ddbcb9ec4a0f94f2cd52.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors describe a method to facilitate faster sampling in diffusion models whilst retaining quality, with a mix of energy and diffusion models. This consists of using an implicit generator, followed by noising then denoising from a pre-trained distilled diffusion model as a corrector, followed by an energy-based model as an additional corrector. This is then trained in two stages, with the exception of the pre-trained distilled diffusion.
The authors introduce an MCMC approach whereby each step consists of adding noise and then removing noise using a denoising diffusion model.
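The noise-then-denoise step described in the summary can be sketched as follows. This is an illustrative toy: the "denoiser" below is the exact posterior mean under a hypothetical Gaussian data model, standing in for a trained network, and all constants are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

MU, S2 = np.ones(16), 0.04   # toy data distribution: N(MU, S2 * I)
AB = 0.25                    # \bar{alpha}_{T'}: cumulative noise level at truncation time T'

def denoise(xt):
    """Exact denoiser for the toy model: posterior mean of x0 given
    x_t = sqrt(AB) * x0 + sqrt(1 - AB) * eps."""
    k = S2 * np.sqrt(AB) / (AB * S2 + (1.0 - AB))
    return MU + k * (xt - np.sqrt(AB) * MU)

def noise_denoise_step(x):
    """One corrector step: forward-noise to time T', then denoise."""
    xt = np.sqrt(AB) * x + np.sqrt(1.0 - AB) * rng.standard_normal(x.shape)
    return denoise(xt)

# An out-of-distribution start is pushed towards the data distribution.
x = 10.0 * np.ones(16)
for _ in range(5):
    x = noise_denoise_step(x)
```

Iterating the step pulls an out-of-distribution state towards the data mean, which is the MCMC-style corrector behavior the summary refers to.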
Strengths: The proposed method obtains reasonably competitive performance across the tasks in question.
Taking gradients through truncated / distilled diffusions is interesting, and could be a useful contribution elsewhere.
Using a noising-then-denoising diffusion model as a corrector is a nice insight and idea, though similar approaches have already been considered in the literature as a way of correcting samples alongside the replacement conditioning method [1].
[1] RePaint: Inpainting Using Denoising Diffusion Probabilistic Models, Lugmayr 2022, https://openaccess.thecvf.com/content/CVPR2022/html/Lugmayr_RePaint_Inpainting_Using_Denoising_Diffusion_Probabilistic_Models_CVPR_2022_paper.html
Weaknesses: If I have understood this correctly, the method appears quite incremental over existing truncated diffusion models [1]. The approach of [1] also trains an implicit generator for a noised version of the data, to which a diffusion is then applied as a corrector. Given the similarities, I would appreciate a lot more discussion of how this approach differs, and an experimental comparison, in the main text. I understand some experiments are shown in Table 2, which suggest that [1] performs better in terms of FID than this approach; what is the reason for this, given the similarities?
What is the benefit of the energy-based parameterisation? Langevin dynamics using the energy parameterisation is slow to sample, as one needs to take a gradient at each step; this is at odds with the objective of accelerated sampling via the distilled diffusion and truncation/implicit generator. Can the same Langevin dynamics not be applied using the score from the diffusion model at time T'? This is well known as "corrector" steps in the diffusion-model literature [2].
Table 1 presents this work as an energy-based model; it is not clear if that is really the case, given the output is corrected with a diffusion model. The distinction is not so clear to me. Table 1 shows the authors' method as top-performing but neglects to show better-performing EBMs, such as the ones trained using score matching in [3], which have similar performance to score-parameterised diffusion models.
Greater clarity is required around defining G, G_1, G_2, D. These are functions, networks, etc., yet sentences such as "noising and denoising applied to G_2" then do not make much sense. The multi-stage training, pre-training, and multi-stage sampling are quite complicated and appear difficult to implement. It would be beneficial to include the algorithms from the appendices in the main text to improve clarity.
Altogether clarity is an issue. Separating the training from the generation sections would be helpful.
[1] Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders, Zheng et al 2022
[2] Score-Based Generative Modeling through Stochastic Differential Equations, Song et al 2021
[3] Should EBMs model the energy or the score?, Salimans et al 2021, https://openreview.net/forum?id=9AS-TF2jRNb
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weaknesses above.
What is the benefit of this approach over truncated diffusion models or regular diffusion models? It appears sampling is roughly the same cost (including Langevin dynamics cost) and quality is questionable.
Perhaps this is an OpenReview issue, but the pdf text is not searchable; I do not have this issue on other papers I am reviewing. Is the text saved as an image in the pdf? This makes reviewing very difficult.
How is ENFE defined? Don't the MCMC steps essentially still involve neural function evaluations, but with gradients as well, so perhaps twice the cost of a single network evaluation?
Given the authors' method has been tried on ImageNet 128, other baselines should also be investigated on ImageNet 128, including ADM [7], given its better performance at higher resolution.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors include limitations and broader impact sections in the appendices; these probably ought to be in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and your suggestions for improving our work. We agree that our central technical innovation is taking gradients through the truncated and distilled diffusion, which allows us to learn energy and generator networks that are adapted to the truncated diffusion. We address your review comments and questions below.
* *Relation with TDPM:* Our work is certainly related to TDPM and the similar work ES-DDPM. Unlike TDPM and like ES-DDPM, we add noise to generator samples before applying the truncated diffusion. The initial generator $G_1$ does not predict noisy images directly, but learns a distribution such that samples from $G_1$ plus additive noise match noisy data. We attempted to teach $G_1$ to directly predict noisy data but found this approach ineffective because it became difficult to directly model noisy data when large amounts of noise are added ($T'=512$ truncation steps as in our work), and our results were very similar to the base Hat EBM when small amounts of noise were added ($T'=100$ truncation steps as in TDPM). Unlike both TDPM and ES-DDPM, our auxiliary energy and generator are adapted to the truncated diffusion instead of learned independently, and we explore high-resolution generation with truncated models trained from scratch. Further comparison can be found in Appendix C.2. We will include more comparison with existing methods in the main paper. TDPM, as a GAN-based method, outperforms HDEBM on small-scale datasets such as CIFAR-10, likely because GAN-based methods generally outperform EBM methods in this domain. Our central focus in this work is exploring high-resolution generation, where relative performance of GAN and EBM methods might not match low-resolution trends.
* *Choice of EBM model:* We chose the Hat EBM parameterization for two reasons: 1) Hat EBM shows good performance relative to GANs with simple networks for high-resolution unconditional generation and 2) Hat EBM is compatible with a refinement stage (Stage 2 HDEBM), where the image appearance can be improved by movement in the latent space. We believe similar ideas could be applied to other kinds of generative models, but this is beyond our work's scope. EBMs are much smaller than the diffusion networks, and an EBM Langevin step with a backward pass can take an order of magnitude less compute than a forward pass with a diffusion UNet. To keep compute low, our truncated distilled diffusions only require one forward pass to perform the entire reverse process. More steps or predictor/corrector steps could yield improvement, but we leave this for future work.
* *EBM model family and related work [3]:* In this work, we use EBM to refer to a family of models that learns a single energy surface that represents the energy surface of the data distribution. Both stages of HDEBM incorporate all networks, including the truncated diffusion model, into a single unnormalized density in equations (8) and (11). Our work is derived from Cooperative Learning and Hat EBM, which both fall under the EBM umbrella. We view the work [3] as a diffusion model whose components are energy-based models. Although such distinctions are ultimately subjective, we believe [3] belongs more to the diffusion family than the EBM family, since the goal is to learn many models of noisy data at various noise levels instead of a single model of non-noisy data. We also note that recent EBM works such as CLEL do not include [3] among EBM baselines. We will include the reference [3] in future revisions as an "Energy-Based Diffusion" (in contrast to typical score-based diffusion).
* *Definition of $G_1$, $G_2$, and $D$:* $G_1 (z)$ provides initial proposals from noise $z$. These samples, plus additive Gaussian noise, should match noisy data samples (line 206-207). $D(x)$ is a truncated distilled diffusion which can denoise a noisy data image $x \sim q_{T'}$ in a single forward pass (lines 188-190). $G_2 (x, z)$ is defined in (7), and it takes a sample $x$ from $G_1$, applies $T'$ steps of the forward process with noise $z$ to get a noisy sample, then denoises this noisy sample with $D$. "Noising and denoising through $G_2$" refers to the fact that $G_2$ contains both the forward process (noising) and the reverse process (denoising).
* *Implementation complexity:* Our model is fairly straightforward to implement in practice. As in many EBM works, sampling during test-time is identical to sampling during training. We will shift implementation details from the appendix to the main paper.
* *Benefit over diffusion models:* On one hand, our work seeks to extend high-quality generation to the EBM family. Learning (unnormalized) densities of high-dimension data is a long-standing open problem and our work is a step in this direction. On the other hand, our work shows that HDEBM can improve performance over diffusion models for highly multi-modal unconditional distributions.
* *Text not searchable:* We sincerely apologize for this issue. It was caused by separating the appendix from the main paper manually. We will ensure that future versions are fully searchable.
* *ENFE definition:* ENFE measures the runtime of HDEBM relative to a common fixed diffusion architecture. We generate samples from the HDEBM model and the reference diffusion model using 1000 timesteps for the diffusion model. Then ENFE is the ratio of HDEBM generation time over diffusion generation time, multiplied by 1000. HDEBM Langevin steps (backward pass included) can take an order of magnitude less compute than a forward pass with a large diffusion UNet.
* *Comparison with ADM:* We agree that a comparison with ADM is essential for contextualizing our results. We were unable to complete a replication of ADM in time for the initial submission, but since that time has implemented an ADM baseline. Table 1 in the global response shows metric results for ADM on unconditional ImageNet at 128x128 resolution. HDEBM generally has stronger performance.
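For concreteness, the ENFE definition given above amounts to the following small computation (all numbers purely hypothetical):

```python
def enfe(method_seconds, diffusion_seconds_1000_steps):
    """Effective NFE: sampling runtime relative to a reference diffusion
    model run with 1000 timesteps, scaled by 1000."""
    return 1000.0 * method_seconds / diffusion_seconds_1000_steps

# e.g. if sampling takes 6 s and the 1000-step reference diffusion takes
# 120 s, the method costs the equivalent of 50 diffusion forward passes.
cost = enfe(6.0, 120.0)
```

By construction, the reference diffusion model itself scores an ENFE of 1000.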
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
- **initial generator G1 does not predict noisy images directly, but learns a distribution such that samples from G1 plus additive noise match noisy data** \
This appears to be a reparameterization of a generator targeting noisy data, i.e., the effective generator is G1 + additive noise.
- **On the other hand, our work shows that HDEBM can improve performance over diffusion models for highly multi-modal unconditional distributions.** \
I am not sure where this work demonstrates this.
- **EBMs are much smaller than the diffusion networks and an EBM Langevin step with a backward pass can take an order of magnitude less compute than a forward pass with a diffusion UNet.** \
This is a good point that should be emphasized, though the reason behind it is unclear to me.
- **EBM model family and related work [3]:** \
Only one network is learned for the various energy models at each time t. Given an energy is learnt, then it is an energy-based model, in my opinion. The argument based on other works' arbitrary distinctions is not very strong, in my opinion. A stronger argument for your method (and not [3]) would perhaps be in training time or performance. Do these energy-based diffusions not work as well on energy-specific tasks as more standard energy-based models?
Overall it is still not clear to me what the benefit of this method is. If the objective is to learn an energy model for density estimation, then perhaps other tasks specific to energy models, such as density estimation tasks/experiments, should be shown rather than generative modelling and FID, where this approach appears more complicated (subjective / my opinion) and worse-performing than diffusions.
I do now believe there could be some benefit and interest in learning a high performance energy based model as opposed to diffusion, but I think more work is needed in this paper to show this. I have increased my scores on soundness, but overall I believe more work is needed to justify a higher overall score.
---
Reply to Comment 1.1.1:
Title: Thanks for your review and continued discussion
Comment: Thanks again for your in-depth review and continued engagement in evaluating our work. Here are our responses to your further feedback.
* *Reparameterization of the generator*: We agree that one interpretation of our method is to absorb the noise into $G_1$. In practice, we found this to be a crucial design choice compared to the TDPM approach where $G_1$ learns noisy data directly (without absorbing true Gaussian noise) and believe there is value in sharing this approach. Although ES-DDPM used a similar approach, all networks in that work are pretrained, while we explore ways to train auxiliary models that are fine-tuned to work in a coordinated way with a specific truncated diffusion.
* *Improved performance of HDEBM over diffusion models*: Please see our global response, where we replicated ADM at 128x128 resolution on unconditional ImageNet and find that the smaller Stage 2 HDEBM and the larger Stage 1 and Stage 2 HDEBM outperform ADM in several metrics. Although a definitive comparison of generative models is very difficult, we believe that these results give very strong evidence that HDEBM significantly closes the low-resolution performance gap between EBM and diffusion models, and that HDEBM can outperform certain highly optimized implementations of diffusion models. In general, we believe that both HDEBM and ADM results can be improved by better architecture, better hyperparameters, more compute, etc. We believe our high-resolution results are valuable explorations in an area that has received much less attention than typical low-resolution benchmarks, especially in the EBM literature. To our knowledge, the larger Stage 2 HDEBM FID score of 17.03 on unconditional ImageNet 128x128 is SOTA in the literature. In future revisions, we will emphasize that definitive comparisons are extremely difficult and that our intention is not necessarily to definitively show improvement over diffusion models, but to greatly improve EBM learning to a point where it can become quite competitive with diffusion models at high resolutions.
* *Smaller EBM size*: The EBM has a classifier-type structure similar to the encoder part of a UNet, without the expensive decoder and channel concatenation used in a typical UNet. Furthermore, we do not employ attention in the EBM for computational savings during MCMC sampling and instead rely on attention layers in the generator and truncated diffusion.
* *Related work [3]*: While it is true that the model in [3] is a single network, this network parameterizes a family of energy functions $p_t (x; \theta)$ for different timesteps $t$, representing different distributions of noisy data. Even though these models are contained in a single network, we still feel that it is proper to describe them as a family of distinct EBM models. The models $p_t (x; \theta)$ for very small $t$ are indeed directly analogous to a "standard" single EBM model. However, since the learning for small $t$ uses Fisher Divergence with samples generated from small perturbations of the finite dataset, the energy surface for such models is learned only in a very small region around data samples. Although it is possible that information from larger $t$ could help develop the surface for smaller $t$, to our knowledge there is no theoretical justification to support this possibility. The use of negative MCMC samples in EBM learning allows HDEBM and related methods to develop an energy surface throughout the state space. Again, such a distinction is somewhat subjective, and we believe the "classification" of [3] will eventually be determined by common practice in the generative modeling community. Since [3] does not scale beyond CIFAR-10 and, by taking a second derivative of the score network during training, incurs significant computational cost beyond standard diffusion models (which complicates scaling), we do not feel that the classification of [3] as either an EBM or a diffusion model significantly impacts the central claims or contributions of our work.
We view learning a proper (unnormalized) density model as the long-term potential of EBM and we agree that HDEBM would be strengthened by more applications in this direction. Nonetheless, learning realistic synthesis is a necessary condition for learning an accurate density, and our work makes significant progress in this direction compared to other models in the EBM family. We believe our work is a valuable contribution with strong empirical results which would be of interest to the EBM and generative modeling community. | null | null | null | null | null | null
Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation | Accept (poster) | Summary: This paper presents a novel unsupervised domain adaptation method called invariant CONsistency learning (ICON). ICON is very simple; assuming that labeled source samples and clustered target samples are available, ICON uses BCE losses to make the inner product of the features (softmax-normalized) of any two samples in a mini-batch in each domain close to 1 if they are of the same class/cluster, and 0 otherwise. Evaluation experiments on 10 different benchmark datasets (Office-Home, VisDA-2017, WILDS 2.0) show that ICON achieves superior performance to several representative methods on most of them.
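The consistency loss described in the summary (BCE on pairwise inner products of softmax-normalized features) can be sketched in a few lines. This is an illustrative toy only; the function name and example tensors are ours, not the paper's:

```python
import numpy as np

def icon_consistency_loss(feats, labels, eps=1e-7):
    """Sketch of the summarized loss: softmax-normalize each feature,
    take pairwise inner products, and push them towards 1 for
    same-class/cluster pairs and 0 otherwise via binary cross-entropy."""
    # Softmax normalization per sample.
    e = np.exp(feats - feats.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    sim = p @ p.T                                   # pairwise inner products in (0, 1]
    target = (labels[:, None] == labels[None, :]).astype(float)
    sim = np.clip(sim, eps, 1.0 - eps)
    bce = -(target * np.log(sim) + (1 - target) * np.log(1 - sim))
    return bce.mean()

# In the source domain, `labels` are ground-truth classes; in the target
# domain, they are cluster assignments.
feats = np.array([[5.0, 0.0], [4.0, 0.5], [0.0, 5.0]])
labels = np.array([0, 0, 1])
loss = icon_consistency_loss(feats, labels)
```

The loss is small when the grouping matches the feature geometry and large otherwise, which is what makes the same objective usable with both source labels and target clusters.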
Strengths: The proposed method, ICON, is surprisingly simple and easy to use.
A theoretical property of ICON is discussed; ICON gives the optimal classifier under certain assumptions. (this is a fairly high-level characteristic, and the realism of the assumptions is somewhat questionable, though.)
Various experiments are conducted. Despite its simplicity, ICON achieves sufficiently high accuracy on many datasets.
The paper is generally well-written and easy to follow.
Weaknesses: 1. The performance of ICON on Office-Home and VisDA-2017 is inferior to that of SoTA. For example, CDTrans [Xu+, ICLR2022] achieves 88.4% on VisDA-2017 and 80.5% on OfficeHome, both higher than ICON.
2. Since the assumptions underlying the theorem appear to be quite strong, it is questionable to what extent they are valid in practice. (this is discussed more or less in the limitation part in the supplementary material, though.)
3. Intuitively, the principle of ICON (i.e., bringing features within the same class/cluster closer together) seems highly similar to that of contrastive learning, which is also a major approach to unsupervised domain adaptation (e.g. [Shen+, ICML2022 ], [Wang+, TMM2023]). Discussing the differences will highlight the property and uniqueness of ICON.
4. While this may be outside the scope of this paper, it would be interesting to discuss the possibility of extending to more advanced domain adaptation problems, such as universal domain adaptation and source-free domain adaptation. ICON (perhaps implicitly) assumes that the number of classes in the source and target is the same, and the impact of the number of clusters on accuracy is significant (see Table 2), so the performance of ICON on universal domain adaptation, where the number of classes cannot be assumed to be the same, may not be as promising. Application to source-free domain adaptation is also non-trivial.
[Xu+, ICLR2022] CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
[Shen+, ICML2022] Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
[Wang+, TMM2023] Cross-Domain Contrastive Learning for Unsupervised Domain Adaptation
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no specific questions that I would like the authors to answer, but it would be great if I could read some discussions on the weaknesses I mentioned above, especially on 1. the comparisons with SoTA.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I found a discussion of that limitation in the supplemental material, and it is very reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the in-depth review. We address all weaknesses below.
**W1 - SoTA > ICON?** ICON is SoTA on the ResNet-50 backbone. UDA performance is sensitive to backbone choice. We choose the most classic and widely used ResNet-50 to demonstrate the superiority of ICON. In contrast, CDTrans uses the much larger DeiT-base (more than tripling the number of parameters of ResNet-50). Hence the results are not comparable. We also highlight that ICON is agnostic to the backbone choice, and we will try transformer-based backbones in future work.
**W2 - Strong assumption.** The assumptions are actually fundamental for UDA.
- Assumption 1 corresponds to the classic clustering assumption (or low-density assumption), a necessary assumption for learning with unlabeled data [3]. In fact, UDA methods (and semi-supervised learning in general) commonly leverage feature pre-training and data augmentations to help fulfill the assumptions.
- Assumption 2 is necessary to rule out corner cases, *e.g.*, in Figure 4b, without additional prior, there is no way for any UDA method to tell if each of the blue clusters (unlabeled) should be pseudo-labeled as "0" or "1". In practice, these corner cases can be avoided by collecting diverse unlabeled data (lines 213-215), which is generally abundant, *e.g.*, collecting unlabeled street images from a camera-equipped vehicle cruise.
**W3 - Differences with methods based on contrastive learning.** Yes, our method can be viewed as contrastive learning. The differences with previous methods lie in ***what to contrast***. For example, [Shen+, ICML2022] contrasts augmented samples, *i.e.*, a sample under different augmented views shares similar features, and different samples have dissimilar features. [Wang+, TMM2023] contrasts ***cross-domain*** sample pairs (one from source domain $S$ and the other from target domain $T$), *i.e.*, pairs from the same class share similar features (and vice versa). Unfortunately, they still generate $T$ pseudo-labels based on $S$ supervision like self-training methods, and hence are prone to spurious correlations (lines 52-57). Our ICON contrasts ***in-domain*** sample pairs (both samples from $S$ or $T$), *i.e.*, pairs from the same class in $S$ or cluster in $T$ share similar features (and vice versa). In this way, our cluster labels in $T$ only capture the inherent distribution of $T$, which helps remove spurious correlations (lines 66-80).
**W4 - Extension to more advanced DA problems**. Interesting question.
1. In Universal DA, there exist classes that appear only in $S$ or only in $T$. Still, all classes in $S$ and $T$ are typically under a common task, *e.g.*, [1] uses animal classification in its motivating example. Hence this setting meets the condition of ICON (lines 27-32): any sample is generated from a causal feature determining the class identity (*e.g.*, animal appearance) and an environmental feature (*e.g.*, background). From a technical perspective, the extension is not difficult: instead of setting the cluster number in $T$ as the class number in $S$, one can estimate the cluster number (*e.g.*, with semi-supervised k-means in [2]), or use density-based clustering (*e.g.*, DBSCAN) that does not require a cluster number.
2. Source-Free DA (SFDA) loses access to the data in $S$, *i.e.*, the goal is to adapt a model trained in $S$ with unlabeled data in $T$. Part of ICON can be extended to this setting: In Eq. 2, one can use any loss function in SFDA as $\mathcal{L}_{st}$ and add our consistency loss $\mathrm{BCE}(T,f)$ to it. However, the invariance constraint is no longer applicable without data in $S$.
We will try these settings for future work.
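The "what to contrast" distinction drawn in the W3 response can be illustrated with positive-pair masks over a toy mini-batch (all arrays hypothetical):

```python
import numpy as np

# Toy mini-batch: three source samples and three target samples.
domain = np.array([0, 0, 0, 1, 1, 1])   # 0 = source S, 1 = target T
group = np.array([0, 0, 1, 0, 0, 1])    # class label in S; cluster label in T

same_group = group[:, None] == group[None, :]
same_domain = domain[:, None] == domain[None, :]

# Cross-domain contrast (e.g. [Wang+, TMM2023]): positives pair S with T,
# which requires pseudo-labeling T with S supervision.
cross_domain_pos = same_group & ~same_domain
# In-domain contrast (ICON): positive pairs stay inside S or inside T, so
# target supervision comes only from the inherent cluster structure of T.
in_domain_pos = same_group & same_domain
```

The masks would then select which entries of a pairwise similarity matrix are pushed towards 1 (positives) versus 0.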
[1] Kaichao You, et al. Universal domain adaptation. CVPR 2019.
[2] Kai Han, et al. Learning to discover novel visual categories via deep transfer clustering. ICCV 2019.
[3] Van Engelen JE, Hoos HH. A survey on semi-supervised learning. Machine learning. 2020 Feb.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their replies. Regarding #1, I agree with the authors' comments but at the same time would be interested to see if ICON shows a similar advantage with the transformer backbone, as I am not sure if it does.
---
Reply to Comment 1.1.1:
Comment: Regarding robustness of ICON against backbone choice, we have additional evidence. While we use ResNet-50 on Office-Home and VisDA-2017 following their standard practice, we have a variety of backbone choices on WILDS 2.0 benchmark (following the standard practice of the leaderboard), which is discussed in Appendix Section E.
- We used the following backbones pre-trained on ImageNet: ResNet-50 [11] on IWILDCAM, DenseNet-121 [12] on FMOW and Faster-RCNN [9] on GLOBALWHEAT.
- We used the *transformer-based* DistilBERT with pre-trained weights from the Transformers library on CIVILCOMMENTS and AMAZON.
- On CAMELYON17, we used DenseNet-121 [12] pre-trained by the self-supervised SwAV [1].
- On POVERTYMAP and OGBMOLPCBA with no pre-training available, we used multi-spectral ResNet-18 [11] and graph isomorphism network [32] trained with the labelled samples in the source domain S, respectively.
In Table 1, we achieve SOTA on all datasets with different backbones, which is a strong proof that ICON's effectiveness is agnostic to backbone. | Summary: This paper proposes ICON (Invariant CONsistency learning), a method to utilize the distribution of unlabeled target data in the UDA task.
And it obtains stable performance improvements over 8 UDA tasks.
Strengths: 1. The idea of the article is simple, but it makes sense that the distribution of unlabeled target data does have an impact on UDA tasks.
2. In terms of learning the distribution of unlabeled data, this paper has some novelty in using pseudo-labels from low-dimensional clustering.
3. This article provides sufficient experiments, and the effectiveness of ICON has been verified under various UDA settings
Weaknesses: Essentially, I think that the core of the ICON method is a pseudo-label-based contrastive learning technique, where the pseudo-labels come from low-dimensional clustering. The authors provide some experiments on the rationality of ICON, like Fig. 5, 6, 8, but none of this fundamentally or theoretically explains why low-dimensional pseudo-labels are more accurate.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you provide more analysis about "why low dimensional pseudo labels are more accurate?"
2. If the low-dimensional pseudo labeling works better, why do you still use high dimensions in the final validation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The author has provided limitations in their appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive feedback. We address all questions below.
**Q1 - Why low-dimensional pseudo-labels are more accurate?**
We first clarify that our cluster labels are not conventional pseudo-labels, because they are not aligned with the classes in the source domain (details in Reviewer Q3G5-Q4).
We don't intend to claim that our low-dimensional cluster labels are more accurate (than conventional pseudo-labels). Instead, cluster labels in the target domain provide *additional* supervision *on top of* pseudo-labels to address their limitation, *e.g.*, we empirically show that cluster labels correct pseudo-labeling error in Figure 6b. We also have illustrative and theoretical explanations:
- In lines 52-57, we illustrate the limitation of conventional pseudo-labels due to spurious correlation in the source domain.
- In lines 58-65, we explain how cluster labels provide *additional* supervision, and we show how they remove the spurious correlation in lines 66-80.
- In Section 4, we present a theoretical analysis.
- We also highlight that "low-dimensional'' is cherry-on-top instead of bread-and-butter for ICON. Please refer to Reviewer cknk-W1 for results and explanation.
**Q2 - Why use high dimensions in final validation?**
This is for practicality and fair comparison: the best-performing dimensionality reduction methods (*e.g.*, t-SNE or UMAP) need to process all data at once. If we trained a classifier with low-dimensional features, the final validation would have to be performed in an offline manner, *i.e.*, with access to all test data to compute low-dimensional features. This is extremely restrictive and impractical, as such a model cannot be deployed to predict online/streaming data. It would also be unfair to compare its performance against previous methods, which do not have this restriction. Hence we only use dimensionality reduction for clustering the unlabeled training data. | Summary: This paper deals with the unsupervised domain adaptation problem, focusing on how to exploit the inherent distribution of the target domain to improve adaptation performance. In detail, it trains two classifiers: one on the source domain and the other on the target domain. Each classifier is trained with a BCE loss on pairs of source samples or target samples. For the source domain, a pair of samples with the same ground-truth label forms a positive pair; for the target domain, a pair with the same cluster label (obtained through clustering) forms a positive pair. Finally, the intersection of the two classifiers is exploited to form the optimal classifier. Additional cross-entropy and self-training losses are added to the objective to supervise the adaptation process. Experimental results on various benchmarks validate the effectiveness of the proposed method.
Strengths: - The idea is interesting and reasonable to an extent.
- The paper is generally easy to understand and follow.
- The method has been tested on various UDA benchmarks.
Weaknesses: - In one of the claimed contributions, the authors state that their method ICON is orthogonal to previous UDA methods. But I don't see any evidence showing that previous UDA methods combined with ICON should achieve a better adaptation performance.
- In my opinion, it is not suitable to claim outperforming all the conventional methods on WILDS 2.0 UDA benchmark without fully evaluating existing previous works.
- It is interesting to use REx to realize the goal of the second line of Eq. (2). I just wonder what if we simply share the $f$
between two domains. Will it achieve similar performance? I would like to see such ablations.
- For the experiments, the authors should compare with more recent SOTA UDA methods. The current status of comparisons listed in Table 1 is unsatisfactory and may not justify the performance superiority of proposed method.
- As shown in Fig. 5, the model seems to be sensitive to the hyper-parameter selection. As we know, it is hard to perform hyper-parameter selection in UDA as we don't have labels of target domain data. I feel concerned about whether this work can be really useful in practice.
- The details about how to utilize the self-training loss are not clear. The ablations for self-training terms are missing. Without providing such ablations, we cannot know if the improvement really comes from what the authors claimed.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the in-depth review. We will address all weaknesses.
**W1 - Orthogonality of ICON.**
Sorry for the misleading term. We intend to mean that our ICON loss can be plugged into different self-training baselines ($\mathcal{L}_{st}$ in Eq. 2).
We discussed the choice of the self-training baselines in Appendix Section E under self-training details. For example, on Office-Home and VisDA-2017, we use FixMatch, and greatly improve the accuracy from 69.1% and 76.6% (FixMatch) to 75.4% and 87.4% (FixMatch + ICON) on the two datasets, respectively. We will clarify this in the revision.
**W2 - Claim on WILDS 2.0.**
Thanks for the suggestion. We will revise accordingly.
**W3 - Ablation on sharing $f$.**
We have tried sharing $f$ without REx, and it’s worse in Table 3 (ICON-INV).
**W4 - Table 1 may not justify ICON.** We respectfully disagree.
- For Office-Home and VisDA-2017, we've already included the best-performing UDA methods on ResNet-50 in Table 1. We supplement it with a more comprehensive list and results on ResNet-101 in Appendix Table 3 and 4. Notably, ICON even outperforms the previous SOTA on the much deeper ResNet-101.
- For the WILDS 2.0 benchmark, we included all previous results from its official leaderboard. We also tried implementing recent SOTAs on WILDS 2.0. However, without official implementations, it is a nontrivial task to fairly reproduce their claimed results: in our reproductions they are outperformed by ERM, or are not even applicable (*e.g.*, to text/graph data). Hence we feel it may be unfair to those works to include results reproduced by us; *e.g.*, the results of the recent CST on applicable datasets are shown below. This is in line with the WILDS 2.0 organizers' observation that ERM as a naive baseline usually has the best performance, which is exactly the point of WILDS 2.0 as a diagnosis of the ever-overlooked issues in UDA. Hence we believe that dominating this challenging leaderboard strongly justifies the superiority of our ICON.
| Method | iWildCAM | Camelyon | FMoW |
| ------ | -------- | -------- | ---- |
| ERM | 32.2 | 82.0 | 34.8 |
| CST | 31.3 | 74.5 | 32.6 |
**W5 - Hyper-parameter ablations.**
Our work is as practical as other UDA methods: we apologize that our presentation of the invariant weight $\beta$ may have misled you. It is standard in the IRM community to choose a small $\beta$ (*e.g.*, 0.1). Then, tuning $\alpha$ follows the same process as other self-training UDA methods.
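To make the role of the small invariant weight $\beta$ concrete, here is a minimal numpy sketch of a V-REx-style objective (our own illustration with hypothetical names; the paper's actual implementation may differ), where $\beta$ weights the variance of per-domain risks:

```python
import numpy as np

def rex_objective(domain_risks, beta=0.1):
    """Sketch of a V-REx-style objective: mean risk across domains plus
    beta times the variance of per-domain risks. A small beta (e.g., 0.1)
    lightly penalizes risks that differ across domains, encouraging an
    invariant predictor."""
    risks = np.asarray(domain_risks, dtype=float)
    return risks.mean() + beta * risks.var()

# Identical per-domain risks incur no invariance penalty.
assert rex_objective([0.5, 0.5]) == 0.5
# Unequal risks are penalized: mean 0.5 plus 0.1 * variance 0.04.
print(rex_objective([0.3, 0.7]))
```

With this formulation, tuning $\beta$ only trades off how strongly cross-domain risk discrepancies are suppressed, which is why a standard small value works across tasks.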
**W6 - Missing details and ablation for self-training loss.**
Sorry for the confusion. We include self-training loss details in Appendix Section B.3 and E. The ablation for self-training loss weight $\alpha$ is in Figure 5. Please also refer to Reviewer cknk-W1 for a discussion on ICON's improvement over self-training. | Summary: This paper proposes a new UDA method which strives to produce a consistent classifier for labels in source domain and clusters in target domain. Specifically, this paper introduces an auxiliary task for distinguishing whether the input image pair share the same class/cluster or not. This binary classification task would encourage the features from the same class to be similar and features from different classes to be dissimilar. The groundtruth for this binary classification task on target pair is determined by performing clustering on target features. By combining this binary classification task with supervised learning on source domain and self training on target domain, this method obtains state-of-the-art performance on multiple challenging UDA benchmark.
Strengths: 1. This paper utilizes clustering on target features to determine whether the target image pair share the same class. Compared to previous self-training methods, this design better reduces the noise in the pseudo labels and alleviate overfitting on incorrect pseudo labels.
2. The proposed method obtains state-of-the-art performance on many challenging UDA tasks. Sufficient ablation studies and analyses clearly depict the advantage of the proposed method, which would provide great insight for further works.
Weaknesses: Basically, I believe the proposed method meets its motivation well, and sufficient analysis has proven its effectiveness. The introduction could be further polished to emphasize the core idea better. For Figure 1, the lack of t-SNE comparisons with baselines might make it difficult for the reader to understand the advantage of ICON.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In line 41 there are double commas.
2. The BCE loss is only applied on source-source pair or target-target pair. What if applying it on source-target pair? Does it bring better performance?
3. How are the pseudo labels for $\mathcal{L}_{st}$ obtained? A short introduction should be included in the implementation details.
4. Now that the main idea is to utilize the cluster label to group target domain features, is it necessary to add a BCE task for UDA classification? Is it possible to utilize the cluster label as the pseudo label for self-training?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This paper adequately discusses the limitations of the proposed method and provides an analysis of the possible reasons. No potential negative societal impact is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive feedback. We will fix the typos in Q1 and address all concerns below.
**W - Lack of baselines t-SNE in Figure 1.** Sorry for the confusion. Actually, the goal of Figure 1 is not to compare our ICON with baselines, but to depict the condition where a model generalizes. In Figure 2, we further discuss how ICON achieves this condition and hence improves existing baselines.
**Q2 - BCE between source-target pair?** Thanks for the suggestion. We tried this and observed comparable performance (75.3% on Office-Home, 87.0% on VisDA-2017). Specifically, for a source-target pair, we compared the ground-truth label (source sample) and pseudo-label (target sample) to get the binary label for the BCE loss (*i.e.*, $b$ in Eq. 1). This is because we cannot directly compare the ground-truth label and cluster label (see Q4). We postulate that the self-training loss ($\mathcal{L}_{st}$ in Eq. 2) achieves a similar effect to the source-target BCE loss, *i.e.*, bringing each target sample closer to its pseudo-label class center.
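To make the pair-label construction concrete, here is a minimal numpy sketch (function and variable names are our own; the exact form of Eq. 1 may differ) of the binary label $b$ and per-pair BCE loss:

```python
import numpy as np

def pair_bce(sim, label_i, label_j, eps=1e-7):
    """Binary cross-entropy on one pair: b = 1 if the two samples share a
    label (ground-truth, pseudo-, or cluster label), else 0; `sim` is the
    model's predicted probability that the pair is positive."""
    b = float(label_i == label_j)
    sim = np.clip(sim, eps, 1 - eps)
    return -(b * np.log(sim) + (1 - b) * np.log(1 - sim))

# A confident correct prediction on a positive pair gives a smaller loss.
assert pair_bce(0.99, "bed", "bed") < pair_bce(0.6, "bed", "bed")
# For a source-target pair, label_j would be the target's pseudo-label.
print(pair_bce(0.2, "bed", "clock"))  # negative pair, low predicted similarity
```

Note that for source-source pairs the labels are ground truth, for target-target pairs they are cluster ids, and (as described above) for source-target pairs the target side uses its pseudo-label.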
**Q3 - How to obtain $\mathcal{L}_{st}$.** Sorry for the confusion. We discussed this in Appendix Section B.3 and E (under self-training details). We mainly used FixMatch and Noisy Student loss as $\mathcal{L}_{st}$, where pseudo-labels are generated on weakly-augmented target domain samples, and the model is trained to predict the pseudo-labels given strongly-augmented samples. We will give a short introduction in the main text in revision.
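The FixMatch-style loss described here can be sketched as follows (a toy numpy illustration with our own names and a hypothetical confidence threshold, not the paper's code):

```python
import numpy as np

def fixmatch_loss(probs_weak, probs_strong, threshold=0.95, eps=1e-7):
    """Sketch of a FixMatch-style self-training loss: pseudo-labels come
    from predictions on weakly-augmented samples; the loss is the
    cross-entropy of strongly-augmented predictions against those
    pseudo-labels, kept only where the weak prediction is confident."""
    probs_weak = np.asarray(probs_weak)
    probs_strong = np.asarray(probs_strong)
    conf = probs_weak.max(axis=1)
    pseudo = probs_weak.argmax(axis=1)
    mask = conf >= threshold  # keep only confident pseudo-labels
    ce = -np.log(probs_strong[np.arange(len(pseudo)), pseudo] + eps)
    return float((mask * ce).mean())

# The second sample is below the confidence threshold, so it contributes 0.
weak = [[0.98, 0.02], [0.6, 0.4]]
strong = [[0.9, 0.1], [0.5, 0.5]]
print(fixmatch_loss(weak, strong))
```

The confidence masking is what distinguishes FixMatch from plain self-training; Noisy Student follows a similar weak-to-strong scheme without the hard threshold.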
**Q4 - Cluster label as pseudo label?**
Unfortunately, this is not possible, because the cluster labels (*e.g.*, cluster 1 or 2) are not aligned with the class labels (*e.g.*, "bed" or "clock") as discussed in lines 155-156, and explicit alignment (*e.g.*, assigning cluster 1 to "bed") reduces our method to standard self-training. Hence it is necessary to use the BCE loss to train the classifier, such that its decision boundary simultaneously separates the classes in the source domain and the clusters in the target domain (Figure 2d).
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed all my concerns. After viewing the rebuttal and other reviewers' comments, I believe this paper is worth acceptance at top-level conferences like NeurIPS. Thus, I will keep my rating at accept. | null | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose an unsupervised domain adaptation method, ICON. The algorithm is similar to self-training on the unlabeled target data, but at the start of each epoch, the unlabeled data are first projected from feature space to a reduced-dimension space and clustered. An auxiliary loss enforces consistent pseudo-labels within clusters. The method attained state-of-the-art on the WILDS unlabeled benchmark [1].
[1] Sagawa, S., Koh, P. W., Lee, T., Gao, I., Xie, S. M., Shen, K., ... & Liang, P. (2021). Extending the WILDS benchmark for unsupervised adaptation. *arXiv preprint arXiv:2112.05090*.
Strengths: - ICON’s strong empirical performance is striking, especially on WILDS; the authors outperform similar self-training-style methods, e.g., vanilla self-training, FixMatch, and Noisy Student. Notably, ICON attains this performance without using aggressive data augmentations.
- The paper evaluates on ten datasets across 3 modalities, and evaluations are conducted over multiple seeds.
- The presentation is good, and the experiments are notably well-documented.
Weaknesses: - Given its similarity to other self-training methods, it seems important to analyze, whether through experimental ablations or theory, why ICON outperforms vanilla self-training so strikingly. One unique aspect of ICON is that the unlabeled consistency loss uses clusters computed in a reduced dimension space; this seems important in ablations (Table 2). The authors motivate this by stating that dimensionality reduction acts to suppress environmental features and highlight the causal feature (L138), but this statement seems unsupported to me. Could the authors clarify theoretically why this is?
- The theory in Section 4 makes two strong assumptions, to the effect of (1) the classes in T are cleanly separated by clusters, and (2) if the model separates classes correctly in S and clusters in T, then the classes in T are predicted correctly, as prediction uses the invariant feature.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The authors present ICON as orthogonal to existing UDA approaches, e.g. consistency regularization using views from data augmentations. Could the authors share how ICON performs when combined with strong data augmentations, e.g. RandAugment?
- On page 4, the authors state that any clustering algorithm can be used, so long as the number of clusters should be equal to the number of classes. This is a useful ablation; in general, understanding which representation space and how to cluster seems important. Did the authors have k-means experiments to support this?
- I could not find the UMAP hyperparameters in the appendix as the main paper suggested; in particular, what was the output dimension used?
- The results of iWildCam2020-WILDS and CivilComments-WILDS are a bit unaligned with the theory, as in these cases, the unlabeled data is *not* drawn from the test distribution. It would be great if the main paper could comment briefly on this discrepancy.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors note that the assumptions made in Section 4 are restrictive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the in-depth comments and suggestions. We will address all concerns below.
**W1 - Why ICON outperforms self-training.** We clarify that the key to ICON's success is invariant consistency instead of dimensionality reduction with UMAP.
- For empirical evidence, we perform additional experiments and reformat the results in Table 2 to form Table r1 below. Note that we used FixMatch as a self-training baseline on Office-Home and VisDA-2017. It shows that consistency loss brings the most improvement, followed by invariance constraint, and lastly UMAP.
- For theoretical analysis, we analyze the failure of self-training in Figure 2a (lines 52-57) and discuss how clusters in the target domain provide additional supervision to prevent the failure (lines 58-65). Then we propose consistency loss (lines 66-71) and invariance constraint (lines 72-80) to incorporate this supervision. We clarify that UMAP is simply a pre-processing technique to improve clustering, which is shown to benefit classification tasks (*i.e.*, highlights causal feature) supported by rigorous theoretical results [1].
Table r1: Ablations on each ICON component. Supplement to Table 2.
| Method | Office-Home | VisDA-2017 |
| ------------------------------------------------- | ----------- | ---------- |
| FixMatch | 69.1 | 76.6 |
| FixMatch + Consistency | 74.1 | 82.0 |
| FixMatch + Consistency + Invariance | 75.2 | 86.5 |
| FixMatch + Consistency + Invariance + UMAP (ICON) | 75.4 | 87.4 |
**W2 - Strong assumptions.** The assumptions are actually fundamental for UDA.
- Assumption 1 corresponds to the classic clustering assumption (or low-density assumption), a necessary assumption for learning with unlabeled data [2]. In fact, UDA methods (and semi-supervised learning in general) commonly leverage feature pre-training and data augmentations to help fulfill the assumptions.
- Assumption 2 is necessary to rule out corner cases, *e.g.*, in Figure 4b, without additional prior, there is no way for *any* UDA method to tell whether each of the blue clusters (unlabeled) should be pseudo-labeled as "0" or "1". In practice, these corner cases can be avoided by collecting diverse unlabeled data (lines 213-215), which is generally abundant, *e.g.*, collecting unlabeled street images from a cruising camera-equipped vehicle.
**Q1 - ICON + strong data augmentations.** Actually, self-training baselines already leverage strong data augmentations. For example, FixMatch in Table r1 enforces the predictions on strongly augmented samples (using RandAugment) to be similar to those on weakly augmented samples.
We also highlight that ICON's success does not rely on strong augmentations, *e.g.*, ICON still outperforms on text modality (CivilComments and Amazon), where strong augmentations are not available.
**Q2 - k-means experiment.** We tried k-means and got 85.6% on VisDA-2017. We postulate that rank-statistics is better because its online clustering provides up-to-date cluster assignments. In the future, we will try other more recent online clustering methods, *e.g.*, an optimal transport-based method [3].
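For concreteness, the offline k-means variant we tried can be sketched as follows (a toy numpy implementation with our own names; the actual method uses rank-statistics online clustering, and UMAP would supply the low-dimensional features):

```python
import numpy as np

def kmeans_cluster_labels(feats, k, iters=50, init_idx=None, seed=0):
    """Toy k-means assigning cluster labels to (low-dimensional) features.
    k should equal the number of classes; the returned cluster ids are
    arbitrary -- they are NOT aligned with class labels, and only
    same-cluster / different-cluster pair relations are used."""
    rng = np.random.default_rng(seed)
    if init_idx is None:
        init_idx = rng.choice(len(feats), size=k, replace=False)
    centers = feats[np.asarray(init_idx)].astype(float).copy()
    assign = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # Squared Euclidean distance of every point to every center.
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = feats[assign == j].mean(axis=0)
    return assign

# Two well-separated blobs: with one initial center per blob,
# each blob ends up in its own cluster.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = kmeans_cluster_labels(feats, k=2, init_idx=[0, 20])
assert len(set(labels[:20].tolist())) == 1
assert len(set(labels[20:].tolist())) == 1
```

The key difference from online methods is that this recomputes assignments from a fixed snapshot of features, which is why up-to-date online cluster assignments can perform better during training.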
**Q3 - UMAP hyperparameters.** Sorry for omitting it. We used 50 as the output dimension and used the default values in the official UMAP repo for all other hyperparameters.
**Q4 - Datasets unaligned with theory.** Sorry for the confusion. The two datasets are indeed aligned with the theory. Note that ICON works by learning the causal feature $\mathbf{c}$ and discarding the environmental feature $\mathbf{e}$ (lines 66-80). Hence the theory holds as long as the definition of $\mathbf{c}$ and $\mathbf{e}$ is consistent across the training and test data, *e.g.*, when the task is digit classification as in Figure 2, $\mathbf{c}$ and $\mathbf{e}$ correspond to digit shape and digit color, respectively. In iWildCam-WILDS and CivilComments-WILDS, while the unlabeled data is not drawn from the test distribution, the task is the same across labeled, unlabeled, and test data (*e.g.*, animal classification), *i.e.*, the definition of $\mathbf{c}$ and $\mathbf{e}$ remains consistent. We will revise the main paper to clarify this.
[1] Leland McInnes, et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction.
[2] Van Engelen JE, Hoos HH. A survey on semi-supervised learning. Machine learning. 2020 Feb.
[3] Mathilde Caron, et al. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response.
- **W1-W2.** Thanks to the authors for their response, particularly the empirical results showing that UMAP is helpful (rows 3 v. 4 in Table r1). However, I feel that my question has not been adequately addressed. My question was about the claim made that dimensionality reduction "highlights" causal features and "suppresses" environmental features (L138, LL207-208). If I understand correctly, this claim is not studied in the theory, which assumes that UMAP has already been applied to satisfy Assumption 1. I've also looked through the reference [1] that the authors gave; I do not see a justification for the authors' claim in this work. Perhaps the authors could point me to a particular page/line of [1] or Section 4 / Appendix D if I've misunderstood. It's important that this claim is justified, *particularly* since Table r1 and the authors' experiment with k-means suggests that the dimensionality reduction / clustering steps do contribute to the empirical success of ICON.
- **Q1.** I agree with what the authors have responded -- it is indeed impressive that ICON does not require strong data augmentations -- but I'd like to clarify my question. I was asking for empirical evidence validating L82, which states that ICON is orthogonal to data augmentation techniques and can be combined with them (presumably, with additive gains). Do the authors have experiments verifying this?
- **Q2-4.** Thanks for the clarifications; these have answered my questions.
---
Reply to Comment 1.1.1:
Comment: Thanks for clarifying the questions. We will address them below.
**W1-W2.** Yes, as you point out, we do not aim to study/prove this claim in our theory. We use dimensionality reduction as a practical method to approach the conditions in Assumption 1 (*i.e.*, clustering unlabeled samples), which can be justified in the following ways:
1) In Section 5.1-5.2 of [1], UMAP is empirically validated to faithfully capture the local data structure (e.g., high k-NN classification accuracy) as well as the global structure (e.g., more meaningful global relationships between different classes) in a low-dimensional space. Such structural information is however elusive in the original high-dimensional space due to the curse of dimensionality. Hence we say that UMAP highlights the causal feature, *i.e.*, clustering same-class samples (local structure) and pushing away different-class samples (global structure).
2) Dimensionality reduction is already extensively used in classification tasks. For example, feature bottlenecking, as a learnable dimensionality reduction method, is adopted in UDA methods such as CST [29], and CLIP-Adapter for fine-tuning vision language models on downstream tasks. Another example is t-SNE, a standard visualization method in classification tasks to evaluate feature quality (*i.e.*, whether same-class samples are clustered).
**Q1.** We clarify that L82 means that ICON can be combined with different self-training methods. We include the self-training details in Appendix Section E. In Table 1, we improve all self-training methods. For data augmentations, we follow the standard practice in all benchmarks to facilitate fair comparison with previous works.
---
Rebuttal 2:
Title: Feedback to the authors
Comment: Dear R#cknk,
The authors have provided a rebuttal. Can you please confirm whether it addresses the questions you had asked. We have a brief period for interaction between reviewers and authors.
Best,
AC | null | null | null | null | null | null |
Recovering from Out-of-sample States via Inverse Dynamics in Offline Reinforcement Learning | Accept (poster) | Summary: This paper aims to tackle a critical challenge in offline reinforcement learning, which involves recovering the state distribution during testing from out-of-sample states. To address this, the authors propose two methods, OSR and OSR-v, which leverage a learned inverse dynamics model to regularize the policy and underestimate the value function. The authors provide theoretical evidence supporting OSR's ability to align the transited state distribution of the new policy with the offline dataset distribution at out-of-sample states. Extensive experiments demonstrate that these proposed methods achieve state-of-the-art performance on general offline RL benchmarks as well as an out-of-sample MuJoCo benchmark.
Strengths: * Overall, the paper is well-written and easy to follow.
* This paper studies a significant problem in offline reinforcement learning, focusing on state distribution correction. The proposed methods show a link between State Deviation Correction (SDC) and robust offline RL.
* The proposed method achieves state-of-the-art (SOTA) performance on both general offline RL benchmarks and an out-of-sample MuJoCo benchmark.
* The effectiveness of OSR in recovering from specific perturbations is validated through visualizations presented in Section 5.2 and Appendix 4.2.
* The code necessary for reproducing the results is provided.
Weaknesses: * Although this paper has already conducted numerous experiments, I believe that a more comprehensive ablation study would be valuable. Specifically, in Appendix D.1, all the figures demonstrate that the smallest $\beta$ achieves the best performance. I am curious to know the performance of OSR when $\beta=0$, meaning the utilization of only $\mathcal{D}$ for training. Theoretical expectations indicate that OSR-v's performance should align with CQL, while OSR's performance should match that of CQL+BC. If OSR with $\beta=0$ still outperforms CQL and CQL+BC, it suggests that the action noise injected by the reverse model plays a crucial role in achieving a smooth policy output. Furthermore, conducting additional comparisons between CQL/CQL+BC with vanilla action noise would provide further insights.
* I have concerns regarding Eq (5), as it assigns the same scale of Gaussian noise to different dimensions. Perhaps normalizing the states by their mean and standard deviation, similar to prior work [1], would be beneficial.
* I think the proposed method also has a relevance to robust RL, as demonstrated by the Out-of-sample MuJoCo experiments which highlight OSR's capability to withstand testing-time attacks. Consequently, it is important to discuss prior works in robust RL and robust offline RL in the related works section, and to provide comparisons with existing robust offline RL approaches in the Out-of-sample MuJoCo benchmark.
* The literature review omits prior offline RL works such as ROMI [2], which incorporates a reverse model for data augmentation. Hence, the authors are encouraged to discuss and compare with ROMI to enhance the comprehensiveness of the review.
[1] Yang R, Bai C, Ma X, et al. Rorl: Robust offline reinforcement learning via conservative smoothing[J]. Advances in Neural Information Processing Systems, 2022, 35: 23851-23866.
[2] Wang J, Li W, Jiang H, et al. Offline reinforcement learning with reverse model-based imagination. Advances in Neural Information Processing Systems, 2021, 34: 29420-29432.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Can the authors provide a more comprehensive ablation study, which should include scenarios such as OSR with $\beta=0$, a comparison with CQL+BC, CQL+BC with state noise, and CQL+BC with both state and action noise? This will effectively illustrate the advantages of OSR and demonstrate the significance of incorporating state noise and the inverse model.
* Would normalizing the states by their mean and standard deviation be more useful in addressing perturbations across different state dimensions and ensuring a more consistent $\beta$?
* Additionally, it is recommended to include an additional subsection in the related work that focuses on prior works in robust RL and robust offline RL.
* It is suggested to compare prior robust offline RL work, such as RORL, against the out-of-sample MuJoCo benchmark
* Furthermore, it is necessary to engage in further discussion and conduct comparisons with prior offline RL works that utilize a reverse model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitation highlighted by the authors in this work is the challenge of applying it in scenarios where the two basic assumptions, namely the smooth behavior policy transition and Gaussian policy assumption, do not hold.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful comments. We provide clarifications to your questions below, and would appreciate any further questions or comments.
**Q1:... more comprehensive ablation study...**
**Response:** Per your suggestion, we have conducted a more comprehensive ablation study, and the results are shown in the Table below,
| |Halfcheetah-m.-e.|Hopper-m.-e.|Walker2d-m.-e.|
|-|-|-|-|
|CQL|62.4|98.7|111.0|
|CQL+BC|85.7|111.8|104.3|
|CQL+BC(s.)|91.8|111.2|109.2|
|CQL+BC(s.a.)|88.3|111.4|108.9|
|OSR($\beta$=0)|92.3|111.8|110.1|
|OSR|**94.7**|**114.3**|**113.1**|
First, we run our OSR algorithm with the noise level set to zero and compare it with CQL - an OSR baseline without the inverse dynamics model (IDM). The results indicate that our OSR ($\beta$=0) (the 5th row) outperforms CQL (the 1st row) significantly on the 3 tasks. This demonstrates the significance of the inverse dynamics model (IDM) in helping the agent navigate to safer regions. A similar performance improvement can be observed if we replace the IDM with the behavior policy (see the CQL + BC setting, the 2nd row), but its effect is not as significant as that of our OSR method.
Then we introduce the state noise into our base model OSR ($\beta$=0). The results show that this further improves the performance of OSR by 2-3$\%$ (last row). Adding noise also improves the performance of CQL + BC (the 3rd and 4th rows) except on the Hopper task, while the proposed OSR ($\beta$>0) method consistently improves the performance across all three environments.
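The state-noise injection used to build the augmented samples can be sketched as follows. This is a minimal illustration with hypothetical names, assuming zero-mean Gaussian noise of scale $\beta$ added to the logged states (the paper's Eq.5 may differ in detail):

```python
import numpy as np

def augment_with_noise(states, beta, rng=None):
    """Perturb each logged state with zero-mean Gaussian noise of scale beta.

    Pairing each perturbed state with the original next state yields the
    mixed dataset D_tot used for training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = beta * rng.standard_normal(states.shape)
    return states + noise

# Toy example: 4 logged states of dimension 3, noise level beta = 1e-3.
states = np.zeros((4, 3))
perturbed = augment_with_noise(states, beta=1e-3)
print(perturbed.shape)  # (4, 3)
```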
**Q2:... normalizing the states ... useful?...**
**Response:** We added the suggested normalization step to our implementation while keeping a consistent $\beta$ value. The results shown below reveal that normalizing the states is overall beneficial to the performance and is useful for setting the value of $\beta$ across different state dimensions (the state dimensions of the tested environments are 17, 11, and 17, respectively).
| |Halfcheetah-m.-e.|Hopper-m.-e.|Walker2d-m.-e.|
|-|-|-|-|
|OSR|94.7|**114.3**|113.1|
|OSR-norm|**95.8**|113.5|**113.8**|
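The normalization step discussed above can be sketched as a standard z-score over the dataset statistics; this is a minimal illustration with hypothetical names, not our exact implementation:

```python
import numpy as np

def normalize_states(states, eps=1e-6):
    """Z-score states with dataset statistics, so that a single noise scale
    beta works across environments with different state dimensions/scales."""
    mean = states.mean(axis=0)
    std = states.std(axis=0) + eps  # eps guards against zero variance
    return (states - mean) / std, mean, std

# Toy dataset: 3 states of dimension 2 with very different scales per dim.
data = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
normed, mu, sigma = normalize_states(data)
print(normed.mean(axis=0))  # ~[0, 0]
```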
**Q3:... additional subsection ... on prior works....**
**Response:** We will do that in the revised manuscript. Our method can be used to improve the robustness of the agent against unfamiliar states, which is consistent with the goal of prior works in robust RL and robust offline RL, although in different manners.
**Q4:... compare prior robust offline RL work....**
**Response:** Per your suggestion, we have compared our method with RORL on the out-of-sample MuJoCo benchmark, and the results are given below; for comparison, we also give the results obtained without using the out-of-sample states on these benchmarks. From the table, we observe that RORL generalizes well at out-of-sample states with slight noise, but its performance may suffer a lot under large perturbations. For example, while its normalized score on Halfcheetah-OOS-s. is 103.8, its performance drops significantly to 55.6 on the more challenging Halfcheetah-OOS-l. benchmark. By contrast, our method performs much more stably across all the environments. In the attached PDF file, we provide visualizations of the trajectories generated by both methods on some OOS benchmarks.
|| Halfcheetah-OOS-s.|Halfcheetah-OOS-m.|Halfcheetah-OOS-l.|Hopper-OOS-s.|Hopper-OOS-m.|Hopper-OOS-l.|Walker2d-OOS-s.|Walker2d-OOS-m.|Walker2d-OOS-l.|
|-|-|-|-|-|-|-|-|-|-|
|RORL-OOS|103.8|79.8|55.6|111.5|89.5|66.5|117.8|82.2|46.5|
|RORL-w/o OOS|107.8|107.8|107.8|121.2|121.2|121.2|112.7|112.7|112.7|
|OSR-OOS|94.1|92.7|91.7|113.3|113.2|110.1|111.4|109.2|106.1|
|OSR-w/o OOS|94.7|94.7|94.7|114.3|114.3|114.3|113.1|113.1|113.1|
**Q5:... comparisons with prior offline RL works that utilize a reverse model....**
**Response:** Firstly, we would like to emphasize the conceptual difference between the inverse dynamics model (IDM) and the reverse dynamics model (RDM) -- the IDM $I(a|s,s')$ behaves more like a policy that gives an action distribution, while the RDM $R(s|s',a)$ is a generative model that predicts the reverse state distribution. Hence, if the dimension of the state space is much higher than that of the action space, it would not be easy to train an RDM or to generate/imagine a new state, and vice versa for the IDM. In addition, the RDM performs a counterfactual query on the possible cause of an action, which can be problematic with a large action space; by contrast, the IDM directly infers the most likely action that leads to a given consequence.
Per your suggestion, we have compared our method with ROMI [1] - an offline RL method based on the RDM - and the results are given below. They show that although, on average, both methods perform comparably across the environments (average score: 68.9 (OSR) vs. 68.2 (ROMI)), our method performs much better on most 'medium' and 'medium-expert' benchmarks (e.g., Halfcheetah-m.-e., Hopper-m., Hopper-m.-e., Walker2d-m.-e.), indicating the advantage of using the IDM when the underlying dataset covers a portion of high-value areas. However, our method may suffer on very noisy datasets (e.g., Hopper-r.), in which case the RDM works better. This suggests that it could be beneficial to combine the advantages of both models for even more robust offline RL.
| |Halfcheetah-m.|Halfcheetah-m.-r.|Halfcheetah-m.-e.|Halfcheetah-r.|Hopper-m.|Hopper-m.-r.|Hopper-m.-e.|Hopper-r.|Walker2d-m.|Walker2d-m.-r.|Walker2d-m.-e.|Walker2d-r.|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|ROMI|**49.1**|**47.0**|86.8|24.5|72.3|**98.1**|111.4|**30.2**|84.3|**109.7**|109.7|7.5|
|OSR(ours)|48.8|46.8|**94.7**|**35.2**|**83.1**|96.7|**113.1**|10.3|**85.7**|87.9|**114.3**|**13.5**|
[1] Wang J, Li W, Jiang H, et al. Offline reinforcement learning with reverse model-based imagination. Advances in Neural Information Processing Systems, 2021, 34: 29420-29432.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for providing such a detailed response. I particularly appreciate the additional ablation/comparison conducted and the clarification regarding the reverse model used in prior work, as these aspects enhance the persuasiveness of the paper.
Regarding Q4, I find the results to be intriguing. However, I am curious about the specific injected perturbation scales employed for RORL and OSR during training. It would be greatly appreciated if the authors could provide more details on these experiments.
---
Reply to Comment 1.1.1:
Title: Thank you for the comment
Comment: Dear Reviewer cN6c,
Thank you for the comment. We peformed the experiments in the response of **Q4** with the following hyperparameters of RORL in training, and other hyperparameters are same as those introduced in [1], as well.
| |Halfcheetah-medium-expert|Hopper-medium-expert|Walker2d-medium-expert|
|-|-|-|-|
|Scalar $\epsilon_{OOD}$ of OOD Loss|0.00|0.01|0.01|
|Scalar $\epsilon_{Q}$ of Q Smooth Loss|0.001|0.005|0.01|
|Scalar $\epsilon_{P}$ of Policy Smooth Loss|0.001|0.005|0.01|
We trained the proposed OSR using the hyperparameters shown in the table below; with these hyperparameters, you can reproduce results similar to those in the response to **Q4**.
| |Halfcheetah-medium-expert|Hopper-medium-expert|Walker2d-medium-expert|
|-|-|-|-|
|Scalar $\beta$ of noise injection| 1e-3|5e-3|2e-3|
|Weight $\lambda$ of OSR term|0.5|0.5|0.5|
|Weight $\alpha$ of CQL term|10.0|5.0|5.0|
[1] Yang R, Bai C, Ma X, et al. RORL: Robust offline reinforcement learning via conservative smoothing. Advances in Neural Information Processing Systems, 2022, 35: 23851-23866. | Summary: The paper proposes a solution to the problem of state distributional shift in offline RL - the agent takes unreliable actions in out-of-sample states during testing. The paper introduces the use of inverse dynamics models to guide the state recovery behavior of the learned policy. Without constructing a forward model, OSR aligns the learned policy's transited state distribution at out-of-sample states with the offline dataset. Experimental results demonstrate the effectiveness of the proposed method.
Strengths: I think the idea and effort to deal with state deviation in offline RL is meaningful and promising.
The paper is easy to read and understand.
The experimental results in the Out-of-sample MuJoCo setting are interesting and demonstrate the advantage of training state recovery policy.
Weaknesses: It seems there is a non-negligible difference between the proposed method (theory) and the implementation. Eq. 7 and Eq.11 are not equivalent since the expectation w.r.t. s' is put outside the KL in Eq.11. One can find that they are not equal after simple calculation. I suppose this difference eliminates the need of predicting forward dynamics.
The paper lacks sufficient ablation study to support the effectiveness of the OSR/OSR-v term.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: How does OSR (not OSR-v) perform if the CQL term is removed? I think the OSR regularization term in Eq. 11 implies a kind of behavior cloning. Can this term alone suppress extrapolation error and overestimation?
In Fig. 6, it seems that the weight $\lambda$ for the OSR regularization has little effect on the performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Although the performance in Tab. 1 is good, the hyperparameters need to be tuned per dataset (Tab. 3, 4).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful comments. We provide clarifications to your questions below and would appreciate any further questions or comments.
**W1:It seems there is a non-negligible difference between the proposed method (theory) and the implementation. Eq. 7 and Eq.11 are not equivalent since the expectation w.r.t. s' is put outside the KL in Eq.11. One can find that they are not equal after simple calculation. I suppose this difference eliminates the need of predicting forward dynamics.**
**Response:** Thank you for the comment. We will make the derivation of Eq.11 from Eq.7 clearer in the revised manuscript. In particular, a sample from the mixed dataset $\mathcal D_{tot}$ is denoted as $(\tilde{s}, a, r, s')$, where $\tilde{s}$ can be either the original $s$ or a perturbed one $\hat{s}$ (according to Eq.5).
To implement Eq.7, we remove the expectation w.r.t. $s'$ inside the KL using a Monte Carlo approximation with the sample number $N$ set to 1, i.e.,
$\min_\pi\mathbb E_{\tilde{s}\sim\mathcal D_{tot}}KL\Big(\mathbb E_{s'\sim P(s'|s,\pi_\beta)}I^{\pi_\beta}(a|\tilde{s},s')\Big\|\pi(a|\tilde{s})\Big)$
$\approx\min_\pi\mathbb E_{\tilde{s}\sim\mathcal D_{tot}}KL\Big(\frac{1}{N}\sum^N_{i=1}I^{\pi_\beta}(a|\tilde{s},s'_i)\Big\|\pi(a|\tilde{s})\Big)$
$\approx\min_\pi\mathbb E_{\tilde{s}\sim\mathcal D_{tot}}KL\Big(I^{\pi_\beta}(a|\tilde{s},s')\Big\|\pi(a|\tilde{s})\Big)$
Therefore, more strictly, Eq.11 should be written as:
$L_{sr} = E_{\tilde{s}\sim\mathcal D_{tot}}KL\Big( I^{\pi_\beta}(a|\tilde{s},s')\Big\|\pi(a|\tilde{s})\Big)$
where $\tilde{s}$ and $s'$ belong to the same tuple sampled from the mixed dataset $\mathcal D_{tot}$. It should be emphasized that we do not compute the expectation over $s'$ outside of the KL.
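Under a Gaussian policy assumption, the per-sample loss $L_{sr}$ admits a closed-form KL between the IDM's action distribution and the policy's. A minimal sketch with hypothetical names (the means and standard deviations stand in for network outputs):

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) ), summed over action dims."""
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q**2)
        - 0.5
    )

# L_sr for one sampled (s_tilde, s'): IDM action head vs. policy action head.
mu_idm, sig_idm = np.array([0.2, -0.1]), np.array([0.3, 0.3])
mu_pi, sig_pi = np.array([0.25, -0.05]), np.array([0.35, 0.3])
loss = gaussian_kl(mu_idm, sig_idm, mu_pi, sig_pi)
print(loss >= 0.0)  # True: KL divergence is non-negative
```

Averaging this quantity over a minibatch of $(\tilde{s}, s')$ tuples gives the sampled version of $L_{sr}$.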
**Q1:How does OSR (not OSR-v) perform if the CQL term is removed? I think the OSR regularization term in Eq. 11 implies a kind of behavior cloning. Can this term alone suppress extrapolation error and overestimation?**
**Response:** This is an interesting question, and per your suggestion, we have conducted a more comprehensive ablation study on three environments, shown in the Table below, where each row gives the normalized scores of a specific compared ablated method.
| |Halfcheetah-m.-e.|Hopper-m.-e.|Walker2d-m.-e.|
|-|-|-|-|
|QL|9.8|0.3|0.2|
|QL+BC|41.2|44.7|73.6|
|QL+IDM|47.5|53.9|80.2|
|CQL|62.4|98.7|111.0|
|CQL+BC|85.7|111.8|104.3|
|CQL+IDM(OSR)|**94.7**|**114.3**|**113.1**|
Based on the above results, we have the following observations:
1) The IDM is useful - we can see that the performance ranking is OSR > CQL+BC > CQL, where BC denotes traditional behavior cloning (using the behavior policy). The IDM can be thought of as behavior cloning that also takes the consequence of an action into consideration when cloning that action.
2) Regulating the Q-function search space is important for the generalization capability of an offline RL agent - note that if we replace CQL with standard QL in OSR, we have the performance ranking OSR > QL+IDM > QL+BC, which shows that CQL-style value function learning is important for both IDM and BC. One possible reason is that the agent is trained under the actor-critic framework, where both the value function and the policy play a role; most importantly, in the offline RL setting, conservative learning like CQL is critical for suppressing extrapolation error and overestimation, as pointed out by the reviewer.
3) IDM and CQL are complementary - the IDM provides a way to guide the agent to navigate to safer regions, allowing it to learn smarter behavior when encountering unfamiliar states: the desired behaviors of an agent should be rational (i.e., less likely to be punished by the objective function of CQL) not only under normal in-distribution states but under difficult unseen situations as well (the latter point is less studied in the current literature). The IDM can also be thought of as a mechanism to control the training procedure of CQL so that it behaves less 'overly conservatively' during learning, potentially improving its generalization capability.
**Q2:In Fig. 6, it seems that the weight $\lambda$ for the OSR regularization has little effect on the performance?**
**Response:** Thank you for the comment. This may be because the range of $\lambda$ values we chose was too small. To further evaluate how the hyperparameters $\lambda$ and $\beta$ affect the performance, we attach more sensitivity-analysis results based on the normalized score metric as follows,
Halfcheetah:
|$\lambda \backslash \beta$|1e-5|1e-4|1e-3|1e-2|1e-1|
|-|-|-|-|-|-|
|0(CQL)|62.4|62.4|62.4|62.4|62.4|
|0.01|58.7|59.3|64.6|62.1|46.5|
|0.1|54.2|76.4|83.4|63.7|50.3|
|0.5|75.4|87.9|**94.6**|82.3|33.6|
|1.0|73.2|89.2|92.4|46.7|34.4|
|10.0|64.7|67.9|76.5|43.9|32.7|
Hopper:
|$\lambda \backslash \beta$|1e-5|1e-4|1e-3|1e-2|1e-1|
|-|-|-|-|-|-|
|0(CQL)|111.0|111.0|111.0|111.0|111.0|
|0.01|109.3|110.2|111.6|46.3|20.6|
|0.1|111.4|112.1|111.3|29.1|18.7|
|0.5|111.5|112.3|112.9|17.1|20.4|
|1.0|112.1|111.7|**113.0**|17.4|14.5|
|10.0|98.3|70.8|69.6|22.6|13.3|
Walker2d:
|$\lambda \backslash \beta$|1e-5|1e-4|1e-3|1e-2|1e-1|
|-|-|-|-|-|-|
|0(CQL)|98.7|98.7|98.7|98.7|98.7|
|0.01|101.1|102.5|104.6|94.1|33.3|
|0.1|103.2|108.9|112.4|97.6|16.6|
|0.5|109.4|112.6|**114.1**|83.3|16.8|
|1.0|108.2|110.1|113.8|84.7|20.1|
|10.0|101.7|100.8|89.1|72.8|17.1|
From the results, we observe that one should be cautious not to choose a very large $\beta$; otherwise, it could lead to failure. We remark that it is best to choose hyperparameters in the neighborhood of the bolded values listed in the tables above.
---
Rebuttal Comment 1.1:
Title: About my first concern
Comment: Thank you very much for the detailed reply. About your response to my first concern:
>"we remove the expectation w.r.t. s' inside the KL using Monte Carlo approximation with the sample number N as 1."
In my understanding, it is biased for the Monte Carlo sampling to approximate the expectation within the non-linear function KL.
Besides, to make the derivation more clear, we can show the dependence between $\tilde{s}$ and $s'$. We can add the original $s$ to the tuple $(\tilde{s},s')$ in the $D_{tot}$: $(s,\tilde{s},s')$, where $s$ is the state before perturbation.
In my understanding, the sample based minimization of Eq. 7 should be
$\mathbb{E} _{(s,\tilde{s})\sim D _{tot}} KL \left( \mathbb{E} _{s' \sim D _{tot} (\cdot| s,\tilde{s})}I^{\pi _\beta}(a|\tilde{s},s') | \pi(a|\tilde{s})\right)$
However, because the dataset stores coupled $(s,\tilde{s},s')$, the algorithm (Eq. 11) actually takes the following sample based optimization:
$\mathbb{E} _{(s,\tilde{s},s') \sim D _{tot}} KL \left( I ^{\pi _\beta}(a | \tilde{s},s') | \pi(a|\tilde{s}) \right) = \mathbb{E} _{(s,\tilde{s})\sim D _{tot}} \mathbb{E} _{s' \sim D _{tot}(\cdot| s,\tilde{s})} KL \left(I ^{\pi _\beta}(a|\tilde{s},s') | \pi(a|\tilde{s}) \right)$
The above two equations are not equivalent. More clarification would be appreciated. Thanks.
---
Reply to Comment 1.1.1:
Title: More clarification about your first concern
Comment: Thank you for your comment. First, per your suggestion, we will add the original $s$ to the tuple $(\tilde{s}, s')$ in $D_{tot}$ for a clearer derivation, giving $(s, \tilde{s}, s')$. We have also noted your concern - it looks like the proposed optimization objective (Eq.7) is not equivalent to its actual implementation (Eq.11), i.e.,
$\mathbb E_{(s,\tilde{s})\sim D_{tot}} D_{KL}\bigg[\mathbb E_{s'\sim P(s'|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg] \neq\mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}D_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$, where $P(s'|s, \pi_\beta)$ is the transition distribution, also denoted $D_{tot}(s'|s, \tilde{s})$ in its sampled version.
However, we would like to clarify that these two optimization problems are actually equivalent, in the sense that they induce the same solution,
$\arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}} D_{KL}\bigg[\mathbb E_{s'\sim P(s'|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg] = \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}D_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$ (1)
Below, we give the detailed derivation,
$\arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}} D_{KL}\bigg[\mathbb E_{s'\sim P(s'|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$ \ \ \ \ (2)
$= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\sum_a \mathbb E_{s'\sim P(s'|s, \pi_\beta)}I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{\mathbb E_{s''\sim P(s''|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s'')}{\pi(a|\tilde{s})}$ \ \ \ \ (3)
$= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{\mathbb E_{s''\sim P(s''|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s'')}{\pi(a|\tilde{s})}$ \ \ \ \ (4)
$= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\mathbb E_{s''\sim P(s''|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s'') + \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{1}{\pi(a|\tilde{s})}$ \ \ \ \ (5)
Note the term $\mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\mathbb E_{s''\sim P(s''|s, \pi_\beta)} I^{\pi_\beta}(a|\tilde{s}, s'')$ in Eq.(5) is a constant w.r.t. $\pi$, hence we can remove it as follows,
$(5)= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{1}{\pi(a|\tilde{s})}$ \ \ \ \ (6)
We remark that $\mathbb E_{(s,\tilde{s})\sim D_{tot}} \mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log I^{\pi_\beta}(a|\tilde{s}, s')$ is a constant w.r.t. $\pi$, so we can add it onto Eq.(6) as follows,
$(6)= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{1}{\pi(a|\tilde{s})} + \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log I^{\pi_\beta}(a|\tilde{s}, s')$ \ \ \ \ (7)
$= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}\sum_a I^{\pi_\beta}(a|\tilde{s}, s')\log\frac{I^{\pi_\beta}(a|\tilde{s}, s')}{\pi(a|\tilde{s})}$ \ \ \ \ (8)
$= \arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}D_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$ \ \ \ \ (9)
Then we can remove the expectation w.r.t. $s'$ in Eq.(9) with a Monte Carlo approximation with the sample number $N$ set to 1,
$\arg\min\limits_\pi \mathbb E_{(s,\tilde{s})\sim D_{tot}}\mathbb E_{s'\sim P(s'|s, \pi_\beta)}D_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$
$\approx \arg\min\limits_\pi \mathbb E_{\tilde{s}\sim D_{tot}}\frac{1}{N}\sum_{i=1}^ND_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s'_{i})\bigg\|\pi(a|\tilde{s})\bigg]$ \ \ \ \ (10)
$\approx \arg\min\limits_\pi \mathbb E_{\tilde{s}\sim D_{tot}}D_{KL}\bigg[I^{\pi_\beta}(a|\tilde{s}, s')\bigg\|\pi(a|\tilde{s})\bigg]$ \ \ \ \ (11)
where $(s, \tilde{s}, s')$ belong to the same tuple sampled from the mixed dataset $D_{tot}$. Here Eq.(11) is Eq.11 in our paper.
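The equivalence of the two objectives can also be checked numerically for discrete action distributions: since $\mathbb E_{s'}\big[D_{KL}(\cdot\|\pi)\big]$ and $D_{KL}\big(\mathbb E_{s'}[\cdot]\big\|\pi\big)$ differ only by a constant w.r.t. $\pi$ (per the derivation above), both select the same minimizer. A small self-contained check with toy distributions (not from the paper):

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence; assumes strictly positive q."""
    return np.sum(p * np.log(p / q))

# Two "IDM" action distributions for two different sampled next states s'.
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.3, 0.4, 0.3])
p_bar = (p1 + p2) / 2  # expectation over s' taken inside the KL

# Candidate policies q on a coarse grid over the 3-simplex.
grid = [np.array([a, b, 1 - a - b])
        for a in np.arange(0.05, 0.95, 0.05)
        for b in np.arange(0.05, 0.95, 0.05)
        if 1 - a - b > 0.04]

j1 = [kl(p_bar, q) for q in grid]                   # KL(E_{s'}[p] || q)
j2 = [0.5 * (kl(p1, q) + kl(p2, q)) for q in grid]  # E_{s'}[KL(p || q)]

# Both objectives pick the same minimizer: q close to p_bar.
q1_star = grid[int(np.argmin(j1))]
q2_star = grid[int(np.argmin(j2))]
print(np.allclose(q1_star, q2_star))  # True
```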
---
Reply to Comment 1.1.2:
Title: Dear Reviewer kCan
Comment: Dear Reviewer kCan,
Thank you for your efforts in improving our work. Our latest official comment provides more clarification on your first concern with a detailed derivation. We would be happy to include the above discussions in our work, and we would also be very grateful if you could respond to these points. | Summary: The paper addresses the issue of state distributional shift in offline reinforcement learning, where an agent tends to take unreliable actions when faced with unseen states during testing. The authors propose a solution to encourage the agent to follow the state recovery principle when making decisions. In addition to considering long-term return, the agent takes into account the immediate consequences of its current action, prioritizing actions that are capable of recovering the state distribution of the behavior policy. To achieve this, the authors train an inverse dynamics model, which is then used to guide the state recovery behavior of the new policy. The authors demonstrate the effectiveness and feasibility of their approach by achieving state-of-the-art performance on general offline RL benchmarks. Importantly, the proposed method aligns the transited state distribution of the new policy with the offline dataset at out-of-sample states without the need for explicit prediction, which is particularly challenging in complex and high-dimensional environments.
Strengths: 1. The paper addresses the state distributional shift problem, which has been relatively overlooked in prior research that primarily concentrates on mitigating out-of-distribution (OOD) actions during training.
2. The paper exhibits a high level of writing proficiency, logical organization, and reader-friendliness.
3. The utilization of an inverse dynamics model as a guide for state recovery represents a novel approach in the field.
4. The paper provides a rigorous theoretical analysis that demonstrates the effectiveness of the proposed algorithm.
Weaknesses: A significant concern pertains to the construction of the noisy dataset. Two subproblems arise in this context. Firstly, the constructed noisy states may not adequately reflect the distribution of out-of-sample states encountered in the real world. For instance, when dealing with a robot control task, simply introducing Gaussian noise to the logged states might not align well with practical out-of-sample states due to various physical or environmental constraints on state transitions. Secondly, the constructed transition data, where the noisy state transitions to the next state, may not accurately represent the transition distribution in the real world. Consequently, this discrepancy between the two sources of data conflicts with the fundamental claim of the paper, as it becomes challenging to demonstrate that the inverse dynamics learned truly reflect the correct dynamics when the data sources are mismatched. More discussion or experimental results should be presented to address this concern.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In light of the aforementioned weaknesses, I have a few questions that, if addressed by the authors, could potentially enhance the overall quality of the paper:
1. How can the issue of constructing a noisy dataset be mitigated to better reflect the distribution of out-of-sample states encountered in real-world scenarios, considering the presence of physical and environmental constraints on state transitions?
2. Is there a way to ensure that the constructed transition data accurately represents the true transition distribution in real-world settings, thus aligning with the core claim of the paper regarding the learned inverse dynamics?
3. Could the authors provide further evidence or explanations to alleviate concerns regarding the mismatch between the constructed noisy dataset and the actual out-of-sample states, thereby strengthening the validity of the proposed algorithm?
I believe that addressing these questions would significantly contribute to improving the paper and potentially enhance its evaluation score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: One notable limitation of this work is the potential mismatch between the constructed transition distribution and the true transition distribution encountered in real-world scenarios. This disparity may restrict the practical applicability of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comment. We appreciate your questions and provide clarification below.
**Q1: ... noisy dataset be mitigated to better reflect the distribution of out-of-sample states ...?**
**Response:** Before answering this concern, please note that modeling OOD samples is not our ultimate goal, for the following reasons: 1) encountering out-of-sample situations is almost inevitable for any agent due to approximation errors and other factors (e.g., finite sample size); 2) experimental evidence shows that even when the OOD states are not well modeled (e.g., using our naive Gaussian noise injection), state-of-the-art performance can still be achieved if we can properly guide the agent to navigate to safer regions (e.g., using our inverse model). Please see our response to Reviewer tZMd for more details.
Due to the above reasons, the focus of our work is actually not to explore the real distribution of out-of-sample states, but to improve the robustness of the agent against unfamiliar states and to encourage it to learn smarter behavior when encountering such situations.
As noise injection is a well-known method to enhance the robustness of a learning algorithm, per your suggestion, we have explored an alternative way to better model the OOD states, which takes into account the physical and environmental constraints as well as the currently learned policy and Q-values. In particular, we employ a series of adversarial attacks, including random, action-difference, and minimum-Q attacks, to generate OOD samples and use them for data augmentation. The detailed results are given in our response to **W1** of Reviewer tZMd.
From the results, we can see that the utilization of a more intricate and meticulously designed noisy dataset is indeed useful in enhancing the performance of our method, probably due to the fact that such methods can improve the efficiency of OOD sampling and expand the coverage of noisy dataset.
**Q2: ... ensure that the constructed transition data accurately represents the true transition ...**
**Response:** Thank you for your comment. Instead of treating the inverse dynamics model (IDM) as a dynamics model representing the true transition distribution, in our work we regard it as an extended policy. It takes the current state (which may be unseen) $s$ and the one-step target state $s'$ as inputs, providing guidance to the agent on how to reach $s'$ from $s$ as effectively as possible. Although there is no strict theoretical guarantee that the IDM accurately predicts the true transition distribution at unseen states in real-world settings, our experimental results, shown in Figs. 3, 10, 11, and 12 of our paper, demonstrate that the IDM, acting as an extended policy, helps the agent behave better under unseen states.
To further validate the above view, we conduct a simple experiment on the MuJoCo suites. State pairs $(s, s')$ are sampled from the offline dataset, and out-of-sample states $\hat{s}$ are generated via adversarial attacks [1]. By evaluating the average distance between the target state $s'$ and the actual state $\hat{s}'$ obtained by taking the action from the IDM's policy $IDM(a|\hat{s}, s')$, we assess the IDM's state-recovery ability. A comparison with a CQL policy is also performed, yielding the following results:
| |Halfcheetah|Hopper|Walker2d|
|-|-|-|-|
|Distance($s$,$\hat{s}$)|12.23|3.41|7.75|
|Distance($s'$, $\hat{s'}_{IDM}$)|**11.79**|**2.80**|**5.49**|
|Distance($s'$, $\hat{s'}_{CQL}$)|17.51|3.52|9.08|
From these results we can see that, compared to traditional policies like CQL, the action guidance provided by the IDM effectively reduces the distance between the target state $s'$ and the actually reached state $\hat{s}'$. It is important to note that while the IDM does not guarantee accurate prediction of the true transition distribution at unseen states, the information it provides still proves valuable in mitigating such risks.
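The recovery-distance metric reported in the table above can be sketched as a simple mean Euclidean distance; the arrays below are toy stand-ins for states collected by rolling out each policy, not the paper's data:

```python
import numpy as np

def mean_recovery_distance(targets, reached):
    """Average Euclidean distance between target next states s' and the
    states actually reached after executing the evaluated policy's action."""
    return float(np.mean(np.linalg.norm(targets - reached, axis=1)))

# Toy 2-D states: the IDM-guided policy lands closer to its targets.
targets = np.array([[0.0, 0.0], [1.0, 1.0]])
reached_idm = np.array([[0.1, 0.0], [1.0, 0.9]])
reached_cql = np.array([[0.5, 0.5], [0.2, 1.0]])
print(mean_recovery_distance(targets, reached_idm)
      < mean_recovery_distance(targets, reached_cql))  # True
```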
**Q3: ... concerns regarding the mismatch between the constructed noisy dataset and the actual out-of-sample states ...**
**Response:** To assess the generalizability of our method, we need to first construct a dataset with more realistic out-of-sample states to test the learnt model. For this we employ a modified generative adversarial network (GAN), which is optimized as follows,
$\min\limits_G \max\limits_D \big[\mathbb E_{s\sim P(s)}[\log D(s)]+ \mathbb E_{s\sim P_{G}(s)}[\log (1-D(s))]\big] + \alpha\cdot \mathbb E_{s\sim P_{G}(s)}H[\pi_\beta(\cdot|s)]$
where $G$ is the generator, $D$ is the discriminator, $P(s)$ is the real-world state distribution (the dataset), and $P_G(s)$ is the state distribution generated via $G$. $H$ is the entropy function. In words, the above objective aims to generate realistic samples (i.e., consistent with the in-distribution data) that confuse the behavior policy $\pi_\beta$ most; hence its output can be considered a kind of realistic out-of-sample state.
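Assuming the behavior policy is a diagonal Gaussian (hypothetical; the entropy term in the objective does not depend on this choice), the regularizer $H[\pi_\beta(\cdot|s)]$ has a closed form, sketched below:

```python
import numpy as np

def gaussian_policy_entropy(sigma):
    """Differential entropy of a diagonal-Gaussian policy N(mu, diag(sigma^2)),
    summed over action dimensions: sum_k 0.5 * log(2*pi*e*sigma_k^2)."""
    return float(np.sum(0.5 * np.log(2.0 * np.pi * np.e * sigma**2)))

# Generated states where the behavior policy is more uncertain (larger sigma)
# get a higher entropy score, so the alpha-weighted term pushes G toward
# realistic states that confuse pi_beta most.
print(gaussian_policy_entropy(np.array([0.5, 0.5]))
      < gaussian_policy_entropy(np.array([1.0, 1.0])))  # True
```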
In evaluation, we initialize the MuJoCo environments with the generated out-of-sample states and assess the generalizability of the learnt agent on them. The results, based on the average of normalized scores and recovery rate, are as follows:
| |Halfcheetah|Hopper|Walker2d|
|-|-|-|-|
|CQL|20.4/33.8%|36.5/39.1%|13.1/12.6%|
|OSR|**40.1/69.9%**|**72.0/88.8%**|**43.3/32.4%**|
The above results indicate that our OSR effectively guides the agent to recover from most real-world out-of-sample situations (nearly 70%) in the Halfcheetah and Hopper tasks, despite the mismatch between the constructed (Gaussian) noisy dataset and the actual (GAN-based) out-of-sample states. Although the Walker2d task seems challenging for it, our OSR method performs significantly better than CQL, which lacks a risk-guiding mechanism like ours.
To gain further insights, we provide visualizations of typical out-of-distribution (OOD) states in the three tasks, along with the corresponding trajectories of OSR. Please refer to the uploaded PDF for the visualizations.
---
Rebuttal Comment 1.1:
Comment: Thanks. I have improved my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your comment.
Comment: Thank you for your comment. If you have any other questions, please post them. We are happy to continue our communication. | Summary: The authors tackle the state distributional shift problem in offline reinforcement learning, by learning to *recover* to states that are close to the in-distribution region, where the proposed method is named Out-of-sample State Recovery (OSR). They augment the offline dataset by generating new samples with Gaussian noise injected into states for training. Also, they train an inverse dynamics model (IDM) to predict the *recovering* action given the current and next states. Based on it, they learn a policy and encourage it to imitate the IDM to output *recovering* actions using their KL divergence term, or penalize Q-values for going out-of-sample. The authors present empirical results in MuJoCo and AntMaze environments.
Strengths: - The training of IDM with perturbed samples and its use for encouraging the policy to recover to seen states is novel to some extent.
- The experimental evaluation and analyses of the proposed methods are done in multiple settings and perspectives, which backs up the proposed methods empirically.
- State distributional shift is one of the problems that need to be tackled in the field of offline reinforcement learning.
Weaknesses: - The noise injection may not be enough in more complex environments with more state dimensions, as the state space would be too big to cover possible close OOD states with random sampling.
- On a related note, as the size of the original offline dataset increases, the needed amount of augmented data could become too large or otherwise the augmentation might get less effective.
- In terms of presentation, I think merging Fig.13 and Fig.14 would be more informative and make the comparison of the results with OSR and OSR-v easier.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do you think the proposed IDM-based approach is supposed to work better than existing pessimism approaches with dataset augmentation (using noise injection) in general, especially when no specific modifications to cause out-of-sample situations are made for the environment? If so, I would like to hear the justification.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors fairly covered the limitations of this work on the assumptions for the theoretical derivations and the recoverability depending on the type of out-of-sample situations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful comments. We provide clarification to your questions as below. We appreciate it if you have any further questions or comments.
**W1: The noise injection ... not be enough in more complex environments with more state dimensions...**
**Response**: We agree that Gaussian noise injection would quickly become inadequate for modeling OOD states in high-dimensional state spaces. In our opinion, an ideal OOD sample generator should be both efficient (i.e., purposeful) and relatively insensitive to the dimensionality of the state space. One possible way to achieve this is through adversarial attacks [1], which purposefully search for OOD samples in the $\epsilon$-neighborhood centered at each state $s$ such that they either cause a large policy perturbation or have lower Q-values. Such a method is also less sensitive to the dimensionality of the state space thanks to the use of deep generative networks. We construct a new dataset using this method and denote the experimental results on it as OSR-a, shown below:
| |Halfcheetah-m.|Halfcheetah-m.-r.|Halfcheetah-m.-e.|Halfcheetah-r.|Halfcheetah-e.|Hopper-m.|Hopper-m.-r.|Hopper-m.-e.|Hopper-r.|Hopper-e.|Walker2d-m.|Walker2d-m.-r.|Walker2d-m.-e.|Walker2d-r.|Walker2d-e.|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|OSR-a|**52.3**|**51.2**|**101.2**|**35.2**|**103.1**|**85.2**|**97.4**|112.7|**10.9**|112.2|**87.2**|86.8|113.4|**15.1**|**111.2**|
|OSR|48.8|46.8|94.7|**35.2**|97.7|83.1|96.7|**113.1**|10.3|**113.1**|85.7|**87.9**|**114.3**|13.5|110.3|
From the results we observe that OSR-a outperforms OSR at most benchmarks. We remark that better-designed out-of-sample generating methods would further improve the performance of our method, probably because such methods take into account the physical and environmental constraints, resulting in better sampling efficiency.
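For reference, the plain Gaussian noise injection used to construct the original noisy dataset can be sketched in a few lines. This is a minimal toy version (the function name, noise scale, and dimensions below are hypothetical, not the paper's actual implementation):

```python
import numpy as np

def augment_states(states, sigma=0.05, copies=4, rng=None):
    """Generate noisy out-of-sample copies of offline-dataset states
    by injecting zero-mean Gaussian noise with scale `sigma`."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = [states + sigma * rng.standard_normal(states.shape)
             for _ in range(copies)]
    return np.concatenate(noisy, axis=0)

# Toy 3-dimensional state dataset with 100 entries.
rng = np.random.default_rng(0)
states = rng.standard_normal((100, 3))
augmented = augment_states(states, sigma=0.05, copies=4, rng=rng)
print(augmented.shape)  # (400, 3)
```

In OSR the augmented states are then paired with IDM-guided recovering actions; only the state-perturbation step is shown here, which is exactly the part an adversarial generator would replace.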
**W2: ... dataset increases, the needed amount of augmented ...**
**Response:** Please note that modeling OOD samples based on the observed experience is not our ultimate goal. Actually, as mentioned in our response to Q1 of Reviewer cN6c, the performance of our method is largely due to the inverse model, although noise injection is shown to further improve the performance. For your convenience, we copy the results below, where OSR($\beta=0$) is roughly equivalent to CQL + IDM without noise injection, while OSR includes noise injection. From these results we can see that it is the IDM, rather than noise injection, that contributes most to the performance of OSR, and data augmentation is more like icing on the cake (see the 2nd and 3rd rows). Hence we conclude that even with no noise injection at all, our method would preserve most of its advantages in training a rational agent in the offline RL setting, provided that the dataset is large enough to train a good inverse dynamics model (IDM).
| |Halfcheetah-m.-e.|Hopper-m.-e.|Walker2d-m.-e.|
|-|-|-|-|
|CQL|62.4|98.7|111.0|
|OSR($\beta$=0)|92.3|111.8|110.1|
|OSR|**94.7**|**114.3**|**113.1**|
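To illustrate the role the IDM plays above, here is a toy, linear stand-in (the paper's IDM is a neural network; the dynamics, dimensions, and noiseless setting below are invented for illustration): it regresses the action from the pair $(s, s')$, so querying it with a perturbed current state and an in-dataset successor yields a "recovering" action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear dynamics: s' = s + 0.1 * a, with 2-D states and 2-D actions.
s = rng.standard_normal((1000, 2))
a = rng.standard_normal((1000, 2))
s_next = s + 0.1 * a

# Fit a linear inverse dynamics model a_hat = [s, s'] @ W by least squares.
X = np.hstack([s, s_next])
W, *_ = np.linalg.lstsq(X, a, rcond=None)

# On this noiseless linear system the IDM recovers the action exactly.
a_hat = X @ W
print(np.abs(a_hat - a).max())  # near machine precision
```

Since the fitted model maps any (current, target) state pair to the action connecting them, it supplies recovery actions even at perturbed states never seen by the behavior policy.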
**W3: ... merging Fig.13 and Fig.14 ...**
**Response:** Thanks for your suggestion and we will do that in the revised manuscript.
**Q1: .... OSR ... better than existing pessimism approaches with dataset augmentation (using noise injection) .... justification.**
**Response:** Our answer is yes, and our justification is as follows,
Although data augmentation is useful when the training data are not sufficiently representative, encountering out-of-sample situations is almost inevitable for any agent due to the presence of approximation errors and other factors. Such unfortunate situations are aggravated when the new policy makes unreliable decisions at unseen states, causing the agent to deviate from the offline dataset, and existing data augmentation-based approaches, e.g., RORL [1] and ROMI [2], seldom consider the problem of how to recover from such scenarios.
To give some experimental evidence on this, we compare two popular data augmentation approaches, i.e., ROMI and RORL, with our method on the standard MuJoCo benchmark in the table below, where OSR-10 is OSR implemented with 10 Q networks, the same as RORL.
| |Halfcheetah-m.|Halfcheetah-m.-r.|Halfcheetah-m.-e.|Halfcheetah-r.|Hopper-m.|Hopper-m.-r.|Hopper-m.-e.|Hopper-r.|Walker2d-m.|Walker2d-m.-r.|Walker2d-m.-e.|Walker2d-r.|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|ROMI|49.1|47.0|86.8|24.5|72.3|98.1|111.4|30.2|84.3|**109.7**|109.7|7.5|
|RORL|66.8|61.9|107.8|**28.5**|104.8|102.8|112.7|**31.4**|**102.4**|90.4|121.2|21.4|
|OSR(ours)|48.8|46.8|94.7|35.2|83.1|96.7|113.1|10.3|85.7|87.9|114.3|13.5|
|OSR-10(ours)|**67.1**|**64.7**|**108.7**|28.1|**105.5**|**103.1**|**113.2**|30.2|102.0|93.8|**123.4**|**23.1**|
These results show that although only a very naive noise-injection mechanism is used for data augmentation in our OSR, it performs comparably to or better than both ROMI and RORL, in which more complex data augmentation techniques are involved. Besides, we also compare our method with RORL on the out-of-sample MuJoCo benchmarks, and the results can be found in our response to Reviewer cN6c, showing that our IDM-based offline RL method OSR works much more stably than the state-of-the-art data augmentation-based method (RORL) on the challenging OOD benchmarks.
Furthermore, in the attached PDF file, we illustrate visually how our method enables the agent to recover from many out-of-sample situations on MuJoCo benchmarks.
[1] Yang R, Bai C, Ma X, et al. Rorl: Robust offline reinforcement learning via conservative smoothing[J]. Advances in Neural Information Processing Systems, 2022, 35: 23851-23866.
[2] Wang J, Li W, Jiang H, et al. Offline Reinforcement Learning with Reverse Model-based Imagination. arXiv preprint, 2021. DOI: 10.48550/arXiv.2110.00188. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful and constructive comments on our work, which discusses a significant but overlooked issue in offline reinforcement learning: state distributional shift.
In summary, our response includes the following aspects:
1.**[Efficiency of noise injection and OOD sampling]** Providing additional experimental results obtained by applying adversarial methods to our implementation, referred to as OSR-a, and an additional ablation study including the removal of noise ($\beta=0$), among others. The detailed results and analysis can be found in the responses to Reviewer tZMd's **W1** and **W2** and Reviewer tm1v's **Q1**.
2.**[Validity of the use of IDM]** Providing additional experimental results to explore the behavior of the inverse dynamics model (IDM) we use and its guided policy, enhancing the validity of our work. The detailed results and analysis can be found in the response to Reviewer tm1v's **Q2, Q3** and the visualization in the attached PDF file.
3.**[Additional ablation study]** Providing a more comprehensive ablation study (sensitivity analysis), clarifying the role of each component in our work. This can be found in the responses to Reviewer kCan's **Q1, Q2** and Reviewer cN6c's **Q1**.
4.**[Additional comparison study]** Providing a further comparison study with related works, including ROMI, a representative offline RL algorithm with a reverse model, and RORL, a SOTA robust offline RL algorithm. The detailed results and analysis can be found in the responses to Reviewer tZMd's **Q1** and Reviewer cN6c's **Q5**.
5.**[Evaluating a prior robust offline RL algorithm on the OOS MuJoCo benchmarks]** Providing experimental results of RORL, a SOTA robust offline RL algorithm, on the proposed OOS MuJoCo benchmarks. The detailed results and analysis can be found in the response to Reviewer cN6c's **Q4** and the visualization in the attached PDF file.
6.**[Gap between theory and implementation]** Clarifying this concern in the response to Reviewer kCan's **W1**.
7.**[State normalization]** Providing experimental results of our implementation with the state normalization trick. The detailed results and analysis can be found in the response to Reviewer cN6c's **Q2**.
8.**[Format suggestions]** Addressing Reviewer tZMd's **W3** and Reviewer cN6c's **Q3**; we will make these changes in the revised manuscript.
We hope our response could address the reviewers' concerns. If you have any further questions, please post them. We are pleased to have further discussions.
Pdf: /pdf/b9a7f2abb142b6beda0b14d6aa141dfa04bc4d29.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning | Accept (poster) | Summary: This paper introduces A-Crab, an offline RL algorithm derived from ATAC, which incorporates a modified loss function for the Q-function. Instead of employing the square Bellman error used in ATAC, A-Crab utilizes the importance-weighted Bellman error. With this modified loss function, A-Crab effectively addresses some disadvantages of ATAC. These include alleviating the visitation coverage assumption, enhancing the suboptimality rate, and simplifying the minimax optimization process. Additionally, A-Crab inherits several benefits from ATAC. Theoretical findings presented in this paper demonstrate the superior advantages of A-Crab in comparison to ATAC.
Strengths: 1. This paper is well-organized, presenting a table that compares provable offline RL algorithms, outlining the key distinctions of the proposed algorithm, A-Crab, in contrast to ATAC, and highlighting the advantages brought about by these modifications.
2. By simply incorporating the importance sampling ratio into the loss function of ATAC, A-Crab exhibits significantly improved characteristics, namely: 1) relaxation of the visitation coverage assumption, 2) improvement of the suboptimality rate, and 3) simplification of the minimax problem into a maximization problem. These advancements result from non-trivial steps and can be accomplished by making only a few adjustments to the terms in the loss functions of ATAC.
Weaknesses: 1. Despite the noteworthy novelty of this paper in terms of presenting theoretical advantages of A-Crab, the absence of empirical results diminishes its impact. I acknowledge that it is common to solely provide theoretical results in the realm of provable offline RL. However, considering that A-Crab is built upon ATAC, which offers both theoretical and numerical results, it would be feasible to compare the performance of A-Crab with that of ATAC. I believe that including such results would significantly enhance the paper.
2. The importance sampling ratio in offline RL can become excessively large due to the limited coverage of the state-action space in the offline dataset and substantial differences between the current policy and the dataset policy. Consequently, the A-Crab algorithm may encounter instability when attempting to learn a robust policy in practical scenarios. However, if this paper addresses this issue, it would substantially strengthen its credibility.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. [Regarding Weakness 1] Would it be possible for you to present empirical results on D4RL datasets that compare the performance of A-Crab with that of ATAC?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The only limitation of this paper is the absence of empirical results. However, the implementation of A-Crab appears to be relatively straightforward, as it can be achieved by replacing the squared Bellman error in ATAC's implementation with the importance-weighted average Bellman error. Given that the implementation of ATAC is already available on GitHub, there is an opportunity to enhance this paper by presenting empirical results that compare the performance of A-Crab with that of ATAC.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses.
>Would it be possible for you to present empirical results on D4RL datasets that compare the performance of A-Crab with that of ATAC?
Yes. Please see the “global” response for details.
>The importance sampling ratio in offline RL can become excessively large due to the limited coverage of the state-action space in the offline dataset and substantial differences between the current policy and the dataset policy. Consequently, the A-Crab algorithm may encounter instability when attempting to learn a robust policy in practical scenarios.
Note that our algorithm only requires that the dataset covers the policy we aim to learn (the goal policy), instead of covering all policies. In other words, we only require that the importance sampling ratio of the goal policy be bounded in a weighted $\ell_2$ sense. It does not matter whether the current policy is well covered by the dataset or has a bounded importance sampling ratio, since our algorithm does not require computing the ratio for the current policy. Also, in our practical implementation, we can avoid using the $w$ function by using a computationally more efficient approximation of the importance-weighted Bellman error (see the "global" response for details).
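The gap between requiring a bounded importance ratio everywhere and only in a weighted $\ell_2$ sense can be seen on a toy discrete example (the two distributions below are made up purely for illustration):

```python
import numpy as np

# Hypothetical dataset distribution mu and goal-policy occupancy d_pi
# over four state-action pairs.
mu   = np.array([0.50, 0.30, 0.15, 0.05])
d_pi = np.array([0.25, 0.25, 0.25, 0.25])

ratio = d_pi / mu                       # importance sampling ratios w(s, a)
C_inf = ratio.max()                     # worst-case (l_inf) concentrability
C_l2  = np.sqrt(np.sum(mu * ratio**2))  # weighted l_2 concentrability

# Worst-case ratio is 5.0, while the weighted l_2 norm is only sqrt(2).
print(C_inf, C_l2)
```

By Jensen's inequality $\sqrt{\mathbb{E}_\mu[w^2]} \le \max w$ always holds, so the weighted $\ell_2$ requirement is never stronger than the worst-case one, and on skewed datasets like this toy example it is much milder.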
---
Rebuttal Comment 1.1:
Comment: Thank you for your kind responses to my questions. I am delighted to see the empirical results on D4RL tasks, comparing A-Crab with ATAC. These results can help readers better understand the effectiveness of the proposed method.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for the valuable feedback, and thanks for the suggestion of adding experimental results that would significantly enhance our paper. Since we have addressed the concerns pointed out during the official review, would you like to consider raising your score accordingly? | Summary: The paper proposes an offline reinforcement learning algorithm called Actor-Critic Regularized by Average Bellman error (A-Crab). A-Crab modifies the pessimistic offline RL framework by replacing the usual squared TD error with an importance sampled TD error. Due to the linearity of the importance sampled TD error, overestimation does not occur, allowing the removal of the correction term. A-Crab achieves the optimal suboptimality rate with weaker assumptions and is computationally efficient. An improvement over the behavior policy is also guaranteed under a wide range of hyperparameters.
Strengths: ### Originality
The authors introduced a novel technique using the importance sampled TD error instead of the squared TD error. Due to its linearity, the correction for overestimation is unnecessary, allowing better suboptimality bounds and weaker assumptions. The analysis also becomes very simple with the help of Cauchy–Schwarz inequality.
### Quality
I could not find any technical flaws in the arguments presented in the paper.
### Clarity
The paper is overall well-written and easy to understand.
### Significance
A-Crab achieves the optimal statistical rate of $1/\sqrt{N}$ with assumptions that are weaker compared to other provable offline RL algorithms. Also, using importance sampling instead of squaring can be useful in areas other than offline RL theory.
Weaknesses: 1. The authors unrealistically assume the action space, the policy space, the importance sampling weight function space, and the value function space to be finite.
2. Experimental verification of the performance bounds on a simple toy example would be interesting.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Line 70 states that the A-Crab enjoys an optimal statistical rate of $1/\sqrt{N}$. Does this mean the optimality of $O(1/\sqrt{N})$ is theoretically proved? Or does this just mean that it is the best suboptimality rate to be discovered?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not address the limitations of their work. Assumptions on the finiteness of the action space, the policy space, the importance sampling weight function space, and the value function space can be viewed as limitations of this work. I believe this work does not have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses.
>The authors unrealistically assume the action space, the policy space, the importance sampling weight function space, and the value function space to be finite.
The finite cardinality assumption on all the above function classes was made only for convenience. Actually, we can instead replace the finite cardinality assumption with the bounded log covering-number assumption. The replacement is straightforward and is basically the same as Appendix B in [1].
>Experimental verification of the performance bounds on a simple toy example would be interesting.
We included experimental verification in the “global” response.
>Line 70 states that the A-Crab enjoys an optimal statistical rate of $1/\sqrt{N}$. Does this mean the optimality of
$O(1/\sqrt{N})$ is theoretically proved? Or does this just mean that it is the best suboptimality rate to be discovered?
An $O(1/\sqrt{N})$ rate is the best one can hope for in terms of $N$, and this is due to the intrinsic statistical error. For example, when we want to estimate the expectation of a random variable $X$ using $N$ i.i.d. samples $X_1, \ldots, X_N$, with constant probability, $|\bar X - \mathbb{E}[X]| \geq \Omega(1/\sqrt{N})$, where $\bar X = \frac{1}{N} \sum_{i=1}^N X_i$ is the empirical mean. This means the best rate of error one can hope for is $O(1/\sqrt{N})$. Similarly, it can be proved that there is a lower bound of $\Omega(1/\sqrt{N})$ on the suboptimality of the learned policy, which means $O(1/\sqrt{N})$ is optimal in terms of $N$.
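This scaling is easy to check numerically. The small Monte Carlo experiment below (toy Gaussian samples, parameter values arbitrary) verifies that quadrupling the sample size roughly halves the estimation error, as a $1/\sqrt{N}$ rate predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_error(n, trials=20000):
    """Average |empirical mean - true mean| over many repetitions,
    for n i.i.d. N(0, 1) samples (true mean 0)."""
    samples = rng.standard_normal((trials, n))
    return np.abs(samples.mean(axis=1)).mean()

# The error shrinks like 1/sqrt(n): the ratio below should be close to
# sqrt(400 / 100) = 2.
print(mean_abs_error(100) / mean_abs_error(400))
```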
**References:**
[1] Cheng, C. A., Xie, T., Jiang, N., & Agarwal, A. (2022, June). Adversarially trained actor critic for offline reinforcement learning. In International Conference on Machine Learning (pp. 3852-3878). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. It would be nice to have these explanations included in the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your time reviewing our paper and reading our response. We will include the points you mentioned in our revision.
---
Reply to Comment 1.1.2:
Comment: We thank the reviewer again for their great efforts in reviewing our paper. Since we have addressed your concerns on both the theoretical and empirical sides (the two points in the weakness section), would you like to consider raising your score accordingly? | Summary: The paper introduces A-Crab, which combines marginalized importance sampling with the actor-critic paradigm to achieve the optimal statistical rate in offline RL. From the theoretical analysis, the algorithm is also more computationally efficient and relies on a weaker, average notion of policy coverage compared to prior work.
Strengths: 1. The paper is well-written, effectively summarizing prior work and building upon it to propose the new algorithm. The notations are clear and consistent.
2. The algorithm itself exhibits strong theoretical properties, such as optimal statistical rate and efficiency, making it promising for offline RL problems.
Weaknesses: 1. The empirical evaluation of the algorithm is a major concern. Since the theory only requires general function approximators, the gap between theory and practice may not be substantial. Experimental results would help demonstrate the algorithm's effectiveness in terms of computation cost and learning speed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Line 224, where does the overestimation come from? Please provide clarification.
2. In Line 248, why does ATAC need to optimize two functions, f and g? The previous mention in Line 214 only refers to f.
3. In Line 280, how does the theorem also demonstrate robustness against model mismatch? Please elaborate on this point.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses.
>The empirical evaluation of the algorithm is a major concern.
We provided empirical results to demonstrate the algorithm’s effectiveness. See the “global” response for details.
>In Line 224, where does the overestimation come from?
The overestimation is caused when directly using the empirical version $\mathbb{E}\_{\mathcal{D}} \left[(f(s,a)-r - \gamma f(s', \pi))^2\right]$ to estimate the term $\mathbb{E}\_\mu [((f - \mathcal{T}^\pi f)(s,a))^2]$. To better illustrate, we use a simpler example. Assume there are two random variables $X$, $Y$, and we want to estimate
$\mathbb{E}[ ( X - \mathbb{E}[Y|X])^2]$
(note that $\mathcal{T}^\pi f(s,a)$ is also an expectation conditioned on $(s,a)$). If we have $n$ samples of $(X\_i, Y\_i)$ pairs, and directly use $\frac{1}{n}\sum\_{i=1}^n (X\_i - Y\_i)^2$ as an empirical estimator, then it is an overestimation of $\mathbb{E}[(X - \mathbb{E}[Y|X])^2]$ since $\mathbb{E}[\frac{1}{n}\sum\_{i=1}^n (X\_i - Y\_i)^2] = \mathbb{E}[(X-Y)^2] \geq \mathbb{E}[(X-\mathbb{E}[Y|X])^2]$. This is essentially the same as the overestimation in Line 224.
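This overestimation is easy to reproduce numerically with a toy pair $(X, Y)$ where $\mathbb{E}[Y|X] = X$, so the target quantity $\mathbb{E}[(X - \mathbb{E}[Y|X])^2]$ is exactly $0$ (the distributions below are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)   # E[Y|X] = X, conditional variance 1

naive = np.mean((x - y) ** 2)    # empirical estimate of E[(X - Y)^2]
exact = np.mean((x - x) ** 2)    # E[(X - E[Y|X])^2] = 0 here

print(naive)   # close to 1.0: inflated by the conditional variance of Y
print(exact)   # 0.0
```

The naive squared-error estimate is inflated by exactly the conditional variance of $Y$ given $X$, which is the quantity the extra $\min_g$ term in ATAC's objective is designed to estimate and subtract.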
>In Line 248, why does ATAC need to optimize two functions, f and g?
This is highly related to the previous question. The reason ATAC needs another function $g$ is to address the overestimation issue. As we already discussed, $\mathbb{E}\_{\mathcal{D}} \left[(f(s,a)-r - \gamma f(s', \pi))^2\right]$ is an overestimate of $\mathbb{E}\_\mu [((f - \mathcal{T}^\pi f)(s,a))^2]$, and the amount of the overestimation is roughly equal to $\min\_{g \in \mathcal{F}} \mathbb{E}\_{\mathcal{D}} \left[(g(s,a)-r - \gamma f(s', \pi))^2\right]$. Therefore, one should use $\mathbb{E}\_{\mathcal{D}} \left[(f(s,a)-r - \gamma f(s', \pi))^2\right] - \min\_{g \in \mathcal{F}} \mathbb{E}\_{\mathcal{D}} \left[(g(s,a)-r - \gamma f(s', \pi))^2\right]$ as an unbiased estimator as also mentioned in Line 225 and 226.
>In Line 280, how does the theorem also demonstrate robustness against model mismatch?
In Theorem 1, the upper bound of the suboptimality contains a term $C^\star_{\ell_2} \sqrt{\epsilon_\mathcal{F}}$ where $\epsilon_\mathcal{F}$ quantifies the model mismatch, which is defined in Assumption 1 (Line 137). Note that when $\epsilon_\mathcal{F} = 0$, there is no model mismatch; when $\epsilon_\mathcal{F} > 0$, the additional suboptimality caused by model mismatch is at most $C^\star_{\ell_2} \sqrt{\epsilon_\mathcal{F}}$, which means that our algorithm is robust against model mismatch.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies and additional experiments. My questions on the theoretical side have been solved. However, I still have concerns regarding the empirical results and the connections to the theory. For example, how does the performance difference change during training, and how is the bound influenced by the usage of complex function approximation? I hope the authors could add more empirical details in the revised version.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the time reviewing our paper, reading our response, and providing valuable feedback! We are glad that we have successfully addressed the reviewer's questions on the theoretical side.
For the empirical results, the primary purpose is to (empirically) prove that our algorithm is practical and can achieve great performance. Compared to ATAC, our A-Crab algorithm achieves better or comparable performance in various settings, as shown in the plots in the "global response". This is consistent with our theoretical results that our algorithm has better sample complexity than the previous algorithm.
Below we also respond to two specific questions the reviewer mentioned.
>how does the performance difference change during training
The change of performance difference during training is indicated by the change of performance during the training as shown in our plots. However, since it is hard to know the performance of the optimal policy in those relatively complex environments, it is also hard to plot the performance difference directly. However, note that the performance difference can be viewed as a constant minus the performance in any specific environment, so the performance difference curve can be obtained by mirroring the performance curve along the horizontal axis up to a constant shift.
>how is the bound influenced by the usage of complex function approximation
Typically there is a tradeoff between the cardinality (or covering number) of the function class and the model misspecification. The richer the function class, the smaller the model-misspecification parameter. Roughly speaking, there should be an "optimal" size of the function class that best balances the cardinality and model misspecification. The function class complexity can be changed by using different architectures of neural networks. Since the main focus of our experiments conducted during the rebuttal session is to compare with the previous ATAC algorithm and show our algorithm is practical, we use the same architecture of networks as ATAC for fairness. Also, exploring the influence on the performance of different network architectures is not the main focus of this work, but it would be interesting to see whether a different network can achieve better results. | Summary: This paper proposes a novel algorithm called A-Crab (Actor-Critic Regularized by Average Bellman Error) for offline reinforcement learning (RL) in complex environments with insufficient data coverage. The algorithm combines the marginalized importance sampling framework with the actor-critic paradigm and addresses the challenges of handling high-dimensional observations and minimal data coverage. The paper presents sufficient theoretical analysis to demonstrate the advantages of the proposed algorithm over existing methods.
Strengths: - The paper proposes a novel algorithm that combines various techniques to address the challenges of offline RL in complex environments. The use of marginalized importance sampling and the average Bellman error regularization in the critic's objective are innovative and practical.
- The theoretical analysis provides insights into the statistical properties of the proposed algorithm, with a focus on achieving the optimal statistical rate in converging to the best policy covered in the offline dataset.
- The paper provides a comparison with existing provable offline RL algorithms, highlighting the strengths and advantages of the proposed A-Crab algorithm.
Weaknesses: - The paper provides no empirical evaluations or demonstrations of the proposed algorithm. Neither does it shed light on the design of practical algorithms.
- There remain some issues unsolved in the paper. See the questions for details.
- Some related works [1,2] use an optimization problem similar to Eq. (2) and also focus on computing a prioritization weight $w$ for the Bellman error. They should be cited and discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - There seems to be a lack of comparison between $C_{\text{bellman}}$ and $C_{l_2}$ : How to show $C_{l_2}$ is a weaker assumption than $C_{\text{bellman}}$?
- How can the A-Crab algorithm shed light on the design of practical algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: References
[1] Kumar, Aviral, Abhishek Gupta, and Sergey Levine. "Discor: Corrective feedback in reinforcement learning via distribution correction." Advances in Neural Information Processing Systems 33 (2020): 18560-18572.
[2] Liu, Xu-Hui, et al. "Regret minimization experience replay in off-policy reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 17604-17615.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses.
>The paper provides no empirical evaluations or demonstrations of the proposed algorithm. Neither does it shed light on the design of practical algorithms.
We showed empirical evaluation results to demonstrate that our A-Crab algorithm is practical. See the “global” response for details.
>There seems to be a lack of comparison between $C_{bellman}$ and $C_{\ell_2}$: How to show $C_{\ell_2}$ is a weaker assumption than $C_{bellman}$?
In Figure 1, page 3 of the paper, we provided an intuitive comparison of $C_{bellman}$ and $C_{\ell_2}$: for any fixed policy $\pi$, $C_{\ell_2}$ remains unchanged while $C_{bellman}$ grows larger as the function class $\mathcal{F}$ gets richer. When $\mathcal{F}$ is extremely expressive, $C_{bellman}$ can be as large as $C_{\infty}$ and thus can be much larger than $C_{\ell_2}$. For an arbitrary $\mathcal{F}$, it is hard to compare $C_{bellman}$ and $C_{\ell_2}$. However, they are both smaller than the previously widely-used coverage notion $C_{\ell_\infty}$.
>Some related works [1,2] use a similar optimization problem as Eq. (2) and also focus on computing an prioritization weight $w$ for Bellman error. They should be cited and discussed.
Thank you for pointing this out. We will cite these two related works and add a discussion.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I acknowledge the authors' rebuttal and I maintain my rating towards acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to provide valuable feedback and read our response. We will incorporate your suggestions in the revision. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their helpful and insightful comments. Below we first address common issues. Since all the reviewers mentioned that adding experimental results would make our theoretical results more solid and significantly enhance our paper, we compared our A-Crab algorithm to the previous ATAC algorithm on 12 Mujoco datasets (v2) from the D4RL offline RL benchmark (the same datasets used in the ATAC paper for their main results). The attached pdf file contains plots showing the performance of A-Crab and ATAC during training. Each curve is averaged over 8 random seeds (we use seeds 0-7), and each training step corresponds to a batch size of 256 ($10^6$ steps is roughly 1000 epochs). It shows that our A-Crab has higher returns and smaller variance (which indicates a more stable training procedure) than ATAC in various environments and has at least comparable results to ATAC for almost all environments.
Now we discuss some details of the implementation of A-Crab. Since it is straightforward to implement our algorithm based on ATAC, and nearly all the hyperparameters we used are the same as ATAC, we mainly emphasize the difference between A-Crab and ATAC in implementation.
Note that we only need to replace the squared Bellman error regularizer for the critic in ATAC with our proposed weighted average Bellman error regularizer. Recall the definition of our proposed weighted average Bellman error regularizer:
$\mathcal{E}\_\mathcal{D}(\pi, f) = \max\_{w \in \mathcal{W}} \left| \mathbb{E}\_\mathcal{D}[w(s,a)(f(s,a)-r-\gamma f(s',\pi))]\right|. $
Since the calculation of $\mathcal{E}\_\mathcal{D}(\pi, f)$ requires solving an optimization problem w.r.t. importance weights $w$, for computational efficiency, we choose $\mathcal{W} = [0, C\_\infty]^{\mathcal{S} \times \mathcal{A}}$, and thus
$\mathcal{E}\_{\mathcal{D}}^{\text{app}}(\pi, f) = C\_\infty \max\{ \mathbb{E}\_{\mathcal{D}}[(f(s,a)-r-\gamma f(s',\pi))\_+],\ \mathbb{E}\_{\mathcal{D}}[(r+\gamma f(s',\pi)-f(s,a))\_+] \},$
where $(\cdot)\_+ = \max\{\cdot, 0\}$ and $C\_\infty$ can be viewed as a hyperparameter. We also observed that using a combination of the squared Bellman error and our average Bellman error achieves better performance in practice. We conjecture the reason is that the squared Bellman error regularizer is computationally more efficient but statistically suboptimal, while our average Bellman error regularizer is statistically optimal but computationally less efficient, so combining the two regularizers benefits the training procedure. Specifically, in the implementation, we choose
$ \frac{1}{2}\left(2\mathcal{E}\_\mathcal{D}^{\text{app}}(\pi, f) + \beta \mathbb{E}\_\mathcal{D} [((f - \mathcal{T}^\pi f)(s,a))^2] \right)$
as the regularizer, while ATAC uses $\beta \mathbb{E}\_\mathcal{D} [((f - \mathcal{T}^\pi f)(s,a))^2]$.
All hyperparameters are the same as in ATAC, including $\beta$. For the additional hyperparameter $C\_\infty$, we do a grid search over $\{1, 2, 5, 10, 20, 50, 100, 200\}$.
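For concreteness, the combined regularizer described above can be sketched in a few lines of NumPy. This is only an illustrative stand-in for the actual ATAC-based implementation; the function name and array interface are hypothetical:

```python
import numpy as np

def acrab_regularizer(td_err, c_inf, beta):
    """Illustrative sketch of the combined critic regularizer.

    td_err: array of Bellman residuals f(s,a) - r - gamma * f(s', pi)
    c_inf:  hyperparameter C_infty bounding the importance weights
    beta:   weight on the squared Bellman error term (as in ATAC)
    """
    # Approximate weighted average Bellman error with W = [0, C_inf]^{S x A}:
    # C_inf * max( E[(td)_+], E[(-td)_+] )
    avg_bellman = c_inf * max(np.mean(np.maximum(td_err, 0.0)),
                              np.mean(np.maximum(-td_err, 0.0)))
    # Squared Bellman error regularizer used by ATAC
    sq_bellman = beta * np.mean(td_err ** 2)
    # Combination used in the implementation: (1/2)(2 E_app + beta E[td^2])
    return 0.5 * (2.0 * avg_bellman + sq_bellman)
```

In a deep RL implementation the same expressions would be computed on mini-batch tensors inside the critic loss, but the arithmetic is identical.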
Pdf: /pdf/8b0707e3ffe81988fc746a91854eb62c564ad3fa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Likelihood Ratio Confidence Sets for Sequential Decision Making | Accept (poster) | Summary: This paper proposes to use the likelihood ratio approach to provide an anytime-valid confidence sequence, which is suitable for problems with a well-specified likelihood. It discusses how to provably choose the best sequence of estimators and sheds light on connections to online convex optimization. To counteract the initially large bias of the estimators, the authors propose a reweighting scheme. They also provide a non-asymptotic analysis of the likelihood ratio confidence set size for GLMs, and perform numerical simulations.
Strengths: The idea of using the universal inference method in bandit problems is interesting.
Weaknesses: In bandit problems, the target of interest is regret, and the validity/length of the confidence set is a tool of analysis. One shortcoming of this paper is that there is no regret analysis (of the bandit problem) for the proposed method. One important question is: what is the regret of the proposed method on simple bandit problems such as the K-armed bandit (the UCB method gives $\sqrt{KT}$ regret) and the linear bandit (LinUCB gives $\sqrt{d^2 T}$ regret)? Since bandits are the focus of this paper, without a concrete regret bound (showing that it matches the best existing method), the benefit of the proposed method is not convincing.
The method proposed in this paper is similar to the OMLE method [1], which has a provable regret guarantee in many RL problems. One interesting question is to study whether the proposed algorithm has benefit over the OMLE algorithm.
[1] Optimistic MLE—A Generic Model-based Algorithm for Partially Observable Sequential Decision Making.
After reading the rebuttal:
I am convinced that a regret bound can be obtained for this method. I trust the authors will add this in the revised version.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. As you mention, the LinUCB analysis uses the radius of the confidence set to derive the overall regret bound for the bandit problem. Of course, we can do this with our confidence sets as well, and in the linear case, obtain the optimal regret immediately. Indeed, the proof is one line: starting from our confidence set diameter and multiplying by a $\sqrt{Td\log T}$ factor due to the elliptical potential lemma (Lemma 9), we obtain a $\sqrt{Td\log T} \times \texttt{confidence parameter}$ regret bound. Since our confidence sets have size $\mathcal{O}(\sqrt{d\log T})$, we match the $\sqrt{T}d$ lower bound for the Gaussian linear model case up to log-factors, i.e., we obtain $\sqrt{T}d\log T$ regret. Similar steps can be taken for generalized linear models by leveraging smoothness and strong convexity constants. We can add these theoretical results in a revised version.
As to the OMLE paper, this work also uses likelihood ratios to define the confidence set for a family of likelihoods. **However the method significantly differs from our setup.** *They do not use the online prediction game as a comparator for the likelihood ratio in the denominator.* Instead, they use a running MLE estimator as a comparator – not a sequence of estimators viewed as a game. This way they have to resort to the same type of analysis as prior works, say for sub-Gaussian likelihoods when calculating the confidence parameter $\beta$, and need to adopt the worst-case perspective.
In our work, we do not depend on these worst-case analyses, since the confidence set radius is defined *implicitly*, by way of the real-time performance of the online sub-learner, and not its worst-case performance guarantee. This is a **crucial distinction** between our work and other approaches, including OMLE and online-to-confidence set conversions. Additionally, our method can be applied to any likelihood without the need to perform a worst-case/theoretical analysis to get the right confidence parameter *at all*.
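To illustrate what "implicitly defined" means here, consider a hedged one-dimensional Gaussian sketch (our own simplification, not the paper's implementation; the function name and interface are hypothetical): membership of a candidate $\theta$ in the LR confidence set reduces to comparing the accumulated log-likelihood ratio, whose numerator uses the online learner's running estimates, against $\log(1/\delta)$:

```python
import numpy as np

def lr_confidence_membership(x, theta_hats, theta, sigma=1.0, delta=0.1):
    """Sketch of a sequential likelihood-ratio confidence set test
    for a Gaussian mean. theta_hats[s] is the online learner's estimate
    formed from observations x[:s] only (e.g. a regularized MLE)."""
    # Gaussian log-likelihood of each observation under a candidate mean
    def loglik(mean):
        return -0.5 * ((x - mean) ** 2) / sigma ** 2
    # theta is in the set iff
    # sum_s [ log p_{theta_hat_s}(x_s) - log p_theta(x_s) ] <= log(1/delta)
    stat = np.sum(loglik(theta_hats) - loglik(theta))
    return stat <= np.log(1.0 / delta)
```

The radius never appears explicitly: the set shrinks exactly as fast as the sub-learner's predictions improve on the observed data, rather than at a rate fixed by a worst-case bound.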
We invite you to reconsider your score, as our work brings significant theoretical and practical contributions on top of the paper you mention, and we are convinced that the similarities are only superficial.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification of the authors. I am satisfied that the regret bound can be obtained for the linear bandit, and I trust that the authors will add it to the revised version. I will raise my score. | Summary: This paper examines the confidence set of an estimator, defined by a likelihood ratio. The contributions stated within the work, alongside my corresponding queries, are outlined as follows:
* For generalized linear models, we theoretically analyze the geometry of the LR confidence sets under mild assumptions. We show their geometry is dictated by Bregman divergences of exponential families (Chowdhury et al., 2022).
* We show that the size of the confidence set is dictated by an online prediction game. The size of these sets depends on a sequence of estimators $\{\theta_s\}_{s=1}^t$ that one uses to estimate the unknown parameter $\theta_*$. We discuss how to pick the estimator sequence in order to yield a provably small radius of the sets, by using the Follow-the-Regularized-Leader algorithm, which implements a regularized maximum-likelihood estimator. We prove that the radius of the confidence sets is nearly-worst-case optimal, and accordingly they yield nearly-worst-case regret bounds when used in generalized linear bandit applications. However, due to their data-dependent nature, they can be much tighter than this theory suggests.
* We analyze limitations of classical (un-weighted) LR sets when the underlying conditional observation model is not identifiable. In this case, the resulting (inevitable) estimation bias unnecessarily increases the size of the confidence sets. To mitigate this, we propose an adaptive reweighting scheme that decreases the effect of uninformed early bias of the estimator sequence on the size of the sets downstream. The reweighting does not affect the coverage guarantees of our sets, and utilizes an elegant connection to (robust) powered likelihoods (Wasserman et al., 2020).
* Thanks to the adaptive reweighting scheme, our sets are very practical as we showcase experimentally. We demonstrate that our method works well with exponential and non-exponential family likelihoods, and in parametric as well as in kernelized settings. We attribute their practical benefits to the fact that they do not depend on (possibly loose) worst-case parameters.
Strengths: The research question posed in this paper is notably intriguing and holds significant potential for various applications. Furthermore, the innovative approach adopted by the authors represents a substantial contribution to the field.
Weaknesses: While the paper offers several valuable insights, I noticed that the overall integration of its contributions could be improved. For instance, claimed contributions 1 and 3, which could potentially have been effectively merged, were instead treated separately. This compromises the contribution significantly from my perspective. Furthermore, I have concerns about the clarity and accuracy of some claims. For more specific observations, please refer to the "Questions" section. I believe addressing these issues would greatly enhance the coherence and validity of the work.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: * The relationship between LR confidence set geometry and Bregman divergences is briefly discussed on page 6. It would be beneficial to have this connection elaborated upon more thoroughly in the main body of the text.
* Section 2 provides a clear explanation on how to select a sequence of $\{ \theta_s \}_{s=1}^t$ to minimize the radius of the confidence set. However, this connection becomes less clear in Section 3.
Moreover, the paper claims that the radius of the confidence set is nearly-worst-case optimal. Could you please explain further why this is the case? While it is clear that FTRL can lead to nearly-worst-case optimal regret for the online optimization problem, $\mathcal{R}_t$ defined in $(3)$ appears not to be identical to, but smaller than, the regret in the online optimization problem. If $\mathcal{R}_t$ is significantly smaller than the regret of the FTRL algorithm, might the confidence set not be suboptimal?
* The introduction of a re-weighting scheme is intriguing. However, given that the re-weight $w_t$ is difficult to compute, could you provide any theoretical performance guarantees? Additionally, in Theorem 2, the bias estimate depends on $\|\theta_*\|_2^2$. Could this potentially conflict with our objective of estimating the confidence set of $\theta_*$? Furthermore, in Section 3, we ascertain the geometry of the linear models assuming $w_t = 1$. Is it possible to perform a similar analysis incorporating the re-weighting scheme? Otherwise, the value of the re-weighting scheme might be discounted if we cannot incorporate it into the analysis of simple models.
* A query arises from Theorem 3, where $\theta \in \mathcal{C}_t$ satisfies a certain inequality. I am curious about the application of this inequality in estimating the confidence set of $\theta_*$, given that the Bregman distance involved depends on $\theta_*$, which is unknown. Is this result primarily of theoretical interest, or is it of practical use in this context?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback on improving the clarity and coherence of our paper. We very much welcome the fact you see our contribution as substantial and we will use your insights to improve the exposition. The reason we separated the exposition of our first and third contribution is because we see the theory and methodology as separate contributions. The theory for GLMs and the proposition of a reweighting scheme can be considered adjacently, as reweighting is a more general approach that is not limited to GLMs.
We now answer your individual questions in order:
**Q:** *Bregman divergence and confidence sets* **R:** The relationship between Bregman divergence and confidence set is indeed intriguing and we are planning to add more information in a revised version. The main reason it arises is in the study of exponential family likelihoods that generalize the Gaussian case. In the Gaussian case, we can measure the confidence set in an ellipsoid norm, and in GLMs, Bregman divergences are the appropriate generalization. We plan to add more 2d confidence sets for different exponential families to provide intuition in the revised version.
**Q:** Selecting online learners **R:** Any low regret online learning algorithm can be chosen to select $\\{\hat{\theta}_t\\}_t$ as you can see in Eq. (9). We make the FTRL choice because of its well-specified meaning as a regularized maximum likelihood estimator, but considering algorithms based on exponential weights or other types of forecasters is possible too.
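As a concrete sketch under simplifying assumptions of ours (a Gaussian linear model with a squared-norm regularizer, where FTRL on the log-loss reduces to a sequence of ridge-regression estimates; the function name is illustrative):

```python
import numpy as np

def ftrl_estimates(X, y, lam=1.0):
    """FTRL with squared-norm regularization on Gaussian log-loss,
    i.e. a sequence of ridge estimates, each using only past data."""
    d = X.shape[1]
    thetas = []
    A = lam * np.eye(d)   # regularized Gram matrix
    b = np.zeros(d)
    for t in range(len(y)):
        thetas.append(np.linalg.solve(A, b))  # estimate before round t
        A += np.outer(X[t], X[t])
        b += y[t] * X[t]
    return np.array(thetas)
```

Any other low-regret forecaster could be slotted in the same way; only the sequence of predictions matters for the resulting confidence set.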
**Q:** *Regret notions* **R:** The two notions of regret are indeed different, but lower bounds on linear regression show that the difference between the two can be at most logarithmic. Furthermore, a good way to convince ourselves that our confidence sets are nearly optimal at least in the Gaussian case is to look at the downstream linear bandit regret. It follows easily from our confidence set size and specification that we achieve optimal regret in a linear bandit application. Therefore, the set being any smaller would imply that we broke through a lower bound for linear bandits, an impossibility.
Additionally, we would like to point out an *important distinction* between our work and other types of online-to-confidence set conversions. Here, we do not need a regret bound for the online learner to implement our algorithm. Our confidence set will simply *adapt* to the performance of the learner we use. It is therefore not crucial to understand the sharpest possible rates of FTRL on this slightly easier problem. Our theory is mostly derived for validation, but is not necessary for implementation.
**Q:** *The introduction of a re-weighting scheme is intriguing.* **R:** The reweighting we propose can be difficult to compute in general, but does not have to be. For example, for GLMs, or any likelihood satisfying the assumptions of Thm. 2, the calculation is rather straightforward. The reweighting scheme is a very important addition for practical purposes, see the gray arrow in Figure 2a). This scheme is a crucial ingredient in making our method competitive with heuristics.
**Q:** *“In Theorem 2, the bias estimate depends on* $||\theta_*||$ *conflict with our objective to estimate the confidence set of* $\theta_*$?*”* **R:** In Theorem 2, the bias does indeed depend on the value of $\theta_*$, but for our purposes an upper bound on its norm is sufficient. Namely, we merely require having $||\theta_*||_2^2 \leq B^2$ throughout the paper (Assumption 1), which is necessary knowledge for competing methods as well. Making the bound loose for the purposes of balancing bias just hampers the sharpness of the confidence sets and not the coverage guarantees.
**Q:** *Incorporating the reweighting* **R:** Incorporating the reweighting scheme in Section 3 is a challenging problem, and we were unable to do so. While the geometry of the set is influenced by the weighting (an effect we do not analyze), the size of the set is analyzed with the weighting scheme in place, which gives us the intuition. The weighting scheme is in place to counteract undesirable effects of bias, which inflates the regret of the online learner, so it arguably makes sense to analyze the effects of weighting on the online learning regret. We do agree that, in principle, it would be very interesting to understand the geometry at an even more precise level. For purely theoretical purposes, the weighting can be discarded, since up to constant factors the resulting set is equivalent with and without it. However, reweighting observations is of tremendous practical importance, in particular in non-parametric settings, so we are convinced of the scheme's value nonetheless. Thanks to the weighting scheme, our sets should be the go-to practical method in cases where the likelihood is well-specified.
**Q:** *The application of Theorem 3, given that the Bregman distance depends on $\theta_*$* **R:** First and foremost, we wish to emphasize that Theorem 3 is not needed for the construction of the sets in any way. The theorem is merely there for theoretical understanding. The reason the Bregman divergence depends on $\theta_*$ is that the likelihood landscape (geometry) is influenced by the true data-generating process beyond the Gaussian case. In case one desires global bounds, a worst-case perspective could be taken.
---
Rebuttal Comment 1.1:
Title: Discussion period ends soon
Comment: Dear reviewer, the discussion window is closing soon. We believe we have addressed your questions and concerns and are happy to engage in further discussion. Please reconsider your score.
---
Rebuttal Comment 1.2:
Comment: I concur with the authors regarding their well-articulated responses, which addressed my queries. As a result, I am inclined to raise my review score. Nonetheless, I share the sentiment expressed by some of my fellow reviewers that the manuscript would benefit greatly from a thorough revision for readability. Enhanced clarity and structure could not only facilitate better comprehension but also underscore the significance of the paper's contributions. | Summary: The paper proposes to use likelihood ratios to construct confidence sequences that facilitate the downstream online decision making under uncertainty. The weighting and corresponding bias estimation are proposed to avoid regret blow-up in low-noise setting. The paper offers theoretical insights for the bias estimation, the geometry of the confidence set for generalized linear model (GLM), and regret in the online optimization in GLM. The paper offers the corresponding empirical results on bandits problems.
Strengths: 1. The paper incorporates the likelihood ratios into online decision-making. The LR confidence set reduces the requirement of conventional concentration results and allows existing algorithms to solve more generalized problems as long as the likelihood is well specified. There is room for studying its application in various sequential decision-making tasks.
2. The adaptive weighting scheme is well justified with practical bias estimation.
3. The geometry analysis could be helpful to justify the usage of the LR confidence set rather than alternatives.
Weaknesses: 1. In general the paper is clearly organized. However, the scattered yet closely related notations in sections 2 and 3 make it challenging to follow. I would recommend further highlighting important definitions and formulas or trying to concentrate on them to allow easier revisits.
2. The experimental results shown in Figure 2 lack statistical significance. The fact that the figure shows median values rather than means suggests a stability problem with the experimental results.
3. There is a mismatch between the theoretical results in Section 3, which concern the online optimization, and the empirical study in Section 4, which focuses mostly on bandit problems, as only the downstream cumulative regrets are shown.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Minor issue: the product in the denominator of Eq. (1) seems to be incorrectly indexed; could the authors double-check?
2. In the bandit problems, how are the confidence levels specified?
3. Could the author briefly comment on the potential impact of misspecified likelihood (robustness to misspecification when there is a uniform error bound for the misspecification)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussed in the comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, we will try to polish notation and improve the readability of the manuscript. Now to specific concerns and questions:
**C:** *"The experiment results shown in Figure 2 lack statistical significance."* **R:** We provide the same plot with the mean in the attached document. However, we disagree that plotting the median shows instability; what shows instability is the large spread in general. Plotting quantiles together with the mean is, to the best of our knowledge, non-standard, hence the median was chosen; we find results reporting the mean more questionable. Additionally, we would like to point out that our method is stable, and instability usually arises with heuristics such as MK2021 or NR2020, which suffer from numerical issues.
**C:** *“Mismatch”* **R:** At first sight, there might seem to be a mismatch between theory and experiments; however, downstream bandit regret and the size of the confidence set are intimately related. For the UCB algorithm (the one we use in the experiments), the better the confidence sets, the better the algorithm's bound and its practical performance. We chose to demonstrate our applicability in this manner since, at the end of the day, our sets' worth is measured by their downstream applicability. This being said, we provide a calibration plot in the attached documents and will add it to the paper. The plot shows that the heuristic methods lose coverage while the other provable methods are far too conservative; the likelihood ratio sets lie between these two extremes. The calibration plots come with caveats, however, since they depend on the true parameter $\theta_*$ and on how the data is collected. Since we are not interested in the iid setting, we need to choose a way to collect data, say via a bandit algorithm or in some active learning setting. In these scenarios, calibration plots are rarely what people are interested in, and are therefore a rather non-standard method of evaluation. We show our results for two different values of $\theta_*$.
We understand that the different regret notions (bandit application regret vs. regret of the online learner) could lead to confusion, and we will clarify this in a revised version. In terms of theory, for that reason as well, we were reluctant to state explicit regret bounds of the bandit algorithm. Note however, that they follow immediately by an application of the elliptical potential lemma. For instance, for linear bandits, we directly obtain the optimal $\tilde{\mathcal{O}}(d\sqrt{T})$ regret rate.
**Q1:**: Thank you for pointing this out. This is indeed a typo, and the index should be $s$ instead of $i$.
**Q2:** The confidence levels are done exactly as proposed in the paper and maintain provable coverage up to the chosen $\delta$. We chose $\delta = 0.1$. We compare the exact methods from the literature as well as to empirical heuristics popular in the field that people use in deployed applications. The reason our sets need no tuning is that the confidence radius is implicit when compared to ellipsoid confidence sets from the literature with fixed (worst-case dependent) radius. Since we practically use no inequalities to derive our confidence sets, they are arguably closer to what is practically achievable with provable coverage, and inherit forms of instance-dependency.
**Q3:** We do not discuss robustness in the paper, and a proper treatment requires a follow-up. In general, as noted in the work on universal inference (Wasserman et al., 2020), exponentiating a likelihood with small powers can improve robustness, but this is only a heuristic. In the presence of misspecification, it is not possible to guarantee coverage with our approach. In contrast, the size analysis mostly depends on tail properties of the random variables involved, and it is believable that this part can be wholly extended accordingly.
We spent some time thinking about provably robust versions of our approach, but did not succeed. In case robust variants are necessary, confidence sets designed for sub-families may be more appropriate, e.g. [1].
*[1] Yadkori, Pal,Szepesvari (2012) Improved algorithms for linear stochastic bandits*
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response. I believe my questions have been well answered, and incorporating the corresponding discussion into the paper would enhance the overall presentation. I have carefully considered the responses to my concerns about the mismatch between the theoretical results and the later empirical study on regrets of bandit problems, as similar questions have also been raised by other reviewers. While I believe the answers provided could alleviate the issue, I still hold the belief that establishing a more direct connection to justify the benefits of the proposed method in the bandit problem is crucial for the paper's coherence. In general, I value the theoretical contribution of the paper, but I do have concerns about its completeness and presentation, which could potentially be better addressed in the revised version. Therefore, I will maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging in the discussion. We would like to respectfully disagree with your conclusions:
- We provided the arguably most useful demonstration of improvement in bandit problems by showing that empirically the regret decreases when using our method. This is almost always the relevant and targeted metric in the field.
- As we discussed, the mismatch the reviewer is pointing out is inherent for adaptive sequences. One first needs an algorithm to generate the adaptive sequence which makes the generation of calibration plots application dependent. We believe they are not the optimal way to showcase our method accordingly, which is why we refrained from doing so in the first place.
- We provided calibration plots in response to the review nonetheless. One last possible addition would be to directly plot the radius of the confidence sets and compare them to a baseline ellipsoidal set. However, this is only satisfactory in the Gaussian case as it is the only case where our sets are actually ellipses -- essentially our banner plot. We are interested to know if the reviewer would be satisfied with this.
- We provided an explanation that a regret bound follows automatically from our confidence sets. However, the main contribution is not improving bandit rates. We do not improve any known rates. We match the existing rates via our universal setup.
We invite the reviewer to look at this paper more holistically instead of looking purely for theoretical regret bounds. This paper provides a novel and unique way of looking at the problem of adaptive confidence sets in machine learning applications. We hope that matching regret rates in known settings will encourage practitioners to use our methods in different and novel settings and enjoy the empirical benefit of this provable method. | Summary: This paper proposes a new construction of confidence sets for a parametric setting, where the likelihood of the noise process is explicitly given.
The proposed method is based on a weighted variant of the sequential likelihood ratio (LR) statistic, which was proposed in universal inference (Wasserman et al., 2020).
The confidence set is a function of the choice of estimators that construct the sequential LR statistics, and the paper proposes a specific method to construct such estimators by the means of Follow-the-Regularized-Leader (FTRL) algorithm in online learning, by viewing the confidence parameter minimization as a regret minimization problem.
Further, the paper proposes a new adaptive reweighting scheme, which uses a powered LR statistic to make the resulting confidence set more “robust”. The proposed reweighting scheme is based on the bias of the used estimators. While the bias may not be directly estimated, they provide a computable upper bound on the bias for a certain form of distributions.
For generalized linear models, the size of the confidence set is analyzed under regularity assumptions. To make the bound fully concrete, they also provide a high-probability regret guarantee for the FTRL estimators used (for GLMs).
The effectiveness of the proposed method is demonstrated with linear and kernelized bandit experiments.
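As background for the construction summarized above, here is a minimal sketch (our own illustration, not from the submission) of the *split-sample* universal-inference set of Wasserman et al. (2020) for a Gaussian mean with known unit variance; the paper's weighted sequential variant replaces the single split with an online sequence of estimators.

```python
import numpy as np

def universal_ci_gaussian_mean(x, alpha=0.05):
    """Split-LR universal inference: keep each theta whose likelihood ratio,
    against an estimator fit on the *other* half of the data, is at most 1/alpha."""
    n = len(x)
    x0, x1 = x[: n // 2], x[n // 2:]
    theta_hat = x1.mean()                      # estimator from the held-out split
    grid = np.linspace(x.mean() - 3.0, x.mean() + 3.0, 2001)

    def loglik(theta):                         # Gaussian log-likelihood, unit variance
        return -0.5 * np.sum((x0 - theta) ** 2)

    log_lr = loglik(theta_hat) - np.array([loglik(t) for t in grid])
    return grid[log_lr <= np.log(1.0 / alpha)]

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=200)
ci = universal_ci_gaussian_mean(x)
# The set always contains the maximizer of the split likelihood (the mean of x0).
assert ci.min() <= x[:100].mean() <= ci.max()
```

The coverage guarantee of this construction holds by a Markov-inequality argument on the LR, with no regularity conditions on the model.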
Strengths: The paper is very well written. The proposed framework and analysis are a nice and careful combination of several different techniques (universal inference, FTRL, Bregman information gain, …) in statistics and online learning.
I think this could be indeed useful in bandit problems but it can be also applicable whenever parametric models are available.
Weaknesses: I do not find any crucial weakness in this paper, aside from a few questions detailed below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I find the sentence in line 107 a bit confusing. It says “If the confidence parameter goes to zero, only $\theta_\star$ is in the set.” This is not the case, as the LHS (of the inequality inside the set notation) in the equation after line 103 is 1 if $\theta=\theta_\star$, and so $\theta_\star$ will be excluded if the RHS goes to zero. What should be a proper explanation?
- I cannot make much sense of the specific reweighting scheme in line 162 right away, though I can understand the intuition thanks to the Gaussian example. Can you elaborate on this choice?
A possible typo:
- Line 153: depending only $\theta_\star$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Indeed you are right; this is a typo on our side. What we meant to say is that if the $\log$ of the expression goes to zero (or the expression goes to 1), then only the true parameter $\theta_\star$ is included in the set. We bear in mind here that the prediction game is played on the log-likelihood loss.
Regarding the reweighting: In general, there are two sources of error, stochastic (sometimes called "aleatoric" uncertainty) and deterministic (sometimes called "epistemic" uncertainty) – we refer to it as bias. The likelihood function properly measures only stochastic noise. In a certain sense, our problem is not well posed in the absence of sufficiently rich data. For example, it is impossible to distinguish between (linear) models that differ only in the subspace orthogonal to the data. This causes the likelihood ratio to unnecessarily blow up, as we show in theory and in examples.
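A tiny numpy illustration (our own, not from the paper) of the indistinguishability point: two linear models that differ only in a direction orthogonal to the observed data produce identical predictions, hence identical likelihoods on that data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data confined to a 2-D subspace of R^3: the third coordinate is never excited.
X = rng.normal(size=(50, 3))
X[:, 2] = 0.0

theta_a = np.array([1.0, -2.0, 0.0])
theta_b = np.array([1.0, -2.0, 5.0])  # differs only orthogonally to the data

# Identical predictions => identical likelihoods: no amount of this data
# can distinguish the two models, and the LR blows up needlessly elsewhere.
assert np.allclose(X @ theta_a, X @ theta_b)
assert not np.allclose(theta_a, theta_b)
```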
Let us now give some intuition behind the choice of our reweighting scheme. A naive solution would be to gather data and set $w_i = 0$ until the bias vanishes -- i.e., until we observe data that span the whole space. However, this might never happen in non-parametric settings like RKHSs, and it is therefore not viable. Additionally, even when the bias is non-zero, there is information that can be leveraged by including it in the likelihood ratio sequence. The question is how much information, and how to include it. The reasoning we adopt is motivated by understanding how much the deterministic error (bias) influences the likelihood ratio compared to the stochastic component. If these two parts are, say, equal in magnitude, then we would choose $w_i = 1/2$ to reflect that only half of the contribution is the stochastic part relevant to the likelihood formulation. This simple example generalizes the intuition from the Gaussian case to GLMs, and helps us reduce the bias of the online optimizer that we analyze in Sec. 3.2.
Given that we are interested in practical methods, and not just theory, we were driven to propose a widely applicable and performant scheme. Our reweighting scheme is just that: motivated by balancing out terms in the regret of the online learner (by effectively changing its objective), it strikes a balance between bias and variance.
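As a schematic of the balancing intuition above (a hypothetical formula of our own, not the paper's exact scheme), one could weight each round by the fraction of the likelihood-ratio contribution attributable to stochastic noise rather than estimator bias:

```python
import numpy as np

def reweighting(stochastic_mag, bias_bound):
    """Hypothetical weights in [0, 1]: the fraction of the likelihood-ratio
    contribution attributed to stochastic noise rather than estimator bias."""
    s = np.asarray(stochastic_mag, dtype=float)
    b = np.asarray(bias_bound, dtype=float)
    return s / (s + b)

# Equal stochastic and bias magnitudes -> weight 1/2, as in the rebuttal's example.
assert np.isclose(reweighting(1.0, 1.0), 0.5)
# Vanishing bias -> weight 1, recovering the standard unweighted likelihood ratio.
assert np.isclose(reweighting(1.0, 0.0), 1.0)
```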
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and I read other reviews as well. I think that the authors can incorporate the changes in the revision to make the contributions clearer. Especially the regret bound for linear bandits (as pointed out by Reviewer XFVf) and a better intuitive explanation on the reweighting scheme will strengthen the paper. Hence I will keep the score. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and effort spent. We are pleased that most reviewers see the benefits of our work. We hope that our individual responses clarify any misunderstandings and that the reviewers will consider raising their scores if they see fit.
We really believe this method is of great importance to fields utilizing adaptive inference, and hope that we will get the opportunity to present it to the community. We attach some additional plots for reviewer DGax.
Pdf: /pdf/cfc6592b1114b19730b2e84454127bb1570d6e77.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement. | Accept (spotlight) | Summary: By utilizing theoretical tools from quantum physics, the authors propose that a locally connected neural network can accurately predict data if and only if the data distribution exhibits low quantum entanglement under certain feature partitions. Based on this result, they develop a preprocessing method to enhance the compatibility between data distributions and locally connected neural networks. Experimental evaluations using various datasets and models validate their findings.
Strengths: It investigates what makes a data distribution suitable for machine learning from the theories of tensor network and quantum entanglement, which provides a new perspective and tools from other fields to explore the learning theory.
Weaknesses: The numerical experiments are insufficient: the method is applied only to randomly arranged data rather than the original data, which would be more convincing in supporting the argument.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Corollary 1 (line 223), does it still hold under other partitions, and what is the motivation for choosing the "canonical partition"?
2. It demonstrates that the locally connected neural network is capable of accurate prediction over a distribution if and only if it admits low entanglements. In the experiments, the entanglement of the data is increased by randomly swapping features, and a comparison is provided between a random permutation of features and the proposed method. Why is the proposed method, in the numerics (Tables 1, 2), applied to randomly arranged data instead of the original data? After we manually randomly rearrange the data features, the data might be meaningless and thus expected to yield bad performance. Is this appropriate to support the argument? If I am missing something, please correct me.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We respond to your comments and questions below. If our response is satisfactory, we would greatly appreciate it if you would consider raising your score.
> *The numerical experiments are insufficient, it is only applied to randomly arranged data instead of original data which might be more convincing to support the argument.*
> *It demonstrates that the locally connected neural network is capable of accurate prediction over distribution if only if it admits low entanglements. In the experiments, it increases the entanglement of data via randomly swapping the feature and provides the comparison between a random permutation of features and proposed methods. Why the proposed method, in the numerics (Table 1, 2), is applied to random arrangement data instead of original data? After we manually randomly arrange the data feature, the data might be meaningless which is expected to get bad performance. Thus, is it appropriate to support the argument? If I miss some thing, please correct me.*
Our experimentation (Figures 3 and 8, and Tables 1 to 5) establishes the following:
* Audio and image datasets, on which locally connected neural networks achieve high prediction accuracies, satisfy the necessary and sufficient condition we derive — low entanglement under canonical partitions.
* Randomly permuting features in the above datasets leads the condition to be violated, i.e. the entanglement under canonical partitions to be higher, and accordingly prediction accuracies of locally connected neural networks deteriorate.
* Applying our preprocessing algorithm (which is designed to ensure that the condition is met, i.e. that entanglement under canonical partitions is low) to the permuted datasets recovers a significant portion of the performance lost.
* Applying our preprocessing algorithm to tabular datasets, on which the prediction accuracies of locally connected neural networks are known to be subpar, leads to significant gains in performance.
It is important to note that in tabular data, features are by definition arranged arbitrarily, so randomly permuting them (as we did) prior to applying a non-permutation invariant learning architecture (e.g. a locally connected neural network) is the correct thing to do (otherwise, results may be skewed by implicit structure in the default feature arrangement, which is not supposed to exist).
Note also that randomly permuting the features of a dataset does not render it meaningless. Indeed, learning architectures that are permutation invariant (for example fully connected neural networks) are completely unaffected by such permutation, and their prediction accuracies are often high, and in particular far better than chance.
> *In corollary 1(line 223), does it still work under other partitions? and what's the motivation for choosing "canonical partition"?*
The motivation for defining canonical partitions (Definition 2) is that, per our analysis (Corollary 1 in particular), low entanglement under these partitions characterizes the ability of a locally connected neural network to achieve high prediction accuracy. If you are asking about the intuition behind canonical partitions, note that, in general, the entanglement under a partition $( \mathcal{K} , \mathcal{J} )$ characterizes the dependence between $\mathcal{K}$ and $\mathcal{J}$. This, together with the fact that in canonical partitions $\mathcal{K}$ consists of contiguous indices, imply that low entanglement under canonical partitions can be viewed as a formalization of locality.
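The contiguous-block structure can be made concrete with a short sketch (our own illustrative reading of Definition 2, assuming $N = 2^L$ features and blocks aligned to a perfect binary tree over the indices; the precise definition is in the paper):

```python
def canonical_partitions(N):
    """Sketch of Definition 2 for N = 2^L features: each canonical partition is a
    contiguous block K of length 2^l, aligned to the binary tree over [N],
    together with its complement J = [N] \\ K (implicit)."""
    assert N & (N - 1) == 0 and N > 0, "N must be a power of two"
    blocks = []
    size = 1
    while size <= N:
        for start in range(0, N, size):
            blocks.append(tuple(range(start, start + size)))
        size *= 2
    return blocks

# For N = 4 the contiguous blocks are {0},{1},{2},{3},{0,1},{2,3},{0,1,2,3}.
assert len(canonical_partitions(4)) == 7
assert (0, 1) in canonical_partitions(4)
```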
With regards to extension of Corollary 1 to other partitions:
It is oftentimes the case that entanglement is low under canonical partitions, while being high under other partitions. Accordingly, Corollary 1 will not hold true if one replaces the canonical partitions with an arbitrary set of partitions. Nonetheless, if the architecture of the locally connected neural network is modified, then in order for Corollary 1 to hold, the definition of canonical partitions (Definition 2) needs to be modified as well. In that sense, Corollary 1 can be extended to account for other partitions.
We hope the above discussion adequately addresses your questions (please let us know if not and we will happily elaborate). It is planned to be included in the camera-ready version. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response that helps me better understand, I have no further questions. | Summary: The paper investigates criterion that make data distributions suitable for being accurately fit by neural networks using tools from tensor networks. Specifically, it shows that some locally connected neural networks (with polynomial activations) fit a data distribution accurately if and only if the quantum entanglement of the data tensor is sufficiently low under all canonical partitions of the axes. The argument uses the previously established fact that a certain tensor network (locally connected tensor network) is equivalent to a locally connected neural network. The main theoretical contribution is to show that low quantum entanglement across canonical partitions is a necessary and sufficient condition for an arbitrary tensor $T$ to be approximated by the locally connected tensor network $\mathcal{W}_\mathrm{TN}$. The problem of a locally connected neural network fitting a distribution is then identified with the corresponding data tensor being well-approximated by $\mathcal{W}_\mathrm{TN}$, to establish the main claim.
Numerical simulations are performed to show that under random swaps of the features of standard datasets (which are known to be well fit by local neural networks), more swaps result in more entanglement, with correspondingly lower predictive performance. The authors also suggest a heuristic method for improving the compatibility of data with a local neural network architecture, by searching for permutations of the axes that result in low entanglement.
Strengths: The main result is compelling, providing a clean and computationally efficient measure that may be used to quantify the suitability of a data distribution to certain local neural network architectures. Despite the minor limitations (such as polynomial activations) placed on the corresponding network, this still seems to be significant progress on an important and difficult question. The potential of systematically searching for feature permutations to improve the predictive power of locally connected networks is also an intriguing possibility.
The theoretical results seem sound and correct to the best of my knowledge, although I did not check every detail.
The numerical results do provide evidence of an increase in predictive power from searching for low entanglement feature permutations.
Weaknesses: 1. The presentation of the paper is rather dense in some places, which is understandable due to the breadth of material covered, but makes the exposition hard to follow in places. It would be helpful to have some intuition about how the notions defined in the paper correspond to standard ones in deep learning. For example, the definition of canonical partitions seems unmotivated at first and it may be helpful to the reader if the connection to locality is highlighted.
As a minor comment, there are a few terms that are defined once and then used in later statements without context: for example, it would be helpful to mention in Definition 2 that $N = 2^L$, and to have a reminder about $R$ near Corollary 1.
2. Section 5.2.2 only presents numerical results for the proposed preprocessing pipeline against a baseline of randomly permuted features. It is not clear to me whether these improvements manifest for real data, i.e. data already suited to a locally connected network, with the goal being to improve predictive accuracy. It is unlikely in practice that a preprocessing step would be used to improve the performance of a model with very low predictive accuracy.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Does the suggested pipeline lead to any accuracy improvements for natural datasets (without randomly permuted features) and local networks that already achieve good performance on them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback, and in particular for describing our contributions as “significant progress on an important and difficult question”! We respond to your comments and questions below.
> *The presentation of the paper is rather dense in some places, which is understandable due to the breadth of material covered, but makes the exposition hard to follow in places. It would be helpful to have some intuition about how the notions defined in the paper correspond to standard ones in deep learning. For example, the definition of canonical partitions seems unmotivated at first and it may be helpful to the reader if the connection to locality is highlighted.*
Thank you for raising this point! In line with your suggestion, we plan to use additional space in the camera-ready version for discussions providing more intuition behind the notions we use and their relation to deep learning. Below is an initial discussion about canonical partitions.
The motivation for defining canonical partitions (Definition 2) is that, per Theorem 1, low entanglement under these partitions characterizes the ability of the locally connected tensor network to fit a given tensor. In general, the entanglement under a partition $( \mathcal{K} , \mathcal{J} )$ characterizes the dependence between $\mathcal{K}$ and $\mathcal{J}$. This, together with the fact that in canonical partitions $\mathcal{K}$ consists of contiguous indices, imply that low entanglement under canonical partitions can indeed be viewed as a formalization of locality.
> *As a minor comment, there are a few terms that are defined once and then used in later statements without context: for example, it would be helpful to mention in Definition 2 that
\(N=2^{L}\), and to have a reminder about R near Corollary 1.*
Thank you for the suggestions! They will be incorporated into the text.
> *Section 5.2.2 only presents numerical results for the proposed preprocessing pipeline against a baseline of randomly permuted features. It is not clear to me whether these improvements manifest for real data ie. data already suited to a locally connected network, with the goal being to improve predictive accuracy. It is unlikely in practice that a preprocessing step would be used to improve the performance of a model with very low predictive accuracy.*
> *Does the suggested pipeline lead to any accuracy improvements for natural datasets (without randomly permuted features) and local networks that already achieve good performance on them?*
The preprocessing algorithm we propose (in Section 5.1) is not designed for data on which locally connected neural networks already achieve high prediction accuracies. Indeed, we have shown (in Figures 3 and 8) that on such data, namely on audio and image datasets, the criterion sought after by the algorithm — low entanglement under canonical partitions — is satisfied to begin with. In contrast, on tabular data (which may also be real data) the prediction accuracies of locally connected neural networks are known to be subpar (see lines 328-329 of the paper), and our preprocessing algorithm may lead to significant improvements. We demonstrated this in Section 5.2.2 with standard datasets. Note that in tabular data features are by definition arranged arbitrarily, so randomly permuting them (as we did) prior to applying a non-permutation invariant learning architecture (e.g. a locally connected neural network) is the correct thing to do (otherwise, results may be skewed by implicit structure in the default feature arrangement, which is not supposed to exist). | Summary: This paper focuses on the problem that which data distribution is more learnable by locally-connected neural networks such as CNN, RNN, and local-attention. The paper introduces the notation of quantum entanglement, theoretically proves and empirically verifies that the network can achieve accurate predictions if and only if the data distribution exhibits low quantum entanglement under canonical partitions of features.
Strengths: 1. This paper introduces the notion of quantum entanglement from physics, which provides new insight into the question of which data distributions are more learnable by a specific family of neural networks.
2. The paper provides a comprehensive theoretical analysis of the above problem.
3. Overall speaking, the paper is easy to read and follow.
Weaknesses: 1. The notion of entanglement is not very intuitive. Although the authors formally introduce the mathematical definition of quantum entanglement, it is still not clear what characteristics a data distribution would exhibit if the distribution has low entanglement. I suggest the authors illustrate this low-entanglement property on toy datasets and/or textual data. This intuitive illustration might provide valuable insights to the machine learning community.
2. The authors are encouraged to conduct comprehensive experiments on more architectures and datasets. For example, for image classification, it is suggested to test on classical CNN architectures such as AlexNet, VGG, and ResNet. Besides, textual data should also be investigated.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Theorem 1, the authors derive an upper bound for quantum entanglement. However, it is not clear whether this upper bound truly ensures *low* entanglement. There are mainly two terms in the bound, $\ln(R)$ and $\ln(D_\mathcal{K})$. The magnitude of the first term is already addressed by the authors, but it is not clear whether the second term can be extremely large. The authors are encouraged to have a discussion on this issue and show that this upper bound is indeed much smaller than the entanglement of a random tensor (as a baseline).
2. How will the network architecture affect the main theoretical results in Section 3.1, 4.1, and 4.2? For example, I wonder if the results still hold if the neural network has skip connections and batch normalization operations.
3. What does feature (re)arrangement mean in Section 5.1? Let us take textual data as an example and consider a sentence with N words. Does a feature rearrangement mean that the words (as well as their embeddings) in the sentence are randomly permuted? The authors are encouraged to make this clearer, preferably with examples.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback, and specifically for noting the soundness of our theory and the clarity of our presentation. We respond to your comments and questions below. If our response is satisfactory, we would greatly appreciate it if you would consider raising your score.
> *The notion of entanglement is not very intuitive. … it is still not clear what characteristics a data distribution would exhibit if the distribution has low entanglement. I suggest the authors illustrate this low-entanglement property … . This intuitive illustration might provide valuable insights … .*
Thank you for raising this! In line with your suggestion, we will add to the text more intuition behind entanglement, in general and in the context of data distributions. Below is an initial discussion along this line.
Generally, entanglement is a concept which, given a collection of elements $[N] := \\{ 1, 2, … , N \\}$ partitioned into two sets $\mathcal{K}$ and $\mathcal{J} := [N] \setminus \mathcal{K}$, quantifies how dependent $\mathcal{K}$ and $\mathcal{J}$ are. In quantum physics the elements in $[N]$ are particles, and the entanglement quantifies the dependence (quantum interaction) between the particles in $\mathcal{K}$ and those in $\mathcal{J}$. In the context of data distributions, the elements in $[N]$ are features (e.g. pixels, text tokens or audio samples), and the entanglement is a measure of dependence between the features in $\mathcal{K}$ and those in $\mathcal{J}$.
To gain some intuition on entanglement as a measure of dependence between the features in $\mathcal{K}$ and those in $\mathcal{J}$, consider the case of entanglement equal to zero. There, the population data tensor (Eq. (3)) can be written as an outer product between two tensors, one corresponding to $\mathcal{K}$ and the other to $\mathcal{J}$. This may be interpreted as zero correlation between the features in $\mathcal{K}$ and those in $\mathcal{J}$ (indeed, by Eq. (3), the population data tensor holds expectations of products of features, and the expectation of a product being equal to a product of expectations is the definition of zero correlation). Moving to the case where entanglement is non-zero, one may view it as a measure of distance from zero correlation — the higher the entanglement, the farther we are from this state, and vice versa.
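A small numpy sketch (our own illustration) of the quantity under discussion: the entanglement entropy of a tensor under a partition, computed from the singular values of the corresponding matricization. An outer product, corresponding to zero correlation between the two sides, yields zero entanglement.

```python
import numpy as np

def entanglement_entropy(T, K_axes):
    """Entanglement of tensor T under partition (K, J): Shannon entropy of the
    normalized squared singular values of the matricization grouping K vs J."""
    J_axes = [a for a in range(T.ndim) if a not in K_axes]
    M = np.transpose(T, list(K_axes) + J_axes).reshape(
        int(np.prod([T.shape[a] for a in K_axes])), -1)
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-12]                     # drop numerically-zero singular values
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)
rank_one = np.outer(u, v)                # outer product: sides uncorrelated

assert entanglement_entropy(rank_one, [0]) < 1e-8        # zero entanglement
assert entanglement_entropy(rng.normal(size=(4, 4)), [0]) > 1e-6  # generic tensor
```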
> *The authors are encouraged to conduct comprehensive experiments on more architectures and datasets ... .*
As stated in the paper (experimental sections and Appendix J), our experiments currently include the following architectures:
- M5 and ResNet CNNs;
- S4 RNN; and
- local self-attention,
and the following datasets:
- Speech Commands audio;
- CIFAR10 images; and
- semeion, isolet and dna tabular benchmarks.
To our knowledge, for a paper whose nature is primarily theoretical, this empirical evaluation is relatively broad.
Notwithstanding the above, following your comment, we began conducting additional experiments with more architectures and more datasets (including textual data). These experiments will take several weeks to run on our hardware. We did however manage to obtain complete results for VGG CNN architecture, and these are qualitatively identical to the CNN results reported in the paper. We will include a full account of the additional experiments in the text once they are complete.
> *In Theorem 1 … it is not clear whether this upper bound truly ensures low entanglement. There are mainly two terms in the bound, $\ln (R)$ and $\ln (D_{\mathcal{K}})$. The magnitude of the first term is already addressed by the authors, but it is not clear whether the second term can be extremely large. The authors are encouraged to have a discussion on this issue and show that this upper bound is indeed much smaller than the entanglement of a random tensor … .*
Note that in Theorem 1, the term $\ln ( D_{\mathcal{K}} )$ (which up to a logarithmic factor is on the order of the number of axes $N$) is **multiplied by $\epsilon$**, the desired degree of approximation. Accordingly, in the regime of interest (low $\epsilon$), the upper bound is indeed small. In contrast, the theorem shows that there exist tensors for which entanglements are on the order of $\ln ( D_{\mathcal{K}} )$. This in fact holds for random tensors as well; a point that will be clarified in the text. Thank you for the question!
> *How will the network architecture affect the main theoretical results … ? For example, I wonder if the results still hold if the neural network has skip connections and batch normalization … .*
It is possible to extend our analysis to tensor networks, and corresponding neural networks, with connectivities beyond those considered (e.g. connectivities that are non-local, or ones involving skip connections). Such extensions can follow the lines of Appendix I (which lifts the one-dimensional analysis in paper body to arbitrary dimensions). In particular, under such extensions, our theoretical results will still hold, but the definition of canonical partitions (Def. 2) will require adaptation. With regards to batch normalization, as long as it maintains representational capacity (as is typically the case), it will be automatically accounted for by our theory. We will add to the text a discussion including the above, as well as architectural extensions that require further research. Thank you for the question!
> *What does feature (re)arrangement mean in Section 5.1? Let us take textual data as an example … . Does a feature rearrangement mean that the words … are randomly permuted? … .*
In our context, feature (re)arrangement means applying a permutation to the features in each data instance (e.g. the word embeddings in each sentence, the pixels in each image, or the samples in each audio recording). Note that the permutation need not be random. In fact, Section 5.1 concerns searching for a specific permutation which satisfies a specific property (low entanglement under canonical partitions).
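Concretely, for an array of data instances this amounts to the following (our minimal example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))              # 8 instances, 6 features each

perm = np.array([2, 0, 5, 1, 4, 3])      # one fixed (not necessarily random) permutation
X_rearranged = X[:, perm]                # the same permutation applied to every instance

assert X_rearranged.shape == X.shape
assert np.allclose(X_rearranged[:, 0], X[:, 2])
```

The search in Section 5.1 is over such fixed permutations, seeking one under which the entanglement under canonical partitions is low.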
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal addresses most of my concerns. I would like to keep my original rating toward accepting this paper. | Summary: This paper investigates the representation power of locally connected neural networks, a prevalent family of deep learning architectures, using tools from quantum physics. In particular, following the established equivalence between locally connected neural network and locally connected tensor network, the authors showed that the necessary and sufficient condition of a locally connected neural network fitting an objective tensor is the entanglements of the tensor admits under canonical partitions being small enough. As an application, the authors discuss the condition of making accurate predictions using locally connected neural networks, accompanied by empirical demonstrations. Notably, the findings suggest that the representation power of locally connected neural networks can potentially be enhanced by reorganizing the features to achieve reduced entanglements under canonical partitions.
Strengths: The authors successfully extend the connection between deep learning and quantum physics by revealing a compelling link between representation power and entanglement – two fundamental concepts in both domains. From my perspective, the results are not only theoretically elegant but also closely related to illustrations of practical situations.
Weaknesses: The paper's presentation could benefit from further improvement, especially in providing more qualitative discussions to enhance readers' intuition regarding the technical correspondence between representation power and entanglement. While the link between these two concepts is established, the paper lacks sufficient qualitative explanations to make the connection more accessible.
Furthermore, the tensor network's width, denoted as $R$ in the paper, seems to be crucial in the derived bounds. However, the definition of this parameter remains somewhat unclear, so I'm not fully convinced on why it is usually small.
Furthermore, the current results solely focus on the representation of tensors using locally connected neural networks. It would be good to include an explanation as to why other types of underlying functions for representation were not considered.
If these questions are addressed, I'm happy to further increase my evaluation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I don't have more questions other than the existing ones above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and support! We greatly appreciate your willingness to further increase your evaluation if your questions are addressed. We treat them below.
> *The paper's presentation could benefit from further improvement, especially in providing more qualitative discussions to enhance readers' intuition regarding the technical correspondence between representation power and entanglement. While the link between these two concepts is established, the paper lacks sufficient qualitative explanations to make the connection more accessible.*
Thank you for raising this point! In line with your suggestion, we plan to use additional space in the camera-ready version for qualitative discussions aimed to enhance readers’ intuition. Below is an initial discussion on the technical correspondence between representation power and entanglement.
Roughly speaking, entanglement is a concept which, given a collection of elements $[N] := \\{ 1, 2, … , N \\}$ partitioned into two sets $\\mathcal{K}$ and $\\mathcal{J} := [N] \\setminus \\mathcal{K}$, quantifies how dependent $\\mathcal{K}$ and $\\mathcal{J}$ are. In quantum physics the elements in $[N]$ are particles, and the entanglement quantifies the dependence, i.e. the quantum interaction, between the particles in $\\mathcal{K}$ and those in $\\mathcal{J}$. In the context of neural networks, the elements in $[N]$ are input variables (e.g. pixels, text tokens or audio samples), and the entanglement quantifies the dependence that a neural network can represent between the variables in $\\mathcal{K}$ and those in $\\mathcal{J}$.
To gain some intuition on the latter (entanglement quantifying the dependence that a neural network can represent between the variables in $\\mathcal{K}$ and those in $\\mathcal{J}$), consider the case of entanglement being equal to zero. There, a function $f ( \\cdot )$ realized by the neural network must be separable with respect to $\\mathcal{K}$ and $\\mathcal{J}$, meaning it can be written as $f ( X ) = g ( X_\\mathcal{K} ) h ( X_\\mathcal{J} )$, i.e. as a product of two functions, one that intakes only variables in $\\mathcal{K}$, and another that intakes only variables in $\\mathcal{J}$. This means that there is no dependence between $\\mathcal{K}$ and $\\mathcal{J}$. Indeed, in a statistical setting, where $f ( \\cdot )$, $g ( \\cdot )$ and $h ( \\cdot )$ are probability density functions, this is the definition of $\\mathcal{K}$ and $\\mathcal{J}$ being statistically independent. Moving to the general case (where the entanglement is not necessarily zero), one may view the entanglement as the distance from the above-described independence, i.e. the distance $f ( \\cdot )$ can have from the closest function which is separable with respect to $\\mathcal{K}$ and $\\mathcal{J}$. The higher the entanglement, the more dependence can be represented, and vice versa.
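To make the separable case above concrete, here is a small numerical sketch of our own (the function name and the example tensors are illustrative, not taken from the paper): the entanglement under a partition $(\mathcal{K}, \mathcal{J})$ can be measured from the singular values of the tensor's $(\mathcal{K}, \mathcal{J})$ matricization, and a rank-1 (separable) tensor has zero entanglement.

```python
import numpy as np

def entanglement_entropy(tensor, k_axes):
    """Entropy of the normalized squared singular values of the (K, J) matricization."""
    j_axes = [a for a in range(tensor.ndim) if a not in k_axes]
    mat = np.transpose(tensor, k_axes + j_axes).reshape(
        int(np.prod([tensor.shape[a] for a in k_axes])), -1)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-12]                      # drop numerically-zero weights
    return float(-np.sum(p * np.log(p)))

# A separable function f(X) = g(X_K) * h(X_J) corresponds to a rank-1 tensor,
# whose entanglement under the (K, J) partition is zero:
g, h = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(entanglement_entropy(np.outer(g, h), [0]))   # ≈ 0.0
```

By contrast, a tensor whose matricization has many equal singular values (e.g. an identity matrix) is maximally entangled under that partition.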
> *Furthermore, the tensor network's width, denoted as R in the paper, seems to be crucial in the derived bounds. However, the definition of this parameter remains somewhat unclear, so I'm not fully convinced on why it is usually small.*
The width of the tensor network $R$ corresponds to the width of hidden layers in the equivalent neural network, thus in practice (i.e. in any situation where the neural network is to be implemented) $R$ must be of moderate size (typically no more than a few hundred or thousand), and in particular $\ln ( R )$ is much smaller than the number of input elements $N$. This is discussed in lines 124-127 of the paper. We will broaden that portion of the text to further clarify. Thank you for bringing this up!
> *Furthermore, the current results solely focus on the representation of tensors using locally connected neural networks. It would be good to include an explanation as to why other types of underlying functions for representation were not considered.*
In Section 4.1 we show that, for the analyzed model (locally connected neural network) over a standard SVM objective, accurate prediction is equivalent to fitting (representing) a tensor. Extending this equivalence to other types of objectives, e.g. multi-class SVM, is viewed as an interesting direction for future work. We will mention this in the conclusion; thank you!
*P.S.*
$\\,$ If we misunderstood the intent behind “other types of underlying functions for representation” please let us know and we will gladly elaborate.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing the questions I have raised in a satisfactory manner. Accordingly, I have decided to raise my rating to 8. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The fundamental question of what makes a data distribution suitable for deep learning is addressed in this study, focusing on locally connected neural networks. The study uses theoretical tools from quantum physics to tackle this problem. The main theoretical finding is that a specific type of locally connected neural network can accurately predict over a data distribution if, and only if, the data distribution shows low quantum entanglement under specific feature partitions. The study suggests that using quantum entanglement could inspire further use of physics tools to understand the relationship between deep learning and real-world data.
Strengths: - The paper is technically sound, and its claims are well supported by theoretical analysis and experimental results.
- Related works and background knowledge are covered and discussed.
- Experiments are conducted extensively in different locally connected neural networks (CNN, RNN, local self-attention), and experimental results are thoroughly discussed.
Weaknesses: - Need more discussion on the motivation of using quantum entanglement.
- Need more discussion on the limitations of this study.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The whole study targets locally connected neural networks. What happens to an NN with high connectivity? Does entanglement help the learning task?
- Entanglement entropy is a typical metric. But what does entanglement mean in the dataset? If it is used to quantify the temporal/spatial non-locality in the data, why not use other metrics such as autocorrelation or other metrics from information theory? Why choose entanglement specifically?
- Why use a tensor network as the equivalent model instead of directly evaluating on locally connected neural networks?
- Will the code be available to reproduce the findings?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Not found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback, and specifically for noting the soundness of our theory and experiments, as well as our account for background and related work. We respond to your comments and questions below. If our response is satisfactory, we would greatly appreciate it if you would consider raising your score.
> *Need more discussion on the motivation of using quantum entanglement.*
The motivation for using quantum entanglement is discussed in the introduction (Section 1). As stated there, quantum entanglement facilitates a widely accepted theory that allows for assessing the suitability of a distribution to a computational model, where distributions are described by tensors and the computational models are tensor networks. This, along with a known equivalence between tensor networks and certain deep neural networks, allows us to make progress on the question of what makes a (data) distribution suitable for deep learning.
We hope the above addresses your comment. If not, and the intent behind “motivation of using quantum entanglement” is for an intuition behind its connection to neural networks, then please see our response to the first point raised by Reviewer CJsr.
> *Need more discussion on the limitations of this study.*
Limitations are currently discussed throughout the paper (for example, the fact that, as you mention, our theory is restricted to a specific type of locally connected neural network, is explicitly conveyed in Section 3.1). Following your comment, we plan to use additional space in the camera-ready version for a concentrated account of limitations.
> *The whole study targets locally connected neural networks. What happens to an NN with high connectivity? Does entanglement help the learning task?*
Our study is indeed limited to locally connected neural networks, which, while prevalent in practice, do not account for all deep learning architectures being used. When adding connectivity to a locally connected neural network its representational capacity is generally enhanced, so the condition we derived (which is both necessary and sufficient before connectivity is added) remains sufficient. We do not expect it to remain necessary though. Investigation of necessary conditions for a data distribution to be suitable to a neural network with high connectivity is a promising direction for future research; we hope that our work will inspire such progress. We will mention the above in the camera-ready version of the paper; thank you!
> *Entanglement entropy is a typical metric. But what does entanglement mean in the dataset? If it is used to quantify the temporal/spatial non-locality in the data, why not use other metrics such as autocorrelation or other metrics from information theory? Why choose entanglement specifically?*
In general, entanglement is a concept which, given a collection of elements $[N] := \\{ 1, 2, … , N \\}$ partitioned into two sets $\\mathcal{K}$ and $\\mathcal{J} := [N] \\setminus \\mathcal{K}$, quantifies how dependent $\\mathcal{K}$ and $\\mathcal{J}$ are. When the elements $[N]$ are features of a dataset ordered by time/space, and the partition is such that $\\mathcal{K}$ is contiguous (this is the case for all canonical partitions), then the entanglement can indeed be viewed as quantifying the temporal/spatial non-locality in the data.
The reason we use entanglement and not other measures as you suggest is laid out in our response to your first comment. Namely, entanglement facilitates a widely accepted theory in quantum physics, which allows us to make progress on the question of what makes a data distribution suitable for deep learning.
> *Why use a tensor network as the equivalent model instead of directly evaluating on locally connected neural networks?*
As discussed in the introduction (Section 1), tensor networks tie to a widely accepted theory from quantum physics, which we imported for addressing our question of study (what makes a data distribution suitable for deep learning). Although it is possible to present our analysis solely through the lens of locally connected neural networks, we chose to include the tensor network formalism since it reflects the connection to physics — a branch of science we believe will be key to formally reasoning about the relation between deep learning and real-world data.
We note that tensor networks were central to many past studies of expressiveness and generalization in deep learning. See lines 128-138 of the paper for further details.
> *Will the code be available to reproduce the findings?*
Code for reproducing our experiments is available in the supplementary material. A public repository holding this code will be referenced in the camera-ready version of the text.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal addresses my concerns. I would like to raise my rating to 6. | null | null | null | null | null | null |
Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy | Accept (poster) | Summary: This paper studies distributed mean estimation under privacy and communication constraints. This paper focuses on characterizing the recently defined notion of $f$-DP for communication-efficient mechanisms, where $f$-DP can be converted to the standard $(\epsilon,\delta)$-DP. The paper analyzed the $f$-DP of different known mechanisms in the literature. Furthermore, a new algorithm named ternary compression is proposed.
Strengths: The distributed mean estimation is an important topic and has lots of applications in federated learning under privacy and communication constraints. The presentation is well written.
Weaknesses: Although the topic is interesting, my main concerns are about the contributions of this paper.
* The first part of the paper is to analyze the $f$-DP of well-known discrete mechanisms. While analyzing the $f$-DP is useful, it is not hard to compute. I don't know what the major challenges in this analysis are, or whether this analysis provides any new insights into analyzing the $f$-DP. Hence, in my opinion, this part isn't novel enough to be a main contribution of this paper.
* The expressions of the $f$-DP are not in a closed form; the expressions are functions of a set of probabilities that might be computationally expensive to compute for large system parameters.
* The proposed mechanism of ternary compression is not novel. There are many similar schemes that drop some coordinates with some probability via coordinate sampling, for example [13] and [23]. Actually, the ternary mechanism can be seen as CLDP applied to a set of coordinates that are chosen i.i.d. with probability $B/A$.
* In general, this ternary mechanism has higher privacy because it has lower accuracy, since with some non-zero probability (when $B>A$), the client will not send anything.
* The optimal trade-offs between privacy, communication, and accuracy have already been characterized for LDP in different works in the literature, and hence, I don't understand what is the major difference in this paper in comparison with the existing work.
* Typos: please define the abbreviation GDP (Gaussian differential privacy) before using it. In Algorithm 1, $x_i\in\lbrace 0,1,\ldots,l \rbrace$ instead of square brackets. Figure 4 is small and the lines are so close together that it is hard to distinguish the different mechanisms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * It is mentioned in the abstract and also in the introduction that *it remains an open problem whether such discrete-valued mechanisms provide any privacy protection.*
I don't know which open problem the authors are referring to. If it is distributed mean estimation under joint local differential privacy and communication constraints, then this problem is already solved in the literature.
* In line 51, *SQKR doesn't account for the privacy introduced during sparsification.* I don't understand this part. AFAIK, SQKR is order optimal, and hence it does not lose in the privacy analysis.
* In lines 259-261, what do these numbers of $\epsilon$ refer to, and from where it comes?
* In lines 331-333, *we essentially remove the dependency of accuracy on the communication overhead.* Could you please explain more? In general, it is supposed that there is a trade-off between communication and accuracy, so the accuracy depends on the communication budget.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer mVNo,
We appreciate your time in reviewing our paper and providing helpful comments. We believe that your concerns are due to misunderstandings. Different from existing methods (e.g., SQKR) which ignore the privacy amplification in compression, the proposed ternary compressor achieves much better privacy guarantees. For example, as we discussed in our global response, the ternary compressor improves the privacy guarantees for SQKR from $\epsilon_{SQKR} = \\{1,2,5\\}$ to $\epsilon_{ternary} = \\{0.05,0.2,3.9\\}$ **given the same communication cost and MSE**. Please find our detailed response below.
**Comment 1**: The main challenge of the analyses lies in the fact that the tradeoff functions are piece-wise functions with both the domain and the range of each piece determined by both the mechanism and the datasets, and finding the infimums analytically is highly non-trivial. Generally, since the piece-wise tradeoff functions are mechanism-dependent, the analyses are also mechanism specific. We adopt completely different strategies in finding the infimums for the binomial noise and the Poisson binomial mechanism.
Our analyses advance the literature by deriving tighter privacy guarantees for the binomial noise and deriving local DP guarantees for the binomial mechanism in $f$-DP, which is of vital importance in their practical use. More importantly, based on our analyses, we propose the ternary compressor that utilizes compression for privacy amplification (to the best of our knowledge, this is the first work that achieves it), which outperforms the existing methods and delivers almost identical privacy-accuracy performance to the classic Gaussian mechanism while improving communication efficiency. This is of significant importance in applications like federated learning when communication and privacy are the major bottlenecks.
**Comment 2:** The probabilities are not expensive to compute. All the random variables in Theorem 1 and Theorem 2 follow a binomial distribution with known parameters (i.e., $M$ and $p$). Both their pdfs and cdfs are known analytically and easy to compute numerically. Therefore, we consider the corresponding $f$-DP guarantees are in closed-form expressions. In fact, the tradeoff functions are piecewise linear functions with fixed slopes for each piece. We only need to compute the boundary points for each piece, i.e., at most $O(M)$ cdfs values of the binomial distribution are needed to obtain Eq. (5) and Eq. (6) (similarly for Theorem 2), which is computationally affordable. We believe the concern is due to different interpretations of "closed-form" and will revise carefully.
**Comment 3, 4, 5, Question 2:** There are indeed several schemes that simultaneously consider compression and DP. The major difference is that they fail to account for the privacy amplification by sparsification, by exploiting which the proposed ternary mechanism provides significantly better privacy guarantees. For CLDP in [23], as we discussed in Example 2, it is a special case of sto-sign, and the proposed ternary compressor obtains an amplification in privacy (see Fig. 3 in the manuscript). For SQKR in [13], given a $d$-dimensional data $x$, SQKR with $\epsilon$-LDP first samples $k$ out of $d$ coordinates (denote the output by $y$) and applies the $\epsilon$-LDP randomized response mechanism to $y$ (denote the output by $z$). It accounts for the privacy in the process of generating $z$ from $y$, but ignores the privacy in the process of generating $y$ from $x$ (i.e., sparsification). Therefore, these existing methods are only order-optimal (rather than optimal) in characterizing the tradeoff between privacy, communication, and accuracy.
Compared to existing methods, the proposed ternary compressor does not obtain higher privacy by sacrificing accuracy. Instead, we exploit privacy amplification brought by compression (which reduces communication costs) to improve the privacy-accuracy tradeoff. As we discussed in our global response, our experimental results imply that the proposed ternary compressor significantly outperforms SQKR in [13]. We also compare the ternary compressor with the Gaussian mechanism given the same privacy and MSE in the right figure of Fig. 4. Despite the improvement in communication efficiency (at least 32x), the tradeoff between privacy and accuracy for the proposed ternary mechanism matches that of the Gaussian mechanism.
**Question 1:** By open problem, we refer to quantifying the DP guarantees of the compression mechanisms.
**Question 3:** The numbers of $\epsilon$'s refer to the privacy levels in $(\epsilon,\delta)$-DP. In Theorem 3, we derived the $f$-DP guarantee, and the corresponding $\epsilon$ and $\delta$ are obtained by invoking Lemma 1 (line 153).
**Question 4:** Indeed, accuracy depends on the communication budget, and our argument holds under fixed privacy requirements. Since the ternary compressor accounts for privacy amplification in sparsification, the privacy guarantee $\mu$ is closely related to the communication overhead, i.e., the loss in accuracy introduced by communicating less is translated to enhancement in privacy (please see our global response for a detailed discussion). There are two sources of privacy in the ternary compressor: 1) privacy introduced by randomly mapping each coordinate to +1 or -1, and 2) privacy amplification by sparsification. More aggressive sparsification (i.e., a larger $B$) leads to a larger MSE while bringing a larger privacy amplification. Meanwhile, for a target privacy $\mu$, a larger privacy amplification allows us to introduce less randomness (i.e., a smaller $A$) in the random mapping for each coordinate, which results in a smaller MSE. Overall, given the same privacy $\mu$, the MSE remains the same regardless of communication overhead if $\mu < \sqrt{4dr/(1-r)}$. For the existing methods like SQKR, reducing communication leads to degradation in MSE without enhancing privacy.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. Unfortunately, I still see the contributions as limited. As mentioned earlier, the proposed mechanism is not novel; it can be represented by coordinate sampling and quantization. For these reasons, I keep my score.
---
Reply to Comment 1.1.1:
Title: Response to the follow-up comment
Comment: Dear Reviewer mVNo,
Thanks for your follow-up comment. We cannot agree that the proposed ternary compressor being a combination of coordinate sampling and quantization makes the paper not novel. It is not rare that scientific research builds on prior works. For example, SQKR (reference [13] in the manuscript) combines Kashin's representation [R1], coordinate sampling, and the Randomized Response mechanism (which can be traced back to [R2]), while CLDP (reference [23] in the manuscript) combines the 1-bit quantizer (a special case of the randomized response mechanism) and coordinate sampling. These works make solid contributions to the area by demonstrating satisfactory accuracy/privacy performance or providing rigorous theoretical analyses (or both). Similarly, our contribution and novelty are not only proposing the ternary compressor (in fact, we have already mentioned in lines 243-244 of our manuscript that the ternary compressor is a combination of sign-based quantization and sparsification) but also providing the corresponding theoretical analyses on privacy guarantees in terms of the emerging and promising $f$-DP for both the proposed scheme and the existing mechanisms. For the proposed ternary compressor, its improvement in privacy compared to existing methods is backed by rigorous theoretical analyses. We also advance the literature by providing tighter privacy guarantees for the binomial noise and complementing privacy analysis for the Poisson binomial mechanism by showing its local differential privacy guarantee. Our results reveal that the binomial mechanism captures numerous existing compression and differential privacy mechanisms as special cases, which are valuable on their own.
More importantly, although there are some existing works (e.g., SQKR) that utilize coordinate sampling and quantization, they adopt coordinate sampling merely for the purpose of improving communication efficiency and fail to account for its impact on privacy from the theoretical aspect. Studying the impact of sparsification on privacy is crucial, especially for applications like differentially private distributed learning where communication costs and privacy are the major bottlenecks and sparsification is one of the most effective and commonly adopted approaches to alleviate the communication burden. To the best of our knowledge, this is the first work that investigates the privacy protection brought by coordinate sampling, which shows that the loss in accuracy due to sparsification can be translated to amplification in privacy and therefore leads to significant improvement in terms of differential privacy guarantees compared to existing methods. Therefore, we believe that the results and findings in this paper are novel and of critical importance to the community.
Again, we appreciate your time and effort in reading the paper and providing comments. Please let us know if you have further questions.
Best regards,
Authors of the paper
[R1] Y. Lyubarskii and R. Vershynin, “Uncertainty principles and vector quantization,” IEEE Transactions on Information Theory, vol. 56, no. 7, pp. 3491–3501, 2010.
[R2] S. L. Warner. "Randomized response: A survey technique for eliminating evasive answer bias." Journal of the American Statistical Association, 60(309):63–69, 1965. | Summary: This paper investigates the f-DP guarantee of several discrete-valued mechanisms in the local-DP model. In particular, closed-form expressions for binomial noise mechanism and Binomial mechanics are derived.
Then, the paper considers the popular problem of aggregating d-dimensional vectors from local users subject to privacy and communication constraints. Under the f-DP framework, this paper presents a new "ternary" mechanism and analyzes its f-DP privacy guarantee. Roughly, the ternary mechanism allows each local user to sparsify their vectors by keeping each coordinate with only a small probability and adding noise to the surviving coordinates. The hope is that by sparsification and randomization, one can optimize communication (i.e., minimizing the messages from each user to the server) and privacy cost (e.g., each user has its signal "somewhat hidden" in the noise).
The paper's main claim is that, by working with f-DP and the ternary mechanism, we can benefit in **both** privacy and communication by the sparsification process in the ternary mechanism. Intuitively, if the sparsifying threshold is high (i.e., we only keep very few coordinates), we can hope to add less noise to achieve privacy, as the sparsification has already introduced some uncertainty in the data. The paper claims that this is something that prior works cannot offer. The reviewer, unfortunately, did not have a chance to verify the claim.
Strengths: * Working with f-DP **and** in the local model seems like a new approach.
* The proposed algorithm is natural and simple to implement, it is backed by the theory and performs favorably in the experiments.
Weaknesses: * It's a bit hard to interpret/digest the closed-form f-DP guarantee of all these mechanisms. Have some plot/converting to Renyi-DP or (eps,delta)-DP might help.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Your algorithm sends **in expectation** O(d * A/B) bits. Is there a way to improve this to a worst-case guarantee? How about, say, randomly selecting (A/B * d) coordinates? Is it possible to work out the f-DP (or any other reasonable DP notion) property of this variant?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some future directions are mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer VYNX,
We appreciate your time and effort in reviewing and providing a positive evaluation of our work. Please find the point-by-point response to the comments below.
**Comment**: It's a bit hard to interpret/digest the closed-form f-DP guarantee of all these mechanisms. Have some plot/converting to Renyi-DP or $(\epsilon,\delta)$-DP might help.
**Response:** Thanks for the valuable suggestion. We agree that the closed-form expressions characterizing the tradeoff between the two types of error rates are not easy to digest for readers not familiar with $f$-DP, and will add some figures converting it to RDP or $(\epsilon,\delta)$-DP for better illustration. We would like to mention that $f$-DP can be readily converted to $(\epsilon,\delta)$-DP through Lemma 1 in the manuscript by finding $(\epsilon,\delta)$'s such that $f(\alpha) \geq \max\\{0,1-\delta-e^{\epsilon}\alpha,e^{-\epsilon}(1-\delta-\alpha)\\}$. For the privacy guarantees of the binomial noise in Fig. 1, we convert it to $(\epsilon,\delta)$-DP in Remark 1. For the privacy guarantees of the proposed ternary compressor in Fig. 3, we convert it to $(\epsilon,\delta)$-DP in Remark 3. In our comparison with SQKR in Fig.4 of the manuscript, as we discussed in our global response, given the same MSE and communication cost as that of SQKR with $\epsilon_{SQKR} = \\{1,2,5\\}$, if we translate the privacy guarantees of the ternary compressor from $f$-DP to $\epsilon$-DP via Lemma 1, the proposed ternary compressor yields $\epsilon_{ternary} = \\{0.05,0.2,3.9\\}$. We will add the corresponding discussion in the revised manuscript.
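To illustrate the conversion discussed above with concrete numbers, here is a minimal sketch of our own (not from the paper's Lemma 1 directly): for $\mu$-GDP, the Gaussian special case of $f$-DP, the tightest $(\epsilon,\delta)$ pairs are given by the standard closed-form conversion $\delta(\epsilon) = \Phi(-\epsilon/\mu + \mu/2) - e^{\epsilon}\,\Phi(-\epsilon/\mu - \mu/2)$.

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def gdp_to_delta(mu, eps):
    """Smallest delta such that mu-GDP implies (eps, delta)-DP."""
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)

# e.g. 1-GDP at eps = 1 corresponds to a small but non-zero delta,
# and delta decays rapidly as eps grows:
print(gdp_to_delta(1.0, 1.0), gdp_to_delta(1.0, 5.0))
```

Plotting `gdp_to_delta` over a range of `eps` values gives exactly the kind of $(\epsilon,\delta)$ curve the reviewer asks for.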
**Question:** Your algorithm sends in expectation O(d * A/B) bits. Is there a way to improve this to a worst-case guarantee? How about, say, randomly selecting (A/B * d) coordinates? Is it possible to work out the f-DP (or any other reasonable DP notion) property of this variant?
**Response:** Thanks for the valuable comment. We agree that exploring the worst-case guarantee is crucial. We would like to mention that in the proof of the privacy guarantees, we first consider the scalar case and then extend the results to the multi-dimensional case by invoking the composition theorem to account for the composition. In this case, each coordinate is processed independently. Randomly selecting a fixed number of coordinates may ruin the independence across coordinates and make the composition more complicated.
That being said, it is also possible that we directly analyze the privacy guarantees in the vector case by invoking Lemma 2 (which characterizes the tradeoff between type I and type II error rates for generic discrete-valued mechanisms) in Appendix A of the manuscript. However, in this case, the range of the randomized mechanism will grow exponentially as the number of selected coordinates increases, and may finally become computationally infeasible. We deem it an interesting direction for extension and will work on it in our future work.
Another straightforward way to avoid the extreme case in which too many coordinates remain non-zero, so that little saving in communication is achieved (if this is the Reviewer's concern), is to incorporate another mechanism that randomly samples a fixed-size subset of the non-zero coordinates before transmission. The privacy guarantee remains the same thanks to the post-processing property of DP. In this case, we lose the privacy amplification introduced by the second sparsification scheme (since we utilize the post-processing property instead of accounting for the privacy amplification). However, for the proposed ternary compressor, the number of coordinates that remain non-zero follows the binomial distribution with a mean of $d\times A/B$ and a variance of $d \times (A/B) \times (1-A/B)$. For applications like federated learning, $d$ is the size of the gradients, which is in the order of millions for modern neural networks. The central limit theorem tells us that the probability of these extreme cases in which too many coordinates remain non-zero is very low.
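The CLT claim is easy to sanity-check numerically; the following sketch (with illustrative parameters, not taken from the paper) evaluates a Gaussian tail bound for the number of non-zero coordinates under $\mathrm{Binomial}(d, A/B)$:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

d, A, B = 1_000_000, 0.25, 0.5       # illustrative: gradient dimension d, ratio A/B = 0.5
p = A / B
mean, std = d * p, math.sqrt(d * p * (1 - p))
threshold = mean + 5 * std           # "too many coordinates remain non-zero"
print(mean, std)                                  # 500000.0, 500.0
print(normal_tail((threshold - mean) / std))      # ~2.9e-07: such blow-ups are very rare
```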
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you very much to the authors for answering my questions. I would like to keep my current score. | Summary: This paper analyses the privacy that is provided by stochastic rounding methods when doing distributed mean estimation with local differential privacy. It finds that they can contribute to the privacy guarantee thus achieving a better tradeoff.
It then "breaks" the privacy-communication-utility trade-off in the sense of pointing out that in the high-privacy regime extra communication won't buy you any more accuracy.
Strengths: Previous naive means of implementing DP aggregation with randomised rounding have failed to take advantage of the privacy provided by stochastic compression. This should be utilised if possible, which they attempt to do.
Weaknesses: The presentation fails to show a decent comparison showing how much is really gained by taking advantage of this randomness. It isn't clear that the gain isn't negligible. There is a graph comparing ternary and binary quantisation but it isn't clear whether the parameters actually correspond to low communication, which would require sufficient sparsity.
The supposed break of the three-way trade-off is really just the observation that in the high-privacy regime there is no gain from lots of communication. This is not what "break" means; the title is overselling.
What is more this observation is very far from new. The idea that O(epsilon) communication suffices goes back a long way for various problems. This was proven with complete generality in https://arxiv.org/pdf/2102.12099.pdf.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Fixing two of privacy/communication/utility how much does taking advantage of the randomness in the compression actually help the other one?
Am I missing something with the breaking of the trade-off?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: They don't really talk about the limitations, indeed the limitations aren't really clear from the presentation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Mzdj,
We appreciate your time in reviewing our paper and providing constructive comments. We believe that the concerns are mainly due to misunderstandings; please find our response below.
**Our contribution**: We would like to clarify that, as we discussed in the global response, our contribution is more than analyzing the privacy guarantees of stochastic rounding methods in distributed mean estimation with local DP, but advancing the literature by deriving tight privacy guarantees for various DP and compression mechanisms (we believe showing their $f$-DP guarantees is important on its own since it enjoys better composition property than other variants, see e.g., Fig.6 in [R1]), and proposing ternary compressor that exploits privacy amplification in compression.
[R1] El Ouadrhiri, Ahmed, and Ahmed Abdelhadi. "Differential privacy for deep and federated learning: A survey." IEEE Access, 2022.
**Comment:** It isn't clear whether the gain is negligible and the parameters correspond to low communication.
**Response:** We are afraid that this concern is due to a misunderstanding. A larger gain in privacy always corresponds to a larger improvement in communication. We believe that the graph you mention is Fig. 3 in the manuscript. As we discussed in Remark 3, we set $A=0.25$ and $B=0.5$ to generate the figure, which means that the sparsity ratio is 0.5. In addition, the parameters of $A$ and $B$ in Fig. 3 are selected for the purpose of illustration. For the proposed ternary compressor, more aggressive compression (i.e., lower communication) leads to larger privacy amplification. Specifically, the gray area in Fig. 3 corresponds to the privacy improvement (in which $\alpha \in [(A-c)/2B, 1-(A+c)/2B]$ and $f(\alpha) = 1-c/B-\alpha$ in Eq.(13)). For any $A$, increasing $B$ makes the output sparser, and $f(\alpha) = 1-c/B-\alpha$ approaches $f(\alpha) = 1-\alpha$ (which corresponds to perfect privacy). In this sense, as the communication cost decreases to zero (i.e., the output is always 0), the privacy guarantee improves and approaches perfect privacy.
Moreover, as we discussed in our global response, the gain in privacy is not negligible.
**Comment:** The supposed break of the three way trade off is just the observation that in the high privacy regime there is no gain to lots of communication. The title is overselling.
**Response:** We acknowledge that there is rich literature showing that $O(\epsilon)$ communication is sufficient, and this work is partially inspired by [13] which breaks the communication-privacy-accuracy trilemma by proposing SQKR. However, as we discussed in our global response, the message that this paper delivers is not ``communicating less will not hurt the utility much under privacy constraints``, but instead ``the loss in utility caused by communicating less is translated to enhancement in privacy``. Therefore, the results that we present are different from the existing literature, and we do not think the title is overselling. More specifically, existing methods either apply differentially private mechanisms to the compressed output (e.g., SQKR in [13]) or compress the output of differentially private mechanisms (e.g., [R1] mentioned by the reviewer). These mechanisms do not utilize privacy enhancement by compression (which is the focus of this paper). Particularly, the utility of the methods in [R1] depends on the quality of the pseudorandom generator while the utility of SQKR depends on the communication cost $k$. On the contrary, the proposed ternary compressor utilizes privacy amplification in compression, and the tradeoff is essentially only between privacy and accuracy. Since we further improve the results in [13], we also use the word ``break". If the reviewer has better suggestions on the title, we will consider revising it seriously.
Moreover, despite the condition $\mu < \sqrt{4dr/(1-r)}$, our results are not constrained in the high privacy or low privacy regimes. As $r = A/B$ increases and approaches 1 (i.e., no sparsification), the right-hand side of the inequality goes to infinity.
[R1] Feldman, Vitaly, and Kunal Talwar. "Lossless compression of efficient private local randomizers." In International Conference on Machine Learning, 2021.
**Question:** Fixing two of privacy/communication/utility how much does taking advantage of the randomness in the compression actually help the other one?
**Response:** We compare the proposed ternary compressor with SQKR in the left figure of Fig. 4 in the manuscript. As we discussed in our global response, given the same MSE and communication cost as those of SQKR with $\epsilon_{SQKR} = \\{1,2,5\\}$, the proposed ternary compressor attains privacy guarantees of $\epsilon_{ternary} = \\{0.05,0.2,3.9\\}$ by exploiting the privacy amplification in compression. We also compare the proposed ternary compressor with the Gaussian mechanism given the same privacy and utility requirements in the right figure in Fig. 4. It is shown that despite the improvement in communication efficiency (at least 32x if we use 32 bits to represent a float), the tradeoff between privacy and utility for the ternary mechanism matches that of the Gaussian mechanism (i.e., privacy for free).
**Limitations:** Due to space constraints, we did not add a section explicitly discussing the limitations, which we will add in our revised manuscript. As we briefly noted in the discussion below Fig. 4, the improvement in communication efficiency is obtained for free only in the large-$d$ regime. Fortunately, in applications like distributed learning, $d$ corresponds to the model size (usually on the order of millions for modern neural networks). Moreover, although the privacy-accuracy tradeoff of the proposed ternary compressor matches that of the Gaussian mechanism, which is order-optimal in $(\epsilon,\delta)$-DP, we do not show its optimality by deriving lower bounds in the $f$-DP regime. We will investigate this in future work.
---
Rebuttal Comment 1.1:
Title: Response to Reviewer Mzdj
Comment: Dear Reviewer Mzdj,
We noticed that you have raised your score from 3 (reject) to 4 (borderline reject), and we appreciate it. We hope that we have clarified our contributions and addressed your concerns in our response above.
One notable difference between the proposed method and the existing literature is that we capture the interplay between differential privacy and compression, while existing methods like SQKR do not account for privacy in compression. More specifically, existing works have shown that in the high-privacy regime, the error introduced by compression is dominated by the error introduced by privacy constraints, while this work (to the best of our knowledge, the first to do so) further proves that the former can be translated into an enhancement in privacy (and therefore provides a better utility-privacy tradeoff, as we verified in both theoretical and numerical results). Moreover, we believe that our analyses of the $f$-DP guarantees of existing differentially private mechanisms and compression schemes are valuable on their own.
Please let us know if you have further concerns or comments, and we will further clarify.
Best regards,
Authors of the paper | Summary: In a federated setting where a server coordinates the collaborative analysis of multiple users with local data, communication efficiency and data privacy are two major issues of consideration. Classical DP mechanisms such as the Laplace or Gaussian mechanisms add noise as real numbers -- at the same time, to save communication cost it is preferable to have discrete noise with a limited range (resolution). This paper considers all three aspects: communication efficiency, privacy, and accuracy. Specifically, the authors study the privacy guarantees of data compression schemes, which intuitively introduce information loss and are thus good for protecting privacy.
The authors use f-DP and consider binomial mechanisms. Binomial mechanisms have been considered in [6] and [9]; this paper provides tighter bounds. In addition, the authors also consider a compression scheme that optimizes for communication efficiency. This is a combination of sign-based quantization and sparsification.
Strengths: I like the motivation of the paper and the consideration of all three objectives, communication efficiency, privacy and accuracy together. The tighter analysis and mechanisms for communication efficiency is of use in practice.
Although the work builds on prior work, the improvement from considering compression for privacy amplification is nice. The paper is generally well written.
Weaknesses: A few minor issues, see below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there any lower bounds on the tradeoff of the three considerations? Any discussion here would be useful.
Writing: it would be good to summarize somewhere in the paper the best recommendation for practitioners, what parameters to choose etc.
Is there general improvement (amplification of privacy) using compression/sparsification? Can the authors elaborate/discuss this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 191s,
We appreciate your time and effort in reviewing and providing a positive evaluation of our work. Please find the point-by-point response to the comments below.
**Comment:** Are there any lower bounds on the tradeoff of the three considerations?
**Response:** We fully agree that deriving the lower bounds is of critical importance to evaluate the optimality of the mechanism. For the lower bounds without privacy constraints, Theorem 5.3 in [R1] shows a lower bound of $\Omega(\frac{C^{2}d}{nb})$ for $b$-bit unbiased compression mechanisms, in which $n$ is the number of users. In our case, the MSE for a single user is $O(ABd)$ and the communication cost is $(\log(d)+1)\frac{A}{B}d$ bits. Letting $b = (\log(d)+1)\frac{A}{B}d$, we have an MSE of $O(\frac{A^{2}d^{2}(\log(d)+1)}{b})$. Since $A > c = \frac{C}{\sqrt{d}}$, the MSE for $n$ users is therefore given by $O(\frac{C^{2}d(\log(d)+1)}{nb})$, which implies that the proposed mechanism is order-optimal up to a factor of $\log(d)$. Note that the factor of $\log(d)$ is used to represent the indices of the non-zero coordinates, and it can be eliminated by allowing for shared randomness.
For the lower bounds given the privacy constraint, unfortunately, different from $(\epsilon,\delta)$-DP for which numerous existing works have derived the lower bounds in mean estimation, there is no such lower bound for $f$-DP in the existing literature. We believe that the analyses are highly non-trivial and deserve independent work.
However, it can be shown that the proposed ternary compressor is order-optimal in terms of $(\epsilon,\delta)$-DP. Particularly, Corollary 2.13 in [R2] shows that a mechanism is $\mu$-GDP if and only if it is $(\epsilon,\delta(\epsilon))$-DP for any $\epsilon \geq 0$ with $\delta(\epsilon) = \Phi(-\frac{\epsilon}{\mu}+\frac{\mu}{2}) - e^{\epsilon}\Phi(-\frac{\epsilon}{\mu}-\frac{\mu}{2})$. In terms of $(\epsilon,\delta)$-DP, the Gaussian mechanism achieves an MSE of $O(\frac{C^{2}d\log(1/\delta)}{n^{2}\epsilon^{2}})$ for central DP (and there will be a loss of factor $n$ for local DP), which is order-optimal (see, e.g., Theorem 3.1 of [R3]). Note that, in terms of $\mu$-GDP, the proposed ternary compressor induces an MSE of $4dC^{2}/\mu^{2} + C^{2} -||x||_{2}^{2}$
while the Gaussian mechanism has an MSE of $4dC^{2}/\mu^{2}$. Notice that the difference $C^{2}-||x||_{2}^{2}$ is a constant and becomes zero when the distribution of $x$ is supported on $[-c,c]^{d}$ (which is adopted in the derivation of the lower bound in [R3]). Since the Gaussian mechanism is order-optimal, the proposed ternary compressor is also order-optimal.
[R1] Chen, Wei-Ning, Christopher A. Choquette Choo, Peter Kairouz, and Ananda Theertha Suresh. "The fundamental price of secure aggregation in differentially private federated learning." In International Conference on Machine Learning, pp. 3056-3089. PMLR, 2022.
[R2] Dong, Jinshuo, Aaron Roth, and Weijie J. Su. "Gaussian differential privacy." Journal of the Royal Statistical Society Series B: Statistical Methodology 84, no. 1 (2022): 3-37.
[R3] Cai, T. Tony, Yichen Wang, and Linjun Zhang. "The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy." The Annals of Statistics 49, no. 5 (2021): 2825-2850.
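The $\mu$-GDP to $(\epsilon,\delta(\epsilon))$-DP conversion from Corollary 2.13 of [R2] quoted above can be evaluated directly; a small sketch (standard-normal CDF via the error function):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def delta_of_eps(eps, mu):
    """delta(eps) such that a mu-GDP mechanism is (eps, delta(eps))-DP (Corollary 2.13 of [R2])."""
    return Phi(-eps / mu + mu / 2) - math.exp(eps) * Phi(-eps / mu - mu / 2)

# A 1-GDP mechanism satisfies (eps, delta(eps))-DP for every eps >= 0,
# with delta(eps) shrinking as eps grows:
for eps in (0.5, 1.0, 2.0, 4.0):
    print(eps, delta_of_eps(eps, mu=1.0))
```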
**Comment:** It would be good to summarize somewhere in the paper the best recommendation for practitioners, what parameters to choose etc.
**Response:** Thanks for the constructive comment. There are two parameters (i.e., $A$ and $B$) to choose for our proposed ternary compressor. $AB$ determines the $f$-DP guarantees and the MSE while $A/B$ determines the communication cost (in expectation). Utilizing the closed-form expressions that we derive, the corresponding $A$ and $B$ can be readily obtained for any given privacy/MSE and communication cost specifications. We will add more discussions in the discussion part of Section 6.
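As an illustration of this recipe (our sketch, using the closed form $\mu = \frac{2\sqrt{d}c}{\sqrt{AB-c^{2}}}$ with $c = C/\sqrt{d}$ quoted in the global response; the function and parameter names are ours), one can solve for $(A,B)$ given a target privacy level and a target communication ratio:

```python
import math

def ternary_params(mu, r, C, d):
    """Solve for (A, B) given target mu-GDP, expected communication ratio r = A/B,
    clipping norm C, and dimension d, assuming mu = 2*sqrt(d)*c / sqrt(A*B - c^2)
    with per-coordinate bound c = C / sqrt(d)."""
    c = C / math.sqrt(d)
    AB = c * c * (1 + 4 * d / mu ** 2)   # invert the closed form for the product A*B
    return math.sqrt(r * AB), math.sqrt(AB / r)

A, B = ternary_params(mu=1.0, r=0.5, C=1.0, d=10_000)
c = 1.0 / math.sqrt(10_000)
# sanity checks: the targets are recovered
assert abs(A / B - 0.5) < 1e-9
assert abs(2 * math.sqrt(10_000) * c / math.sqrt(A * B - c * c) - 1.0) < 1e-9
```

The product $AB$ fixes the privacy/MSE pair while the ratio $A/B$ fixes the expected communication, so the two specifications decouple cleanly.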
**Comment:** Is there general improvement (amplification of privacy) using compression/sparsification?
**Response:** This is a great question that our work endeavors to answer. Intuitively, compression and sparsification introduce noise into the system and less useful information is transmitted, which would lead to privacy improvement. However, in terms of DP, compression itself does not necessarily provide privacy guarantees since DP considers the worst-case scenario. Here is a toy example. Suppose that there is a scalar $x\in [-1,1]$, and the user shares $sign(x)$ with the central server. Then, the central server can distinguish any $x > 0$ from $x < 0$ given $sign(x)$, which corresponds to $\epsilon = \infty$ since $\frac{P(sign(x) = +1|x > 0)}{P(sign(x)=+1|x<0)} = \frac{1}{0}$, i.e., no differential privacy is preserved.
Nonetheless, it is likely that sparsification does bring privacy amplification to DP mechanisms. For example, suppose that random sparsification is adopted and a user shares 0 with the server with a probability of $1-\delta$ regardless of its local data $x$. Then the random sparsification scheme itself is $(0,\delta)$-DP, i.e., perfect privacy (when the central server receives 0) with a violation probability of $\delta$ (when the user shares $x$). If the shared $x$ is itself differentially private, we believe the random sparsification scheme will further enhance it.
However, mathematically measuring the amplification in privacy may be difficult. In fact, this work constitutes the first step towards this goal by analyzing the tradeoff function between type I and type II error rates in $f$-DP (please see Lemma 2 in Appendix A) for a generic discrete-valued mechanism. Unfortunately, quantifying the $f$-DP guarantees without explicit expression for the compression/sparsification scheme is still infeasible, and we conduct analyses on several popular compression and differentially private mechanisms in this work. We will consider extending the analyses to more general cases in our future work. | Rebuttal 1:
Rebuttal: Dear Chairs and Reviewers,
The authors would like to thank you for your time in handling our paper and providing insightful comments and suggestions. We are happy to know that the reviewers find our study is of practical use (reviewer 191s), our proposed method is natural and simple to implement while backed by both theory and experiments (reviewer VYNX), the privacy in compression that we study should be utilized if possible (reviewer Mzdj), and the paper is generally well-written (reviewers 191s, reviewer mVNo).
In this work, we investigate the amplification in privacy brought by compression. To this end, we derive the expressions of the tradeoff function between the two types of error rates in the hypothesis testing problem for generic discrete-valued mechanisms, based on which we advance the literature by deriving tighter privacy guarantees for binomial noise and analyzing the local differential privacy guarantees for the Poisson binomial mechanism as well as a variety of discrete-valued differentially private and compression mechanisms. Utilizing the proposed analytical results, we further propose a ternary compressor that exploits privacy amplification in sparsification and show that the loss in accuracy introduced by compression can be translated to enhancement in privacy. Our analyses focus on the recently emerging and promising concept of $f$-DP that enjoys a better composition property than $(\epsilon,\delta)$-DP and Renyi DP, which we believe is valuable to the community on its own.
We find that most of the concerns are due to misunderstandings and believe that we have addressed them adequately. In this global response, we would like to address the following two common concerns.
**The difference from existing methods**: In this work, different from existing methods that combine compression and differentially private mechanisms, we endeavor to exploit the privacy amplification brought by compression. Therefore, what we observe and present is not ``communicating less will not hurt the utility much under privacy constraints``, but instead ``the loss in utility caused by communicating less is translated to enhancement in privacy``. More specifically, the state-of-the-art method SQKR in [R1] yields a variance of $\frac{d}{k}(\frac{e^{\epsilon}+2^{k}-1}{e^{\epsilon}-1})^{2}C^{2}-||x||_{2}^{2}$
for the transmitted signal $x$. While for finite $\epsilon$ and $k$, the MSE may be dominated by error introduced by the privacy requirement $\epsilon$, the error introduced by compression is non-zero and further degrades the MSE, i.e., *the variance is still a function of $k$, and decreasing $k$ leads to an increased MSE without affecting the privacy guarantee $\epsilon$*. On the contrary, for the proposed ternary compressor, the privacy guarantee is given by $\mu = \frac{2\sqrt{d}c}{\sqrt{AB-c^{2}}}$ (line 301 in the revised manuscript), the communication cost is determined by $\frac{A}{B}d$, and the MSE is given by $ABd-||x||_{2}^{2}$.
When the communication cost is reduced with more aggressive sparsification (i.e., $B$ increases for a fixed $A$), there is an increase in MSE, but also a corresponding amplification in privacy, i.e., $\mu$ decreases. In this sense, given a fixed target privacy guarantee $\mu_{target}$, we may decrease $A$ when $B$ increases such that $AB$ (and therefore both $\mu$ and the MSE) remains the same. In this sense, *the MSE is solely determined by the privacy $\mu$ since the error introduced by compression is translated into enhancement in privacy.* With some simple algebra, the MSE of the ternary compressor is given by $(4d/\mu^{2}+1)C^{2}-||x||_{2}^{2}$ when $\mu < \sqrt{4dr/(1-r)}$, which remains the same regardless of the communication cost $r$. In this case, the tradeoff is essentially only between privacy and accuracy, and neither increasing nor decreasing the communication cost affects the tradeoff.
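For intuition, the moments quoted above can be checked empirically with a hedged sketch of such a ternary scheme (our illustrative reconstruction from the stated quantities, not the authors' code): each coordinate $x_i$ with $|x_i| \le A$ is mapped to $+B$, $-B$, or $0$ with probabilities $\frac{A+x_i}{2B}$, $\frac{A-x_i}{2B}$, and $1-\frac{A}{B}$, which is unbiased with per-coordinate variance $AB - x_i^2$ (so MSE $= ABd - \\|x\\|_2^2$) and an expected fraction $A/B$ of non-zero outputs.

```python
import random

def ternary(x, A, B, rng=random):
    """One coordinate: unbiased ternary output in {+B, 0, -B} (illustrative reconstruction)."""
    assert abs(x) <= A <= B
    u = rng.random()
    if u < (A + x) / (2 * B):
        return B
    if u < A / B:              # the next (A - x)/(2B) slice of mass maps to -B
        return -B
    return 0.0

random.seed(0)
x, A, B, n = 0.1, 0.25, 0.5, 200_000
out = [ternary(x, A, B) for _ in range(n)]
est_mean = sum(out) / n
est_mse = sum((y - x) ** 2 for y in out) / n
est_nonzero = sum(y != 0 for y in out) / n
print(est_mean, est_mse, est_nonzero)  # ~0.1, ~A*B - x^2 = 0.115, ~A/B = 0.5
```

Increasing $B$ at a fixed product $AB$ makes the output sparser without changing the variance, which is exactly the compression-to-privacy translation described above.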
**The gain in privacy compared to SQKR**: The gain in privacy by utilizing the privacy amplification brought by compression is significant. The left figure in Fig. 4 compares the privacy guarantees of the proposed ternary compressor with SQKR given the same communication cost (only 10 out of 250 coordinates are transmitted) and MSE. Note that we transform the $\epsilon$-LDP guarantee of SQKR and the $f$-DP guarantee of the ternary compressor to the tradeoff between type I error rate and type II error rate in hypothesis testing for the ease of a direct comparison. Given the same type I error rate, a larger type II error rate means that the adversary is more likely to make a mistake in the hypothesis testing (i.e., better privacy). It can be observed from the left figure in Fig. 4 that the ternary compressor significantly outperforms SQKR. For example, for SQKR with $\epsilon = 2$, given type I error rate $\alpha = 0.5$, the type II error rate of the attacker is around $f(\alpha) = 0.068$. Meanwhile, given the same MSE and communication cost, the proposed ternary compressor attains a type II error rate of $f(\alpha) = 0.484$ when $\alpha = 0.5$. Given the same MSE and communication cost as that of SQKR with $\epsilon_{SQKR} = \\{1,2,5\\}$, if we translate the privacy guarantees of the ternary compressor from $f$-DP to $\epsilon$-DP via Lemma 1 (we numerically test different $\epsilon$'s such that $f(\alpha) \geq \max\\{0,1-\delta-e^{\epsilon}\alpha,e^{-\epsilon}(1-\delta-\alpha)\\}$ holds for $\delta = 0$), they are approximately $\epsilon_{ternary} = \\{0.05,0.2,3.9\\}$ for the proposed ternary compressor (please see the attached file for the figure). Note that in the high privacy regime, the error introduced by privacy requirements dominates for SQKR, and it becomes more important to exploit the privacy amplification by compression. As a result, a larger gap is observed.
[R1] Chen, Wei-Ning, Peter Kairouz, and Ayfer Ozgur. "Breaking the communication-privacy-accuracy trilemma." Advances in Neural Information Processing Systems 33 (2020): 3312-3324.
Pdf: /pdf/a693a8b09900721c52164efae3941ac2d8efb1df.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Sample Difficulty from Pre-trained Models for Reliable Prediction | Accept (poster) | Summary: It is known that state-of-art deep-network based machine learning models are poorly calibrated. It is also somewhat widely known that the poor calibration is because we do not model data uncertainty while training.
This work proposes to use a pretrained CLIP model for estimating sample difficulty, which, when further used for modeling data uncertainty, is shown to yield better-calibrated models.
Relative Mahalanobis distance and leveraging a multimodal pretrained model such as CLIP for estimating sample difficulty are the core contributions of this work.
Strengths: - Convincing experimental study. All the components of their proposed method: CLIP, RMD, alpha (hyperparameter) are well argued.
- Strong evaluation. The proposed method is evaluated with positive results on various downstream applications: outlier detection, selective classification, ECE on OOD.
- Writing is easy to follow.
Weaknesses: *Limited originality*: I enjoyed reading the paper but it cuts very close to the existing work.
As mentioned in the paper, several previous methods attempted example weighting during training [39] based on their difficulty, and RMD is proposed and used earlier for outlier detection [45] (citation numbers from their paper).
As I see it, the originality of the paper lies in using RMD and pretrained models for sample weighting. Although the paper is technically strong, I am somewhat unimpressed by its originality.
*Universality of CLIP*. The proposed measure of sample difficulty is in essence how well CLIP can identify different classes.
Although CLIP may have been trained on large and diverse data, it surely has its limits.
We may then also need an additional measure that informs how difficult it is for CLIP to provide a sample difficulty for an example.
For instance, the proposed measure most likely cannot be useful on a chest xray dataset for a downstream task of classifying pneumonia or normal.
Comments on how such scenarios can be addressed are missing from the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is the base model used for training on downstream models (which is mentioned as ResNet34) is pretrained?
- What happens when CLIP itself is used as the base model?
- Can some justification be provided on why we see better corruption robustness in ECE in Fig. 3?
I would also appreciate if the authors can respond, if they can, to the weaknesses.
*Minor comments*.
- L171-182. I found the writing somewhat superfluous since we only care to see that the error rate dips as we move towards right in Fig 2. Perhaps there is no need to comment about why ViT is the best etc. Also, there seems to be some confusion between the text and Fig.2. Text interprets the y axis as accuracy while it is error rate in the figure.
- L310: last sentence seems incomplete.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Edit after rebuttal: Please see my comment in the thread below.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our convincing experimental study and strong evaluation. We address the detailed concerns below. We hope that you may find our response satisfactory and raise your score accordingly.
**Q1: Limited originality, such as example weighting and RMD:** We understand your viewpoint. Although our method builds upon the established concepts of ER and RMD, it makes a simple modification that greatly improves upon classical regularization techniques. Furthermore, our work is the first to leverage pre-trained models for sample difficulty annotation and demonstrate an effective way to use sample difficulty for reliable prediction (i.e., simultaneously improving accuracy and uncertainty calibration). We believe the simplicity of the proposed approach will make it more likely that this work is adopted or studied in the community.
**Q2: Limits of CLIP itself, e.g., cannot be useful on a chest xray dataset, and the need of a usability indicator:** Yes, CLIP has its own limits. It is likely that CLIP is not straightforwardly suitable for medical domain data. Nevertheless, our method itself can work with different pre-trained models. For instance, if targeting chest xray, our method can be practiced as computing RMD in the feature space of MedCLIP [*1] for measuring the sample difficulty. In this work, CLIP is the chosen pre-trained model, as it is well-suited for handling natural images, which are the basis of the targeted benchmarks. We perform a new experiment during the rebuttal, using recent DINOv2 [*2] as a pre-trained model to quantify sample difficulty, and report ACC and ECE on CIFAR-10/100 in Table Ⅱ of the attached rebuttal PDF in our global response. As shown, our method based on DINOv2 also shows superior performance similar to CLIP and significantly outperforms the baselines, confirming the compatibility of our method with different pre-trained models.
Moreover, a simple way to assess the effectiveness of a pre-trained model within a specific domain can be achieved by examining the zero-shot classification performance. For instance, if CLIP cannot deliver strong zero-shot generalization on a chest xray classification, then we should choose a different pre-trained model or adapt CLIP for that domain first.
Thank you for your suggestion. We will include the above discussion upon revision.
Reference:
[*1] MedCLIP: Contrastive Learning from Unpaired Medical Images and Text, Wang et al., EMNLP 2022
[*2] DINOv2: Learning Robust Visual Features without Supervision, Oquab et al., arXiv:2304.07193
**Q3: Is the base model used for training on downstream models (which is mentioned as ResNet34) is pre-trained?** We understand here your 'base model' refers to the model that is used for training the classifier, e.g., ResNet34. If this understanding is correct, then the base model is randomly initialized and trained from scratch on downstream tasks. For the models that are used for sample difficulty measurement, they are pre-trained with the datasets documented in Table 8 in Appendix.
**Q4: What happens when CLIP itself is used as the base model?** When using CLIP as the base model, fine-tuning it on the downstream dataset is necessary to boost accuracy. However, fine-tuning suffers from the forgetting issue, as the downstream dataset is usually smaller than the pre-training dataset. Furthermore, relying on the single ground-truth label, fine-tuning would still lead to over-confident predictions. Therefore, there is no obvious evidence that it has the potential to outperform our method. Plus, our method is applicable to any base model architecture, whereas fine-tuning CLIP is limited to a small set of architectures.
Besides, we also report ACC and ECE of CLIP-RN50 fine-tuned with a linear classifier (i.e., only fine-tuning the last linear layer to alleviate the forgetting/overfitting issue and save computational cost) on CIFAR-10/100 in Table A. Notably, fine-tuned CLIP achieves comparable accuracy but exhibits poor calibration performance (i.e., higher ECE).
Tab. A: The comparison of fine-tuned CLIP and our method on CIFAR-10/100.
| Method | Metric | CIFAR-10 | CIFAR-100 |
|-------------------|-----|----------|-----------|
| Fine-tuned CLIP | ACC | 94.32 | 77.16 |
| | ECE | 3.812 | 6.358 |
| Ours | ACC | **95.67** | **78.58** |
| | ECE | **1.212** | **3.410** |
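Since ACC and ECE are the headline metrics throughout these comparisons, a minimal sketch of the standard binned ECE computation may help make the metric concrete. This is illustrative only; the bin count (15 here) and implementation details may differ from the paper's evaluation code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: bin predictions by confidence, then take the
    weighted average of |mean confidence - accuracy| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# A perfectly calibrated batch (75% confidence, 75% accuracy) gives ECE 0.
conf = np.array([0.75, 0.75, 0.75, 0.75])
hits = np.array([1.0, 1.0, 1.0, 0.0])
print(expected_calibration_error(conf, hits))  # → 0.0
```

Overconfident predictions (high confidence on misclassified samples) drive the gap, and hence the ECE, upward.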
**Q5: Can some justification be provided on why we see better corruption robustness in ECE in Fig. 3?** Our method can alleviate overconfidence and improve uncertainty calibration (lower ECE) owing to the proposed sample-adaptive weighting. Therefore, the proposed method has the ability to make conservative predictions (low ECE) for OOD inputs (i.e., images under different levels of corruption). It has also been observed in the literature, e.g., [*3], that models with good uncertainty estimation quality are also more robust under data shifts, e.g., image corruptions.
Reference:
[*3] Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, Ovadia et al., NeurIPS 2019
**Q6: Editing or writing errors:** Thank you for pointing out the editing error, and we will correct it upon revision.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response and additional experiments.
All my concerns are well addressed. Please make sure to include evaluation proposals to test the suitability of a pre-trained model for a domain. Although the concern on originality still remains, this work is a thorough study that I would like to see presented at the conference. I am increasing my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you very much for increasing the score!
Comment: We are pleased to know that Reviewer 93FC finds our rebuttal satisfactory and increases the score accordingly. We will include evaluation proposals to test the suitability of a pre-trained model for a domain in the final version. | Summary: To address the over-confidence problem of uncertainty estimation in deep learning, this paper proposes, for the first time, to use pre-trained large models to estimate the learning difficulty of each sample. The estimated sample difficulty is then embedded in the final loss for training deep models. The authors evaluate the proposed framework on different datasets and with different model architectures, showing that it improves uncertainty calibration in both ID and OOD settings.
Strengths: 1. The studied problem, namely uncertainty calibration, is important in modern deep learning. The presentation of this paper is good and the paper is easy to follow.
2. Different from some previous works that use pre-trained large models as the base models to fine-tune, this paper proposes to use large models as the tools to estimate sample difficulty, which makes the training process flexible since any model architecture can be used in the training.
3. I like the idea of using sample difficulty as a metric to guide the modeling of uncertainty. And based on the authors' observation, large models (especially models learned with cross-modality supervision) seem to give a more accurate estimation of sample difficulty.
Weaknesses: 1. The proposed distance metric RMD in Eq. (5) is shown to be more useful than MD in Fig. 6, but lacks theoretical support.
2. Two new hyperparameters ($\alpha$ and $T$) are introduced in the final loss function (9) as the weighting factors of the entropy penalty term. They act like the factor used in label smoothing and may reduce the robustness of the proposed method.
3. I wonder whether the large model is necessary for the estimation of sample difficulty. There must be other metrics for sample difficulty, so would other metrics also yield good performance with the regularization-based loss function (9)? Maybe some ablation studies can be conducted on this point.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. If no theoretical support can be added, could you explain more about why RMD is better than MD?
2. Maybe experiments of hyperparameter study can be added for the second weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the importance of the studied problem and our contributions, and for providing valuable comments. We address the detailed concerns below. We hope that you may find our response satisfactory and raise your score accordingly.
**Q1: Why is RMD better than MD?** The key difference between RMD and MD lies in the relative distance calculation. MD scores the sample difficulty based on the distance to the class-specific mean mode, whereas RMD examines the relative distances to two properly chosen mean modes, i.e., the class-agnostic and class-specific means. The relative distance reflects not only whether the feature is typical (close to the class-specific mean), but also whether it is discriminative (far from the class-agnostic mean). MD mostly concerns the first aspect and thus performs worse than RMD, especially on fine-grained classification benchmarks such as ImageNet1k. In ImageNet1k, many classes share common features, e.g., there are many sub-classes of elephants in ImageNet1k and they all look alike in some respects. To confidently classify one elephant as a Tusker, it must have typical features that are also discriminative enough from the rest of the classes. The relative distance used by RMD can filter out the influence of shared typical features, i.e., features close to the class-agnostic mean mode. We will make this clearer in the next version. We also add an illustrative example in Figure I of the attached rebuttal PDF in our global response, which hopefully visualizes the above description well.
**Q2: Robustness of hyperparameters $\alpha$ and $T$, ablation results:** In our experiments, we did not experience hard tuning scenarios. As part of our ablation study on ImageNet1k classification, we selected the weighting coefficient $\alpha$ from { 0.05, 0.10, 0.15, 0.20, 0.25, 0.30 }. As shown in Table 4, the proposed method is less sensitive to the choice of $\alpha$ than the baseline ER, owing to our sample-adaptive weighting. As for the temperature $T$, it is set to 0.7 for all datasets. Hence, the proposed method does not rely on time-consuming hyperparameter tuning.
**Q3: Is the large model necessary to estimate sample difficulty? Ablation studies about different metrics for sample difficulty:** To verify the benefit of using large models, we compared them with smaller ResNet34/50/101 models trained on the task dataset in Table 10 in the Appendix. We find large models beneficial for estimating the sample difficulty. Because they were trained on large-scale datasets with high sample diversity along many dimensions, they can learn to preserve and structure richer semantic features of the training samples than models only exposed to the (commonly smaller-scale) task training set. Hence, large pre-trained models have a representative feature space, which can be used to produce more accurate sample difficulty scores. Besides CLIP, we additionally use DINOv2 [*1] during the rebuttal, which is also a large-scale pre-trained model. The ACC and ECE gains are comparable with CLIP, significantly outperforming the baselines.
Moreover, we have compared different metrics (K-means and MD) for sample difficulty in Table 7, which confirms the superiority of RMD in measuring the sample difficulty. We also compared our method and CRL (a baseline that uses the frequency of correct predictions during training epochs as a sample difficulty metric) in Table 6, and we can see that large pre-trained models still perform better.
Reference:
[*1] DINOv2: Learning Robust Visual Features without Supervision, Oquab et al., arXiv:2304.07193
---
Rebuttal Comment 1.1:
Title: Thanks for the responses.
Comment: Thank you for your responses. I have thoroughly reviewed the feedback, including the additional experimental results and the illustrative figure provided in the attached rebuttal PDF.
The results from the ablation studies align with the stated claims. However, I still maintain concerns regarding the issue of hyperparameters. Given the various choices available for the combination of $\alpha$ and $T$, and the uncertainty regarding the method's sensitivity to $T$, it remains challenging to definitively assert that larger models consistently outperform others. As the utilization of larger models stands as a pivotal contribution within this paper, I am maintaining my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thanks for reviewing our rebuttal and providing feedback. We understand your viewpoint, but have a different perspective on "the issue of hyperparameters". The utilization of $\alpha$ is common in various regularization-based methods, and the experimental results in Table 4 have verified that our method does not rely on time-consuming hyperparameter tuning. Besides, the temperature is a widely used hyperparameter in many loss functions, such as those for uncertainty calibration, knowledge distillation, and contrastive learning. We can flexibly control the relative importance among all training data by tuning the parameter $T$, and a fixed $T$ for all datasets also works well in our method. Hence, we believe that both hyperparameters are not limiting factors of our method.
Strengths: Originality: The approach is effectively a form of adaptive entropy regularization. The novelty is that the strength of the regularization depends on the "difficulty" of the samples. The approach makes sense and is (to the best of my knowledge) novel.
Quality: Theoretical analysis is a bit lacking. Empirical results make sense and show a decent improvement in calibration scores. As such, the method does make sense.
Clarity: Some choices the paper makes are unclear to me, both in terms of overall method design (why focus so much on RMD?) and in details (e.g. formulation of Eq. 10). See "details" for more. As far as language use goes, the paper is easily understandable.
Significance: Results show that the idea works. ECE decreases, and the qualitative examples shown in the figures/supplementary make sense to me.
Weaknesses: * The main idea of the paper (estimate difficulty of samples with a large pre-trained model, weight regularization with that) is a bit obscured by the fact that the authors make a big fuss about using the "Relative Mahalanobis Distance" (RMD), even though Table 7 shows that even doing simple K-Means clustering works as well (even better than normal Entropy Regularization, if one compares with Table 4). The paper's main idea would be much more obvious if the paper first established that the idea works even using a simple clustering like K-Means, and then showed that RMD is a sensible improvement upon this. As such, I don't really understand why RMD was chosen: the paper also never explains WHY RMD beats K-Means, or why the Mahalanobis Distance (MD) does not work as well. The authors merely say that RMD worked well for near-OOD detection.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Why take the maximum over i in Eq. 10?
* Table 4: misses a description of what the columns represent. The text says that they are different regularization strengths, but this should also be made explicit in the table itself.
* Authors state that all results are averages, but never talk about the variances between runs. It would be nice to know if error bars overlap.
* What is the runtime of calculating RMD? It would be nice to know if this method is applicable in practice or is prohibitively expensive.
* How well does the Gaussian assumption fit? An exploration of how well the RMD describes the actual representation space would be interesting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors do not address limitations of their approach, and I'd encourage them to add a discussion of those!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for finding our work novel and clarifying empirical results. We address the detailed concerns below. We hope that you may find our response satisfactory and raise your score accordingly.
**Q1-a: A big fuss about using the RMD, even though simple K-means clustering already helps:** It is true that K-means clustering can already reveal the benefit of our proposal. As the reviewer pointed out, our main idea is not limited to one specific sample difficulty measure. The fact that different reasonable measures provided gains is positive evidence for our idea. Nevertheless, we believe it is still of technical interest to present RMD (see our next answer). We will definitely consider the reviewer's suggestion on improving the presentation of the sample difficulty measuring method.
**Q1-b: Why is RMD a better measure of sample difficulty than K-Means and MD?** These three metrics are all based on a Gaussian assumption in the feature space. The key difference between RMD and K-means/MD lies in the relative distance calculation. Both K-means and MD score the sample difficulty based on the distance to a mean mode, whereas RMD examines the relative distances to two properly chosen mean modes, i.e., the class-agnostic and class-specific means. The relative distance reflects not only whether the feature is typical (close to the class-specific mean), but also whether it is discriminative (far from the class-agnostic mean). Both K-means and MD mostly concern the first aspect and thus perform worse than RMD, especially on fine-grained classification benchmarks such as ImageNet1k. In ImageNet1k, many classes share common features, e.g., there are many sub-classes of elephants in ImageNet1k and they all look alike in some respects. To confidently classify one elephant as a Tusker, it must have typical features that are also discriminative enough from the rest of the classes. The relative distance used by RMD can filter out the influence of shared typical features, i.e., features close to the class-agnostic mean mode. We will make this clearer in the next version. We also add an illustrative example in Figure I of the attached rebuttal PDF in our global response, which hopefully visualizes the above description well.
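The distinction above can be sketched in a few lines of numpy. This is a toy illustration under the stated Gaussian assumption, not the paper's implementation: fit one Gaussian per class and one class-agnostic Gaussian over all features, and take RMD as the class-specific Mahalanobis distance minus the class-agnostic one.

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a Gaussian (mean + inverse covariance) to a feature matrix."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(z, mu, prec):
    d = z - mu
    return float(d @ prec @ d)

def rmd(z, y, class_stats, agnostic_stats):
    """Relative Mahalanobis distance: class-specific MD minus the
    class-agnostic MD; larger values indicate harder samples."""
    return mahalanobis(z, *class_stats[y]) - mahalanobis(z, *agnostic_stats)

# Toy demo: two well-separated classes in a 2-D feature space.
rng = np.random.default_rng(0)
feats0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
feats1 = rng.normal([10.0, 0.0], 0.5, size=(200, 2))
class_stats = {0: fit_gaussian(feats0), 1: fit_gaussian(feats1)}
agnostic_stats = fit_gaussian(np.vstack([feats0, feats1]))

easy = rmd(np.array([0.0, 0.0]), 0, class_stats, agnostic_stats)  # typical class-0 sample
hard = rmd(np.array([5.0, 0.0]), 0, class_stats, agnostic_stats)  # ambiguous mid-point sample
print(easy < hard)  # the ambiguous sample is scored as harder
```

The mid-point sample is far from its class mean yet close to the class-agnostic mean (shared features), so the two terms reinforce each other and the sample gets a high difficulty score, which is exactly the discriminativeness aspect that plain MD misses.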
**Q2: Why do you take the maximum over i in Eq. 10?** By taking the maximum over "i", we make the score $s(x,y)$ upper-bounded by one. Overall, the score is constrained to the value range between 0 and 1, which is normalized and avoids potential numerical issues.
**Q3: Table 4: misses a description of what the columns represent:** Thanks for pointing this out. Each column is associated with a value for the hyper-parameter $\alpha$. We will add this information upon revision.
**Q4: Error bars (variance) of experimental results:** Thanks for the suggestion. We will add ``std'' of experimental results upon revision. At this point, we first report std of ACC and ECE for CE, ER and our method in Table Ⅰ of the attached rebuttal PDF in our global response. We can observe that the proposed method is still performing the best within the standard deviation. Our std is generally comparable to other methods.
**Q5: What is the runtime of calculating RMD?** We can calculate RMD of each sample once before training to save computing overhead (as stated in line 213), so training and inference overhead is basically the same as other methods. As for the running time of calculating RMD of all training samples before training, we report results in Table A. It should be noted that we only need to calculate RMD once for a specific downstream task.
Tab. A: The running time of calculating RMD of the entire training dataset.
| Dataset | CIFAR-10/100 | ImageNet-1k |
|------------------|--------------|-------------|
| Running time (s) | 94 | 2068 |
**Q6: How well does the Gaussian assumption fit?** We chose the Gaussian assumption based on the following considerations. First, it leads to a simple distance measure for quantifying the sample difficulty, i.e., examining mean and variance. Second, it has several empirical supports from other tasks. For instance, to assess image synthesis quality, the standard FID metric essentially measures the difference between the real and synthetic data distributions based on multivariate Gaussian modeling in the feature space of Inception v3 or CLIP. Another example is the use of the Gaussian assumption for deriving an OOD detection metric, e.g., in [*1].
We would further note that our method is not limited by the Gaussian assumption. We contributed the idea of modeling the feature distribution for sample difficulty quantification. The Gaussian assumption is one example, which led to convincing gains. Nevertheless, one may resort to more sophisticated modeling for further improvements.
Reference:
[*1] Contrastive Training for Improved Out-of-Distribution Detection, Winkens et al., arXiv:2007.05566
Strengths: 1. The paper is well-written and exhibits clarity in its presentation. The visual results help to comprehend the concept of sample difficulty.
2. The effectiveness of considering sample difficulty in classification tasks has been empirically verified against other uncertainty regularization methods. In particular, the ablation study conducted on the regularizing strength hyperparameter $\alpha$ in Section 5.4 clearly demonstrates that the proposed sample-specific regularization approach outperforms ER, which performs global regularization.
Weaknesses: 1. The current version of the paper solely presents the average value obtained from five trials without including information about the standard deviation. It is highly recommended to include error bars.
2. The proposed methodology heavily depends on CLIP as a key component. Specifically, when using ViT-B and MAE-ViT-B, the accuracies achieved in Table 5 are 73.59% and 73.81%, respectively, which are lower than the accuracy of 74.02% achieved by the Poly baseline presented in Table 1. At this stage, it is challenging to assert that the proposed method can be applied universally to any large-scale pre-trained models.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The overall scheme, in which a pre-trained model provides supervision for given data, is reminiscent of knowledge distillation. However, the authors stated "knowledge distillation will not solve the problem" in lines 29-30. Could you make an additional statement on this?
2. The reason for the absence of the CRL as a baseline in the main tables, specifically Tables 2 and 3, is unclear or not readily apparent.
3. Is there a specific justification for not including C10 in the misclassification detection experiment?
Miscellaneous:
1. Section 5.4. "roubustness" should be "robustness."
2. Table 4. "under different on ImageNet1k" should be "under different $\alpha$ on ImageNet1k."
3. It would be nice to update the figures in the paper to utilize colors that are accessible for individuals with color blindness. Specifically, figures 4, 7, and 11 might pose challenges for individuals with red-green color blindness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors did not address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for finding our work clearly motivated and clarified, as well as providing valuable suggestions to further improve our paper. We address the detailed concerns below, by including the suggested standard deviation, a new experiment to further demonstrate the applicability beyond CLIP, and clarification on various points. We hope that you may find our response satisfactory and raise your score accordingly.
**Q1: No standard deviation (error bars) about experimental results:** We agree this is important. Our reported numbers are the mean values based on 5 runs for CIFARs and 3 runs for ImageNet1k. We will also add ``std'' in the final version. At this point, we first report std of ACC and ECE for CE, ER and our method in Table Ⅰ of the attached rebuttal PDF in our global response. We can observe that the proposed method is still performing the best within the standard deviation, and our std is generally comparable to other methods.
**Q2: It is challenging to assert that the proposed method can be applied universally to any large-scale pre-trained models:** With the highest respect, we disagree. In fact, while ViT-B and MAE-ViT-B in Tab. 5 are not as good as CLIP, they both provide convincing improvements in ECE. Moreover, among the methods in Tab. 1 that achieve similar ECEs, they are better in terms of ACC. It is true that they both underperform PolyLoss in ACC. However, PolyLoss severely compromises ECE for that gain, whereas our method combined with different pre-trained models does not suffer from such a tradeoff and improves both metrics over the baseline "CE". Furthermore, we performed a new experiment, using the recent DINOv2 [*1] as a pre-trained model to quantify sample difficulty, and report ACC and ECE on CIFAR-10/100 in Table Ⅱ of the attached rebuttal PDF in our global response. While both CLIP and DINOv2 were trained on large-scale datasets, they followed different training procedures. As shown, our method based on DINOv2 outperforms all baselines in Tab. 1 (including PolyLoss). The achieved gains are similar to those obtained using CLIP. Hence, our method itself can work with other pre-trained models.
Reference:
[*1] DINOv2: Learning Robust Visual Features without Supervision, Oquab et al., arXiv:2304.07193
**Q3: More elaboration on the statement "knowledge distillation will not solve the problem" in lines 29-30:** The sample difficulty measure can be regarded as a type of data annotation complementary to the ground-truth label. Relying solely on the ground-truth label, the cross-entropy loss essentially induces the model to overfit the 0/1 loss, i.e., to match the label with 100% confidence. This is also the case in knowledge distillation, where the cross-entropy loss is used to train the teacher model, and the student is then trained to match the trained teacher's features. Therefore, this overfitting issue does not simply go away with knowledge distillation. Our method tries to address the issue by adding an additional annotation to each sample and modifying the training loss with instance-adaptive entropy regularization.
Furthermore, we do not intend to make the "student" (i.e., the actual classifier) mimic the "teacher", neither in the feature space nor in the logit space. Behaving like the teacher does not necessarily lead to better "uncertainty"; knowledge distillation solutions primarily focus on accuracy. It is an interesting avenue that requires dedicated investigation, which however is beyond the scope of our work. We will improve our wording upon revision.
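The training objective described here (cross entropy plus a difficulty-modulated, $\alpha$-weighted entropy term with temperature $T$) can be sketched as follows. The exact form of the paper's Eq. (9) is not reproduced in this thread, so the softmax-over-batch weighting below is purely illustrative: it only captures the stated properties that harder samples receive stronger entropy regularization and that $T$ controls the relative importance among samples.

```python
import numpy as np

def softmax(x):
    z = np.asarray(x, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def difficulty_weights(difficulty, T=0.7):
    """Hypothetical temperature-scaled weighting over a batch: harder
    samples get a larger entropy-penalty weight; weights average to 1."""
    w = softmax(np.asarray(difficulty) / T)
    return w * len(w)

def adaptive_entropy_loss(logits, labels, difficulty, alpha=0.1, T=0.7):
    """Cross entropy minus an alpha-weighted, difficulty-modulated entropy
    bonus (illustrative form only, not the paper's exact Eq. 9)."""
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return float((ce - alpha * difficulty_weights(difficulty, T) * entropy).mean())
```

With `alpha=0` this reduces to plain cross entropy; increasing `alpha` rewards higher predictive entropy, most strongly on the samples scored as difficult, which discourages over-confident predictions on exactly those inputs.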
**Q4: The reason for the absence of CRL as a baseline in the main tables, specifically Tables 2 and 3:** We took CRL as an approach that exploits sample difficulty measures for uncertainty in the ablation studies, thus we primarily compared it with other methods based on sample difficulty, i.e., in Sec. 5.5 for the comparison of different sample difficulty measures. Of course, we are happy to provide misclassification and OOD detection results for CRL. As shown in Tab. A and Tab. B below, our method still outperforms CRL on misclassification and OOD detection. We will add detailed results in the final version.
Tab. A: The comparison of misclassification detection performance (%) for CRL and our method. Each cell reports MSP / Entropy.
| Dataset | Method | FPR-95\% $\downarrow$ | AUROC $\uparrow$ | AUPR $\uparrow$ |
|------------|----------|----------------------|------------------|-----------------|
| C100 | CRL | 44.80 / 46.08 | 86.67 / 85.98 | 95.55 / 95.49 |
| C100 | Proposed | **42.71 / 43.22** | **87.50 / 87.03** | **96.10 / 96.02** |
| ImageNet1k | CRL | 46.03 / 48.01 | 86.11 / 84.33 | 94.41 / 93.89 |
| ImageNet1k | Proposed | **45.69 / 46.72** | **86.53 / 85.23** | **94.76 / 94.31** |
Tab. B: The comparison of near-OOD detection performance (%) for CRL and our method. Each cell reports MaxLogit / Entropy.
| $D_{in}$ / $D_{out}$ | Method | FPR-95\% $\downarrow$ | AUROC $\uparrow$ | AUPR $\uparrow$ |
|------------------------|----------|----------------------|------------------|-----------------|
| C100/C10 | CRL | 58.13 / 58.54 | 79.91 / 80.13 | 81.75 / 81.89 |
| C100/C10 | Proposed | **55.48 / 55.60** | **80.20 / 80.72** | **82.51 / 82.84** |
| ImageNet1k/iNaturalist | CRL | 35.07 / 34.65 | 90.11 / 90.32 | 97.96 / 97.81 |
| ImageNet1k/iNaturalist | Proposed | **32.17 / 34.19** | **91.03 / 90.65** | **98.03 / 97.99** |
**Q5: Is there a specific justification for not including C10 in the misclassification detection experiment?** Misclassification detection is more relevant for scenarios with low classification accuracy, whereas the accuracy of C10 has already reached about 95\%. Therefore, we focused on C100 and ImageNet1k, which are more interesting/challenging benchmarks for misclassification detection.
**Q6: Editing errors and colors in figures 4, 7, and 11:** Thank you for your suggestions, and we will update them in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' efforts. Incorporating supplementary "difficulty" characteristics into an existing dataset via pre-trained models represents a straightforward yet intriguing strategy to offer extra insights into the data. The supplementary experiments involving DINOv2 serve to reaffirm this notion, and integrating them into the main text would be beneficial. More precisely, showcasing the primary results with ViT (supervised), MAE-ViT, CLIP-ViT, and DINO-ViT would make the paper solid and underscore its alignment with the title (i.e., pre-trained models). I am pleased to raise my score and anticipate seeing the concerns raised during the rebuttal phase addressed in the final manuscript.
---
Reply to Comment 1.1.1:
Title: Thank you for raising the score!
Comment: We are pleased to know that Reviewer Ye8r finds our rebuttal satisfactory and raises the score. We will address the concerns in the final manuscript, as stated in the rebuttal. | Rebuttal 1:
Rebuttal: Firstly, we would like to express our gratitude for the thoughtful reviews, which help to further improve our paper. We are pleased that the reviewers found our paper to be **novel (innovative)** (Reviewers FmxN, YeMZ, Lxfs), **well-written and convincing** (All), **straightforward to understand** (Reviewer FmxN), the studied problem is **important** and the training process is **flexible** (Reviewer Lxfs), and our experiments to be **strong and convincing** (Reviewer 93FC).
Secondly, for some common concerns: 1) **no standard deviation (error bars) for the experimental results**, we provide Table Ⅰ in the uploaded rebuttal PDF, which shows the std is on a par with other methods; 2) **why is RMD a better measure of sample difficulty than K-Means and MD**, we provide detailed explanations to the reviewers individually. In addition, we also add an illustrative example in Figure A of the uploaded rebuttal PDF, which hopefully visualizes well the benefit of using relative distance; 3) **whether our method can be applied to other large-scale pre-trained models**, we performed a new experiment, using the recent DINOv2 as a pre-trained model to quantify sample difficulty, and report ACC and ECE on CIFAR-10/100 in Table Ⅱ of the uploaded rebuttal PDF. While both CLIP and DINOv2 were trained on large-scale datasets, they followed different training procedures. As shown, our method based on DINOv2 also improves over the baselines in both ACC and ECE, delivering superior performance similar to CLIP. Hence, our method itself is not CLIP-specific, and can work with other pre-trained models.
We hope this message provides a good summary of the reviews and our responses. We further address the comments of each reviewer individually.
Pdf: /pdf/6f95b2fa896b5911e03cff9f68beab5969271746.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In settings where deep neural networks are used for critical tasks, it is crucial to ensure that they are calibrated and capable of reliable predictions. It is desirable to have the ability to measure the confidence of the model's predictions and reject those that have high uncertainty. To achieve this, the authors suggest using neural networks that have been pre-trained on large and diverse datasets as calibration auxiliaries for downstream tasks.
Through extensive experimentation on benchmarking datasets such as ImageNet, the authors demonstrate that using pre-trained models such as CLIP to define the relative Mahalanobis Distance in feature space is a valid measure of sample difficulty. Additionally, they introduce a novel regularization term that accounts for the difficulty of individual samples. In section 5, they provide an extensive evaluation of their proposed learning objective in comparison to other standard regularizers. While difficulty-aware regularization can be somewhat costly, it ultimately reduces the expected calibration error.
Strengths: Some strengths of the work:
**Originality**: Researchers are actively studying how to measure the difficulty of individual instances in machine learning to improve its reliability. The authors have introduced and proven the effectiveness of RMD as a reliable method for measuring this difficulty.
**Clarity**: I found the paper well-written and straightforward to understand. The authors have provided ample empirical evidence to support their idea of sample difficulty, and they have included a comprehensive discussion while comparing it to relevant baselines.
**Quality**: Understanding the limitations of existing algorithms on complex problems requires measuring sample difficulty. This research introduces innovative regularizers that enhance the models' calibration in downstream training.
**Significance**: Alongside establishing robust benchmarks and regularizers for calibrated training, the authors also find that self-supervised learning algorithms (such as MAE) learn richer representations that better estimate hardness.
Weaknesses: 1. When it comes to measuring sample difficulty, the effectiveness of using large-scale pretrained datasets depends on the degree of domain/distribution shift in the downstream task. For example, it remains uncertain whether the same measures can be applied to medical imaging tasks as compared to natural image classification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors explore example difficulty in the context of classification tasks. Can similar measures be extended to dense-prediction tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: It's important to mention that measuring the difficulty of samples adds extra computing workload. To provide clarity to readers, the authors are encouraged to discuss this topic and any potential limitations of their proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for finding our work innovative, well-written and straightforward to understand, and showing convincing validation, as well as providing valuable comments. We answer the specific questions below. We hope you will find our response satisfactory and raise your score accordingly.
**Q1: It remains uncertain whether the same measures can be applied to medical imaging tasks as compared to natural image classification:** We understand the concern may arise from the fact that CLIP may be inadequate for the medical domain, as medical images can be OOD (out-of-distribution) to CLIP. Nevertheless, we would like to note that our procedure of deriving sample difficulty is compatible with other pre-trained models. For instance, we add a new experiment to demonstrate that our method is plug-and-play: switching from CLIP to DINOv2 [*1] delivers gains comparable to those on the natural image classification benchmarks (see Table II in the attached rebuttal PDF in our global response). For the medical domain, MedCLIP [*2] can be a more interesting alternative than CLIP/DINOv2 for practicing our method. We will include this discussion in the final version.
Reference:
[*1] DINOv2: Learning Robust Visual Features without Supervision, Oquab et al., arXiv:2304.07193
[*2] MedCLIP: Contrastive Learning from Unpaired Medical Images and Text, Wang et al., EMNLP 2022.
**Q2: Can sample difficulty measures be extended to dense-prediction tasks?** It is an interesting perspective. The challenge for dense-prediction tasks lies in the coexistence of multiple objects in one image. To use our measure, getting the feature per object instance is important. Taking object detection as an example, a natural way to use our method would be to use the ground-truth bounding boxes of the training samples to extract the features per object instance (e.g., using ROI align) before scoring the sample difficulty of the training set. The sample difficulty can then be used for regularizing the classification head, which also operates at the object-instance level. We will include such a discussion in the final version.
**Q3: Mention the computation overhead and discuss potential limitations of their proposed work:** We will add the computation overhead into the implementation details. Briefly, we can calculate per-sample difficulty score once before training to save computing overhead (as stated in line 213), so training and inference overhead is basically the same as other methods. Overall, we find the computation overhead very affordable, not being a limiting factor of our method. As for a general limitation discussion, we have included some in the conclusion in the form of future extensions of our method. Upon revision, we will incorporate the discussion in Q1 as well. | null | null | null | null | null | null |
Structured State Space Models for In-Context Reinforcement Learning | Accept (poster) | Summary: The authors propose a modification to S5 that enables "resetting" the recurrent state, allowing it to function as an RNN replacement in RL. They update the scan operator to utilize the `done` flag and use this to reset the recurrent state. They evaluate the resettable S5 on a portion of the POPGym suite and the bsuite memory task. They show that the S5 is able to solve long-term memory tasks previous methods were unable to solve.
Next, they implement a metalearning approach from Kirsch et al. that enables learning across various action and observation space sizes by using random linear projections of the spaces to a fixed-size vector. They leverage S5's long-term memory to obtain good results across DMLab, on both in-distribution and out-of-distribution tasks.
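The random-projection interface described above can be sketched in a few lines; this is an illustrative numpy example under our own naming (`make_projection`, a latent size of 16), not the actual API from Kirsch et al.:

```python
import numpy as np

def make_projection(rng, in_dim, latent_dim=16):
    """Sample a fixed random linear projection for one meta-episode.

    Observations (or actions) of any dimensionality are mapped to a
    fixed-size latent vector so a single network can train across
    environments with different space sizes. The 1/sqrt(in_dim) scale
    keeps the projected variance roughly independent of in_dim.
    Function name and latent size are our own assumptions.
    """
    return rng.normal(0.0, 1.0 / np.sqrt(in_dim), size=(latent_dim, in_dim))

rng = np.random.default_rng(0)
obs_small = rng.normal(size=4)   # e.g. a CartPole-sized observation
obs_large = rng.normal(size=24)  # e.g. a larger DMLab-sized observation

z_small = make_projection(rng, 4) @ obs_small
z_large = make_projection(rng, 24) @ obs_large
assert z_small.shape == z_large.shape == (16,)  # both feed one network
```

Because a fresh projection is drawn each meta-episode, the agent cannot memorize which input coordinate means what and must instead infer the task in context.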
Strengths: The paper is well written and addresses a painful issue: that partially observable policies are inefficient to train. They show that S5 is the first model to solve the difficult RepeatHard task from POPGym. It is clear the author put a ton of work into implementation and put care into baseline implementations to ensure a fair comparison. The experimental setup is quite broad in the sense that it covers both POMDPs and metalearning.
Weaknesses: #### Main weakness:
Perhaps I am missing some information, and if so, please correct me. But the reset, the main contribution of the paper, appears to be a trivial change to S5. Line 94 already provides the initial state of vanilla S5, which is $e_0 = (I, 0)$. Their reset is just setting the recurrent state to the initial state when they receive a `done` flag from the environment. This is the standard for any recurrent model in RL, so I do not believe this counts as a novel contribution.
With a slight change of notation, the S5 scan operator is
$f(a_t, b_t, a_{t-1}, b_{t-1}) = \begin{bmatrix} a_t \odot a_{t-1} & a_t \otimes b_{t-1} + b_t \end{bmatrix}$
The authors propose that at $t=0$, since we do not have $a_{t-1}, b_{t-1}$ we evaluate $f$ using $a_{t-1} = I$ and $b_{t-1} = 0$
$f(a_t, b_t, I, 0) = \begin{bmatrix} a_t & b_t \end{bmatrix}$
So to reset: just put the predefined vanilla S5 initial state $e_0$ into S5.
#### Other Weaknesses
- The authors show very promising performance on Repeat Hard and a few stateless cartpole environments. That said, POPGym has something like 50 total tasks, so it is a bit disappointing that they picked the easiest control tasks, as I imagine S5 would do much better than GRUs on harder tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The reuse of $a$ is confusing in the scan terms, since $a$ is also the subscript referring to the $A$ matrix. Perhaps a different term would be more clear?
- Figure 3: This is across all POPGym envs or just a subset? From briefly reviewing the code I assume a subset.
- I'm not an expert on meta rl, but it is very promising that their approach can do well on out-of-distribution tasks.
- Experiments and implementation are the strength of this paper; I just wish they had put the S5 reset in the appendix or a footnote
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not have a limitations section. It is probably worth adding a short section on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to first thank the reviewer for their detailed and technical review. We are glad that the reviewer finds our approach promising and the experiments fair and broad.
### On the Reset
>the reset, the main contribution of the paper, appears to be a trivial change to S5
It’s not immediately obvious why the parallel reset is non-trivial. Indeed, for any recurrent model in RL, resetting the hidden state during inference (and training) is standard. However, it’s not clear how to do this when one is *parallelizing across the time dimension* during *training/backpropagation* given a batch of data of shape $(\text{minibatch size}, \text{sequence length}, \text{observation size})$. For example, most S4-like models implement their backwards pass using *convolutions* (instead of recurrence) to parallelize across time -- it’s not clear how or if we can adjust the convolution kernel to account for “dones” from the environment.
Instead, S5 uses *associative scans*, which leverage a *binary associative operator* to parallelize across the time dimension. While it’s immediately obvious that simple operators, such as multiplication and addition, can be implemented associatively, previous implementations have not shown how to implement a “reset” function associatively. Our contribution is introducing such an operator (Line 132) that computes the desired output (Line 135) and is provably associative (Appendix A).
With this in mind:
>So to reset: just put the predefined vanilla S5 initial state $e_0$ into S5
We assume the reviewer is asking why we cannot just insert $e_0$ where there are “dones”. We believe this would not compute the desired recurrence. While indeed (using the reviewer’s notation) $f(a_t, b_t, I, 0)$ computes the desired value, note that $f(I, 0, a_{t-1}, b_{t-1})$ (i.e. the other side of the associative operation) returns $[a_{t-1} b_{t-1}]$. Thus, when scanning over the sequence $(e_{t-1}, e_0, e_t)$, we would get $f(a_t, b_t, a_{t-1}, b_{t-1}) \neq [a_t, b_t]$. This means it does not reset the hidden state. At no point would inserting $e_0$ actually *reset* the hidden state of the scan -- it would merely repeat the previous element.
Furthermore, inserting elements into the scan would usually involve dynamic shapes or masking, which is often inefficient or undesirable when performing batch learning.
We have updated our manuscript to clarify why we cannot just use $e_0$ to perform the reset
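To make the argument above concrete, here is a minimal scalar sketch (our own illustration, not the paper's matrix-valued implementation) of a resettable binary operator for the recurrence $h_t = a_t h_{t-1} + b_t$: each scan element carries a done flag, the operator is associative, and its prefix outputs match a sequential loop that zeroes the state at episode boundaries:

```python
from functools import reduce

def combine(left, right):
    """Resettable binary operator for h_t = a_t * h_{t-1} + b_t.

    Each element is (a, b, done). A done flag inside the right
    operand blocks the left operand's contribution entirely, which
    is exactly what "resetting the hidden state" requires. Scalar
    sketch only; the paper's operator acts on the S5 matrices.
    """
    a_l, b_l, d_l = left
    a_r, b_r, d_r = right
    if d_r:
        return (a_r, b_r, True)
    return (a_r * a_l, a_r * b_l + b_r, d_l or d_r)

def sequential(elems):
    """Reference loop that zeroes the hidden state on done."""
    h, out = 0.0, []
    for a, b, d in elems:
        if d:
            h = 0.0
        h = a * h + b
        out.append(h)
    return out

# Four timesteps with an episode boundary (done=True) at t=2.
elems = [(0.5, 1.0, False), (0.9, 2.0, False),
         (0.5, 3.0, True), (0.9, 4.0, False)]

# Associativity: any bracketing of `combine` gives the same result,
# which is the property a parallel scan exploits.
left = reduce(combine, elems)
right = combine(elems[0], combine(elems[1], combine(elems[2], elems[3])))
assert left == right

# Prefix outputs of the scan match the sequential reset recurrence.
acc, outs = elems[0], [elems[0][1]]
for e in elems[1:]:
    acc = combine(acc, e)
    outs.append(acc[1])
assert all(abs(x - y) < 1e-9 for x, y in zip(outs, sequential(elems)))
```

Note how simply splicing an identity element $(1, 0)$ into the sequence would not achieve this: the left operand's state would still flow through, which is the point made in the reply above.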
---
*Because this was the reviewer’s primary concern, we hope that our response has clarified this contribution and that the reviewer will let us know of any further concerns on this topic or, otherwise, consider updating their score.*
---
> The reuse of $a$ is confusing in the scan terms
Good catch! We’ve updated the paper to reflect this change
### On the tasks
> POPGym has something like 50 total tasks, so it is a bit disappointing that they picked the easiest control tasks
> This is across all POPGym envs or just a subset?
Indeed, this is just a subset of the POPGym envs. While POPGym does have many tasks, many of them are simply easier versions of the ones we evaluated on. For example, we evaluate only on the “Hard” version of these tasks while there are “Medium” and “Easy” versions that are not particularly informative. Unfortunately, there are no harder control tasks in POPGym beyond CartPole and Pendulum; however, we evaluated on the environment that most architectures struggled the most on: RepeatPreviousHard. The best-performing architecture in POPGym reported a score of 0.191 while we obtained a score of 0.91.
We’ve since added further additional results on a long-horizon version of POPGym StatelessCartPole that involves sequences of length up to $6400$ to increase the difficulty and context. We’ve included a brief explanation and analysis in the general response.
> It is probably worth adding a short section on [limitations].
Good point -- thanks for the feedback! We have included this in our manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks, I did not consider the other side of the scan.
The Bsuite, long-term CartPole, and RepeatHard task performance is impressive, but I was hoping to see how S5 performs on a broader range of POMDPs. Looking at the POPGym website, they appear to have tasks like Minesweeper or Autoencode which I would argue test different memory capabilities than inferring position from velocity or store/recall from RepeatHard.
Your results are good, I just think they could be more convincing if your model was demonstrated on more tasks. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Title: Additional POPGym Results
Comment: We thank the reviewer for the fast response, and for updating their score. We’re happy that the reviewer finds the performance on the Bsuite, long-term CartPole, and RepeatHard to be impressive, though we understand that the reviewer is disappointed that we did not evaluate on the full POPGym suite.
To further address the reviewer’s concerns, we implemented as many of the POPGym environments as we could in pure JAX. We believe that this in and of itself is a significant contribution, and will provide and open-source the code upon acceptance (thanks to the reviewer). This will speed up research in partially-observable RL significantly, since it allows researchers to run statistically-significant experiments in minutes rather than several hours.
On top of the “Repeat”, “Cartpole,” and “Pendulum” environments in the original paper, we’ve now also implemented the “Minesweeper”, “Higher Lower”, “Count Recall”, “Autoencode”, “Multiarmed Bandit”, and “Concentration” environments.
This leaves only 2 environments unimplemented: “Battleship” and “Labyrinth”, which would take more effort. We plan to implement these before the camera-ready version. Should the reviewer believe that these environments are crucial to their evaluation (or should we finish implementing early), we will report the results before the Reviewer Discussion deadline.
We only report on the “Hard” difficulty of the *new* environments for brevity here. Results for "Medium" and "Easy" are similar and we can report them should the reviewer request them. We ran 4 vectorized seeds on an NVIDIA A100 to get the following results:
## MMER
| Method | Minesweeper | Higher Lower | Count Recall | Autoencode | Multiarmed Bandit | Concentration |
|-------------------|--------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| S5 with $\oplus$ | **-0.296 ± 0.002** | **0.505 ± 0.001** | **-0.833 ± 0.000** | **-0.296 ± 0.002** | **0.562 ± 0.019** | **-0.831 ± 0.001**|
| S5 with $\bullet$| -0.345 ± 0.003 | 0.499 ± 0.000 | -0.877 ± 0.000 | -0.345 ± 0.003 | 0.438 ± 0.019 |**-0.831 ± 0.001** |
| GRU | -0.313 ± 0.003 | **0.505 ± 0.000** | **-0.832 ± 0.001**| -0.313 ± 0.003 | **0.575 ± 0.008** | **-0.830 ± 0.001**|
| MLP | -0.383 ± 0.004 | **0.504 ± 0.000** | -0.877 ± 0.000 | -0.383 ± 0.004 | 0.306 ± 0.012 | -0.832 ± 0.000 |
## Runtime (Seconds) for 4 Vectorized Seeds
| Method | Minesweeper | Higher Lower | Count Recall | Autoencode | Multiarmed Bandit | Concentration |
|-------------------|---------------|--------------|--------------|--------------|-------------------|---------------|
| S5 with $\oplus$ | 1030.565 | 1016.776 | 1038.935 | 1031.494 | 1458.849 | 1043.843 |
| S5 with $\bullet$| 939.419 | 927.000 | 953.165 | 940.351 | 1359.034 | 954.630 |
| GRU | 10309.002 | 10379.492 | 10716.769 | 10585.172 | 10452.576 | 9775.959 |
| MLP | 172.651 | 157.777 | 179.184 | 173.515 | 586.735 | 233.702 |
Notably, because we are running on an A100 instead of an A40, we achieve significantly faster speeds for S5, nearly 10x that of the GRU in these environments while obtaining very similar results across the board.
We do not expect the S5 architecture to necessarily outperform the GRU significantly since these environments do not test long-term memory; however, it is notable that S5 still runs approximately 10x faster while achieving similar results.
We hope that these latest results address the reviewer’s latest concerns. We would be happy to engage in further discussion with the reviewer or perform more experiments. We would like to once again thank the reviewer for the timely and thoughtful feedback, which we believe has effectively strengthened the paper's quality and contributions. | Summary: This paper investigates the effectiveness of structured state-space sequence (S4) models and in particular its variant S5 in reinforcement learning settings. To apply S5 to reinforcement learning, the authors propose a modified associative operator that handles episodic resets, allowing S5 to train over sequences spanning multiple episodes. Across a suite of partially observed RL environments from bsuite and POPGym, S5 outperforms RNN and Transformer baselines while being significantly more efficient at training and inference. To further evaluate the S5's ability to perform in-context RL and generalize out-of-distribution, the authors introduce a multi-environment meta-learning task based on DM Control where each meta-episode features different random projections of observation and action spaces. Not only does S5 outperform MLP and LSTM on in-distribution tasks, but it demonstrates better generalization to held-out tasks as well.
Strengths: - The paper proposes a novel associative operator which allows the S5 model to handle episodic resets in RL settings. In doing so, they demonstrate the effectiveness of the S5 model for partially-observed and meta-RL tasks, both in terms of asymptotic performance and training/inference speed.
- The meta-RL environment with random observation and action projections can be used as an evaluation benchmark for future work.
Weaknesses: - The proposed method is a rather marginal modification to the S5 model. And according to the results in Table 2, the new operator does not significantly outperform the vanilla S5 operator.
- It would be fine to thoroughly evaluate an existing method in a new setting, making it an empirical analysis paper. However, the experiments in this paper are insufficient. The bsuite memory length is a toy environment. And there is a lack of baselines and variety of tasks in the POPGym experiments. It is elusive why the authors include MLP but not other baselines from [1].
- A major benefit of state-space models is their ability to handle significantly longer sequences than RNNs and Transformers, but the paper does not demonstrate an RL setting where this can be useful.
- The meta-learning setting, while novel, feels a bit contrived. It is unclear what the implications of this setting are.
[1] Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, and Amanda Prorok. POPGym: Benchmarking partially observable reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - What is the reason behind the improved OOD generalization of the S5 model compared to baselines?
- From Table 2, the proposed operator does not significantly outperform the vanilla S5 operator. Can you include more comparisons with the vanilla operator? E.g. by evaluating the vanilla S5 model on the meta-learning tasks.
- According to [1], S4D is the worst-performing method out of all baselines. Is there a reason behind the huge improvement in asymptotic performance?
[1] Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, and Amanda Prorok. POP- Gym: Benchmarking partially observable reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have not adequately addressed the limitations in the paper. I would imagine the limitations to be similar to those of state-space models. For example, they generally perform worse than RNN on partially observed RL settings, and they are hard to scale to higher dimensions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. We are happy to hear that the reviewer agrees that we “demonstrate the effectiveness of the S5 model for partially-observed and meta-RL tasks, both in terms of asymptotic performance and training/inference speed” and that “the meta-RL environment with random observation and action projections can be used as an evaluation benchmark for future work.” We would like to address the reviewer’s concerns.
> The proposed method is a rather marginal modification to the S5 model...in Table 2, the new operator does not significantly outperform the vanilla S5 operator
> From Table 2, the proposed operator does not significantly outperform the vanilla S5 operator.
In Table 2, we show that the new operator significantly outperforms the vanilla S5 operator in $3/5$ of the environments, and matches it in the remaining $2/5$. In the most challenging environment, the new operator raises the score from $0.76$ to $0.91$. We believe this is very significant, and visualize it more clearly in Figure 10 of our rebuttal.
We show this further in the shared rebuttal. The Meta-StatelessCartPole (Figure 8a and 8b) experiments further demonstrate its significance on a task that requires longer memory.
Furthermore, state-space models in on-policy RL in general have not been rigorously studied and are not widely used. The operator is just one of the contributions. We hope this paper and these results can encourage further adoption of this architecture to speed up and improve future research involving memory-based architectures for RL.
> It is elusive why the authors include MLP but not other baselines from [1].
We report GRU results since it was generally the most consistent best-performing baseline evaluated in [1]. It is also one of the most widely-used. We include the MLP to demonstrate the performance vs. runtime tradeoff in Figure 3.
> A major benefit of state-space models is their ability to handle significantly longer sequences than RNNs and Transformers, but the paper does not demonstrate an RL setting where this can be useful.
Indeed, that is *a* benefit of state-space models. There are *many other benefits* of state-space models, many of which we show in the paper! In particular, we get *significant speedups* (over 6x faster on POPGym!) while also *significantly outperforming the baselines*.
***We have also attached further additional results to show a scenario where longer sequence adaptation of up to $6400$ is useful. Please read the general rebuttal for plots (in Figure 8) and experimental setup***.
> The meta-learning setting, while novel, feels a bit contrived. It is unclear what the implications of this setting are.
Most existing RL settings do not require long sequence lengths (arguably *because* existing architectures cannot handle them), so we designed this meta-learning setting to construct one that does.
In the future, a more scaled up version of this setting with many more tasks and data augmentation could result in a *general* meta-learned reinforcement learning agent that can adapt to much farther OOD tasks.
> What is the reason behind the improved OOD generalization of the S5 model compared to baselines?
We believe it is because the S5 model can incorporate information over longer sequences, which we demonstrated in the Bsuite experiments (Figure 2). The ability to incorporate longer contexts could have led to a more general adaptation mechanism than one that can only adapt to recent transitions.
> According to [1], S4D is the worst-performing method out of all baselines. Is there a reason behind the huge improvement in asymptotic performance?
We hypothesize that it is because their implementation only uses one layer of S4D. While for most recurrent networks, stacking additional recurrent layers does not help in many tasks, including RL (see the architectures used for [2] and [3]), for state-space models it is extremely important for performance since it is a completely linear recurrence (and can only achieve non-linearity through depth).
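The claim above, that a purely linear recurrence gains expressivity only through depthwise nonlinearities, can be checked with a scalar toy model (our own sketch, unrelated to the S4D implementation): without nonlinearities between layers, a stack of linear scans is itself a single linear map, so superposition holds; inserting a `tanh` between layers breaks it.

```python
import numpy as np

def linear_scan(a, xs):
    """h_t = a * h_{t-1} + x_t with h_0 = 0: a scalar stand-in for
    one linear state-space layer."""
    h, out = 0.0, []
    for x in xs:
        h = a * h + x
        out.append(h)
    return np.array(out)

def stack(xs, nonlinear):
    """Two stacked scan layers, optionally with a nonlinearity between."""
    y = linear_scan(0.7, xs)
    if nonlinear:
        y = np.tanh(y)  # depthwise nonlinearity between layers
    return linear_scan(0.9, y)

x1 = np.array([1.0, -2.0, 0.5])
x2 = np.array([0.3, 0.8, -1.0])

# Without nonlinearities, the whole stack collapses to one linear map:
assert np.allclose(stack(x1 + x2, False), stack(x1, False) + stack(x2, False))
# With a nonlinearity between layers, superposition no longer holds:
assert not np.allclose(stack(x1 + x2, True), stack(x1, True) + stack(x2, True))
```

This is why a single-layer S4D model can only express linear input-output maps, consistent with the hypothesis above that depth matters for these architectures.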
>limitations to be similar to those of state-space models. For example, they generally perform worse than RNN on partially observed RL settings, and they are hard to scale to higher dimensions.
We agree with the reviewer and have since added a limitation section, which includes implementation challenges (e.g. the lack of associative scans in PyTorch) and potentially diminishing speedup returns on tasks with short sequences or other computational bottlenecks (such as vision-based tasks). However, we are not sure why the reviewer believes that SSMs perform worse than RNNs on partially observed RL settings or that they are hard to scale to higher dimensions. Our experiments show that they perform significantly better on partially observed RL settings. Concurrent work has shown similar results in offline RL [5], and SSMs have been scaled to higher dimensions [4].
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would reconsider their assessment. We’d be happy to engage in further discussions.*
---
[1] Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, and Amanda Prorok. POP- Gym: Benchmarking partially observable reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.
[2] OpenAI, C. Berner, et al. "Dota 2 with large scale deep reinforcement learning." arXiv preprint arXiv:1912.06680 2 (2019).
[3] Vinyals, Oriol, et al. "Grandmaster level in StarCraft II using multi-agent reinforcement learning." Nature 575.7782 (2019): 350-354.
[4] Deng, Fei, Junyeong Park, and Sungjin Ahn. "Facing off World Model Backbones: RNNs, Transformers, and S4." arXiv preprint arXiv:2307.02064 (2023).
[5] David, Shmuel Bar, et al. "Decision S4: Efficient Sequence-Based RL via State Spaces Layers." The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and clarifications. From Fig. 3 of the rebuttal it does seem that the proposed associative operator leads to an overall improvement over vanilla S5, which addresses my main concern. While I still view this paper as more of an empirical evaluation of S5 in RL settings, I believe this is a reasonable contribution and will adjust my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the response! Have you looked at our new POPGym results?
Comment: We would like to greatly thank the reviewer for acknowledging our rebuttal and adjusting their score.
We do not wish to take up more of the reviewer’s time, **but we would like to confirm if the reviewer has also had a chance to read our [latest response to Reviewer gUv9](https://openreview.net/forum?id=4W9FVg1j6I&noteId=13iOA4ADUV), which includes new results on many more POPGym environments.** We copy the key section below.
---
## Additional POPGym Environments
To further address the reviewer’s concerns, we implemented as many of the POPGym environments as we could in pure JAX. We believe that this in and of itself is a significant contribution, and will provide and open-source the code upon acceptance (thanks to the reviewer). This will speed up research in partially-observable RL significantly, since it allows researchers to run statistically-significant experiments in minutes rather than several hours.
On top of the “Repeat”, “Cartpole,” and “Pendulum” environments in the original paper, we’ve now also implemented the “Minesweeper”, “Higher Lower”, “Count Recall”, “Autoencode”, “Multiarmed Bandit”, and “Concentration” environments.
This leaves only 2 environments unimplemented: “Battleship” and “Labyrinth”, which would take more effort. We plan to implement these before the camera-ready version. Should the reviewer believe that these environments are crucial to their evaluation (or should we finish implementing early), we will report the results before the Reviewer Discussion deadline.
We only report on the “Hard” difficulty of the *new* environments for brevity here. Results for "Medium" and "Easy" are similar and we can report them should the reviewer request them. We ran 4 vectorized seeds on an NVIDIA A100 to get the following results:
### MMER
| Method | Minesweeper | Higher Lower | Count Recall | Autoencode | Multiarmed Bandit | Concentration |
|-------------------|--------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| S5 with $\oplus$ | **-0.296 ± 0.002** | **0.505 ± 0.001** | **-0.833 ± 0.000** | **-0.296 ± 0.002** | **0.562 ± 0.019** | **-0.831 ± 0.001**|
| S5 with $\bullet$| -0.345 ± 0.003 | 0.499 ± 0.000 | -0.877 ± 0.000 | -0.345 ± 0.003 | 0.438 ± 0.019 |**-0.831 ± 0.001** |
| GRU | -0.313 ± 0.003 | **0.505 ± 0.000** | **-0.832 ± 0.001**| -0.313 ± 0.003 | **0.575 ± 0.008** | **-0.830 ± 0.001**|
| MLP | -0.383 ± 0.004 | **0.504 ± 0.000** | -0.877 ± 0.000 | -0.383 ± 0.004 | 0.306 ± 0.012 | -0.832 ± 0.000 |
### Runtime (Seconds) for 4 Vectorized Seeds
| Method | Minesweeper | Higher Lower | Count Recall | Autoencode | Multiarmed Bandit | Concentration |
|-------------------|---------------|--------------|--------------|--------------|-------------------|---------------|
| S5 with $\oplus$ | 1030.565 | 1016.776 | 1038.935 | 1031.494 | 1458.849 | 1043.843 |
| S5 with $\bullet$| 939.419 | 927.000 | 953.165 | 940.351 | 1359.034 | 954.630 |
| GRU | 10309.002 | 10379.492 | 10716.769 | 10585.172 | 10452.576 | 9775.959 |
| MLP | 172.651 | 157.777 | 179.184 | 173.515 | 586.735 | 233.702 |
Notably, because we are running on an A100 instead of an A40, we achieve significantly faster speeds for S5, nearly 10x that of the GRU in these environments while obtaining very similar results across the board.
We do not expect the S5 architecture to necessarily outperform the GRU significantly since these environments do not test long-term memory; however, it is notable that S5 still runs approximately 10x faster while achieving similar results.
---
> I still view this paper as more of an empirical evaluation of S5 in RL settings
This is a very fair interpretation! Hopefully our additional results on Meta-StatelessCartPole and our six additional POPGym environments would further the strength of the empirical evaluations, which we know was also one of the reviewer's key concerns.
We hope that these results further address the reviewer’s remaining concerns. We would be happy to engage in further discussion with the reviewer and would like to thank the reviewer for their feedback, which we believe has strengthened the paper significantly. | Summary: Structured state space sequence (S4) models deliver good performance on long-range sequence modeling tasks, fast inference speed and parallelize training, making them suitable for many RL settings. The authors propose a modification to the recently proposed S5 architecture and apply it to RL tasks. Their proposed model outperforms Transformers in terms of runtime and memory complexity (on a toy task), and RNNs in terms of task performance. They aim to show the efficacy of SSMs for in-context adaptation to new task variations.
Strengths: - The authors are the first (to the best of our knowledge) to leverage S4-like models for RL. This is the main novelty of the paper, and we consider this an important contribution.
- The authors propose a modification/fix for the S5 architecture to enable handling sequences of varying lengths in an RL setting.
- They show that S4-based models have advantages over commonly used Transformers (runtime, memory complexity) and RNNs (task performance) on a toy task (Figure 2).
Weaknesses: **Major concerns**:
While the method to add a mechanism for resetting the state of S5 layers seems sound, the proposed evaluation and in-context RL setting do not make sense to me.
The authors themselves state that using S5 layers for RL is problematic, as such models would be "accessing memory and context from other episodes".
Why then evaluate the proposed solution in the in-context setting, where accessing context (e.g. from other episodes) is required?
Furthermore, I do not understand what the authors mean by "in-context RL", and apart from the conclusion the authors never even mention it again, even though it's in the title.
Instead, the authors perform meta-RL experiments by training models on multiple tasks simultaneously; however, even here I don't understand the evaluation protocol.
I would not call fine-tuning for another 2 billion timesteps an "evaluation".
How does this experiment show "in-context adaptation"?
Finally, the authors argue that comparison to transformer based models is not possible due to poor runtime performance.
However, at least for the POPGym environment this should have been possible, seeing as the authors state in the appendix that the runtime per trial with GRU is only 3 minutes.
And for in-context experiments a transformer-based baseline should definitely be included.
**Minor comments:**
- Algorithm 1 seems redundant and can be moved to the Appendix, as it only shows a meta-RL loop with random projections.
- Figure 1 is too large for its information content.
- Line 166: “mmemory”
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Figure 2:
- How do models perform beyond a sequence length of 512? S5 and self-attention reach the same level of performance at all lengths. How does this change for longer sequence lengths (at least up to 2048)?
Do Transformers outperform S5 beyond some context length?
No standard deviations, only single seed?
- How does the memory consumption differ between the compared methods?
- Figure 3, Table 2:
- What sequence length is used? Is a long sequence length even required for these tasks?
- The performance gains of S5 over GRU come from the “Repeat Previous Hard” task. Why is this the case? On all other tasks, they perform the same. Seems like the benchmark is too easy to solve and not a suitable test-bed.
- Why are the standard deviations 0.0 for all methods on cartpole? Seems unlikely across 8 seeds (especially in the noisy setting).
- Section 4.3:
- The setup for this experiment needs clarification:
- Why does this experiment demonstrate in-context learning abilities? Does the model always only observe observations from the current episode?
- Are you training on held-out tasks for 2B steps (as shown on the x-axis) or just doing inference? From the learning curves, it looks like weights are updated. Why is this “evaluation” then? Please clarify.
- How would an agent trained from scratch compare against the pre-trained one? Learning the new tasks within 2B steps should not be an issue.
- Are you resetting S5 at episode boundaries during evaluation? Wouldn’t it be beneficial to maintain the context across episodes to encourage in-context learning abilities? It seems like resetting is a disadvantage here.
- How would the agents perform if trained and evaluated without random projections?
- Figure 7:
- In general, the considered tasks are very similar to the pre-training tasks. Four of the five tasks are minor variations of them. Is it correct to consider them OOD?
- Importantly, the model fails completely on pendulum_swingup. This suggests that, the task distribution is too narrow. Conducting this experiment on a larger number of tasks or robot morphologies may be more insightful.
- Why is “finger_turn_easy” in the OOD tasks, but the hard variant “finger_turn_hard” in the pre-training tasks? Shouldn't it be the other way round?
- Missing ablation on the proposed modification to S5. How does performance change when using the proposed modification vs. without?
- Please report parameter counts of the compared methods for all experiments (only reported for POPGym in the Appendix). Do all architectures have approximately the same amount of parameters?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: - Limitations of the proposed architecture have not been discussed sufficiently. As there is no comparison against the Transformer architecture, it is hard to assess the limitations of the proposed architecture.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to first thank the reviewer for their extremely thorough review. We are glad that the reviewer finds that investigating S4-like models for RL is an important contribution, especially since they have not been widely-adopted or thoroughly investigated in RL.
### In-Context Learning and Meta-Learning
> Why evaluate...where accessing context (e.g. from other episodes) is required?
> Are you resetting S5 at episode boundaries during evaluation?
In RL^2-like meta-RL methods, an “episode” can consist of multiple “trials” (the terminology on this is inconsistent between papers in the field). In meta-RL it is important to access context between different *trials* but not *episodes*. We have updated the manuscript to clarify this.
In our meta-RL setting we only allow the agent access to **one trial** because it already achieves near-optimal performance in the underlying tasks through within-trial adaptation. ***We have attached additional results to show multi-trial in-context adaptation in the general rebuttal Figure 8.***
> I do not understand what the authors mean with "in-context RL"...How does this experiment show "in-context adaptation"?
While in-context learning is now commonly used to refer to few-shot learning in transformers, by “in-context RL” we mean *in-context changes in behavior (adaptation)*. We hope Figure 8c makes this clearer, where we show the agent learns across trials!
> Are you training on held-out tasks for 2B steps?
> How would an agent trained from scratch compare?
Those plots are for evaluation throughout *meta-training*. The held-out tasks are thus evaluated at different points throughout *meta-training*. During the evaluation, we are just doing inference (including on held-out tasks). An agent trained from scratch could not solve an out-of-distribution task in one episode.
> The considered tasks are very similar to the pre-training tasks
It would require a very large collection of pre-training tasks (or inductive bias on the meta-learner) to enable any transfer to far out-of-distribution tasks. We believe that showing transfer to close, but still OOD tasks, is still a positive result that shows robustness and generalization.
> Why is “finger_turn_easy” in the OOD tasks?
The “easy” and “hard” versions of tasks are variants of the task with different sets of parameters or dynamics. Transferring either way demonstrates OOD performance. The choice of which versions were “held-out” was arbitrarily selected.
> How does performance change when using the proposed modification vs. without?
The experiments for the original DMControl suite are slow and expensive to run. We have instead performed a similar experiment using random linear projections of POPGym’s StatelessCartPole, which we share in the general response Fig 8. The results show that performance improves significantly when using the proposed modification.
> comparison to transformer based models
Transformer-based models are far too slow for us to run in the meta-DMControl setting and are an *uncommon architecture choice for on-policy RL* because of their runtime. While transformers can show fast training on supervised learning tasks, they are slow in reinforcement learning due to their poor inference speed, especially when using 2B frames.
> same amount of parameters?
One can reconstruct the sizes from the hyperparameters.
S5 Encoder: 1975040 parameters
LSTM Encoder: 2099200 parameters
### BSuite Results
> How do models perform beyond...(at least up to 2048)?...only single seed?
The original bsuite memory length evaluation only goes up to $105$; $512$ is already nearly 5x that number. Credit assignment becomes increasingly infeasible as the sequence length grows, especially since bsuite evaluation uses a fixed number ($10000$) of episodes for training, a fixed $\gamma=0.99$, and a basic A2C algorithm. We are unlikely to get *any meaningful information* about an architecture’s capabilities past 512 steps.
Furthermore, we expect self-attention’s total runtime to grow *cubically*, since the number of steps grows *linearly* and the cost per step grows *quadratically*. Sequence length $4096$ would therefore be expected to take several days for self-attention. We are currently running these experiments.
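As a back-of-the-envelope check (our own arithmetic, not a measured result), summing the quadratic per-step cost of self-attention over a rollout of length $T$ gives

$$\sum_{t=1}^{T} c\,t^{2} = c\,\frac{T(T+1)(2T+1)}{6} = \Theta(T^{3}),$$

so doubling the sequence length multiplies the expected runtime by roughly a factor of eight.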
In the meantime, we have reported results for sequences up to $2048$ (Fig 9). Our original results reported the median score across $5$ seeds. Our updated ones in Figure 9c report the mean and standard error across $5$ seeds.
> How does the memory consumption differ?
We report this in the rebuttal! Figure 9a.
### POPGym Results
> What sequence length is used? Is a long sequence length even required?
A sequence length of 1024 is used. We follow the training hyperparameters outlined in POPGym [1]. Even if long sequence lengths are not required, S5 *still trains six times faster and outperforms the baselines*. POPGym is not designed to test learning across long sequences; however, it is useful for evaluating architectures for partially-observed RL. We have experiments in Figure 8 that require long sequences.
> The performance gains of S5 over GRU come from the “Repeat Previous Hard” task. Why is this the case?
We’re not sure why S5 performs better. POPGym’s best-performing architecture, the LMU (MMER of $0.19$ vs. S5’s $0.91$), shares the underlying theory of continuous-time representations.
> Why are the standard deviations 0.0...on cartpole?
The choice of hyperparameters leads to very consistent policy updates since they train for many epochs on large batch sizes. For StatelessCartPole, all methods achieve a perfect score. For NoisyStatelessCartPole, the standard deviations are nonzero but only visible at more decimal places.
[1] Steven Morad, et al. “POPGym: Benchmarking partially observable reinforcement learning.”
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would reconsider their assessment. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: We thank the authors for their effort during the rebuttal. We appreciate the clarifications and additional experiments. Therefore, we decided to update our score accordingly.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for reading our rebuttal and acknowledging our improvements, clarifications, and additional experiments. We would like to also further thank the reviewer for updating their score accordingly.
We would be more than happy to discuss any remaining concerns or reservations the reviewer may have about the submission that are preventing them from increasing the score further.
Strengths: State space models are a natural fit for in-context meta-RL and long-term memory because of their unique combination of train-time parallelism and test-time constant inference. They are also thought to be more compatible with the continuous inputs seen in RL than other long-sequence domains like NLP that use discrete tokens. In-context RL has a sequence length barrier that goes somewhat unnoticed because common benchmarks do not require adaptation over more than a few steps. Applying S5 and future versions here may have great long-term benefits.
The experiments do clearly demonstrate that resettable S5 can replace LSTMs and Transformers in partially observed tasks.
The paper also proposes a meta-learning version of the DM Control suite using randomly projected state and action spaces. While this is a somewhat arbitrary way to expand the meta-distribution of tasks from a small set of popular benchmarks, it is probably better than the common alternative of turning similar environments into goal-conditioned tasks and then hiding the goal (Ant Goal, Cheetah Fwd-Back, Humanoid Dir, ...). This benchmark will be useful for future work.
The method could be applied to most agents that support S5's computation, and should be easy to use and reproduce.
Weaknesses: S5 would be best suited for extending the sequence lengths used by current methods, and could unlock a new level of difficulty in long-context RL. However, the experiments here stop short of exploring those limits. Instead, sequence lengths are capped at ~1k where LSTMs/Transformers are feasible but slower and more expensive. Wall-clock efficiency is a nice advantage but could also be improved by other factors like using a more efficient base algorithm. If the goal was to show that S5 is a valid substitute at existing sequence lengths while leaving expansion for future work, it would have been simpler to use more standard benchmarks in the meta-RL portion and compare to external baselines.
In general, the narrative of the paper gets a bit lost in the gray area between more traditional long-term memory and “in-context” meta-learning. The need for long sequences is motivated by multi-episodic in-context learners like RL^2 (lines 36-42), but the method (Sec. 3.1 and Alg 1) does not distinguish between episode resets (which would not reset the sequence model’s hidden state) and task resets. The first two experiments are focused on standard long-term memory, while the third appears to only evaluate zero-shot generalization to a meta-distribution of tasks. It would have been more interesting to evaluate RL^2-style multi-episodic learning, which would naturally extend the sequence length to highlight S5’s core advantages.
The need for a resettable hidden state within S5 is presented as a widespread barrier to its application in RL. But this is primarily caused by the data collection and optimization implementations common to on-policy policy gradient algorithms with parallel actors. I think it would be helpful to explain this issue in more detail than is provided (lines 111-144, 43-48). For example, there would be no need for the modified architecture when substituting S5 for RNNs/Transformers in an off-policy in-context learner like in [Ni et al.](https://arxiv.org/abs/2110.05038) The issue may also be avoided by batching the policy updates into padded sequences that do not cross episode boundaries.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please clarify if I am wrong in thinking that the DMControl tasks are zero-shot. The evaluation procedure is not as clearly discussed as it often is in meta-RL papers, and the appendix did not give more details.
Were there any experiments to dramatically increase the context length towards the range seen with S4 in LRA? I’m curious if there are insights on limitations or changes that would be needed to make this work in an RL setting. According to Table 5 the DM Control experiments used 64 TPUv3s, so I assume compute was not the limiting factor here.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no Limitations section or a clear discussion on limitations. However, the paper is proposing an architectural change to an existing method that has known limitations, and some of those limitations are improved by the S5 model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their well-thought review. The reviewer brings up many good points that we would like to address.
Firstly, we are glad that the reviewer finds that our paper clearly demonstrates that resettable S5 can replace LSTMs and Transformers, and that our proposed benchmark can be used in future work. We would also like to highlight that our code is provided in the supplementary material, which should further help with reproducibility and ease of use.
## On Longer Contexts
> sequence lengths are capped at ~1k where LSTMs/Transformers are feasible but slower and more expensive
> It would have been more interesting to evaluate RL^2-style multi-episodic learning, which would naturally extend the sequence length to highlight S5’s core advantages.
These are fair points! We originally wished to find benchmarks where longer sequence lengths would be helpful; however, we struggled to find scalable benchmarks (i.e. beyond POPGym), likely because current architectures would not be able to take advantage of the increased context. Because of this, we introduced our new meta-learning version of the DMControl Suite. We had expected that more in-context adaptation trials would be required because of the large space of tasks; however, our architecture was still able to perform near-optimally with just one trial. *We have since developed and evaluated the architecture on a version of StatelessCartPole where we simply perform random linear projections of the observation.* This involves sequences of length up to $6400$, and we analyze performance across multiple adaptation trials. We encourage the reviewer to read our general rebuttal and the attached plots in Figure 8.
>Were there any experiments to dramatically increase the context length towards the range seen with S4 in LRA? I’m curious if there are insights on limitations or changes that would be needed to make this work in an RL setting.
We agree with the reviewer that long-range tasks, especially meta-RL tasks, would be ideal for investigating dramatically increased context lengths. Firstly, we struggled to find appropriate benchmarks that require many thousands of steps of context. Ni et al. [1] find that for the vast majority of common POMDPs (including meta-RL tasks), short context lengths are often optimal -- and anything over just $100$ is considered “long”. It’s unclear if this is because long contexts are generally not useful, or if current benchmarks were designed with existing architectures in mind. Secondly, in such settings, credit assignment and long-horizon discounting become increasingly challenging. For example, the common discount factor of 0.99 (and even 0.999) vanishes after several thousand timesteps. We have since added a task that uses sequence lengths of up to $6400$ in Figure 8 of our general rebuttal.
## Others
> But this is primarily caused by the data collection and optimization implementations common to on-policy policy gradient algorithms with parallel actors. I think it would be helpful to explain this issue in more detail than is provided
This is a good point. The explanation in the current paper can be unclear (and it is impressive that the reviewer understands the nuances of these implementation details given that we did not fully explain them). We have since re-written this section to further explain why on-policy algorithms with parallel actors often make use of resettable hidden states and thank the reviewer for the suggestion. Indeed, in the off-policy setting, implementations have more control over data organization.
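To make the reset mechanism concrete, below is a minimal illustrative sketch (our own simplified code, not the paper's implementation: the real model uses complex diagonal S5 dynamics and a parallel `jax.lax.associative_scan`, while this toy version applies the same associative operator sequentially). The key idea is that zeroing the transition wherever a done flag occurs makes the scan discard all pre-reset context:

```python
import numpy as np

def binary_op(left, right):
    # Combine two scan elements (A, b), each representing the map h -> A*h + b.
    A_l, b_l = left
    A_r, b_r = right
    return A_r * A_l, A_r * b_l + b_r

def resettable_scan(A, bu, done):
    # Zero the transition at episode boundaries so the scan drops all
    # context preceding the reset (assumes initial hidden state h = 0).
    A = np.where(done[:, None], 0.0, A)
    elems = [(A[t], bu[t]) for t in range(len(A))]
    out, acc = [elems[0][1]], elems[0]
    for e in elems[1:]:
        acc = binary_op(acc, e)
        out.append(acc[1])
    return np.stack(out)

# With A = 0.5 everywhere, unit inputs, and a reset at step 1, the state at
# step 1 ignores the accumulated history: the states come out [1.0, 1.0, 1.5].
A = np.full((3, 1), 0.5)
bu = np.ones((3, 1))
h = resettable_scan(A, bu, np.array([False, True, False]))
```

Because `binary_op` remains associative after the masking, the same trick works unchanged inside a parallel scan, which is what keeps the resets compatible with S5's parallel training.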
> The issue may also be avoided by batching the policy updates into padded sequences that do not cross episode boundaries.
Yes, this is also a potential approach; however, padded sequences often involve large amounts of wasted computation and would be significantly slower. If one is running on-policy algorithms, one is usually concerned with wall-clock time rather than just sample efficiency. We’ve updated the paper to discuss this!
> the DMControl tasks are zero-shot. The evaluation procedure is not as clearly discussed as it often is in meta-RL papers, and the appendix did not give more details.
Yes, the DMControl tasks are zero-shot, with no fine-tuning involved. The agent is given only a single trial to adapt in-context. We have updated the manuscript to clarify this point further. Thank you for the suggestion!
[1] Ni, Tianwei, Benjamin Eysenbach, and Ruslan Salakhutdinov. "Recurrent model-free rl can be a strong baseline for many pomdps." arXiv preprint arXiv:2110.05038 (2021).
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal and new results on a tight deadline. The long sequence length results look good. I don't think stateless cartpole and bsuite genuinely require adaptation over these context lengths, but the runtime advantages of S5 are clear. The lack of long-horizon domains is outside the scope of this paper, and if anything this work contributes to the idea that meta-RL has a serious benchmarking problem at the moment. I'd encourage the authors to prioritize open-source usability of the core model for future work. If S5 can be a stepping stone to enable longer sequences and shorter runtimes in meta-RL, then it should be as easy as possible to plug resettable S5 into other RL frameworks and experiments.
The writing changes you discuss seem helpful. In general, the importance of the *resettable* aspect of S5 is a bit overstated by the original draft, given that this is a problem created by other implementation details that could be avoided. However this is still a nice feature that makes S5 compatible with popular high-throughput RL libraries.
Could you clarify the comparison between the two S5 operators in results like Table 2 and the new Figure 8? The other operator is worse because it is not reset between task boundaries, so is trained on sequences that run through consecutive tasks? It might be clearer to rename it from the operator symbol to the concept ("S5 Without Resets" or similar).
I will update my score to accept.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for their time, response, and update. Their feedback throughout this process has been extremely accurate and insightful. Many of the points the reviewer brought up aligned closely with our own thoughts about the initial draft and were articulated *very* clearly. The reviewer further brought up many good points around benchmarks in meta-RL and potentially unclear sections of the writing that we had not noticed.
> don't think stateless cartpole and bsuite genuinely require adaptation over these context lengths
We think bsuite and stateless cartpole test different types of long-horizon behavior, but *we generally agree with the reviewer that neither is fully ideal*. In particular, bsuite tests recall (which is not long-horizon adaptation), and the random-projection stateless cartpole tests the ability to integrate information across a long horizon (but does not require specific recall). Ideally, a good environment would test both simultaneously.
> the idea that meta-RL has a serious benchmarking problem at the moment
We fully agree with the reviewer! As our architectures, algorithms, and computers get more powerful, the benchmarks also need to advance to keep up.
> I'd encourage the authors to prioritize open-source usability of the core model for future work
We will heavily prioritize this. We think our implementation (if the reviewer had a chance to look at the provided code) is generally accessible and *highly performant*, hopefully enabling wide scale experimentation and adoption.
> the importance of the resettable aspect of S5 is a bit overstated by the original draft, given that this is a problem created by other implementation details that could be avoided
This is fair! We think given the extent of our new results, we can likely also shorten this section in the paper for space and thus relegate its importance.
> The other operator is worse because it is not reset between task boundaries, so is trained on sequences that run through consecutive tasks? It might be clearer to rename it
Yes, the reviewer’s understanding is correct -- we will rename it for the camera-ready copy! The reviewer brings up a good point -- this would be a much clearer name, since in theory we could have used the original operator but unrolled it in sequence like an RNN in order to perform the resets.
We would like to once again thank the reviewer for their feedback! | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their insightful feedback. We appreciate the consensus that the proposed S5 architecture offers **clear advantages over standard RNNs and Transformers in partially-observed RL, both in terms of performance and runtime.** This is the key takeaway of our work, and we hope it will accelerate future research in RL.
$\color{red} R1$ (R54Y): “experiments do clearly demonstrate that resettable S5 can replace LSTMs and Transformers in partially observed tasks”
$\color{green} R2$ (Zjtv): “They show that S4-based models have advantages over commonly used Transformers (runtime, memory complexity) and RNNs (task performance) on a toy task.”
$\color{blue} R3$ (R3ow): “they demonstrate the effectiveness of the S5 model for partially-observed and meta-RL tasks, both in terms of asymptotic performance and training/inference speed.”
$\color{magenta} R4$ (gUv9): “S5 is able to solve long-term memory tasks previous methods were unable to solve.”
We are glad that reviewers found the associative operator to be a “novel” ($\color{blue} R3$) modification that enables “drop-in replacement of RNNs” ($\color{red} R1$) by “handling sequences of varying lengths” ($\color{green} R2$), although there are concerns with its significance ($\color{blue} R3$, $\color{magenta} R4$). We added Figure 10 to the general rebuttal to demonstrate its impact, along with results in Figure 8.
Some reviewers also found the meta-learning setup to be useful for future work ($\color{red} R1$, $\color{blue} R3$) and the results “very promising” ($\color{magenta} R4$), although some understandably have concerns over its clarity ($\color{green} R2$) and limited single-trial sequence length ($\color{red} R1$, $\color{green} R2$, $\color{blue} R3$) of $1000$, which we address below.
The results on POPGym demonstrated over *6x speedups* and better performance (being the “first model to solve the difficult RepeatHard task from POPGym” [$\color{magenta} R4$]) over the state-of-the-art GRU. Reviewers had concerns over the lack of Transformer-based baselines ($\color{green} R2$), lack of difficult tasks ($\color{green} R2$, $\color{blue} R3$, $\color{magenta} R4$), and short context lengths ($\color{red} R1$), which we address below.
## Meta StatelessCartPole
To address concerns about sequence length, in-context learning, and POPGym tasks, *we have run additional experiments combining POPGym’s StatelessCartPole task with randomly-projected observations.* We allow the agent $16$ trials per episode. We show in-context learning results in Figure 8(c) and demonstrate that it can even maintain performance up to $32$ trials, evaluating on sequence lengths of up to *6400* -- a very long sequence length for reinforcement learning. We show training results and runtime in Figures 8(a) and 8(b) -- S5 still outperforms GRUs while running significantly faster.
We hope this addresses the reviewer’s concerns around the lack of multiple trials, long context lengths, and challenging tasks.
## Transformers
To further address $\color{green} R2$’s concerns about the lack of transformer-based baselines, we included longer context results in bsuite Figure 9 for lengths up to $2048$ and include memory usage statistics. Note that bsuite’s original evaluation only extends to a length of $105$, and is likely unsuitable for such long sequences (since it uses a fixed episode budget). We are currently running even longer sequences; however, they are expected to take days to complete and will not change the takeaway.
Furthermore, we ran a Transformer baseline on POPGym environments that have *fixed episode lengths*. We then modify the trajectory collection such that it *always collects exactly one episode per rollout* while maintaining an identical total batch size. Without this constraint, it is challenging to efficiently run Transformers in on-policy RL while maintaining their memory. We do the same for the other architectures across three seeds and report the mean and standard error below. Transformers are fast in these settings since the horizons are short and they can parallelize across time (unlike LSTMs).
| Architecture | stateless pendulum hard MMER | noisy stateless pendulum hard MMER | repeat previous hard MMER |
|---|---|---|---|
|S5|**0.805±0.003**|0.575±0.002|**0.986±0.000**|
|Transformer|**0.796±0.005**|0.544±0.003|-0.462±0.003|
|GRU|**0.808±0.005**|**0.628±0.001**|-0.457±0.013|
| Architecture | stateless pendulum hard (s) | noisy stateless pendulum hard (s) | repeat previous hard (s) |
|---|---|---|---|
|S5|**297.56±0.45**|**321.18±0.15**|**308.50±0.31**|
|Transformer|2018.25±0.22|706.49±0.10|863.64±0.20|
|GRU|970.82±11.60|1925.91±32.59|1476.80±16.37|
## Misc
We’ve also made numerous other changes and additions to our manuscript from the reviewer’s suggestions, including:
- Clarifying “the data collection and optimization implementations common to on-policy policy gradient algorithms with parallel actors” ($\color{red} R1$)
- The training and evaluation procedure of the DMControl multi-task setup ($\color{red} R1$, $\color{green} R2$)
- What we mean by “in-context learning” ($\color{green} R2$)
- Various fixes to spelling, notation, and resizing of figures ($\color{green} R2$, $\color{magenta} R4$)
- An explicit and thorough limitations section ($\color{red} R1$, $\color{green} R2$, $\color{blue} R3$, $\color{magenta} R4$)
- Clarifying why standard naive resetting does not work ($\color{magenta} R4$)
- Parameter counts for different architectures ($\color{green} R2$)
We hope these modifications adequately address the concerns raised by the reviewers, and we are confident they strengthen our manuscript's overall quality. To reiterate: we show that SSMs provide *clear advantages in partially observed RL*, and this can accelerate future work in RL.
Pdf: /pdf/8ba80bec4bcedff70ac865b30f1d03bc477b2ec1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive Selective Sampling for Online Prediction with Experts | Accept (poster) | Summary: This paper presents an adaptive label-efficient forecasting technique for online binary prediction with expert advice. The proposed approach implements a label querying probability that is a function of the observed scenario, rather than based on pessimistic conditions. This enables the method to adapt, i.e., have lower label complexity, to benign environments while remaining robust to adversarial ones, unlike prior approaches in this label-efficient forecasting. Sharp analyses of the regret and label complexity and results on synthetic scenarios are provided in support of the method.
Strengths: * The paper is very well-written and organized.
* The introduced algorithm is novel and intuitive. The regret and label complexity analyses of the approach seem sound.
* The method remedies the shortcoming (lack of adaptivity) of prior label-efficient prediction approaches. This enables it to query fewer labels in benign scenarios while remaining robust to adversarial ones.
* The authors present empirical evaluations that demonstrate the effectiveness of the method.
Weaknesses: * The method only applies to binary prediction tasks with zero-one loss.
* Additional details on prior work on label-efficient prediction would be helpful in contextualizing the benefit (adaptivity) of the proposed approach.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
1. The authors mention that the same inductive approach can be used for general loss functions and $y_t \in [0,1]$, but that the expression for the selective sampling probability would be really complicated. Do the authors conjecture that an efficiently-computable upper bound for the complicated function can be found as was done for the binary zero-one loss case?
2. The authors mention active learning as a possible application in the introduction. Could this method (or its generalization) be applied to, e.g., a simple active learning scenario with a ResNet on CIFAR10?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and helpful comments.
*Regarding general losses and predictions:*
As you mention, applying the same approach to general losses would quickly lead to complications, but generalizing the analysis is an intriguing direction.
Please see Point 2 in the global response for more details.
*Regarding prior work:*
Please see Point 3 in the global response. We will expand the discussion of related work in the paper along these lines.
*Regarding active learning for e.g. CIFAR10:*
This is a potential application of our approach.
Given an ensemble of $N$ ResNets initialized with random weights, one could view them as experts and use the label-efficient forecaster to determine whether or not a given label should be observed, and used for training the ResNets.
In order for the approach to work off-the-shelf, one would need to consider a binary classification version of CIFAR10.
Extending the applicability of our results and using them in this way is an intriguing direction.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I've read the authors' rebuttal and the other reviews and will maintain my position favoring acceptance (7).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! | Summary: The paper considers a binary prediction game with $0$-$1$ loss. It proposes an efficient sampling scheme via a modification of the exponentiated weight forecaster that selectively acquires labels $y_t$ based on $Ber(q_t)$, where the design of $q_t$ is tied to the disagreement among the experts’ predictions at each round.
Ultimately, the proposed algorithm achieves the regret of the exponentiated weight forecaster, $O(\ln N \sqrt{ n} )$, over time horizon $n$ with $N$ experts, and the designed label-acquisition parameter $q_t$ results in a label complexity of $O( \frac{\sqrt{n}}{\Delta^2} )$, where $0 < \Delta \le \mathbb{E} [\ell_{t,i} - \ell_{t,i^{\ast}}], \forall t \in [n], i \neq i^{\ast}$ is a lower bound on the expected loss gap relative to the optimal expert with index $i^{\ast}$, and also quantifies the difficulty of identifying the optimal expert.
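To make the sampling protocol in this summary concrete, below is a minimal Python sketch of a label-efficient exponentially weighted forecaster. The function name and the disagreement-based choice of $q_t$ are illustrative simplifications, not the paper's exact construction (the paper derives $q_t$ from an inductive argument):

```python
import math
import random

def label_efficient_ewf(expert_preds, labels, eta=0.1, rng=None):
    """Sketch of a label-efficient exponentially weighted forecaster.

    expert_preds: per round, a list of N binary expert predictions.
    labels: true binary labels (only *observed* on rounds with Z_t = 1).
    Returns (cumulative 0-1 loss, number of labels queried).
    """
    rng = rng or random.Random(0)
    n_experts = len(expert_preds[0])
    log_w = [0.0] * n_experts                  # log-weights, uniform start
    loss, n_queries = 0.0, 0
    for preds, y in zip(expert_preds, labels):
        m = max(log_w)                         # normalize in log-space
        w = [math.exp(lw - m) for lw in log_w]
        p1 = sum(wi for wi, p in zip(w, preds) if p == 1) / sum(w)
        yhat = 1 if rng.random() < p1 else 0   # randomized prediction
        loss += abs(yhat - y)                  # loss counted on all rounds
        # Simplified query probability proportional to the weighted
        # disagreement among experts (a stand-in for the paper's q_t).
        q = max(2.0 * min(p1, 1.0 - p1), 1e-3)
        if rng.random() < q:                   # Z_t ~ Bernoulli(q_t)
            n_queries += 1
            for i, p in enumerate(preds):      # importance-weighted update
                log_w[i] -= eta * abs(p - y) / q
    return loss, n_queries
```

On streams where one expert clearly dominates, the disagreement (and hence the querying probability) collapses, so far fewer than $n$ labels are requested, in the spirit of the label-complexity bound above.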
Strengths: The paper is well written and easy to follow. It first introduces the intuition behind the sampling parameter $q_t$ when there is a perfect expert, then extends it to the general case. The paper also provides a graphical illustration of the lower bound of $q_t$, which matches expectations.
The result is novel compared to previous work in two respects. (1) There is no assumption on how $y_t$ is generated; the proposed algorithm is able to attain the $O(\ln N \sqrt{n})$ regret with fewer labels. (2) In contrast to previous work on sampling by disagreement, the label complexity can be quantified as $O(\frac{\sqrt{n}}{ \Delta^2})$.
$q_t$ is easy to compute. Numerical experiments with respect to the time horizon $n$ show that the expected regret and expected number of labels match the theoretical results. An experiment with respect to the number of labels (number of weight updates) shows that the label-efficient algorithm proposed in the paper asymptotically matches the minimax rate in active learning.
Weaknesses: It seems that the assumption $0 < \Delta \le \mathbb{E} [\ell_{t,i} - \ell_{t,i^{\ast}}], \forall t \in [n], i \neq i^{\ast}$ is required only for bounding the label complexity (in order to track how $q_{t+1}$ evolves in line 493); without this assumption the regret bound still holds.
I am a bit concerned that this assumption is very strong: it essentially requires that the best expert $i^{\ast}$ win over every other expert at every round. In addition, the assumption is $\ell_{t,i} \in \{ 0, 1\}$; if we are at a specific iteration $t$ where $\ell_{t,i^{\ast}} = 1$, then what values can $\ell_{t,i}, \forall i \neq i^{\ast}$ take in order to satisfy the condition for a strictly positive $\Delta$?
Can the label complexity be bounded without this assumption?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Appendix C:
From line 437 to line 438, could the intermediate steps be further elaborated? I am concerned about the proof, which seems to hinge on the relationship between combined minimum values: $\min_i (a_i + b_i)$ vs. $\min_i(a_i)+ \min_i(b_i)$.
My understanding is that from the last equation line of line 437, we have
$$ L_{n |1}^{w_{.,0} } \le \frac{n \eta }{ 8 } \underbrace{ - \frac{\eta}{8 } + A_{1,1} + (1 - 2 A_{1,1})y_1 + \mathbb{E} \left[ \min_{i} \left( L_{i, n} + \frac{\ln w_{i,0}}{\eta} + \frac{q_1}{\eta} ( \ln w_{i,0} - \ln w_{i,1} ) - \ell_{i,1} \right) \right]}_{A} $$
To show the desired result claimed in the theorem, we aim to show
$$ A \le \mathbb{E} \left[ \min_{i} \left( L_{i,n} - \frac{\ln w_{i,0}}{\eta} \right) \right] $$
That is to show
$$ A_{1,1} + (1 - 2 A_{1,1})y_1 + \mathbb{E} \left[ \min_{i} \left( L_{i, n} + \frac{\ln w_{i,0}}{\eta} + \frac{q_1}{\eta} ( \ln w_{i,0} - \ln w_{i,1} ) - \ell_{i,1} \right) \right] \le \frac{\eta}{8 } + \mathbb{E} \left[ \min_{i} \left( L_{i,n} - \frac{\ln w_{i,0}}{\eta} \right) \right] $$
From the claim in line 438, it seems we need to show that the above inequality holds even without the expectation, that is:
$$ A_{1,1} + (1 - 2 A_{1,1})y_1 + \min_{i} \left( L_{i, n} + \frac{\ln w_{i,0}}{\eta} + \frac{q_1}{\eta} ( \ln w_{i,0} - \ln w_{i,1} ) - \ell_{i,1} \right) - \min_{i} \left( L_{i,n} - \frac{\ln w_{i,0}}{\eta} \right) \le \frac{\eta}{8 } $$
It seems the paper used
$$\min_{i} \left( L_{i, n} + \frac{\ln w_{i,0}}{\eta} + \frac{q_1}{\eta} ( \ln w_{i,0} - \ln w_{i,1} ) - \ell_{i,1} \right) - \min_{i} \left( L_{i,n} - \frac{\ln w_{i,0}}{\eta} \right) \le \min_{i} \left( \frac{q_1}{\eta} ( \ln w_{i,0} - \ln w_{i,1} ) - \ell_{i,1} \right) $$
which doesn't seem to be true. These intermediate steps are just my personal guess; I wonder whether the details behind those two lines in the manuscript could be explained further.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and helpful comments.
*Regarding the assumption that $\Delta>0$:*
Please see Point 1 in the global response, where we also highlight the more general assumption in Appendix F.
Note that, under the assumption stated in the main paper, it is allowed
that $\ell_{i^*,t}=1$ with positive probability, as long as $E_{t}[\ell_{i^*,t}]<1$.
However, under the more general assumption in Appendix F, we even allow $E_t[\ell_{i^*,t}]=1$ for some rounds, as long as the best expert performs well in sufficiently many other rounds.
Extending the label complexity analysis to even more general settings is
interesting future work, but as mentioned in the paper, it is
unavoidably linear in $n$ in the worst case (see also Point 3 in the global response).
*Regarding line 437 to 438:*
As you correctly observe, there is a (minor) issue with the proof as
stated. Thanks a lot for catching this! It can be fixed as follows:
In order to prove the desired result, it is enough to show that
$$\bar L_{n |1}^{w_{.,0} } \leq \mathbb{E}\left[ \min_{i\in[N]} \left( \ell_{i,1} + L_{i,2:n} - \frac{\ln w_{i,0}}{\eta}\right) \right] + \frac{n \eta}{8} = \mathbb{E}\left[ \left( \ell_{i',1} + L_{i',2:n} - \frac{\ln w_{i',0}}{\eta}\right) \right] + \frac{n \eta}{8} . $$
Here, we let $i'$ denote the $\mathrm{argmin}$ of the right-hand side.
While we had separated out $\ell_{i,1}$ in the submitted paper, this is not necessary.
As shown after line 437, we have
$$ \bar L_{n |1}^{w_{.,0} } \leq A_{1,1}+(1-2A_{1,1})y_1 +\frac{(n-1)\eta}{8} + \mathbb{E}\left[\min_{i\in[N]} \left( L_{i,2:n}+q_1\frac{-\ln w_{i,1}}{\eta} + (1-q_1)\frac{-\ln w_{i,0}}{\eta}\right)\right] $$
$$\leq A_{1,1}+(1-2A_{1,1})y_1 +\frac{(n-1)\eta}{8} + \mathbb{E}\left[ \left( L_{i',2:n}+q_1\frac{-\ln w_{i',1}}{\eta} + (1-q_1)\frac{-\ln w_{i',0}}{\eta}\right)\right] \ .$$
In the last step, we used the fact that since the upper bound holds for the minimum $i$, it holds for $i'$ in particular.
Now that everything has been recast in terms of $i'$, the rest of the argument follows as stated.
To ensure that the bound in the theorem holds, it is thus sufficient to select $A_{1,1}$ such that it satisfies
$$A_{1,1}+(1-2A_{1,1})y_1+\left(\frac{q_1}{\eta}\left(\ln w_{i',0}-\ln w_{i',1}\right)-\ell_{i',1}\right)\leq \frac{\eta}{8}\ ,$$
after which the proof continues as before.
Thank you for pointing this out!
---
Rebuttal Comment 1.1:
Comment: Dear Author,
Thank you very much for your response. I have read through the global response, which addressed the condition on $\Delta$ and its interpretation (the growth of the expected cumulative loss gap). Also, thank you for addressing the proof from lines 437 to 438.
After reading all the responses, I have raised my score from 7 to 8, since to the best of my knowledge this is the first paper able to quantify an $O(\sqrt{T})$ label complexity under the considered assumption.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and updated score! | Summary: This paper proposes an interesting novel approach to prediction with expert advise. In the standard prediction with expert advise setup, the learner receives experts' predictions, commits to its own and then sees the true outcome as produces by the (possibly adversarial) nature. Suppose that obtaining the true outcome is costly; do we really need to do this all the time? Clearly, if all (or most) experts agreed on the same prediction, the value of the true outcome for adjusting our trust in them is negligible; in the standard weight-based algorithms with multiplicative update the contribution of this round will simply be eliminated by normalisation.
The paper takes an algorithm from Cesa-Bianchi and Lugosi (exponential weighting with fixed $\eta$) and shows that its regret bound stands as it is if the true outcome is requested with certain probability.
It is then shown that the expectation of the number of outcomes actually requested is upper bounded by $3\eta T + O(\log\log T)$, so for small $\eta$ there is a linear improvement in the number of requested outcomes. If $\eta$ is chosen to minimise the regret bound using prior knowledge of $T$, we take $\eta\propto 1/\sqrt{T}$ and thus the bound reduces to $O(\sqrt{T})$. This result holds under some conditions on the experts' behaviour, though, and they seem restrictive.
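Spelling out that last tuning step (with $c$ an indicative constant and $N$ treated as fixed), the regret-optimal choice $\eta = c/\sqrt{T}$ turns the stated label bound into

$$ \mathbb{E}[\#\text{labels}] \;\le\; 3\eta T + O(\log\log T) \;=\; 3c\sqrt{T} + O(\log\log T) \;=\; O\big(\sqrt{T}\big). $$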
Strengths: I think this is an interesting take on the well-known problem and should be published.
Weaknesses: No obvious weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The authors chose to present the results in the probabilistic setup. Typically, predicting $0$ or $1$ in a probabilistic setup is equivalent to predicting $\gamma\in [0,1]$ in a deterministic setup; the latter seems more important to me (this is my judgement of course). Will the results of the paper stand?
I think the final version would benefit from a better discussion of the $O(T)$ lower bound, which is mentioned but not discussed in detail.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and helpful comments.
*Regarding the restrictiveness of the conditions for the label complexity bound:*
While the assumed setting in Theorem 3 is relatively benign, it includes many relevant settings, and the results hold under the more lenient assumption of cumulative separation after a certain time.
Please see Point 1 in the global response for more details.
*Regarding extensions to predictions in $[0,1]$:*
Extending our analysis to predictions in $[0,1]$ would be interesting, and we believe it may be possible to extend our approach.
Please see Point 2 in the global response for more details.
*Regarding the label complexity lower bound:*
We will update the paper with further discussion of the lower bound.
Please see Point 3 in the global response for more details.
---
Rebuttal 2:
Comment: Many thanks to the authors for the response.
I do think it should be straightforward to extend the result to \[0,1\], but regardless of this I am happy to keep my high evaluation of the paper.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response! After thinking a bit more about this, we agree that it is straightforward to extend our results to expert predictions in [0,1], while still keeping the label sequence and forecaster predictions binary (in {0,1}). Extending to more general forecaster predictions and losses does not seem quite as straightforward, although it may be doable with similar techniques. | Summary: This paper investigates the PEA problem in the context of online binary classification where the cost of obtaining labels for streaming data is high, necessitating adaptive, selective label collection. To this end, the authors introduce a carefully designed label collection strategy based on the classical Hedge algorithm. The resultant label-efficient forecaster has a best-of-both-worlds theoretical guarantee. The authors further demonstrate that the regret of their label-efficient forecaster asymptotically reaches the minimax rates of pool-based active learning.
Strengths: * The paper is well-structured and easy to comprehend.
* The theoretical analysis provided in the paper appears to be solid and sound.
Weaknesses: * The primary conclusion of the paper, Theorem 3, relies on a core assumption that there exists a unique optimal expert whose expected loss in each round is lower than that of every other expert by a specific margin $\Delta$. Is this assumption too strong? Are there real-world application scenarios that satisfy such an assumption? (To my knowledge, this assumption has only been used in the COLT'14 paper: A Second-order Bound with Excess Losses.)
* Does the specific PEA setting investigated in the paper relate to the bandit setting? Both are concerned with identifying the optimal expert (arm). How, then, do the setting and techniques used in this paper differ from or relate to those in bandit scenarios? Does the problem studied in this paper present novel challenges in comparison to the bandit setting?
* The paper states that the online prediction setting is similar to streaming active learning. If this is the case, should the numerical experiment compare the regret convergence rate of the proposed algorithm with that of the streaming active learning algorithm?
* Some writing aspects could be improved:
* In line 227, the symbol $\tilde l$ is undefined.
* In line 264, the regret bound (7) is termed "pessimistic" without explanation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See Weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper should further clarify the method and theory part, see Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and helpful comments.
*Regarding the assumption in Theorem 3:*
The assumed setting is relatively benign, but it does include many practical i.i.d. settings, and the stated results do hold under a more general assumption (as detailed in Appendix F).
Please see the global response for more details, especially the point
that our algorithm still works when the assumption is violated.
*Regarding the relation to the bandit setting:*
This is an interesting question.
In the setting considered in the paper, we observe the losses of *all* experts when observing a label.
In contrast, in the bandit setting, only the loss of the selected arm would be observed for each round.
This would necessitate the forecaster to incorporate more exploration in its strategy, and the analysis of a label-efficient version seems like it would be very different from what is used in this paper, although some of the ideas may transfer.
*Regarding the comparison to active learning:*
Essentially, streaming active learning can be seen as a special case of label-efficient online prediction, where the environment draws i.i.d. features and labels, and the expert predictions are determined by fixed functions of the features.
Note that this is precisely the setting that we consider in the numerical experiments.
Thus, the comparison to batch active learning is a comparison between the following:
1. minimax optimal batch active learning,
2. streaming active learning based on our label-efficient algorithm for online learning.
The fact that these match asymptotically is an indication that, for the specific setting under consideration, the attainable performance with batch active learning and streaming active learning coincide asymptotically, and specifically, this is achieved by using our proposed label-efficient predictor for streaming active learning.
However, as stated in the paper, this is an empirical observation which we did not establish theoretically.
*Regarding aspects of the writing:*
We will rewrite Line 227 in terms of $\ell_{i,j}/q_{j}$ (to avoid needing to introduce the short-hand $\tilde \ell$), and we will clarify that (7) is “pessimistic” because it is a worst-case bound for adversarial environments.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, which has addressed my concerns and I will raise my score to 6. I don't have any further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and updated score! | Rebuttal 1:
Rebuttal: ## Global response to all reviewers
We thank all reviewers for their careful reading and helpful comments.
We are happy that you consider the paper to be sound, novel, interesting, and well-written.
Below, we address three points that were raised by multiple reviewers:
*1. Regarding the assumption of a best expert with $\Delta>0$:*
The setting in Theorem 3, where there exists a single expert that is
best in expectation for all rounds, is satisfied for any static
stochastic environment with a unique minimizer, including many relevant
applications with i.i.d. data. The same assumption is standard in the
stochastic bandit setting, where it is nearly always assumed that there
exists a single best arm, and has also been used in the prediction with
expert advice setting. We therefore view it as relatively
uncontroversial. Moreover, note that our algorithm satisfies a
best-of-both-worlds result: it preserves regret guarantees, even if the $\Delta > 0$
assumption does not hold, but if the assumption does hold, the label complexity is much smaller than $n$. We further note that, as mentioned before the theorem statement, the same result holds under a more general assumption: the best expert does not need to surpass the others in each round.
Instead, it is sufficient that the normalized *cumulative* expected loss becomes separated by $\Delta$ sufficiently fast.
This is stated more precisely in Eq. (16) and (17) in Appendix F in the supplementary material.
Under the more general assumption, the “best” expert is even allowed to be the worst for some rounds, as long as it performs well in sufficiently many other rounds.
Extending the label complexity analysis to even more general settings is interesting future work, but as mentioned in the paper, it is linear in $n$ in the worst case (Theorem 13 in [10], N. Cesa-Bianchi, G. Lugosi, and G. Stoltz: Minimizing regret with label efficient prediction).
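For concreteness, the cited lower bound (Theorem 13 in [10]) says that a forecaster collecting $m$ labels suffers expected regret at least $cn\sqrt{\ln(N-1)/m}$ for some constant $c$; requiring expected regret at most $C\sqrt{n}$ then forces

$$ cn\sqrt{\frac{\ln(N-1)}{m}} \;\le\; C\sqrt{n} \;\Longleftrightarrow\; m \;\ge\; \frac{c^2 \ln(N-1)}{C^2}\, n, $$

so the number of collected labels must indeed grow linearly in $n$ in the worst case.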
*2. Regarding the extensions to general losses and predictions:*
While extending the results to losses and predictions in $[0,1]$ seems solvable by generalizing our approach, this will complicate the optimization problem in Eq. (13,14), leading to a more complicated (implicit) specification for $q_{t}$.
It may be the case that an altered technique based on the same ideas would allow for efficiently handling more general losses and predictions.
As we have not fully established the details, this should be taken with a grain of salt, but it is an interesting avenue for future work.
*3. Regarding the worst-case label complexity:*
The label complexity being linear in $n$ in the worst case follows from Theorem 13 in [10] (N. Cesa-Bianchi, G. Lugosi, and G. Stoltz: Minimizing regret with label efficient prediction).
To clarify this point, we will alter the beginning of Section 5 as follows in the revised paper:
___
*We now examine the label complexity, defined as $S_{n}\equiv \sum_{t=1}^{n} Z_{t}$.
In [Thm. 13, 10], it is shown that there exists a setting for which the expected regret of a forecaster that collects $m$ labels is lower-bounded by $cn\sqrt{\ln(N-1)/m}$ for some constant $c$. Hence, in the worst case, the number of collected labels needs to be linear in $n$ in order to achieve an expected regret that scales at most as $\sqrt n$. However, since $q^{\*}_t$ can be less than $1$, it is clear that the label-efficient exponentially weighted forecaster from Theorem 2 can collect fewer than $n$ labels in more benign
settings. To this end, we consider a scenario with a unique best expert,
which at each round is separated from the rest in terms of its expected
loss.* | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis | Accept (poster) | Summary: This paper proposes both an analysis and a contribution to fix a problem found during the analysis.
They start with the premise that diffusion models should be able to generate arbitrary size images, and training specialized models for each image size is too expensive, which is correct. Using diffusion models trained for square images would be much cheaper and easier.
They identify two key problems when using square models to generate arbitrary aspect ratio images: incomplete/inadequate objects and repetitive/disorganized patterns.
In order to improve performance all around (quality, prompt following, etc.) they study entropy in the generated image.
Specifically, they note that as entropy rises tokens attend to wider regions, and relate this phenomenon to the problems delineated above.
Finally, they find that simply introducing a scaling factor mitigates many of these issues.
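As a toy illustration of the entropy observation above (not the paper's exact construction — the $\sqrt{\log}$-ratio rescale below is an assumed stand-in for its scaling factor):

```python
import numpy as np

def attention_entropy(n_tokens, d=64, scale=None, seed=0):
    """Mean per-row entropy of softmax attention for random queries/keys."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal((n_tokens, d))
    k = rng.standard_normal((n_tokens, d))
    s = (q @ k.T) * (scale if scale is not None else 1.0 / np.sqrt(d))
    s -= s.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(s)
    p /= p.sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# With the fixed 1/sqrt(d) scale, entropy drifts upward as the token
# count grows (e.g. when generating a wider image than trained on) ...
h_train = attention_entropy(64)
h_test = attention_entropy(1024)
# ... while enlarging the scale by a sqrt(log n)-style ratio (a stand-in
# for the paper's factor) sharpens attention and pulls the entropy down.
rescale = np.sqrt(np.log(1024) / np.log(64)) / np.sqrt(64)
h_rescaled = attention_entropy(1024, scale=rescale)
```

In this toy setting the rescale only counteracts part of the drift; the point is merely the direction of the effect — more tokens raise attention entropy, and a larger scale counteracts it.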
Strengths: 1. The idea to add a scaling factor that counteracts entropy fluctuation in a training-free manner is really nice and the formulation is a principled approximation that is easily computable.
2. The implementation is incredibly simple, a large plus.
3. The work presents strong quantitative evaluations as well as a strong user evaluation that is commendable.
4. The work presents a large number of qualitative samples that show improvement on very common problems for diffusion models, such as double heads, double hands, and other issues. Samples look good.
Weaknesses: 1. Hard to find many weaknesses, it's a strong paper that is well written, with a clear argument, clear analysis, clear solution that simply seems to work after both quantitative and user study evaluation (and with substantial qualitative samples showing edge cases).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. I'm assuming time complexity does not vary with the new scaling factor?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. Good limitation section. Metrics are a bit of an issue but there is a strong user eval in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We really appreciate your interest and support in our paper.
**Q1**: Relation between time complexity and the new scaling factor.
**A1**: Yes, the time complexity is unaffected: calculating the new scaling factor takes only constant $O(1)$ time.
---
Rebuttal Comment 1.1:
Comment: I've read the other reviews and believe their weakness comments to be minor and relatively well addressed by the rebuttals. I will keep my initial score. | Summary: This paper analyzes the issues of using a fixed-resolution diffusion model to generate varied-size images and proposes a scaling factor that stabilizes the attention entropy, which remedies the issue. The method is evaluated on text-to-image models when the inference resolution is moderately different from the model resolution, and the results show the effectiveness of the proposed method compared to directly using a fixed-resolution diffusion model for varied-size synthesis.
Strengths: 1. The paper is well-written and easy to follow. The motivation is clearly explained and supported by some theoretical analysis.
2. The proposed method is simple to implement; it does not require costly large-scale training and can help varied-size synthesis with a fixed-resolution diffusion model.
3. The effectiveness of changing the scale factor in attention for varied-size synthesis is an interesting observation.
Weaknesses: 1. Synthesizing images at different resolutions is an important research problem, but it is not very clear why a fixed-resolution pre-trained model should be used for this task (which is the main baseline in this work). To achieve this goal, there are many other candidate methods: (1) using a diffusion-based (e.g. cascaded diffusion models) or GAN-based super-resolution method to upsample the output to some power-of-2 resolution, then downsample it to the query resolution; (2) using an any-scale super-resolution network (e.g. LIIF) to upsample the network output. It is not clear why modifying a fixed-resolution model is an important research topic if it is not shown to outperform these candidate methods.
2. The test resolutions are still within a small range relative to the model resolution (about x0.5\~x1.5), and the FID at the x0.5 scale suggests that quality can decrease significantly when the scale differs from the training scale. Does this suggest that the proposed method does not work for a wider range of scales (for example, any-scale SR works typically evaluate x1\~x4, or even extrapolate to x20\~x30)?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses. It is not clear to me that, if a diffusion model only see faces at resolution 256 during training, how can its UNet synthesize a face at resolution 2048 with its kernels (trained for resolution 256) without repetitive patterns? Does the proposed method address this issue?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful comments. We will explain your concerns point by point.
**Q1**: Comparison to candidate methods.
**A1**: We would like to point out the advantages of our method (modifying fixed-resolution models) against other methods (e.g. cascaded diffusion models and LIIF) in three key aspects.
1. **Improvement at Different Aspect Ratios**. Other methods do not support generating new images with an aspect ratio different from that of the original fixed-resolution images. In comparison, our method improves image generation at different aspect ratios, which is validated by both qualitative and quantitative experiments in our paper.
2. **Richness of Visual Information (More Important for Higher Resolutions)**. When users generate images at higher resolutions, they expect not only more pixels but also richer semantic information. Our method introduces more information by enabling models to process more tokens, whereas super-resolution and other methods simply scale up the original images and do not contribute to the richness of visual information.
To illustrate the non-trivial difference in the richness, we have provided several examples in Figure 1 in the PDF page.
3. **Time Cost and Memory Usage (More Important for Lower Resolutions)**. For diffusion models adapted by our method, time cost and memory usage become proportional to the generation resolution, while downsampling methods are always constrained by the fixed cost of the training resolution. In this way, our method efficiently enables low-resolution image synthesis, especially on portable devices, which place high demands on time and memory in addition to image fidelity.
In Table 1, we present the FID, memory usage and time cost of Stable Diffusion with different methods at various resolutions ($128^{2}, 224^{2}, 768^{2}, 1024^{2}$). At low resolutions, our method achieves better memory usage and time cost, with a partial trade-off in fidelity. Given the fine qualitative results in Figure 3 in the paper, we believe the trade-off is acceptable for the target scenario of low-resolution image synthesis.
| Resolution | GPU Memory (MiB) | FID | Average Time (s) |
| -------------------------------- | ---------------- | -------- | ---------------- |
| 224 * 224 (Original) | 7064 | 74.5742 | 5.6268 |
| 224 * 224 (Ours) | 7064 | 41.8925 | 5.6274 |
| 512 * 512 and Resize to 224 | 9324 | 21.8415 | 17.4457 |
| | | | |
| 768 * 768 (Original) | 19797 | 29.5974 | 36.7137 |
| 768 * 768 (Ours) | 19797 | 28.1372 | 36.7140 |
| 512 * 512 and Resize (LIIF) to 768 | 9948 | 20.8712 | 18.5286 |
| | | | |
| 128 * 128 (Original) | 6867 | 191.5447 | 3.4953 |
| 128 * 128 (Ours) | 6867 | 127.2798 | 3.4955 |
| 512 * 512 and Resize to 128 | 9324 | 36.0742 | 17.4377 |
| | | | |
| 1024 * 1024 (Original) | 39977 | 50.5425 | 62.2414 |
| 1024 * 1024 (Ours) | 39978 | 43.1167 | 62.2359 |
| 512 * 512 and Resize to 1024 | 12043 | 20.8235 | 20.8316 |
Table 1: FID, memory usage and time cost of stable diffusion with different methods with different resolutions.
In brief, our method better supports uneven aspect ratios and adapts diffusion models with a moderate fidelity trade-off to meet important requirements at both high and low resolutions.
**Q2**: The range limit of proposed method.
**A2**: We acknowledge that our method trades off some FID, since it does not use the same amount of computation as the original resolution. However, as described in A1, our method better meets important requirements for time cost and memory usage at lower resolutions, and for richness of information at higher resolutions. We report the results of Stable Diffusion with different methods at various resolutions ($128^{2}, 224^{2}, 768^{2}, 1024^{2}$). Additionally, our method can be applied with no training effort to other diffusion models (e.g., diffusion models trained at larger resolutions).
**Q3**: How can UNet synthesize a face at resolution 2048 with its kernels (trained for resolution 256) without repetitive patterns?
**A3**: To be more specific, when the UNet synthesizes a face at resolution 2048, our method aims to keep each token behaving similarly to how it behaves at resolution 256, which means faces of similar size would be synthesized. To illustrate this, please refer to the Figure with the prompt "L'adoption internationale" in the PDF page.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I decided to change my rating to borderline accept, but would like to highlight the following concerns:
1. I think the current paper may need revision to clarify the contributions of the method. To my understanding, it only enables higher-resolution synthesis for text descriptions whose zoom-in patches were observed in the training set. The issue mentioned in the abstract "higher resolution images exhibit repetitive presentation" is not really solved; for example, it still could not synthesize the prompt "a face" at an unseen high resolution, and the Figure with prompt as "L'adoption internationale" in the PDF page shows it is still repeating objects rather than generating a high-resolution version of the object.
2. The argument of "Richness of Visual Information" does not make sense to me. A super-resolution model is an unbiased estimate of the distribution of high-resolution images given a prompt. As a simple example, when generating "a face" at high resolution, people would expect a 2048x2048 face with details rather than multiple faces put together. It is misleading to claim "richness of visual information" as an advantage of the proposed method over SR models.
3. Despite the weaknesses, I feel the proposed method could still be useful as it allows synthesis at higher-resolution in a reasonable range, particularly for different aspect ratios. Thus I raised my rating, but with the assumption that the paper's contribution could be clarified.
---
Reply to Comment 1.1.1:
Comment: Thank you for the valuable comments.
1. We have clarified the contribution by revising the issue statement to "higher resolution images exhibit repetitively disordered presentation", placing emphasis on the disorder, which hurts image fidelity, rather than on the repetitiveness itself.
2. We have revised our argument to emphasize the difference between our method and SR, rather than claiming priority or advantage, since both methods introduce additional visual information from different perspectives and thus meet different needs. Additionally, they can be used in parallel according to users' specific requirements.
We have completed the revisions and we will update the revised version after the revision submission is available. | Summary: This work adapts a pre-trained Stable Diffusion model for variable-resolution image generation. Since Stable Diffusion is trained on a fixed image resolution, naively varying the output size results in abnormal patterns in the images. This paper tracks the problem down to self-attention weights in the denoiser network, and identifies the so-called attention entropy as the root cause by proving that attention entropy is fundamentally correlated with image resolution. Drawing on this insight, the paper introduces resolution-dependent scaling to self-attentions to calibrate attention entropy throughout the generative process. The effectiveness of the proposed method is validated through extensive experiments.
Strengths: - The method is training-free. It works by modifying the generative process of a pre-trained diffusion model at inference time.
- The method reveals a strong connection between self-attention entropy and image quality. It builds on the key finding that attention entropy is correlated with image resolution, and artifacts in low / high-resolution images can be attributed to mis-calibrated attention entropy. It thus attempts to calibrate attention entropy across various image resolutions using a resolution-dependent scaling factor. Turning theoretical insights into a simple, actionable solution is a key strength of the paper.
- The effectiveness of the method is validated by qualitative, quantitative and user study results. The improvement over the Stable Diffusion baseline seems quite consistent.
Weaknesses: - A few simple baselines are missing. For example, one can simply down-sample a 512x512 image to reach a lower resolution. Similarly, one can generate a 512x512 image and subsequently up-sample it using an off-the-shelf super-resolution model. In fact, Stable Diffusion supports super-resolution and uneven aspect ratios with community effort (check out AUTOMATIC1111). To justify the main contribution of the paper (i.e., improved generation quality of variable-sized images), it is important to show that the proposed method performs equally well, or even better, compared to these simple baselines.
- Arguably, generating high-resolution images is of greater interest to the community. To this end, it would be interesting to probe the limit of the proposed method. That is, what is the highest resolution it can handle without introducing noticeable artifacts? Most high-resolution images in the paper are 768x768. There is one image in the supplement at the resolution of 1024x1024. The proposed method will generate more impact if it can further grow the image resolution.
- According to Figure 5 (left), attention entropy is up by 1 bit as image resolution grows from 512 to 768. However, Figure 6 (right) seems to suggest that applying the scaling factor only reduces entropy by a very small amount. My early impression is that the scaling factor aims to restore entropy to the level of 512x512 images. I am curious about what causes this discrepancy.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the section above for questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper discussed the limitations and societal impact of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful and valuable comments. We will explain your concerns point by point.
**Q1**: Comparison to baselines.
**A1**: We would like to point out the advantages of our method (modifying fixed-resolution models) against other methods (e.g. cascaded diffusion models and LIIF) in three key aspects.
1. **Improvement at Different Aspect Ratios**. Super-resolution and downsampling methods do not support generating new images with an aspect ratio different from that of the original fixed-resolution images. In comparison, our method improves image generation at different aspect ratios, which is validated by both qualitative and quantitative experiments in our paper. In AUTOMATIC1111, the aspect ratios are passed directly into Stable Diffusion (like the original Stable Diffusion in our paper, which also supports uneven aspect ratios) without any in-depth optimization.
2. **Richness of Visual Information (More Important for Higher Resolutions)**. When users generate images at higher resolutions, they expect not only more pixels but also richer semantic information. Our method introduces more information by enabling models to process more tokens, whereas super-resolution and other methods simply scale up the original images and do not contribute to the richness of visual information.
To illustrate the non-trivial difference in the richness, we have provided several examples in Figure 1 in the PDF page.
3. **Time Cost and Memory Usage (More Important for Lower Resolutions)**. For diffusion models adapted by our method, time cost and memory usage become proportional to the generation resolution, while downsampling methods are always constrained by the fixed cost of the training resolution. In this way, our method efficiently enables low-resolution image synthesis, especially on portable devices, which place high demands on time and memory in addition to image fidelity.
In Table 1, we present the FID, memory usage and time cost of Stable Diffusion with different methods at various resolutions ($128^{2}, 224^{2}, 768^{2}, 1024^{2}$). At low resolutions, our method achieves better memory usage and time cost, with a partial trade-off in fidelity. Given the fine qualitative results in Figure 3 in the paper, we believe the trade-off is acceptable for the target scenario of low-resolution image synthesis.
| Resolution | GPU Memory (MiB) | FID | Average Time (s) |
| -------------------------------- | ---------------- | -------- | ---------------- |
| 224 * 224 (Original) | 7064 | 74.5742 | 5.6268 |
| 224 * 224 (Ours) | 7064 | 41.8925 | 5.6274 |
| 512 * 512 and Resize to 224 | 9324 | 21.8415 | 17.4457 |
| | | | |
| 768 * 768 (Original) | 19797 | 29.5974 | 36.7137 |
| 768 * 768 (Ours) | 19797 | 28.1372 | 36.7140 |
| 512 * 512 and Resize to 768 | 9948 | 20.8712 | 18.5286 |
| | | | |
| 128 * 128 (Original) | 6867 | 191.5447 | 3.4953 |
| 128 * 128 (Ours) | 6867 | 127.2798 | 3.4955 |
| 512 * 512 and Resize to 128 | 9324 | 36.0742 | 17.4377 |
| | | | |
| 1024 * 1024 (Original) | 39977 | 50.5425 | 62.2414 |
| 1024 * 1024 (Ours) | 39978 | 43.1167 | 62.2359 |
| 512 * 512 and Resize to 1024 | 12043 | 20.8235 | 20.8316 |
Table 1: FID, memory usage and time cost of stable diffusion with different methods with different resolutions.
In brief, our method better supports uneven aspect ratios and adapts diffusion models with a moderate fidelity trade-off to meet important requirements at both high and low resolutions.
**Q2**: The range limit of proposed method.
**A2**: We have provided more 1024x1024 generated images in the PDF page, which demonstrate the superior performance and information richness of our method. To be more specific, our method improves FID by nearly 15% at the resolution of 1024x1024. We think the limit might depend on specific semantics (e.g., human faces), since we observe that Stable Diffusion sometimes fails to generate these semantics even at the resolution of 512x512. Additionally, the proposed method can also be applied to other diffusion models (e.g., diffusion models trained at the resolution of 2048x2048 and extended to 4096x4096).
**Q3**: Changing attention entropy.
**A3**: We think the attention entropy should not be held fixed, because the information content of images naturally changes with resolution and the attention entropy inevitably fluctuates. Thus, the goal of our method is to "alleviate the fluctuation" rather than to fix the entropy at a constant value.
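To illustrate why the entropy fluctuates with the token count, here is a minimal NumPy sketch (a toy setup of our own with Gaussian queries and keys, not the paper's code) that computes $Ent(A_i) = -\sum_j A_{ij}\log A_{ij}$ for one query at growing token counts. With the training-time scale $1/\sqrt{d}$ held fixed, the entropy grows roughly like $\log N$, consistent with Equation 5.

```python
import numpy as np

def attention_entropy(q, K, scale):
    """Entropy of one query's attention distribution:
    Ent(A_i) = -sum_j A_ij * log A_ij, with A_i = softmax(scale * K @ q)."""
    logits = scale * (K @ q)
    logits -= logits.max()  # numerical stability before exponentiation
    A = np.exp(logits) / np.exp(logits).sum()
    return float(-(A * np.log(A + 1e-12)).sum())

rng = np.random.default_rng(0)
d = 64
q = rng.standard_normal(d)
# Token counts standing in for growing generation resolutions.
entropies = {N: attention_entropy(q, rng.standard_normal((N, d)), 1 / np.sqrt(d))
             for N in (256, 1024, 4096)}
print(entropies)  # entropy increases with N, roughly tracking log N
```

This makes the fluctuation concrete: without a resolution-dependent scale, more tokens mean a flatter attention distribution and higher entropy.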
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Among the three arguments to support the use case, I found the second most convincing. Please be sure to include this discussion (as well as the supporting figures) in the revision to strengthen the motivation of the work. Please also consider reporting qualitative and quantitative results of baselines combining 512x512 SD with latest SR methods (e.g., Real-ESRGAN). Lastly, as Reviewer QLpB also pointed out, growing image size by a small factor of 2 cannot sufficiently justify the merit of the method. What is the largest image size the method can support? This is an important question which is worth an ablation study. It would also be interesting to compare to MultiDiffusion mentioned by Reviewer p6J2 in this context.
---
Reply to Comment 1.1.1:
Comment: Thanks for the valuable comments.
1. We have included the argument as well as the supporting figures in the revision. We will update the revised version after the revision submission is available.
2. Specifically, the results denoted *512x512 and Resize to 768/1024* in Table 1 in the previous rebuttal comment combine 512x512 SD with LIIF [1], one of the latest SR methods supporting any-scale super-resolution. Note that our method introduces different visual information than SR methods do (please refer to the PDF page). We have included the results in the revision, which will be updated once revision submission is available.
3. In Table 1 in the previous rebuttal comment, we followed the suggestion of Reviewer QLpB to expand the evaluation range from $256^{2}$ ~ $768^{2}$ (x0.5 ~ x1.5) to $128^{2}$ ~ $1024^{2}$ (x0.25 ~ x2). The results show that our method effectively improves FID at different resolutions and allows synthesis at higher resolutions within a reasonable range, which Reviewer QLpB has acknowledged.
4. We compare our method with MultiDiffusion and report the corresponding FID (4K), memory usage and time cost in Table 2. As shown in Table 2, our method outperforms MultiDiffusion on every metric except GPU memory at the high resolution.
| Resolution | GPU Memory (MiB) $\downarrow$ | FID (4K) $\downarrow$ | Average Time (s) $\downarrow$ |
| :--------------- | :---------------------------- | :-------------------- | :---------------------------- |
| 224 * 224 (Ours) | 7064 | 50.3515 | 5.6274 |
| 224 * 224 (MD) | 7061 | 154.6925 | 31.7680 |
| 768 * 768 (Ours) | 19797 | 28.1372 | 36.7140 |
| 768 * 768 (MD) | 9906 | 40.9270 | 122.2485 |
Table 2: FID (4K samples), memory usage and time cost of diffusions with resolution $224^{2}$ and $768^{2}$.
\[1]: Learning Continuous Image Representation with Local Implicit Image Function, CVPR 2021. | Summary: In this paper, the authors propose a new scaling factor for attention based text-to-image generative models in order to handle variable sized generations. The authors establish the relationship between attention entropy and token size and use this newly found relationship to design a scaling factor that takes into account the image resolution. The authors also empirically show the effectiveness of this scaling factor by comparing FIDs, CLIP scores and qualitative examples.
Strengths: The proposed scaling factor is very simple and easy to implement. It does not require model architecture modifications or any training of the pretrained model. The qualitative results also look very promising. Therefore I think this can be easily applied in many existing models. Overall the paper is well organized and well written.
Weaknesses: 1. The main formula (Equation 6 and 7) is not very well explained. There are a lot of conjectures and approximations without justifications.
2. The authors did not compare with synthesis in fixed resolution and then performing super-resolution/downsampling. Since the resolution experimented in this paper is not very far away from the pretrained resolution, it is highly likely that super-resolution/downsampling may work just as well.
3. The authors should also compare their method with other machine learning based methods such as MultiDiffusion (https://multidiffusion.github.io/)
4. The compensation for human annotator is not mentioned
5. Typo in Equation 1,3: j is used in both the numerator and the denominator of Aij
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Related to Weakness (1), Section 3.2 is still a little bit confusing to me, can the authors explain more on the connection between Equation 5, 6 and 7? I.e. How did the authors come up with Equation 6 and 7?
2. Related to Weakness (2), can the authors compare their method with synthesis in fixed pretrained resolution and perform super-resolution/downsampling? Can authors also compare to more dramatic resolution changes (eg. 512 => 128 or 512 => 1024)?
3. Related to Weakness (3), can the authors compare their method with MultiDiffusion?
4. Is FID appropriate for evaluating wrt different resolutions? Since it is trained on fixed resolution images. And how did the authors select the reference/ground truth images for FID evaluation? Did they uniformly rescale to the desired resolution or did they subsampled based on the native resolution? It may have a difference in the evaluation results.
5. Although CLIP score is a reasonable indicator, I don’t think it is a perfect metric especially when object count is involved. Have the authors considered using off-the-shelf object detectors as one of the methods to quantitatively measure the performance?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Related to Weakness (4), the compensation for human annotators is not specified in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: More explanation on Equation 5, 6 and 7.
**A1**: In Equation 5, we derive the relationship between attention entropy $Ent(A_{i})$ and token number $N$, i.e. $Ent(A_{i}) = \log N - \frac{1}{2} \lambda^{2} C + O(1)$, where $\lambda$ is the scaling factor and $C$ is a constant independent of the token number $N$. Recall that our goal is to control the fluctuating entropy $Ent(A_{i})$. Given the form of Equation 5, we set $\lambda$ to scale with the square root of the logarithm (i.e., $\lambda = \alpha \sqrt{\log N}$), which is Equation 6 in the paper, where $\alpha$ is a newly introduced hyper-parameter. Note that during training the token number $N$ is a constant $T$ and $\lambda$ is fixed at $1 / \sqrt{d}$, which means $\lambda = 1 / \sqrt{d}$ when $N = T$. We therefore have $\alpha \sqrt{\log T} \approx 1 / \sqrt{d}$, giving an approximate analytical solution for the hyper-parameter: $\alpha \approx 1 / \sqrt{d\log T}$. Substituting this into Equation 6 yields $\lambda \approx \sqrt{\frac{\log_T N}{d}}$, which is Equation 7.
In brief,
- Considering the granularity of $\lambda$ from Equation 5, we get Equation 6.
- Combining Equation 6 with the condition that $\lambda \approx 1 / \sqrt{d}$ when $N = T$, we get Equation 7.
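To make the derivation concrete, here is a minimal Python sketch of the Equation 7 scale (the function name and constants are illustrative, not from the paper): it reduces to the standard $1/\sqrt{d}$ when the inference token count $N$ equals the training token count $T$, and grows for larger $N$ to counteract the $\log N$ growth of attention entropy.

```python
import math

def entropy_aware_scale(N, T, d):
    """Resolution-dependent attention scale from Equation 7:
    lambda ≈ sqrt(log_T(N) / d) = sqrt(log(N) / (d * log(T))).
    Recovers the standard 1/sqrt(d) when N == T."""
    return math.sqrt(math.log(N) / (d * math.log(T)))

d, T = 64, 4096  # illustrative head dimension and training token count
# At the training resolution the scale equals the usual 1/sqrt(d):
assert abs(entropy_aware_scale(T, T, d) - 1 / math.sqrt(d)) < 1e-12
# More tokens (higher resolution) -> larger scale -> sharper attention,
# which offsets the entropy increase from having more tokens:
print(entropy_aware_scale(2 * T, T, d) > entropy_aware_scale(T, T, d))
```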
**Q2**: Comparison to super-resolution/downsampling methods.
**A2**: We would like to point out the advantages of our method against other methods in three key aspects.
1. **Better Support for Different Aspect Ratios**. Other methods do not support generating new images with an aspect ratio different from that of the original fixed-resolution images. In comparison, our method improves image generation at different aspect ratios, which is validated by both qualitative and quantitative experiments in our paper.
2. **Richness of Visual Information (More Important for Higher Resolutions)**. When users generate images at higher resolutions, they expect not only more pixels but also richer semantic information. Our method introduces more information by enabling models to process more tokens, whereas super-resolution and other methods simply scale up the original images and do not contribute to the richness of visual information.
To illustrate the non-trivial difference in the richness, we have provided examples in Figure 1 in the PDF page.
3. **Time Cost and Memory Usage (More Important for Lower Resolutions)**. For diffusion models adapted by our method, time cost and memory usage become proportional to the generation resolution, while downsampling methods are always constrained by the fixed cost of the training resolution. In this way, our method efficiently enables low-resolution image synthesis, especially on portable devices, which place high demands on time and memory in addition to image fidelity.
In Table 1, we present the FID, memory usage and time cost of Stable Diffusion with different methods at resolution 224. At low resolution, our method achieves better memory usage and time cost, with a partial trade-off in fidelity. Given the fine qualitative results in Figure 3 in the paper, we believe the trade-off is acceptable for the target scenario of low-resolution image synthesis.
| Resolution | GPU Memory (MiB) | FID | Average Time (s) |
| --------------------------------- | ---------------- | ------- | ---------------- |
| 224 * 224 (Original) | 7064 | 74.5742 | 5.6268 |
| 224 * 224 (Ours) | 7064 | 41.8925 | 5.6274 |
| 512 * 512 and Downsampling to 224 | 9324 | 21.8415 | 17.4457 |
Table 1: FID, memory usage and time cost of stable diffusion with different methods with resolution $224^{2}$. For more results, please refer to Table 1 in other rebuttals due to character limits.
**Q3**: Comparison with MultiDiffusion.
**A3**: We compare our method with MultiDiffusion and report the corresponding FID (4K), memory usage and time cost in Table 2. As shown in Table 2, our method outperforms MultiDiffusion at both resolutions except for GPU memory at the high resolution.
| Resolution | GPU Memory (MiB) | FID (4K) | Average Time (s) |
| ---------------- | ---------------- | -------- | ---------------- |
| 224 * 224 (Ours) | 7064 | 50.3515 | 5.6274 |
| 224 * 224 (MD) | 7061 | 154.6925 | 31.7680 |
| 768 * 768 (Ours) | 19797 | 28.1372 | 36.7140 |
| 768 * 768 (MD) | 9906 | 40.9270 | 122.2485 |
Table 2: FID (4K samples), memory usage and time cost of diffusions with resolution $224^{2}$ and $768^{2}$.
**Q4**: Limits on the FID metric.
**A4**: In this paper we only use FID to compare images at the same resolution (w/ and w/o our method) to validate the efficacy of our method, so we believe the comparison is fair and credible. Specifically, we uniformly rescale the reference images to the desired resolution for FID. In addition, we have discussed the limits of FID in the Limitations section and supported our claims with additional qualitative experiments and a user study.
**Q5**: Using off-the-shelf object detectors.
**A5**: Please note that repetitive patterns in images are not expected to be detected by object detectors, since the repetitiveness manifests in texture/composition rather than in countable objects (please refer to Figure 5 in the Supplementary Materials). Thus, we think using object detectors would not help.
**Q6**: Typos and unspecified compensation.
**A6**: We have revised our script by changing j in the numerator to j' and adding "Each human annotator is paid $15 for effective completion of the user study evaluation" in Section 4.1, Line #213.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I believe all my concerns have been addressed. Therefore I would like to raise my score from 4 to 5. | Rebuttal 1:
Rebuttal: Thanks for all the reviewers and AC for your time and valuable comments!
This comment is followed by the PDF page with new figures to support our views in information richness.
Pdf: /pdf/bebc28b02cbd13eb4b5f18b0f661ca9e7e03f13e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data | Accept (poster) | Summary: This paper introduces MuSe-GNN, a model for learning gene embeddings from single-cell sequencing and spatial transcriptomic data that is based on multimodal machine learning and deep graph neural networks. While incorporating regularization with weighted similarity learning and contrastive learning to learn cross-data gene-gene relationships, the proposed method creates informative graph structures for model training and the generation of gene representations, which can make the learned gene representations contain functional similarity across various contexts in a joint space. The results of tests on diverse multimodal biological datasets show that MuSe-GNN can efficiently learn the functional similarity of genes across tissues and methods and surpasses current gene embedding learning models across several metrics. This model can aid in the investigation of the pathogenic processes underlying conditions like COVID and lung cancer.
Strengths: 1. The proposed model learns the gene expression, which incorporates the regularization with weighted similarity learning and contrastive learning to learn cross-data gene-gene relationships. The proposed model can analyze the large-scale multimodal biological datasets.
2. The model is applied to analyze COVID and cancer datasets, which helps to unveil potential disease resistance mechanisms or complications based on specific differentially co-expressed genes.
3. MuSe-GNN has obtained significant improvement from 24.2% to 100.4% in comprehensive assessment.
4. The overall pipeline and the mechanisms are presented in detail.
Weaknesses: 1. What is the difference between this paper and Geneformer [1]? How do their performances compare in experiments where they can be evaluated together?
2. In Figure 1, why do Gene2vec and GIANT fail to learn gene similarity? Their embeddings are clustered by dataset.
[1] Theodoris, Christina V., Ling Xiao, Anant Chopra, Mark D. Chaffin, Zeina R. Al Sayed, Matthew C. Hill, Helene Mantineo et al. "Transfer learning enables predictions in network biology." Nature (2023): 1-9.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors have discussed their limitations and potential impact of gene embeddings for diseases, COVID.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive comments. The detailed response to each point is as follows.
**1. What is the difference between this paper and the Geneformer [1]. What about the comparisons of the performance of the experiments if they can be compared together?**
We appreciate your questions. The distinguishing features between the work presented in this paper, namely MuSe-GNN, and Geneformer can be summarized as follows:
1. Geneformer is a Large Language Model (LLM) for single-cell data, which leverages a large-scale single-cell RNA-seq (scRNA-seq) dataset (a single modality). It employs a self-supervised pre-training approach, similar to BERT [2], to predict cell types or gene functions through fine-tuning on specific datasets.
2. MuSe-GNN is specifically designed for gene representation learning, leveraging large-scale single-cell multi-omics data to create meaningful gene embeddings through an unsupervised learning framework.
The main function of Geneformer is annotation and prediction, and it does not guarantee the generation of high-quality embeddings. Geneformer's training framework is supervised and requires labels, in contrast to our unsupervised approach. Geneformer also has more limited application scenarios, as shown by our experiments below. Moreover, we have already benchmarked a similar method, scBERT [3]. Therefore, MuSe-GNN offers more functionality than Geneformer and can handle multimodal datasets. In addition, MuSe-GNN is not as resource-intensive and time-consuming (a common limitation of LLMs) as Geneformer.
Regarding possible shared experiments between Geneformer and MuSe-GNN, we considered both the gene function prediction task, a key feature of Geneformer, and the gene representation learning task, central to MuSe-GNN.
For the first task, we produced gene embeddings from the dataset utilized in [4] via MuSe-GNN and employed Support Vector Machine [5] as the classifier. This dataset was previously used by Geneformer for the gene function prediction task. The resulting accuracy scores are presented in Table 1.
Table 1: Accuracy of gene function prediction task. Here "unsup" means unsupervised learning and "sup" means supervised learning.
| | MuSe-GNN (unsup) | Geneformer (sup) | Raw |
|----------|------------------|------------------|--------------|
| Accuracy | 0.77 $\pm$ 0.01 | 0.74 $\pm$ 0.06 | 0.75 $\pm$ 0.01 |
We present both the average accuracy and standard deviation for three different configurations: one based on gene embeddings derived from MuSe-GNN, another based on Geneformer, and the last one based on raw gene expression data (Raw). The classification process for raw data also utilizes the Support Vector Machine.
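As a sketch of this evaluation protocol, the mean $\pm$ standard deviation entries in Table 1 can be computed as below. The per-run accuracy values here are hypothetical stand-ins chosen only for illustration; the real numbers come from SVM classification of the gene embeddings.

```python
from statistics import mean, stdev

# Hypothetical per-configuration accuracies over three runs;
# each run would correspond to one SVM train/test evaluation.
runs = {
    "MuSe-GNN (unsup)": [0.78, 0.76, 0.77],
    "Geneformer (sup)": [0.80, 0.74, 0.68],
    "Raw":              [0.76, 0.75, 0.74],
}

# Report mean accuracy with its standard deviation, as in Table 1.
for name, accs in runs.items():
    print(f"{name}: {mean(accs):.2f} +/- {stdev(accs):.2f}")
```

Note that a larger standard deviation (as for the middle configuration here) indicates less robust performance across runs, which is the comparison drawn below.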
Our results indicate that MuSe-GNN's performance is comparable to that of Geneformer, yet with less variability, suggesting that MuSe-GNN is more robust. Importantly, while Geneformer operates on the principles of supervised learning, MuSe-GNN employs an unsupervised approach. As a result, the gene embeddings produced by MuSe-GNN are capable of being utilized across a broader range of downstream tasks, as they encapsulate more meaningful biological contexts.
For the second task, we attempted to generate gene embeddings using Geneformer and record its performance. However, due to the absence of gene names in Geneformer's input and output, the benchmarking process could not be executed. Moreover, the absence of gene names greatly affects downstream applications; for example, analyses like pathway enrichment or disease analysis become infeasible. Hence, MuSe-GNN has more functions than Geneformer.
**2. In Figure 1, why Gene2vec and GIANT failed to learn the gene similarity? The embeddings are clustered by datasets.**
We appreciate your insightful observation. We have the following explanation:
This observed phenomenon arises due to these methods failing to accurately capture similar associated genes across various multimodal data. More details can be found in the Introduction section of our manuscript.
The UMAP visualizations of GIANT's gene embeddings reveal that it learned such information for only part of the datasets; therefore, GIANT does not effectively capture the common gene information that spans different datasets and contexts. The UMAP visualizations of Gene2vec reveal that it failed to learn such information across all the datasets.
In contrast, our proposed model, MuSe-GNN, addresses this issue through the integration of a cross-graph Transformer model along with various loss functions. This unique approach enables our model to efficiently capture and assimilate information from multiple datasets. This results in improved gene similarity learning and effectively overcomes the dataset bias that both Gene2vec and GIANT encounter. Our design for such a function was explained in the Methods section and Appendix D.1 of the Supplementary Materials.
Please let us know if you have further questions, and we will do our best to address them. Once again, we thank you for your thoughtful review.
References:
[1] Theodoris, Christina V., Ling Xiao, Anant Chopra, Mark D. Chaffin, Zeina R. Al Sayed, Matthew C. Hill, Helene Mantineo et al. "Transfer learning enables predictions in network biology." Nature (2023): 1-9.
[2] Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
[3] Yang, Fan, et al. "scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data." Nature Machine Intelligence 4.10 (2022): 852-866.
[4] Franzén, Oscar, Li-Ming Gan, and Johan LM Björkegren. "PanglaoDB: a web server for exploration of mouse and human single-cell RNA sequencing data." Database 2019 (2019): baz046.
[5] Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." the Journal of machine Learning research 12 (2011): 2825-2830.
---
Rebuttal Comment 1.1:
Comment: I have read your rebuttal and appreciate your effort. It is good to compare with Geneformer. Good luck!
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you very much for your recognition and wishes! Your time and efforts are greatly appreciated. | Summary: The authors proposed a novel deep graph neural network, named MuSe-GNN, to learn gene representations from single-cell sequencing and spatial transcriptomic data.
The idea is to construct gene graphs from single-cell sequencing and spatial transcriptomic data, and then learn gene embeddings via cross-graph transformer.
Strengths: The tasks of learning gene embeddings from multi-context sequencing profiles is of interest especially in the bioinformatic domain.
The empirical results seem to indicate that the approach can be applied to different analysis frameworks and the gene representations learned by their approach are highly versatile.
The writing and presentation are good.
Weaknesses: This idea is not new, and the models they used are all well established.
Lack of comparison with related works.
Lack of ablation studies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: One additional ablation experiment would be of interest: how much would the performance be impacted if you induce the embedding using only part of the modules.
They can compare their method with other deep learning-based multimodal representation learning models, such as [1].
[1] Lin X, Tian T, Wei Z, et al. Clustering of single-cell multi-omics data with a multimodal deep learning method[J]. Nature communications, 2022, 13(1): 7705.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive comments. The detailed response to each point is as follows.
**1. This idea is not new, and the models they used are all well established. Lack of comparison with related works. Lack of ablation studies.**
We appreciate your comments concerning the novelty of our work, which we address in two distinct aspects.
The first aspect pertains to the novelty of our model objective. As outlined in the Introduction section, the conventional method of multi-omics data integration hinges on using cells as dimensionality reduction anchors. Such approach, however, faces significant computational hurdles including data-specific challenges, substantial data volumes, and batch effects.
Therefore, we propose to shift our focus to gene perspective. The existing models fail in integrating gene information across different multimodal biological data. By designing MuSe-GNN, we aim to address a fundamental constraint of a representation learning problem in the computational biology area. We provide more details in the Introduction section.
The second aspect revolves around the novelty of our methodology framework. To our knowledge, our work is the first instance in gene representation learning where the Multimodal Machine Learning (MMML) paradigm is synergistically paired with deep Graph Neural Network (GNN) designs. This combination has enabled us to achieve state-of-the-art performance according to our experimental results.
Furthermore, our core model comprises three unique components, each validated through an ablation study presented in Appendix D.1 of the Supplementary Materials. Finally, we consider constructing our graph data based on **multimodal data context** and **statistical test**, which is also novel and reliable.
Regarding the question about the model established level, we think that the **Cross-graph Transformer** design remains an active area of exploration, especially in the computational biology track. Furthermore, we introduced the **Weighted-Similarity Learning** design, which represents a novel attempt to enhance the performance of Graph Neural Networks (GNN). Please find more details about these design elements in the Methods section of our manuscript.
Regarding the question about model comparisons, our analysis involves a range of competitors that span gene representation learning methods (Gene2vec, GIANT), representative Graph Neural Networks (GNNs) (GAE, VGAE), and methods rooted in pre-training (scBERT) or other machine learning frameworks (PCA, WSMAE, MAE). We also intend to include Geneformer [1], a relatively recent model, in our comparisons. As indicated in Table 1 of our main text, our model outperformed all these competitor methods. We think our benchmarking is comprehensive, as noted by Reviewers cB5G and vvZM.
Regarding the question about ablation studies, please see the information from Appendix D.1 of the Supplementary Materials, where we tried the ablation tests with different loss functions, different input settings, the contribution of cross-graph design, and the performance of different GNN structures. As noted by reviewer cB5G, our ablation tests are exhaustive and informative.
**2. One additional ablation experiment would be of interest: how much would the performance be impacted if you induce the embedding using only part of the modules. They can compare their method with other deep learning-based multimodal representation learning models, such as [2].**
We appreciate your suggestions. Regarding the possible ablation test about dropping modalities (We assume 'modules' was meant to refer to 'modalities'; please let us know if this is not the case), we would like to emphasize the crucial role that data from diverse contexts play in our model's design. Comparisons between data with a greater number of modalities and those with fewer might be deemed unfair due to the variations in the sizes of the training datasets.
Additionally, theoretical research, as exemplified by [3], has demonstrated that the introduction of more modalities can enhance a model's performance in addressing related downstream tasks. We extrapolate this principle to the domain of gene representation learning. The central premise here is that the inclusion of data from a wider range of modalities can generate a mapping function that features a reduced upper bound of loss. Such information will be included in our updated version.
Regarding the comparison with [2], we posit that such a comparison might not be entirely relevant due to two significant differences between the methods. Firstly, scMDC is designed for cell embeddings, while our model centers around gene embeddings. Secondly, scMDC necessitates 'paired' scRNA-ADT or scRNA-scATAC data, which are not only limited in availability but also differ considerably from the input our model requires. Instead of a direct comparison with scMDC, we have expanded on the comparison between MuSe-GNN and Geneformer in our rebuttal suggested by Reviewer vvZM.
Nevertheless, we will discuss this paper [2] as an example of learning cell embeddings in our Related work section.
Please let us know if you have any further questions, and we will try our best to address them. Once again, we thank you for your thoughtful reviews.
References:
[1] Theodoris, Christina V., et al. "Transfer learning enables predictions in network biology." Nature (2023): 1-9.
[2] Lin X, Tian T, Wei Z, et al. Clustering of single-cell multi-omics data with a multimodal deep learning method[J]. Nature communications, 2022, 13(1): 7705.
[3] Huang, Yu, et al. "What makes multi-modal learning better than single (provably)." Advances in Neural Information Processing Systems 34 (2021): 10944-10956. | Summary: This paper addresses the heterogeneity problem in the gene representation learning from multi-context biomedical sequencing profiles. Specifically, gene connections are formulated through co-expression network, then the proposed MuSe-GNN utilizes cross-graph Transformer to generate gene embeddings, while designing a contrastive-learning based regularization term to integrate multi-modal data.
Strengths: 1. The paper is well-motivated. Effective gene representation learning from multi-context profile datasets is critical for real-world biomedical applications.
2. The experimental results demonstrate that MuSe-GNN achieves the state-of-art performance and brought a significant improvement.
3. Bioinformatic analysis identifying significant pathways, diseases, and causal networks provides biologically informed insights.
Weaknesses: 1. The novelty of the proposed method is limited.
2. The methodology part is not adequately described. The cross-graph Transformer section is hard to follow and Figure 2 can be improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Minors:
The application area is rather specialized, so this paper may be of interest to a smaller subset of the NeurIPS community.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive comments. The detailed response to each point is as follows.
**1. The novelty of the proposed method is limited.**
We appreciate your inquiry concerning the novelty of our work, which we address in the following from two aspects.
The first aspect pertains to the novelty of our model objective. As outlined in the Introduction section, the conventional method of multi-omics data integration hinges on using cells as dimensionality reduction anchors. Such an approach, however, faces significant computational hurdles, including data-specific challenges, substantial data volumes, and batch effects.
Therefore, we propose to shift our focus to the gene perspective. Existing models fail to integrate gene information across different multimodal biological data. By designing MuSe-GNN, we aim to address a fundamental constraint of a representation learning problem in computational biology. We provide more details in the Introduction section.
The second aspect revolves around the novelty of our methodology framework. To our knowledge, our work is the first instance in gene representation learning where the Multimodal Machine Learning (MMML) paradigm is synergistically paired with deep Graph Neural Network (GNN) designs. This combination has enabled us to achieve state-of-the-art performance according to our experimental results.
Furthermore, our core model comprises three unique components, each validated through an ablation study presented in Appendix D.1 of the Supplementary Materials. Finally, we consider constructing our graph data based on **multimodal data context** and **statistical test**, which is also novel and reliable.
**2. The methodology part is not adequately described. The cross-graph Transformer section is hard to follow and Figure 2 can be improved.**
We appreciate your comment regarding the clarity of our method description. We add more details on the graph transformer design and the figure as follows:
Regarding the description of our methodology, we have summarized all the essential components in the Methods section of our main text. Due to the page limit, we extended our method explanation in Appendix B of the Supplementary Materials.
Regarding our cross-graph transformer design, we have addressed it in both the main text and the appendix: a comprehensive explanation is given in Section 3.3 of the Methods in our manuscript, and further details about the graph transformer architecture appear in Appendix B.4 of the Supplementary Materials.
We also improved Figure 2 with more annotations and attached it to the Author Rebuttal according to the rebuttal requirement. Please let us know if you have further questions or comments.
The updated figure description is: The overall model architecture and the design of loss functions for MuSe-GNN. The brown block represents the network architecture of MuSe-GNN. The blue block represents different loss function components of MuSe-GNN. The green block represents the input datasets. The color gradients of the left two matrices represent different gene expression levels. All the components of MuSe-GNN and gene embeddings are annotated.
**3. This paper may be of interest to a smaller subset of the NeurIPS community.**
We appreciate your comment. As for the concerns regarding the specialized nature of our topic, we respectfully offer a different perspective. The intersection of AI and science [1], particularly within the realm of biology, has been an integral part of top-tier machine learning conferences in the past decades. The application of AI technology to address critical biological questions holds immense excitement and significance. This is evidenced by groundbreaking works such as Alphafold2 [2], which revolutionized protein design.
We note that there are many papers and activities about computational biology in NeurIPS, including:
**Papers:**
Hetzel, Leon, et al. "Predicting cellular responses to novel drug perturbations at a single-cell resolution." Advances in Neural Information Processing Systems 35 (2022): 26711-26722.
Xiao, Feiyi, et al. "Estimating graphical models for count data with applications to single-cell gene network." Advances in Neural Information Processing Systems 35 (2022): 29038-29050.
Tu, Xinming, et al. "Cross-linked unified embedding for cross-modality representation learning." Advances in Neural Information Processing Systems 35 (2022): 15942-15955.
**Workshops:**
LMRL - Learning Meaningful Representations of Life (https://www.lmrl.org/).
AI for Science (https://ai4sciencecommunity.github.io/neurips23/).
Machine Learning in Structural Biology Workshop (https://neurips.cc/virtual/2023/workshop/66513).
**Competitions:**
Open Problems in Single Cell Analysis - NeurIPS 2022: Multimodal Single-Cell Integration Across Time, Individuals, and Batches (https://openproblems.bio/events/2022-08_neurips/) (More than 1000 teams).
Open Problems in Single Cell Analysis - Coming August 2023 - Single-Cell Perturbation Prediction (https://openproblems.bio/events/2023-08_neurips/).
We think these references underscore the impact and relevance of our work within the realm of AI for science and biology. As commented by reviewer vvZM, our manuscript has excellent impact on at least one area, or high-to-excellent impact on multiple areas. Therefore, we think our research will capture the interest of scholars beyond these disciplines. There is also likely knowledge transfer, as a key property of machine learning research, among different areas.
Please let us know if you have further questions, and we will do our best to address them. Once again, we thank you for your thoughtful review.
References:
[1] Wang, Hanchen, et al. "Scientific discovery in the age of artificial intelligence." Nature 620.7972 (2023): 47-60.
[2] Jumper, John, et al. "Highly accurate protein structure prediction with AlphaFold." Nature 596.7873 (2021): 583-589.
---
Rebuttal Comment 1.1:
Comment: I have read your rebuttal and my concerns are well addressed. Thus, I have increased my score.
---
Reply to Comment 1.1.1:
Title: Thanks a lot!
Comment: Thank you very much for your recognition. Your time and efforts are greatly appreciated. | Summary: The paper describes an approach that combines Multimodal Machine Learning and Deep Graph Neural Networks to learn gene representations from multi-omics and multi-tissue data. The main issue that this paper tries to address is that existing approaches fail to obtain gene representations that are consistent across modalities (esp. from different data sources but same tissue). To accomplish this goal, the authors propose a model called MuSE-GNN that takes as input graph co-expression networks extracted from different modalities and projects them into a unified and consistent latent space to extract gene embeddings. These gene embeddings are then used for several downstream tasks such as pathway analysis, causal network analysis, and disease analysis. In particular, the model is learned through a loss function composed of three terms: a co-expression graph reconstruction term, a weighted similarity term (where the pairwise similarity of gene representations across modalities is maximized), and a self-supervised contrastive term (to cluster together functionally similar genes and pull apart functionally different genes).
The model is evaluated on a benchmark consisting of different tissues against 9 other methods, where it shows excellent performance on different metrics. In a subsequent experiment, the authors show that MuSE-GNN embeddings successfully relate to functional groups in human data. Lastly, they present a case study on COVID-19 where they run the model on pancreatic human cells of healthy vs. diseased COVID patients to identify differentially expressed genes, showing (with gene enrichment) that these genes are related to immune system activity and that some of them are part of known causal networks of the regulatory activity of the immune system.
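Schematically, the three-term objective described above can be written as follows; the weighting coefficients $\lambda_1, \lambda_2$ are illustrative placeholders, not notation taken from the paper:

```latex
\mathcal{L} \;=\; \underbrace{\mathcal{L}_{\text{recon}}}_{\text{co-expression graph reconstruction}}
\;+\; \lambda_1 \underbrace{\mathcal{L}_{\text{sim}}}_{\text{cross-modality gene similarity}}
\;+\; \lambda_2 \underbrace{\mathcal{L}_{\text{contrast}}}_{\text{self-supervised contrastive term}}
```

Here the reconstruction term preserves co-expression structure within each graph, the similarity term aligns representations of the same gene across modalities, and the contrastive term clusters functionally similar genes while separating dissimilar ones.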
Strengths: This paper is well-written and easy to follow. It studies a relevant problem (how to obtain consistent gene embeddings across modalities) and proposes a quite elegant solution (make them consistent mainly through a tailored loss function). The experimental results are good and the ablations (in the appendix) are exhaustive and very informative.
Weaknesses: It seems that the competitors have been evaluated with default hyper-parameters, while the hyper-parameters of MuSE-GNN have been tuned. This is unfair, since the results of MuSE-GNN are optimized to the experimental settings of this paper, while those of the competitors are not. This might be addressed by tuning (some of) the hyper-parameters of the competitors, although I understand it might be difficult given the time constraints of the review process.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - from the code, I can see that the competitors are trained with random seeds in the range [0, 10) while I can't see the same for MuSE-GNN. Can you confirm that MuSE-GNN has been trained with the same random seeds and thus the results are comparable?
- Can you explain better this sentence in the appendix: "Methods such as WSMAE, GAE, VGAE, and MAE, which are based on MLP/GNNs and inspired by other methods, are configured with the best hyperparameters based on parameter searching to ensure fairness."?
- Your model uses 349M parameters. How many parameters GIANT uses? If they don't match, can you train a version of MuSE-GNN with a comparable number of parameters to GIANT (or alternatively, a version of GIANT with a number of parameters comparable to MuSE-GNN)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Not discussed. I don't see many limitations of this approach besides getting data of proper quality (which is independent of the contribution). Perhaps, as mentioned by the authors, the model is quite "heavy" in terms of number of parameter and one needs to bring down the embedding dimension to avoid OOM errors.
No negative societal impact of concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive comments. The detailed response to each point is as follows.
**1. It seems that only the hyper-parameters of MuSe-GNN were tuned, while other competitors were not.**
We appreciate your question. We have mentioned the details of parameter tuning for other methods in Appendix D.2 of the Supplementary Materials. We search for the best hyper-parameter combination for methods based on deep learning without pre-training.
When it comes to methods not grounded in deep learning but specifically tailored for gene representation learning - PCA, Gene2vec, and GIANT - we did not adjust their parameters. However, in light of your feedback, we agree that it is important to consider the hyper-parameters for these methods. We have performed experiments by adjusting the dimensions of the resultant gene embeddings, thereby testing the performance across different models. The search range for these dimensions is set between 32 and 256. For this analysis, we utilized the datasets derived from the heart tissue. Due to the word limit, we recorded the raw score with standard deviation in the pdf file attached to the Author Rebuttal.
Table 1: Min-max scaled benchmarking score table for different optimized models, where the star represents the optimized version.
| Methods | ASW | AUC | iLISI | GC | CGR | NO | Average |
|---|---|---|---|---|---|---|---|
| PCA* | 0.17 | 0.37 | 0.76 | 0.08 | 0.47 | 0.90 | 0.46 |
| Gene2vec* | 0.70 | 0.91 | 0.00 | 0.68 | 0.00 | 0.32 | 0.44 |
| GIANT* | 1.00 | 0.00 | 0.40 | 0.00 | 0.16 | 0.00 | 0.26 |
| MuSe-GNN | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | **0.83** |
Based on the results presented in Table 1, we can conclude that even when compared with the other three competitors, which have been optimized for gene representation learning tasks, MuSe-GNN consistently outperformed the other methods.
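For concreteness, the per-metric min-max scaling behind a table like the one above can be sketched as follows. The raw metric values below are hypothetical placeholders; the actual raw scores with standard deviations are in the PDF attached to the Author Rebuttal.

```python
# Min-max scale raw benchmark scores per metric across methods,
# then average the scaled scores across metrics per method.
# All raw values here are hypothetical, for illustration only.
raw = {
    "PCA*":      {"ASW": 0.52, "AUC": 0.61},
    "Gene2vec*": {"ASW": 0.60, "AUC": 0.75},
    "MuSe-GNN":  {"ASW": 0.49, "AUC": 0.80},
}

metrics = ["ASW", "AUC"]
scaled = {m: {} for m in raw}
for metric in metrics:
    vals = [raw[m][metric] for m in raw]
    lo, hi = min(vals), max(vals)
    for m in raw:
        # best method on this metric maps to 1.0, worst to 0.0
        scaled[m][metric] = (raw[m][metric] - lo) / (hi - lo)

averages = {m: sum(scaled[m][met] for met in metrics) / len(metrics)
            for m in raw}
```

Because each column is rescaled independently, a method can score 0.00 on one metric yet still have the highest overall average, as MuSe-GNN does in the table above.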
**2. Can you confirm that MuSE-GNN has been trained with the same random seeds and thus the results are comparable?**
We appreciate your observation and question. We confirm that the same random seeds were consistently utilized during the training phase for all benchmarking methods and MuSe-GNN. We intend to provide more comprehensive code in the subsequent version of our work.
**3. Can you explain better this sentence in the appendix: "Methods such as WSMAE, GAE, VGAE, and MAE, which are based on MLP/GNNs and inspired by other methods, are configured with the best hyperparameters based on parameter searching to ensure fairness."?**
We appreciate your question. We now elaborate our statement below:
The methods under discussion, namely WSMAE, GAE, VGAE, and MAE, are primarily built on Multi-Layer Perceptron (MLP) or Graph Neural Network (GNN) frameworks and draw inspiration from various other methodologies. To uphold fairness in our study, these methods were fine-tuned and configured with their best-performing hyper-parameters.
The hyper-parameter tuning process entails carrying out multiple experiments with various hyper-parameter configurations to identify the one that delivers the most superior model performance.
However, given that we have now incorporated hyper-parameter tuning even for methods that do not rely on deep learning, we will strive to improve the clarity of this sentence in the forthcoming revision of our manuscript.
**4. Your model uses 349M parameters. How many parameters GIANT uses? If they don't match, can you train a version of MuSE-GNN with a comparable number of parameters to GIANT?**
We appreciate your question. In our experiments, we found that GIANT consumes approximately 260 MB of memory. This result is relatively close to the 349 MB memory usage of our method, suggesting that the memory footprint of both approaches is comparably sized.
Upon further examination of the GIANT model and GIANT paper, we have found that its architecture cannot be modified due to the absence of a user-accessible API for changing the number of layers. We do modify the number of latent dimensions and choose the largest model now. Moreover, we remain open to exploring further possibilities in future research.
**5. Perhaps, as mentioned by the authors, the model is quite "heavy" in terms of number of parameter and one needs to bring down the embedding dimension to avoid OOM errors.**
We appreciate your question and concern. Our ablation tests have indeed shown that excluding raw gene expression from the input significantly impacts MuSe-GNN's performance. Moreover, we have optimized our model to require only a single GPU for training and inference as mentioned in Appendix D.4 of the Supplementary Materials. Nonetheless, we will keep on exploring the optimization of memory efficiency in the future.
Please let us know if you have further questions, and we will do our best to address them. Once again, we thank you for your thoughtful review.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I have read your rebuttal and appreciated your effort. I now think that this paper is mature enough to meet the NeurIPS bar and have changed my score accordingly. Good luck!
---
Reply to Comment 1.1.1:
Title: We appreciate your feedback
Comment: Thank you very much for your recognition and wishes! Your time and efforts are greatly appreciated. | Rebuttal 1:
Rebuttal: # General response
We would like to thank the reviewers for their overall positive comments on the aims of our manuscript, as well as their insightful comments and inquiries.
We appreciate all the reviewers' comments about the strengths of MuSe-GNN, including the importance of the topic (supported by all four reviewers), elegant solutions with informative ablation tests (supported by reviewers cB5G and vvZM), significant improvement over other methods (supported by reviewers anSh and vvZM), meaningful downstream applications (supported by reviewers anSh and vvZM), and a well-written manuscript (supported by reviewers cB5G, mFBG, and vvZM).
We have diligently addressed all the comments, which include clarifying the model's novelty (as raised by reviewers anSh and mFBG), improving the presentation of the model (as suggested by reviewer mFBG), refining our experimental design (as per the feedback from reviewers cB5G and mFBG), and expanding the discussion on related work (as noted by reviewers mFBG and vvZM).
Furthermore, we underscore the importance of our work within the realm of computational biology and provide supplementary materials to advocate for the wider application of AI in scientific research. Once again, we are immensely grateful for the invaluable comments and suggestions from all reviewers.
Pdf: /pdf/1d80aa435fd7937f3ba6a3a7a9bae4124316bd0f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Reversible and irreversible bracket-based dynamics for deep graph neural networks | Accept (poster) | Summary: This paper provides a unified framework, inspired by bracket-based dynamical systems, to analyze the oversmoothing problem in GNNs. Past work may leverage opposite physics concepts, such as reversible and irreversible processes, and it is therefore not clear how such concepts help in designing GNNs, whereas the framework in this paper gives a deeper understanding. Leveraging data-driven exterior calculus, this paper constructs four novel architectures that span both the reversibility and irreversibility spectrum, using geometric brackets as a means of parameterizing dynamics abstractly without empirically assuming a physical model. Interestingly, it can reinterpret message passing and GAT as the fluxes and conservation balances of a physics simulator. It also generalizes the attention mechanism, extending nodal features to higher-order cliques, and provides a unified evaluation of dissipation. Empirically, the authors compare the architectures derived from this framework with classical GNNs, e.g., GAT and GDE, over several benchmarks.
Strengths: 1. This paper is well-written and offers valuable insights into the construction of graph neural networks. As far as I am aware, the idea presented in this paper is innovative. However, since I lack a background in physics, I would appreciate hearing the suggestions of other reviewers who are familiar with bracket-based dynamical systems.
2. The experimental results on the damped double pendulum and MuJoCo dynamics look good.
3. The author made the code available, promoting reproducibility.
Weaknesses: 1. This paper may be a little hard to understand for readers without a physics background.
2. It seems that the algorithm proposed by the authors does not outperform the baselines on the node classification problem.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I am curious about the precise computational complexity of the architectures proposed by the author, specifically in the context of N nodes or N neighbors. It would be beneficial to have concrete values rather than just the order of complexity. Additionally, it would be valuable to understand the advantages of these architectures in comparison to traditional graph neural networks (GNNs).
Regarding the node classification experiment, the algorithm's performance is generally comparable to the baseline, although it does appear weaker in some cases. However, it showcases excellent performance in physical systems like the double pendulum and MuJoCo dynamics. Could this be attributed to the notion that physics-inspired frameworks are inherently more suitable for describing and modeling physical systems rather than social networks?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are happy that you have enjoyed our paper and that you appreciate the results of the physics-based simulations. Below are responses to your specific concerns in the “Weaknesses” section:
1. *This paper may be a little hard to understand for readers without a physics background.*
This is a fair point, as the physics underlying our architectures is reasonably involved. Unfortunately, we are limited by space constraints in the main manuscript and cannot give a comprehensive explanation whenever concepts are introduced. However, note that we have included the Appendices A.1 and A.2 which provide an introductory primer to the graph exterior calculus and bracket-based dynamical systems. Additionally, we also plan to add some more motivating examples and literature references to these Appendices, as well as go carefully through the body to ensure that the meaning of our results is preserved even if the analysis cannot be precisely followed. If you have additional suggestions regarding how to improve the readability of our paper, we are happy to incorporate them.
2. *It seems that the algorithm proposed by the authors does not outperform the baselines on the node classification problem.*
This is true in most cases. However, note that the goal of this work was not to develop an architecture which achieves state-of-the-art performance: this would require significantly more hyperparameter tuning than we have done here, as well as additional empirical modifications in terms of feature encoding, graph rewiring, etc. Instead, we aim to bring some clarity to the roles of reversibility and irreversibility in modern GNN architectures using common benchmarks from both graph analytics and physics-based modeling. By connecting ideas such as gradient flows and graph attention to bracket-based dynamical systems, we have shown that architectures such as GAT and GRAND are not conservative or totally dissipative, and that this has non-obvious consequences on network performance.
Regarding your questions, please see the following responses:
*I am curious about the precise computational complexity of the architectures proposed by the author, specifically in the context of N nodes or N neighbors. It would be beneficial to have concrete values rather than just the order of complexity. Additionally, it would be valuable to understand the advantages of these architectures in comparison to traditional graph neural networks (GNNs).*
The primary advantages of the architectures presented here over traditional GNNs are (1) explicit and automatic preservation of desirable physical properties such as the first and second laws of thermodynamics, and (2) straightforward incorporation of the graph attention mechanism directly into the differential operators governing network evolution. By posing attentional GNNs as bracket-based dynamical systems, we make these networks more interpretable, conceptually simpler, and guaranteed to satisfy useful properties (e.g., dynamical stability) which follow from established theory. This leads to better analysis of network properties and performance, as well as architectures which provably respect physical laws often found in training data.
To address the question of computational complexity: note that our architectures are predicated on the graph exterior calculus, which encodes derivatives in terms of signed incidence matrices d_0, d_1, etc. These derivatives are always local on k-cliques, and our attention matrices A_0, A_1, etc. are always diagonal. This means that operations on graph entities are always sparse, and the complexity will therefore be linear in the graph size. On the other hand, it will also depend on the number of matrix multiplies called for by the evolution equations, the dimension of the features, and the time integrator itself, along with any need to, e.g., compute automatic derivatives during the forward pass. Based on this information, it can be seen that the Hamiltonian bracket is relatively cheap, and the Metriplectic bracket is the most expensive. A rough estimate of this complexity in terms of numerical runtime on CORA is given in the new Table 11.
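The sparsity argument in this reply can be made concrete with a small toy sketch (our own illustrative code, not the paper's implementation): each row of the signed incidence matrix d_0 touches exactly two nodes, and the attention matrices act diagonally on edges, so applying d_0, its transpose, or an attention weighting all cost time linear in the number of edges.

```python
import numpy as np

def apply_d0(edges, x):
    """Apply the signed incidence matrix d_0 (a discrete gradient).
    Each edge touches exactly 2 nodes, so the cost is O(|E|)."""
    return np.array([x[j] - x[i] for (i, j) in edges])

def apply_d0T(edges, n_nodes, y):
    """Apply d_0^T (a discrete divergence): again O(|E|)."""
    out = np.zeros(n_nodes)
    for e, (i, j) in enumerate(edges):
        out[i] -= y[e]
        out[j] += y[e]
    return out

# Toy path graph 0-1-2-3 with scalar node features
edges = [(0, 1), (1, 2), (2, 3)]
x = np.array([1.0, 2.0, 4.0, 7.0])

grad_x = apply_d0(edges, x)          # edge differences: [1, 2, 3]
lap_x = apply_d0T(edges, 4, grad_x)  # d_0^T d_0 x = graph Laplacian action

# A diagonal attention matrix A_1 is just elementwise edge scaling,
# so the whole weighted operator d_0^T A_1 d_0 stays linear in |E|.
a1 = np.array([0.5, 1.0, 2.0])       # hypothetical learned edge weights
weighted = apply_d0T(edges, 4, a1 * grad_x)
```

The remaining cost factors mentioned above (feature dimension, number of matrix multiplies per bracket, the time integrator, and any automatic differentiation in the forward pass) multiply this linear-in-graph-size baseline.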
*Regarding the node classification experiment, the algorithm's performance is generally comparable to the baseline, although it does appear weaker in some cases. However, it showcases excellent performance in physical systems like the double pendulum and MuJoCo dynamics. Could this be attributed to the notion that physics-inspired frameworks are inherently more suitable for describing and modeling physical systems rather than social networks?*
There may be some truth to this. You are right that we do notice better results from structure-preservation on physics-based examples where there is a clear notion of energy/entropy, and the goal is to reproduce the trajectories of dynamical systems. Conversely, it appears that graph classification networks can achieve comparable performance with many different types of structure-preserving and structure-agnostic architectures, perhaps indicating that the dynamics of the internal state in these networks are not critical to understanding the underlying classification problem. With that said, we also notice remarkable stability of our structure-preserving architectures with increasing depth (see the attached Table 10), which is a notable advantage over standard GNN architectures even in this case.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I am glad to see that the authors plan to add some more motivating examples for the reader without physics background. I have no further questions. | Summary: This work provides a comprehensive overview of graph attention networks (GATs), shedding light on their fundamental concepts and principles. Additionally, the authors introduce a set of novel GNN architectures that leverage structure-preserving bracket-based dynamical systems. By incorporating these systems into GNNs, the proposed architectures aim to enhance the modeling capabilities and performance of graph-based machine learning tasks. Overall, the paper presents both a broad perspective on GATs and innovative approaches to further advance the field.
Strengths: They give a detailed theoretical analysis of graph attention networks with exterior calculus techniques. They also provide a generalized attention mechanism and propose several novel structure-preserving GNNs with satisfying performance.
Weaknesses: The table formatting is not clear: sometimes the orange result is better than the blue result, while at other times blue is better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I'd like to know whether your analysis can explain why the "Gradient" model performs worse on heterophily graphs like Computer and Photo. Which physics system is suitable for heterophily graphs, and which is suitable for homophily graphs?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: They have already listed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad that you appreciate our analysis of graph attention networks in the context of the graph exterior calculus, as well as our interpretation of the attention mechanism as a learnable inner product on features. Please see below for responses to your specific concerns:
*The table formatting is not clear: sometimes the orange result is better than the blue result, while at other times blue is better.*
It is true that blue and orange do not always mean “best” and “second best” in our Tables. In particular, we have used orange to represent the best result of the bracket-based architectures, and blue to represent the best result used for comparison. Although this has been mentioned at the beginning of Section 5, we will reiterate this in the captions of our Tables since it was confusing.
*I'd like to know whether your analysis can explain why the "Gradient" model performs worse on heterophily graphs like Computer and Photo. Which physics system is suitable for heterophily graphs, and which is suitable for homophily graphs?*
This is a good question. In general, there is not an easy way to determine whether a particular structure-preserving architecture will be suitable for a particular degree of homophily or heterophily, as it has been shown (see, e.g., [*] below) that the ability of a network to distinguish between neighboring nodes relies crucially on the spectra of the learned weights. On the other hand, it can be informally expected that architectures which are fully or partially conservative – such as the Hamiltonian, Double Bracket, and Metriplectic systems – will have an easier time learning on highly heterophilic data. Indeed, this is what we observe on Computer and Photo in Table 5, where we see that the double bracket architecture is most performant. This is because the totally dissipative Gradient architecture, along with many other architectures based on gradient flows, has a strong tendency to assign similar predictions to nodes in the same local neighborhood, which is colloquially known as “oversmoothing”. While it is possible in theory for a totally dissipative architecture to correctly classify highly heterophilic data, the connection of dissipation/diffusion to graph convolution appears to make this difficult in practice.
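The oversmoothing tendency described above can be illustrated with a toy sketch of our own (a pure gradient flow of the Dirichlet energy, not the paper's Gradient architecture): repeated diffusion steps drive a heterophilic signal, where neighbors deliberately disagree, toward a constant.

```python
import numpy as np

# Graph Laplacian of a path graph 0-1-2-3
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

x = np.array([1.0, -1.0, 1.0, -1.0])  # heterophilic signal: neighbors disagree
tau = 0.1                              # step size (stable for this Laplacian)

for _ in range(200):
    x = x - tau * (L @ x)  # explicit Euler step of the gradient flow dx/dt = -Lx

spread = x.max() - x.min()  # features collapse toward a constant ("oversmoothing")
```

The mean of the features is conserved (diffusion only redistributes), but all differences between neighboring nodes decay, which is exactly the behavior that makes purely dissipative dynamics a poor fit for heterophilic labels.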
[*] Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, and Michael M. Bronstein, “Understanding convolution on graphs via energies”, *arXiv:2206.10991*, 2023. | Summary: The work proposes structure-preserving bracket-based dynamical systems to learn physical systems using GNNs. The authors proposed four formulations depending on completeness and character (conservative or dissipative). The models are demonstrated to be effective in physical system and node classification tasks.
Strengths: * The method is constructed based on solid mathematics, and incorporating attention as a learnable inner product seems quite natural.
* The authors suggested various formalisms depending on completeness and character under the notion of the abstract bracket formulations, which offers a general viewpoint for physical systems.
* The method is quite adaptive because it is demonstrated to be effective in various tasks, including physical system and node classification.
Weaknesses: * It is unclear to the reviewer where the authors' contribution lies and what comes from prior works, particularly in the mathematical part. The presentation could be clearer in distinguishing existing work from the contributions of this work.
* The authors claim, "We use these architectures to systematically evaluate the role of (ir)reversibility in the performance of deep GNNs." However, the reviewer could not discern "the role of (ir)reversibility" from the experiments. The authors could elaborate on this point in the experiments section.
* Despite the broad range of experiments considered, the performance of the models is moderate. Therefore, it is unclear under which conditions the proposed models are effective (see the following weakness for more details).
* In Section 5.1, the Hamiltonian model works better than the gradient model on the dissipative phenomena. The authors could explain why the conservative model works better here, as it could be key to understanding "the role of (ir)reversibility."
* In addition, there are no experiments where the gradient model works best. With these results, it is unclear when the gradient model is effective. The authors could add experiments where the gradient model works well, e.g., an incomplete and totally dissipative system.
* In Section 5.3, the authors reported that the proposed model consumes a large amount of memory. In this case, computation time for prediction could also be reported, because there are typically tradeoffs between speed and accuracy.
Minor points:
* Pointers to "Table 3" (49, 124, and 159) might be "Table 1."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Does the method work well for PDEs, e.g., heat and Navier–Stokes equations?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad that you appreciate the idea of attention as a learnable inner product, and the versatility of our bracket-based architectures in capturing physical principles and performing a variety of tasks.
We believe the primary weakness mentioned in your review – that it is not obvious where our mathematical contributions are relative to related works – can be cleared up easily. First, the work on data-driven exterior calculus in [10] deals fundamentally with identifying elliptic PDEs for data-driven physics modeling, and has nothing to do with dynamical systems, conservation properties, or bracket formalisms. Moreover, [10] presents no analysis related to graph attention or existing GNN architectures in terms of learnable inner products. So, while it is true that both [10] and our paper rely on the graph exterior calculus, their goals and contributions are essentially disjoint. Similarly, the work in [28] has a fundamentally different focus: they consider one specific type of bracket-based dynamical system, the metriplectic system, with the goal of learning physics outside the graph setting. Moreover, that work makes no use of the graph calculus or learnable inner products, meaning that [28] also has no direct connection to graph attention or existing GNN architectures.
Responses to your remaining concerns are below. Please note truncation due to the character limit.
*The authors claim, "We use these architectures to…*
You are right that it is not easy to precisely characterize when one should use a reversible GNN architecture over an irreversible one. Our experiments show that both types of architectures can achieve competitive performance, though it seems that combining both conservative and dissipative dynamics is often the best choice. This suggests that the precise role of (ir)reversibility is context-dependent: in physics-based modeling there are obvious reasons to use a network which is invariant-preserving, but enforcing physics may not be as critical for graph classification. Regarding depth, please see the attached Tables 9-11 and the global reviewer response.
*Despite the broad range of experiments…*
This is fair criticism, as our architectures only achieve the best performance in some of the experiments discussed. However, note that the goal of this work was not to develop an architecture which achieves state-of-the-art performance: this would require significantly more hyperparameter tuning, as well as additional ad hoc modifications in terms of feature encoding, graph rewiring, etc. Instead, we aim to establish a unified framework for understanding graph networks, in the process bringing some clarity to the roles of reversibility and irreversibility in modern GNN architectures using benchmarks from both graph analytics and physics-based modeling. By connecting ideas such as gradient flows and graph attention to bracket-based dynamical systems, we have shown that architectures such as GAT and GRAND are not conservative or totally dissipative, and that this has non-obvious consequences on network performance.
*In Section 5.1, the Hamiltonian model works better…*
This is a good point. The key to understanding this is to note that, in order to achieve coordinate-independence, our brackets are wrapped in a feature encoder/decoder. This is necessary for learning bracket-based dynamical systems from arbitrary data: it is highly unlikely that a given set of data will match the form of our structure-preserving brackets in any arbitrary set of user-collected features, even if this data is drawn from a system which is physics-constrained. So, the autoencoder provides additional flexibility to our architectures, but comes with a challenge: the bracket structure we prescribe is enforced only in the latent space. Section 5.1 shows that it is possible for machine learned conservative dynamics to still produce an effective model for the dissipative double pendulum system when combined with a learnable encoding scheme.
In simple problems, this issue is circumvented by using data with specially designed features which match a known bracket structure, e.g., position and momentum for canonical Hamiltonian systems. This removes the need for an autoencoder, but also limits the ability of bracket-based architectures to discover unknown physics or apply to less physical problems such as graph classification. For this reason, we have chosen to focus on the more general case of arbitrary features.
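The encode-integrate-decode point above can be made concrete with a toy sketch (hand-picked maps of our own; nothing here comes from the trained networks): a skew-symmetric, Hamiltonian-type latent flow conserves its quadratic invariant exactly, while the same quantity measured through a nonlinear decoder is free to drift, since the bracket structure is enforced only in the latent space.

```python
import numpy as np

def encode(x):
    # Stand-in for a learned encoder (here a fixed linear map)
    return np.array([x[0] + x[1], x[0] - x[1]])

def decode(z):
    # Stand-in for a learned decoder; nonlinear, so latent structure
    # does not transfer to the observed coordinates
    return np.tanh(z)

def latent_step(z, dt=0.01):
    # Hamiltonian-style latent dynamics: an exact rotation, which
    # conserves the quadratic invariant |z|^2 at every step
    c, s = np.cos(dt), np.sin(dt)
    return np.array([c * z[0] - s * z[1], s * z[0] + c * z[1]])

x = np.array([0.8, 0.3])
z = encode(x)
E0 = z @ z                    # latent "energy"
obs0 = decode(z) @ decode(z)  # same quantity after decoding

for _ in range(500):
    z = latent_step(z)

E1 = z @ z                    # preserved exactly (up to roundoff)
obs1 = decode(z) @ decode(z)  # not preserved in the observed space
```

This is the flexibility-versus-guarantees tradeoff described in the reply: the autoencoder lets bracket dynamics fit arbitrary features, at the cost of moving the physical guarantee into the latent coordinates.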
*In addition, there are no experiments…*
You are right that our experiments do not show a case where the gradient system performs the best. On the other hand, the original depth study in Table 8 shows that the gradient system appears to be highly stable on node classification problems with respect to perturbations in the integration domain. See, e.g., Remark B.1.
*In Section 5.3, the authors reported…*
This is a good suggestion. We have included some computational timings on CORA in Table 11 as part of the new depth study. Note that the memory issues which were observed have been mitigated: they no longer appear in the new version of the code, though the metriplectic bracket is still prohibitively expensive on the large node classification problems despite its linear scaling in the graph size, due to its reliance on automatic differentiation during the forward pass. We anticipate that the metriplectic model will still have substantial impact in the context of data-driven physics models: in these settings feature dimensions are low. For evidence in this direction, please refer to the results in Tables 2 and 3.
*Does the method work well for PDEs…?*
Metriplectic dynamics have tremendous potential for multiphysics modeling. Reduced-order physics models obey a fluctuation-dissipation theorem, and the framework introduced here provides a means of learning non-equilibrium statistical mechanics. We are currently investigating this in other projects.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses, which addressed my concerns raised in the first review. Assuming these responses are incorporated in the revised manuscript, I have raised my scores (presentation: 2 -> 3, rating 4 -> 6). I generally appreciate the mathematical contribution and the broad range of experiments made by the work.
> So, the autoencoder provides additional flexibility to our architectures, but comes with a challenge: the bracket structure we prescribe is enforced only in the latent space.
This is an important point that could be clarified in the manuscript (e.g. limitation part) because most readers (including myself) expect this kind of model to respect physical laws in the original space in addition to the latent space through inductive biases.
---
Reply to Comment 1.1.1:
Comment: We are glad that we could assuage your concerns. Thanks for being open to modifying the initial score. We will make sure to explicitly clarify the mentioned limitation in future versions of the paper. | Summary: This paper proposes a bracket-based dynamical system framework to design structure-preserving graph neural networks (GNNs). Specifically, the authors leverage four formalisms: Hamiltonian, Gradient, Double-Bracket, and Metriplectic, that model physical systems with different completeness and dissipation characteristics, and implement them via discretizations from exterior calculus. The authors parameterize GNNs with a chosen dynamic by specifying the bracket matrices and learning the energy/entropy functions via graph attention mechanisms. Experiments on real-world graph inference tasks show that structure-preserving GNNs can perform better than those that are not strictly structure-preserving, and the best dynamics depend on the downstream task.
Strengths:
1. It is novel to use bracket-based parameterizations to design physics-inspired GNNs.
2. It is fruitful to analyze existing GNNs (e.g., GAT, GRAND) under the bracket-based dynamical framework.
Weaknesses: 1. Since the paper is intended for a general audience in graph machine learning, it could be written in a clearer manner, with more motivating examples and target applications. For starters, include a notation section.
2. One of the key motivations of the paper is to improve the performance of deep GNNs, yet the experiments in the main manuscript do not explicitly investigate the role of depth - are the GNNs used in these experiments actually deep? The more closely related experiments are described and discussed in Appendix B.3.2 as an ablation study, but the results (Table 8) show that for the node classification task, only Gradient-based GNNs perform well when the number of layers increases, whereas other dynamics exhibit a (significant) performance drop. If the paper intends to mitigate over-smoothing, the authors should provide more compelling experiments. On the other hand, if the paper simply intends to describe a physics-inspired framework to parameterize GNNs, then the authors should re-write the discussion of deep GNNs.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What are the implications for practitioners? In which applications would we expect imposing a certain bracket structure to outperform the unconstrained ones? Which structural requirement should one choose for different applications (e.g., add a column in Table 1 giving example applications)? The current set of experiments is a bit confusing to me. For example, experiment 2 considers the node classification task, which requires some sense of clustering / smoothing, but not over-smoothing. So do the findings in Table 4, showing that Double Bracket performs the best, align with our expectations? But in Table 8, the Double Bracket formalism cannot be computed for deeper architectures - is this merely a computational constraint? More explicit remarks and discussions would be great!
2. The role of the (learnable inner product) $A$: the authors connect this with the graph attention mechanism; but a priori, could one also choose other forms of $A$? Have the authors compared some simple baselines, say a fixed $A$ or even just setting $A$ to the identity? This could also help address the memory issue in the experiments (Sec 5.3), where the authors cannot compute $A$ (and thus $E, S$) due to the high dimensionality of the node features.
3. The authors discussed a higher-order attention mechanism based on higher-order cliques (Appendix 4). Will this be useful in practice, or is this a purely theoretical discussion? If the former, can the authors provide an experimental demonstration? If the latter, the authors might consider removing this from the primary contributions, since it does not seem to be the focus of the paper.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: This work does not have negative societal impact. The limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are happy to hear that you enjoyed our bracket-based parameterizations and that you appreciate the analysis of existing GNN architectures in terms of this framework. Your weakness related to the clarity of the exposition has been well received: we plan to include a section at the beginning of Appendix A which summarizes our notation, and we will also add some discussion of additional examples and applications in Appendices A.1 and A.2, along with references where the interested reader can find more information. Additionally, we have gone through the body of the paper again and made sure that the takeaways of our work are clear even if the analysis cannot be precisely followed. If you have other suggestions, we are happy to incorporate them.
To address your weakness related to the role of depth in our networks, we have modified the depth study as described in the global reviewer response, so that the role of depth in our architectures is more apparent. Particularly, Tables 9-11 provide additional information related to the role of depth in the trained CORA models, and it can be seen from Table 10 that the performance of all bracket-based architectures does not depend critically on the depth in terms of number of layers, maintaining itself very well when the depth is increased while the interval of integration is held constant.
To address your specific questions, please see below:
1) *What are the implications for practitioners? In which applications would we expect imposing a certain bracket structure to outperform the unconstrained ones? Which structural requirement should one choose for different applications (e.g., add a column in Table 1 giving example applications)? The current set of experiments is a bit confusing to me. For example, experiment 2 considers the node classification task, which requires some sense of clustering / smoothing, but not over-smoothing. So do the findings in Table 4, showing that Double Bracket performs the best, align with our expectations? But in Table 8, the Double Bracket formalism cannot be computed for deeper architectures - is this merely a computational constraint? More explicit remarks and discussions would be great!*
First, please note that the mentioned deficiency in the Double Bracket architecture was artificial and has been addressed since our submission (please see the global reviewer response for details). As to the impact of our work for practitioners, this is a complex question. We notice that, in almost every case, a combination of reversible and irreversible dynamics is necessary for optimal performance, which agrees with the intuition that network dynamics should simplify feature information without homogenizing it. As for the question of when to choose structure-preserving networks over unconstrained ones: we chose not to focus on this at the present time, as there are many papers in the literature (e.g., [6, 19-26]) which have established the benefits of structure-preservation in machine learning – including dynamical stability, improved performance with depth, and relative insensitivity to initialization – on several different classes of problems. Our work adds to this literature by showing that, perhaps counterintuitively, comparable performance can also be obtained on node classification experiments with architectures based on several different physical mechanisms.
2) *The role of the (learnable inner product) A: the authors connect this with the graph attention mechanism; but a priori, could one also choose other forms of A? Have the authors compared some simple baselines, say a fixed A or even just setting A to the identity? This could also help address the memory issue in the experiments (Sec 5.3), where the authors cannot compute A (and thus E, S) due to the high dimensionality of the node features.*
It is certainly possible to choose other forms for the learnable matrix A which do not connect to the standard idea of graph attention: the only fundamental requirement A should have is positive definiteness to induce a valid norm. For example, one could choose learnable diagonal matrices as in [10], or parameterize A in terms of learnable Cholesky factors. We have indeed experimented with setting A to identity, but find that including an attentional A improves performance, likely for the same reasons that it is advantageous to incorporate graph attention in standard GNNs. We have also experimented with setting the A matrices to update only at the start of every epoch, so that they are constant during the integration (this is referred to as “constant attention” in [48]). We find that this choice makes very little difference in our experiments, so we have not included a comparative study to this effect.
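The Cholesky option mentioned above can be sketched as follows (our own illustrative parameterization, not code from the paper): building A = LL^T + εI from an unconstrained parameter vector guarantees positive definiteness, and hence a valid learnable inner product, with no constraint needed during optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def learnable_inner_product(theta, d):
    """Build A = L L^T + eps*I from an unconstrained parameter vector theta.
    Positive definiteness is automatic, so <u, v>_A = u^T A v is a valid
    inner product for any theta."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta  # fill the lower triangle
    return L @ L.T + 1e-6 * np.eye(d)

d = 3
theta = rng.standard_normal(d * (d + 1) // 2)  # unconstrained parameters
A = learnable_inner_product(theta, d)

eigs = np.linalg.eigvalsh(A)  # all strictly positive by construction
u = rng.standard_normal(d)
norm_u = u @ A @ u            # induced squared norm, positive for u != 0
```

Diagonal or attention-based choices of A fit the same template; only the map from parameters to a positive-definite matrix changes.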
3) *The authors discussed a higher-order attention mechanism based on higher-order cliques (Appendix 4). Will this be useful in practice, or is this a purely theoretical discussion? If the former, can the authors provide an experimental demonstration? If the latter, the authors might consider removing this from the primary contributions, since it does not seem to be the focus of the paper.*
While the discussion in the paper is purely theoretical, we expect higher-order attention to be useful on datasets with, e.g., explicit edge features or relatively dense connectivity. We had difficulty identifying an appropriate edge feature dataset when we were designing the experiments for this work. However, since the time of submission, we have improved the implementation of our attentional inner products, including the 2-clique attention which appears in the edge equation of the Gradient bracket. These changes do appear to yield a modest performance increase, and this code will be released following the review process.
---
Rebuttal Comment 1.1:
Title: Clarification of Table 9 and 10: interpretation of depth
Comment: Thank you for your detailed response! I have two follow-up questions below:
Question:
1. The new Table 9 caption reads "Note that integration time can be considered as a surrogate for depth, since the temporal step-size of each network is fixed to 1". On the other hand, in the global response, the authors comment that "...but now progressively reduces the step-size of their 4th-order Runge-Kutta time integration from 1 to 1/64 ...". So Table 9 still uses a fixed step-size, while Table 10 uses a decreasing step-size?
2. If we adopt the interpretation of "integration time $\approx$ depth when step size is fixed to 1" in Table 9, then shouldn't the same analogy hold for Table 10, where the **"64" layers should effectively be much smaller**, since it should be a sum of $1 + 1/2 + 1/3 \ldots 1/64 \approx \ln 64 + \gamma \approx 4.7$ (i.e. the 64-th harmonic number)? If so, is the interpretation of "64" layer somewhat misleading, especially comparing to "standard GNNs, including GATs, are known to experience significant degradation after ~4 layers"?
---
Reply to Comment 1.1.1:
Comment: Thanks for your continued interest in our paper. Please see our responses below for answers to your questions:
1. *The new Table 9 caption reads "Note that integration time can be considered as a surrogate for depth, since the temporal step-size of each network is fixed to 1". On the other hand, in the global response, the authors comment that "...but now progressively reduces the step-size of their 4th-order Runge-Kutta time integration from 1 to 1/64 ...". So Table 9 still uses a fixed step-size, while Table 10 uses a decreasing step-size?*
Not exactly. The final time of integration for each bracket architecture is fixed according to the value in Table 9, and the step-size is constant for each entry in Table 10. This means that, for each column in Table 10, we choose step_size = final_time / num_layers, so that time integration is performed at variable degrees of temporal resolution to test stability with increasing depth.
2. *If we adopt the interpretation of "integration time $\approx$ depth when step size is fixed to 1" in Table 9, then shouldn't the same analogy hold for Table 10, where the "64" layers should effectively be much smaller, since it should be a sum of $1 + 1/2 + 1/3 \ldots 1/64 \approx \ln 64 + \gamma \approx 4.7$ (i.e. the 64-th harmonic number)? If so, is the interpretation of "64" layer somewhat misleading, especially comparing to "standard GNNs, including GATs, are known to experience significant degradation after ~4 layers"?*
Again, there is no step-size decrease during the inference procedure, and the meaning of depth is the same in both cases. During network training, the step-size is fixed to 1. Therefore, a final time of, e.g., 11, means that (in a forward Euler scheme) 11 network compositions are performed to reach the network’s predictions, which is why we say integration time is approximately depth in Table 9. This interpretation continues in Table 10, where the step-size is changed to create a variable number of network compositions for the same final integration time: from 2 up to 64. This shows that the performance of the bracket-based architectures does not degrade with increasing depth, in contrast to more standard GNNs.
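To make the bookkeeping above concrete, the following is a purely illustrative sketch (our own toy linear ODE, not the bracket architectures themselves) of how fixing the final integration time and shrinking the step-size deepens the computation without changing the domain of integration:

```python
import numpy as np

def integrate(x0, A, final_time, num_layers):
    """Forward Euler on dx/dt = A x; each step is one 'layer' (composition)."""
    dt = final_time / num_layers      # step_size = final_time / num_layers
    x = x0
    for _ in range(num_layers):       # num_layers network compositions
        x = x + dt * (A @ x)
    return x

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation generator (conservative)
x0 = np.array([1.0, 0.0])
exact = np.array([np.cos(4.0), np.sin(4.0)])

shallow = integrate(x0, A, final_time=4.0, num_layers=4)    # dt = 1
deep = integrate(x0, A, final_time=4.0, num_layers=64)      # dt = 1/16
# Finer temporal resolution (more 'layers') improves stability and accuracy
# over the same domain of integration.
print(np.linalg.norm(shallow - exact), np.linalg.norm(deep - exact))
```

In this sketch the 64-layer run is both deeper and more accurate than the 4-layer run, mirroring the point that depth per se need not degrade a stable time integrator.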
Let us know if this answers your questions. If not, we are happy to continue this discussion. | Rebuttal 1:
Rebuttal: Thank you all for your insightful comments and useful feedback on our work. We have heard the shared criticism that (1) parts of our manuscript could be difficult to understand for a general machine learning audience, and (2) that the role of depth in our architectures could be made clearer in the text. We are taking steps to address these concerns.
Unfortunately, some technical complexity is unavoidable due to the mathematical machinery employed. However, there are some additional things that we can do to ease the burden on our readers. We have gone through the main manuscript and ensured that the practical implications of our work are clear, even if the analysis is difficult to follow. Due to space constraints, it is difficult to include any more introductory details in the main body of the manuscript. However, note that we have already written introductions to the graph exterior calculus in Appendix A.1 and to bracket-based dynamical systems in Appendix A.2, with the goal of making our manuscript accessible to the general machine learning community. Following one reviewer’s suggestion, we have added a glossary of terms for ease of reference. We also plan to add more examples and common applications to these Appendices as per your suggestions.
Regarding the role of depth: we find that there is not an obvious relationship between depth and the performance of structure-preserving network architectures. On the other hand, our experiments show that the bracket-based architectures presented here **can maintain their performance up to (and potentially beyond) 64 layers**, as demonstrated clearly in the new Table 10, while standard GNNs, including GATs, are known to experience significant degradation after ~4 layers. In particular, many of the networks trained in Section 5 use the equivalent of 5-15 layers to reach their optimal predictions (see, e.g., the new Table 9). Note that these new Tables correspond to a redesigned depth experiment which clarifies the influence that depth has on our architectures. This new study, which has been designed to supplement Table 8 in Appendix B.3.2, takes our bracket-based networks trained on CORA with Planetoid splits, but now progressively reduces the step-size of their 4th-order Runge-Kutta time integration from 1 to 1/64 and tests their predictive accuracy on random splits. This experiment tests the effect of increasing the depth of a network while holding its optimized domain of integration fixed, and measures dynamical stability with increasing depth. In the numerical analysis community, it is well known that the temporal step-size must be sufficiently small to achieve a stable result (the so-called CFL conditions), a requirement which may not be respected by the experiment in Table 8, which fixes the temporal step-size to 1 and performs integration over progressively longer domains. The previous experiment (Table 8) was originally designed to match the methodology used in the GRAND paper, but we believe that Table 10 provides a more meaningful comparison.
In addition to these changes, we would like to reiterate that a primary contribution of our work is the principled reframing of graph neural networks in terms of bracket-based dynamical systems, which provides a rigorous way to both construct new network architectures as well as contextualize current ones. Besides simplifying the analysis of existing architectures such as GAT and GRAND, this framework has great potential for data-driven physics applications, since reducing the order of a physics model naturally introduces dissipation in the form of information loss even when the full model is conservative. Some evidence for the utility of the bracket formalism on physical problems is presented in Tables 2,3, and 6, and we are actively seeing success with this approach in other projects as well.
Finally, we would like to call your attention to the difference in results between the new depth study in Table 10 and Table 8 of the original submission. This is because, in addition to reflecting a slightly modified experiment, the new results were obtained following a complete refactor of the code. Our original code was written around the publicly available GRAND codebase, but we had concerns related to memory usage and scalability of the GRAND code for deep networks (see missing data in Tables 5 and 8). After reimplementing from scratch, we were able to remove the many hidden features of the GRAND code and avoid these issues. We are repeating all experiments with the new code but have completed preliminary tests that demonstrate the issues have been resolved, and this code will be made available following the review process.
Pdf: /pdf/b86e3e19f2427cfbaa8fa688681ca3cbd6b931c9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Reference-Based POMDPs | Accept (poster) | Summary: This paper presents a method to solve POMDPs by considering a reference policy. The solution of a reference-based POMDP is presented. Then the existence and uniqueness of the solution are proved. Then the author shows the connections between a reference-based POMDP and a standard POMDP. In the end, an online planning algorithm is developed.
Strengths: The POMDP problem is worth exploring, starting from the case that a reference policy is available is a good start.
The general idea is easy to follow.
Theorems are developed for reference-based POMDP. Although some steps rely on approximation, the logic chain of the analysis is complete.
Weaknesses: It is a bit hard to understand all the math details. It is unclear to me what the purpose of Theorem 3.1 is.
How to obtain the initial reference policy can be a major barrier to applying this method. For example, the way the authors mention of deriving the reference policy from a fully observable environment seems to leak the optimal solution, which I think is inappropriate.
Please present the assumptions explicitly.
The experiment is relatively simple and not convincing enough. It is unfair to compare the baseline with the method when the reference policy is derived from a fully observable environment.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does the reference policy always exist and available?
What's the purpose of theorem 3.1?
When can a standard POMDP not be approximated by a reference-based POMDP?
Is there any other baseline that came up in the past 13 years? The baseline seems to be out of date.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: A reference policy must be available.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review and for your questions.
Re “Weaknesses: purpose of theorem 3.1”. This is the main Theorem in this paper and is a straightforward extension of the LS-MDP of Todorov (see eq 31 and 32) to the POMDP. Its main point is to demonstrate that: 1) the Bellman equation for a Ref-POMDP can be represented as an expectation under the passive dynamics; 2) the optimal policy (by definition, a belief-to-belief transition probability) can be obtained explicitly after solving the Bellman equation. As noted in the paper, these facts can be leveraged to efficiently approximate the solution.
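As an aside for intuition, the fully observable precursor (Todorov's LS-MDP) can be sketched in a few lines; the chain problem, passive dynamics, and costs below are an invented toy example, not taken from the paper:

```python
import numpy as np

# Toy first-exit LS-MDP (the fully observable analogue of Theorem 3.1).
# Under the transformation z(s) = exp(v(s)), the Bellman equation becomes
# linear: z(s) = exp(-q(s)) * E_{s' ~ p(.|s)}[z(s')], i.e. an expectation
# under the passive (reference) dynamics p, and the optimal transition is
# obtained explicitly as u*(s'|s) proportional to p(s'|s) z(s').
p = np.array([
    [0.50, 0.50, 0.00, 0.0],
    [0.25, 0.25, 0.50, 0.0],
    [0.00, 0.25, 0.25, 0.5],
    [0.00, 0.00, 0.00, 1.0],   # state 3: absorbing goal
])
q = np.array([1.0, 1.0, 1.0, 0.0])   # per-step state costs (zero at goal)

z = np.ones(4)
for _ in range(500):                  # fixed-point iteration on the linear eq
    z = np.exp(-q) * (p @ z)
    z[3] = 1.0                        # boundary condition at the goal

u = p * z                             # unnormalised optimal policy
u /= u.sum(axis=1, keepdims=True)     # u*(s'|s) proportional to p(s'|s) z(s')
```

Note that no maximisation over actions appears anywhere: the value computation is a pure expectation under the passive dynamics, which is what makes Monte Carlo approximation attractive in the partially observable extension.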
Re ”Weaknesses, how to get the initial reference policy…” We do acknowledge that a judicious choice for the reference policy needs to be made, as the problem formulation, by definition, rewards respecting the reference policy (see eq 10). However, a solution to a Ref-POMDP will also ignore the reference policy if greater rewards are available by deviating from it (also eq 10). As such, there is a level of robustness against the reference policy baked into the formulation. A good way to think of the reference policy is as a “heuristic” or “initial guess” which can be jettisoned by the Ref-POMDP’s solution if it is advantageous to do so. Note also, in our environments, the reference policy generated by the A* policy is suboptimal in general due to partial observability and transition noise (e.g. starting at S1 in Navigation 1 and proceeding via the A* policy almost certainly leads to very negative results). As noted in the summary, we envisage RefSolver to be suitable for problems with dynamically changing POMDP models where one needs to deform the policy being followed without making abrupt changes to the policy. We also hope that for more general problems we can substantially revise the algorithm by iteratively improving the reference policy.
As a general point, note that most complex problems in optimisation require a heuristic or initial policy that can be iteratively improved, so we do not believe that the requirement of a reference policy is too restrictive. Generally, as we suspect is the case here, the proximity of the “guess” to the final solution also determines the speed of convergence.
Re "Please present the assumptions explicitly." Agreed that we can state assumptions and scope of application clearly upfront. This can be updated in the final version. The key assumptions of our formulation are that a stochastic belief-to-belief transition reference policy needs to be given and that the model is known.
Re ”Weaknesses, the experiment is relatively simple…” We acknowledge that the experiments are preliminary, but they demonstrate some key advantages of our approach. Both navigation problems have a long horizon, and we demonstrate in Navigation 2 that the modified environment can be successfully navigated even when the A* policy is generated off the initial map (i.e. A* policy is suboptimal). This is one application of Ref POMDPs which we think could be leveraged in applications where there is some uncertainty about the real environment. Note also that, while we used a fully observable policy as a reference here, this is not a requirement to employ the formulation (e.g. an offline POMDP policy from a similar environment could be used).
Re ”Q, “When a standard POMDP can not be approx….” See lines 230 – 234. Essentially the result says that the representation of the belief space needs to be simple. More explicitly, the size of the representation should be small with respect to the number of states.
Re "Q, is there any other baseline..." While POMCP and DESPOT are relatively old algorithms, they still represent the canonical benchmarks to beat for online POMDP solvers especially for discrete models. See reference [9] for a recent survey of the field and various extensions of these main algorithms.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. | Summary: The paper proposes to regularize online policy search in POMDP by providing a reference stochastic policy. The idea is illustrated on two synthetic grid domains.
Strengths: The paper attempts to systematically incorporate prior knowledge to improve online POMDP planning.
Weaknesses: 1) The proposed approach is not well placed in the literature. The approach is not new --- the reference policy is what is known as policy prior in Bayesian approach or policy regularizer in frequentist approach. There are quite some works on that. If the work provides a new formalism to these notions, this should be properly related to prior work.
2) The empirical evaluation is insufficient. The approach is evaluated on two synthetic grid worlds only, on which it outperformed some implementations of POMCP, and was not compared with DESPOT. It is easy to craft problem instances for any algorithm so that the algorithm looks the best; this is related to the 'no free lunch theorem' in computer science. For an informative evaluation, the algorithm should be compared on domains and settings on which POMCP and DESPOT were compared.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How is the proposed approach related to methods developed for stochastic control with latents? Seems to be the same setting.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are addressed adequately.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide feedback. See responses below.
On “The approach is not new – the reference policy is what is known …” We respectfully disagree.
First, we would like to clarify the main contributions of the paper by referring you to the general comments of our rebuttal above.
Second, we would like to note that although, at a very high level, the mechanism may seem like a policy prior in a Bayesian approach, the reformulation and its implications for solving POMDPs are, to the best of our knowledge, novel and distinct from existing approaches. We emphasise that the major contribution of the paper is that we propose a reformulation of POMDPs, such that the optimisation for solving POMDPs no longer requires exhaustive enumeration of the action space (this enumeration is required by almost any POMDP planning method today). Instead, optimisation can be performed analytically as a consequence of the relaxation which, under suitable transformations, leads to a linearisation of the Bellman equation (Theorem 3.1) and allows the Bellman equation to be computed as an expectation, which in turn can be more efficiently computed using Monte Carlo methods relative to other state-of-the-art POMDP solvers.
This contribution represents an extension of the theory of Linearly Solvable MDPs (see Todorov [18-20]) to partially observable domains. Note that Linearly Solvable MDPs are closely related to Max Ent RL as noted in Section 7 Summary of the paper, where some recent theoretical developments have demonstrated robustness to the model (i.e. dynamics and rewards) in fully observable domains (see [2]) but again we are not aware of a parallel in partially-observable domains as of yet. Of course, model uncertainty could be explored at a later stage and is important for real-life applications, but this paper's scope was simply to demonstrate both theoretically and experimentally the feasibility of Ref POMDPs in the context of partial observability where the model itself (i.e. transition, observation and rewards) are known.
Re Weakness 2. We agree that further experiments on higher-dimensional environments should be done to further validate this approach. However, it should be noted that we have not manufactured problems to yield good results. For example, if the A* policy were followed blindly in Navigation 1, this would consistently yield poor results. What both our experiments demonstrate is that RefSolver can improve on “good” (albeit suboptimal) reference policies, which is the main point. We did not compare RefSolver to DESPOT due to the long-horizon environments, as stated in the paper (see lines 264-267). Constructing a DESPOT for 4 actions with a max depth of 60 is very expensive because each action needs to be visited at each observation branch. This issue has also been highlighted in [5]. However, we do agree that it would be appropriate to run experiments in higher-dimensional environments with shorter planning horizons.
Re Questions: Please refer to our clarification on the main contribution in the general comment.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Based on the response, I still think a proper discussion of this method's relation to optimal stochastic control theory and practice is needed, as well as a much more thorough empirical evaluation. It seems that you agree with both points.
---
Reply to Comment 1.1.1:
Title: The paper's contribution is theoretical rather than empirical
Comment: On the point of a discussion of the literature, we point out again that the inspiration of this paper was the framework of the Linearly-Solvable MDP (see references to papers by Todorov [1, 18-20]). This Linearly-Solvable MDP work is summarised in Section 8.1 of the paper. We can of course expand this summary to cover the broader optimal stochastic control literature, and will do so in the final manuscript if the paper is accepted.
On the point of empirical evaluation, we note that the main focus of this work was theoretical rather than experimental, our focus being to establish a well-grounded formulation of Reference-Based POMDPs and a thorough analysis of the features of the problem. This, by itself, is a novel contribution. Related to this point, we also like to highlight three points:
1/ An inspiration for this work was Todorov's (2006) “Linearly-solvable Markov Decision Processes” (LS-MDPs), which was published in Adv. in NeurIPS, where the theory of LS-MDPs is developed in a fully-observable context. The contribution of Todorov’s NeurIPS paper was theoretical. At the time, empirical results, while encouraging and supportive of the theory, were preliminary and limitations remained to be clarified, but further investigations demonstrated that this was a fruitful approach.
2/ We stress again that the experiment we have run is not trivial due to the long planning horizon and the challenges of partial state observability and state-transition uncertainty. In fact, the environments were chosen quite deliberately to highlight these features and are difficult to navigate even for state-of-the-art solvers such as POMCP or DESPOT.
3/ We did not tune the environment in our experiments to favour our method, and in fact our environments are not well suited for the A* reference policy either.
The contribution of RefSolver is that it can improve on a policy, including a less suitable reference policy, by sampling judiciously using our analytical solution in Theorem 3.2. Crucially, as a well-justified form of Monte Carlo action sampling is used, RefSolver can avoid exhaustive enumeration over actions, which is a major limitation of current online methods for long-horizon POMDP problems. We hope that the reviewer can recognise this contribution and the challenges inherent in the formulation of the theory here.
Strengths: generally, I think the reference based pomdps are a good idea and can solve some real world problems;
Weaknesses: see below
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1 As for the definition of the policy, is that $\pi(a|b)$ or $\pi(a,o|b)$? In L143 it seems to be the latter. Then, is the belief-to-belief transition $U(a,o|b)$ equal to a policy? However, $p(o|a,b)$ seems to contain the environment observation probability, which is intrinsic to the environment. As for the reference policy, is it correct to say the reference-based POMDPs only require it to provide $p(a|b)$ as in Eq. 6? I think this part is somewhat confusing for me and I suggest further clarification.
2 According to Eq 10 and Eq 19, the policy chooses sup $U$ while the standard POMDP chooses max $a$. Can you further explain the differences between the two? Is it correct to view the reference-based POMDP as equivalent to a standard POMDP with a different reward function?
3 In Section 5 and the experiments, the reference policy is mainly restricted to one derived from full observability. I think the reference-based POMDP setting relies heavily on the performance of the reference policy, and in the real world the base policy might be much worse, since full observability might not be accessible. It would be interesting to see whether the method works well with different initial policies, especially when the initial policy is not that good. Can the method still benefit from it?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations:
N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and feedback.
Re Question 1. It should be $\pi(a, o | b)$ or equivalently $\pi(b’ | b)$. By definition of the problem, one chooses belief-to-belief distributions over next beliefs (or, equivalently, action-observation pairs). This is a relaxation of the concept of picking a single action (i.e. point mass) as is classical. A policy is a mapping from a given belief to a distribution over action-observation pairs.
At the most abstract level, the reference policy could be given in the top-down form (5) – e.g. a noisy version of an offline POMDP policy computed on a similar environment. However, in some cases it might be more appropriate to construct the reference policy from the ground up as in (6) and (7). We acknowledge that the notation $U^p$ in (5) is a little misleading, as it seems to assume that some reference $p$ is given a priori. This could be changed in the final paper, e.g. to $\bar{U}$ for a top-down policy versus $U^p$ for a policy generated from the ground up.
Re Question 2. We stress again that the two problem formulations are different but related. The main difference is that a Ref POMDP tries to balance trading off respecting a reference policy while also achieving higher rewards - see (10) while a standard POMDP does not. As noted in the general comments and the paper, this leads to computational advantages which can be exploited to deal with standard POMDPs.
The reason for the supremum is that a Ref POMDP requires choosing a distribution over action-observation pairs rather than a specific action. As the space of distributions is infinite-dimensional and objective functions are only assumed to be bounded, a supremum is taken. It turns out that the maximiser is actually attained (see Step 1 in Proof of Lemma 3.1) but this is not known a priori.
Moreover, the Bellman equations differ, so Ref POMDPs and classical POMDPs really are not the same thing. Compare (2) and (10). However, we demonstrate in Section 4 that a classical POMDP can be embedded inside a Ref POMDP provided the reachable belief space is not too complex. This means that one could approximate solutions to classical POMDPs by casting them as Ref POMDPs. Because a Ref POMDP is innately faster to solve than a standard POMDP, we could take advantage of this idea to find efficient solvers for some POMDPs. We are currently further investigating this idea.
Re Question 3. We do acknowledge that a judicious choice for the reference policy needs to be made, as the problem formulation, by definition, rewards respecting the reference policy (see eq 10). Of course, if the initial reference policy is very misleading, then this would not lead to useful solutions for the POMDP. However, a solution to a Ref-POMDP will also ignore the reference policy if greater rewards are available by deviating from it (also eq 10). As such, there is a level of robustness against the reference policy baked into the formulation. A good way to think of the reference policy is as a “heuristic” or “initial guess” which can be jettisoned by the Ref-POMDP’s solution if it is advantageous to do so. As noted in the summary, we envisage RefSolver to be suitable for problems with dynamically changing POMDP models where one needs to deform the policy being followed without making abrupt changes to the policy. We also hope that for more general problems we can substantially revise the algorithm by iteratively improving the reference policy.
As a general point, note that most complex problems in optimisation require a heuristic or initial policy that can be iteratively improved, so we do not believe that requiring a reference policy is too restrictive. Generally, as we suspect is the case here, the proximity of the “guess” to the final solution also determines the speed of convergence.
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarification. | Summary: The paper introduces and investigates the concept of reference-based POMDP, which addresses the challenge of finding an optimal policy in partially observable environments. The main objective is to simplify this problem by leveraging a baseline fully observed policy. The authors propose that solving a reference-based POMDP can be achieved by iteratively computing expectations using the provided reference (stochastic) policy. They further argue that this approach can be viewed as an extension of Linearly Solvable MDPs to scenarios involving partial observability.
Strengths: - Solving POMDPs is a challenging problem.
- The authors present a new problem formulation for POMDPs called Reference-based POMDPs that uses a reference baseline policy.
- The authors present solution approach to this new problem formulation.
Weaknesses: - It is hard to place the work in current state of the literature in this field because of a lack of discussion on related works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The paper presents a systematic way to solve POMDPs, albeit with some heavy assumptions on:
a. availability of a reference belief-to-belief transition probability - a distribution which is constructed using a fully observed policy.
b. knowledge of admissible reference transition probabilities, and the ability to compute iterative expectations under the given reference (stochastic) policy.
It would be nice to highlight these assumptions and exact contributions in the introduction.
2. Is the second paragraph in Introduction just an interpretation of the paper’s formulation/ main algorithm (Alg. 1 included in appendix 8.7)? If it is just an interpretation, the logical flow of paragraph seems conflicting because:
a. The paragraph starts by saying the problem of reward maximization in POMDPs is challenging. And then proposes to use constrained optimization as a solution - when in general constrained optimization problems are more complex than unconstrained ones.
b. Since the optimal value function of the Reference-Based POMDP can be computed by purely computing expectations under the reference transition probabilities recursively using Monte Carlo approximation, where exactly is constrained optimization coming into picture?
3. In paragraph 3 of the Introduction, the authors mention “the standard POMDP can be related to the Reference-Based POMDP via an embedding” - what does an embedding mean here? It would be nice to elaborate on this.
4. The paper does not include a section on related works. If it was because of space constraints, I strongly recommend adding this section in Appendix. From reading the paper, it is unclear to me what the state-of-the-art in solving POMDPs is - both algorithms and evaluation environments. For example, the paper does not comment about this recent work [1] on solving POMDPs using approximate information state.
5. For the evaluation environments considered, I believe one could also solve these problems using meta-RL (VariBAD [2]) or robust-RL. Can the authors comment on why is it better to cast these problems as POMDPs as opposed to meta-RL or robust-RL?
I would be happy to increase my score if the authors could clarify the queries above.
References:
[1] Subramanian, J., Sinha, A., Seraj, R., & Mahajan, A. (2022). Approximate information state for approximate planning and reinforcement learning in partially observed systems. *The Journal of Machine Learning Research*, *23*(1), 483-565.
[2] Zintgraf, L., Schulze, S., Lu, C., Feng, L., Igl, M., Shiarlis, K., ... & Whiteson, S. (2021). Varibad: Variational bayes-adaptive deep rl via meta-learning. *The Journal of Machine Learning Research*, *22*(1), 13198-13236.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A more detailed discussion on limitation of the solution approach to very high dimensional problems can be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and feedback.
RE Weaknesses. We will provide additional related work, in relation to Max Entropy RL and stochastic control on top of those already provided. The paper is inspired by an interesting series of papers by Todorov (see ref 18-20), namely Linearly Solvable MDPs, which is summarised in 8.1. These works focused on MDPs whereas we generalise the approach to POMDPs and provide some ways of dealing with the infinite-dimensionality of the belief space using an approximate solver for the Ref POMDP.
RE Question 1. We do not need a reference belief-to-belief transition probability that is constructed using a fully-observed policy. Our fully-observed policy was just one example of a bottom-up approach to construct such a policy. Rather, it is enough to have any initial reference policy which (stochastically) transitions from belief to belief and serves as an initial guess to the problem. The state transition of the reference policy need not be known; only the belief-to-belief level needs to be known. For instance, the reference policy could be sampled under some generative model. Some good guesses might be sampled free dynamics, human inputs, or an offline POMDP policy from a similar environment. Of course, as in any optimisation problem, if this initial guess is close to the optimal solution this is useful, but guesses can be iteratively improved upon.
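As an illustrative aside (a minimal sketch, not code from the paper): since the tuple (b, a, o) uniquely determines the next belief, any stochastic choice of (a, o) induces a belief-to-belief transition via the standard Bayes-filter update. The matrices below are made-up toy values.

```python
import numpy as np

# Standard POMDP belief update: b'(s') ∝ Obs(o|s') * sum_s T(s'|s,a) b(s).
# T[a][s, s'] = P(s'|s, a); Obs[s', o] = P(o|s'). Toy 2-state example.
def belief_update(b, a, o, T, Obs):
    b_next = Obs[:, o] * (T[a].T @ b)   # unnormalised posterior over s'
    return b_next / b_next.sum()

T = np.array([np.eye(2)])               # one "stay" action, 2 states
Obs = np.array([[0.9, 0.1],             # state 0 mostly emits observation 0
                [0.2, 0.8]])            # state 1 mostly emits observation 1
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, o=0, T=T, Obs=Obs)  # mass shifts toward state 0
```

A reference policy at the belief level is then just a distribution over which (a, o) pair drives this update from each belief.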
RE Question 2. I wonder if you could clarify what you mean by “constrained optimisation”. For a precise formulation of the modified problem see equation (10). We stress that this is an entirely different problem from the standard POMDP, where a "reference transgression term" is introduced by the KL term. It turns out that, under suitable transformations, each Bellman step can be optimised analytically (12), which is why the Bellman equation can be solved by taking pure expectations under the reference measure. Note that we do mention “constrained optimisation” in line 185, but this is in the context of recovering stochastic actions rather than with respect to the formulation.
RE Question 3. See Definition 4.1 and Section 4 for a complete explanation. This was inspired by Todorov’s idea which is briefly summarised in Section 8.1 (lines 403-408). Intuitively, we say that a standard POMDP can be embedded in a Reference-Based POMDP if we can choose parameters to the Ref-Based POMDP such that, for any action in the standard POMDP, there are points in the relaxed space of stochastic actions where the objective functions align. The idea is that if an embedding can be found, there is a hope that the solution to the Ref-Based POMDP should be close to that of the standard POMDP and so the more efficient machinery of the Ref-Based POMDP can be readily applied.
RE Question 4. Thank you for this feedback. We provided the background in the Appendix (8.1) and will expand the scattered related work into a section in the Appendix too. Briefly, regarding the state of the art: while POMCP and DESPOT are relatively old algorithms, they still represent the canonical benchmarks to beat for online POMDP solvers, especially for discrete models. See reference [9] for a recent survey of the field and various extensions of these main algorithms.
RE Question 5. For problems such as Navigation1, where we need a long planning horizon and must identify how uncertainty reduction helps solve the problem, general RL methods (including [2]) that do not represent uncertainty explicitly will face difficulties when confronted with long horizons. As for [1], it is actually very related to POMDPs. The information state is an established concept that can be viewed as a more general representation than the concept of beliefs in POMDPs [e.g., M. Hauskrecht. Value-Function Approximations for Partially Observable Markov Decision Processes. JAIR 2000]. In this sense, POMDPs can be viewed as providing a more compact representation of uncertainty compared to information states. In fact, [1] proposes to frame PORL based on approximate POMDP planning. Therefore, we foresee that a method that improves POMDP solving, or reformulates POMDPs into something that can be solved with fewer computational resources, could in turn be useful for PORL too. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their contributions and respond to some general points here.
First, we would like to emphasise again the key contributions of this paper. The major contribution of the paper is that we propose a reformulation of POMDPs, such that the optimisation for solving POMDPs no longer requires exhaustive enumeration of the action space (this enumeration is required by almost any POMDP planning method today). Instead, optimisation can be performed analytically as a consequence of a relaxation which, under suitable transformations, leads to a linearisation of the Bellman equation (Theorem 3.1) and allows the Bellman equation to be computed as an expectation, which in turn can be more efficiently computed using Monte Carlo methods relative to other state-of-the-art POMDP solvers.
We call this general reformulation of POMDPs a Reference-Based POMDP. The solution of a Reference-Based POMDP needs to respect some given reference policy (e.g. heuristic, initial guess) while also searching for high rewards. A key idea in the formulation is to relax the space of optimisation such that policies are no longer mappings from beliefs to deterministic actions but rather stochastic actions. This means that the Bellman value can be computed as an expectation under the reference stochastic policy, which leads naturally to Monte Carlo methods where the sample distribution is the reference policy itself. As the reference policy is known a priori, the above insight facilitates algorithms where actions can be sampled, as opposed to enumerated, at each step. This substantially reduces computation time as demonstrated in our preliminary algorithm. It is important to note that the mechanisms used in our algorithm are fundamentally different from those of the state-of-the-art solvers. Simulations resemble a depth-first search, so that a sparse tree is constructed by sampling actions rather than by the full enumeration that POMCP and DESPOT perform.
In summary, our main contribution is a theoretical justification that decent (albeit suboptimal) reference policies can lead to fast algorithms for solving general POMDP problems substantially better than existing state-of-the-art solvers. Note that the 2D problem we have presented, while simple, is not trivial even for state-of-the-art solvers like POMCP and DESPOT due to the long horizon and the enormous size of the search tree. We alleviate this problem by sampling under the reference policy in our algorithm.
Finally, to place this appropriately in the literature, we are not aware of any papers that have extended Linearly Solvable MDPs (see Todorov [18-20]) to partially observable domains. Note that Linearly Solvable MDPs are closely related to Max Ent RL [2], where some recent theoretical developments have demonstrated robustness to the model (i.e. dynamics and rewards) in fully observable domains (see [2]), but again this has no parallel in partially observable domains as of yet.
Re the Ethics Reviews, thank you for your insights and citations. We do see, for instance, applications of this kind of framework in the context of artificial agents that are guided by a human to make responsible decisions while also judiciously transgressing guidance in the case of significant human error. Such systems may, at a future stage, be implemented in equipment that may be extremely difficult to manage without automated assistance and, of course, should be thoroughly tested and understood before final commissioning. We agree it is appropriate to address your recommendations in the final version of the paper. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper is well written, clear and concise, with a well defined scope and goal.
This work extends prior work on Linearly Solvable MDPs (LS-MDPs) to the partially observable setting. LS-MDPs are alternate decision processes whose control paradigm is shifted (w.r.t. standard MDPs) such that the system defines passive state dynamics (as in Markov chains or HMMs), and control is performed by applying smooth modifications to these passive dynamics that directly alter the state dynamics. The theory of LS-MDPs already contains efficient solution methods for such decision processes. The formulation of LS-MDPs can then be exploited to solve MDPs by embedding the MDP as an LS-MDP, and choosing the action from the action-set that most closely resembles the optimal control associated with the LS-MDP.
In this work, the authors extend LS-MDPs to the partially observable setting in a fairly straightforward way: by applying the LS-MDPs framework to the belief-MDP associated with the POMDP at hand. LS-MDPs operate in terms of state-to-state transitions and distributions U(s'|s), which in the POMDP case become belief-to-belief transitions U(b'|b), or, equivalently, U(a, o|b) (since the tuple (b,a,o) uniquely determines the next belief b'). Aside from this notational simplification, the resulting method seems to reflect the basic LS-MDP very straightforwardly.
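To make the efficient LS-MDP solution method referenced above concrete, here is a toy sketch of the fully observed case (a hypothetical example assuming Todorov's first-exit formulation; all numbers are made up, not from the paper). The desirability z = exp(-v) satisfies the linear fixed point z = exp(-q) * (P @ z) under the passive dynamics P, so solving reduces to matrix-vector products instead of per-action maximisation.

```python
import numpy as np

# Passive dynamics P (row-stochastic) over 4 states; state 3 is the
# absorbing goal. q is the per-state cost (zero at the goal).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],   # absorbing goal state
])
q = np.array([1.0, 1.0, 1.0, 0.0])

z = np.ones(4)
for _ in range(100):
    z = np.exp(-q) * (P @ z)   # linear Bellman backup on desirability
    z[3] = 1.0                  # boundary condition at the goal

v = -np.log(z)                  # recovered value function
u_star = P * z                  # optimal controlled dynamics: u*(s'|s) ∝ P(s'|s) z(s')
u_star /= u_star.sum(axis=1, keepdims=True)
```

The paper's extension replaces states with beliefs, so the same linear structure holds at the belief-to-belief level.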
Possibly the main difficulty remains that some computations remain complex in belief-space, as the authors also note, w.r.t W^*. The authors therefore propose to employ sampling-based solvers to estimate W^*, resulting in RefSolver, their proposed planning algorithm for POMDPs.
- Minor notes:
-- Border cells are cut off from Figure 1; remake graphics showing them in full.
-- Probably as a consequence of the above, the initial belief (in blue) is not visible in Figs 1a and 1b.
Strengths: - theoretically sound
- shows significant improvements against POMCP baseline on reasonably stochastic and partially observable gridworld navigation tasks
Weaknesses: - the evaluation is run only on 2d navigation tasks where the actions chosen by the optimal fully observable agent strongly correlate with the optimal actions chosen by a partially observable agent (see question below). Given the role and influence that the optimal fully observable policy has on RefSolver, it's unclear how much the optimality of the passive policy influences the overall method.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - both the POMCP baseline and the proposed RefSolver use the fully observable policy obtained by A* in one way or another. POMCP uses it exclusively as a rollout policy, while RefSolver uses it in what seems a much more impactful manner by having it determine the passive dynamics of the reference-based POMDP formulation. What is the impact of this choice? Would the algorithm work well enough even if a worse policy were chosen? What about a random one?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The final paragraph already identifies a few limitations of the proposed method concerning the computational methods employed.
- My question to the authors might also highlight another limitation of the resulting method, depending on the authors' answer.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive review and feedback.
Just as a clarification, you mentioned in your review that the extension of LS-MDPs to partially observable domains is “very straightforward”. We do note however that there are some interpretive challenges of formulating “belief-to-belief” stochastic transitions in the context of POMDPs as it requires relaxing the action space to provide for stochastic actions. We believe that this insight is what makes the extension possible and useful and should be considered a contribution as well.
RE “Remake graphics in Figure 1”: Note that each grid is 60x60 cells and so some cells look very small. The cells are there but are different grades of blue depending on the intensity of the belief. E.g. in Navigation1 the initial belief is spread out uniformly over 12 cells adjacent to S1 and S2 while in Navigation2 all the mass is concentrated on 2 cells adjacent to S1 and S2. We can make a note to explain this in the caption and increase the image sizes in the final version. There is no initial belief on Fig1b as this map is only used to construct the fully observed A* policy and is not used for planning under partial observations.
RE Weaknesses: We recognise that the experimental results are preliminary at this early stage. We are currently working on how the algorithm could be scaled up to higher-dimensional navigation tasks as well as to motion planning tasks but the role of this paper was just to demonstrate that the approach is promising and to lay down the theoretical underpinnings. As mentioned in the general comments, we note again that even in this relatively simple environment, POMCP and DESPOT have difficulty managing the size of the search tree. RefSolver on the other hand relies on analytical properties to selectively sample the tree.
RE Questions: We do acknowledge that a judicious choice for the reference policy needs to be made, as the problem formulation, by definition, rewards respecting the reference policy (see eq 10). However, a solution to a Ref-POMDP will also ignore the reference policy if greater rewards are available by deviating from it (also eq 10). As such, there is a level of robustness against the reference policy baked into the formulation. We acknowledge the method should therefore be carefully applied to situations where it's apparent that a "good" reference policy exists (e.g. minimise energy) or can be justified (e.g. using an embedding as outlined in Sec 4). As noted in the summary, we envisage RefSolver to be suitable for problems with dynamically changing POMDP models where one needs to deform the policy being followed without making abrupt changes to the policy. We also hope that for more general problems we can substantially revise the algorithm by iteratively improving the reference policy.
With respect to POMCP using the A* policy, this is an advantage over simply using a standard uniform rollout especially in a long-horizon problem. With the A* policy there is a good chance that the rollout will realise the large rewards toward the tail end of the time horizon, whereas there is almost no chance using a uniform rollout given the long horizon.
As a general point, note that most complex problems in optimisation require a heuristic or initial policy that can be iteratively improved so we do not believe that the requirement of a reference policy is too restrictive. Generally, as we suspect it is the case here, the proximity of the “guess” to the final solution also determines the speed of convergence.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications; my opinion of the submission was already positive to begin with and remains so having checked out the other reviews and rebuttals. Best luck! | null | null | null | null | null | null |
Cal-DETR: Calibrated Detection Transformer | Accept (poster) | Summary: The paper proposes a method to improve the calibration performance of transformer-based object detectors. In their approach, they first present a way to quantify the uncertainty of each logit using the variance of the outputs of different transformer decoder layers. Then, with the motivation that a higher uncertainty generally implies a poor calibration, they downscale the logits with high uncertainty. Second, they introduce a mixup based approach that is applied to the queries that match with the objects. Specifically, they create a prototype as the mean of the logits of the positive queries and mix it up with the logits of the foreground objects. They incorporate their method on Deformable DETR and UP-DETR, and observed notable improvements across different datasets.
Strengths: - The proposed method is a training time approach, hence does not require an extra hold-out validation set.
- The improvement in the calibration performance is notable and outperforms existing approaches consistently in several datasets.
- While the reliability of the estimated uncertainties is not investigated, the proposed approach to do so is intuitive and does not introduce an additional burden on the detection architecture.
- The paper is written clearly and it is easy to follow.
Weaknesses: - I think the logit mixup strategy that the authors introduce is quite related to a cited work [36]. The main differences are that the authors apply mixup in the logit space (which is again a special case of manifold mixup), they use a fixed mixing coefficient instead of sampling it, and the method is applied to object detection. I haven't seen such a clarification in the paper (especially in related work or in Section 3.3.2). Therefore, I'd see the contribution of the authors in logit mixup as the extension of the regularized mixup [36] to object detection. I think this is still an important contribution, but I'd like to see an explicit discussion of the prior related work.
- Recently [A] showed that the number of detections that are input to the calibration measure has a clear effect on the calibration performance. Specifically, such measures can easily be misleading due to the maximum number of allowed detections, which is for example 100 for the COCO dataset. For example, a detector that outputs 100 detections has an advantage in achieving a lower calibration error compared to a detector that outputs fewer and does not fill the quota of 100. So, I wonder whether such a case exists here. Specifically, for example for Table 1, how many detections per image on average do Baseline D-DETR, temp scaling, TCD (as the closest counterpart) and Cal-DETR output (which are then used as the inputs to estimate Detection ECE)? Or is there any thresholding of the top-100 detections while estimating the D-ECE? If so, how? Related to this, a minor suggestion (that I am not considering in my rating as a weakness, as [A] came out at CVPR in June) would be to include a comparison in terms of Localisation-aware ECE in the way that [A] suggests, to avoid such a doubt in the final version if the paper is accepted.
- The authors claim that they propose a method to quantify the uncertainty for each logit. However, further insight into these uncertainties is not provided. For example, there is no evidence that the estimated uncertainties are reliable and can be used for different purposes.
[A] Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration, CVPR 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Specifically for example for Table 1, how many detections per image on average do Baseline D-DETR, temp scaling, TCD (as the closest counterpart) and Cal-DETR output (which are then used as the inputs to estimate Detection ECE)? Or is there any thresholding of the top-100 detections while estimating the DECE?
- Can you please confirm that you obtain the prototypical representation of queries for each iteration during training? I'd recommend making this explicit in the paper (L217-221). Also, I'd recommend making it more explicit that you compute the mean of the logits across all positive queries (L217-218).
- In contrast to several mixup strategies that sample the mixing coefficient (\alpha in Eq. (4)) from a distribution, why did you choose a single \alpha value?
- How do you obtain the labels after smoothing (`c_i` in L225)? I think this should be explicitly defined.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Briefly mentioned
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: I think the logit mixup strategy that the authors introduce is quite related to a cited work [36]… I think this is still an important contribution but I'd like to see an explicit discussion...**
**A1:** Thanks for finding the logit mixup strategy an important contribution. Our approach can be seen as an extension of regularized mixup [36] from classification to object detection with important differences in the studied setting, the space in which mixup is applied, and the way mixup-based regularization is performed. We explain the differences and relationships between both approaches below:
• `High-level motivation:` In [36], the goal is to improve the uncertainty estimation of mixup without sacrificing accuracy, especially under severe distribution shifts for the task of image classification. The goal of our logit mixing is to further improve the calibration performance of DETR for the task of object detection (Cal-DETR).
• `Technical Details:` The mixup in [36] operates in input pixel space, studies the effect of hyperparameter beta of the sampling distribution for mixing coefficient and introduces a regularizer. Besides improving accuracy, it also helps in improving the quality of predictive uncertainty estimates. Ours operates in logit space, leverages positive queries to build a prototypical representation, and then uses this to achieve logit mixing for a given query. We also employ a regularizer loss, but it leverages the proposed logit mixing output. Our logit mixing complements the logit mixing and further improves the detection calibration performance of DETR-based pipelines.
We will include the discussion with prior related work [36] in the final version.
---
**Q2: In contrast to several mixup strategies that sample the mixing coefficient (\alpha in Eq. (4)) from a distribution,...**
**A2:** We empirically find the $\beta$ hyperparameter of the beta distribution using the split of the Cityscapes dataset (L:298). For our reported results, we choose $\beta$=0.3 using this procedure.
| $\beta$ (sampling) | D-ECE | AP | mAP |
|-------------------|-------|------|------|
| 0.3 | 11.4 | 24.5 | 46.4 |
| 0.5 | 13.0 | 24.5 | 46.2 |
| 1.0 | 13.6 | 24.7 | 45.7 |
Several mixup strategies mix an input image with another randomly selected image, so the process involves two samples. In our approach, we perform a query instance/object-level mixup in the logit space: we first build a prototypical representation using all positive queries, which is then used to achieve a mixup for a given positive query. Owing to this difference from conventional mixup strategies, our experiment with conventional mixup leads to suboptimal results. The subpar performance is potentially because the logit mixing suppresses the object representation with a lower arbitrary $\alpha$ while simultaneously having a dominant prototypical representation from multiple positive queries. Note that the sampling strategy (Cal-DETR$_{sample}$) still works better than other calibration methods. We report the results on Cityscapes and Foggy Cityscapes in the table below:
| | In Domain | | | Out Domain | | |
|---------------------|-------|------|------|-------|------|------|
| Methods | D-ECE | AP | mAP | D-ECE | AP | mAP |
| Baseline$_{D-DETR}$ | 13.8 | 26.8 | 49.5 | 19.5 | 17.3 | 29.3 |
| TS | 12.6 | 26.8 | 49.5 | 14.6 | 17.3 | 29.3 |
| MDCA | 13.4 | 27.5 | 49.5 | 17.1 | 17.7 | 30.3 |
| MbLS | 12.1 | 27.3 | 49.7 | 20.0 | 17.1 | 29.1 |
| TCD | 11.9 | 28.3 | 50.8 | 14.3 | 17.6 | 30.3 |
| **Cal-DETR$_{sample}$** | 11.7 | 28.2 | 52.3 | 13.2 | 17.3 | 29.6 |
| **Cal-DETR** | 8.4 | 28.4 | 51.4 | 11.9 | 17.6 | 29.8 |
---
**Q3. Recently [A] showed that the number of detections that are…? Or is there any thresholding...?**
**A3:** We thank the reviewer for the useful insights about how the number of detections affects the calibration error. Note that in our implementation, we calibrate detections with prediction scores >=0.3. Specifically, we use a repository [22], where we set the threshold on scores prior to estimating calibration, which leads to a reliable and robust D-ECE measure in our case.
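For concreteness, here is a minimal illustrative sketch of a thresholded detection ECE. This is a simplification (it bins over confidence only, whereas the actual D-ECE from [22] also bins over box location); the 0.3 threshold matches the setting described above.

```python
import numpy as np

# Detections below the score threshold are discarded, the rest are binned
# by confidence, and the |precision - mean confidence| gap is averaged
# with weights proportional to bin occupancy.
def detection_ece(confidences, is_correct, n_bins=10, score_thresh=0.3):
    keep = confidences >= score_thresh
    conf = confidences[keep]
    correct = is_correct[keep].astype(float)
    if len(conf) == 0:
        return 0.0
    # map [score_thresh, 1] onto n_bins equal-width bins
    bins = np.minimum((conf - score_thresh) / (1.0 - score_thresh) * n_bins,
                      n_bins - 1).astype(int)
    ece = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            ece += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return ece
```

A perfectly calibrated bin (e.g. confidence 0.5 with 50% precision) contributes zero; fully confident but wrong detections contribute their whole gap.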
---
**Q4. The authors claim that they propose a method to quantify the uncertainty for each logit. However,…**
**A4:** [I] uses a layer ensemble, and some other works use output-space ensembles for uncertainty estimation (e.g., MC Dropout/Deep Ensembles). In a similar spirit to these works, we exploit the logit space from different decoder layers (containing multiple dropout layers) to estimate uncertainty. Results show that our method is more effective at improving model calibration in both in-domain and out-domain scenarios.
[I] Kushibar, Kaisar, et al. "Layer ensembles: A single-pass uncertainty estimation in deep learning for segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention, 2022.
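As an illustration of this idea (a simplified sketch, not the paper's exact modulation formula): per-logit uncertainty can be taken as the variance of class logits across decoder layers, and then used to down-scale the high-uncertainty logits of the final layer. The shapes and random inputs below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Q, C = 6, 5, 3                      # decoder layers, queries, classes
layer_logits = rng.normal(size=(D, Q, C))

u = layer_logits.var(axis=0)           # (Q, C) per-logit uncertainty
s = 1.0 - np.tanh(u)                   # in (0, 1]: high variance -> small scale
final_logits = layer_logits[-1]        # last decoder layer's output
modulated = final_logits * s           # uncertain logits are pulled toward 0
```

Down-scaling uncertain logits flattens the resulting softmax for those queries, which is the mechanism by which overconfident, high-variance predictions are tempered.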
---
**Q5: Specifically for example for Table 1, how many detections… ? Or is there any thresholding...**
**A5:** We explicitly use a threshold on scores, i.e., >=0.3, while estimating D-ECE.
---
**Q6: Can you please confirm that you obtain the prototypical representation query…**
**A6:** Yes, we obtain the prototypical representation of queries at each iteration during training. We will incorporate the reviewer's recommendations in the final version and add more details.
---
**Q7: How do you obtain the labels after smoothing…**
**A7:** The $\alpha$ value for a given positive query smooths its own label, while the remaining positive queries that contribute to forming the prototypical representation share the 1-$\alpha$ mass to smooth the corresponding labels uniformly. We will add this detail in the final version.
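A hypothetical sketch of the described scheme (the $\alpha$ value, logits, and labels below are made up for illustration): the prototype is the mean over positive-query logits, each positive logit is mixed with it using a fixed $\alpha$, and the one-hot targets are smoothed with matching coefficients.

```python
import numpy as np

alpha = 0.9                                # fixed mixing coefficient (illustrative)
pos_logits = np.array([[2.0, 0.1, 0.3],    # P positive queries x C classes
                       [0.2, 1.8, 0.4]])
labels = np.array([0, 1])                  # ground-truth class per positive query
P, C = pos_logits.shape

prototype = pos_logits.mean(axis=0)        # prototypical representation
mixed = alpha * pos_logits + (1 - alpha) * prototype

# alpha stays on the query's own class; the remaining 1 - alpha mass is
# shared uniformly over the classes of the queries forming the prototype
targets = np.zeros((P, C))
for i, c in enumerate(labels):
    targets[i, c] += alpha
    for cj in labels:
        targets[i, cj] += (1 - alpha) / P
```

Each target row still sums to one, so the smoothed labels remain valid distributions for the regularizer loss.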
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their time and efforts to address my concerns. I am mostly satisfied with the rebuttal. My only point is: I do not agree that measuring the calibration error using 0.3 as a fixed confidence threshold provides a fair comparison. I can understand that this is the approach used in some repositories. However,
- Reporting AP with top-100 detections (a potentially very low threshold) and D-ECE with 0.3 results in two different configurations of the object detector. As a single setting would be used to obtain the detections in a practical application, I think this (using two settings, one for AP and one for D-ECE) is not a realistic way of comparing models.
- A choice of 0.3 is also arbitrary. I am not sure if this threshold promotes or demotes some detectors.
Just to make sure, if possible, could you please at least compute the AP or LRP Errors of some models at a 0.3 confidence score as well? This would ensure that the detector is not only calibrated but also accurate in the same setting compared to other methods.
---
Reply to Comment 1.1.1:
Comment: We thank you for reading our rebuttal and providing insightful comments. We report AP on the top-100 detections to avoid any confusion about a performance drop when comparing with the standard settings. However, for the sake of completeness and as suggested, we report the AP/mAP on COCO (in-domain and out-domain) with the same threshold (0.3) used for D-ECE. We observe that, even using the higher threshold, our Cal-DETR delivers the best detection performance with no drop across both in-domain and out-domain scenarios.
| | In Domain | | Out Domain | |
|----------|-----------|------|------------|------|
| Methods | AP | mAP | AP | mAP |
| Baseline$_{D-DETR}$ | 39.9 | 56.2 | 20.6 | 30.0 |
| TCD | 40.0 | 56.3 | 20.7 | 30.1 |
| **Cal-DETR** | 40.6 | 57.9 | 21.1 | 31.3 | | Summary: This paper proposes a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR and UP-DETR, which consists of quantifying uncertainty, an uncertainty-guided logit modulation and a logit mixing approach. Results show the method improves the baselines in calibrating both in-domain and out-domain detections while maintaining or even improving the detection performance.
Strengths: I'm familiar with object detection and DETR, but not Model calibration. It seems that the model calibration is a valuable problem, and the proposed method does improve it without performance drop.
Weaknesses: It seems the performance of selected baselines is relatively low, e.g. the performance of Deformable-DETR is only 44.0 in COCO as mentioned in Table 1. Will the method still be effective when using a strong or even a sota model, like [DINO](https://github.com/IDEA-Research/DINO)?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Will the method still be effective when using a strong or even a sota model?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors may try more baselines and report results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Will the method still be effective when using a strong or even a sota model, like DINO?**
**A1**: As suggested by the reviewer, we provide the results on COCO (in-domain) and CorCOCO (out-domain) with the DINO model, as shown below. Our **Cal-DETR** improves the calibration performance of this strong baseline for both in-domain and out-domain settings.
| | In Domain | | | Out Domain | | |
|----------|-----------|------|------|------------|------|------|
| Methods | D-ECE | AP | mAP | D-ECE | AP | mAP |
| Baseline$_{DINO}$ | 15.5 | 49.0 | 66.6 | 13.2 | 27.3 | 39.2 |
| TS | 15.1 | 49.0 | 66.6 | 14.3 | 27.3 | 39.2 |
| MbLS | 19.1 | 48.6 | 66.0 | 15.9 | 26.9 | 38.4 |
| MDCA | 16.3 | 48.6 | 66.1 | 14.0 | 26.7 | 38.4 |
| TCD | 15.6 | 48.5 | 66.1 | 12.9 | 26.8 | 38.5 |
| **Cal-DETR** | 11.7 | 49.0 | 66.5 | 10.6 | 27.5 | 39.3 |
DINO Calibration: MS-COCO & CorCOCO. Results are reported in the Epoch-12 (4-scale) setting, and Cal-DETR shows improved calibration compared to the other methods, including the baseline.
---
Rebuttal Comment 1.1:
Title: Seems good
Comment: I do not find any problems now, and it seems like good work to me. Again, I must say that I'm not familiar with this field. | Summary: This paper focuses on performing calibration for DETR, particularly for Deformable-DETR and UP-DETR. The authors first propose an approach for quantifying uncertainty in DETRs, which is built from the variation in the output of decoder layers. Then they develop an uncertainty-guided logit modulation mechanism and a logit mixing approach as regularizers. Experiments on three in-domain and four out-domain scenarios show the effectiveness of the proposed method in calibration.
Strengths: The results of D-ECE are good in both in-domain and out-domain scenarios.
The experiments are extensive across various datasets and settings.
Weaknesses: 1. The proposed method has limited novelty. Although the authors claim to address calibration for object detection, the proposed method also operates on the logits and shares large similarities with methods for classification.
2. The way to quantify uncertainty in the DETR pipeline using the variation in the output of decoder layers is a straightforward idea, which may not be appropriate to claim as a contribution. Moreover, the relationship between the proposed uncertainty-guided logit modulation mechanism and the logit mixing approach is not clear. It is more like a combination of two techniques, lacking deep insights.
3. When introducing D-ECE in Sec3.2, it would be better to cite [22] to indicate that the used measures follow previous works.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. I am confused about what benefits we gain when the D-ECE is lower. It seems that the box AP does not show clear differences between Cal-DETR and the baseline model in either the in-domain or out-domain scenarios. If calibration cannot bring benefits in detecting objects, then what benefits do we get by performing it?
2. It seems that the proposed uncertainty-guided logit modulation and the logit mixing approach only perform on the output logits of the last decoder layer. Do the authors try to implement them into all decoder layers' output logits?
3. In Table 1 and Table 4, it seems that the D-ECE of the out-domain is lower than the D-ECE of the in-domain scenarios. Why? Does this phenomenon have a further explanation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the above sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The proposed method has limited novelty. Although the authors claim…**
**A1:** To the best of our knowledge, this is the first work that strives to improve the calibration performance of recent SOTA DETR-based detectors by proposing an uncertainty-guided logit modulation and a logit mixing approach. We also refer to Reviewer Gc9J, who acknowledges the significance of our proposed approach, and Reviewer co8a, who appreciates that there is no performance drop while improving model calibration. We would like to highlight that the relevant mixup strategies also target improved calibration, but mainly for the classification task [I, II], using image-level mixup in the input space. In our approach, we devise an uncertainty mechanism that makes minimal changes to the model architecture and is computationally efficient in producing uncertainty-guided logits, which is followed by a logit mixing built with prototypical representations of positive queries. Both of our methods are effective and complementary to each other in calibrating in-domain and out-domain predictions. Reviewer 95V6's comment does not specify which classification methods lead to the concern about limited novelty; however, if specific methods are provided, we will be happy to explain the specific technical differences.
[I] Zhang, Linjun, et al. "When and how mixup improves calibration." International Conference on Machine Learning. PMLR, 2022.
[II] Thulasidasan, Sunil, et al. "On mixup training: Improved calibration and predictive uncertainty for deep neural networks." Advances in Neural Information Processing Systems 32 (2019).
------------------------------------------
**Q2: The way to quantify uncertainty in the DETR pipeline using the variation in the output of decoder layers is a straightforward idea…**
**A2:** Although quantifying uncertainty using variation in the output of decoder layers is a straightforward idea, and simple to implement, note that:
`(a)` It is effective for scaling the logits during train time to calibrate DETR-based object detectors both for in-domain and out-domain scenarios, as shown in our experiments.
`(b)` Requires no modifications to the underlying architecture and is computationally efficient thereby revealing its plug-and-play nature (L:189-192).
`(c)` Finally, to our knowledge, it has not been explored in quantifying uncertainty and calibrating modern DETR-based object detection pipelines.
Akin to uncertainty-guided logit modulation, our logit mixing technique also operates in the logit space. It is designed to further enhance calibration performance by introducing logit mixing together with a regularizer loss, which allows model training using a non-zero-entropy supervisory signal.
------------------------------------------
**Q3: When introducing D-ECE in Sec3.2, it would be better to cite…**
**A3:** We will add the reference [22] in Sec 3.2.
------------------------------------------
**Q4: I am confused that what benefits can we get when the D-ECE is lower…**
**A4:** The goal of model calibration is to reduce model miscalibration, which is quantified using the D-ECE metric, while preserving the (object detection) performance. In other words, a calibrated model should predict a class confidence that matches the actual likelihood of its correctness. This is of great value as it not only improves the overall trust in model predictions but could also increase the adoption of detectors in several safety-critical applications (L:27-30).
------------------------------------------
**Q5: It seems that the proposed uncertainty-guided logit modulation and the logit mixing approach only perform on the output logits of the last decoder layer…**
**A5:** With minimal modification and keeping the DETR-based architecture intact, we implement the logit modulation and logit mixing on the last decoder layer only; DETR takes the logits of the last layer only to produce its output probability distribution. We provide results when uncertainty-guided logit modulation and logit mixing are applied to all decoder layers (**Cal-DETR$_{allDec}$**) on Cityscapes (in-domain) and Foggy Cityscapes (out-domain). The calibration performance of **Cal-DETR$_{allDec}$** is inferior to **Cal-DETR**, potentially because the object queries in earlier layers (which are refined iteratively) are less refined than those in the last layer.
| | In Domain | | | Out Domain | | |
|---------------------------|-------|------|------|-------|------|------|
| Methods | D-ECE | AP | mAP | D-ECE | AP | mAP |
| Baseline$_{D-DETR}$ | 13.8 | 26.8 | 49.5 | 19.5 | 17.3 | 29.3 |
| TS | 12.6 | 26.8 | 49.5 | 14.6 | 17.3 | 29.3 |
| MDCA | 13.4 | 27.5 | 49.5 | 17.1 | 17.7 | 30.3 |
| MbLS | 12.1 | 27.3 | 49.7 | 20.0 | 17.1 | 29.1 |
| TCD | 11.9 | 28.3 | 50.8 | 14.3 | 17.6 | 30.3 |
| **Cal-DETR$_{allDec}$** | 13.0 | 27.5 | 51.0 | 12.8 | 17.3 | 29.7 |
| **Cal-DETR** | 8.4 | 28.4 | 51.4 | 11.9 | 17.6 | 29.8 |
------------------------------------------
**Q6: In Table 1 and Table 4, it seems that the D-ECE of the out-domain…**
**A6:** Modern deep neural networks are usually overconfident because they are trained with a zero-entropy supervisory signal under an entropy minimization objective. We train the detection model on in-domain data only, with our modulation and logit mixing modules, which largely improves in-domain calibration. Since our modules act as regularizers for calibrating in-domain detections, they facilitate learning domain-generalizable features [I], which also helps in calibrating out-domain detections and improving out-domain detection performance.
[I] Wald, Yoav, et al. "On calibration and out-of-domain generalization." Advances in neural information processing systems 34 (2021): 2215-2227.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 95V6,
Thanks again for your effort in reviewing our paper and giving us a helpful chance to improve the paper's quality. We hope that our response can address your concerns.
Considering that the discussion period will end on August 21, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you in the following days. If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information?
Best regards,
Authors | Summary: This paper proposes a calibrated detection transformer model, which equips the DETR variants with an uncertainty-guided logit modulator and a mixup augmentation for the classification branch of the detector. Specifically, the authors first quantify the uncertainty with the variance among the predicted logits from different decoder layers. Then logits with more uncertainty are suppressed. For mixup, it is similar to the common mixup used in detectors except the mix is between the average of the batch and each sample. Through experiments on various benchmarks, the authors demonstrate the effectiveness of the proposed method.
Strengths: 1. This paper studies an important but underexplored problem, calibration in object detectors.
2. The proposed method is lightweight and does not require any network modifications, which makes it more likely to be generalized to more architecture and application scenarios.
3. The authors perform very comprehensive experiments and the results show the effectiveness of the proposed method.
Weaknesses: 1. It is unclear why the variation in the output of decoder layers can measure the uncertainty of the class prediction. Is it based on intuition? And is there any experimental support?
2. As one of the two major contributions, logit mixing for confidence calibration is not technically novel enough. First, applying mixup for the classification branch of the object detector is considered a well-known technique [1][2]. Second, in [1], the mixup is considered a trick instead of a technical contribution. Finally, the difference in mixup between [1] and this paper is subtle.
[1] Zhang, Zhi, et al. "Bag of freebies for training object detection neural networks."
[2] Ge, Zheng, et al. "Yolox: Exceeding yolo series in 2021."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It is unclear why the variation in the output of decoder layers can measure the uncertainty of the class prediction…**
**A1:** The decoder layers contain multiple dropout layers that make them stochastic in nature, which allows capturing variation in the logit space to estimate uncertainty. Logits are the raw output of the model, and using the logits of all decoder layers, uncertainty is computed to modulate/scale the logits of the final layer.
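As a purely illustrative sketch (not the paper's exact formulation), the mechanism described above can be approximated in a few lines: stack the class logits emitted by every decoder layer, take their variance across layers as an uncertainty estimate, and down-weight the final-layer logits where that variance is high. The normalization and the specific modulation function here are our assumptions, chosen only for illustration.

```python
import numpy as np

def modulate_logits(decoder_logits):
    """Illustrative uncertainty-guided logit modulation.

    decoder_logits: array of shape (L, Q, C) -- class logits from each of
    the L decoder layers, for Q object queries and C classes.
    Returns modulated final-layer logits of shape (Q, C).
    """
    # Uncertainty: variance of the logits across decoder layers.
    uncertainty = decoder_logits.var(axis=0)           # shape (Q, C)
    # Normalize to [0, 1] so it can act as a scaling factor (our choice).
    u = uncertainty / (uncertainty.max() + 1e-8)
    # Temper the final-layer logits where the layers disagree most.
    return decoder_logits[-1] * (1.0 - u)

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 100, 80))  # e.g. 6 layers, 100 queries, 80 classes
out = modulate_logits(logits)
assert out.shape == (100, 80)
```

The point of the sketch is only that no architectural change is needed: the per-layer logits are already available in any DETR-style decoder, so the uncertainty estimate comes essentially for free at training time.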
---
**Q2: As one of the two major contributions, logit mixing for confidence calibration is not technically novel enough…**
**A2:** `High-level intuition:` To the best of our knowledge, our work is the first that strives to improve the calibration performance of recent SOTA DETR-based pipelines by proposing an uncertainty-guided logit modulation and a logit mixing approach at training time, without any held-out dataset (Reviewer Gc9J appreciates the significance of our approach). In [1], the aim is to apply complex spatial transforms to introduce occlusions and spatial signal perturbations in an effort to improve model generalization. [2] applies the mixup from [1] and also leverages Mosaic as a data augmentation for object detection to enhance detection performance. On the contrary, our main goal is to improve the calibration of object detectors by developing a mixing approach in the logit space along with a regularizer loss that can be used with a detection-specific loss function.
`Formulation and Technical Details:` [1] operates in the input pixel space and performs a geometry-preserving alignment of mixed images for object detection, where image pixels are mixed up and object labels are merged into a new array; the rest of the formulation details are the same as in the original mixup. On the other hand, we operate in the logit space: we first leverage positive queries to build a prototypical representation and then use this to achieve logit mixing for a given query. Furthermore, we employ a regularizer loss that leverages the proposed logit mixing output. Overall, we fundamentally differ from [1] and [2] in terms of high-level intuition, formulation, and technical details.
[1] Zhang, Zhi, et al. "Bag of freebies for training object detection neural networks."
[2] Ge, Zheng, et al. "Yolox: Exceeding yolo series in 2021."
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. Most of my concerns are addressed and I decided to keep my initial rating (5 borderline accept). | Rebuttal 1:
Rebuttal: We thank the reviewers (wX9Z, 95V6, co8a, Gc9J) for the positive and thoughtful feedback, and we appreciate the comments to improve our work.
**Reviewer Gc9J:** "The proposed method does not require an extra hold-out validation set, as it is a training time approach. Improvement in the calibration performance is notable. The proposed approach is intuitive and does not introduce an additional burden. Paper is easy to follow."
**Reviewer wX9Z:** "An important and underexplored problem, calibration in object detectors. The proposed method is lightweight and does not require any network modifications."
**Reviewer co8a:** "Model calibration is a valuable problem, and the proposed method does improve it without a performance drop."
**Reviewer 95V6:** "Calibration results are good in both in-domain and out-domain scenarios. Extensive experiments."
We summarize the main points presented in our response and we kindly hope that we have addressed all the questions.
+ We address the questions and provide clarifications and details that have improved our work.
+ We include comparisons with another DETR-based pipeline as well to show the effectiveness of our method.
+ We include more variants of our proposed approach that bring further insight into our choice.
+ We will update the final version based on the recommendations of the reviewers (i.e. additional results with a DETR-based DINO model to show scalability, provide insights and clarifications, and more discussions with close works in literature). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Binary Classification with Confidence Difference | Accept (poster) | Summary: This paper studies binary classification problems. A new data type is introduced, where each observation consists of two input instances, together with the "confidence difference," defined as the difference of the conditional probabilities P(output=1|input) of the two input instances. New loss functions are introduced, and an excess risk bound that holds with high probability is derived. To extend the research, the author(s) also studied the case when the confidence difference is contaminated by noise, and the idea of rectifying the empirical loss function by dropping the negative summands. Empirical tests are done on benchmark datasets, where the proposed algorithm consistently outperforms the competitors.
Strengths: To my knowledge, the proposed data type (based on confidence difference) is new. The presented research is original. The high quality of the research is guaranteed by solid theoretical analysis, and the definition of a new class of classification data that captures more possible application scenarios. This paper is written clearly and is easy to follow. The broad application scenarios support the significance of the research.
Weaknesses: I have no major concerns about this work. Some minor issues are listed in the "Questions" section below, for the author(s) to clarify.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Table 2 has a structure similar to Table 1 in [11]. In particular, the methods Pcomp-*** have different values from those reported in Table 1 in [11]. Would it be possible for the author(s) to take a look at Table 1 in [11] and briefly explain the difference in experiment design or other conditions that leads to such a difference in the values in these two tables? Same question for Table 2 of this manuscript, and Table 3 in [11].
2. It seems that the density (or probability mass function?) tilde-p defined in Eq. (4) is not used elsewhere. Is that true? Is there any insight into this density?
3. Since x and x' are drawn from the same probability distribution, what is the relation between x_i and x_i'? Is the sample in Line 137 just n independent copies? Would the analysis still work if n=m^2 is the square of some integer, so that the x_i and x_i' are obtained by the full pairwise combination of some sample points u_1,...,u_m?
4. Is the definition (6) and (8) newly introduced, or cited from the literature? It would be better if the author(s) could specify the facts around these equations.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This is theoretical research, and I have not identified any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We are encouraged that you agree with the novelty and contributions of our paper. Below are the answers to your questions.
***
**Q1: Difference in experimental design between this paper and [11].**
**A1:** In [11], they only used examples from $\{(+1, +1),(+1, -1),(-1, -1)\}$, while ours also used examples from $(-1, +1)$. Because the training data is different, the experimental results are different. In order to make a fair comparison, we adapted the Pcomp methods to our problem setting. In particular, the input to Pcomp methods consists of data pairs where the first example is more likely to be positive. Therefore, when the confidence difference was greater than zero, we used the data pair directly; if the confidence difference was less than zero, we swapped the two examples. We will include the details of the experimental settings in the final version of our paper.
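This adaptation can be sketched in a few lines (the function name and pair representation are ours, not from the paper): keep a pair as-is when its confidence difference is non-negative, and swap the two examples otherwise, so the first element is always the one more likely to be positive, as Pcomp methods expect under the convention described above.

```python
def adapt_pairs_for_pcomp(pairs):
    """Reorder unlabeled data pairs for Pcomp-style methods.

    pairs: list of ((x, x_prime), conf_diff) tuples, where conf_diff may be
    negative. Returns a list of (x, x_prime) pairs ordered so that, by the
    sign convention above, the first example is more likely to be positive.
    """
    adapted = []
    for (x, x_prime), diff in pairs:
        if diff >= 0:
            adapted.append((x, x_prime))       # already in expected order
        else:
            adapted.append((x_prime, x))       # swap when the difference is negative
    return adapted

pairs = [(("a", "b"), 0.3), (("c", "d"), -0.5)]
print(adapt_pairs_for_pcomp(pairs))  # [('a', 'b'), ('d', 'c')]
```

Note that this conversion discards the magnitude of the confidence difference and keeps only its sign, which is exactly the extra information ConfDiff exploits over Pcomp.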
***
**Q2: The density in Eq.(4) is not used and what is the insight?**
**A2:** Yes, we only used Eq.(4) to elaborate the data distribution assumption of the Pcomp methods. As discussed in the Introduction section, Pcomp methods only consider examples from $\{(+1, +1),(+1, -1),(-1, -1)\}$. To handle training data with labels $(-1, +1)$, they have to discard them or reverse them to $(+1, -1)$. On the contrary, our data distribution assumption is more general, because we explicitly consider the examples from $(-1, +1)$ in the data distribution assumption.
***
**Q3: The relationship between $(x, x')$ and whether the analysis will still work for data pairs of size $m^2$ obtained from combining unlabeled data of size $m$.**
**A3:** First, we assume that $x$ and $x'$ in an unlabeled data pair are i.i.d. and both sampled from $p(x)$. Therefore, based on our assumption, any two unlabeled data pairs are also *mutually independent*. If we use all pairwise combinations of an unlabeled data set, some of the resulting data pairs may not be i.i.d. For example, we may have two unlabeled data pairs $(x_1, x_2)$ and $(x_1, x_3)$ both in the training set. Because they contain the same example $x_1$, they are *dependent*. Therefore, our theoretical analysis may not apply to this setting, and we would need to investigate new theory that handles non-i.i.d. training data. Thank you for the insightful question; we will include this discussion in our paper.
***
**Q4: Are Equations (6) and (8) newly introduced and what are the facts about them?**
**A4:** The two equations are new. Eq.(6) is the definition of confidence difference and Eq.(8) is the empirical approximation of Eq.(7), also known to be an *unbiased risk estimator*. We introduced Eq.(6) to specify the confidence difference between two unlabeled examples, which can also measure how confident the pairwise comparison is. Eq.(8) is a natural result of using training data to estimate the expected value. In Theorem 3, we discussed the estimation error bound by using the unbiased risk estimator. We will add this discussion to our paper.
---
Rebuttal Comment 1.1:
Title: notation not used elsewhere may be removed
Comment: We appreciate the comprehensive information.
For Q2, we think that a notation not used elsewhere may be removed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We will remove the unused notation in the final version of our paper. | Summary: The paper discusses weak supervised learning for binary classification, specially in the setting where labeled data is not available. Based on pairwise-comparison (Pcomp) confidence, where we are given data pair $(x_1, x_2)$ and binary label {+1, -1} of $x_1$ being more or less probable of being positive compared to $x_2$, this paper suggests the novel problem setting of ConfDiff, where we are given $((x_1, x_2), P(y = 1 | x_2) - P(y = 1 | x_1))$. The paper then proceeds to provide risk estimators and error bounds for this new problem setting. Experimental results on curated benchmarks and one real benchmark shows the usefulness of the paper’s theoretical results and motivated algorithm.
Strengths: 1. The paper is well written and organized.
2. The theoretical results in the paper are non-trivial and well done.
3. Good experiments and ablation studies.
Weaknesses: My main complaint with the paper is that the problem setting does not feel well justified. Since the main goal of the paper is to suggest a novel problem setting, this is a big weakness.
For example, in line 39, the paper makes the claim that $P(y = 1 | x_2) - P(y = 1 | x_1)$ values might be less biased than individual $P(y = 1 | x_2)$ and $P(y = 1 | x_1)$ estimates by human labelers. I do not think this claim is well justified.
Similarly, line 51 claims that section 4 will give a real world case study of this problem. While section 4 does present a real world problem where the paper’s algorithm works better, it is not clear that this is due to the problem setup being more useful/ideal. More study would need to establish that this problem setup has indeed lower bias then the case of pointwise confidence value, possibly with collecting a large scale dataset with human annotators.
Finally, if someone is unconvinced of the problem setting, the paper seems **tautological**, i.e., it defines a custom problem where a modified risk estimator works (which, while the proof is involved, is not hard to see).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Based on the weaknesses mentioned, are there prior works that suggest this problem setup is more meaningful?
2. Line 233, “Since the data sets were originally designed for multi-class classification, we manually partitioned them into binary classes.”, how is this manual partitioning done for each dataset? Referencing to the appendix would be fine here.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we are very grateful for your time and effort in reviewing this paper. Below are the responses to your questions and comments.
***
**Q1: The claim that the confidence difference has a lower bias is not well justified.**
**A1:** Thank you for your comment. As discussed in the Introduction section, the confidence difference can be less biased in *some applications*. Take the problem of disease risk estimation as an example. Different doctors with different backgrounds and experience will give different confidence scores given a person's attributes. Some are confident and tend to assign high confidence, while others are relatively conservative and tend to assign low confidence. As a result, considering them all together in the training set may have negative effects. In such cases, the confidence difference may be a more objective statistic, since it describes the relative difference between two examples for a given doctor. We agree with you that using more large real-world data sets would better motivate the setting. We will consider collecting more large-scale data sets using crowdsourcing platforms (e.g., MTurk) as future work.
***
**Q2: Are the advantages in the experimental results of the recommender system task the result of better settings? Is there any prior work suggesting that the setting is more meaningful?**
**A2:** In this experiment, our approach performs better in terms of HR and achieves comparable performance in terms of NDCG compared with the Pconf method. Since our approach uses less supervision information, the Pconf method should have performed better than ours. Therefore, we suspect that this is because Pconf is affected by biased pointwise confidence values, while ConfDiff can deal with this problem in a better way.
As positive-confidence classification is a relatively new problem, there is no previous work on this setting. Therefore, the setting is new to the literature. We agree with you that more case studies will further support our setting. We will dedicate ourselves to collecting more large-scale real-world data sets to further validate the superiority of our setting.
***
**Q3: Partition of data sets.**
**A3:** We have listed the details of the data set partition in Appendix I. We will add the reference to our paper.
---
Rebuttal Comment 1.1:
Comment: Follow-up question: **The claim that the confidence difference has a lower bias is not well justified.**
Based on the authors' answer, do all the data points $((x, x'), c(x, x'))$ get collected from the same expert/confidence estimation system? The medical use-case example the authors provided works if the confidence estimate of two doctors, let them be $c_1(x)$ and $c_2(x)$, vary by a constant, i.e, there exists $A$ such that $c_1(x) = c_2(x) + A$ for all $x$. Then for any $x, x'$, we have $c_1(x) - c_1(x') = c_2(x) - c_2(x')$.
However, it is quite possible that this is not the case. In the extreme case, the confidence differences estimated by two doctors on the very same data pair can differ. In the less extreme case, similar data point pairs can get different confidence difference estimates from two doctors. It is quite likely that any such large-scale dataset would contain annotations by multiple experts, thereby bringing in the same bias.
Would the authors have any intuitive or theoretical explanation for why confidence difference would have a lower bias in this case?
---
Reply to Comment 1.1.1:
Title: Response to the Further Question
Comment: First of all, we would like to express our sincere gratitude for your valuable comments and insights. As your question points out, both the pointwise confidence and the pairwise confidence difference can be biased in many cases. We therefore conducted further theoretical analysis and found that our setting would have a lower bias.
We assume that the output of the confidence estimation system is given by a deterministic function. Let $x$ denote the ground-truth pointwise confidence value and $y$ denote the output confidence value given by the estimation system.
For a pair of examples, the ground-truth confidence values are $(x_1, x_2)$. The outputs of the confidence estimation system are $(y_1, y_2)$. If we use the pointwise confidence value directly, the total bias is $$E_{point}=|y_1-x_1|+|y_2-x_2|.$$
If we consider the confidence difference, then the total bias is
$$E_{pair}=|(y_1-y_2)-(x_1-x_2)|=|(y_1-x_1)-(y_2-x_2)|.$$
Based on the absolute value inequality that $|A-B|\leq |A|+|B|$, we have $E_{pair} \leq E_{point}$. The implication is that the pairwise confidence difference can have a lower bias than the pointwise confidence for a given estimation system. The conclusion can also be extended to the existence of multiple estimation systems (doctors). Therefore, the confidence difference would have a lower bias in this case.
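The inequality $E_{pair}\leq E_{point}$ follows directly from the triangle inequality, and it is also easy to confirm numerically. The following quick check (ours, purely for illustration) verifies it on random ground-truth confidences $(x_1, x_2)$ and estimated confidences $(y_1, y_2)$:

```python
import random

random.seed(0)
for _ in range(10_000):
    x1, x2 = random.random(), random.random()   # ground-truth confidences
    y1, y2 = random.random(), random.random()   # estimated confidences
    e_point = abs(y1 - x1) + abs(y2 - x2)
    e_pair = abs((y1 - y2) - (x1 - x2))
    # E_pair = |(y1-x1)-(y2-x2)| <= |y1-x1| + |y2-x2| = E_point
    assert e_pair <= e_point + 1e-12
print("E_pair <= E_point held in all trials")
```

Note the inequality is tight exactly when the two estimation errors $y_1-x_1$ and $y_2-x_2$ have opposite signs; when they share the same sign (e.g., a systematically optimistic doctor), the errors cancel in the difference and $E_{pair}$ is strictly smaller, which is the intuition behind the claim.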
I hope this rebuttal addresses your concerns. If you have any further concerns or questions, please do not hesitate to raise them. Thank you again for your time and effort in reviewing our submission. | Summary: - This work proposes a confidence-label-based training approach called _ConfDiff_ for binary classification models as an improvement over traditional hard labels. Specifically, the authors argue that obtaining hard labels for the traditional supervised learning paradigm, or confidence metrics around positive labels for the weakly supervised learning paradigm, is an expensive and at times infeasible process. Therefore, they propose training such classification models on pairwise comparisons between the confidence in positive labeling of training data examples.
- Authors prove error bounds, and demonstrate the robustness of the proposed approach theoretically.
- Authors evaluate the proposed approach on benchmark classification datasets and a real world recommender system.
Strengths: **Motivation**
Authors make a strong technical as well as intuitive case in support of their approach, _ConfDiff_, in comparison to prevalent approaches in the related work such as _Pconf_. Similar to the traditional supervised learning paradigm, where approaches such as _soft labeling_, _pairwise loss_, etc. have provided benefits over pointwise models utilizing hard labels in the form of increased robustness, it is intuitive that pairwise confidence differences should outperform methods like _Pconf_, as demonstrated by the paper.
**Technical Presentation**
The work is very well presented and easy to digest. The Preliminaries section sets up the reader very well equipped to compare/contrast the proposed approach with existing classification approaches. Specifically, the authors do a great job at using formal notations with perfect granularity required to compare different approaches, without being overwhelming to the reader.
**Experiments**
Authors evaluate their proposed approach on multiple benchmark datasets as well one real world recommender system dataset. All the details as well as source code required for reproducibility are available in the Appendix section of the work. The observed evaluation metrics clearly favors the proposed approach against the baseline comparisons. Authors further analyze the sample efficiency and robustness of the proposed approach and provide empirical evidence supporting the theoretical analysis in the prior sections.
Weaknesses: **Choice of comparative baselines**
While the authors clearly position their work as an improvement over existing approaches in the weakly supervised learning domain, they make a case throughout the introductory as well as technical sections regarding benefits over traditional hard-label supervised learning. Given that _ConfDiff_ was evaluated using backbones such as ResNet-34, I believe it would help the work to include in Table 1 the metrics corresponding to vanilla ResNet-34 or logistic regression.
**Generalizability of proposed approach**
Theoretical contributions notwithstanding, it is not entirely clear how generalizable the proposed approach is in terms of obtaining confidence difference values. Authors present the real-world evaluation on the KuaiRec dataset, where the _watching ratio_, i.e., the ratio of the watching time of a short video to its total length, has an intuitive relationship with $p(y=1|x)$.
> We clipped the watching ratio above 2 and regarded the examples with watching ratio greater than 2 as positive examples. Following the experimental protocol of [8], we regarded the latest positive example for each user as the positive testing data, and sampled 49 negative testing data to form the testing set for each user.
As I understand, based on the above text in the Appendix, even in this example the authors had to resort to assigning hard labels to training data using a heuristic-based approach. I am not sure what a similarly intuitive approach would be to compute $c(x, x^{\prime})$ in a more common user-item recommender system, where purchase/click labels are by nature hard labels and the priors are heavily biased by the fact that only a limited number of recommendations can be shown to a user.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could you address the weakness/include discussions about the same in the manuscript?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In the Appendix section, authors have addressed limitations of the approach being currently limited to binary classification and potential negative societal impact in terms of loss of annotation work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we are very grateful for your time and effort in reviewing this submission. We are encouraged that you agree with the contributions of our paper. Below are the responses to your comments.
***
**Q1: Add experimental results from the vanilla ResNet.**
**A1:** We agree with you that adding experimental results of logistic regression (LR) will strengthen the paper. We will add the following experimental results to Table 1 and Table 2.
| Class Prior | Method | MNIST | Kuzushiji | Fashion | CIFAR-10 |
| ------------- | ---------- | ----------- | ----------- | ----------- | ----------- |
| $\pi_{+}=0.2$ | Supervised | 0.990±0.000 | 0.939±0.001 | 0.979±0.001 | 0.894±0.003 |
| $\pi_{+}=0.5$ | Supervised | 0.986±0.000 | 0.929±0.002 | 0.976±0.001 | 0.871±0.003 |
| $\pi_{+}=0.8$ | Supervised | 0.991±0.001 | 0.942±0.003 | 0.979±0.000 | 0.897±0.002 |
| Class Prior | Method | Optdigits | USPS | Pendigits | Letter |
| ------------- | ---------- | ----------- | ----------- | ----------- | ----------- |
| $\pi_{+}=0.2$ | Supervised | 0.990±0.002 | 0.984±0.002 | 0.997±0.001 | 0.978±0.003 |
| $\pi_{+}=0.5$ | Supervised | 0.988±0.003 | 0.980±0.003 | 0.997±0.001 | 0.975±0.001 |
| $\pi_{+}=0.8$ | Supervised | 0.987±0.003 | 0.983±0.002 | 0.997±0.001 | 0.976±0.004 |
The performance of LR is comparable to the Pconf method, which can serve as a reference for performance comparisons.
***
**Q2: The application to common recommender system tasks even with biased class priors.**
**A2:** Currently, our approaches require the supervision information of *confidence labels*. Therefore, if there are only 0-1 interactions without any other side information, it may be difficult to directly apply our methods. We will include this limitation in our paper. However, we can still obtain such confidence labels by using an auxiliary probabilistic classifier that outputs the probability of being positive, which is often used for many recommender system problems, such as click-through rate (CTR) prediction (Zhou et al., 2018). In addition, for many real-world recommender system applications, such as news recommendation, short video recommendation, and movie recommendation, we can often collect real-valued *confidence labels*, such as watching ratios and user ratings. We have also discussed the influence of inaccurate confidence values on our approach in Section 3.3. Therefore, our method holds promise for application to more recommender system problems.
We agree with you that the class prior may be biased from the training data to the testing data. Such a problem is the well-known *label shift* problem in the distribution shift literature (Lipton et al., 2018). In this paper, we only consider the case where the ordinarily labelled training data and testing data are sampled from the same distribution. We will consider the development of approaches that address label shift as future work. Furthermore, if the class prior is the same for training and testing but the estimated class prior is biased, we have discussed its influence with theoretical analysis in Section 3.3. It is shown that a more accurate estimation of the class prior will reduce the upper bound of the estimation error and facilitate model training.
***
Reference:
- Zhou et al., Deep interest network for click-through rate prediction, KDD 2018.
- Lipton et al., Detecting and correcting for label shift with black box predictors, ICML 2018.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing some of my concerns mentioned in the review and their acknowledgement of a potential limitation of the proposed approach.
Considering the additional information, my overall rating of the work remains unchanged. | Summary: The paper proposes to solve a classification problem via a weakly supervised learning formulation called confidence-difference (ConfDiff)
classification, where unlabeled data pairs are equipped with a confidence difference specifying the difference in their probabilities of being positive. The authors further develop a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate. They provide additional analysis for noisy labels. The authors empirically show the effectiveness of their suggested technique on several classification benchmarks such as the MNIST, CIFAR-10 and Fashion datasets (Table 1), where they show that their approach performs better compared to Pcomp methods. They further show the effectiveness of their method in leveraging the supervision information of the confidence difference on a real-world recommender system dataset.
Strengths: The paper is relatively clear and presents an interesting take on classification. Instead of an exact label, the paper proposes to use relative confidence and provides theory to establish performance bounds. The paper further validates its effectiveness on a real-world recommender dataset, surpassing the Pcomp teacher. The authors also investigate what happens when the volume of labeled data is reduced and show that the suggested approach achieves superior or comparable performance even when only 10% of the training data are used. This elucidates that leveraging confidence differences may be more effective than increasing the number of training examples.
Weaknesses: My main concern is that there is no comparison to performance based on regular labels. Such a comparison could at least show the gap in cases where regular classification performs better (and labels can be provided). Also, the authors mention a medical application in the motivation; however, they do not provide experiments with similar data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please add performance of regular classification for your experiments if such is possible to benchmark.
Are there any other methods applicable for the Recommendation systems you can compare against? Providing more comparisons will help to strengthen your experimental section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I feel there are not many applications that will have relative comparisons between two samples. The two applications provided in the motivation are well suited for such an approach, but unfortunately the method is only benchmarked on one application. The authors might want to benchmark against an additional application that benefits from relative comparison.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for your time and effort in reviewing our submission. Next, we would like to respond to the main concerns raised in the comments.
***
**Q1: Comparison with supervised learning based on ordinary labels.**
**A1:** We agree with you and list the performance of the supervised learning method.
Here are the experimental results on the four benchmark data sets:
| Class Prior | Method | MNIST | Kuzushiji | Fashion | CIFAR-10 |
| ------------- | ---------- | ----------- | ----------- | ----------- | ----------- |
| $\pi_{+}=0.2$ | Supervised | 0.990±0.000 | 0.939±0.001 | 0.979±0.001 | 0.894±0.003 |
| $\pi_{+}=0.5$ | Supervised | 0.986±0.000 | 0.929±0.002 | 0.976±0.001 | 0.871±0.003 |
| $\pi_{+}=0.8$ | Supervised | 0.991±0.001 | 0.942±0.003 | 0.979±0.000 | 0.897±0.002 |
Here are the experimental results on the four UCI data sets:
| Class Prior | Method | Optdigits | USPS | Pendigits | Letter |
| ------------- | ---------- | ----------- | ----------- | ----------- | ----------- |
| $\pi_{+}=0.2$ | Supervised | 0.990±0.002 | 0.984±0.002 | 0.997±0.001 | 0.978±0.003 |
| $\pi_{+}=0.5$ | Supervised | 0.988±0.003 | 0.980±0.003 | 0.997±0.001 | 0.975±0.001 |
| $\pi_{+}=0.8$ | Supervised | 0.987±0.003 | 0.983±0.002 | 0.997±0.001 | 0.976±0.004 |
In particular, the performance is comparable to the Pconf method, which can be used as a reference for performance comparisons.
***
**Q2: Additional compared approaches for the recommender system application.**
**A2:** Yes, adding more compared approaches will greatly strengthen the paper. Therefore, we add the following compared approaches: 1) Binary Cross-Entropy Loss (BCE), which uses the assigned hard labels as the target and applies the cross-entropy loss; 2) Bayesian Personalised Ranking (BPR) (Rendle et al., 2009), which uses the logistic loss to rank a pair of items; 3) Margin Ranking Loss (MR), which ranks a pair of items using the margin loss. The hyperparameters are the same as those in Appendix I. The experimental results are summarised as follows:
| Method | HR | NDCG |
| ------------- | ----- | ----- |
| BCE | 0.469 | 0.283 |
| BPR | 0.464 | 0.256 |
| MR | 0.476 | 0.271 |
| Pcomp-Teacher | 0.179 | 0.066 |
| Pconf | 0.534 | 0.380 |
| ConfDiff-ABS | 0.570 | 0.372 |
It is worth noting that BCE did not perform well because the heuristic labeling method can introduce a lot of label noise. Based on the experimental results, we can see that ConfDiff-ABS achieves the best performance in terms of HR and achieves comparable performance to Pconf. This confirms the effectiveness of our method in tackling this problem in recommender systems.
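For reference, a minimal sketch of the two pairwise losses mentioned above, computed for a single (positive, negative) score pair. These are the standard textbook definitions, not the exact code used in our experiments:

```python
import math

def bpr_loss(pos_score, neg_score):
    # BPR (Rendle et al., 2009): logistic loss on the score difference,
    # i.e. the negative log-sigmoid of (positive score - negative score).
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # MR: hinge loss that pushes the positive score above the
    # negative score by at least `margin`.
    return max(0.0, margin - (pos_score - neg_score))

# A correctly ordered pair incurs a small BPR loss, and zero MR loss
# once the margin is satisfied.
print(round(bpr_loss(2.0, 0.0), 4))   # 0.1269
print(margin_ranking_loss(2.0, 0.0))  # 0.0
```

In practice, both losses are averaged over sampled (positive, negative) item pairs per user.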
***
**Q3: Experiments on the medical data sets.**
**A3:** We agree that adding experiments on medical application benchmarks would better motivate the setting. In the introduction, we pointed out a promising application of our setting and method in the medical domain. However, collecting data for medical applications is demanding because it requires extensive domain knowledge and may raise privacy concerns. Therefore, we have not yet found such public data sets. We will consider collecting similar data sets as future work; for example, we will consider using a crowdsourcing platform (e.g., MTurk) and asking annotators to give the confidence difference between two examples. We are very grateful for your valuable suggestions.
***
Reference:
- Rendle et al., BPR: Bayesian personalized ranking from implicit feedback, UAI 2009.
---
Rebuttal Comment 1.1:
Title: Follow up on Rebuttal
Comment: I would like to thank authors for their thorough responses and addressing my concerns.
I am changing my final score to 6. | Rebuttal 1:
Rebuttal: First of all, we sincerely thank all the reviewers for their great efforts in reviewing this submission and providing helpful and valuable comments. Since we cannot revise our paper during the rebuttal period, we plan to make the following revisions in our paper:
- According to Reviewer Qs9H and Reviewer DX8U, we will include the performance of the supervised learning method in Tables 1 and 2.
- According to Reviewer Qs9H, we will include the results of additional compared approaches for the recommender system problem.
- According to Reviewer XYQ6, we will introduce a reference to the details of data sets in the paper.
- According to Reviewer G7aZ, we will add the details of the experimental setting for the Pcomp methods and the facts about Equations (6) and (8) to the paper. We will also discuss more about our data distribution assumption.
Besides, as suggested by Reviewer Qs9H and Reviewer XYQ6, we will consider collecting more real-world data sets for our problem as future work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image Steganography | Accept (poster) | Summary: This paper presents CRoSS, a novel image steganography framework leveraging text-driven diffusion models. It offers improved security, robustness, and controllability compared to cover-based methods.
Strengths: CRoSS is the first work to introduce diffusion models to image steganography and achieves these benefits without requiring additional training.
Weaknesses: - Equation 3 should be written in detail. Is it the same as the formulation in Appendix H of guided diffusion [A]?
- The collected dataset contains only 260 images, which makes the conclusions drawn from the presented quantitative experiments unconvincing. Besides, the secret and stego image pairs closely resemble each other either in appearance or in concept, e.g., cat versus tiger, or man versus woman. As a result, there is a restriction on secret-stego correlation that does not exist in the compared previous works.
- Stable Diffusion [B] utilizes a VAE to reconstruct images from the latent space, leading to inherent errors. As a result, the performance of CRoSS is constrained by this factor, resulting in significant errors. In hiding tasks, extraction accuracy is a crucial metric, but the PSNR between the revealed images and the corresponding secret images is merely 23.79 dB, or even less when there is distortion, suggesting that the method may be less applicable in the real world.
- For each instance of covert transmission using the proposed method, the recipient must know the exact prompt or condition, e.g., a word, sketch, or even parameters, used during hiding, which might be costly and impractical.
- As stated in [C], setting the guidance scale to 1 (used in the classifier-free control algorithm) may compromise the image editability. But in this paper, it is indeed set to 1, which could potentially limit the range of secret image selection.
[A] Dhariwal P, Nichol A. Diffusion models beat gans on image synthesis[J]. Advances in Neural Information Processing Systems, 2021, 34: 8780-8794.
[B] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10684-10695.
[C] Mokady R, Hertz A, Aberman K, et al. Null-text inversion for editing real images using guided diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 6038-6047.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Please conduct large-scale quantitative experiments, author may refer to the the settings in InstructPix2Pix [A].
- Please conduct experiments using prompts that show low relation with the secret images.
- More distortions shall be considered to verify general robustness.
[A] Brooks T, Holynski A, Efros A A. Instructpix2pix: Learning to follow image editing instructions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 18392-18402.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! If there are any remaining questions that have not been adequately addressed, please feel free to continue the discussion with us.
> ***We earnestly request you to reconsider your assessment of our work, taking into consideration different aspects.***
- We understand that your concerns about our work primarily center around objective fidelity and practical applicability. From our perspective, CRoSS holds distinct advantages in terms of $\textbf{\textcolor{red}{security and robustness}}$, which significantly enhance its $\textbf{\textcolor{red}{practical applicability}}$.
- The introduction of diffusion models for the $\textbf{\textcolor{red}{first}}$ time introduces new directions and tools in the field of image steganography, $\textbf{\textcolor{red}{inspiring}}$ future research.
> ***Weakness #1: The details of Equation 3 and its relationship with guided diffusion.***
- Equation 3 describes the sampling process of DDIM [1], while the appendix H in guided diffusion [2] discusses sampling process of conditional diffusion models. $\textbf{\textcolor{red}{These are distinct concepts and not closely related}}$.
- As the specific details of the DDIM sampling process are not a central focus of our work, we employed the concise form presented in Equation 3. This type of representation is commonly used in much of the relevant literature due to its brevity.
- The details of Equation 3 involve iteratively executing the single-step sampling formula (Equation 2) in a FOR loop, sampling from $\mathbf{x}_T$ to $\mathbf{x}_0$.
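- For concreteness, a sketch of the single step being iterated, written in the standard DDIM notation of [1] with the deterministic setting $\sigma_t = 0$ (the symbols in our Equation 2 may differ slightly):

$$\mathbf{x}_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \left( \frac{\mathbf{x}_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(\mathbf{x}_t, t)}{\sqrt{\bar{\alpha}_t}} \right) + \sqrt{1-\bar{\alpha}_{t-1}}\, \epsilon_\theta(\mathbf{x}_t, t)$$

  Applying this update from $t=T$ down to $t=1$ yields the full sampling process summarised by Equation 3.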
> ***Weakness #2 & Question #1: Insufficient size of Stego260 and similarity between secret and container images.***
- On one hand, the dataset size of Stego260 is not small by the standards of this field. For instance, some previous works, such as RIIS (CVPR 2022) [3], conducted evaluation on just 100 images from DIV2K.
- On the other hand, creating a large-scale dataset for image steganography with text prompt labels (similar to Instruct-Pix2Pix [4]) is both costly and not a primary focus of our study. It might be more appropriate to leave this issue for future work.
- In our experimental setting, there is indeed an issue of similarity between secret images and container images. This is primarily attributed to the limited editing capability of Stable Diffusion (with guidance scale = 1). However, this aspect is **independent** of our proposed CRoSS framework. If in the future there are more powerful diffusion-based editors available, they could be seamlessly integrated into CRoSS, potentially alleviating the concern you raised.
> ***Weakness #3: Concerns about objective fidelity and practicality.***
- The fidelity of our results is subjectively acceptable in terms of visual perception, and the quality does not reach the level of "significant errors" as you mentioned. Our emphasis is not on pixel-level objective fidelity metrics such as PSNR, but rather on semantic fidelity.
- It's important to emphasize that evaluating an image steganography algorithm requires a $\textbf{\textcolor{red}{comprehensive consideration}}$ of three aspects: fidelity, security, and robustness. CRoSS exhibits significant advantages in terms of security and robustness, and considering that its fidelity is subjectively acceptable, CRoSS should indeed have $\textbf{\textcolor{red}{great practical advantages}}$ in real-world applications.
> ***Weakness #4: Practicality of the private and public keys such as prompts.***
- In most cases, we use text prompts as the key. In this scenario, we have the following reasons to demonstrate the practicality of using text prompts:
- (1) The transmission and storage costs of text prompts are **highly efficient**.
- (2) During the reveal process, it's not necessary for each word to be exact; rather, a **general semantic correctness** is sufficient.
- For other types of keys, they are extensions we introduced to showcase the diversity and extensibility of the CRoSS framework.
> ***Weakness #5 & Question #2: The editability is limited with 1 as guidance scale.***
- To ensure invertibility, we opted for Stable Diffusion (with guidance scale = 1), which indeed has limitations in terms of generation and editing capabilities. Therefore, it may not perform well in cases where the correlation between the given prompt and secret image is low.
- However, this limitation is inherent to Stable Diffusion itself. The focus of the CRoSS work is to introduce a novel framework, which is **independent** of the specific diffusion model chosen. If in the future, more powerful diffusion-based editors (such as the latest SDXL [5]) become available, they can be readily integrated as a plug-and-play replacement for Stable Diffusion to achieve improved outcomes.
> ***Question #3: More distortions should be considered to verify general robustness.***
- Regarding distortion types, we have conducted experiments involving five different types: **Resize, Gaussian noise, JPEG compression, WeChat, and Shoot**. Particularly, the WeChat and Shoot categories include complex real-world degradations, such as information loss due to **image compression, color distortion, and moiré patterns**.
- Following the advice from Reviewer mtD2, we have also added supplementary experiments in the supplementary rebuttal PDF file, including **Poisson noise, salt-and-pepper noise and blurring-a-patch distortion**.
- In conclusion, the number of distortion types included in the robustness experiments should be $\textbf{\textcolor{red}{sufficient}}$ to clearly demonstrate the significant advantages of CRoSS in terms of robustness.
> ***References:***
[1] Denoising Diffusion Implicit Models. ICLR 2021.
[2] Diffusion Models Beat Gans on Image Synthesis. NeurIPS 2021.
[3] Robust Invertible Image Steganography. CVPR 2022.
[4] InstructPix2Pix: Learning to Follow Image Editing Instructions. CVPR 2023.
[5] https://stablediffusionxl.com/
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have read the rebuttal and some concerns such as Weakness #2, have been addressed.
However, I still maintain some issues as follows:
+ Weakness #1: Actually, Appendix H in Guided Diffusion can be viewed as a detailed representation of the ODE underlying DDIM sampling, and diffusion models can be applied with diverse ODEs, such as EDM [1]. Therefore, a detailed description is warranted.
+ Weaknesses #3 & #5: The objective of a steganographic system is to accurately convey secret messages, so extraction accuracy is an indispensable metric. Moreover, cover and stego images often share some visual elements like backgrounds, which can compromise security to a certain extent. Setting the guidance scale to 1, in my opinion, restricts the breadth of real-world application scenarios.
+ Weakness #4: The authors mentioned that during the reveal process, it's possible to guess the prompt key. If feasible, it would be helpful to showcase relevant examples and demonstrate the extraction results.
Thanks for the authors' response. Looking forward to further discussion.
**Reference**:
[1] Elucidating the design space of diffusion-based generative models. NeurIPS 2022.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 1iwK's Concerns (part 1)
Comment: Thank you for your timely response and active engagement in the discussion! Below is our latest response:
> ***Weakness #1: The details of Equation 3 and its relationship with appendix H of guided diffusion***
We have carefully read Appendix H of the guided diffusion paper [1] (titled "Conditional Diffusion Process") and would like to provide some clarifications regarding your comments:
- Appendix H of [1] provides an overview of the theory behind the **conditional diffusion model**. The conditional diffusion model can be represented by $q(x_{t}|x_{t+1}, y)$. Using Bayes' theorem, it can be further broken into two components: $q(x_{t}|x_{t+1})$ representing the unconditional diffusion model and $q(y|x_{t})$ representing conditional guidance. For more detailed insights, please refer to Equation 55~61 in Appendix H of [1] along with the subsequent analysis. It's important to emphasize that the theory of the conditional diffusion model discussed in Appendix H of [1] is $\textcolor{red}{\textbf{independent of specific sampling algorithms}}$, whether it's DDIM or DDPM, ODE or SDE.
- Our Equation 3 describes the general DDIM sampling process, which is applicable to both conditional and unconditional diffusion models. In summary, there is a **noticeable distinction** between Equation 3 and Appendix H of [1].
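- To make the distinction concrete, the decomposition above can be written in score form as the standard classifier-guidance identity (a textbook result, shown here only for illustration, not a formula from our paper):

$$\nabla_{\mathbf{x}_t} \log q(\mathbf{x}_t \mid y) = \nabla_{\mathbf{x}_t} \log q(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log q(y \mid \mathbf{x}_t)$$

  This identity holds regardless of whether DDIM or DDPM, ODE or SDE sampling is used, whereas our Equation 3 specifies one particular (DDIM) sampler.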
However, we also find your suggestion to provide more detailed description for Equation 3 constructive. We will incorporate your suggestion into the revised version.
> ***Weakness #3: Issues of accuracy on extraction and the setting of "guidance scale=1"***
- Regarding the issue of fidelity, or more specifically, the **accuracy on extraction**, our clarification is as follows:
- Firstly, we agree with your perspective that fidelity is a crucial aspect of image steganography algorithms. However, achieving higher PSNR solely **under ideal conditions** falls short of practical significance. In real-world scenarios, container images often carry complex real-world distortions. Investigating fidelity under such conditions is more practical and significant. However, $\textcolor{red}{\textbf{this point has been overlooked in previous works}}$, which is precisely where CRoSS holds a unique advantage. CRoSS demonstrates better fidelity under various distortion conditions, showcasing enhanced robustness.
- Additionally, we believe that pixel-wise evaluation metrics like PSNR can be misleading. A significant amount of research has concentrated on boosting PSNR **under ideal conditions**, disregarding real-world application needs. While our CRoSS may not achieve the same level of **objective fidelity** as compared to previous methods, its **subjective fidelity** in terms of visual perception is indeed satisfactory.
- Lastly, $\textcolor{red}{\textbf{there is substantial room for improvement in CRoSS's objective fidelity.}}$ There are two possible approaches: (1) Not using latent diffusion models like Stable Diffusion but employing image diffusion models such as Imagen [2]. (2) Introducing strategies that require training. It's worth noting that CRoSS currently operates **without requiring training**. However, by maintaining the overall framework and simultaneously incorporating dedicated training for image steganography tasks, it is possible to mitigate its objective fidelity disadvantage while preserving its advantages in robustness and security. This would be a valuable topic for the future research.
- Regarding the issue of background similarity and the setting of **"guidance scale=1"**, our clarification is as follows:
- Currently, in order to ensure subjective fidelity, we have adopted the Stable Diffusion with a guidance scale of 1. This has indeed led to certain issues such as background similarity. What we want to emphasize here is that these issues $\textcolor{red}{\textbf{do not stem from the CRoSS framework itself}}$, but rather from the specific choice of the diffusion model used.
- Because the editing capability of Stable Diffusion with a guidance scale of 1 is limited, we adopted experimental settings such as using similar backgrounds. However, the generative capability of diffusion models is **continuously evolving**. For instance, recent advancements like the upgraded version of Stable Diffusion (SDXL) are further pushing the boundaries of diffusion model capabilities. In the future, we hope to leverage **more powerful** classifier-free diffusion models with a guidance scale of 1, or even non-classifier-free diffusion models, to achieve **stronger editing capabilities**.
- This way, we can enhance the CRoSS framework in a **plug-and-play manner**. CRoSS, as the first effort to integrate diffusion models into image steganography, introduces a new framework, showcasing pioneering value.
> ***References***
[1] Diffusion Models Beat Gans on Image Synthesis. NeurIPS 2021.
[2] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NeurIPS 2022.
---
Reply to Comment 1.1.2:
Title: Response to Reviewer 1iwK's Concerns (part 2)
Comment: > ***Weakness #4: Relevant examples when the receiver guesses the prompt***
- In the main paper, in **Figure 4** on the far left (Scenario #1), we illustrate potential candidate revealed results that the receiver might generate when trying to guess the prompt. As can be observed, for the receiver, it is not feasible to deduce the correct answer through random guessing of the prompt, as each candidate revealed image appears authentic.
- We will incorporate your instructive suggestion and provide more examples in the revised version.
We hope that our responses have addressed all of your concerns. We sincerely appreciate your continued engagement!
---
Reply to Comment 1.1.3:
Title: Looking forward to further discussions with the Reviewer 1iwK
Comment: Dear Reviewer 1iwK,
Thank you for your continued engagement in the discussion!
We apologize for any inconvenience caused by reaching out again. However, today marks the last day of the discussion phase, and we have provided detailed analysis and clarifications in response to your latest concerns. We would like to further discuss with you to ensure that your concerns are addressed.
Looking forward to our continued discussion!
Best regards,
Authors
---
Rebuttal 2:
Title: Looking forward to discussions with the Reviewer 1iwK
Comment: Dear Reviewer 1iwK,
We appreciate your previous review time and constructive comments. In response to your concerns, we have provided explanations, clarifications, and additional experimental results in the supplementary rebuttal PDF file.
We would like to know if your concerns have been adequately addressed. If you have further questions or comments, we would be more than willing to address and discuss them.
Thank you for your efforts!
Best regards,
Authors | Summary: This paper addresses coverless image steganography by taking the prompt as the guidance to generate stego images using Stable Diffusion. It shows better controllability with a language-driven model, and better robustness and security thanks to the stronger generative power of the diffusion probabilistic model. Experimental results show better performance than SOTAs.
Strengths: +) The first coverless steganography method that uses a pretrained diffusion model for image steganography;
+) Prompt as control of steganography
Weaknesses: -) Novelty is somewhat limited. It is obvious that using fixed sampling could always generate the same image with a pretrained diffusion model. I'd like to see contributions in terms of network structures and insights that drive coverless steganography in general.
-) It is not clear how the public and private keys are used in the model.
-) Missing experiments on popular datasets such as BOSSbase and BOWS2.
-) Another major problem is that RIIS performs better than CRoSS in some cases. Please explain this in detail. It is not proper that RIIS is trained on degraded data while CRoSS is not, as both can be evaluated under the same setting.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: +) In Figure 8, what does "Shoot" mean?
+) Typo: In Algorithm 1, L2, hided -> hidden
+) Does CRoSS have any limitations?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: Not mentioned. I'd like to hear about any potential limitations.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! If there are any additional comments to be added, please continue the discussion with us.
> ***Weakness #1: The novelty is limited.***
- Our contributions are primarily demonstrated in the following aspects. $\textbf{\textcolor{red}{These major contributions firmly establish the novelty of our work}}$:
- (1) We are the $\textbf{\textcolor{red}{first}}$ to apply diffusion models to image steganography.
- (2) Our approach exhibits $\textbf{\textcolor{red}{significant advantages}}$ in terms of security and robustness.
- (3) CRoSS, as a novel image steganography framework, can provide $\textbf{\textcolor{red}{valuable insights}}$ for future research in the field of image steganography.
- We are uncertain why you mentioned the point about "fixed sampling generating the same image," as it appears unrelated to our work. Could you please provide further clarification or elaboration on this matter?
- Our work does not focus on introducing a new model structure; instead, our objective is to present a novel image steganography framework. This framework inherently offers a wealth of insights to inspire future research.
> ***Weakness #2 & Question #1 & Question #3: Some missing contents (but we have already provided them)***
- **(Weakness #2)** We have provided detailed explanations about private and public keys in two separate paragraphs in $\textbf{\textcolor{red}{Section 3.3 (Lines 206-212, Lines 231-237)}}$. Could you please specify the specific points that you find unclear so that we can address them more precisely?
- **(Question #1)** We have already explained the meaning of "Shoot" in $\textbf{\textcolor{red}{Section 4.4 (Lines 305-306)}}$. The meaning of "Shoot" is: we utilize the mobile phone to capture the container images on the screen and then simply crop and warp them.
- **(Question #3)** We have thoroughly discussed the limitations of CRoSS in $\textbf{\textcolor{red}{Section D}}$ of the supplementary materials.
> ***Weakness #3: Missing experiments on other datasets.***
- The two datasets you mentioned, BOSSbase and BOWS2, both consist of 10,000 grayscale images with dimensions of 512x512. As far as we are aware, many studies related to image steganography have not conducted experiments on them, such as RIIS (CVPR 2022) [1], HiNet (ICCV 2021) [2], and ISN (CVPR 2021) [3].
- Moreover, these two datasets $\textbf{\textcolor{red}{do not align with our experimental setting}}$. For evaluation, CRoSS requires labeled text prompts for secret images. BOSSbase and BOWS2 do not meet this specific requirement. Therefore, we gathered and labeled the Stego260 dataset ourselves for evaluation, and this dataset can also facilitate future related research.
> ***Weakness #4: Comparison between RIIS and CRoSS***
- Please note that RIIS [1] is a method that requires training specific to degradation types, whereas CRoSS $\textbf{\textcolor{red}{does not require training}}$. Therefore, RIIS and CRoSS cannot share the same setting and it's not surprising that RIIS performs better in scenarios with no degradation or some synthetic degradation, as opposed to CRoSS.
- However, CRoSS exhibits $\textbf{\textcolor{red}{significant advantages}}$ in cases involving real-world degradations such as WeChat and Shoot. Furthermore, it's not possible for RIIS to train on all types of degradations, including real-world degradations. Therefore, CRoSS holds clear $\textbf{\textcolor{red}{practical advantages}}$ over RIIS, and such a comparison is appropriate.
> ***Question #2: A typo***
Thank you for pointing out this typo. We will correct it in the revised version.
> ***References***
[1] Robust Invertible Image Steganography. CVPR 2022.
[2] HiNet: Deep Image Hiding by Invertible Network. ICCV 2021.
[3] Large-capacity Image Steganography based on Invertible Neural Networks. CVPR 2021.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I appreciate the responses from authors. However, I have concerns regarding:
1. Steganalysis. A good steganography method is capable of defending against various steganalysis methods. The popular SRNet [1] and SigstegNet [2] are needed to validate the performance of the steganography method. I believe SiastegNet is an improved version of SID.
2. I believe CRoSS and RIIS can share the same setting when both of them are trained on the same setting. Otherwise you could not compare the two. According to the numeric results, I'm afraid it's no better than RIIS.
[1] Mehdi Boroumand, et al, Deep Residual Network for Steganalysis of Digital Images, http://www.ws.binghamton.edu/fridrich/research/SRNet.pdf
[2] Weike You et al, A Siamese CNN for Image Steganalysis, TIFS
---
Reply to Comment 1.1.1:
Title: Response to Reviewer g14p's Concerns
Comment: Thank you for your prompt response. Below is our response for your concerns:
> ***Experiments involving various steganalysis methods***
- Firstly, our paper $\textbf{\textcolor{red}{already}}$ includes experiments involving various steganalysis methods, as shown in $\textbf{\textcolor{red}{Figure 5 and Table 1}}$ of the main paper. These methods include **SID** [1], **StegExpose** [2], **XuNet** [3], **YedroudjNet** [4], and **KeNet** [5]. $\textbf{\textcolor{red}{It's worth noting that KeNet [5] is the same as the method SigStegNet you mentioned.}}$ The authors of [5] chose the name **"KeNet"** for **SigStegNet**, and you can find more details about this naming choice in their GitHub link [6].
- Secondly, we have included experiments on the detection accuracy of **SRNet** [7], as you mentioned. The closer the detection accuracy is to 50%, the better the security. The results of these experiments are as follows and demonstrate the superiority of CRoSS over other methods in terms of security:
| #Leaked Samples | 60 | 120 | 180 |
| ----- | ----- | ----- | ----- |
| ISN | 57.5% | 67% | 66.75% |
| Baluja | 55.75% | 65% | 71.5% |
| HiNet | 58.67% | 63.5% | 68.33% |
| RIIS | 54.25% | 61.67% | 59.5% |
| **CRoSS** | **54%** | **59.5%** | **57.25%** |
- In conclusion, we believe that the number of steganalysis methods involved in the experiments is $\textbf{\textcolor{red}{sufficient}}$ to demonstrate the security advantages of CRoSS.
> ***Experiments with the same settings for both RIIS and CRoSS***
- We need to emphasize again that $\textbf{\textcolor{red}{CRoSS does not require training for specific distortion types.}}$ In order to make CRoSS and RIIS share the same setting, we compared the robustness differences between RIIS and CRoSS, where RIIS **was not trained under any distortion types.** The PSNR results are shown below, and CRoSS outperforms RIIS (**"GN"** refers to Gaussian noise):
| Methods | GN ($\sigma=10$) | GN ($\sigma=20$) | GN ($\sigma=30$) | JPEG ($Q=20$) | JPEG ($Q=40$) | JPEG ($Q=80$) |
| --- | --- | --- | --- | --- | --- | --- |
| RIIS | 13.78 | 12.73 | 11.09 | 7.48 | 10.05 | 13.82 |
| **CRoSS** | **21.89** | **20.19** | **18.77** | **21.74** | **22.74** | **23.51** |
- Furthermore, based on the experimental results we have already provided, CRoSS $\textbf{\textcolor{red}{undoubtedly outperforms}}$ RIIS in terms of robustness and security. These results include:
- Most results in $\textbf{\textcolor{red}{Figure 5 and Table 1}}$ of the main paper (security experiments).
- $\textbf{\textcolor{red}{Figure 8}}$ of the main paper (subjective results of robustness experiments).
- Most results in $\textbf{\textcolor{red}{Table 2}}$ of the main paper (objective results of robustness experiments).
- $\textbf{\textcolor{red}{Figure B.4}}$ in the supplementary material (more subjective results of robustness experiments).
- $\textbf{\textcolor{red}{Figure 1, Figure 3 and Table 1}}$ in supplementary rebuttal PDF file (more subjective and objective results of robustness experiments).
> ***References***
[1] Steganalyzing images of arbitrary size with CNNs. Electronic Imaging 2018.
[2] Stegexpose-A tool for detecting LSB steganography. ArXiv 2014.
[3] Structural design of convolutional neural networks for steganalysis. IEEE Signal Processing Letters 2016.
[4] Yedroudj-net: An efficient CNN for spatial steganalysis. ICASSP 2018.
[5] A Siamese CNN for image steganalysis. TIFS 2020.
[6] https://github.com/SiaStg/SiaStegNet#quickstart
[7] Deep residual network for steganalysis of digital images. TIFS 2018.
---
Reply to Comment 1.1.2:
Title: Looking forward to further discussions with the Reviewer g14p
Comment: Dear Reviewer g14p,
We appreciate your previous review and the prompt response. We have responded to your latest concerns and supplemented the experiments on steganalysis methods and the comparison between RIIS and CRoSS.
We would like to further discuss whether your concerns have been addressed. If there are any aspects of our work that are still unclear to you, please let us know.
Thank you for your continued engagement!
Best regards,
Authors | Summary: This paper introduces diffusion models to the field of image steganography. It argues the significant advantages in controllability, robustness, and security compared to cover-based image steganography methods. It utilized the power of Stable Diffusion to translate between two images without training. This paper collects a benchmark and shows experiments on it.
Strengths: 1. This paper proposes a novel idea of utilizing the diffusion model for image steganography. I think this is reasonable because pretrained diffusion models can invert an image to noise or translate between images without training.
2. The presentation is clear, e.g., Fig. 2 and Fig. 3.
3. The qualitative results seem good as shown in Fig. 6.
4. This paper collects a benchmark containing 260 images, which may be useful for the community.
5. Analysis is in detail, including discussion from Sec 4.2 to Sec 4.4.
Weaknesses: 1. According to null-text inversion [1], DDIM is not enough to invert a real image perfectly. In this case, how to deal with the artifacts in your method? I believe low-level artifacts are fatal for image steganography.
2. Most secrets and containers share a similar background. Does it mean we need a good image editing tool when using this method?
3. It is difficult to generate high-quality human faces with Stable Diffusion. Inverting human faces is also not stable. Why do the qualitative results in Fig. 6 seem good?
[1] Null-text Inversion for Editing Real Images using Guided Diffusion Models
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why Secret and Container are required to share a similar background?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the "weakness".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We hope that our rebuttal has addressed all your concerns. If there are still aspects that need further clarification, please feel free to continue the discussion with us!
> ***Weakness #1: The invertibility is not perfect, and how to address the artifacts.***
- We acknowledge that achieving perfect invertibility through DDIM Inversion can be challenging. Therefore, at this stage, we only require the revealed image to match the secret image in subjective visual quality. By ensuring **acceptable subjective fidelity**, we can fully exploit the advantages of CRoSS in terms of security and robustness.
- Regarding the concern you raised about "low-level artifacts", we suspect you are referring to the situation where some of the revealed images might appear to have slightly lower quality. This phenomenon can be attributed to the generation capability of the diffusion model, which directly impacts the quality of the revealed image. However, it's important to note that our method itself is **independent of the specific choice of the diffusion model**. Opting for a more powerful diffusion model (such as the latest SDXL [1]) can further enhance the quality of the revealed image, thereby potentially alleviating the concern you mentioned.
> ***Weakness #2 & Question #1: The similarity between the secret and container images' backgrounds***
- In our experimental setting, there is indeed a similarity in background between the secret image and the container image. This similarity is primarily influenced by the editing capability of Stable Diffusion (with guidance scale = 1) that we employ. Therefore, your observation is correct: a better editor could enhance our proposed CRoSS framework in a plug-and-play manner, resulting in greater diversity between the secret image and the container image.
> ***Weakness #3: The generation capability and stability on human face images***
- Based on our knowledge, Stable Diffusion does possess the ability to generate high-quality human faces. There are numerous galleries within the community that serve as references, such as the following blog [2].
- Furthermore, within our experimental setting, we do not solely rely on Stable Diffusion to generate results from scratch. Instead, we utilize DDIM Inversion for image editing, which is less challenging and exhibits higher stability.
> ***References***
[1] https://stablediffusionxl.com/
[2] https://www.reddit.com/r/StableDiffusion/comments/13bjs6x/some_unedited_faces_made_with_base_sd_15/
---
Rebuttal 2:
Title: Looking forward to discussions with the Reviewer Nci3
Comment: Dear Reviewer Nci3,
We appreciate the time you dedicated to reviewing our work and your recognition of our work. Regarding the concerns you raised, we have provided explanations in our responses.
We would like to ensure that your concerns have been adequately addressed. If there are any aspects of our work that remain unclear to you, please don't hesitate to let us know.
Thank you for your dedication!
Best regards,
Authors | Summary: In this work, the authors introduce an image steganography framework (CRoSS) that leverages the properties of diffusion models to enhance the security, controllability, and robustness of the steganography process. The authors show how the diffusion model can integrate with image steganography to achieve these goals without additional training. The authors claim this to be the first work to introduce diffusion models to the field of image steganography. The effectiveness of the proposed CRoSS framework was validated with different experiments where the authors demonstrated the advantages over existing methods.
Strengths: - This paper proposes to apply diffusion models to the field of image steganography, creatively combining existing ideas to overcome the limitations of traditional methods.
- The paper is clear, and the authors clearly define their goals, explain their methodology, and discuss their results.
- Overall, the paper was easily understandable and easy to follow. Its structure is well-organized, and the proposed method is presented clearly and makes sense.
- To validate the effectiveness of the method, the authors conducted different experiments showing the advantages of the proposed approach in terms of security, controllability, and robustness.
- In general, the paper is very well written.
Weaknesses: - Although the authors considered validating their method against distortion attacks, Gaussian noise distortion is not the appropriate approach to evaluate its robustness. As the authors may know, Gaussian noise is already used in the diffusion process due to its mathematical properties. The Gaussian distribution is symmetric and has the property that the sum of multiple Gaussian random variables is also Gaussian. This makes it mathematically convenient, especially in diffusion models where noise is added iteratively. Therefore, adding more Gaussian noise to the container image will not affect the reveal process too much. I would like to see whether the method is still robust to types of noise other than Gaussian.
- In my opinion, the robustness validation made in the paper is not enough to conclude that the method is robust. For example, did the authors consider the scenario where an attacker performs not only global distortions (like JPEG compression, adding noise, etc.) but local distortions, e.g., blurring a patch on the image?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Will the proposed method recover the secret image well when an attacker applies another type of noise? Can this negatively impact the performance of the method?
- Is the method robust to local distortions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors acknowledged the limitations of their work, including the gap in pixel-wise objective fidelity metrics, the trade-off that sacrifices the editing capability to ensure the zero-shot invertibility of image translation, and the limitation of hiding only one secret image within a single container image. However, they could provide more insights into potential solutions or future research directions to address these limitations. They did not explicitly discuss potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We hope that our response addresses all of your concerns. All discussions and supplementary experiments will be included in our revised version. If there are any remaining questions that have not been resolved, please feel free to continue the discussion with us!
$\textbf{\textcolor{red}{The supplementary rebuttal PDF file can be found at the bottom of the overall response.}}$
> ***Weakness #1 & Question #1: Robustness experiments involving a broader range of noise types***
- The robustness experiments involving additional types of noise are provided in the supplementary rebuttal PDF file (**Table 1 and Figure 3**). We have included the results for **Poisson noise** and **salt-and-pepper noise**. The experimental results show that CRoSS maintains robustness against other types of noise and has an advantage compared to other methods.
- Regarding the implementation details of the two types of noise, for Poisson noise, we referred to the implementation in [1] and adjusted the noise level using the parameter $\alpha$. For salt-and-pepper noise, we adjusted the noise level using the parameter probability $p$. The examples of degraded images are presented in **Figure 2** of the Supplementary Rebuttal PDF file.
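To make the two degradations above concrete, here is a minimal NumPy sketch of how such noise can be applied; the function names and the assumed 8-bit intensity range are illustrative, and the exact Poisson parameterization in our experiments follows [1]:

```python
import numpy as np

def add_poisson_noise(img, alpha=2.0, rng=None):
    # Scaled (signal-dependent) Poisson noise; alpha controls the
    # noise level (larger alpha -> weaker noise in this common form).
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(rng.poisson(img * alpha) / alpha, 0.0, 255.0)

def add_salt_and_pepper(img, p=0.05, rng=None):
    # With probability p a pixel is corrupted, becoming 0 ("pepper")
    # or 255 ("salt") with equal chance (8-bit range assumed).
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(float).copy()
    mask = rng.random(img.shape) < p
    salt = rng.random(img.shape) < 0.5
    out[mask & salt] = 255.0
    out[mask & ~salt] = 0.0
    return out
```

The noise level is then swept by varying `alpha` and `p`, respectively.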
> ***Weakness #2 & Question #2: Robustness experiments involving local distortions***
- The robustness experiments concerning local distortions are presented in the supplementary rebuttal PDF file (**Figure 1**). We have included the results for the scenario of **blurring a patch**. The experimental results indicate that CRoSS maintains significant robustness against local distortions and exhibits clear advantages compared to other methods.
- Regarding the implementation details of "blurring-a-patch," we extracted a $256\times256$ patch from the center of the image and applied Gaussian blur exclusively to this patch. The blur kernel size was set to $5$, and the sigma was set to $2$. The examples of degraded images are presented in **Figure 2** of the Supplementary Rebuttal PDF file.
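For reference, a minimal NumPy sketch of the "blurring-a-patch" degradation described above (a 5x5 Gaussian kernel with sigma 2 applied only to the central 256x256 patch); the function names are illustrative, the image is assumed grayscale, and edge padding is one simple boundary choice:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=2.0):
    # Normalized 2D Gaussian kernel of shape (size, size).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_center_patch(img, patch=256, ksize=5, sigma=2.0):
    # Apply Gaussian blur to the central patch-by-patch region only,
    # leaving the rest of the image untouched.
    h, w = img.shape
    top, left = (h - patch) // 2, (w - patch) // 2
    k = gaussian_kernel(ksize, sigma)
    pad = ksize // 2
    region = np.pad(img[top:top + patch, left:left + patch], pad, mode="edge")
    blurred = np.zeros((patch, patch))
    for i in range(ksize):
        for j in range(ksize):
            blurred += k[i, j] * region[i:i + patch, j:j + patch]
    out = img.copy()
    out[top:top + patch, left:left + patch] = blurred
    return out
```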
> ***Future research directions for limitations***
- (1) Regarding the limitation of **the gap in pixel-wise objective fidelity metrics**, which is primarily attributed to the training-free nature of DDIM Inversion, a potential solution could involve training diffusion models specifically for image steganography tasks to improve objective fidelity.
- (2) Regarding the limitation of **sacrificing editing capability to ensure invertibility**, which is mainly attributed to the unsatisfactory generation capabilities of Stable Diffusion (with guidance scale = 1), a potential strategy involves keeping up with the latest advancements in the diffusion model field and replacing the base model in CRoSS in a plug-and-play way to alleviate this issue.
- (3) Regarding the limitation of **hiding only one secret image within a single container image**, this is primarily due to the current design of diffusion models being Single Input Single Output (SISO). A potential solution could involve designing Multiple Input Multiple Output (MIMO) diffusion models specifically for high-capacity image steganography.
> ***Potential negative societal impacts***
- The related technology could potentially be misused for the improper distribution of private personal content, or of images that might cause offense.
> ***References***
[1] Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis. ArXiv 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. The provided rebuttal addressed my questions. I will update and maintain my score toward acceptance.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer mtD2 for recognizing our work
Comment: Dear Reviewer mtD2,
Thank you for engaging in our discussion and recognizing that our responses have effectively addressed all the concerns you raised. We greatly appreciate your constructive comments, which have contributed to the improvement and solidity of our work!
Best regards,
Authors | Rebuttal 1:
Rebuttal: We sincerely appreciate all the constructive comments from the reviewers! Below is our brief overall response.
> ***Firstly, we are delighted to observe that the reviewers have acknowledged various aspects of our work:***
- Reviewer mtD2 and Reviewer Nci3 take a $\textbf{\textcolor{red}{positive}}$ view of our work in terms of $\textbf{\textcolor{red}{soundness, presentation, contribution, novelty and effectiveness.}}$
- $\textbf{\textcolor{red}{All}}$ reviewers, including Reviewer mtD2, Reviewer Nci3, Reviewer g14p, and Reviewer 1iwK, consider our attempt to introduce diffusion models into the field of image steganography as a $\textbf{\textcolor{red}{strength}}$ of our work.
> ***Secondly, we would like to emphasize the value and contribution of our work:***
- Based on diffusion models, CRoSS demonstrates unique advantages in terms of $\textbf{\textcolor{red}{security and robustness}}$, while maintaining acceptable subjective fidelity. This highlights the $\textbf{\textcolor{red}{greater practicality}}$ of CRoSS compared to previous non-diffusion model methods.
- CRoSS introduces the $\textbf{\textcolor{red}{first}}$ image steganography framework based on diffusion models, and this framework exhibits strong extensibility, serving as $\textbf{\textcolor{red}{inspiration}}$ for the future development of the field of image steganography.
- We have observed that Reviewer g14p claims a lack of novelty and insights in our work. However, the advantages and contributions listed above clearly demonstrate the novelty of CRoSS.
> ***Thirdly, we would like to further clarify the concerns regarding objective fidelity and practicality.***
- We have observed Reviewer 1iwK's concerns regarding the objective fidelity and practicality of CRoSS. However, evaluating an image steganography algorithm requires $\textbf{\textcolor{red}{a comprehensive consideration of fidelity, security, and robustness}}$, which many previous methods have overlooked, leading to poor practicality. In contrast, CRoSS exhibits excellent security and robustness, along with acceptable subjective fidelity, which grants it significant practical advantages.
We kindly request the reviewers to thoroughly consider the value and contribution of our work. The detailed rebuttals for each reviewer can be found below.
Additionally, we have attached a $\textbf{\textcolor{red}{PDF file}}$ containing some figures and tables for the reviewers' reference.
Pdf: /pdf/e6926881bfe9688c6fa94ea2ab63647aef93c5f1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
High Precision Causal Model Evaluation with Conditional Randomization | Accept (poster) | Summary: The authors formulate and evaluate an approach to solving a non-standard problem: evaluating a causal model M when additional data (not used to construct M) is available from a non-randomized experiment. In particular, the authors focus on comparing IPW estimates from the non-RCT data and from the inferences of the model.
Strengths: The idea is a simple and apparently powerful one: Remove the variability due to IPW by performing IPW on both the actual data and the estimates from the model. This largely removes an apparently extraneous source of variability (IPW itself) and allows direct comparison of the estimates.
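For concreteness, the idea as I understand it can be sketched in a few lines of NumPy; everything here (names, data-generating process, the way the model's outcomes are formed) is illustrative rather than the paper's actual Equation 1:

```python
import numpy as np

def ipw_ate(t, y, e):
    # Standard inverse-propensity-weighted ATE estimate from
    # treatments t, outcomes y, and known propensities e.
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

rng = np.random.default_rng(0)
n = 50_000
x = rng.random(n)
e = 0.3 + 0.4 * x                               # known conditional propensities
t = (rng.random(n) < e).astype(float)           # non-random assignment
y = 2.0 * t + x + 0.1 * rng.standard_normal(n)  # observed outcomes, true ATE = 2
y_model = 2.0 * t + x                           # a (here, perfect) model's outcomes

# Naive comparison: model's own ATE vs. the IPW estimate from the data;
# the IPW sampling noise enters the error even for a perfect model.
naive_err = abs(2.0 - ipw_ate(t, y, e))

# Paired comparison: run IPW on both real and model outcomes with the
# same weights, so the shared IPW variability is intended to cancel.
paired_err = abs(ipw_ate(t, y_model, e) - ipw_ate(t, y, e))
```

In this toy setup the paired error is driven only by the model-data discrepancy, which is exactly the cancellation described above.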
Weaknesses: The basic idea of the pairs estimator assumes some basic properties of the IPW estimator. Specifically, if IPW was a terrible estimator whose estimates were unrelated to the data (for example, it always output a single value for ATE: 0.5), then the pairs estimator would always show that the model was essentially perfect, regardless of the model’s estimates or the non-RCT data. I don’t think this scenario likely, but it is possible. This implies that, at least, some diagnostic tests are in order to increase confidence in the output of the pairs estimator. For example, you could introduce noise into the model’s estimates and see if the estimated error increases.
The results in Figures 2 and 3 are very good. Indeed, they are *freakishly* good. They are so good that it makes me wonder whether the experiments reported in these figures are really evaluating anything important about how the pairs estimator works in practice. This bears some discussion in the description of the results.
The introduction to the paper may be confusing to many readers. When first encountering the term “non-random experiment”, many readers will balk, thinking that randomization is the *sine qua non* of experimentation and seeing “non-random experiment” as a contradiction in terms. The authors could save readers this confusion by introducing the example of explicit non-random assignment (line 38) earlier or by moving the first paragraph of Section 2 (Related Works) to early in the introduction.
Another issue may confuse readers: The contrast between an IPW estimate and the “model’s” estimate of treatment effect. For many readers, the goal of analyzing observational data is to get a single estimate of treatment effect, perhaps through IPW. In this scenario, there is no “model” (or, the model is a model of treatment propensity). The authors could substantially improve the paper by explaining one or more practical scenarios in which a researcher has both a model of causal effect and a non-RCT data set that has not been used to create that model.
The paper has occasional small grammar errors that detract from the authors’ message. An example is the first sentence of the contributions (with corrections noted in brackets): “We focus on causal model evaluation with non-RCT[s], propose a novel method for low-variance estimation of causal error (Equation 1), and demonstrate its effective[ness] over current approaches by achieving near-RCT performance.” A little more care in editing would improve the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What are examples of practical scenarios in which researchers have a model of treatment effect and then they collect data non-RCT data to evaluate that model?
2. Under what circumstances would the pairs estimator fail to provide estimates with low variance and low bias?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The experiments do not identify cases in which the pairs estimator will fail. In the synthetic experiments whose results are shown in Figure 2 and 3 (which could be used to identify such failure modes), the results for the pairs estimator are freakishly good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review. Below we address your question respectively.
> Some diagnostic tests are in order to increase confidence in the output of the pairs estimator. For example, you could introduce noise into the model’s estimates and see if the estimated error increases.
Thank you for your suggestion. We will add failure scenarios of our method and introduce practical diagnostics along these lines.
> The results in Figures 2 and 3 are very good. Indeed, they are freakishly good. They are so good that it makes me wonder whether the experiments reported in these figures are really evaluating anything important about how the pairs estimator works in practice. This bears some discussion in the description of the results.
While the results in Figures 2 and 3 are indeed very good, they are based on the fact that the assumptions of our method are met or expected to be met. These results mainly validate the correctness of our theoretical results and demonstrate the practicality of our assumptions. In the revision, we will provide further discussion including potential failure cases and scenarios where the assumptions may not hold.
> The introduction to the paper may be confusing to many readers. When first encountering the term “non-random experiment”, many readers will balk, thinking that randomization is the sine qua non of experimentation and seeing “non-random experiment” as a contradiction in terms. The authors could save readers this confusion by introducing the example of explicit non-random assignment (line 38) earlier or by moving the first paragraph of Section 2 (Related Works) to early in the introduction.
Thank you so much for these suggestions. We recognize that the naming convention, which we originally followed from [1], may lead to confusion. In the revision, we will apply your suggestions and mention the different naming conventions used in the literature.
> What are examples of practical scenarios in which researchers have a model of treatment effect and then they collect data non-RCT data to evaluate that model? The authors could substantially improve the paper by explaining one or more practical scenarios in which a researcher has both a model of causal effect and a non-RCT data set that has not been used to create that model.
Such scenarios can be found in various applications. For example, in the personalized medicine domain, researchers build complex models such as structural causal models (SCMs) to predict various types of causal quantities beyond the ATE. In practice, performing trials for every single type of causal query is expensive and ethically challenging, making the collection of non-RCT data a valuable downstream task for gaining insight into the model's performance.
> The paper has occasional small grammar errors that detract from the authors’ message. An example is the first sentence of the contributions (with corrections noted in brackets): “We focus on causal model evaluation with non-RCT[s], propose a novel method for low-variance estimation of causal error (Equation 1), and demonstrate its effective[ness] over current approaches by achieving near-RCT performance.” A little more care in editing would improve the paper.
Thank you for your input. In revision, we will further polish and improve the paper.
**References**
[1] Rubin, Donald B. "Estimating causal effects of treatments in randomized and nonrandomized studies." Journal of Educational Psychology 66.5 (1974): 688.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional information and thoughtful responses. I am increasing my rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your constructive feedback and appreciate your acknowledgment of our attempts to address the concerns raised. | Summary: This paper proposes a new estimator for the causal error that achieves lower variance than previous approaches.
The estimator consists of the difference between an IPTW-like estimator using the causal model and a direct IPTW causal effect estimator.
The paper shows that under clear assumptions, this estimator results in lower variance than a naive estimator, which only uses a typical IPTW. The authors further test this estimator empirically on a wide range of setups that both satisfy and potentially violate their assumptions.
Strengths: This paper proposes a simple yet effective way to reduce the variance of the causal error estimation.
The assumptions under which the main result hold are very clearly stated and discussed.
The authors have extensively tested their approach. In particular, they have tested both on simulated data that satisfy their assumptions as well as on data that potentially violate them, to stress test the method.
Weaknesses: The main theoretical comparison of this paper seems to be with the naive IPTW estimator. As the authors state in the related works section, other estimators have been proposed in the literature to reduce the variance of the causal effect estimator. How does this estimator compare to those theoretically? And can you leverage some of the improvements to IPTW in your method too?
I think the whole goal of the paper deserves more clarity. For instance, the overall goal is usually to estimate the true causal effect rather than the causal error. Having a good estimator of the causal effect would directly result in a low causal error. I believe this deserves more motivation / details in the text.
Furthermore, I would encourage the authors to make the problem setup clearer. In my opinion, it is not fully transparent from the paper whether the model is trained on a different set of samples than the ones used for the IPTW estimator. I believe this is the case, but it should be stated more clearly.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Same as above:
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations and assumptions of this work were carefully addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive opinion of our work. We are pleased to see that you appreciate our efforts in technical soundness, presentation, and evaluation.
> The main theoretical comparison of this paper seems to be the naive IPTW estimator. As the authors state in the related works section, other estimators have been proposed in the literature to reduce the variance of the causal effect estimator. How does this estimator compare to those theoretically? And can you leverage some of the improvements to IPTW in your method too?
Thanks for your question. First, due to the variety of existing estimators, we cannot assert a theoretical advantage over each of them. However, we believe our results can be extended to any extension of self-normalized IPW, such as adaptive normalization for IPW estimation [1]. We will add a discussion of this in the revision. Moreover, we would like to emphasize that our paper focuses on improving the evaluation of causal inference models, rather than improving causal effect estimation itself.
> Having a good estimator of the causal effect would directly result in low causal error. I believe this would deserve some more motivation / details in the text.
We appreciate your suggestion and will provide better motivation in our revision. We would like to emphasize that we did evaluate our method against other IPW variance reduction methods that are designed to obtain better causal effects; our results indeed show that our method can still be very valuable when other effect-oriented estimators do not work well.
Our method aims to offer a new, easy-to-use tool for causal practitioners, complementing other existing tools. By focusing on causal error estimation, we can address specific challenges in evaluating causal models.
> Furthermore, I would encourage the author to make the problem setup clearer. In my opinion, it is not fully transparent from the paper whether the model is trained on a different set of samples than the ones used for the IPTW estimator. I believe this is the case but it should be stated more clearly.
Thank you for pointing this out. We will make this clearer in our revision. As illustrated in Figure 1, there are two data sources: an observational data source for training the model and an interventional data source obtained through a designed oracle treatment assignment mechanism P(T=1|X) for validating the model. The IPW estimator is used to estimate the ground truth effect based on the interventional data, leveraging the known propensity scores P(T=1|X).
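To make this setup concrete, the following is a minimal sketch (all data-generating choices are hypothetical, not from the paper) of how a designed, known P(T=1|X) lets an IPW estimator recover the ground-truth effect from interventional data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interventional data source: the oracle assignment mechanism
# P(T=1|X) is known by design (here a simple logistic function of one covariate).
N = 10_000
X = rng.normal(size=N)
p = 1.0 / (1.0 + np.exp(-X))       # known oracle propensity scores P(T=1|X)
T = rng.binomial(1, p)             # treatments drawn from the designed mechanism
Y1 = 2.0 + X + rng.normal(size=N)  # potential outcome under treatment
Y0 = X + rng.normal(size=N)        # potential outcome under control
Y = np.where(T == 1, Y1, Y0)       # only one outcome per unit is observed

# IPW estimate of the ground-truth ATE, leveraging the known propensity scores
ate_ipw = np.mean(T * Y / p) - np.mean((1 - T) * Y / (1 - p))
# The true ATE in this toy model is 2.0; ate_ipw should be close to it.
```

This estimate of the ground-truth effect is what the model's predicted effect is compared against during evaluation.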
**Reference**
[1] Khan, Samir, and Johan Ugander. "Adaptive normalization for IPW estimation." Journal of Causal Inference 11.1 (2023): 20220019.
---
Rebuttal 2:
Comment: Thank you again for your review; we would like to ascertain whether our rebuttal has adequately resolved the concerns you raised. We continue to welcome any supplementary observations or clarifications to bolster our work. | Summary: This paper constructs a new estimator for IPW evaluation by comparing the IPW estimator applied to the model-predicted treatments versus the observed treatments. The paper presents a theoretical result that this estimator has lower variance than the naive one and aims to demonstrate this via empirical experiments.
Strengths: 1. The structure of the paper and most of the writing are very good.
2. The theoretical result is important.
3. The empirical results seem to show that the estimator is better than the naive one.
Weaknesses: 1. The paper could benefit from more clarity in the writing. For example, in line 124, it would be great if there were some intuition or an example of when P(T=1|X) is skewed and what that means (is it overfitting or misspecification)? Furthermore, it would be great if there were an example of a commonly used model, how it satisfies Assumption A, and why this assumption is not a strong one.
2. While the theoretical result seems important, the supplementary file with the proof is missing, which makes it impossible to review.
3. Similarly, the empirical experiments, while extensive, rely on datasets and details that are supposed to be in the supplementary file, but that file is missing. Therefore, it is unfortunately impossible to review the setting in detail.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Could the authors briefly describe the derivation of the IPW estimator (the unlabeled equation between eq. 4 and 5). I have not seen it in this form before.
2. How do the authors explain the large spike in variance at \sigma_\beta = 10 in Figure 2 for the baseline?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The main limitation is the lack of the Appendix which contains significant details regarding the contributions of this paper. The authors should also discuss potential limitations of their theoretical assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review and valuable suggestions to improve.
> The paper could benefit from more clarity in the writing. For example, in line 124, it would be great if there can be some intuition or example on when P(T=1|X) is skewed and what that means (is it overfitting or misspecified)?
We appreciate your suggestion and will clarify this point in our revision. In our paper, P(T=1|X) is a known oracle distribution, designed by the experiment designer, and the goal of such an experiment is to evaluate the causal predictions of an existing causal model.
The skewness of P(T=1|X) is not due to overfitting or misspecification; rather, it often results from practical constraints such as ethical, legal, or financial considerations. For example, in business scenarios, any experimental interventions could have financial impact in the real world, leading to a trade-off between utility and test power. A more skewed policy might be preferred to minimize the negative impact of random assignment, at the cost of higher effect estimation variance. Our proposed method aims to address this challenge by providing a more reliable evaluation in such scenarios.
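To illustrate the utility/variance trade-off described above, a small hypothetical simulation (not from the paper) compares the spread of the IPW estimate under a balanced versus a skewed assignment policy:

```python
import numpy as np

def ipw_ate(rng, p_treat, n=500):
    # One conditionally randomized trial with a constant, known propensity.
    # Potential outcomes are hypothetical; the true ATE here is 2.0.
    T = rng.binomial(1, p_treat, size=n)
    Y1 = rng.normal(2.0, 1.0, size=n)
    Y0 = rng.normal(0.0, 1.0, size=n)
    Y = np.where(T == 1, Y1, Y0)
    return np.mean(T * Y / p_treat) - np.mean((1 - T) * Y / (1 - p_treat))

rng = np.random.default_rng(0)
balanced = np.std([ipw_ate(rng, 0.5) for _ in range(2000)])   # p = 0.5
skewed = np.std([ipw_ate(rng, 0.05) for _ in range(2000)])    # p = 0.05
# The skewed policy treats far fewer units, so a few large weights 1/p
# dominate and the IPW estimate becomes much noisier.
```

Under the skewed policy, `skewed` is several times larger than `balanced`, which is precisely the high-variance regime the pairs estimator is designed to address.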
> Furthermore, it would be great if there was an example of a commonly used model and how it satisfies Assumption A, and why this assumption is not a strong one.
Thanks for your suggestion; relevant results are provided in the attached pdf in our general reply. We demonstrate that for different models, including linear DML, kernel DML, causal random forests, and variations of doubly robust algorithms (linear, forest, orthogonal), this assumption holds naturally in practice.
> While the theoretical result seems important, the supplementary file with the proof is missing, which makes it impossible to review.
We apologize for the omission of the appendix in our submission. This was an unintentional mistake, and we appreciate your understanding. Please refer to the proof provided in our response to Reviewer r45Q to review our theoretical results.
We hope that our revised responses address your concerns more effectively. Please let us know if there are any further questions or suggestions.
> Could the authors briefly describe the derivation of the IPW estimator (the unlabeled equation between eq. 4 and 5). I have not seen it in this form before.
This is just a vectorized form of the usual IPW definition. Note that
$
\hat{\delta}^{IPW}(\mathcal{T}) = \frac{1}{N} \sum_{i\in \mathcal{B}} \frac{\mathbf{Y}^{T=1}(i)}{p_i} - \frac{1}{N} \sum_{j\in \mathcal{D} \setminus \mathcal{B}} \frac{\mathbf{Y}^{T=0}(j)}{1 - p_j} $
$ = \frac{1}{N} \sum_{i\in \mathcal{B}} \frac{\mathbf{Y}^{T=1}(i)}{p_i} - \frac{1}{N} \sum_{j\in \mathcal{D} \setminus \mathcal{B}} \frac{\mathbf{Y}^{T=0}(j)/p_j}{1/p_j - 1} $
$ = \frac{1}{N}<\mathbf{Y}^{T=1}(\mathcal{B}), \mathbf{w}(\mathcal{B})> - \frac{1}{N}<\mathbf{Y}^{T=0}(\mathcal{D} \setminus \mathcal{B}), \frac{\mathbf{w}(\mathcal{D} \setminus \mathcal{B})}{\mathbf{w}(\mathcal{D} \setminus \mathcal{B}) - 1}> .
$
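As a quick numerical sanity check of this rewriting (which relies on the identity $1/(1-p_j) = w_j/(w_j-1)$ for $w_j = 1/p_j$), a small sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
p = rng.uniform(0.2, 0.8, size=N)    # known propensity scores p_i
b = rng.binomial(1, p).astype(bool)  # b_i = 1 iff unit i is treated (i in B)
Y1 = rng.normal(size=N)              # hypothetical outcomes under treatment
Y0 = rng.normal(size=N)              # hypothetical outcomes under control
w = 1.0 / p                          # inverse-propensity weights w_i = 1/p_i

# Per-unit sum form (first line of the derivation)
sum_form = Y1[b] @ (1.0 / p[b]) / N - Y0[~b] @ (1.0 / (1 - p[~b])) / N

# Vectorized inner-product form (last line of the derivation)
vec_form = Y1[b] @ w[b] / N - Y0[~b] @ (w[~b] / (w[~b] - 1)) / N
# The two forms agree up to floating-point error.
```
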
> How do the authors explain the large spike in variance at \sigma_\beta = 10 in Figure 2 for the baseline?
The spike in variance is expected since, as $\sigma_\beta$ increases, the imbalance of treatment assignment also increases. Sometimes the variance might go down again at $\sigma^2_\beta = 20$, but this almost only happens for the combination of certain baselines (e.g., naive IPW), certain $\sigma_\nu$ values (e.g., very small), and certain datasets (e.g., the csuite_3 dataset). This might be because when $\sigma_\nu$ is very small, the model's bias will also be small, resulting in a causal error of a much smaller scale and hence a smaller variance.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response, I believe it addresses my questions and concerns. I am adjusting my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable input, and we're grateful for the recognition of our efforts in tackling the issues brought up. | Summary: This work aims to evaluate the fidelity of causal models in estimating true treatment effects across different treatments. The golden approach involves comparing treatment effects derived from the target causal model and those obtained from Randomized Controlled Trials (RCT). Practical, time, cost, and ethical constraints often necessitate replacing the RCT estimate with non-RCT methods such as Inverse Probability Weighting (IPW). However, IPW may lead to unbounded variance due to imbalanced propensity scores. To address this, the authors introduce a procedure that applies the IPW estimator to both the model and the actual effects. This aligns the estimated treatment effects, thus offsetting their estimation errors, and results in a lower variance causal error estimate. Under the two stated assumptions, the authors show that the variance of the causal error estimated from their approach, namely pairs estimator, is upper bounded by variance of the the causal error estimated from the naive estimator. In their experiments, the authors compared their approach with the naive estimator, RCT estimator, and existing state-of-the-art variance reduction estimators such as the self-normalized estimator and the LW IPW estimator. They carried out these comparisons on three synthetic datasets, under various non-RCT scenarios, which included different treatment assignment units across sub-populations and varying degrees of propensity score imbalance. The results demonstrated that their approach consistently produced low estimation errors, often on par with those from the RCT estimator. Moreover, they also evaluated the performance of their approach when the existing machine learning-based causal models are used in treatment effect estimation. 
Still, the pairs estimator yielded low estimation errors and results comparable to those from the RCT estimator.
Strengths: 1. The authors propose a simple yet powerful procedure for estimating causal error without modifying IPW, which could otherwise introduce additional complexity such as parameter tuning and bias. Their experiments demonstrate that their approach is capable of faithfully estimating the true causal error under many non-RCT scenarios that are often encountered in practice.
2. The problem statement, formulation, and illustration are clearly stated and well structured, allowing readers to follow easily.
Weaknesses: Despite their approach being supported by the theoretical results and extensive experiments presented, the authors have not provided an appendix. The thorough justification of their theoretical results and experimental details could only be further substantiated with access to this supplementary material.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Regarding Figure 2, can you clarify why the variance of the Linear-modified estimator and Normalized IPW initially increase and subsequently decrease as the imbalance degree increases? Wouldn't these variance reduction methods yield improved results when the degree of imbalance is less pronounced?
2. In assumption A, what is $b_i$?
3. Given that your theoretical findings strongly depend on Assumption A, the compliance of the learned causal model's counterfactual predictions with this assumption becomes a key aspect of your experimental inquiry. Could you please provide the appendix and discuss these results in more detail?
####################################################################################
[08/19/2023] Reviewer r45Q: The experimental validation of Assumption A and the proof of Proposition 1 have been provided, and hence I adjust my review accordingly.
####################################################################################
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To my knowledge, this work does not have potential negative societal impacts. However, the authors did not provide a section with these discussions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your constructive and positive feedback to our paper. We would address your comments below.
> Despite their approach being supported by the theoretical results and extensive...the authors have not provided an appendix.
We apologize for the missing appendix; this was an unintentional mistake, and we will add it in the revision. Regarding the theoretical results, they are appended at the end of this response since the proof is not complicated.
> The compliance of the learned causal model's counterfactual predictions with this assumption becomes a key aspect of your experimental inquiry. Could you please provide the appendix?
Yes, please see the pdf file in our general reply.
> In assumption A, what is $b_i$?
$b_i$ is given in the paragraph below Eq (4), which is the Bernoulli random variable of treatment assignment.
**Proof of Prop. 1**:
First, it is straightforward to verify that under **Assumption A**, $\hat{\Delta}^{Pairs}(\mathcal{M}, \mathcal{T})$ and $\hat{\Delta}(\mathcal{M}, \mathcal{T})$ can be decomposed as
$ \hat{\Delta}^{Pairs}(\mathcal{M}, \mathcal{T}) = \hat{\delta}_\mathcal{M} - \hat{\delta} + g(\mathbf{\nu}, \mathcal{B}) $
$ \hat{\Delta}(\mathcal{M}, \mathcal{T}) = \hat{\delta}_\mathcal{M} - \hat{\delta} - f(\mathcal{B})$
Where:
$f(\mathcal{B}) = \frac{1}{N} <\mathbf{Y}^{T=1} (\mathcal{B}), \mathbf{w}(\mathcal{B}) - 1 > + \frac{1}{N} <\mathbf{Y}^{T=0}(\mathcal{B}), \mathbf{1}> $ \
$- \frac{1}{N}<\mathbf{Y}^{T=0}(\mathcal{D} \setminus \mathcal{B}), \frac{1}{\mathbf{w}(\mathcal{D} \setminus \mathcal{B}) - 1} > - \frac{1}{N}<\mathbf{Y}^{T=1}(\mathcal{D} \setminus \mathcal{B}), \mathbf{1}> $,
And:
$g(\mathbf{\nu}, \mathcal{B}) = \frac{1}{N}<\mathbf{\nu}(\mathcal{B}) * \mathbf{v}(\mathbf{Y}^{T=1} (\mathcal{B})), \mathbf{w}(\mathcal{B}) - 1 > + \frac{1}{N}<\mathbf{\nu}(\mathcal{B}) *\mathbf{v}(\mathbf{Y}^{T=0}(\mathcal{B})), \mathbf{1}> $ \
$ - \frac{1}{N}<\mathbf{\nu}(\mathcal{D} \setminus \mathcal{B}) *\mathbf{v}(\mathbf{Y}^{T=0}(\mathcal{D} \setminus \mathcal{B})), \frac{1}{\mathbf{w}(\mathcal{D} \setminus \mathcal{B}) - 1} > - \frac{1}{N}<\mathbf{\nu}(\mathcal{D} \setminus \mathcal{B})*\mathbf{v}(\mathbf{Y}^{T=1}(\mathcal{D} \setminus \mathcal{B})), \mathbf{1}> $ .
Their estimation error can then be expressed as
$ e(\hat{\Delta}^{Pairs}(\mathcal{M}, \mathcal{T})) = \hat{\delta}_\mathcal{M} - \hat{\delta} + g(\mathbf{\nu}, \mathcal{B}) - \Delta(\mathcal{M}) $
$ e(\hat{\Delta}(\mathcal{M}, \mathcal{T})) = \hat{\delta}_\mathcal{M} - \hat{\delta} - f(\mathcal{B}) - \Delta(\mathcal{M}) $
According to the delta method, both $\sqrt{N} e(\hat{\Delta}^{Pairs})$ and $\sqrt{N} e(\hat{\Delta})$ are asymptotically normal with zero mean under Assumption B.
Note that $g(\mathbf{\nu}, \mathcal{B})$ can be rewritten as
$ g(\mathbf{\nu}, \mathcal{B}) = \frac{1}{N}<\mathbf{\nu} *\mathbf{b}*\mathbf{v}(\mathbf{Y}^{T=1}), \mathbf{w} - 1 > + \frac{1}{N}<\mathbf{\nu} *\mathbf{b} *\mathbf{v}(\mathbf{Y}^{T=0}), \mathbf{1}> $ \
$ - \frac{1}{N}<\mathbf{\nu} *(1-\mathbf{b}) *\mathbf{v}(\mathbf{Y}^{T=0}), \frac{1}{\mathbf{w} - 1} > - \frac{1}{N}<\mathbf{\nu}*(1-\mathbf{b}) *\mathbf{v}(\mathbf{Y}^{T=1}), \mathbf{1}> $
where $b_i$ is the Bernoulli variable with $P(b_i=1) = p_i$, and $b_i = 1$ if $i \in \mathcal{B}$.
Note also that $\nu$ is independent of $(Y^{T}(i), b_i)$ and has zero mean and variance $\sigma^2_\nu$, so we have
$ Cov(Y^{T=t_a}(i), \nu_i b_i V_i(Y^{T=t_b}(i)) ) = E(\nu_i b_i) Cov(Y^{T=t_a}(i), V_i(Y^{T=t_b}(i)) ) = 0$
holds for all $i$ and all treatments $t_a$ and $t_b$. Similarly, we have $ Cov(Y_\mathcal{M}^{T=t_a}(i), \nu_i b_i V_i(Y^{T=t_b}(i)) ) =0$.
Therefore, it is not hard to show that $ Cov( g(\mathbf{\nu}, \mathcal{B}), \hat{\delta}_\mathcal{M}) = 0 $, and $Cov( g(\mathbf{\nu}, \mathcal{B}), \hat{\delta} ) = 0$,
which implies $Cov( g(\mathbf{\nu}, \mathcal{B}), \hat{\delta}_\mathcal{M} - \hat{\delta} ) = 0 $. Thus, we have:
$
Var[ \sqrt{N} e(\hat{\Delta}^{Pairs}(\mathcal{M}, \mathcal{T})) ] = Var[ \sqrt{N} (\hat{\delta}_\mathcal{M} - \hat{\delta})] + Var[\sqrt{N}g(\mathbf{\nu}, \mathcal{B})] $
$ = Var[Y_\mathcal{M}^{T=1} - Y_\mathcal{M}^{T=0} - (Y^{T=1} - Y^{T=0})] + Var[\sqrt{N}g(\mathbf{\nu}, \mathcal{B})] $
$ Var[\sqrt{N}e(\hat{\Delta}(\mathcal{M}, \mathcal{T}))] = Var[\sqrt{N}( \hat{\delta}_\mathcal{M} - \hat{\delta})] + Var[ \sqrt{N} f(\mathcal{B})] $
$ = Var[Y_\mathcal{M}^{T=1} - Y_\mathcal{M}^{T=0} - (Y^{T=1} - Y^{T=0})] + Var[ \sqrt{N} f(\mathcal{B}) ] $ .
Since $\nu$ has zero mean and variance $\sigma^2_\nu$ and is independent of $(Y_i^{T}, b_i)$ as in Assumption A, this expression can be further simplified, according to the rules for the variance of a product of independent variables, as:
$ Var[g(\mathbf{\nu}, \mathcal{B})] = \frac{1}{N^2}<\sigma^2_\nu *\mathbf{p}*E(\mathbf{v}(\mathbf{Y}^{T=1})^2), (\mathbf{w} - 1)^2 > + \frac{1}{N^2}<\sigma^2_\nu *\mathbf{p} * E(\mathbf{v}(\mathbf{Y}^{T=0})^2), \mathbf{1}> $
$ + \frac{1}{N^2}<\sigma^2_\nu *(1-\mathbf{p}) *E(\mathbf{v}(\mathbf{Y}^{T=0})^2), \frac{1}{\mathbf{w} - 1} > + \frac{1}{N^2}<\sigma^2_\nu*(1-\mathbf{p}) * E(\mathbf{v}(\mathbf{Y}^{T=1})^2), \mathbf{1}> $
$ = \frac{\sigma^2_{\nu}}{N^2}[ <\mathbf{p}(\mathbf{w}-1)^2 + (\mathbf{1}-\mathbf{p}), E(\mathbf{v}(\mathbf{Y}^{T=1})^2)> + <\frac{1-\mathbf{p}}{\mathbf{w} - 1} + \mathbf{p}, E(\mathbf{v}(\mathbf{Y}^{T=0})^2)>] $
$ < \frac{1}{N^2}[ <\mathbf{p}(\mathbf{w}-1)^2 + (\mathbf{1}-\mathbf{p}), E((\mathbf{Y}^{T=1})^2)> + <\frac{1-\mathbf{p}}{\mathbf{w} - 1} + \mathbf{p}, E((\mathbf{Y}^{T=0})^2)>] = Var[f(\mathcal{B})] $,
where the final inequality is due to the fact that $ \sigma^2_\nu E [(V_i Y^{T=t}(i))^2] < E [Y^{T=t}(i)^2]$. Therefore, we finally conclude that the variances of the error estimators satisfy
$ Var[ \sqrt{N} e(\hat{\Delta}^{Pairs}(\mathcal{M}, \mathcal{T})) ] < Var[\sqrt{N}e(\hat{\Delta}(\mathcal{M}, \mathcal{T}))]$.
**QED**
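As an illustrative companion to the proof (not the paper's implementation), the variance ordering can be checked in a toy Monte Carlo simulation in which model predictions carry independent multiplicative noise in the spirit of Assumption A; all data-generating numbers below are hypothetical:

```python
import numpy as np

def one_trial(rng, n=500, p_treat=0.1, sigma_nu=0.1):
    # Hypothetical potential outcomes and model predictions with independent
    # multiplicative noise nu, loosely mimicking Assumption A.
    Y1 = rng.normal(3.0, 1.0, size=n)
    Y0 = rng.normal(1.0, 1.0, size=n)
    nu = rng.normal(0.0, sigma_nu, size=n)        # independent of (Y, b)
    Ym1, Ym0 = Y1 * (1 + nu), Y0 * (1 + nu)       # model's counterfactuals
    b = rng.binomial(1, p_treat, size=n)          # skewed oracle assignment
    p = np.full(n, p_treat)

    def ipw(y1, y0):
        return np.mean(b * y1 / p) - np.mean((1 - b) * y0 / (1 - p))

    naive = np.mean(Ym1 - Ym0) - ipw(Y1, Y0)      # model ATE minus IPW truth
    pairs = ipw(Ym1, Ym0) - ipw(Y1, Y0)           # same IPW weights on both sides
    return naive, pairs

rng = np.random.default_rng(0)
res = np.array([one_trial(rng) for _ in range(2000)])
var_naive, var_pairs = res.var(axis=0)
# Sharing the weights cancels most of the assignment noise, so the
# pairs estimate of the causal error fluctuates far less than the naive one.
```

Because `pairs` weights the model predictions and the observed outcomes with the same realized weights, the large-weight noise largely cancels, leaving a variance far below that of `naive`.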
---
Rebuttal Comment 1.1:
Comment: The experiment validation on Assumption A and the proof for Proposition 1 are provided, and hence I adjust my review accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your feedback and we appreciate the recognition of our efforts in addressing the concerns raised. | Rebuttal 1:
Rebuttal: We thank the reviewers for their encouraging reviews and valuable suggestions for improvement. We acknowledge that the reviewers highlighted the **effectiveness and simplicity of our approach (PdWH, r45Q, t8vT, yvJa), novelty or soundness of our results (PdWH, zDTG, t8vT, yvJa), extensiveness of experiments (r45Q, t8vT), and clarity of presentation (PdWH, r45Q, zDTG, t8vT, yvJa)**.
- An essential aspect of our work that we would like to emphasize is its focus on **model evaluation in real-world testing scenarios**. Our novel method for low-variance estimation of causal error facilitates reliable evaluation of causal inference models through **oracle conditional/non-randomized trial**. This is particularly relevant in situations where the oracle design cannot be improved due to real-world constraints or interests. Evaluating causal models in these settings is crucial to ensure their validity and applicability across various domains. Our approach is simple yet powerful, complementing existing tools in the causal inference toolbox, effectively addressing specific challenges faced by practitioners in a wide range of real-world applications.
- Moreover, we have **addressed the main concerns raised by the reviewers**, including:
- as requested, providing **experiments to validate Assumption A**. In the attached pdf file, using a numerical example, we have demonstrated that Assumption A holds in practice for various commonly used models, including linear DML, kernel DML, causal random forests, and variations of doubly robust algorithms (linear, forest, orthogonal). **This indicates that Assumption A is not overly restrictive and can be satisfied by a wide range of causal models that are popular in practical applications**.
- as requested, providing **theoretical proofs** to the main result, which can be found in our response to **r45Q**.
We believe that our responses effectively address the concerns raised by the reviewers and hope that they find our work more compelling and valuable. We are grateful for the reviewers' insightful feedback and look forward to incorporating their suggestions to further improve the quality of our paper.
Pdf: /pdf/832d7ab535de47a5267b5f00464a04ec7da575f4.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper considers the estimation of causal error using the IPW estimator in conditional randomized experiments. Given that the allocation probabilities are readily available in these types of experiments, IPW estimators are often used. The authors propose to use the same IP weights for both the causal prediction and the estimator of the ground truth, and show that this so-called pair estimation approach reduces the variance in the distance between the causal prediction and the ground truth. They provide theoretical justification for their approach in terms of variance reduction and illustrate this on synthetic datasets.
Strengths: The idea of using the same IP weights for both the causal prediction and ground truth estimation is simple yet effective.
Weaknesses: There is no evaluation of real data sets.
Often IPW performs poorly when the propensity score is unknown and the model is misspecified. When the propensity score is known by design, this seems to be less of a concern, and can/should probably be addressed by a better design.
Often the IP weights are used in conjunction with an OR model to construct a DR estimator. In fact, there is really no good reason to use the naive IPW estimator considered by the authors. I suspect the proposed method may also be useful for improving the DR estimator, though not as dramatically. The authors should probably have done this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors refer to the phenomenon that the oracle IPW may not be as efficient as one using a correctly specified model as one of the limitations of the IPW estimator. Why is this a limitation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors' use of the term non-RCT data is very misleading; this is actually often called a conditionally randomized study (e.g., Imbens and Rubin's book), and is one common type of randomized study. A trial does not need to have the same coin for everyone!
The authors mention that their approach is different from modern approaches that try to stabilize the IP weights. It is unclear to me whether, if one already uses these stabilization methods, the authors' method is still useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and suggestions. Below, we respond to each of your comments.
> Often the IP weights are used in conjunction with an OR model to construct a DR estimator. In fact, there is really no good reasons to use the naive IPW estimator considered by the authors.
We acknowledge the effectiveness of DR estimators in causal inference settings. However, our paper focuses on **model evaluation in real-world scenarios** where IP weights are known by design and model misspecification is less of a concern. As such, introducing the DR estimator might not be necessary or optimal. Instead, the high variance of IPW is more of a concern, which cannot be mitigated by DR. We compared our method with existing IPW variance reduction techniques, such as self-normalized/renormalized [1, 2] and linearly modified (LM) IPW estimators [3], demonstrating the effectiveness of our approach.
> Often the IPW is bad when the ps is unknown and the model is misspecified. When the ps is known by design, this seems to be less of a concern, and can/should probably be addressed by a better design.
While it is true that known propensity scores alleviate some concerns, IPW might still suffer from high variance, leading to poor estimates. Addressing this issue through a different design is often infeasible due to practical limitations, such as financial concerns, or ethical constraints. Our method provides a solution that works within these constraints, helping to improve causal model evaluation even when propensity scores are known by design.
> There is no evaluation of real data sets.
We recognize the absence of real dataset evaluations as a weakness of our paper. However, it is very difficult to find real-world datasets that satisfy: 1) the non-RCT treatment assignment has known oracle propensity scores; 2) the interventional outcomes are not generated in a semi-synthetic manner. Otherwise, these “real” datasets would be no different from our experiments. Our method was inspired by a real-world problem we encountered, and we achieved good results in our actual application, motivating us to share the method we discovered. Due to confidentiality reasons, we cannot share these proprietary results and data publicly. That said, we believe our simulation studies sufficiently demonstrate the potential of our method in various scenarios.
> The authors' use of "non-RCT data" is very misleading; this setting is often called a conditionally randomized study (e.g., Imbens and Rubin's book), and is one common type of randomized study. A trial does not need to use the same coin for everyone!
Thank you for the suggestion. We recognize that naming conventions differ across the literature: “non-RCT” is used in [4], while other works use “conditionally randomized studies” (CRS). In the revision, we will use the suggested terms and mention the different conventions.
> It is unclear to me whether the authors' method is still useful if one already uses these stabilization methods.
In our experiments, we have indeed compared the performance of our method against stabilization methods such as renormalization and linear corrections; our results show that our method remains helpful even when these methods are already applied.
**Reference**
[1] Imbens, Guido W. "Nonparametric estimation of average treatment effects under exogeneity: A review." Review of Economics and statistics 86.1 (2004): 4-29.
[2] Lunceford, Jared K., and Marie Davidian. "Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study." Statistics in medicine 23.19 (2004): 2937-2960.
[3] Zhou, Kangjie, and Jinzhu Jia. "Variance Reduction for Causal Inference." arXiv preprint arXiv:2109.05150 (2021).
[4] Rubin, Donald B. "Estimating causal effects of treatments in randomized and nonrandomized studies." Journal of educational Psychology 66.5 (1974): 688.
---
Rebuttal Comment 1.1:
Comment: We would like to thank you again for spending time carefully evaluating our submission and providing valuable feedback. We would appreciate it if you could let us know whether our rebuttal has addressed your concerns, thank you. | null | null | null | null | null | null |
Generalization in the Face of Adaptivity: A Bayesian Perspective | Accept (spotlight) | Summary: This paper explores the problem of adaptive data analysis - when a single dataset is repeatedly used for adaptively chosen queries, overfitting can occur rapidly. To reduce this bias, a popular approach is to add noise to the output of each query. Intuitively, this prevents the analyst from learning too much about the sample and thus prevents them from choosing a query that overfits the sample.
The most common technique used to analyze this adaptive setting is differential privacy. However, the worst-case nature of DP requires scaling the noise to a worst-case dataset, rather than a typical dataset. The paper's contributions are:
1) Prior work has shown that (roughly speaking), to ensure that the query responses are low bias, it suffices to show the responses have "posterior accuracy." This paper shows that posterior accuracy can be thought of as the correlation between the query asked and a Bayes factor.
2) They introduce a new notion of stability (pairwise concentration). Roughly speaking, stability measures (like DP) measure how much an algorithm's output depends on its input and have long been used to analyze mechanisms for adaptive data analysis. Pairwise concentration crucially depends on the dataset and query, to avoid the worst-case requirements of DP. Using this new notion of stability, they bounded the variance of the Bayes factor introduced in the previous point, which in particular bounds the correlation between it and the query asked. As a result, this new notion of stability can be used to prove generalization guarantees.
3) Using pairwise concentration, they show that it suffices to add noise that scales with the standard deviation of each query.
For 3), a similar result was shown by [Feldman and Steinke 18]. However, [FS 18]'s result is weaker than this paper's in two ways. First, it only holds in expectation over the error, whereas this paper's guarantee holds with extremely high probability (the difference is analogous to Markov-style vs Chernoff-style bounds). Second, this paper can handle subgaussian queries with unbounded ranges, whereas it's unclear if [FS 18] can handle unbounded ranges.
Strengths: Within the field of adaptive data analysis, the authors give a refined answer to a simple question: What happens if we scale the noise of our answers to the standard deviation of the query? Prior work of [FS 18] suggested this approach can work, but was only able to prove that the error is bounded in expectation. This paper shows the error is bounded with high probability and also extends to unbounded but subgaussian queries.
I'm also excited by the new notion of stability (pairwise concentration) introduced in this paper and hopeful that it can be applied in other settings (for instance, to analyze other mechanisms).
Along the way, the authors introduce a number of perspectives and new tools that may be more broadly applicable. For example, they introduce a number of "dissimilarity" measures, which provide more expressive ways to measure how dissimilar two distributions are than classical notions of divergence.
Weaknesses: The definition of pairwise concentration and its application (in Theorem 4.5) are technically complex. One way for this work to be impactful is for pairwise concentration to be applied more broadly, and providing a simple interface to it would aid that. For example:
1) In Theorem 4.5, I believe you only ever need to bound the pairwise concentration by datasets that differ in one point (via the function $\psi(x,y;v)$ defined on line 291). If so, I would explicitly state that (ideally early) as this would make it easier for someone else to apply your framework.
2) Also in Theorem 4.5, is there ever a reason (up to a small constant) to not set $\xi = \epsilon^2$? It seems to me the probability bound is monotonically decreasing in both $\xi$ and $\epsilon$, in which case setting both $\xi$ and $\epsilon^2$ to the maximum of the two will only improve the error probability, and affect the deviation bounded by at most a factor of $\sqrt{2}$. If so, my (personal) preference is that it's better to state a simpler version losing that $\sqrt{2}$ and defer more detailed versions to the appendix.
Another minor point: This paper's mechanism assumes it knows the standard deviation of each query. The work of [FS 18] is able to estimate the standard deviation of each query and scale the noise on the fly. Can your analysis handle a similar approach?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I'm a bit confused about the regime where $\alpha < 1$ in Definition 4.2 and Fact B.7. In Fact B.7, I believe the $\leq$ should be converted to a $\geq$ in this setting because the righthand side becomes decreasing with $\phi$. For Definition 4.2, do you similarly want the direction to depend on whether $(\alpha - 1)$ is positive? That would align with the intuition that $\varphi$ is an upper bound on some notion of ``distance" between $s$ and $s'$. The way it's written, we need $\varphi$ to be an upper bound on this distance when $\alpha > 1$ and a lower bound when $0 \leq \alpha < 1$.
Line 731 typo "ration"-> "ratio"
See also the questions from the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes - all assumptions/theorem statements are clear, and the discussion includes some future directions that the present work does not address.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback.
We appreciate the suggestions on how to make the presentation of pairwise concentration more accessible. We felt a tension between simplicity and presenting the more general notion, which may have broader consequences and applications beyond the usage in this paper (for example, group stability becomes a direct result of the definition, not requiring any proof). Given your comments, we will more prominently highlight that this paper uses only the $\varphi(x, y;v)$ notion, where datasets differ in only one point. We'll also take your suggestion to set $\xi = \epsilon^2$ to simplify the statement of Theorem 4.5.
The case of unknown variance is not covered in this work, but we hope that the pairwise concentration stability notion will also offer new insights for this case, in future work.
The sign for the $0 \le \alpha \le 1$ regime is not wrong, though it is somewhat counter-intuitive. It may help to note that $\varphi$ serves two roles in this setting: (1) a bound on the stability loss random variable's mean (the $-\alpha$ part), and (2) the variance proxy of the stability loss's distribution (the $\alpha^{2}$ part). In the CDP representation [1], these two parts are denoted by $\mu$ and $\sigma^{2}$, respectively. Using this notation, we see that the bound is indeed monotonically tighter as $\sigma^{2}$ decreases, as expected.
There was a typo in Fact B.7: it holds only for the range $\alpha \ge 1$ (we only need it for that range).
~
[1] Dwork, Cynthia, and Guy N. Rothblum. "Concentrated differential privacy."
---
Rebuttal Comment 1.1:
Title: Response to rebuttal + additional question
Comment: I thank the authors for this response and am satisfied with the answers.
One followup question: Suppose the analyst gives the mechanism what they think is an upper bound on the query's variance, but they are wrong and the variance is actually larger. How does this affect the answers to future queries? Can I think of one underestimated variance (say by a factor of 2) as contributing the same amount of stability loss as a constant number of queries with the correct variance? In that case, the occasional underestimated variance seems ok. Or does it become difficult to give bounds when a single query has bad variance?
---
Reply to Comment 1.1.1:
Comment: Thanks for the great question! Yes, our results allow us to reason precisely about the impact of an analyst using an incorrect variance bound.
First we note that our results can be extended to the case of varying variances, by adding noise at each iteration that scales with the corresponding variance, as we mentioned in the response to reviewer KJvw.
In the case of a Gaussian mechanism with queries of variance bounded by $\sigma_{i}^{2}$, our results translate to a guarantee that with probability $\ge1-\delta$, for any $i\in\left[k\right]$, the quantity
$$
\left|q_{i}\left(D\right)-r_{i}\right|\le O\left(\eta_{i}\sqrt{\ln\left(1/\delta\right)}+\sigma_{i}\sqrt{\ln\left(1/\delta\right)\sum_{j=1}^{k}\frac{\sigma_{j}^{2}}{n^{2}\eta_{j}^{2}}}\right),
$$
where $\eta_{i}$ is the noise parameter chosen at each iteration. The first term $\left(\eta_{i}\sqrt{\ln\left(1/\delta\right)}\right)$ represents the sample / posterior accuracy bound, and the second $\left(\sigma_{i}\sqrt{\ln\left(1/\delta\right)\sum_{j=1}^{k}\frac{\sigma_{j}^{2}}{n^{2}\eta_{j}^{2}}}\right)$ represents the Bayes stability. Optimizing over $\eta_{i}$ we get that this term is minimized by $\eta_{i}=O\left(\sigma_{i}\sqrt{\frac{\sqrt{k}}{n}}\right)$, which implies
$$
\left|q_{i}\left(D\right)-r_{i}\right|\le O\left(\sigma_{i}\sqrt{\frac{\sqrt{k}}{n}\ln\left(1/\delta\right)}\right).
$$
Now consider a situation where the true variances were bounded by $\sigma_{i}^{2}$, but an analyst mistakenly assumed different bounds
$\tau_{i}^{2}$ which might be higher or lower than $\sigma_{i}^{2}$. In this case the analyst would have used noise parameters $\eta_{i}=O\left(\tau_{i}\sqrt{\frac{\sqrt{k}}{n}}\right)$, so
\begin{align*}
\left|q_{i}\left(D\right)-r_{i}\right|\le & O\left(\tau_{i}\sqrt{\frac{\sqrt{k}}{n}\ln\left(1/\delta\right)}+\sigma_{i}\sqrt{\ln\left(1/\delta\right)\sum_{j=1}^{k}\frac{\sigma_{j}^{2}}{n\sqrt{k}\tau_{j}^{2}}}\right)\\\\
= & O\left(\left(\frac{\tau_{i}}{\sigma_{i}}+\sqrt{\frac{1}{k}\sum_{j=1}^{k}\frac{\sigma_{j}^{2}}{\tau_{j}^{2}}}\right)\sigma_{i}\sqrt{\frac{\sqrt{k}}{n}\ln\left(1/\delta\right)}\right),
\end{align*}
which is the same term as in the case of correctly estimated variances, multiplied by the term $\frac{\tau_{i}}{\sigma_{i}}+\sqrt{\frac{1}{k}\sum_{j=1}^{k}\frac{\sigma_{j}^{2}}{\tau_{j}^{2}}}$.
Analyzing this term allows us to precisely understand the implications of an analyst making a mistake in the assumed variance bound.
If $\tau_{i}\ge\sigma_{i}$ for a single query $q_{i}$ (meaning we added too much noise to the response to that query), the first term increases only for that query, and the second term decreases for all queries.
On the other hand, if $\tau_{i}<\sigma_{i}$ (meaning we did not add enough noise to the response to that query), the first term decreases for that query, but the second term increases for all queries.
The effect on the first term is proportional to the square root of the ratio of wrong to correct variance of that query, while the effect on the second term is proportional to the square root of average over all $k$ queries of the ratio of correct to wrong variances (notice the switch between numerator and denominator). | Summary: This paper makes progress in adaptive data analysis by providing a better analysis of the Gaussian mechanism (for answering statistical/linear/counting queries), and showing that adding Gaussian noise ensures generalization error that scales with the **variance** of the queries.
Previously, the differential-privacy-based analysis achieved a similar conclusion. However, the generalization error might scale with the *range* of the queries. In practice, we might expect the queries to exhibit some "nice" behavior so that the worst-case analysis might be too pessimistic. In this sense, the contribution of this paper is valuable.
This paper is not the first one to study this question. Previously, [Feldman and Steinke, 2017] and [Feldman and Steinke 2018] considered the same question and obtained results that are quantitatively weaker than those presented in the current paper. The algorithm presented in this work is simpler than prior ones (i.e., simply adding Gaussian noise suffices), and the bound is sharper.
Strengths: * The algorithm is a natural and simple one. It's valuable that we can now understand the role of Gaussian noise in ADA better.
* The bound is sharp.
* The technical analysis seems novel (the so-called "covariance between the new query and a Bayes factor-based measure"), which might help advance the field further (though the reviewer did not have a chance to verify all the claims).
Weaknesses: * The current algorithm only works for linear queries, i.e., queries of the form $q(D) = \frac{1}{n} \sum_i q(x_i)$. The DP-based technique can offer bounds even for low-sensitivity queries.
* I believe the "covariance" and "Bayes factor" notions are technically interesting. However, it seems that the current write-up does not fully convey an intuitive interpretation of these quantities. In particular, I feel like the claim "how much information about the data sample was encoded in the responses given to past queries" (quote from the abstract) could be made clearer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and future directions are discussed in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback.
The restriction to linear queries is indeed a limitation of the current work, and we hope to extend the results beyond linear queries in future work, as we briefly mention in the discussion.
Thanks for your comment about wanting more discussion/intuition for the Covariance Lemma and surrounding concepts! We agree and will add it in the final version. Here are some comments in that direction:
The Bayes factor $K(x, v)$ represents the likelihood ratio of seeing a particular view $v$ (a history of queries and responses) when conditioning on a particular datapoint $x$ being in the dataset, relative to when the dataset is entirely drawn at random. This captures the extent to which the presence of $x$ affects our probability of seeing $v$.
The first part of the Covariance Lemma (3.5) shows that the correlation that the view induces between a query $q$ and the Bayes factor $K(\cdot, v)$ is an important quantity; it precisely controls the difference between the expectation of $q$ on the underlying data distribution versus its expectation on the posterior distribution (a distribution reflecting how a prior of the true data distribution would be updated after seeing $v$). When $q$ behaves very differently on this prior and this posterior, $q$ ``encodes'' information from the dataset, information that it received via $v$.
For arbitrary analysts with unlimited prior knowledge, the inequality in the Lemma is tight. This can be observed via the variational representation of the Chi-square divergence, which implies that $\chi^{2}\left(D_{\mathcal{X}}^{v} \Vert D_{\mathcal{X}}\right) = \underset{q \in \mathcal{Q}}{\sup} \left(\frac{\left(q \left(D_{\mathcal{X}}^{v} \right) - q \left(D_{\mathcal{X}} \right) \right)^{2}}{\sigma_{q}^{2}} \right)$. In this case, the only way to avoid overfitting is by bounding the Bayes factor term, so that the prior and posterior remain close in Chi-square divergence.
This Lemma offers the tantalizing suggestion that it may also be possible to obtain improved guarantees for adaptive generalization under stricter assumptions on the analyst. Such an improvement would not contradict known lower bounds, and might help us better understand the existence of algorithms whose generalization properties in practice seem to beat the known theoretical guarantees.
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thanks to the authors for answering my questions! I want to keep my current score. | Summary: Overfitting can occur when a single dataset is used for several statistical tasks. It is known the application of additive noise techniques from differential privacy can prevent overfitting in a variety of statistical settings. The issue is that traditional mechanisms from differential privacy, which are inherently worst-case in nature, are overly pessimistic in adapting to the true variance of queries being issued. This work shows that additive noise mechanisms, such as the Gaussian mechanism, can provide guarantees that are inherently adaptive to the variance of queries issued by a data analyst. This departs from previous works, in which the guarantees are calibrated to worst-case (sensitivity-based) characterizations of the queries. To facilitate analysis, the authors introduce several novel analytic devices based on the realized Bayes factors of the queries.
Strengths: - The paper is very well written, and the related work section is good at framing the problem considered in this paper with respect to the general research landscape.
- This work, unlike most existing work, circumvents many aspects of worst-case analysis involved in differential privacy. Namely, through introducing several sophisticated analytical objects, the authors are able to present a more refined utility guarantee for the commonly used Gaussian mechanism.
Weaknesses: - I think the results in the paper would appear stronger if the guarantees allowed the variance of queries to differ across rounds, as will likely be the case in practical data-analytic settings. From the current wording it appears that the query variance is fixed across rounds, but perhaps I am mistaken? Perhaps in practice many queries are of low variance, but only a few are of high variance.
- Along this line, I think experiments would be very important in demonstrating the impact in preventing overfitting. Even something along lines of a toy regression problem could make the message of the paper stronger.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: NA
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: - The authors fairly discuss limitations as well as directions for future work in the final section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback.
Regarding the first point under ``weaknesses'': Our results can handle queries of differing variances, as long as the mechanism has access to bounds on their variances. In such a case, the mechanism can scale the added noise at each iteration $\eta_{i}$ to the corresponding bound on variance $\sigma_{i}$, and the guarantees will still hold. One way to think of this is as simply rescaling all queries according to their variance bounds, adding a fixed level of noise, and then re-scaling the responses. We will clarify this in the final version of the paper. (The case of entirely unknown variances is not covered in this work, but is an exciting direction for future work.)
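The rescaling argument can be sketched in a few lines (a hypothetical illustration, not the authors' implementation; `answer_queries`, `sigma_bounds`, and `eta0` are names we made up, and we assume the analyst supplies valid per-query standard-deviation bounds):

```python
import numpy as np

def answer_queries(data, queries, sigma_bounds, eta0, rng):
    """Gaussian mechanism whose noise scales with each query's std bound.

    Equivalent to rescaling query i by its bound sigma_i, adding a fixed
    noise level eta0, and scaling the response back.
    """
    responses = []
    for q, sigma in zip(queries, sigma_bounds):
        empirical = np.mean([q(x) for x in data])   # q(D) = (1/n) sum_i q(x_i)
        responses.append(empirical + rng.normal(0.0, eta0 * sigma))
    return responses
```

Here high-variance queries simply receive proportionally more noise, matching the "rescale, add fixed noise, re-scale" view described above.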
We did not include empirical evaluation because it is straightforward to make such an empirical evaluation look arbitrarily good, by choosing a setting where the range is much larger than the standard deviation. One plot we could consider adding would have the range-to-standard deviation ratio as the x-axis, and plot the accuracy on the y-axis for both our analysis and the standard DP analysis. We somewhat prefer to use the space for additional intuition and other details, but are happy to defer to the reviewers on this matter. | Summary: In this paper, the authors generalize and improve the state-of-the-art DP-based adaptivity guarantees, based on a novel definition of pairwise concentration (PC), obtaining sample complexity that scales with the variance rather than the sensitivity/range. The paper demonstrates this generalization for both bounded queries and sub-Gaussian queries.
Strengths: 1. The analysis of the main results for both bounded and unbounded queries using the pairwise concentration (PC) stability is novel and has the potential to be applied to similar questions in the literature.
2. The main results remove the dependence on the range $\Delta$: the dataset size required for the adaptivity guarantee depends only on the variance.
Weaknesses: 1. In Theorem 5.1, the dataset size $n$ can still be significantly larger than the DP baseline, due to the max term and the fact that $\alpha$ is arbitrary. Specifically, when $\alpha$ increases, the dataset size $n$ in Theorem 5.1 does not necessarily improve over the DP result, even though it does not depend on $\Delta$.
2. The paper does not provide any empirical evaluation to compare with the DP result.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the author please provide any explanations and details on the concerns in the weakness part?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not include experiments comparing against the DP baseline. And because the adaptivity parameter is arbitrary, I think the dataset size in the main Theorem 5.1 can scale beyond the DP result when $\alpha$ is large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback.
Regarding the first point under ``weaknesses'': note that the case of $\alpha \ge \frac{\Delta}{2}$ is trivial, since such accuracy can be achieved by $r = \frac{1}{2} \left(\underset{x \in \mathcal{X}}{\min} \left(q(x) \right) + \underset{x \in \mathcal{X}}{\max} \left(q(x) \right) \right)$ (the midpoint of the query's range), which is independent of the data. For any $\alpha < \frac{\Delta}{2}$, we obtain $\max \left( \frac{\Delta}{\alpha}, \frac{\sigma^{2}}{\alpha^{2}} \right) \le \frac{\Delta^{2}}{2 \alpha^{2}}$, and so the new bound improves over the standard DP analysis for any non-trivial $\alpha$.
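For completeness, the claimed inequality can be checked branch by branch (our own elaboration, not from the rebuttal; the second branch additionally uses Popoviciu's inequality, $\sigma^{2} \le \Delta^{2}/4$ for a query with range $\Delta$):

```latex
\frac{\Delta}{\alpha} \le \frac{\Delta^{2}}{2\alpha^{2}}
  \iff \alpha \le \frac{\Delta}{2},
\qquad
\frac{\sigma^{2}}{\alpha^{2}} \le \frac{\Delta^{2}}{2\alpha^{2}}
  \iff \sigma^{2} \le \frac{\Delta^{2}}{2},
\quad\text{and indeed}\quad
\sigma^{2} \le \frac{\Delta^{2}}{4} \le \frac{\Delta^{2}}{2}.
```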
We did not include empirical evaluation because it is straightforward to make such an empirical evaluation look arbitrarily good, by choosing a setting where the range is much larger than the standard deviation. One plot we could consider adding would have the range-to-standard deviation ratio as the x-axis, and plot the accuracy on the y-axis for both our analysis and the standard DP analysis. We somewhat prefer to use the space for additional intuition and other details, but are happy to defer to the reviewers on this matter.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and I have updated my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Object-centric Learning with Cyclic Walks between Parts and Whole | Accept (poster) | Summary: The paper studies the problem of unsupervised object-centric learning. Previous methods usually leverage reconstruction loss as the supervision signal. The authors propose a novel method that leverages a contrastive cyclic walk loss instead, which was originally proposed for learning correspondence between pixels in different frames of a video. The object disentanglement is achieved by using cyclic walk loss to learn the correspondence between object features and pixel features. To demonstrate the effectiveness of the proposed method, the authors conduct extensive experiments on three object-centric learning tasks. The method is also memory-efficient and computation-efficient compared with previous reconstruction-based methods.
Strengths: The proposed contrastive cyclic walk loss is a novel supervision signal for unsupervised object-centric learning, replacing the reconstruction loss used by prior methods. The extensive experiments on three object-centric learning tasks demonstrate the method's effectiveness, and the method is more memory- and computation-efficient than previous reconstruction-based methods.
Weaknesses: 1. Without the reconstruction loss, the pipeline may lose much of the object information that is useful for downstream tasks. The paper does not conduct relevant experiments to explore this point, which is also crucial for evaluating object-centric representation learning methods. The paper would be strengthened by including experiments on predicting object properties from slot features, as in [1][2].
2. The paper lacks a discussion of the case where the number of slots is greater than the number of objects in the scene plus the background. In this case, for the whole-parts-whole cyclic walk, because the loss encourages the transition matrix to be an identity matrix, there will be some pixels that have nonzero transition probability into redundant slots. Ideally, the pixels should only transition to the slot of the corresponding object.
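To make the identity-matrix objective concrete, here is a minimal NumPy sketch of a whole-parts-whole cyclic walk loss (our own reading of the setup; `cyclic_walk_loss`, the temperature `tau`, and the toy inputs are illustrative, not the authors' implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cyclic_walk_loss(slots, feats, tau=0.1):
    """Whole-parts-whole cyclic walk loss sketch.

    slots: (K, d) slot vectors; feats: (N, d) pixel features.
    Transition probabilities slot->pixel and pixel->slot come from a
    temperature softmax over similarities; the K x K round-trip matrix
    is pushed toward the identity by maximizing its diagonal.
    """
    sim = slots @ feats.T                 # (K, N) similarities
    t_sp = softmax(sim / tau, axis=1)     # slots -> pixels
    t_ps = softmax(sim.T / tau, axis=1)   # pixels -> slots
    roundtrip = t_sp @ t_ps               # (K, K) whole-parts-whole
    return -np.mean(np.log(np.diag(roundtrip) + 1e-8))
```

When slots and features align one-to-one, the round-trip matrix is close to the identity and the loss is near zero; with near-uniform transitions the diagonal shrinks and the loss grows, which is exactly where the redundant-slot concern above applies.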
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It is unclear how to compute the mask from $M_{x,\hat{s}}$. Is it obtained by selecting the slot with the maximum transition probability for each pixel?
2. The paper should also contain visualizations of the model on the Movi-C and Movi-E datasets. Currently, only datasets with a fixed number of objects (1 foreground and 1 background) are shown.
3. Section 5.5 analyzes the method's efficiency in terms of model sizes, training speed, and GPU usage. What are the number of training steps for each model considered?
Minor Comments:
- The framing of Whole-Part interactions seems confusing. It's actually just interactions between image features and slot features.
- Reference [20] should be updated to the published version "Improving Object-centric Learning with Query Optimization."
- A figure explaining the whole pipeline would make the paper clearer.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations were discussed. The authors conduct some failure analysis on the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **oB3o.1 - Weaknesses: Without the reconstruction loss, the pipeline may lose most of the information on objects that are useful for downstream tasks. The paper does not conduct relevant experiments to explore this point, which is also crucial for evaluating object-centric representation learning methods. The paper would be strengthened by including experiments on predicting object properties from slot features.**
As the reviewer suggested, we have now added a transformer-based decoder and the reconstruction loss to our method. In this experiment, we trained the model on Pascal VOC 2012 to investigate the effectiveness of the reconstruction loss. The model with the decoder and the reconstruction loss achieved slightly better performance than our default model (from 29.6% to 29.9% in ARI-FG). This indicates that the reconstruction loss provides auxiliary information for object-centric learning. In future work, we will explore the possibilities of predicting object properties from slot features.
We will include this result in Section 5.4 and a discussion about future work in Section 6.
**oB3o.2 - Weaknesses: The paper lacks a discussion of the case where the number of slots is greater than the number of objects in the scene plus the background. In this case, for the whole-parts-whole cyclic walk, because the loss encourages the transition matrix to be an identity matrix, there will be some pixels that have nonzero transition probability into redundant slots. Ideally, the pixels should only transition to the slot of the corresponding object.**
The reviewer correctly points out this ill-posed case when the number of slots is greater than the number of objects in the scene. Our method, as well as many other slot-based methods, is not perfect in such cases. We provided a discussion about this in lines 259-260 in the main text and visualization results of such failure cases in Supplementary Section S3, lines 37-44, and Supplementary Figure S4.
**oB3o.3 - Questions: It is unclear how to compute the mask from M_{x, \hat s}. Is it obtained by selecting the slot with the maximum transition probability for each pixel?**
Yes, the reviewer is right: the mask is obtained by selecting, for each pixel, the slot with the maximum transition probability. $M_{x,\hat{s}}$ is the similarity matrix between features and slots, and it is used as the transition probability matrix from features to the slots.
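A minimal sketch of this mask computation (NumPy, with a random stand-in for the feature-to-slot transition probability matrix; the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random(size=(196, 4))          # stand-in for M_{x, s_hat}: patch-to-slot probs
M = M / M.sum(axis=1, keepdims=True)   # each row is a probability distribution

# per-pixel mask: assign each patch the slot with maximum transition probability
mask = M.argmax(axis=1)                # (196,) slot index per patch
```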
**oB3o.4 - Questions: The paper should also contain visualizations of the model on the Movi-C and Movi-E datasets. Currently, only datasets with a fixed number of objects (1 foreground and 1 background) are shown.**
In Fig. 3 in the main text, we provide visualization results on foreground extraction (1 foreground + 1 background) and object discovery (more than 3 object classes are discovered on the same image, such as ship, tree, water, and sky in Row 2 of Fig. 3b). As the reviewer suggested, we now provide the visualization results of the Movi-C and Movi-E datasets in Fig. R2 in the rebuttal PDF. We found that our method predicts reasonable semantic segmentation masks in complex scenes containing multiple objects (e.g. small forks are successfully segmented in Column 1, Row 3, even though the scene is cluttered). A similar analysis of the visualization results from Fig. 3 applies here for Movi-C and Movi-E (lines 255-260 in the main text).
We will include the visualization results of Movi-C and Movi-E in the final version.
**oB3o.5 - Questions: Section 5.5 analyzes the method's efficiency in terms of model sizes, training speed, and GPU usage. What is the number of training steps for each model considered?**
We trained all models (Slot-Attention, SLATE, BO-QSA, DINOSAUR, and our Cyclic walks) with 250k training steps and selected their best models by keeping track of their best accuracies on the validation sets. Among all the models, during training, we found that our method converges the fastest. On Pascal VOC 2012 and COCO 2017, our model can achieve the best performance within 10k training steps, while other models need at least 100k training steps.
We will provide this discussion in Section 5 in the final version.
**oB3o.6 - Minor Comments: The framing of Whole-Part interactions seems confusing. It's actually just interactions between image features and slot features.**
Yes, the reviewer correctly points out that whole and part refer to image features and slot features respectively. We will update these terms in the final version.
**oB3o.7 - Minor Comments: Reference [20] should be updated to the published version "Improving Object-centric Learning with Query Optimization."**
Sure. We will update the reference in the final version.
**oB3o.8 - Minor Comments: A figure explaining the whole pipeline would make the paper clearer.**
OK, we will add a figure describing the training process of cyclic walks and the process of applying it in various downstream tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the response. My concerns have been addressed and I will keep my positive rating.
---
Reply to Comment 1.1.1:
Title: Thanks for recommendations
Comment: We thank the reviewer for the feedback and suggestions. We will incorporate all the changes promised in the rebuttal into the final version. | Summary: This paper introduces Cyclic Walks, an approach to obtain object-centric representations from images. The idea is to adapt contrastive random walks used for learning spatiotemporal correspondences for learning slots without a slot decoder. The other key ingredient of this approach is to use a frozen unsupervised pretrained feature extractor. The way that the approach works is that a loss is constructed based on both "whole-parts-whole" and "parts-whole-parts" cyclic walks between image patch features and slots, which trains the slot attention module used for slot extraction. The results on real world images indicate the method is effective and efficient.
Strengths: #### Originality
- The main idea, to adapt contrastive random walks for training slot attention on pretrained image features, is simple and shown to be effective.
- The idea also seems like a highly novel approach for object-centric representation learning.
#### Quality
- The experiments are rigorous and cover an appropriately wide range of datasets, metrics, and baselines.
#### Clarity
- The paper is well-written and the ideas are conveyed clearly.
- Figure 1 and Figure 2 do a good job conveying the key ideas.
#### Significance
- I believe this work is a significant contribution to the unsupervised object-centric representation learning literature.
Weaknesses: I have a few concerns with this work, but I believe the strengths outweigh the weaknesses.
- Missing a baseline: I would like to see Cyclic Walks compared against KMeans clustering on the frozen DINO feature tokens with $K$ equal to the number of slots used in the Cyclic Walks approach. This simple baseline would inform the extent to which the learned Slot Attention module contributes to the overall performance.
- The method introduces multiple hyperparameters, a similarity threshold and the temperature of the random walk, which appear necessary (and potentially difficult) to tune for each dataset.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Would the authors agree that using a frozen pretrained feature extractor is potentially a limitation, in that it can prevent the performance from improving beyond a point?
#### Minor suggestions for improvement
- L41-43: "First, inspired by the set encoding theory depicting that natural images can always be represented as a span of a finite set of task-dependent semantically meaningful bases". Is there a citation for this? Also, can this be re-worded into something more intuitive/less math-y? It feels out of place in the rest of this paragraph.
- L173: CE is not defined (Cross entropy?)
- Adding a summary of the results of the ablation studies after L277 (first paragraph of Sec. 5.4) would improve readability here.
---
Update after rebuttal: Thanks to the authors for engaging with my review. I maintain my initially positive outlook on this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: A few failure cases are provided, but I would suggest adding a paragraph on limitations of the method to Sec. 6---space permitting. For example, discussing any hyperparameter sensitivity would be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **VTwm.1. Weaknesses: Missing a baseline: I would like to see Cyclic Walks compared against KMeans clustering on the frozen DINO feature tokens with K equal to the number of slots used in the Cyclic Walks approach. This simple baseline would inform the extent to which the learned Slot Attention module contributes to the overall performance.**
As suggested by the reviewer, we have now compared the performance of our method with that of direct k-means clustering (29.8% vs 16.8% in ARI-FG on Pascal VOC 2012 and 39.3% vs 25.5% in ARI-FG on COCO 2017). In the experiments, our method beats k-means by a large margin, suggesting that our method is capable of capturing better object-centric representations.
We will add these new results of k-means in Section 5.
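For readers who wish to reproduce this baseline, here is a self-contained sketch of k-means (plain Lloyd's algorithm with farthest-first initialization, run on stand-in features; in practice one would cluster the frozen DINO feature tokens with K set to the number of slots):

```python
import numpy as np

def kmeans(X, k, iters=20):
    # farthest-first initialization followed by plain Lloyd's iterations
    centroids = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)  # (N, k) sq. distances
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated blobs as stand-ins for patch features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
labels = kmeans(X, k=2)
```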
**VTwm.2. Weaknesses: The method introduces multiple hyperparameters, a similarity threshold, and the temperature of the random walk, which appear necessary (and potentially difficult) to tune for each dataset.**
The hyperparameters in our method are NOT required to be tuned for individual datasets. On the contrary, in our paper, we use the SAME similarity threshold of 0.7 and the SAME temperature of 0.7 for the random walks on ALL the datasets.
**VTwm.3. Questions: Would the authors agree that using a frozen pre-trained feature extractor is potentially a limitation because it can prevent the performance from improving beyond a point?**
In lines 146-148, we highlighted that we follow the SAME practice as previous works [16][30] (see reference list in the main text) by freezing feature extractors. It is for a fair comparison with the baselines in the literature.
We argue that freezing feature extractors is NOT a limitation of our method because of the two reasons below. First, our method can still be easily adapted to any new datasets with domain shifts. To achieve this, we introduce the 2-stage training pipeline. At stage 1, the feature extractor of a model can be trained on the new dataset using self-supervised learning. At stage 2, the feature extractor is frozen and the slot-based attention module can be fine-tuned using our method on the new dataset. Note that both stages only require self-supervised learning without any human supervision. This enables our method to be easily applied in new datasets and new domains during 2-stage training.
Second, freezing feature extractors before learning object-centric representations is loosely connected with neuroscience findings. These findings suggest that neural plasticity increases from low-level visual processes (frozen feature extractor) to higher levels of cortical processing responsible for handling symbolic representations (slot-based attention module) [Haak et al., 2019, Nature Communications].
As suggested by the reviewer, we did the following three experiments to investigate how fine-tuning the feature extractor contributes to the overall performance of object-centric learning in ARI-FG. First, we fine-tune both the feature extractor and the slot-based attention (1-Fine-tune). The performance in ARI-FG is 12.2%. Second, we assign a small learning rate of 0.0001 for the feature extractor and a large learning rate of 0.0004 for the slot-based attention (2-Learning-rate). We observed a great performance improvement from 12.2% in 1-Fine-tune to 22.4% in 2-Learning-rate. Third, we apply EMA (Exponential Moving Averaging) on the entire model (3-EMA). The performance of 21.3% in 3-EMA is still inferior to the 2-Learning-rate.
From all these results, aligning with the neuroscience findings above, we found that the slow update of the feature extractor stabilizes the learning of high-level object-centric representations. However, the performance is still inferior to our default model in the paper (29.6% in ARI-FG). This emphasizes the importance of freezing feature extractors for our method.
We will include these results and discussions of our method in the final version.
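The EMA update used in the 3-EMA setting follows the standard exponential-moving-average rule; a minimal pure-Python sketch (the decay value here is illustrative, not the one used in our experiments):

```python
# EMA keeps a slowly moving "shadow" copy of each parameter:
#   shadow <- decay * shadow + (1 - decay) * param
def ema_update(shadow, params, decay=0.999):
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

shadow = [0.0]                      # shadow copy starts at zero
for step in range(3):
    params = [1.0]                  # pretend the trained weight sits at 1.0
    shadow = ema_update(shadow, params)
# after n steps, shadow = 1 - decay**n, i.e. it drifts slowly toward the weight
```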
**VTwm.4. Minor Suggestions: L41-43: "First, inspired by the set encoding theory depicting that natural images can always be represented as a span of a finite set of task-dependent semantically meaningful bases". Is there a citation for this? Also, can this be re-worded into something more intuitive/less math-y? It feels out of place in the rest of this paragraph.**
We thank the reviewer for pointing out this confusion on “set encoding theory”. This term is indeed NOT well-known. We will remove this sentence and rephrase the idea based on scene compositionality in the final version. In other words, just like DETR (https://arxiv.org/abs/2005.12872) and Mask2Former (https://arxiv.org/abs/2112.01527), these methods decompose the scene into multiple object representations or semantic masks. Similar to slot attention, these methods rely on a set of learnable “object queries”. Here, our method is to learn these “object queries” with cyclic walks in a self-supervised manner.
**VTwm.5. Minor Suggestions: L173: CE is not defined (Cross entropy?)**
Yes, the reviewer is correct. CE here means Cross Entropy. We will define this in the text.
**VTwm.6. Minor Suggestions: Adding a summary of the results of the ablation studies after L277 (first paragraph of Sec. 5.4) would improve readability here.**
Thanks. We will add a brief summary of ablation studies, as suggested.
**VTwm.7. Limitations: A few failure cases are provided, but I would suggest adding a paragraph on the limitations of the method to Sec. 6---space permitting. For example, discussing any hyperparameter sensitivity would be useful.**
Yes, absolutely. We will emphasize that the hyperparameters in our method are NOT required to be tuned for individual datasets in the final version (see responses for VTwm.2). Moreover, we will expand our discussions on freezing feature extractors in Sec 6 (see responses for VTwm.3).
---
Rebuttal Comment 1.1:
Title: Thanks for the response and new experiments, some follow up questions
Comment: Thanks for the responses and new experiments!
I do have some follow-up questions, however.
### Recommendation on hyperparameters (VTwm.6)?
Is the recommendation of the authors then for anyone to first try 0.7 for the similarity threshold and temperature to train this method on their own data? If so, will this be mentioned in the summary of ablations (VTwm.6)?
### On the importance of freezing the feature extractor (VTwm.3)
The presented results clearly indicate the importance of the two stage training approach for this method. However, the authors write in the discussion (L337-340) "our method has been developed on a frozen unsupervised feature extractor. In the future, hierarchical contrastive walks can be explored in any feed-forward architectures, where the models can **simultaneously** learn both pixel-level and object-centric representations incrementally over multiple layers" (emphasis mine).
Can the authors clarify for me why they suggest exploring training an architecture with cyclic walks without freezing the feature extractor here? This seems contradictory to the presented results that show the importance of freezing the feature extractor.
Are there any other key future directions?
---
Reply to Comment 1.1.1:
Title: Response to follow-up questions
Comment: **VTwm.F1. Questions: Is the recommendation of the authors for anyone to first try 0.7 for the similarity threshold and temperature to train this method on their own data? If so, will this be mentioned in the summary of ablations (VTwm.6)?**
First, we apologize for a typo in the response of VTwm.2. Rather than 0.7, we used the temperature of **0.1**, as also indicated in line 6 in our Supp. Material.
In the final version, we will clarify in the summary of ablations (line 277) that we encourage readers to use a similarity threshold of 0.7 and a temperature of 0.1 for their own data, just like what we did over all the datasets from all the tasks.
We will also add the following statement in line 277: “From the ablation study below, we have found that these two hyperparameter values are relatively robust to various datasets. See the ablation studies below where we provide the variations on hyperparameter choices to examine their sensitivity in the performance.”
**VTwm.F2. Questions: The presented results clearly indicate the importance of the two-stage training approach for this method. However, the authors write in the discussion (L337-340) "our method has been developed on a frozen unsupervised feature extractor. In the future, hierarchical contrastive walks can be explored in any feed-forward architectures, where the models can simultaneously learn both pixel-level and object-centric representations incrementally over multiple layers" (emphasis mine).
Can the authors clarify for me why they suggest exploring training an architecture with cyclic walks without freezing the feature extractor here? This seems contradictory to the presented results that show the importance of freezing the feature extractor.
Are there any other key future directions?**
We respectfully disagree with the reviewer that what we commented about two-stage training in the rebuttal is contradictory to the future work mentioned in the main text.
For example, a model which can detect objects but cannot segment images does not imply that the model has LIMITATION/DRAWBACK of not being able to perform image segmentation. Similarly, our current model which can learn good object-centric representations but cannot learn pixel-level representations does not imply that our model has limitations in object-centric representation learning. Instead, we found that good pixel-level representations facilitate object-centric representation learning in our method (see our original response in VTwm.3).
Of course, in the real world, we want a general AI model which can ideally do all the tasks all at once. This includes a model which can learn both pixel-level and object-centric representations at the same time. Together with the entire research community, we are excited about working on this problem in the future, i.e. designing an AI model which can simultaneously learn both pixel-level and object-centric representations.
In addition to the future work mentioned in the main text, we would also like to look into object-centric representation learning from videos, as videos carry temporal correlations and coherences, important for learning objects from motions. Moreover, our current work has been focusing on unsupervised foreground extraction, object discovery, and semantic segmentation tasks. In the future, we plan to explore the possibility of adapting our method and benchmarking all unsupervised object-centric learning models on unsupervised instance segmentation tasks and unsupervised visual question answering.
In the final version, we will clarify the future work on learning pixel-level and object-centric representations simultaneously and provide the additional future directions discussed here in Section 6. | Summary: This paper works on unsupervised object discovery, that is, learning to decompose the compositional components of a scene. Based on frozen DINO features, it proposes to introduce random walks between part-level features (dense output of DINO) and object-level features (output of SlotAttention). Each object-level node is encouraged to go back to the same node after random walking to and back from local features, and vice-versa for each local node. Extensive experiments validate the effectiveness of the proposed pipeline.
Strengths: *Originality*: Though the SlotAttention module is not novel and cycle consistency has been adopted in some previous works, these works are well acknowledged in the text, and the contribution of this paper, constructing cycle consistency between part-level and object-level features as a supervisory signal for object discovery with SlotAttention, is absolutely original. The idea is clean and shown to be effective.
*Quality*: This work is relatively professional and of good quality. The proposed method is clean and reasonable, and has shown effectiveness in multiple benchmarks. Besides, error bars are provided and make the results more reliable. I also would like to note that many previous works did not mention STEGO in their tables, and I am happy to see it clearly compared in this work.
*Clarity*: The delivery is fairly smooth and clear, and I find no major issues in understanding this paper.
*Significance*: I am happy to see another reconstruction-free framework being validated on real-world object discovery, which is rare in this community. I also noticed in Tab. 3 that this work is robust to pre-training feature extractors, which deviates from the recent trend of "DINO is all you need", and is good for the community.
*Reproducibility*: Code is not provided but the text is relatively clear.
Weaknesses: I kindly suggest a direct comparison with the following related works:
- [MaskDistill] W. Van Gansbeke et al., Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation, arXiv 2022.
- [Odin] O. J. Hénaff et al., Object Discovery and Representation Networks, ECCV 2022.
- [SlotCon] X. Wen et al., Self-Supervised Visual Representation Learning with Semantic Grouping, NeurIPS 2022.
- [COMUS] A. Zadaianchuk et al., Unsupervised Semantic Segmentation with Self-Supervised Object-Centric Representations, ICLR 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Regarding the efficiency issue, what is the inference speed, and how is it compared with other frameworks?
- In the abstract it is claimed to be capable of CNN features, but the results are all with ViTs. Could the effectiveness with CNN features be validated?
Minor:
- L76: missing space
- L108: slot attention does not directly adopt cross-attention, the dimension for normalizing the attention weights is different and thus to introduce competition between slots
- Fig.4: please polish this figure for clarity
- L321: twice fewer is not clear
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitations are not explicitly discussed, but failure cases are provided in the appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **8Kba.1 - Weaknesses: Kindly suggest a direct comparison with the following related works.**
We now include the results of all the methods suggested by the reviewer on Pascal VOC 2012 and COCO-stuff27 in the table below. We are unable to compare with Odin, since Odin is a self-supervised instance segmentation method. Our method achieves competitive performance compared to these methods, suggesting that it is capable of learning reasonable object-centric representations.
We will cite and discuss these works provided by the reviewer in Section 2 and include these new results of the baselines in Section 5.
| mIoU | ours | MaskDistill | Odin | SlotCon | COMUS |
|:------------------------:|:---------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| Pascal VOC 2012 | 43.3 | 42.0 | N.A. | N.A. | 50.0 |
| COCO-stuff27 | 22.5 | N.A. | N.A. | 18.3 | N.A. |
**8Kba.2 - Questions: Regarding the efficiency issue, what is the inference speed, and how is it compared with other frameworks?**
For fair comparisons, we benchmark the inference speed of all the methods using the same hardware and data configurations: (1) one single RTX-A5000 GPU; and (2) input image size of 224 × 224, batch size of 8, and 4 slots. We provide the inference speed in the table below. Among all the methods, our cyclic walk is the most efficient with the fastest inference speed.
In the final version, we will include these results in Section 5.
| infer. speed (img/s) | ours | SA | SLATE | DINOSAUR | BOQ-SA |
|:-----------------------------------------:|:---------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| Object Discovery | 285 | 126 | 32 | 130 | 32 |
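The images/second figures above can be measured with a simple wall-clock harness; a generic sketch using only the standard library (`model` and the batches are placeholders for illustration, not our actual benchmark script):

```python
import time

def throughput(model, batches, batch_size=8):
    # wall-clock images/second over a fixed set of batches
    start = time.perf_counter()
    for batch in batches:
        model(batch)
    elapsed = time.perf_counter() - start
    return (len(batches) * batch_size) / elapsed

# toy stand-in: the "model" just sums a list of numbers per batch
fake_batches = [[1.0] * 10000 for _ in range(20)]
imgs_per_sec = throughput(sum, fake_batches, batch_size=8)
```

In a real comparison, warm-up iterations and GPU synchronization before reading the clock would also be needed for fair numbers.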
**8Kba.3 - Questions: In the abstract, it is claimed to be capable of CNN features, but the results are all with ViTs. Could the effectiveness of CNN features be validated?**
We thank the reviewer for pointing this out. Here, we replace DINO with ResNet50 to extract features as the “whole” and perform our cyclic walks between these features and the slots. Our method with ResNet achieves performance similar to our method with DINO in ARI-FG on Pascal VOC 2012 (28.3% vs 29.7%). We will include this result in our final version.
**8Kba.4 - Minor**
Thank you! We will fix the text, rephrase the sentences, and polish the figure.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response and I have no further concerns. I feel okay that it performs worse than COMUS since the method is clean and is an orthogonal effort. I do agree that whether DINO features are used should be clearly marked in comparisons since it is a key factor in this field. I confirm my positive recommendation.
---
Reply to Comment 1.1.1:
Title: Thanks for recommendations
Comment: We thank the reviewer for the feedback and suggestions. We will incorporate all the changes promised in the rebuttal into the final version. | Summary: The paper presents a method for unsupervised object discovery, while using cyclic walks between part and whole features as a supervision signal. While previous methods mainly use RGB or feature reconstruction as supervisory signal, this method uses a form of contrastive learning. Their lack of decoder architecture due to contrastive learning, reduces the overall computational overhead from the previous reconstruction based models that explicitly required a decoder. They showcase their result on seven image datasets and on three unsupervised tasks. They compare against many baselines and indicate better object discovery performance than most of the baselines.
Strengths: i) Interesting and novel idea of using random cyclic walks as supervisory signal.
ii) Method is well written and easy to understand.
iii) Good performance on real world datasets for object discovery.
Weaknesses: i) The results on classical object-centric datasets such as CLEVR or CLEVRTex are missing.
ii) Missing results for training directly on RGB pixels; this would be interesting to see even if the results are not good, as it can help build better intuitions about where the method works/fails.
iii) Missing baselines such as CutLER (https://arxiv.org/abs/2301.11320), SlotCon (https://github.com/CVMI-Lab/SlotCon), or simply doing k-means clustering on the DINO features.
iv) Does the model produce instance segmentation or semantic segmentation? I didn't see any visuals indicating that the model can learn instance segmentation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: i) How does the method compare against reconstruction methods on original CLEVR or ClevrTex dataset, any one is fine?
ii) How does the method do when training on top of RGB pixels instead of pre-trained features.
iii) How does the method compare against SlotCon or Cutler or simple kmeans on features?
iv) Visuals/results indicating this is capable of learning instance segments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **38HN.1 - Questions: How does the method compare against reconstruction methods on the original CLEVR or ClevrTex dataset (any one is fine)?**
As suggested by the reviewer, we conducted experiments on the ClevrTex dataset and evaluated the performance of all baseline methods and our method. In terms of ARI-FG, our method (67.4%) outperforms Slot-Attention (59.2%), SLATE (61.5%), and DINOSAUR (64.9%). Consistent with our results in the paper, the experimental results suggest that our method is superior at learning object-centric representations from complex scenes without reconstructions.
We will add these results and discussions in the final version.
**38HN.2 - Questions: How does the method do when training on top of RGB pixels instead of pre-trained features?**
In lines 146-148, we highlighted that we follow the SAME practice as previous works [16][30] (see reference list in the main text) by freezing feature extractors. It is for a fair comparison with the baselines in the literature.
We argue that freezing feature extractors is NOT a limitation of our method because of the two reasons below. First, our method can still be easily adapted to any new datasets with domain shifts. To achieve this, we introduce the 2-stage training pipeline. At stage 1, the feature extractor of a model can be trained on the new dataset using self-supervised learning. At stage 2, the feature extractor is frozen and the slot-based attention module can be fine-tuned using our method on the new dataset. Note that both stages only require self-supervised learning without any human supervision. This enables our method to be easily applied in new datasets and new domains during 2-stage training.
Second, freezing feature extractors before learning object-centric representations is loosely connected with neuroscience findings. These findings suggest that neural plasticity increases from low-level visual processes (frozen feature extractor) to higher levels of cortical processing responsible for handling symbolic representations (slot-based attention module) [Haak et al., 2019, Nature Communications].
As suggested by the reviewer, we did the following three experiments to investigate how fine-tuning the feature extractor contributes to the overall performance of object-centric learning in ARI-FG. First, we fine-tune both the feature extractor and the slot-based attention (1-Fine-tune). The performance in ARI-FG is 12.2%. Second, we assign a small learning rate of 0.0001 for the feature extractor and a large learning rate of 0.0004 for the slot-based attention (2-Learning-rate). We observed a great performance improvement from 12.2% in 1-Fine-tune to 22.4% in 2-Learning-rate. Third, we apply EMA (Exponential Moving Averaging) on the entire model (3-EMA). The performance of 21.3% in 3-EMA is still inferior to the 2-Learning-rate.
From all these results, aligning with the neuroscience findings above, we found that the slow update of the feature extractor stabilizes the learning of high-level object-centric representations. However, the performance is still inferior to our default model in the paper (29.6% in ARI-FG). This emphasizes the importance of freezing feature extractors for our method.
We will include these results and discussions of our method in the final version.
**38HN.3 - Questions: How does the method compare against SlotCon, CutLER, or simple k-means on features?**
We now include results for all the methods suggested by the reviewer. Since CutLER is designed for instance segmentation, we excluded it from the method comparisons in the unsupervised semantic segmentation task. We compared our method with SlotCon (22.5% vs 18.3% in mIoU on COCO-stuff27). In addition, we compared the performance of our method with that of direct K-means clustering (29.8% vs 16.8% in ARI-FG on Pascal VOC 2012 and 39.3% vs 25.5% in ARI-FG on COCO 2017). In these experiments, our method beats SlotCon and K-means by a large margin, suggesting that our method learns to capture better object-centric representations.
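For concreteness, the direct K-means baseline amounts to clustering frozen patch features and reading off segment labels; a minimal numpy sketch with synthetic features standing in for DINO patch embeddings (not our actual pipeline):

```python
import numpy as np

def kmeans_labels(feats, k, iters=20):
    """Minimal Lloyd's k-means over patch features (n, d); returns labels (n,)."""
    # Deterministic init: spread the initial centers across the feature array.
    centers = feats[np.linspace(0, len(feats) - 1, k).astype(int)].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels

# Two well-separated blobs stand in for patch features of two semantic regions.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (20, 8)),
                        rng.normal(5.0, 0.1, (20, 8))])
labels = kmeans_labels(feats, k=2)
```

On real data, the cluster labels would be mapped back to patch locations to form segmentation masks before computing ARI-FG.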
We will cite and discuss these works provided by the reviewer in Section 2, and add these new results of the baselines in Section 5.
**38HN.4 - Questions: Visuals/results indicating this is capable of learning instance segments?**
Traditional object-centric representation learning methods, such as DINOSAUR and SlotCon (see the reference list in the main paper), have mainly focused on unsupervised semantic segmentation. Following this line of research, our method was originally designed for semantic segmentation as well.
It is interesting that the reviewer highlighted the new possibility of applying these methods to instance segmentation tasks. Inspired by CutLER, we extract the corresponding slot representations for all image patches and apply Agglomerative Clustering [scikit-learn library] to generate instance segmentation masks. Specifically, Agglomerative Clustering performs unsupervised clustering based on a distance matrix; the distance matrix takes into account both the object categories predicted by our default method and the position of each pixel. We provide visualization results of predicted instance segmentation in Fig R1 on the rebuttal PDF page.
From the visualization results, we observed that our method produces visually reasonable instance masks after these post-processing steps. We also noticed several challenging cases where our method fails to separate object instances located close to one another (e.g., the five sheep in Row 4). In the future, we will rigorously and quantitatively benchmark unsupervised instance segmentation models. We will also improve the design of the slot-attention modules so that they learn to predict instance segmentation in an end-to-end fashion.
In the final version, we will include discussions on future works of unsupervised instance segmentation in Section 6.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional results.
Q) "Why does the method perform worse than COMUS?":
The performance of the method compared to COMUS seems significantly worse in the response to reviewer 8Kba, that is, 50.0 vs 43.3. Does this hold true across all benchmarks? Can the authors perform a dense evaluation of COMUS across all benchmarks? If not, can the authors justify why one would use their method over COMUS?
Q) Lack of comparisons without using pre-trained features:
To the best of my knowledge, among all the methods the paper compares against, only DINOSAUR uses pre-trained DINO features; this makes comparisons against most baselines a bit unfair. I would recommend highlighting this in the table. Additionally, I would expect a denser comparison with DINOSAUR, specifically on the datasets they report in their paper. I was unable to find results for Birds, Dogs, etc. in their paper. I think comparing with DINOSAUR while using the same numbers from their paper would make the current results a lot stronger.
---
Reply to Comment 1.1.1:
Title: Thanks for comments. Response to follow-up questions
Comment: **38HN.F1 - Questions: Why does the method perform worse than COMUS?**
As the reviewer correctly points out, our method underperforms COMUS in the mIoU metric on Pascal VOC 2012 (43.3% versus 50.0%).
However, as also agreed by Reviewer 8Kba, we argue that, in comparison to our method, there are two additional components in COMUS possibly contributing to the performance differences and making the comparison unfair. First, COMUS employs two pre-trained saliency detection architectures, DeepUSPS (Nguyen et al., 2019, NeurIPS, https://arxiv.org/abs/1909.13055) and BasNet (Qin et al., 2019, CVPR, https://arxiv.org/abs/2101.04704), in addition to DINO. The saliency detection model requires an additional MSRA image dataset (Cheng et al., 2015, TPAMI, https://arxiv.org/abs/2101.04704) during pre-training. Thus, compared to our cyclic walks, COMUS indirectly relies on extra useful information. Second, COMUS uses Deeplab-v3 as its backbone for predicting semantic segmentation masks. For better segmentation performance, Deeplab-v3 extracts large-resolution feature maps from images and applies Atrous Spatial Pyramid Pooling to aggregate these feature maps over multiple scales. In contrast, our method extracts low-resolution feature maps with DINO and is not specifically designed for unsupervised semantic segmentation. Yet, it is remarkable that our method still achieves performance comparable to COMUS in unsupervised semantic segmentation tasks.
In addition to its comparable performance with COMUS in unsupervised semantic object segmentation, our method is capable of parsing the entire scene in an image into semantically meaningful regions, which COMUS fails to do. These include distinct backgrounds, such as sky, lake, trees, and grass. For example, in Fig 3(b) in the main body, our method successfully segments backgrounds such as the lake, the trees, and the sky. However, COMUS fails to segment background elements: in Column 4 of Fig 1 in the COMUS paper (https://arxiv.org/abs/2207.05027), COMUS only segments the foreground ships and fails to segment the lake and the sky. Segmenting both salient objects and other semantic regions is essential for many computer vision applications. This emphasizes the importance and unique advantages of our method.
In the final version, we will highlight the differences between COMUS and our method and discuss the advantages of our method over COMUS in Section 2.
**38HN.F2 - Questions: Lack of comparisons without using pre-trained features.**
We thank the reviewer for the recommendation. In the final version, we will highlight that both DINOSAUR and our method use the pre-trained frozen DINO in Table 1.
Note that the results of DINOSAUR reported in our paper deviate slightly from the results reported in the original DINOSAUR paper (Seitzer et al., 2023, ICLR, https://arxiv.org/abs/2209.14860). This is because the code for DINOSAUR was not available before the NeurIPS submission deadline. Hence, we strictly followed the model specifications in the original paper, re-implemented DINOSAUR ourselves, and reported the results of our re-implemented version. These results include the performance of our re-implemented DINOSAUR on the Birds, Dogs, Cars, and Flowers datasets, which was missing in the original DINOSAUR paper. We will release our re-implementation code for all the baselines upon publication.
As the reviewer suggested, in the table below we now copy the exact results from the original DINOSAUR paper. For easy comparison, we also copy the results of our re-implemented DINOSAUR and of our method from our paper. ARI-FG performance (mean ± standard deviation) is reported.
From the results, we observed that our method outperformed the re-implemented DINOSAUR and performed on par with the original DINOSAUR across all experiments. Note that, in comparison to DINOSAUR, our method does not require decoders for reconstruction; hence it has fewer parameters, uses less GPU memory, converges faster during training, and achieves faster inference, as indicated in Section 5.5.
In the final version, we will include this table, dedicate a short paragraph to highlighting the differences between our method and DINOSAUR, and discuss the advantages of our method over DINOSAUR.
| |Pascal VOC 2012|COCO 2017| MOVi-C|MOVi-E|
|:-------:|:----------------------:|:----------------:|:---------:|:---------:|
|Original DINOSAUR|24.6±0.2|40.5±0.0|67.2±0.3|64.7±0.7|
|Re-implemented DINOSAUR|27.5±0.2|36.2±0.7|64.0±0.5|62.4±0.7|
|Our method|29.6±0.8|39.7±0.8|67.6±0.3|64.7±0.7|
---
Rebuttal Comment 1.2:
Title: CLEVRTex Slot Attention baseline
Comment: Dear authors,
Thank you for adding an experiment on CLEVRTex. Please note that Slot Attention, when combined with a deeper CNN backbone, can achieve significantly higher performance: the Invariant Slot Attention paper [Biza et al., 2023] reports a performance of Slot Attention on CLEVRTex of 91.3% FG-ARI when combined with a ResNet encoder. I think that the experiment nonetheless provides value, but I would recommend discussing it in the context of more recent results (in this case, a result from February 2023).
Thank you.
--Your AC
---
Reply to Comment 1.2.1:
Comment: **AC: Thank you for adding an experiment on CLEVRTex. Please note that Slot Attention, when combined with a deeper CNN backbone, can achieve significantly higher performance: the Invariant Slot Attention paper [Biza et al., 2023] reports the performance of Slot Attention on CLEVRTex of 91.3% FG-ARI when combined with a ResNet encoder. I think that the experiment nonetheless provides value, but I would recommend discussing it in the context of more recent results (in this case, a result from February 2023).**
We appreciate the advice from AC. In the final version, we will incorporate the result of SA(ResNet) and provide the discussion below in the result analysis:
Compared with SA (ResNet) in that paper, our method uses a frozen DINO encoder pre-trained on naturalistic images (ImageNet). This might lead to poor feature extraction on CLEVRTex due to the domain shift between real-world and synthetic image datasets. As our experiments have shown that a good feature extractor is essential for our method to work well (see the response to **VTwm.3**), our method can be sensitive to domain shifts if the feature extractor has not been fine-tuned on images from the current dataset.
As shown in the Invariant Slot Attention paper, our work, and many others, the original slot attention model does not always perform well. To mitigate this issue, the Invariant Slot Attention paper introduces an interesting method based on slot-centric reference frames. Orthogonal to the proposed invariant slot attention method, we introduced cyclic walks between slots and feature maps as supervision signals to enhance the quality of the learned slot representations. Moreover, unlike the invariant slot attention method, our method does not require decoders.
We will cite the Invariant Slot Attention paper and discuss its differences with our work in Section 2 in the revised version.
Title: Thanks for AC's advice | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and suggestions. We present the figures in the PDF and the responses to individual reviewers' questions below. The original questions from the reviewers are copied and bolded.
Pdf: /pdf/15e00fd08624d1487de3a7ab91122cc722cb91da.pdf | NeurIPS_2023_submissions_huggingface | 2023 |
FIRAL: An Active Learning Algorithm for Multinomial Logistic Regression | Accept (poster) | Summary: The authors first prove that the excess risk of multinomial logistic regression under a sub-Gaussian data distribution is lower and upper bounded by terms involving the ratio of the Fisher information of the unlabeled data to the Fisher information of the labeled data.
The authors then propose an algorithm for active learning, based on FIR minimization, and prove a performance guarantee for it.
Experimental results on simulated and real world datasets demonstrate well the performance of the proposed approach.
Strengths: 1. The proposed algorithm stands on a solid mathematical background.
2. The experimental results demonstrate the performance benefits over competitive methods.
Weaknesses: 1. The manuscript is very technical and not easy to read.
2. Comparison to competitive methods in terms of computation is missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the authors refer to the capabilities of their algorithm in terms of scale?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer PwwV for the review and comments!
* **Not easy to read:**
We acknowledge that Section 4.3 might be hard to comprehend for readers unfamiliar with regret minimization. We have tried to explain the overall merit of our approach while leaving most technical details to the Appendix. We are open to any suggestions that could enhance the readability of our paper.
* **Comparison in computation:**
We interpret the question as referring to the computational complexity of the algorithm. Our current implementation of FIRAL is much more expensive than random, k-means, and uncertainty-based methods; its cost is comparable to BAIT. However, (1) we have not optimized some algorithmic details of FIRAL; and (2) FIRAL and BAIT significantly outperform the other methods.
* **Scale capabilities:**
Please see our general response.
---
Rebuttal Comment 1.1:
Title: Update to review
Comment: I have read the comments made by the other reviewers, as well as the author's rebuttal.
One concern I have is the high complexity of the algorithm in terms of $\tilde{d}$, which is prohibitive in most practical settings.
While I still recommend accepting the paper, I am waiting to read the post-rebuttal comments of the other reviewers and decide whether I should update the score from 7 to 6 in light of this issue. | Summary: This paper studies active learning when the underlying data distribution follows the multinomial logistic model. This paper considers the pool-based setting and designs an algorithm to select the sample to query in a batch fashion. There are two main contributions: (1) The paper shows that the excess risk is lower and upper bounded by the Fisher Information Ratio. (2) propose an algorithm to select the samples to optimize the Fisher Information Ratio and thus minimize the expected risk. Theoretical analysis and experiments are conducted to validate the proposed approach.
Strengths: The strengths of the paper are as follows:
+ This paper shows the relationship between expected excess risk and the fisher information ratio for the multinomial logistic regression model
+ This paper proposes an algorithm to maximize the fisher information ratio
+ The technical part of this paper is well-written and easy to follow
Weaknesses: The weaknesses of the paper are as follows:
- unclear comparison with existing work and novelty: my main concern about the paper is the novelty compared with existing literature.
- comparison with [11]: as mentioned by the authors, the non-asymptotic relationship between the FIR and excess risk has been provided in [11] for the generalized linear model. However, I am a little bit confused by the claim in lines 43-44 that "... the assumption that the loss function is strictly self-concordant (assumption 2 in [11]), which does not hold for the logistic regression case." Although the logistic regression loss is not strictly self-concordant, it is a general self-concordant function [27]. Based on the general self-concordance, it seems to me one can also bound the gap between the Hessians of two points as Assumption 2 in [11]. The authors also use the general self-concordant property (Proposition 31) to prove Theorem 3. I think it would be better if the authors could highlight that logistic loss is a general self-concordant function and highlight the novelty compared with [11].
- comparison with [14]: it seems to me the method used to optimize the FIR is largely established on [14]. Proposition 7 is similar to a combination of Eq (8) and Proposition 3.1 in [14]. Both papers use the regret analysis of FTRL to show the lower bound in Proposition 8. I think the paper would have a more significant influence if the author could highlight the main challenge of applying the analysis of [14] to the logistic regression model.
- about assumption in Theorem 3: Theorem 3 relies on the condition that $H_p(\theta_*)\leq H_q(\theta_*)$. It is unclear to me how large this constant would be and how to verify the condition in practice since $\theta_*$ is generally unknown.
- about the labeled dataset: the algorithm requires a labeled dataset $S_0$ to estimate the unknown parameter $\theta_*$. However, it is unclear to me how to obtain such a labeled dataset and when it is sufficient to obtain a good approximation of $\theta_*$ for the following active learning. As shown in Theorem 4, the number of labeled data will affect the coefficient $\alpha_0$ appearing in the exponential term that can be potentially large. I think it would be nice if the author could provide a more detailed discussion on the effect of the initial labeled data on the proposed algorithm.
- about the constants in Theorem 4: since $\alpha_0$ and $\alpha_n$ are related to $n_0$ and $n$, I am not sure they are appropriate to be considered constants in the proposed bound.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - could you highlight the main technical challenges and contributions of the paper beyond the existing work? (please refer to the first point of weaknesses for more details)
- when the assumption on the Hessian $H_p$ and $H_q$ required by Theorem 3 is satisfied? How large is the constant $\sigma$?
- could you provide more discussion on the influence of the labeled dataset on the proposed method? (please refer to the third point of weaknesses for more details)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There are still some limitations in the paper:
- realizable assumption: the theoretical analysis is established on the realizable assumptions such that the data distribution follows the logistic regression model.
- tightness of the excess risk: the tightness of the proposed theoretical bound is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer B8jR for the review and comments!
* **Novelty of Theorem 3 compared to [11]:**
* Our primary contribution concerning Theorem 3 in comparison to the work presented in [11] is the generalization to sub-Gaussian distributions for the points. The proofs in [11] rely on an assumption of a bounded support for the points where we do not use this assumption. We discussed this distinction in Lines 132--137 in our original submission. We remark that the bounded assumption is not explicitly stated in [11]. But it is required for the proofs in [11] to be valid. Specifically, item 5 of Assumption 1 in [11] assumed that $\nabla L(y|x, \theta^*)$ is bounded by a constant $L_1$. For multinomial logistic regression, by the expression of the loss gradient in our paper (Equation (39)), we can infer that [11] assumes that the point $x$ is in a bounded domain.
* Another contribution of our work on Theorem 3 in comparison to [11] is that we empirically demonstrate the excess risk bounds derived in Theorem 3 using synthetic datasets (as detailed in Section 5 and Appendix G.1).
* Regarding the concern raised by the reviewer on Assumption 2 in [11] pertaining to multinomial logistic regression, we maintain our position that _Assumption 2 in [11] cannot be deduced from the results in [27]_. Let us explain. Using the notation established in our paper, Assumption 2 of [11] asserts that there exists $L_4>0$ such that $-L_4 \lVert\Delta\rVert_2 {\bf H}_p({\theta^\ast}) \preceq {\bf H}_p (\theta) - {\bf H}_p({\theta^\ast}) \preceq L_4 \lVert\Delta\rVert_2 {\bf H}_p({\theta^\ast})$, where $\Delta\triangleq \theta - \theta^\ast$. We can interpret Equation (6) in Proposition 1 of [27] as implying $\big(e^{-R\lVert\Delta\rVert_2}-1\big){\bf H}_p({{\theta}^{\ast}}){\preceq}{{{\bf H}_p}(\theta)-{{\bf H}_p}({\theta^\ast})}{\preceq}\big(e^{R\lVert\Delta\rVert_2}-1\big){\bf H}_p({\theta^\ast})$, where $R>0$ corresponds to the constant delineated in Proposition 1 of [27]. We can obtain the lower bound of Assumption 2 in [11] from this relation, but we cannot deduce the upper bound of Assumption 2 in [11], since $e^{R\lVert\Delta\rVert_2} - 1$ cannot be upper bounded by $L_4 \lVert\Delta\rVert_2$ for a constant $L_4$.
* **Novelty of FIRAL compared to [14]:**
In Lines 53 to 58 we discussed the two main challenges of applying the regret minimization approach in [14] for optimal design to our active learning setting for multinomial logistic regression. We appreciate the reviewer for raising this concern and agree on the need for a more detailed discussion on challenges and our contributions. Below are the main points:
- Algorithmically, nontrivial technical details are needed to extend the approach presented in [14] and formulate the point-selection objective in our specific context. In [14], they work with a loss matrix that is essentially a rank-1 matrix $x x^\top$ for some $x$ in $d$ dimensions, resulting in a straightforward point-selection objective (see Line 9 of Algorithm 1 in [14]). In contrast, the loss matrix in our scenario, denoted $\widetilde{{\bf H}}(x_{i_t})$ (Equation (15)), has rank at least $c-1$ and can even be full-rank, contingent upon the labeled points from prior rounds. To manage the complexity of directly using objective (19), we exploit the structure of the Fisher information matrix and employ algebraic techniques to establish an equivalent objective (23) that is more computationally efficient. We believe our analysis is more technical than the situation addressed in [14].
- Theoretically, the most technically challenging task compared to [14] when establishing the performance guarantee for our algorithm is _Proposition 9_; the corresponding bound in [14] is Lemma 3.2. The different characteristics of the loss matrices significantly complicate the derivation of such a general bound in Proposition 9, and we have to use several algebraic tricks to obtain the bound for the near-optimal performance guarantee (see Appendix F.4). In our opinion, our proof is not trivial, which is perhaps corroborated by comparing the length of our proof with that of Lemma 3.2 in [14].
- Empirically, we conduct active learning experiments on real-world datasets and demonstrate that our algorithm outperforms the compared baselines.
* **Parameter $\sigma$:**
We thank the reviewer for pointing this out. This is a typo in our manuscript, $\sigma>1$ should be $\sigma >0$ in Theorem 3 (also Theorem 32 for detailed version). We do not require $\sigma>1$ in our proof. We apologize for the confusion caused by this typo. We will fix it in the new version.
* **Influence of initial labeled dataset:**
We believe that the influence of the initial labeled dataset on the performance of active learning depends on the specific dataset in question. In our numerical experiments, using CIFAR-10 as an example, we randomly select one labeled point from each class to construct $S_0$. This approach ensures an equitable starting point for all methods, enabling a fair comparison. We agree that in practice, the selection of initial data points to label, absent any label information, can exert a significant impact on the initial performance of active learning, particularly when the budget for $|S_0|$ is severely restricted, say $1c$ or $2c$. There are several methods for selecting $S_0$: Random sampling, K-means clustering, or optimal experimental design [14]. In Table 1 of the rebuttal PDF, we compare the performance of these different sampling methods for $S_0$ for $|S_0|=20$ and its effect to FIRAL. In Table 1, we observe that the effect of $S_0$ on the performance of FIRAL is not significant (for this particular experiment).
* **Constants in Theorem 4:**
Thanks for pointing this out. We will rephrase the statements regarding these two constants more precisely.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. The rebuttal has largely addressed my concerns regarding the novelty of this paper, particularly in its comparison with [14]. While there are similarities in the high-level ideas between this paper and [14] when addressing the optimization problem (13), I agree with the authors that the derivation of (23) and the analysis to reach Proposition 9 need specific technical adjustments tailored to the multinomial logistic model. Given this, I would like to adjust my score to 6.
Turning to the comparison with prior work [11], I retain some reservations about the statement concerning the self-concordant property. I understand that the generalized self-concordant property (Proposition 31) used in this paper is different from Assumption 2 in [11], particularly in its coefficient ($e^{R\Vert\Delta\Vert_2} - 1$ here, as opposed to $L_4\Vert \Delta\Vert$ in [11]). Yet, both conditions essentially utilize the self-concordant function to shift the Hessian matrix, as seen in the formulation $H_p(\theta) - H_p(\theta^*)\leq f(\Delta) H_p(\theta^*)$. It seems to me the difference in the coefficients $f(\Delta)$ just leads to different coefficients in the final bound. As evidence, the coefficient $f(\Delta) = (e^{\Vert \Delta\Vert}-\Vert \Delta\Vert-1)/\Vert \Delta\Vert^2$ finally leads to the $(e^\alpha -\alpha -1 )/\alpha^2$ term in the theorem. In this sense, I think the two conditions still share many similarities. I suggest the authors provide a clearer comparison with the previous work, rather than simply saying that "the strictly self-concordant property does not hold for the logistic regression case."
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our response and increasing the score!
We appreciate your new suggestions regarding the comparison to the previous work [11], specifically comparing our Theorem 3 with Lemma 1 of [11]. In the revision of our draft we will: (1) remove the statement about the self-concordance property of [11], (2) summarize the key technical aspects of the comparison with [11] at the end of Section 3 (i.e., lines 132-137 of our original submission), and (3) provide a more detailed comparison in the appendix.
But let us give some more details on the differences between our approach and [11].
First, we realized that the self-concordance property (Assumption 2 in [11]) was _not_ used in deriving Lemma 1 of [11]. (Even though it is included as a premise in Lemma 1 of [11], it remains unused in the proof.) Thus, we will remove the statement about the self-concordance property of [11] from our introduction section. Lemma 1 in [11] derives the excess risk bounds by applying Taylor's theorem to $L_p(\theta_n) - L_p(\theta_\ast)$. In contrast, our approach to Theorem 3 involves first establishing the general self-concordance of $L_p(\theta)$ and subsequently applying Proposition 31.
Second, there exists a difference in the manner by which the spectral approximation relations among multiple Hessian matrices are derived in Lemma 1 of [11] and our Theorem 3. This is due to the different assumptions used in our work and [11]. In Lemma 1 in [11], the spectral approximation relations are derived by using Bernstein-type inequalities that depend on the regularity conditions listed in Assumption 1 of [11]. In our work, the more general sub-Gaussian assumption (Assumption 1 in our submission) incorporated within our Theorem 3 introduces some technical challenges to our derivation. For example, in order to obtain the spectral approximation relations presented in Equations (111) and (136), we initially establish high probability bounds as detailed in Proposition 33 (and subsequently Corollary 34). Within this proposition, we employ a covering argument to infer spectral properties for random matrices associated with the property (3) of our Lemma 2.
We cannot thank you enough for taking so much time to review our work and giving valuable suggestions to our paper! | Summary: This paper proposes a novel active learning algorithm called FIRAL for pool-based active learning in multinomial logistic regression. The paper investigates the theory and algorithms for pool-based active learning and compares FIRAL to other active learning methods in terms of classification error. The authors use finite sample analysis to establish FIRAL-based bounds for the excess risk in the case of multinomial logistic regression with sub-Gaussian assumption for the point distributions. The experiments conducted on various datasets show that FIRAL outperforms other active learning algorithms in terms of classification error.
Strengths: The equation and proof make sense. Good introduction on this part.
Convincing theory.
Weaknesses: Not enough introduction on related work.
The target task is a logistic regression classifier, which is simple and has been researched for many years. It would be better if this method could be applied to more complex classifiers or to segmentation.
No ablation study on hyperparameters, such as \eta.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Line 67, the logistic regression classifier here is very simple, is it possible to apply this method on more complex classifiers, such as ResNet, Transformer, which need more data to train?
What is the meaning of \theta_*? The optimal parameters to predict y?
Could you compare with an uncertainty-based active learning method, such as "Towards better uncertainty sampling: Active learning with multiple views for deep convolutional neural network"?
Could you do experiments on the whole ImageNet data set, not just ImageNet-50?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The notation is not consistent. For example, the subscripts of \theta_0 and \theta_n have different meanings.
Some writing errors. Line 17 "we use the sample the b points". Line 20 "choose q in order minimize"... Please check them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer zZMa for the review and comments!
* **Introduction on related work:**
We will add more discussion on related work in the new version.
* **Classifier choice:**
We used the multinomial logistic regression classifier in order to be able to conduct the theoretical analysis. We believe our non-asymptotic analysis in Theorem 3 and our algorithm FIRAL can be extended to other Generalized Linear Models by changing the Fisher information matrix accordingly. We remark that we did use deep-learning-based contrastive learning for feature extraction, on which we then apply our classifier. Comparing our current methodology with a deep learning classifier that directly uses FIRAL would be interesting but also challenging, as it would be very hard to train a deep learning network with just a handful of examples. This comparison is important but beyond the scope of the paper.
* **Selection Hyperparameter $\eta$:**
Please see our general response to all reviews.
* **Question regarding the definition of $\theta_*$**
In the original submission we introduced $\theta_*$ in lines 71--72: we assume that the true distribution $p(y|x)$ is given by the multinomial logistic regression with $\theta_*$.
* **Compare with uncertainty-based active learning methods**
We did compare with uncertainty-based active learning methods such as "entropy" and "Var Ratios" in our paper (Figure 3 in the original submission). "Var Ratios" is also referred to as "Confidence" in some literature.
* **Whole ImageNet dataset**
Unfortunately, the $O(b \tilde{d}^3)$ scaling of FIRAL's current implementation makes it very expensive to apply the scheme to the entire ImageNet as it would require solving thousands of 40K-by-40K eigenvalue problems (if we use 40 features for each data point). However, we would like to emphasize that this is not a fundamental limitation of FIRAL. As we discussed in our common response, FIRAL can be significantly sped up.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. All my questions have been addressed. I have increased the rating. Please polish the writing in the final paper. | Summary: This paper develops a pool-based active learning method for multinomial logistic regression, following a long line of work using the Fisher Information Ratio (FIR) as a criterion for active set selection. The paper establishes that the FIR tightly (within constants) characterizes the excess risk (Theorem 3), so that it can be used to select a set of points for active learning. The method comes with reasonably strong guarantees (Theorem 4).
Strengths: It's nice that the evaluations include strong baselines like BAIT; this is a valuable contribution because of the computational efficiency of the proposed method. \tilde{d} is relatively small (hundreds) in useful regimes, where the proposed method would be quite efficient. I would not be entirely convinced that it performs better than BAIT without the empirical evaluations - but since they are present and look promising, this seems like a useful method.
The setting of multinomial logistic regression is at once specific enough to be useful here, and general enough to be interesting (could be any discrete label). So the extent to which theory meets practice here is uncommon and laudable - fantastic problem selection, and approach (with the FIR being the "right" quantity for other exponential families).
Weaknesses: The sparsification problem in (14) in Section 4.3 is where a lot of looseness in the theory creeps in; optimizing the eigenvalue for sparsification is a roundabout way of selecting points. I am a little puzzled by the relaxation of Section 4.3; the natural relaxation of the convex program (13) is a randomized rounding procedure optimizing over z \in [0,1]^{m} , rather than optimizing over the positive orthant as in (14). I am also curious about the choice of FTRL for sparsification. It works well enough, but initially was quite confusing in the exposition.
In Figure 2, dashed constant-slope lines would help readability a lot.
The main theoretical results are stated in an unnecessarily complex way, I believe.
- The complex dependence on \alpha is not really necessary to fully write out in the theorem statements. Fig. 1 handles this well, but in order to interpret it in context, you should stress that n needs to be O(\tilde{d}) for \alpha to be low and the bounds to be tight. The complex interplay between these variables could overall use better explanation.
- Theorem 4's use of the \lesssim makes it difficult to interpret the (1+\epsilon) prefactor on the right-hand side, and I had to rummage in the appendix and proofs to understand what the result is trying to express.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The parameter settings for the real-world experiments are unclear right now (e.g. for \eta); the included code has \eta = 200, but is this ok for all datasets?
- I'd like to see the results with multinomial logistic regression on pretrained frozen embeddings, i.e. "linear probing." The theory here might actually reflect practice closely enough to make the results noteworthy. This type of approach would strengthen the paper a lot, and opportunities to do so with theory in non-toy settings are not so common.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer C5TF for the review and comments!
* **Relaxation problem:**
The reviewer is correct. The constraint on Equation (14) should be $z\in[0,1]^m$. We are sorry for the confusion caused by the expression. We will fix it in the new version.
* **Why use FTRL for sparsification problem?:**
We use FTRL because it allows us to prove the optimality of our algorithm. Specifically, with FTRL we can establish a lower bound in Equation (18) for $\lambda_{\min}\big(\sum_{t\in [b]}\widetilde{\bf H}(x_{i_t})\big)$. Our goal in point selection consequently becomes the maximization of this lower bound, encapsulated by Equation (19), which is equivalently expressed in Equation (23). An explanation of why Equation (18) holds is provided in Appendix F.3. The primary steps can be outlined as follows: (1) The FTRL algorithm provides an upper bound (Equation (250)) for the cumulative regret (defined in Equation (245)). (2) After applying Lemma 43, we get the lower bound (Equation (254)) for the minimum eigenvalue of the summed loss matrices, i.e., ${\lambda_{\min}}\big(\sum_{t \in [b]} {\bf F}_t\big)$. (3) Since this bound holds for any symmetric positive semi-definite loss matrix of dimension $\tilde{d}$, we can set the loss matrix ${\bf F}_t=\widetilde{\bf H}({x_{i_t}})$, thereby arriving at the lower bound in Equation (18).
* **Clarifying main theoretical results:**
Thanks for pointing them out. We will address them promptly by revising the content and discussion of Theorems 3 and 4 to ensure it becomes more concise and appropriately expressed.
* **Selecting $\eta$:**
Please see our general response.
* **Experiment on pretrained frozen embeddings:**
We compared various active learning methods on CIFAR-10 using frozen pretrained embeddings with a dimension of 512. The results of this comparison are presented in Figure 3 of the rebuttal PDF. In the plot, it is evident that FIRAL outperforms the other methods until 200 points are labeled. After this point, the uncertainty-based methods, BAIT, and FIRAL demonstrate comparable performance. | Rebuttal 1:
Rebuttal: # General Response:
We thank the reviewers for their careful reading of our paper, their comments, and their suggestions. We have fixed all the typos mentioned in the reviews. We have submitted responses to individual reviewers. We also submitted a PDF with additional results.
There were two issues that were raised by two reviewers: the complexity of the algorithm and the selection of the hyperparameter $\eta$. We address them below.
### **Complexity of algorithm**
In our original submission, we stated the computational complexity of FIRAL at line 231. Recall that $c$ is the number of classes, $d$ is the number of features per point, $\tilde{d}=d(c-1)$, $m$ is the number of unlabeled points in the pool, and $b$ is the number of points to select for labeling. We use a dense-matrix SPD eigenvalue solver and a direct linear solver. These yield the complexity of FIRAL as currently implemented: $O(m c^3 + b \tilde{d}^3 + m \log m \tilde{d}^3).$ The $c^3$ term is due to Eq 23 (trace); the $b$ term is the SPD eigenvalue solve in Alg 1, line 8 for $\left(\sum_{s=1}^t \tilde{H}_s\right)$. The $\log m$ comes from the $m$ linear solves (Alg 2, line 5, linear solve for $\Sigma$ and the trace) and the $\log m$ expected iterations ($T$ in Alg 2, line 2).
The performance of these steps can be significantly sped-up by (1) using matrix-free iterative solvers and (2) exploiting the structure of $H(x)$ (Eq 10, line 164). The storage of $H(x)$ is just $d+c$ if we only store ${\bf h}(x)$ and $x$. A matrix-free vector multiplication with $H(x)$ can be done in $\tilde{d}$ time using the structure of Eq 10. This structure can be exploited to accelerate computations with $\tilde{H}$ and $\Sigma^{-1}$. The trace calculations, eigenvalue solves, and linear solves can be done approximately using Krylov methods and randomized algorithms.
For example, the trace calculations in Eq 23 and Alg 2 can be approximated using randomized trace estimation, which requires only approximate matrix-vector multiplications and can be readily parallelized on multi-GPU architectures. The linear solve in Alg 2 can be done using the Conjugate Gradient method. The eigenvalue solves can be done using an iterative Lanczos solver. In this way, an approximation of $\nu_t$ could in principle be computed at $O(b \tilde{d})$ cost.
We believe that by introducing fast algorithms and a more scalable high-performance implementation, the algorithm could scale to millions of points and thousands of classes. Our goal in this paper was simply to establish the theoretical correctness and optimality of FIRAL, and to demonstrate its feasibility using modest-scale experiments. Scaling FIRAL to large datasets requires significant effort and merits a separate analysis. Such an effort is beyond the scope of this paper.
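To make the randomized trace estimation idea above concrete, here is a minimal sketch of a Hutchinson-style estimator (an illustration only, not our implementation; the small dense SPD matrix below merely stands in for a matrix-free operator such as $\Sigma^{-1}$):

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_probes, seed=0):
    """Estimate tr(A) from matrix-vector products only.

    Uses Rademacher probe vectors z with entries +/-1, for which
    E[z^T A z] = tr(A); averaging over probes reduces the variance.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ matvec(z)
    return total / n_probes

# Toy example: a dense SPD matrix stands in for the operator; in a
# matrix-free setting, matvec would exploit structure instead of
# forming A explicitly.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)          # symmetric positive definite
est = hutchinson_trace(lambda v: A @ v, dim=50, n_probes=2000)
exact = np.trace(A)
```

Each probe costs a single matrix-vector product, which is what makes the matrix-free speedups discussed above possible.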
### **Selection of hyperparameter $\eta$:**
The selection of $\eta$ is done using a grid search and doesn't require labeling. (In our code this search is manual, but it can be automated.) In our original submission, we discussed tuning $\eta$ in Appendix G.2 (Lines 1132 to 1134): we tested different values of $\eta$ to determine the one that maximizes $\lambda_{\min}\big(\sum_{t\in[b]}\widetilde{ \bf H}(x_{i_t})\big)$, as this is the main goal of our sparsification problem.
In Figure 4 of the rebuttal PDF, we report an ablation study on $\eta$: we plot $\lambda_{\min}\big(\sum_{t\in[b]}\widetilde{ \bf H}(x_{i_t})\big)$ along with the accuracy under different values of $\eta$ in the first two rounds of the active learning tests on CIFAR-10. It is evident that $\lambda_{\min}\big(\sum_{t\in[b]}\widetilde{ \bf H}(x_{i_t})\big)$ serves as a valuable guide for selecting an appropriate $\eta$ that leads to good accuracy.
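For concreteness, the label-free grid search over $\eta$ could be sketched as follows (an illustration, not our released code; `run_sparsifier` is a hypothetical callable that returns $\sum_{t\in[b]}\widetilde{\bf H}(x_{i_t})$ for the points selected under a given $\eta$):

```python
import numpy as np

def select_eta(etas, run_sparsifier):
    """Grid search: pick the eta whose selected points maximize
    lambda_min of the summed SPD matrices; no labels are needed."""
    best_eta, best_val = None, -np.inf
    for eta in etas:
        H_sum = run_sparsifier(eta)         # sum over selected points
        val = np.linalg.eigvalsh(H_sum)[0]  # smallest eigenvalue
        if val > best_val:
            best_eta, best_val = eta, val
    return best_eta, best_val

# Toy check: a synthetic "sparsifier" whose selection quality peaks at eta = 5.
toy = lambda eta: np.eye(3) * (10.0 - (eta - 5.0) ** 2)
best_eta, best_val = select_eta([1.0, 5.0, 9.0], toy)
```

Since the criterion never touches labels, the search can be automated before any annotation budget is spent.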
Pdf: /pdf/e6d15eb4f6fe07fedd9ac47454a30098ced59ff1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present theory and algorithms for training multinomial logistic regression models in the pool-based active learning setting: how we should choose $b$ extra points to label from a pool of unlabeled ones, so that when we train a model including the newly acquired labeled points, the excess risk of the classifier is minimized. They provide novel asymptotic lower and upper bounds for the excess risk under sub-gaussian assumptions on the point distribution, and they show that the excess risk is in $\Theta$ of the Fisher Information Ratio between the proposal and point distributions. This justifies algorithms which select points based on the minimization of the Fisher Information Ratio. They further devise an algorithm for $(1 + \epsilon)$-optimal minimization of the Fisher Information Ratio objective. The algorithm depends on approximating the oracle predictive model $p(y|x, \theta^*)$ by a pretrained classifier, a continuous convex relaxation of the NP-hard discrete problem of point selection, and a greedy regret minimization procedure for sparsifying the continuous solutions, while maintaining bounds for the optimality of the final solution. Finally, they perform extensive experimentation on synthetic and image classification benchmarks to demonstrate that 1) their bounds are numerically valid, and 2) their method outperforms the chosen baselines.
Strengths: 1. They provide asymptotic lower and upper bounds for the excess risk **under sub-gaussian assumptions** on the point distribution, and they show that excess risk is in $\Theta$ of the Fisher Information Ratio between the proposal and point distributions. This weakens bounded domain assumptions found for similar bounds in the literature.
2. The employed sparsification methodology in the provided algorithm seems to be novel in the context of pool-based active learning, while maintaining the ability to have performance guarantees.
3. Numerical experiments are extensive and demonstrate that the algorithm outperforms the compared baselines. Even in the larger-scale ImageNet-50 case, with pretrained feature extractors, their method provides some benefits over random sampling.
Weaknesses: 1. A portion of the effort and presentation goes into reducing the requirement of performing eigendecompositions of a $\tilde{d} \times \tilde{d}$ matrix for each unlabeled point to performing eigendecompositions of $(c - 1) \times (c - 1)$ matrices. However, there is still a need to perform $b$ eigendecompositions of $\tilde{d} \times \tilde{d}$ matrices in order to use the FTRL algorithm. As a result, the complexity is still $O(\tilde{d}^3)$.
2. It seems that in more demanding cases (like ImageNet-50 presented in paper), there are diminishing returns from using an active learning setting like FIRAL over randomly sampling from unlabeled samples. This may discourage practitioners from implementing a pool-based active learning approach over random sampling from unlabeled samples, in order to avoid computational complexity issues and implementation overheads. This seems to be significant as the authors chose to further embed features extracted from pretrained networks with SimCLR to an even lower dimensional space using spectral embeddings. Evaluating the proposed algorithm under more practical considerations, it would be interesting to see the baseline of random sampling (and even K-means) from the unlabeled pool using instead the full $D$ dimensional extracted features from the pretrained network (in the cases of CIFAR10 and ImageNet-50); baselines which are certainly computationally possible to execute.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: ### Questions
* On a more general note, active learning might prove itself more advantageous when unlabeled points U come from a different distribution than the actual distribution $p$ that we want to evaluate the excess risk under. Such example is the setting of training with class-imbalanced sets (or more generally datasets which have been collected under biased processes), which is a common issue with uncurated large-scale datasets collected in-the-wild. Changing the setting slightly, e.g. by assuming access to a small set S of curated class-balanced samples, might incur benchmarks that are even more favorable to using approaches like FIRAL. What are the authors thoughts about such baselines and problems?
### Typos and Edits
* Line 17: “we use **the** sample the b points” -> “we use to sample these b points”
* Line 25: “is bounded above and below **by** FIR”. Also, there should be a citation about FIR at that place.
* Line 36: “sub-Guassian”
* Line 38: NP-hard problem needs a citation there
* Line 80: $V_p$ definition has a $q$, which should be $p$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 3UFt for the review and comments!
* **On the algorithm complexity:**
Please see our general response.
* **Experiment on pretrained frozen embeddings:**
We compared various active learning methods on CIFAR-10 using frozen pretrained embeddings with a dimension of 512. The results of this comparison are presented in Figure 3, which is included in the rebuttal PDF. In the plot, it is evident that FIRAL outperforms the other methods up to a budget of 200 labeled points. After this point, the uncertainty-based methods, BAIT, and FIRAL demonstrate comparable performance.
* **Experiment on imbalanced dataset:**
Thank you for pointing this out to us; this is an excellent point. Indeed, active learning helps a lot with imbalanced datasets. We tested FIRAL on a class-imbalanced CIFAR-10 (Figure 2 in the rebuttal PDF). We observe that FIRAL and BAIT are not sensitive to class imbalance in the active pool and significantly outperform the other methods.
---
Rebuttal Comment 1.1:
Title: Convincing rebuttal, thank you for extra experiments
Comment: - **On the algorithm complexity**: This remains a weakness of the current paper, but I am convinced that with sufficient effort in the selection of solvers this can be kept fairly tractable in the large-scale scenario. I am looking forward to future work applying this at large scale.
- **Experiment on pretrained frozen embeddings** and **experiment on imbalanced dataset**, thank you for providing these convincing experiments.
I am raising the score to 7. | null | null | null | null | null | null |
Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms | Accept (poster) | Summary: This paper proposes a gradient manipulation method named SDMGrad for multi-task learning (MTL). SDMGrad improves the previous MGDA method by using two constraints. The first one is to constrain the common descent direction nearby the one computed with a specific preference (such as the average direction), which is similar to the previous CAGrad method. The second one is to add a regularization term for the computed weights, which is the same as the previous MoCo method. To reduce the computational cost, partial tasks are randomly selected to compute their gradients in SDMGrad, which is called SDMGrad-OS and is similar to the previous RLW method.
Theoretically, a stochastic non-convex convergence analysis for the proposed methods is provided. Empirically, experiments on multi-task supervised learning and reinforcement learning are conducted but some important baselines are missing.
Strengths: 1. This paper proposes a method for MTL.
2. This paper provides a stochastic non-convex convergence analysis for the proposed methods.
3. The code is provided.
Weaknesses: 1. The novelty of the proposed methods is limited. This paper aims to improve the previous MGDA method by adding several components. However, those components are similar to some existing work.
2. Lack of some important baselines, especially Nash-MTL (ICML 2022), which outperforms the proposed SDMGrad on all benchmark datasets.
3. The regularization term with a coefficient $\frac{\rho}{2}$ in Eq. (9) is one of the proposed improvements. However, the paper does not mention how to set $\rho$. Besides, from the code in the supplementary material, it seems $\rho=0$ in the implementation.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The novelty of the proposed methods is limited. The proposed SDMGrad method improves the previous MGDA method by using two constraints.
- constraining the common descent direction to be near the one computed with a specific preference, as shown in Eq. (6). This is very similar to CAGrad in Eq. (7). The only difference is that CAGrad uses a norm constraint while SDMGrad uses an angle constraint. Thus, what is the motivation and advantage of using Eq. (6)?
- adding a regularization term for the computed weights in Eq. (9), which is the same as MoCo.
- SDMGrad-OS reduces the computational cost in SDMGrad by randomly sampling tasks, which is similar to RLW [1]. In RLW, loss/gradient weights are randomly sampled from a distribution (such as Normal and Bernoulli) at each iteration. If the sample distribution is Bernoulli-family, it is equivalent to randomly sampling task losses/gradients to optimize at each iteration.
   In short, it is hard to find anything new or interesting in Section 4.
2. What is the meaning of "propose a new direction-oriented multi-objective problem" in Lines 3-4, Line 139, and Line 313? Maybe "formulation" is more appropriate than "problem".
3. $L_i$ in the last of Line 23 is quite confusing. It is better to use more clear notation.
4. Lines 24-25: "MTL aims to solve an average loss for all $K$ tasks" is wrong.
5. Line 27: why MOO **cannot** find a common parameter $\theta$ that achieves optima for all objectives? MOO can, but rarely. The description of "parameter $\theta$ that minimizes all objective functions" is wrong.
6. Eq. (4): what is the meaning of $h_{t, i}$?
7. Line 133: $\mathbb{R}^K\rightarrow\mathbb{R}^m$
8. Lines 145-147: in what scenarios? It is better to provide some examples here.
9. Line 176 in the text and Step 5 in Algorithm 1: why are two differently sampled mini-batches used for the update of $\omega$? From the code in the supplementary material, the same sampled data is used in the implementation.
10. Proposition 1: it seems the bound is meaningless when $\rho\rightarrow0$.
11. Lines 447-448 in Appendix: the experimental setting of adding zero-mean Gaussian noise follows MoCo but without citation. Besides, MoCo is not compared in this toy example section.
12. Section B in Appendix: "$g_0$ denotes the average gradient" in Line 483 but $g_0=G_0(\theta)=G(\theta)\tilde{\omega}$ in Eq. (13). What does $\tilde{\omega}$ denote? Is $G_0$ a vector here? It is confusing because capital $G$ denotes a matrix and lowercase $g$ denotes a vector in Eq. (13). The last line in Eq. (13): $G(\theta_t)\rightarrow G(\theta_t)\omega$.
13. Eq. (17) in Appendix: Is it possible to provide a detailed derivation of step (i)?
14. Hyperparameter selection.
- how to set $\rho$? From the code in the supplementary material, it seems $\rho=0$ in the implementation. So why? If so, the regularization term in Eq. (9) and the bound in Proposition 1 are meaningless.
- $\lambda$ in the proposed SDMGrad is tuned via search for the best value. How about the hyperparameters of the baselines? From Table 8 in Appendix, CAGrad performs significantly better than the results in Table 4 in the main text.
- how to set $n$ for SDMGrad-OS?
15. Line 289: the definition of $\Delta m$ is wrong since some task has more than one metric (for example those tasks in Cityscapes and NYU-v2).
16. Lack of some important baselines.
- Nash-MTL [2] outperforms the proposed SDMGrad on all benchmark datasets, i.e., Cityscapes, NYU-v2, MT10, and MT50. Besides, the running time of Nash-MTL is shorter than that of SDMGrad-OS on MT10.
- It is better to compare with more baselines such as RLW [1], GradNorm [3], IMTL [4], and so on.
17. SegNet is out-of-dated and performs unsatisfactory. It is better to conduct experiments using a more powerful backbone, such as resnet50 and transformer.
18. The results of SDMGrad-OS on the Cityscapes and NYU-v2 datasets should be reported.
19. It is claimed that SDMGrad-OS can deal with the case where the number of objectives is large (Lines 10-11) while MoCo cannot (Lines 44-47). However, there is no experiment to support this.
20. Lines 94-96: there is no experiment to support the claim that the proposed SDMGrad is model-agnostic.
----
[1] Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning. TMLR 2022.
[2] Multi-Task Learning as a Bargaining Game. ICML 2022.
[3] GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks. ICML 2018.
[4] Towards Impartial Multi-task Learning. ICLR 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answer to Q1** We would like to clarify the novelty of our algorithmic designs and the difference from previous works.
1. As we discussed in lines 152-158, optimizing the constraint-based regularization (see eq. (7)) in CAGrad involves the evaluations of the product $\\|h_0\\|\\|g_w\\|$ and the ratio $\\|h_0\\|/\\|g_w\\|$, which can heavily complicate the design of unbiased stochastic gradients/multi-gradients in the $w$ and $\theta$ updates. As a result, CAGrad does not have performance guarantees in the stochastic setting. As a comparison, our angle-based regularization admits very simple optimization steps in the stochastic setting (see lines 5 and 7 of our Algorithm 1), while guaranteeing convergence for a general regularization constant $\lambda$. Thus, developing a simple and provable stochastic MOO method under such a direction-oriented mechanism is the motivation and advantage of our design in eq. (6).
2. Our regularization term in Eq. (9) and the one in MoCo serve different purposes and lead to different analyses. Specifically, our regularizer ensures the last-iterate convergence for solving Eq. (9), whereas the regularizer in MoCo ensures the Lipschitz continuity of the solution $w^*_{\rho,0}(\theta)$ (under our notations) w.r.t. $\theta$.
3. We agree that the task sampling is not new, but it is new to show that such sampling guarantees the near-unbiasedness of the gradient/multi-gradient in lines 5 and 7 of our Algorithm, and achieves an improved convergence guarantee. Previous works do not have such results.
**Answer to Q2, 3, 7.** Many thanks. We will follow your suggestions to improve the presentation.
**Answer to Q4.** We will revise this sentence to “The objective function of MTL is taken as the average loss over the $K$ objectives”.
**Answer to Q5.** Sorry for the improper wording. We will revise it to “MOO rarely finds a common parameter $\theta$ that achieves optima for all individual objective functions simultaneously”.
**Answer to Q6.** Please refer to our answers to Q3 from the reviewer ft1r.
**Answer to Q8.** The most relevant example is multi-task learning, whose objective function is an average loss for all tasks. Another possible case is that we have prior knowledge about the importance of different tasks, and the target is to optimize a weighted sum of loss functions for all tasks.
**Answer to Q9.** In theory, please refer to our answers to Q2 from the reviewer ZeY5. In experiments, we found that using two different mini-batches performs similarly to using the same mini-batch. The result can be found in Table 1 in the global response.
**Answer to Q10.** Note that in theory, upper bounds are often derived in a uniform sense, i.e., they hold for a class of objectives satisfying the assumptions. Thus, $\rho>0$ is necessary here to ensure a performance guarantee in the worst case over this class. However, in practice, the problem may not be the worst case, and hence $\rho\rightarrow 0$ may still work well.
It is also worth mentioning that for another class of objectives under a bounded function value assumption, we do not need this $\rho$ (Theorem 3 and 4).
**Answer to Q11.** We will add the citation of MoCo. The results of reproduced MoCo can be found in Figure 1 in the global response PDF.
**Answer to Q12.** Since we use $g_0$ to denote average gradient, $\widetilde{w}$ denotes a vector whose elements are all $\frac{1}{K}$, where $K$ is the number of objectives. Yes, $G_0$ is a vector. To remove the confusion, we will revise $G_0(\theta)$ to $g_0(\theta)$ in the revision.
**Answer to Q13.** Note that eq. (9) is strongly convex w.r.t. $w$. Using the property of a smooth $\mu$-strongly-convex function $f$ that $\forall x, y, (\nabla f(x)-\nabla f(y))^T(x-y)\geq\mu\\|x-y\\|^2$, we can derive $(i)$. We will clarify it in our revision.
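For completeness, this standard strong-monotonicity property follows by adding the two strong-convexity inequalities at $x$ and $y$ (a sketch):

```latex
% mu-strong convexity applied twice, at x and at y:
%   f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \tfrac{\mu}{2}\|y - x\|^2
%   f(x) \ge f(y) + \nabla f(y)^\top (x - y) + \tfrac{\mu}{2}\|x - y\|^2
% Summing the two and cancelling f(x) + f(y) from both sides gives
\begin{aligned}
0 &\ge \nabla f(x)^\top (y - x) + \nabla f(y)^\top (x - y) + \mu \|x - y\|^2 \\
\iff\quad (\nabla f(x) - \nabla f(y))^\top (x - y) &\ge \mu \|x - y\|^2 .
\end{aligned}
```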
**Answer to Q14.** Our selection takes following steps:
The parameter $\rho$ is used only for theoretical analysis and for guaranteeing our method to work well in the worst case. In the experiments (which are not necessarily the worst case), we find the performance when $\rho=0$ is good enough. Thus, we make this selection for simplicity.
From the formulation of our proposed SDMGrad, we know that it becomes close to SGD when $\lambda$ is large, and close to MGDA with a small $\lambda$. We first try to identify the large and small values of $\lambda$ where the performance shows consistency with our formulation. In experiments, we find $\lambda=0.1$ and $\lambda=10$ work well. Next, we narrow the range by trying different values in [0.1, 10]. Specifically, we search with $\lambda \in [0.1, 1, 2, 5]$ and evaluate which choice is better. For $n$, we make the same selection as the baseline CAGrad.
**Answer to Q15.** Thanks! We revise the definition as:
$\Delta_m=\frac{1}{K}\sum_{k=1}^K(-1)^{\delta_k}(M_{m,k}-M_{b,k})/M_{b,k}$, where $K$ is the number of metrics, $M_{b,k}$ is the value of metric $M_k$ obtained by the baseline, $M_{m,k}$ is the value obtained by the compared method, and $\delta_k=1$ if a higher value is better for metric $M_k$ and $0$ otherwise.
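To illustrate, the revised $\Delta_m$ could be computed as follows (a minimal sketch with made-up metric values; `delta_m` is a hypothetical helper, not code from the paper):

```python
def delta_m(method_metrics, baseline_metrics, higher_is_better):
    """Average per-metric relative performance drop w.r.t. a baseline.

    Implements Delta_m = (1/K) * sum_k (-1)^{delta_k} (M_mk - M_bk) / M_bk,
    where delta_k = 1 if a higher value is better for metric k, so a
    lower Delta_m is better; it is often multiplied by 100 and reported in %.
    """
    assert len(method_metrics) == len(baseline_metrics) == len(higher_is_better)
    total = 0.0
    for m, b, hib in zip(method_metrics, baseline_metrics, higher_is_better):
        sign = -1.0 if hib else 1.0   # flip sign when higher is better
        total += sign * (m - b) / b
    return total / len(method_metrics)

# Two metrics: accuracy (higher is better) and error (lower is better).
score = delta_m([90.0, 0.20], [80.0, 0.25], [True, False])  # -> -0.1625
```

A negative value indicates the method improves over the baseline on average across the metrics.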
**Answer to Q16.** We add baselines including RLW, IMTL-G, and Nash-MTL in both supervised learning and reinforcement learning. It can be seen from Table 3 in the global response PDF that our SDMGrad and SDMGrad-OS achieve a higher success rate than Nash-MTL, while using much less time.
**Answer to Q17.** We follow prior works in using SegNet as the backbone, so the comparison is fair.
**Answer to Q18.** The numbers of tasks in Cityscapes and NYU-v2 are 2 and 3 respectively, which are already very small, and hence we do not apply task sampling in these two datasets.
**Answer to Q19.** Please refer to added experiment in the global response.
**Answer to Q20.** Following the CAGrad paper, all gradient manipulation methods such as CAGrad, MoCo, PCGrad, and MGDA are called model-agnostic because they manipulate gradients rather than models to avoid conflicts.
---
Rebuttal Comment 1.1:
Title: Thanks for reply
Comment: Thanks for the response. It addresses some of my concerns, but many remain.
---
**Response to "Answer to Q4"**: Does it mean only $\frac{1}{K}\sum_{i=1}^K \ell_i$ (where $\ell_i$ is the loss of i-th task) is the objective function of MTL? How about $\frac{1}{K}\sum_{i=1}^K w_i\ell_i$, where $w_i$ is the task weight of i-th task.
**Response to "Answer to Q8"**: Same question as above.
**Response to "Answer to Q9"**: Please see **Response to "Answer to Q14"** below.
**Response to "Answer to Q11"**: It seems MoCo and the proposed SDMGrad perform comparably since they are both stochastic methods. Thus, this toy example cannot demonstrate the advantages of the proposed method over MoCo.
**Response to "Answer to Q14"**:
- This answer means **the proposed method is designed for theoretical analysis and is not used in practice**. In other words, in practice, SGD is used to solve Eq. (8) and **we do not need anything introduced in Section 4.2**.
- It is better to provide a comprehensive ablation study of $\lambda$ (only two values of $\lambda$ are conducted in the paper).
- **This question has not been answered.** Q14: "How about the hyperparameters of baselines? From Table 8 in Appendix, CAGrad performs significantly better than the results in Table 4 in the main text."
- I cannot understand it. "For $n$, we make the same selection as the baseline CAGrad." in Answer to Q14.
**Response to "Answer to Q16"**:
- What are the meanings of SGD and Unit. Scal. in Tables 1, 2, and 3 in the global response PDF?
- It seems $\Delta_m$ of RLW in Table 2 in the global response PDF is 7.78, according to Table 2 in the Nash-MTL paper.
- **Nash-MTL outperforms the proposed method on both Cityscapes and NYU-v2 datasets, according to the results of Tables 1 and 2 in the global response PDF**.
- It seems the running time of Nash-MTL reported in Table 3 in the global response PDF is not convincing. Table 5 in the Nash-MTL paper shows that Nash-MTL can be as efficient as PCGrad by tuning the hyperparameter (the frequency of task weights updates). It seems the computational cost of PCGrad is similar to the proposed method. Thus, I do not think Nash-MTL is 7-9 times slower than the proposed method.
**Response to "Answer to Q18"**: If my understanding is correct, the proposed SDMGrad-OS method can work when the number of tasks is small. It is important to evaluate it comprehensively. If SDMGrad-OS cannot work with small task numbers, it is necessary to explain why and the definition of "small".
**Response to "Answer to Q19"**: Which experiment?
**New Question**: Line 435 in Appendix: what is the detail of rescaling? Lines 435-437 in Appendix: MoCo uses softmax for projection operation (see Page 30 of MoCo paper in the ICLR version).
---
**All in All, I appreciate the theoretical contributions of this paper. However, I have two major concerns about this paper.**
- **The proposed method is inconsistent with the implementation**. There are **four inconsistencies** as follows.
1. In the implementation, each task's gradient is normalized by its norm before Line 5 of Algorithm 1.
2. $\rho$ in Line 5 of Algorithm 1 is set as $0$ in implementation.
3. Two mini-batches strategy in Line 5 of Algorithm 1 is not used in the implementation.
4. $\alpha_t$ in Line 7 of Algorithm 1 is set as $\frac{\alpha_t}{1+\lambda}$ in the implementation, where $\lambda$ is the same as in Line 5 of Algorithm 1 and is a hyperparameter that needs to be carefully tuned.
**The second and third points indicate that the proposed method in Section 4.2 is for better convergence properties but is not used in practice**. I know **the first point** is a "potential" trick in MTL that significantly affects the performance. Thus, I think it should be mentioned in the paper. **The last point** means tuning $\lambda$ has the same effect as directly tuning the learning rate $\alpha_t$, which also significantly affects the performance. According to the empirical results in [1], the simplest method, $\min\sum_{i=1}^K \ell_i$, can perform comparably to or even better than all those multi-objective optimization methods for both supervised learning and reinforcement learning by tuning some hyperparameters (like the learning rate).
[1] In Defense of the Unitary Scalarization for Deep Multi-Task Learning. NeurIPS 2022.
- **The experimental results cannot demonstrate the effectiveness of the proposed method.**
1. **Nash-MTL outperforms the proposed method** on both Cityscapes and NYU-v2 datasets, according to the results of Tables 1 and 2 in the global response PDF.
2. **The hyperparameters of the proposed method are carefully tuned, while the baselines are not**. See the results of CAGrad in Tables 3 and 4 in the paper and Tables 7 and 8 in Appendix.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback (Part1)
Comment: We thank the reviewer for further reply and for appreciating our theoretical contributions.
___
**Response to "Answer to Q4" and "Answer to Q8"**: Here, we refer to the commonly used average loss $\frac{1}{K}\sum_{i=1}^K \ell_i$, following the CAGrad paper. However, we can also use the weighted average loss $\frac{1}{K}\sum_{i=1}^K w_i\ell_i$ when the preferences $w_i, i=1,\dots,K$ are known. In fact, our framework also covers this case by setting the weight $\widetilde{w}_i$ in our $g_0=\sum_i\widetilde{w}_i\nabla \ell_i$ to $\frac{1}{K}w_i$. In other words, our framework covers both formulations.
**Response to "Answer to Q9" and "Answer to Q14"**:
1. We believe there is a misunderstanding of our answer to Q14. As we emphasized in the response, the parameter selection (such as $\rho>0$) is to ensure that our method can **also** work well in the worst-case scenario. This type of theoretical result (i.e., a convergence upper bound) is very common in optimization and is often used to evaluate the theoretical performance of different algorithms.
However, the experiments are not necessarily worst cases, and hence in practice people usually tune the hyperparameters.
In our experiments, we found that the choice of $\rho=0$ performs similarly to a small positive $\rho>0$, so we set $\rho=0$ for simple implementation. To fully resolve your concern, we have added an additional experiment (see the table below) with $\rho=0.01$, which achieves performance similar to $\rho=0$.
| Method | Segmentation $\uparrow$ | | Depth $\downarrow$ | | $\Delta \mathrm{m}$% $\downarrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: |
| | mIoU | Pix Acc | Abs Err | Rel Err | |
| Independent | 74.01 | 93.16 | 0.0125 | 27.77 | |
| SDMGrad ($\lambda=0.3,\rho=0.01$) |75.15| 93.45 | 0.0161 | 29.59 | 8.42 |
| SDMGrad ($\lambda=0.3,\rho=0$) |75| 93.43 | 0.0135 | 35.35 | 8.39 |
2. Following your suggestion, we provide a comprehensive ablation study of $\lambda$ on Cityscapes in the following table.
| Method | Segmentation $\uparrow$ | | Depth $\downarrow$ | | $\Delta \mathrm{m}$% $\downarrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: |
| | mIoU | Pix Acc | Abs Err | Rel Err | |
| Independent | 74.01 | 93.16 | 0.0125 | 27.77 | |
| SDMGrad ($\lambda=0.1$) |75.53| 92.94 | 0.0165 | 37.4 | 16.99 |
| SDMGrad ($\lambda=0.2$) |73.78| 92.95 | 0.0141 | 36.53 | 11.17 |
| SDMGrad ($\lambda=0.3$) |75| 93.43 | 0.0135 | 35.35 | 8.39 |
| SDMGrad ($\lambda=0.4$) |75.24| 93.52 | 0.0133 | 38.73 | 10.94 |
| SDMGrad ($\lambda=0.5$) |74.8| 93.42 | 0.0138 | 37.05 | 10.62 |
| SDMGrad ($\lambda=0.6$) |74.63| 93.46 | 0.014 | 37.3 | 11.32 |
| SDMGrad ($\lambda=0.7$) |75.49| 93.58 | 0.0142 | 37.33 | 11.46 |
| SDMGrad ($\lambda=0.8$) |75.18| 93.52 | 0.0137 | 37.89 | 11.03 |
| SDMGrad ($\lambda=0.9$) |75.2| 93.53 | 0.0139 | 39.51 | 12.92 |
3. The results of all baselines in Tables 4-8 are directly quoted from their original papers. CAGrad has a hyperparameter $c$ similar to our $\lambda$ in SDMGrad; the value of $c$ in CAGrad is set to 0.2 in Table 3 and 0.4 in Table 4.
The CAGrad results in Tables 3-4 are those with the best average training loss, while the CAGrad results in Tables 7-8 are those with the best performance drop $\Delta_m$; this is consistent with the presentation in the original CAGrad paper. We report both results for the proposed SDMGrad in Tables 3-4. In fact, we discussed this in Lines 291-292, and we will clarify it in the revision.
4. The original CAGrad paper proposed a speedup strategy via sampling a subset of tasks. Following CAGrad, we set $n=4$ for MT10 and $n=8$ for MT50.
**Response to "Answer to Q11"**: In this toy example, our goal is to show that our SDMGrad method converges to the target solution, and hence does not fall short compared to MoCo. Note, however, that in the other experiments on Cityscapes, NYU-v2, and MT10, our SDMGrad clearly outperforms MoCo.
**Response to "Answer to Q16"**:
1. SGD means that SGD is used to optimize the average loss of all tasks during the training. Unit. Scal. denotes the unitary scalarization method proposed in [1].
2. Sorry for the carelessness. Yes, the $\Delta_m$ of RLW should be 7.78, and 10.11 is the MR (mean rank) of RLW. We will revise it.
3. Nash-MTL achieves a better performance drop than the proposed SDMGrad on both Cityscapes and NYU-v2. However, SDMGrad performs better than or similarly to Nash-MTL on the segmentation task. Moreover, SDMGrad outperforms Nash-MTL in both success rate and time efficiency on MT10.
4. The official implementation of Nash-MTL on MT10 was not released by the original paper, so we use the most recent re-implementation and reproduced results in [2]. The reproduced results in Table 3 are consistent with those in [2]. We will clarify this in the revision.
[1] In defense of the unitary scalarization for deep multi-task learning. NeurIPS.
[2] FAMO: Fast Adaptive Multitask Optimization. arXiv. | Summary: The contributions are as follows:
- First, this work gives a new framework of direction-oriented multi-objective optimization.
- Second, they propose an algorithm, SDMGrad (with an objective-sampling variant for when the number of objectives is large).
- Third, they give a convergence analysis for their algorithm and show that it has improved complexity compared to previous work.
- Finally, they show good empirical performance on both supervised learning (semantic segmentation + depth estimation) and reinforcement learning (robot manipulation) paradigms.
Strengths: Overall, the paper is clearly written, the new algorithm is straightforwardly effective, and the convergence rates beat existing bounds with fewer convergence assumptions. More specifically:
- The proposed formulation is a generalization of existing algorithms, CAGrad and SGD
- Algorithm 1 is intuitive to understand and memory-efficient to implement
- The convergence rate beats the previous MoCo by epsilon^2, and even is able to handle the case where the number of objectives is large (via the objective sampling version)
- Experiments are conducted well and show the effectiveness of the method in practice
Weaknesses: - I am unsure about the applicability of the newly proposed formulation in the context of multi-objective optimization. Could the authors share some examples where this direction based regularization would be useful?
- Also, I am not sure how the $-\frac{1}{2}\|d\|^2$ term will affect the regularization, as it seems added for the convenience of the algorithm and convergence proof.
- Combining the above two points, properly choosing $\lambda$ also seems important but adds an extra hyperparameter (making things more costly, empirically). For the examples you could give above, can you also share how this parameter would be chosen?
- I believe the work would benefit from a short discussion of the theorems and proof challenges in the main text. Currently, there are four theorems listed with little discussion of any proof details.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Could you outline, in intuitive terms, a sketch of the proof of Theorem 1? Is the proof inspired by any previous work (such as MoCo or CR-MOGM)?
- Is there any chance to remove or relax the bounded gradient assumption?
- Why can the convergence rate (of both your algorithm and MoCo) improve so much with a bounded function assumption?
- Do you have any sense how things would change if you remove the smoothness assumption?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitation section is included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Could the authors share some examples where this direction based regularization would be useful?**
**A:** This direction based regularization would be useful if one target is to minimize a specific objective function, e.g., the average loss in MTL. A more specific example is provided in A.1 of the appendix, where the goal is also to find the minimizer (the black point in the Pareto front) of the average loss of two tasks. From Fig. 2, it can be seen that MGDA without such regularization converges to different points in the Pareto front when starting from different points. As a comparison, our method can converge to this target black point from different initializations. This supports the importance of our regularization.
**Q2. Also, I am not sure how the $-\frac{1}{2}\\|d\\|^2$ term will affect the regularization, as it seems added for the convenience of the algorithm and convergence proof.**
**A:** The regularizer $-\frac{1}{2}\\|d\\|^2$ is necessary here to ensure the boundedness of the magnitude of the direction $d$. Without this regularizer, it can be seen from eq. (6) that the maximizer $d^*$ can go to infinity, which makes the problem meaningless. This regularizer is not new and has also been used in works such as MGDA and CR-MOGM [1].
[1] Zhou, Shiji, et al. "On the convergence of stochastic multi-objective gradient manipulation and beyond." Advances in Neural Information Processing Systems 35 (2022): 38103-38115.
**Q3. Choosing $\lambda$ also seems like it is important but would add an extra hyperparameter (making things more costly, empirically). For the examples you could give above, can you also share how this parameter would be chosen?**
**A:** From the formulation of our proposed SDMGrad, we know that it becomes close to SGD when $\lambda$ is large, and close to MGDA when $\lambda$ is small. We first try to identify large and small values of $\lambda$ for which the performance is consistent with our formulation. In experiments, we find that $\lambda=0.1$ and $\lambda=10$ work well. Next, we narrow the range by trying different values in $[0.1, 10]$. Specifically, we search over $\lambda \in \{0.1, 1, 2, 5\}$ and evaluate which choice is better. Overall, this grid search takes little effort, and the performance improvement is robust over certain ranges (e.g., $[0.1,1]$ used in our search) within $[0.1,10]$.
**4. Currently, there are four theorems listed with little discussion of any proof details. Could you outline, in intuitive terms, a sketch of the proof of Theorem 1? Is the proof inspired by any previous work (such as MoCo or CR-MOGM)?**
**A:** Great suggestion! We summarize our proof in the following three main steps:
1. Characterization of the last-iterate convergence of SGD in solving eq. (9).
2. Use Proposition 1 to upper-bound the bias of the update vector $d$. The main step here is to use an intermediate quantity $G(\theta_t)w_{t, \rho, \lambda}$ to split the bound into the error in solving eq. (9) and the bias induced by smoothing.
3. Combine the previous two steps with a descent lemma for each objective to derive the final convergence.
We will provide a proof sketch in the revision.
**Q5. Is there any chance to remove or relax the bounded gradient assumption?**
**A:** Interesting question! It is hard to remove this assumption in the current framework. This is because when approximating the true update direction $d^*=G(\theta)w^*_\lambda+\lambda G_0(\theta)$, we cannot get the exact $w^*_\lambda$ but only an estimate $\hat w$; hence, if the gradient $G$ is unbounded, the approximation error $\\|G(\theta)(w^*_\lambda-\hat w)\\|$ can be uncontrollable. This is why the assumption is necessary here.
**Q6. Why can the convergence rate (of both your algorithm and MoCo) improve so much with a bounded function assumption?**
**A:** Good question! Without the bounded-function assumption, we need to add a quadratic term $\rho\\|w\\|^2$ to smooth the problem w.r.t. $w$, which makes the complexity proportional to $\frac{1}{\rho}$ (see the proof of Theorem 1). Since the smoothing factor $\rho$ is sufficiently small, the complexity becomes large. With this assumption, we do not require such a smoothing trick, and hence improve the complexity substantially.
**Q7. Do you have any sense how things would change if you remove the smoothness assumption?**
**A:** Interesting question! First, we note that solving eq. (9) w.r.t. $w$ remains unchanged. The main change lies in the optimization w.r.t. the variable $\theta$, i.e., line 7 of our Algorithm 1. Some new challenges arise here. For example, it is not clear whether $d=G(\theta_t;\zeta)w_{t,S}+\lambda G_0(\theta_t;\zeta)$ still achieves a small bias for approximating the true direction $d^*$. The descent lemma for each objective in our proof may not work here. We expect that some techniques in [1] or proximal methods may be helpful, and we leave this for future study.
[1] Ohad Shamir et al. "Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes".
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! After reading other reviews and your clarifications, I have decided to maintain my original score of 6.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback
Comment: We thank the reviewer 5qpL for the feedback! We will take your suggestions in our revision!
Best,
Authors | Summary: Authors presented a new stochastic gradient method for multi-objective optimization (MOO) problem, called SDMGrad.
Compared with the previous SMG, they claim that SDMGrad does not need to increase the batch size linearly with the number of iterations.
Compared with the previous MoCo, they claim that SDMGrad can be applied to settings with a large number of objectives.
Strengths: The authors presented a new stochastic gradient method for multi-objective optimization (MOO) problem, called SDMGrad.
Detailed background is provided to help understand the history of MOO.
And various experiments show that the new method can outperform the previous methods.
But I need to ask some questions to fully understand the novelty of this method.
Weaknesses: I need to ask some questions to fully understand the novelty of this method.
Also, some technical details are not presented very clearly; I will also post some questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could you detail the advantages and novelty of SDMGrad over MoCo?
First, Line 5 in Algorithm 1 seems similar to Equation 4 of MoCo.
Second, the authors commented that MoCo makes "a relatively strong assumption that the number T of iterations is much larger than the number K of objectives." Well, in real industrial systems, the number T of iterations could be millions, and the number K of objectives (maybe 3 or 10?) is obviously much smaller than T. So this is a natural assumption to me, which weakens the necessity of improving MoCo from this perspective.
2. Why is the equation in Line 5 of Algorithm 1 an unbiased estimate? (Line 175)
3. The equation after Line 175 is the derivative of Equation 9, right? Why do we need two different data samples $\xi$ and $\xi^\prime$?
And how do we implement this in mini-batch training? Sample two mini-batches at each $s$?
4. In Line 5 of Algorithm 1, why do we need $\prod_{\mathcal{W}}$? What is the meaning of this? In Equation 3, $\mathcal{W}$ is a set with infinitely many possible $w$. How does this multiplication work? Is it element-wise?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Could you detail the advantages and novelty of SDMGrad over MoCo? First, Line 5 in Algorithm 1 seems similar with Equation 4 of MoCo. Second, authors commented MoCo "a relatively strong assumption that the number T of iterations is much larger than the number K of objectives." Well, in the real industrial systems, the number T of iterations could be millions, and the number K of objectives (maybe 3 or 10?) is obviously much smaller than T. So this is a natural assumption to me and hence weaken the necessity to improve MoCo from this perspective.**
**A:** Good question! After reading the MoCo paper carefully, we guess that you are referring to eq. (10) of the arXiv version of the MoCo paper.
For the first question, line 5 of our method and eq. (10) of MoCo have some key differences. In eq. (10) of MoCo, an auxiliary tracking variable $Y_k$ is used as a stochastic estimator of the full gradient $G(\theta_k)$, and the direction bias $\\|d(x_k)-Y_kw_k\\|$ can be shown to decrease iteratively. In comparison, our approach directly uses the stochastic gradients $G(\theta_t; \xi), G(\theta_t; \xi^\prime)$ in the $w$ updates, and we show that the direction bias is sufficiently small at each iteration.
For the second question, we wanted to claim that the assumption that $T$ is large at an order of $K^{10}$ is relatively strong. For example, $T$ can be as large as $10^{10}$ even though we choose a relatively small $K=10$. We will make this clear in the revision.
**Q2. Why is the equation in Line 5 of Algorithm 1 an unbiased estimate? Why do we need two different data samples $\xi$ and $\xi^\prime$? And how do we implement this in mini-batch training? Sample two mini-batches at each $s$?**
**A:** Great questions! Since we use two independent data samples $\xi$ and $\xi^\prime$, $G(\theta_t;\xi)$ and $G(\theta_t; \xi^\prime)w_{t,s}+\lambda G_0(\theta_t;\xi^\prime)$ are independent w.r.t. $\xi,\xi^\prime$. Based on the fact that $\mathbb{E}[AB]=\mathbb{E}[A]\mathbb{E}[B]$ if $A$ and $B$ are independent, we can get
$\mathbb{E}[G(\theta_t;\xi)^T(G(\theta_t; \xi^\prime)w_{t,s}+\lambda G_0(\theta_t; \xi^\prime))+\rho w_{t,s}] = \mathbb{E}[G(\theta_t;\xi)^T]\mathbb{E}[G(\theta_t; \xi^\prime)w_{t,s}+\lambda G_0(\theta_t; \xi^\prime)]+\rho w_{t,s}$
$=G(\theta_t)^T(G(\theta_t)w_{t,s}+\lambda G_0(\theta_t))+\rho w_{t,s}$. Therefore, it is an unbiased estimate w.r.t. the data sampling. If, instead, there were only one data sample $\xi$, then $G(\theta_t; \xi)$ and $G(\theta_t; \xi)w_{t,s}+\lambda G_0(\theta_t; \xi)$ would be correlated, which could lead to a biased estimate. This is why we use double sampling in the gradients.
In experiments, we found that using two different mini-batches performs similarly to using the same mini-batch. Thus, for simple implementation, we use the same mini-batch for the gradient constructions. We also provide additional experiments with two different mini-batches in Table 1 above to support this observation.
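The double-sampling argument above can be checked numerically. The sketch below (synthetic gradients with zero-mean mini-batch noise; all names and dimensions are illustrative, not the actual training code) averages many double-sample estimates and compares against the full-gradient quantity $G(\theta_t)^T(G(\theta_t)w_{t,s}+\lambda G_0(\theta_t))+\rho w_{t,s}$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, dim = 3, 5                        # number of tasks, parameter dimension
G = rng.normal(size=(dim, K))        # synthetic "full" per-task gradients G(theta_t)
g0 = G.mean(axis=1)                  # target direction g_0 (average gradient)
w = np.full(K, 1.0 / K)              # current simplex weights w_{t,s}
lam, rho = 0.3, 0.0

def noisy(A):
    # Mimic a mini-batch gradient: full gradient plus zero-mean noise
    return A + rng.normal(scale=1.0, size=A.shape)

def grad_w_estimate():
    # Two independent samples xi and xi' give independent gradient estimates,
    # so E[A^T B] = E[A]^T E[B] applies and the estimate is unbiased.
    G_xi, G_xi2 = noisy(G), noisy(G)
    return G_xi.T @ (G_xi2 @ w + lam * G_xi2.mean(axis=1)) + rho * w

# Monte Carlo average of the stochastic estimates vs. the full-gradient value
est = np.mean([grad_w_estimate() for _ in range(20000)], axis=0)
true = G.T @ (G @ w + lam * g0) + rho * w
```

Reusing a single sample `G_xi` in both factors would correlate them and introduce exactly the bias described above.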
**Q3. Line 5 in Algorithm 1, why we need \prod_{W}? what is the meaning of this?**
**A:** The notation $\prod_{\mathcal{W}}$ denotes the Euclidean projection onto the probability simplex $\mathcal{W}$ (it is not a product). We will clarify this in the revision.
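Such a projection is commonly implemented with the sorting-based algorithm of Duchi et al. (2008); the sketch below is a generic illustration, not necessarily the implementation used in the paper:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum_i w_i = 1}."""
    u = np.sort(v)[::-1]                      # sort in descending order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    # largest j such that u_j + (1 - css_j) / j > 0
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)      # shared shift
    return np.maximum(v - theta, 0.0)         # clip negatives to zero
```

A point already on the simplex is left unchanged, while an arbitrary weight vector is shifted and clipped so that its entries are nonnegative and sum to one.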
---
Rebuttal Comment 1.1:
Title: following questions
Comment: Thanks for the reply of authors!
I just have a few more questions to confirm if my understanding is right and to determine if a rating change is needed:
(1) The motivation of stochastic MOO algorithms?
Why do we want stochastic MOO? Because the true gradient or full gradient is hard to get (as claimed in MoCo)? What is the definition of the true gradient or full gradient?
I have this question because MGDA and CAGrad were introduced and worked before these stochastic MOO algorithms. I wonder how MGDA or CAGrad obtained objective gradients; maybe they just used the gradient of a mini-batch? It is hard to believe MGDA or CAGrad was using a full gradient (the gradient over the whole dataset).
(2) To understand this paper, I go back to read MoCo and seems MoCo is to design an unbiased estimate of the model parameter's gradient; and this paper is to design an unbiased estimate of the gradient of the task weight (w), am I right?
(3) In previous methods like MGDA and CAGrad, how did they solve for $w$ in equation (8)? Is this paper the first to use an SGD-style method to solve for $w$?
---
Reply to Comment 1.1.1:
Comment: Thanks for the further questions
Dear Reviewer ZeY5,
We thank you a lot for the feedback and the additional questions! Our further responses are listed below.
**Q1. The motivation of stochastic MOO algorithms? Why we want stochastic MOO? because the true gradient or full gradient is hard to get (as claimed in MoCo)? What is the definition of true gradient or full gradient?
I have this question because MGDA and CAGrad has been introduced and worked before these stochastic MOO algorithms, I wonder how did MGDA or CAGrad to get objective gradients? maybe they just use the gradient of a mini-batch? hard to believe MGDA or CAGrad was using a full gradient (the gradient of the whole dataset)?**
**A:** Great question! The biggest motivation for studying stochastic MOO is that the full-batch gradient (computed using all data samples) requires large memory or computation and may be computationally infeasible in large-sample scenarios. The MGDA and CAGrad works describe and analyze their methods in the deterministic case using full gradients but, as the reviewer also notes, use mini-batch sampling in their implementations. However, there is no theoretical guarantee for their approaches in the stochastic case. In fact, some theoretical works (e.g., [1] and the MoCo paper) show counterexamples under which MGDA and CAGrad cannot converge to a Pareto stationary point. In addition, there is empirical evidence, such as Fig. 2 in [1] and Fig. 3 in the MoCo paper, showing such divergence behaviors.
[1] Suyun Liu and Luis Nunes Vicente. "The Stochastic Multi-gradient Algorithm for Multi-objective Optimization and its Application to Supervised Machine Learning."
**Q2. To understand this paper, I go back to read MoCo and seems MoCo is to design an unbiased estimate of the model parameter's gradient; and this paper is to design an unbiased estimate of the gradient of the task weight (w), am I right?**
**A:** Exactly! In MoCo, the tracking variable $Y_k$ is used as a stochastic estimator for the full gradient $G(\theta_k)$, and the direction bias $\\|d(x_k)-Y_kw_k\\|$ can be shown to decrease iteratively. Our approach is to design an unbiased estimate of the gradient of the task weight (w), which in turn leads to a near-unbiased multi-gradient estimation at each iteration.
**Q3. In previous methods like MGDA and CAGrad, how did they solve w from equation (8)? this paper is the first one that use SGD-style to solve w?**
**A:** Good question! In CAGrad, they solved the constrained problem in eq. (7), which uses a constraint to keep the update direction close to the average gradient. It is equivalent to solving the Lagrangian of this constrained problem (see page 3 of the CAGrad paper), where they used projected gradient descent to solve for $w^*$ in the minimization problem $w^*=\arg\min_{w\in\mathcal{W}} g_w^Tg_0+\sqrt{\phi}\\|g_w\\|$ with $\phi=c^2\\|g_0\\|^2$. However, their theoretical guarantee relies heavily on access to the full gradients, and in the stochastic setting the analysis is unclear because the norms $\\|g_w\\|\\|g_0\\|$ complicate the design of unbiased gradient/multi-gradient estimates. Similarly, MGDA uses projected gradient descent to solve eq. (3) in our paper and achieves guaranteed convergence in the deterministic case.
However, in the stochastic case, MoCo paper provides a counterexample (see Section 2.3 in MoCo paper), which shows that directly using mini-batch gradients in MGDA leads to a **biased** multi-gradient estimate.
To the best of our knowledge, our work is the first one to use SGD with near-unbiased gradient estimates to solve $w$. | Summary: This paper proposes a direction-oriented multi-objective gradient descent algorithm under stochastic gradient settings. The authors show that the algorithm can benefit from the direction-oriented mechanism and ensure optimal convergence. In addition, an objective sampling strategy is applied to the proposed algorithm for the case of many objectives.
Strengths: This paper studies an important problem in multi-objective optimization. It has solid analysis and experiments.
Weaknesses: 1. The key benefit of MoCo compared with CR-MOGM is that the direction can gradually become unbiased relative to the one calculated with full-batch gradients (Lemma 2). However, there is no corresponding analysis of the direction bias in this paper.
2. I checked that CR-MOGM has the same sample complexity as SDMGrad in the non-convex setting. Note that $O(T^{-1/4})$ in CR-MOGM is for the first-order convergence of $\|\|G_t w_t\|\|$, while the second-order convergence of $\|\|G_t w_t\|\|_2^2$ is $O(T^{-1/2})$. Usually, the first-order one is more often used.
3. The idea is very similar to [1], which studies multi-objective online learning that can easily be reduced to the stochastic setting. [1] also uses a regularization to restrict the direction to be close to a prior preference.
4. CR-MOGM has not been compared in the experiments.
5. I am not convinced by the motivation for facing many-objective problems, as stated in Line 46. It is known that many-objective problems are hard to solve via Pareto optimization: when the number of objectives becomes too large, the Pareto set covers nearly the entire space, so optimizing towards Pareto optimality is meaningless.
[1] Jiang et al. Multi-Objective Online Learning. ICLR 23.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: No
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Novelty is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. There is no respective analysis of the direction bias in the analysis.**
**A:** Our analysis of the direction bias can be found in Proposition 1. It can be seen that the bias is upper-bounded by an exponentially decaying term $4C_g^2(1-2\beta_t\rho)^S$ plus two small terms $\frac{\beta_tC_g^2C_1}{\rho}$ and $\rho$. This error bound is controllable by choosing the stepsize $\beta_t$ and the smoothing constant $\rho$ to be appropriately small.
**Q2. CR-MOGM has the same sample complexity as SDMGrad in the non-convex setting.**
**A:** Thanks for pointing this out! We have double-checked the sample complexity of CR-MOGM, and the second-order convergence is indeed $\mathcal{O}(T^{-\frac{1}{2}})$. We will revise it accordingly.
**Q3. The idea is similar to [1].
[1] Jiang et al. Multi-Objective Online Learning. ICLR 23.**
**A:** Thanks for pointing this paper out for us! However, we believe our work has substantial differences from this work:
1. First, we focus on different objective functions. [1] studied the multi-objective online convex optimization problem with convex objectives, whereas we focus on stochastic multi-objective optimization with nonconvex objectives.
2. Second, our regularizer differs from theirs. Specifically, the $l_1$-regularizer in [1] uses time-varying historical information to stabilize the algorithm's performance, whereas our regularizer $\lambda\langle g_0, d\rangle$ keeps the update direction $d$ close to a fixed direction $g_0$.
3. Third, the algorithms are different. The method in [1] is motivated by mirror descent, whereas ours is an SGD-type approach with a double-sampling scheme.
We will cite this relevant paper in the revision and provide a detailed discussion for comparison.
**Q4. CR-MOGM has not been compared in the experiments.**
**A:** The official code of CR-MOGM has not been released, so we implemented it ourselves. We provide the preliminary results of CR-SDMGrad on Cityscapes and NYU-v2 in Tables 1-2 of the global response. It can be seen that our method performs comparably to CR-SDMGrad on Cityscapes, while on NYU-v2 our method is significantly better.
**Q5. I am not convinced with the motivation to face many objectives problems as stated in Line 46. It is known that the many objective problem is hard to be solved by Pareto optimization, since when the number of objectives becomes too large, the Pareto set will cover nearly the majority of the space, then optimizing towards Pareto optimality is meaningless.**
**A:** Sorry for the confusion and let us clarify the motivation here. In lines 44-46, we mentioned that MoCo needs an assumption that the number $T$ of iterations is large at an order of $K^{10}$, where $K$ is the number of objectives. This is a strong assumption even for a small number $K=10$ (in this case, $T$ is as large as $10^{10}$). Then, our motivation is to remove this requirement rather than the need to face many objectives. We will clarify this in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Although the reviewer did not engage, I'll carefully read and consider the rebuttal during the decision period.
AC
---
Rebuttal 2:
Title: Look forward to the reviewer's feedback
Comment: Dear Reviewer tz6S,
Thanks so much for your time and efforts in the review. While the discussion period has started for a while, we have not received your feedback on our response. We really appreciate it if you can let us know whether our response resolves your concerns (e.g., comparison to CR-MOGM) or not. Your further comments and questions are appreciated!
Best,
Authors | Rebuttal 1:
Rebuttal: **To all reviewers:**
We thank all reviewers for their time and valuable comments! Based on the reviewers’ suggestions, we have added the following additional experiments:
1. Comparison to additional baselines including SGD, unitary scalarization, RLW, IMTL, and NashMTL.
2. Running time comparison to unitary scalarization and MoCo.
3. Comparison to CR-MOGM.
4. Comparison between one data sample and two different data samples in the inner loop of SDMGrad.
Please refer to the attached PDF file for these experiment results.
**We also add details of how the inequality $(iii)$ is derived in Appendix C.2, Line 533-534.**
The detailed steps are shown as follows.
$\|G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)- G(\theta_t)w_{t,\rho,\lambda}^*-\lambda G_0(\theta_t)\|^2$
$=\|G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)\|^2+\|G(\theta_t)w_{t,\rho,\lambda}^*+\lambda G_0(\theta_t)\|^2-2\langle G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t), G(\theta_t)w_{t,\rho,\lambda}^*+\lambda G_0(\theta_t)\rangle$
$\overset{(i)}{\leq}\|G(\theta_t)w_{t,\rho,\lambda}^*+\lambda G_0(\theta_t)\|^2-\|G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)\|^2$
$\overset{(ii)}{\leq}\frac{1}{2}\rho,$
where $(i)$ follows from the optimality condition of $w_{t,\lambda}^*$ for the unsmoothed problem, which implies, for any $w$ in the simplex (in particular $w=w_{t,\rho,\lambda}^*$),
$\langle G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t), G(\theta_t)w_{t,\rho,\lambda}^*+\lambda G_0(\theta_t)\rangle$
$\geq\langle G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t), G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)\rangle=\|G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)\|^2.$
Step $(ii)$ follows from $(iv)$ in eq. (21), which uses the $\frac{\rho}{2}$-suboptimality of $w_{t,\rho,\lambda}^*$ for the unsmoothed objective: $\|G(\theta_t)w_{t,\rho,\lambda}^*+\lambda G_0(\theta_t)\|^2-\|G(\theta_t)w_{t,\lambda}^*+\lambda G_0(\theta_t)\|^2\leq\frac{1}{2}\rho$.
Then the inequality $(iii)$ can be derived.
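As a quick numerical sanity check of the $\frac{\rho}{2}$ gap used in step $(ii)$, the sketch below (our own illustration, not the authors' code; the random instance, the `project_simplex` helper, and the projected-gradient solver are all assumptions) verifies on a small example that the smoothed minimizer is at most $\frac{\rho}{2}$-suboptimal for the unsmoothed objective, which follows because $\|w\|^2\leq 1$ on the simplex.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sorting-based)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    r = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    return np.maximum(v + (1.0 - css[r]) / (r + 1.0), 0.0)

def solve(G, g0, lam, rho, steps=20000, beta=0.01):
    """Projected gradient descent on ||G w + lam*g0||^2 + (rho/2)||w||^2
    over the simplex (a stand-in for the exact minimizers in the proof)."""
    w = np.ones(G.shape[1]) / G.shape[1]
    for _ in range(steps):
        grad = 2.0 * G.T @ (G @ w + lam * g0) + rho * w
        w = project_simplex(w - beta * grad)
    return w

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 3))                # a small random instance
g0, lam, rho = G.mean(axis=1), 0.5, 0.1

F = lambda w: np.sum((G @ w + lam * g0) ** 2)  # unsmoothed objective
w_star = solve(G, g0, lam, rho=0.0)            # approx. w_{t,lambda}^*
w_rho = solve(G, g0, lam, rho=rho)             # approx. w_{t,rho,lambda}^*

gap = F(w_rho) - F(w_star)                     # should not exceed rho/2
```

Up to the solver's approximation error, `gap` stays below $\rho/2 = 0.05$ on such instances, matching the bound used in step $(ii)$.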
Pdf: /pdf/61a6adaf7f52df6b749ec8e1828f1350a2d9410f.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper introduces the stochastic direction-oriented multi-objective gradient descent (SDMGrad) and SDMGrad-OS (OS stands for objective sampling). The idea of SDMGrad is to make the objective “direction-oriented”, which is done by regularizing the descent direction towards a specific direction. This direction is chosen to be the average of all gradients, which would allow the regularizer to interpolate the algorithm between multiple-gradient descent and regular gradient descent. This formulation is similar to CAGrad, but results in a simpler stochastic analysis and a simpler algorithm. The experiments show that, for a well-tuned regularization parameter, SDMGrad’s performance is competitive with the state-of-the-art. The objective sampling procedure in SDMGrad-OS makes it run significantly faster than SDMGrad when the number of objectives is large. Analysis for both methods is provided with good sample complexities.
Strengths: - The idea is clear, the formulation is sound, and the algorithm is simple to implement.
- The formulation allows for a better stochastic analysis and simpler algorithm. The sample complexity achieved by the algorithm is good.
- SDMGrad generalizes MGDA and GD, and is a better formulation of CAGrad.
- The code is available, so experiments can be reproduced.
- Performance is good overall, and can be better for fine-tuned lambda.
- Objective sampling helps when the number of objectives is large. The analysis also justifies this procedure, which is a good contribution.
Weaknesses: - Need to tune lambda to get better results. It does not seem to be easy to pick a good starting point as the algorithm is not very robust to the choice of lambda. Also, it is not clear how to choose $\rho$.
- The algorithm interpolates between MGDA and GD (with $w$ regularization), so the algorithmic novelty is limited.
- In equation 4, $h_{t,i}$ is not explained.
- The improvements are not significant, and that is for the best choices of $\lambda$. For other choices of $\lambda$, it might be worse. It seems to be on par with CAGrad and MoCO, which is competitive.
- In the appendix, the authors mention Young's inequality for some steps, but I think it's Cauchy-Schwarz, though I might be mistaken.
- Note that the Lipschitzness (bounded gradient) assumption of $L_i$ would not work with strongly convex losses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Which method do you use to project the weights onto the simplex? Is the projection costly?
- Have you considered relaxing the simplex constraint? Would it make sense to use the closed-form minimum $w$ for unconstrained $w$?
- Have you considered a different regularizer for $w$? How do you set $\rho$ in practice?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Need to tune lambda to get better results. It does not seem to be easy to pick a good starting point as the algorithm is not very robust to the choice of lambda. Also, it is not clear how to choose $\rho$.**
**A:** Good question! From the formulation of our proposed SDMGrad, we know that it becomes close to SGD when $\lambda$ is large, and close to MGDA when $\lambda$ is small. We first identify large and small values of $\lambda$ for which the performance is consistent with our formulation; in experiments, we find that $\lambda=0.1$ and $\lambda=10$ work well. Next, we narrow the range by trying different values in $[0.1, 10]$. Specifically, we search over $\lambda \in \{0.1, 1, 2, 5\}$ and evaluate which choice is better. Overall, this grid search takes little effort, and the performance improvement is robust over certain sub-ranges (e.g., the $[0.1, 1]$ range used in our search) of $[0.1, 10]$.
The parameter $\rho$ is used for theoretical analysis and guarantees that the proposed method works well in the worst case. In the experiments (which are not necessarily the worst case), we find the performance when $\rho=0$ is good enough. Thus, we make this choice for simple implementation.
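To make the interpolation between SGD and MGDA concrete, here is a small numpy sketch (our own illustration, not the authors' code) of the full-gradient version of the subproblem: $w$ is optimized over the simplex by projected gradient descent, and the combined direction is taken proportional to $G(\theta)w+\lambda g_0$, consistent with the derivation in the global response. The helper names and hyperparameters are assumptions for illustration.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sorting-based)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    r = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    return np.maximum(v + (1.0 - css[r]) / (r + 1.0), 0.0)

def sdmgrad_direction(G, lam, steps=2000, beta=0.01):
    """Full-gradient sketch: minimize ||G w + lam * g0||^2 over the simplex
    by projected gradient descent, then form the combined update direction."""
    g0 = G.mean(axis=1)                       # averaged-gradient anchor direction
    w = np.ones(G.shape[1]) / G.shape[1]      # start from uniform weights
    for _ in range(steps):
        grad_w = 2.0 * G.T @ (G @ w + lam * g0)
        w = project_simplex(w - beta * grad_w)
    return (G @ w + lam * g0) / (1.0 + lam)

rng = np.random.default_rng(0)
G = rng.standard_normal((5, 3))               # columns are per-objective gradients
g0 = G.mean(axis=1)
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

d_mgda = sdmgrad_direction(G, lam=0.0)        # min-norm (MGDA-like) direction
d_sgd = sdmgrad_direction(G, lam=100.0)       # nearly the averaged-gradient direction
```

For $\lambda=0$ the returned direction has norm no larger than that of $g_0$ (the min-norm property), while for large $\lambda$ it is nearly parallel to $g_0$; these are the two regimes that bracket the grid search described above.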
**Q2. The algorithm interpolates between MGDA and GD (with w regularization), so the algorithmic novelty is limited.**
**A:** Let us explain the algorithmic novelty of our method as follows.
First, our method has substantial differences from existing stochastic MGDA-type methods. For example, MoCo constructs an additional auxiliary sequence $Y_k$ to approximate the gradients in MOO with decreasing estimation error, whereas our approach leverages a double-sampling mechanism that admits a near-unbiased multi-gradient estimation at each iteration. This type of approach is new in the literature.
Second, our regularization is new and contains careful designs. Compared to the most relevant CAGrad method that uses a constraint to regularize the update direction close to $g_0$, our regularization not only enjoys such direction-oriented benefit, but also admits a provable algorithmic design in the challenging stochastic setting.
Third, our double-sampling-based approach is flexible to incorporate the objective sampling for better efficiency. To the best of our knowledge, this is the first result for a stochastic MOO algorithm with objective sampling.
**Q3. What is $h_{t,i}$ in equation 4?**
**A:** $h_{t,i}$ is defined in eq. (6) of MoCo paper in the ICLR version, where $h_{t,i}$ denotes a stochastic estimator of $\nabla L_i(\theta)$ at t-th iteration. We will clarify this in the revision.
**Q4. In proof, Young’s inequality or Cauchy-Schwarz.**
**A:** Thanks! We will double check the proofs and make the revisions accordingly.
**Q5. Note that the Lipschitzness (bounded gradient) assumption of $L_i$ would not work with strongly convex losses.**
**A:** Note that in our setting, the function $L_i(\theta)$ is generally nonconvex rather than strongly convex. The problem in eq. (9) is strongly convex w.r.t. $w$ rather than $\theta$.
**Q6. Which method do you use to project the weights onto the simplex? Is the projection costly?**
**A:** We compute the Euclidean projection onto the probability simplex using the algorithm proposed by [1], which involves solving a convex quadratic program. The implementation used in our experiments follows the repository in [2], which is very efficient in practice.
[1] Weiran Wang, and Miguel Á. Carreira-Perpiñán. Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application. arXiv preprint arXiv: 1309.1541
[2] Adrien Gaidon. Compute Euclidean projections on the simplex or L1-ball.
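For concreteness, the sorting-based variant of this projection (in the spirit of [1]) fits in a few lines; the sketch below is our own illustrative numpy implementation, not the repository code from [2].

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1},
    via an O(n log n) sorting procedure in the spirit of [1]."""
    u = np.sort(v)[::-1]                    # entries in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    # largest k with u_k + (1 - sum of top-k entries) / k > 0
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)  # shift making the result sum to 1
    return np.maximum(v + theta, 0.0)

w = project_simplex(np.array([0.9, 0.6, -0.2]))  # -> [0.65, 0.35, 0.0]
```

Each call costs one sort plus a few vector operations, which is why the per-step projection in the inner loop is cheap in practice.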
**Q7. Have you considered relaxing the simplex constraint? Would it make sense to use the closed-form minimum w for unconstrained w?**
**A:** Good question! It is possible to use the closed-form minimum $w$ if the full gradients are used. For example, for the two-objective example provided by the MoCo paper (see eq. (5) therein), a closed-form solution is provided. This solution is accurate when full gradients are taken. However, when only stochastic gradients are available, they show that a large bias can be induced. Thus, it may be challenging to apply this idea in the stochastic setting, which is the focus of this paper. However, we would like to leave such exploration for future study.
**Q8. Have you considered a different regularizer for $w$? How do you set $\rho$ in practice?**
**A:** Great question! The main purpose of the regularizer for $w$ is to make the problem in eq. (9) strongly convex so that the theoretical convergence guarantee can be established. For this reason, we use the simplest quadratic regularizer. However, it is possible to use other strongly convex regularizers for a stronger theoretical guarantee. We would like to leave this for future study.
In experiments, we tune $\rho$ and find the best range to be $[0,0.1]$. For a simple implementation, we simply set $\rho=0$ in all experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying my concerns. I have read the rebuttals and decided to update my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks so much for your updates
Comment: Dear Reviewer,
Thanks so much for your updates and for raising your score! We will take your suggestions into our revisions!
Best,
Authors | Summary: This paper proposes stochastic variants of the MGDA for multi-objective optimization (MOO) problem, named "Stochastic Direction-Oriented Multi-objective Gradient descent" (SDMGrad) and SDMGrad-OS with efficient sampling. Optimization convergence analysis to the Pareto stationary point is provided, with improved complexities compared to prior works on stochastic MOO.
The proof of optimization convergence mainly follows that of [12].
Experiments on multi-task supervised learning and reinforcement learning justified the superior performance of the proposed method.
Strengths: 1. The paper proposes a stochastic variant of MGDA to address the MOO problem.
2. Optimization convergence analysis is provided with improved complexity over [12] without bounded function values assumption, and with improved complexity over [11] with bounded function values assumption.
3. Experiments on MTL benchmarks demonstrated the superior performance of the proposed methods.
Weaknesses: ## Soundness
### Lack of justification of some claims
See __Questions__-1,2.
### Some steps of the proof are ambiguous or unclear
See __Questions__-3,4.
### Theoretical and practical benefits of the proposed algorithm over the simple unitary scalarization (GD or SGD) baseline are unclear
1. The algorithm is not very efficient. In Theorems 1 and 2, with a constant $\lambda$, it requires $S = \mathcal{O}(T^3)$ inner-loop iterations, and each inner-loop step needs to compute a projection onto the simplex, which adds further time complexity to MGDA-based algorithms, as they already require computing $K$ gradients at each outer iteration compared to $1$ gradient for GD or SGD.
How does your algorithm compare to the simple SGD baseline in terms of convergence to Pareto stationary point in clock time? It would be better if some evaluations and discussions regarding this can be provided.
E.g., in Table 6 or other experiments, compare with the time and performance of the simple SGD baseline.
2. What is the theoretical benefit of the proposed algorithm compared to SGD of unitary scalarization, as the latter can also converge to a Pareto stationary (PS) point and is more efficient?
For example, in [12], in addition to the convergence to a PS point, Lemma 2 is provided to justify the convergence of MoCo to the desired MGDA direction; do similar results apply to the proposed algorithm as well?
3. In the experiments, one baseline method with unitary scalarization is needed, as it has been shown in prior works [a,b] that unitary scalarization outperforms MGDA-based algorithms in some practical applications.
I understand that experiments with varying $\lambda$ are provided, which becomes close to GD with larger $\lambda$, but they are still not the same. Instead I would expect an ablation study with varied $\beta_t$, and especially $\beta_t = 0$.
>[a] Vitaly Kurin et al. "In Defense of the Unitary Scalarization for Deep Multi-Task Learning"
>[b] Derrick Xin et al. "Do Current Multi-Task Optimization Methods in Deep Learning Even Help?"
### Technical novelty is limited
Novelty in terms of proof techniques is limited; the proof mainly follows that of [12].
## Minor
1. Appendix C.2, line 530, "satisfies" -> "satisfy"
2. Move Table 4 to Section 6.2
3. In Appendix D.1, LHS of Equation below line 591, "$E$" -> "$\mathbb{E}$"
================UPDATE==========================
To avoid unnecessary misunderstandings, and unevidenced accusations, I change the word *honest* to *accurate* in my previous comment and also update my final comments and suggestions below.
I appreciate the reviewers carefully checking the code and I also appreciate the authors clarifying what they implemented in the rebuttal.
My current score is based on the theoretical contributions and also conditioned on the authors can correctly implement and report the results with double sampling. Otherwise it could be confusing.
=================================================
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Section 4.2, line 170-172, why is the case "the objective in eq. (8) is merely convex w.r.t. $w$, it can only guarantee the convergence of at least one iterate (not necessarily the last one) to the solution $w_\lambda^*$"?
In fact, there are works that prove the last iterate convergence of convex objectives, see [c].
Since this serves as a motivation for the algorithm design, it is important to clarify it.
Also, although the paper claims "to ensure the more practical last-iterate convergence" in line 172, no last-iterate convergence guarantee for the objective in eq. (9) is provided in the paper.
>[c] Ohad Shamir et al. "Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes".
2. The use of smoothing constant $\rho$ seems to be redundant since you already have another regularization term $\lambda g_0$, which makes the update direction close to $g_0$. Using the regularization term $\rho ||w||^2$ has a similar effect of making the update direction close to a uniform combination of gradients of all objectives.
Since the motivation "to ensure the more practical last-iterate convergence" is questionable, see __Question__-1,
more justification is needed for this design.
3. The expectation operations used throughout the paper and proof are very unclear.
The same notation $\mathbb{E}[\cdot]$ is used for almost all expectations, whether conditioning on $\theta_t$ or not, except for Eq. (11) in the main paper where $\mathbb{E}[\cdot\mid \theta_t]$ is used. This makes the arguments or proof steps ambiguous.
- In Proposition 1, Equation (11), the LHS has two expectations, one conditioned on $\theta_t$, and the other not. What are the specific distributions for these two expectations? There is some ambiguity since there are multiple random variables, e.g. $\zeta, w_{t,S}, \theta_t$.
Also in the proof of Proposition 1, Appendix C.1, line 522, how is the condition on $\theta_t$ removed in the first equation of (21)?
This does not seem correct if the inner expectation is not conditioned on $\theta_t$.
- In most part of the proof, $\mathbb{E}[\cdot]$ is used for both conditioning on $\theta_t$ and without conditioning on $\theta_t$.
E.g. line 533 in the Appendix, Eq (24) is "conditioning on $\theta_t$", and later on in line 538 Eq(26) is "unconditioning on $\theta_t$".
4. In Appendix C.2, line 533-534, I did not see how the inequality (iii) is derived from eq. (21).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I did not see specific discussions on the limitations and broader societal impacts.
This does not decrease my score, and it would be better if the authors provide some.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. How does your algorithm compare to the simple SGD baseline in terms of convergence to Pareto stationary point in clock time?**
**A:** For the SGD baseline, we use the implementation by [1]. The experiments on Cityscapes, NYU-v2, and MT10 are provided in Table 1-3 in the global response PDF. It can be seen that SGD outperforms others on a specific task but achieves a worse overall performance than our SDMGrad method. The clock time comparison is provided in Table 3, where it can be seen that the SGD baseline is faster than SDMGrad but slower than SDMGrad-OS.
[1] Liu, Bo, et al. "Conflict-averse gradient descent for multi-task learning." Advances in Neural Information Processing Systems 34 (2021): 18878-18890.
**Q2. What is the theoretical benefit of this proposed algorithm compared to SGD of unitary scalarization, as the latter can also converge to Pareto stationary (PS) point and is more efficient?**
**A:** Great question! Although SGD of unitary scalarization can achieve a faster convergence rate, our method is more flexible and general to enjoy the theoretical advantages of both SGD and MGDA. On the one hand, our method reduces to this SGD type of method for a large $\lambda$ (which achieves a much higher efficiency as shown in Corollary 2). On the other hand, for a smaller $\lambda$, our method with iteratively optimized weights can enjoy the benefits of MGDA type methods in mitigating the gradient conflict during the optimization process. Indeed, it has been shown in [2] that the weight changing methods like MGDA strike a better tradeoff among optimization, generalization, and gradient conflict than static weighting methods like SGD.
[2] Chen, Lisha, et al. "Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance." arXiv preprint arXiv:2305.20057 (2023).
**Q3. In the experiments, one baseline method with unitary scalarization is needed, as it has been shown in prior works [a,b] that unitary scalarization outperforms MGDA-based algorithms in some practical applications. And compare the case with $\beta_t=0$.**
**A**: The comparison results of unitary scalarization [a] are shown in Tables 1 and 2 in the global response. In general, it can be seen that our SDMGrad outperforms unitary scalarization under different metrics. From Table 3 in the global response, it can be seen that our SDMGrad-OS is faster than unitary scalarization due to the efficient task sampling.
The ablation study of $\beta_t=0$ is also shown in Tables 1 and 2. It can be seen that our method with $\beta_t=0$ performs poorly in these experiments. This confirms the importance of the $w$ updates.
[a] Kurin, Vitaly, et al. "In defense of the unitary scalarization for deep multi-task learning." Advances in Neural Information Processing Systems 35 (2022): 12169-12183.
**Q4. In Section 4.2, line 170-172, why is the case "the objective in eq. (8) is merely convex w.r.t. w, it can only guarantee the convergence of at least one iterate (not necessarily the last one) to the solution "?**
**A:** Sorry about the confusion and thanks for pointing this reference out for us! After careful checking, we find that the analysis in [c] (Theorem 2 therein) on convex functions relies on additional assumptions like bounded estimators and domains, which may be restrictive in our MOO setting. To make our claim more rigorous, we will revise our sentence to “To ensure the last-iterate convergence, one possible approach is to add a quadratic regularization term for smoothing.”
The results for the guarantee of last-iterate convergence of objective in eq. (9) can be found in Lemma 3 of the appendix. We will clarify this in the main body.
**Q5.The use of smoothing constant $\rho$ seems to be redundant since you already have another regularization term $\lambda g_0$, which makes the update direction close to $g_0$. Using the regularization term $\rho\\|w\\|^2$ has a similar effect of making the update direction close to a uniform combination of gradients of all objectives.**
**A:** We want to clarify that these two regularization terms serve different purposes. The regularizer $\lambda\langle g_0,d\rangle$ makes the update direction close to $g_0$, whereas the quadratic term is to make the eq.(9) strongly convex to establish the convergence rate guarantee. In other words, the smoothing term $\rho\\|w\\|^2$ is necessary here because the regularization $\lambda\langle g_0,d\rangle$ cannot make the problem in eq. (9) strongly convex.
**Q6. In Proposition 1, Equation (11), the LHS has two expectations, what are the specific distributions for these two expectations? In the proof of Proposition 1, Appendix C.1, line 522, how is the condition on $\theta_t$ removed in the first equation of (21)? In most parts of the proof, $\mathbb{E}[\cdot]$is used for both conditioning on $\theta_t$ and without conditioning on $\theta_t$. E.g. line 533 in the Appendix, Eq (24) is "conditioning on $\theta_t$", and later on in line 538 Eq(26) is "unconditioning on $\theta_t$".**
**A:** Sorry for the confusion. In eq. (11), the inner expectation is conditioned on $\theta_t$, while the outer expectation is taken over the randomness of $\zeta, w_{t,S}, \theta_t$.
In line 522, the inner expectation is indeed conditioned on $\theta_t$, and we remove the condition on $\theta_t$ based on the fact that $\mathbb{E}[\mathbb{E}[A|B]]=\mathbb{E}[A]$. We will make this clear in the proof.
To make it clearer, we will use $\mathbb{E}[\cdot]$ to denote the expectation without conditioning on $\theta_t$, and $\mathbb{E}[\cdot \mid \theta_t]$ to denote the expectation conditioned on $\theta_t$.
**Q7. In Appendix C.2, Line 533-534, I did not see how the inequality (iii) is derived from eq. (21)?**
**A:** Sorry for the confusion. The detailed steps can be found in the global response.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thank you very much for the detailed response. It addresses most of my concerns. I have a few more questions below.
**Regarding Q1.** What is the intuition that SDMGrad-OS can be faster than SGD in Table 3? At each iteration during training, SGD needs to compute one gradient, while SDMGrad-OS needs to compute more than one (because of performing double sampling for $\xi$ and $\xi'$, as well as updating $w_{t,s}$). Could you provide some intuitive explanation?
**Regarding Q4.** Since here we are considering the subproblem convergence in this context, the domain of $w$ is bounded within a simplex, and the gradient estimators also seem can be bounded under Assumptions 2 and 3. Therefore I think analysis in [c] can be applied. Or are there any other challenges in applying analysis in [c]?
Also, in Lemma 3, you provide the convergence in terms of $\mathbb{E}[\|w_S-w_{\rho, \lambda}^*\|^2]$, where $w_{\rho, \lambda}^*$ is the solution to the smoothed problem with $\rho > 0$, instead of the original problem with $\rho = 0$. I understand the original problem with $\rho=0$ could have multiple solutions, but ideally it would be more useful if the convergence of $w_S$ to the solution set with $\rho = 0$ were analyzed. Is it possible to extend your analysis to the convergence to the solution set with $\rho = 0$ that matches the original problem?
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback!
Comment: We thank the reviewer for recognizing our response and for the further questions. Our answers are listed as follows.
**Q1. What is the intuition that SDMGrad-OS can be faster than SGD in Table 3? At each iteration during training, SGD needs to compute one gradient, while SDMGrad-OS needs to compute more than one (because of performing double sampling for $\xi$ and $\xi^\prime$, as well as updating $w_{t,s}$). Could you provide some intuitive explanation?**
**A:** Thanks for the question. SDMGrad-OS can be faster than SGD for two reasons. On the one hand, SDMGrad-OS adopts objective sampling at each iteration, whereas the SGD method, as implemented in the CAGrad paper, does not have this feature. On the other hand, in our implementation, the number $S$ of iterations for updating $w$ is set relatively small, and the gradient computation is efficient in our case. Thus, SDMGrad-OS is overall faster than SGD in Table 3.
**Q2. Since here we are considering the subproblem convergence in this context, the domain of $w$ is bounded within a simplex, and the gradient estimators also seem to be bounded under Assumptions 2 and 3. Therefore I think the analysis in [c] can be applied. Or are there any other challenges in applying the analysis in [c]? Also, in Lemma 3, you provide the convergence in terms of $\mathbb{E}[\|w_S-w_{\rho,\lambda}^*\|^2]$, where $w_{\rho,\lambda}^*$ is the solution to the smoothed problem with $\rho>0$, instead of the original problem with $\rho=0$. I understand the original problem with $\rho=0$ could have multiple solutions, but ideally it would be more useful if the convergence of $w_S$ to the solution set with $\rho=0$ is analyzed. Is it possible to extend your analysis to the convergence to the solution set with $\rho=0$ that matches the original problem?**
**A:** Thanks for pointing it out. After carefully checking this literature, we notice that the assumptions in [c] can be satisfied by our method. Let $\hat{g}_t$ be the stochastic gradient estimator w.r.t. $w$ in our problem. The domain of $w$ is bounded within a simplex, and $\mathbb{E}[\|\hat{g}_t\|]$ can also be bounded due to the fact that $\mathbb{E}[\|\hat{g}_t\|]\leq\sqrt{\mathbb{E}[\|\hat{g}_t\|^2]}\leq\sqrt{3C_1}$, where $C_1$ is defined in Lemma 3 in the appendix. Thus, to solve the original problem with $\rho=0$, one possible approach might be to use the analysis from the work you mentioned, and another would be to choose the minimum $w$ instead of the last iterate. We thank the reviewer for pointing out the work [c]. However, it may still take some time to make sure that the technique in [c] is applicable in our analysis, and we would like to leave it as part of our future work.
When Does Confidence-Based Cascade Deferral Suffice? | Accept (poster) | Summary: The paper consists of 2 parts. Part 1 contains theoretical analysis of when confidence-based deferral rules for cascades of 2 or more models succeed or fail, based on a proposed risk function (equation (1) in section 3) presenting a tradeoff between accuracy and computational cost of invoking subsequent models. Part 2 proposes new deferral rules that are more sophisticated than existing confidence-based deferral rules using a machine-learned postdoc model. Experiments attempt to justify claims made in part 1 and to show that posthoc-based deferral rules produce better results than others.
Strengths: The paper provides a number of edge cases, both good and bad, together with good explanations on those edge cases.
References to previous works are excellent. However, I cannot claim my expertise in this area.
Weaknesses: I have found a few places that appear rather puzzling in the followings:
1. Lemma 4.1, line 176: it is unclear what the authors mean by "produce the same ordering over instances x". The lack of a precise formula makes it unnecessarily difficult to follow and verify the proof of Lemma 4.1, because I did not know what end result to expect. I eventually understood the term after carefully checking the appendix, but only via a few definitions inside the proof there, not from the lemma statement itself. The statement definitely needs revising.
2. Section 4.1 seems to have a technical flaw in reasoning, despite that Lemma 4.1 appears to be correct. Consider functions $R1(c, c^{(1)}) = R(r_{conf} | c, c^{(1)})$ and $R2(c) = R(r^* | c)$. Lemma 4.1 shows a condition under which the accuracy-deferral curve of $r^*$ matches that of $r_{conf}$. But that is an analysis with respect to accuracy, not the risk $R(\cdot)$ defined in (1). For a given $c$, for example when $c$ represents a trade-off between accuracy and inference speed, $R2(c)$ is constant, but Lemma 4.1 does not guarantee that the lowest value of $R1(c, c^{(1)})$ over $c^{(1)}$ cannot go lower than $R2(c)$. To me, it only makes sense if we assume that the algorithm sees $c$ as a variable and we consider $c^*, R^* = \arg\min_c R2(c)$ instead. However, in such cases there are 2 issues: (1) Lemma 4.1 still does not guarantee that, across all possible $(c, c^{(1)})$ pairs, the only time when $R1 = R^*$ is when $c=c^*$, and (2) if $c$ is a variable, then how do you take practical considerations like inference speed into the analysis?
While the subsequent success/failure cases in 4.2 seem intuitive to me, I just don't see how Lemma 4.1 can be used to convincingly explain them.
3. The general idea proposed in 4.2 is somewhat weak as well. In order to make a deferral rule for a cascade of 2 models work better, you need to create a third model, i.e. the posthoc model g(x)? Wouldn't that incur an additional computational cost, meaning the risk function in (1) has to be redefined? I do not see any quantification of that effect in the experiments.
In addition, wouldn't introducing g(x) also mean that you have given a bit of training power to complement model 1 and 2, whereas the general setting of the paper is to treat model 1 and 2 as pre-trained blackboxes not to be tampered with (e.g. line 244-246)?
4. Section 5 presents all results in terms of accuracy-deferral curves. However, I cannot see any curve or any piece of information at all mentioning the risk $R(\cdot)$ in (1). This is puzzling. On one hand, sections 3 and 4 discuss minimising risk in (1) which can be seen as a tradeoff between accuracy and computational cost. On the other hand, all experiments only present information related to accuracy. I am not sure how those results can completely justify sections 3 and 4.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I need to be able to see that the paper has an objective and that every section is consistent towards that objective. At the moment, the three sections 3, 4 and 5 do not appear to be completely aligned with each other. If you could convince me where I am wrong in the Weaknesses section above, and show me some form of consistency throughout the paper, I will change my opinion.
It is unclear to me whether the constant cost $c$ on line 117 in (1) is to be treated as constant or not. It appears to be constant at first (and it makes sense to do so from a practical point of view). But then in section 4.1, $c$ seems to be treated as a variable that can be optimised away. What is the point of having an accuracy-deferral curve for $r^*$ over $c$ if $c$ is constant?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is no societal negative impact from the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Section 4.1 seems to have a technical flaw in reasoning, despite that Lemma 4.1 appears to be correct… Section 5 presents all results in terms of accuracy-deferral curves… I cannot see any curve or any piece of information at all mentioning the risk $R(\cdot)$ in (1)
**There appears to be a misunderstanding which we’d like to clarify**. We believe the reviewer’s doubt arises since:
Section 3 considers the cascade _risk_ (Equation 1) at a fixed _cost parameter_ $c$
Sections 4 and 5 consider the cascade _accuracy_ at a fixed _deferral rate_ $\tau$, as noted in L126 and L177
These respectively yield cost-risk curves (as $c$ is varied) and deferral curves (as $\tau$ is varied). These are _equivalent_ ways of assessing the overall quality of a cascade (see below), and so we use them interchangeably. [This is analogous to the equivalence between ROC and cost curves in classification.] Following prior work (Bolukbasi et al., 2017), (Kag et al., 2023), our experiments use deferral curves to compare methods. We will make this clearer.
In more detail:
Section 4 seeks to identify when confidence-based deferral produces the “best” possible cascade. By “best”, we mean the cascade which achieves the _best possible deferral curve_, i.e., which achieves maximal accuracy at *all* deferral rates. **Equivalently** (see below), this is the cascade which achieves the best possible _cost-risk curve_, i.e., which achieves minimal risk (1) for *all* cost parameters $c$.
To make progress, we characterise the “best” cost-risk curve in Proposition 3.1. This shows that to achieve the “best” curve as $c$ varies in $[0, 1]$, we can threshold $s^*( x ) = \eta\_{h^{(2)}(x)}(x) - \eta\_{h^{(1)}(x)}(x)$ by $c$.
Crucially, there is a **one-to-one correspondence** between the optimal cost-risk and deferral curves. Thus, Proposition 3.1 also characterises the optimal deferral curve, which we reference in Lemma 4.1. To see this correspondence, recall that $\forall c \in [0, 1]$, the risk (1) is
$$ R(r; h^{(1)}, h^{(2)}) = \mathbb{P}(y \neq h^{(1)}(x), r(x)=0) +
\mathbb{P}(y \neq h^{(2)}(x), r(x)=1) + c \cdot \mathbb{P}(r(x)=1). $$
This captures
$$ (\text{classification error of the cascade}) + c \cdot (\text{deferral rate}). $$
Appealing to the Lagrangian — analogous to the Neyman-Pearson lemma in ROC analysis — $\exists \tau \ge 0$ such that minimising $R(r; h^{(1)}, h^{(2)})$ over $r$ is equivalent to
$$ \max_{r} \quad (\text{classification accuracy of the cascade}) \quad \text{subject to} \quad (\text{deferral rate}) \leq \tau. $$
Thus, tracing out the optimal cost-risk curve for varying $c$ will trace out the optimal deferral curve for varying $\tau$.
In fact, the two curves have an intimate **point-line duality**. Any point $(d_0, e_0)$ in deferral curve space – where $d$ denotes deferral rate, and $e$ denotes error – can be mapped to the line $\{ (c, c \cdot d_0 + e_0) : c \in [0, 1] \}$ in cost curve space. Conversely, any point $(c_0, r_0)$ in cost curve space – where $c_0$ denotes cost, and $r_0$ denotes risk – can be mapped to the line $\{ (d, r_0 - c_0 \cdot d) : d \in [0, 1] \}$ in deferral curve space. This is analogous to the correspondence between ROC and cost-risk curves in classification (Drummond & Holte, “Cost curves: An improved method for visualizing classifier performance”, ‘06).
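As a small illustration of this point-line duality, it can be sketched in a few lines of Python (the function names and numbers below are ours, not code from the paper):

```python
def deferral_point_to_cost_line(d0, e0, costs):
    """Map a deferral-curve point (deferral rate d0, error e0) to its
    line in cost-curve space: risk(c) = e0 + c * d0."""
    return [e0 + c * d0 for c in costs]

def cost_point_to_deferral_line(c0, r0, rates):
    """Map a cost-curve point (cost c0, risk r0) to its line in
    deferral-curve space: error(d) = r0 - c0 * d."""
    return [r0 - c0 * d for d in rates]

# A cascade operating at 30% deferral with 10% error traces the line
# risk(c) = 0.1 + 0.3 c as the cost parameter c varies in [0, 1].
risks = deferral_point_to_cost_line(d0=0.3, e0=0.1, costs=[0.0, 0.5, 1.0])
```

At $c=0$ the risk is just the cascade's error, and each unit of cost adds the deferral rate, matching the decomposition of (1) above.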
**We are happy to answer any further queries.**
> What is the point of having an accuracy-deferral curve for $r^*$ over $c$ if $c$ is constant?
Optimising (1) may be cast as a constrained optimisation problem where $c$ (or its re-parameterisation) controls the deferral rate. Changing $c$ changes the constrained optimisation problem. It is thus natural that its solution $r^*$ depends on $c$. Intuitively, varying $c$ allows one to trade off accuracy and inference cost. In practice, one will be operating at a particular point on this trade-off curve, and so there is only one fixed $c$ at inference time.
The deferral curve helps compare two deferral rules **over all possible choices of the deferral rate**. This is analogous to the use of ROC curves to compare two classifiers over all possible values of relative false positive to false negative cost.
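For concreteness, tracing a confidence-based deferral curve on held-out data can be sketched as follows (a hedged illustration with our own names and toy data, not the paper's experimental code):

```python
def deferral_curve(conf1, correct1, correct2, rates):
    """Accuracy-deferral curve of confidence-based deferral: at each
    target deferral rate tau, send the least-confident tau fraction of
    instances to model 2 and record the cascade's accuracy.

    conf1: model 1's confidence per instance; correct1/correct2: 0/1
    correctness of models 1 and 2 on the same instances."""
    n = len(conf1)
    order = sorted(range(n), key=lambda i: conf1[i])  # least confident first
    curve = []
    for tau in rates:
        k = round(tau * n)
        deferred = set(order[:k])
        acc = sum(correct2[i] if i in deferred else correct1[i]
                  for i in range(n)) / n
        curve.append((tau, acc))
    return curve

# Toy case where model 1 is unconfident exactly where it errs, so
# deferring half the instances yields a perfect cascade.
curve = deferral_curve([0.9, 0.2, 0.8, 0.1], [1, 0, 1, 0], [0, 1, 0, 1],
                       rates=[0.0, 0.5, 1.0])
```

The endpoints of the curve are the two base models' accuracies; comparing two rules amounts to comparing such curves pointwise.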
> cost of the posthoc model g(x)
__Our post-hoc model $g$ is a simple feedforward network and is much smaller than the two base models__. This is described in **Section C.2 (Appendix)** where $g$ has no more than 100 hidden units. In comparison, on the ImageNet problem, we use MobileNet v2 as $h^{(1)}$ (3.5 million parameters), and EfficientNet B0 as $h^{(2)}$ (5.3 million parameters). The cost of invoking $g$ at inference time is negligible with respect to these models. We are happy to add a comment noting this.
> [by introducing g(x)] you have given a bit of training power to complement model 1 and 2, whereas the general setting of the paper is to treat model 1 and 2 as pre-trained blackboxes?
We indeed consider the setting where one cannot modify the models $h^{(1)}$ and $h^{(2)}$. However, exploiting the models’ _outputs_ for further learning is perfectly admissible.
Per L244-246, we are allowed to train a separate post-hoc deferral model $r$. When we use the risk $R(r; h^{(1)}, h^{(2)})$ (see (1)), we optimise for $r$ which is parameterised as thresholding $g(x)$.
> Sections 3, 4 and 5 do not appear to be completely aligned
Our work studies when confidence-based deferral suffices. The logical flow is:
In Section 3, we characterise the form of the optimal deferral rule.
In Section 4, we identify specific conditions under which confidence-based deferral may disagree with the optimal rule, and study post-hoc deferral approaches in an attempt to address these limitations.
In Section 5, we present experiments that verify the above failure modes.
> Meaning of "produce the same ordering over instances x"
We apologise for not being clearer here. Please see the global response.
---
Rebuttal Comment 1.1:
Title: Thank you. Points 1, 2 and 4 cleared. Point 3 remains for discussion.
Comment: Thank you for the clarifications.
>> Section 4.1 seems to have a technical flaw in reasoning, despite that Lemma 4.1 appears to be correct…
> There appears to be a misunderstanding which we’d like to clarify...
Right: the Lagrangian manipulation connects the unconstrained risk in (1) to accuracy under a constraint. Clearly, without this fact it is difficult to link section 3 with sections 4 and 5. This is a crucial missing bit, which clears both my points 2 and 4. Why did it not appear in the original version? It would have been a lot easier to read had the risk-accuracy equivalence been established in section 3.
Since points 2 and 4 are major, I am ready to increase my ratings by the way.
>> cost of the posthoc model g(x)
> Our post-hoc model is a simple feedforward network and is much smaller than the two base models...
I understand your argument. But your paper's title is `When Does Confidence-Based Cascade Deferral Suffice?`. To a normal reader, the title suggests that the cascades of interest contain pretrained models plus some thresholding on the model confidences, with no additional training needed. A posthoc model would require the user to train additionally. In a research context this may be fine. But in the real world, this additional requirement can be costly due to the resources and teams involved. It may be better to improve the pretrained models instead.
I am not saying the idea of using a post-hoc model is wrong. I am saying that in my practical view point, it somewhat backfires.
In addition, a hidden assumption here is that the posthoc model g(x) must be very lightweight compared to the pretrained models. But in very high speed applications, where the pretrained models may be only marginally slower than the posthoc model g(x), this assumption will break.
---
Reply to Comment 1.1.1:
Title: We seek to understand when confidence-based deferral may fail, and when alternate deferral strategies can perform better
Comment: We thank the reviewer for their detailed feedback.
> Right. The Lagrangian manipulation to connect between the unconstrained risk in (1) and accuracy with a constraint. Clearly, without this fact it is difficult to link section 3 with sections 4 and 5. This is a crucial missing bit, which clears both my points 2 and 4. Why did it not appear in the original version? It would have been a lot easier to read had the risk-accuracy equivalence got established in section 3.
We are glad this clarifies! We apologise if this point wasn’t clear in the original submission. In our revised version, **we will expand the para in L126 - L130** to explicate the Lagrangian view of (1), and the equivalence of the cost-risk and deferral rate-accuracy curves.
> I understand your argument. But your paper's title is When Does Confidence-Based Cascade Deferral Suffice?. To a normal reader, the title suggests that the cascades of interest contain pretrained models plus some thresholding on the model confidences, no additional training is needed.
Our intended logical flow is the following. First, our analysis in Section 3 and 4.1 precisely studies conditions under which confidence-based cascades may _not_ suffice (e.g., label noise). This is in line with the title.
Next, given that there are conditions where such cascades _don’t_ suffice, a natural question is whether some alternate strategy _does_. To that end, Section 4.2 proposes and studies post-hoc models, which are a minimal and (in our opinion) natural extension.
We attempted to convey this framing in the writing (e.g., L8 “In this paper, we seek to better understand the conditions under which confidence-based deferral may fail, and when alternate deferral strategies can perform better”). Nonetheless, we are certainly happy to update the text if the reviewer believes this is not made sufficiently clear.
> A posthoc model would require the user to train additionally. Under a research context this may be fine. But in real world, this additional requirement can be costly due to resources and teams involved… In addition, a hidden assumption here is that the posthoc model g(x) must be very lightweight compared to the pretrained models. But in very high speed applications where the pretrained models can be marginally slower than the posthoc model g(x) then this assumption will break.
We certainly agree that there can be settings where post-hoc models may not be appropriate. Nonetheless, we argue that **there are many practical settings where they _are_ appropriate**. As a remark, per L246, we note that prior work such as [31, 44, 68] also considered the use of auxiliary models to improve cascading.
In our view, cascading makes sense when at least one constituent model is highly compute-intensive (both for inference _and_ training). Given this, _if_ the post-hoc model is significantly lightweight compared to the smallest constituent model, it adds minimal extra overhead both for training and inference.
Now, if one or more of the constituent models is itself lightweight, then the reviewer is correct that the post-hoc model may add an overhead compared to regular confidence-based cascading. However, if there is a sufficiently large cost gap between the smallest and largest model in the cascade, then the post-hoc approach _may still offer a more favourable cost-quality tradeoff_ than regular confidence-based cascading (which, per Section 4.1, may underperform in some settings).
The reviewer is completely correct that in some settings, alternate strategies (e.g., improving the base models) may be appropriate. Our purview however is to understand the space of strategies for combining multiple models via a deferral mechanism. | Summary: This paper explores the cascade deferral problem, which is an issue in the context of machine learning models arranged in a cascading order, where a decision needs to be made about whether to defer the processing of data from one model to the next in the cascade. The challenge is to optimize the deferral decision to achieve the best trade-off between computational costs and model performance.
Previously, this problem was often addressed using confidence-based deferral rules. This approach involves deferring data to the next model in the cascade when the first model's confidence in its prediction falls below a certain threshold. However, the paper identifies limitations with the confidence-based deferral method. Specifically, it may underperform in certain settings, such as when the second model in the cascade is a specialist model, or when there's label noise in the training data.
To overcome these limitations, the authors introduce post-hoc deferral rules. Unlike the confidence-based approach, post-hoc deferral rules use additional information, beyond just the confidence of the first model, to make deferral decisions. These rules are trained and optimized to provide better accuracy-cost trade-offs.
The authors compare the performance of confidence-based and post-hoc deferral rules under various experimental settings. They use datasets like ImageNet and CIFAR 100, with settings including a specialist model scenario, label noise, and distribution shift. They find that post-hoc deferral significantly outperforms confidence-based deferral in scenarios involving a specialist second model and label noise. However, they also identify potential overfitting issues with post-hoc deferral, highlighting the need for careful capacity control.
Strengths: **Originality**: The paper introduced a post-hoc deferral scheme that utilizes a Bayes-optimal deferral rule. In particular, it addresses issues not resolved by the traditional confidence-based deferral methods.
**Quality**: The authors effectively utilize mathematical proofs and models to construct and explain their deferral schemes, and they validate these models through extensive experiments on established datasets such as ImageNet and CIFAR 100.
**Clarity**: The paper is well-structured and clearly written. The paper's theoretical concepts are explained with clarity and are substantiated with illustrative figures, while the experimental design and results are presented in detail.
**Significance**: The research addressed a critical limitation in the commonly used confidence-based deferral schemes. Confidence-based methods, while widely used, tend to underperform in certain situations such as in specialist settings or when there is label noise. This research introduces and investigates the efficacy of post-hoc deferral models as a solution to this problem, offering a more optimal and efficient method of deferral. The paper's analysis of the limitations of the confidence-based methods and the demonstration of how the post-hoc deferral models overcome these issues are essential for further research in this area.
Weaknesses: **Limited Dataset and Task Diversity**: The empirical results of this paper are primarily based on two datasets: CIFAR-100 and ImageNet. Additionally, the experiments were conducted on image classification tasks only. Expanding the scope of datasets and including other tasks, such as object detection, segmentation, or even venturing into different domains like natural language processing or audio processing, could have provided a more comprehensive evaluation of the post-hoc deferral models.
**Generalizability**: The authors point out that post-hoc models can overfit and fail to generalize well, even when controlling the capacity of the model.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The authors acknowledge that post-hoc models can overfit and struggle to generalize. Could the authors provide some intuitive explanation for the reasoning behind this issue?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our work, and for the complete summary which we agree with.
> The empirical results of this paper are primarily based on two datasets: CIFAR-100 and ImageNet. Additionally, the experiments were conducted on image classification tasks only.
In the present work, we aim to illustrate when confidence-based deferral can fail with classification problems. The Bayes optimal deferral rule we derive in Proposition 3.1 and subsequent analyses all start from the risk in Eq (1), which is based on the 0-1 loss. Specifically, the risk of a deferral rule $r$ can be written as
$$ R(r; h^{(1)}, h^{(2)}) = \text{const.} + \mathbb{E}\_{x} 1[r(x)=1] \mathbb{E}\_{y|x} \left[1[y \neq h^{(2)}(x)]-1[y\neq h^{(1)}(x)]+c\right]. $$
Note that the risk depends on the 0-1 losses of the two base models $h^{(1)}, h^{(2)}$. **For other tasks, one can simply replace these base losses with other appropriate losses (e.g., log loss for language models), and the Bayes optimal rule can be derived.** We plan to include results on NLP classification tasks in the revised version. Please note that we have added results on Mini-ImageNet to the PDF as part of this rebuttal. Please see the response to Reviewer kqwH for details.
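As an illustration of the plug-in structure of this risk, a one-sample empirical estimate of its deferral-dependent term might look like the following (the function name is ours; the base losses may be 0-1 losses, log losses, etc.):

```python
def empirical_deferral_term(defer, loss1, loss2, c):
    """Empirical estimate of E_x 1[r(x)=1] * (loss2(x) - loss1(x) + c),
    the part of the risk in (1) that depends on the deferral rule r.

    defer: 0/1 deferral decisions; loss1/loss2: per-example losses of
    the two base models; c: cost of invoking the second model."""
    n = len(defer)
    return sum(d * (l2 - l1 + c)
               for d, l1, l2 in zip(defer, loss1, loss2)) / n

# With 0-1 base losses: deferring helps on example 1 (model 1 wrong,
# model 2 right) and hurts on example 3 (the reverse), net of cost c.
term = empirical_deferral_term([1, 0, 1], [1.0, 0.0, 0.0],
                               [0.0, 1.0, 1.0], c=0.1)
```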
> The authors point out that post-hoc models can overfit and fail to generalize well, even when controlling the capacity of the model. Could the authors provide some intuitive explanation for the reasoning behind this issue?
The studied post-hoc approaches are not without limitations. It can be hard for a small post-hoc model to reliably predict the confidence or confidence difference of much larger models (underfitting). The overfitting case arises especially in (but not limited to) the generalist setting. In this setting, the two base models $h^{(1)}, h^{(2)}$ perform roughly equally well, and there is no obvious set of instances that should be deferred to either of them. In this case, the learned post-hoc deferral model may fail to generalise. This is in contrast to, say, the specialist setting where there is a clear set of instances that should be deferred to $h^{(2)}$ (i.e., the data sub-group that model 2 specialises on). | Summary: This paper systematically investigates why and when the confidence-based deferral rule work, and particularly, identifies cases when it fails. To enable this investigation, they provide a theoretical characterization of the problem and the optimal deferral rule. They also provide a post-hoc solution that can work well in cases when the confidence-based method fails.
Strengths: The paper is clearly written.
The paper provides a nice theoretical framework for studying the deferral rule in general.
The paper provides lots of useful insights, supported by both theoretical and empirical evidence.
Weaknesses: The listed conditions for when the confidence-based method fails are not that surprising, which leads one to wonder about the significance of such a contribution if the findings are already intuitive and need no proof. Anyhow, this might not count as a weakness, as it is also reassuring that the theory does produce intuitively sensible results.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This deferral rule is about selecting models, while there is also this line of research about selecting samples for the noisy data case. Basically, for the data with noisy labels, one would want to select only the clean ones to do the training, and there are also many confidence-based methods to do this selection. I am wondering how these two lines of research connect with each other, and can you similarly identify cases when the confidence-based method fails in label denoising?
It would be good if the authors can provide some discussion/ideas about how to identify whether it is good to use the confidence-based method in practice. Basically, is there a unified way to identify the failed cases from the data, instead of checking if the data have the issues related to each failed case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The listed conditions about when the confidence-based method fails are not that surprising. Anyhow, this might not count as a weakness, as it is also reassuring that the theory does produce intuitively sensible results.
We are glad the reviewer finds the results intuitive. We would like to emphasise that despite cascades being studied in various forms for several decades, there has been limited study of the conditions under which confidence-based cascading suffices as a deferral mechanism. We would hope that by clearly laying out specific conditions under which this technique does not suffice, we can aid practitioners in their decision of whether or not to invest in more complex modelling strategies (e.g., post-hoc approaches). This is particularly true for the case of label noise, which we expect to be encountered in many (though not all) practical settings.
> .. for the data with noisy labels, one would want to select only the clean ones to do the training, and there are also many confidence-based methods to do this selection. I am wondering how these two lines of research connect with each other
We thank the reviewer for mentioning this broadly related topic of noisy label learning. Please note that label denoising is outside the scope of the present work. In our work, we address the setting where the two (or more generally $K > 2$) base models are already trained and fixed. The goals are to investigate the performance of confidence-based deferral, identify conditions under which it can fail, and identify approaches to learn a post-hoc rule to remedy it. In our experiments with label noise (i.e., second row of Figure 2), we assume that the given base models were trained with label noise without an option of fine-tuning them on clean labels.
We are happy to provide a connection if the reviewer could please point out which work the reviewer has in mind for confidence-based methods for label denoising.
> how to identify whether it is good to use the confidence-based method in practice… is there a unified way to identify the failed cases from the data,
This is an interesting and important question. We think that the simplest approach is to use a held-out validation dataset that has the same distribution as the test data, produce deferral curves of both confidence-based deferral and post-hoc deferral approaches, and compare them.
The three failure cases identified in our work are for studying sufficient causes that lead to a failure of confidence-based deferral. In practice, for the purpose of deciding whether to deploy confidence-based deferral, there is no need to exactly identify one of the three settings.
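A minimal sketch of such a validation-set comparison at a fixed deployment deferral rate (the scores and correctness indicators below are hypothetical, not from the paper's experiments):

```python
def accuracy_at_rate(score, correct1, correct2, tau):
    """Cascade accuracy when the tau fraction of validation instances
    with the highest deferral score is sent to model 2. For
    confidence-based deferral, score = 1 - (model 1's confidence);
    for a post-hoc rule, score = the learned deferral score g(x)."""
    n = len(score)
    order = sorted(range(n), key=lambda i: -score[i])  # highest first
    deferred = set(order[:round(tau * n)])
    return sum(correct2[i] if i in deferred else correct1[i]
               for i in range(n)) / n

# A rule whose score is high exactly where model 2 helps attains
# perfect accuracy at a 50% deferral rate on this toy validation set.
acc = accuracy_at_rate([0.1, 0.1, 0.9, 0.9], [1, 1, 0, 0], [0, 0, 1, 1],
                       tau=0.5)
```

Evaluating both candidate rules with the same function across a grid of rates reproduces the deferral-curve comparison described above.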
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, I am quite satisfied and have no further questions. | Summary: The authors present a theoretical analysis for the Bayes optimal deferral rule for a cascade of K=2 classifiers and a certain population risk.
Based on this rule, they characterize when the confidence deferral rule using exact posterior probabilities is, in some sense, similar to the Bayes optimal deferral rule.
Based on their analysis, the authors (informally) discuss cases where (practical) confidence deferral rule may or may not be sufficient, and proposed several post-hoc methods for the latter.
The insights and the methods are examined empirically.
Strengths: I find the paper very interesting and the analysis to be solid.
The theory is important and the insights and practical guidance that it provides look useful.
Weaknesses: The empirical coverage may be somewhat improved, e.g., by using practical datasets without the controlled modifications (such as the dog-specific classifier). I also wonder whether there is some benchmark that can be used for comparing the new post-hoc methods to existing ones.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please improve the explanation for Lemma 4.1.
State what you mean in "the same ordering" in the main body of the paper and not only in the proof in the appendix.
The text below Lemma 4.1, and actually in several other places in the paper, needs to be more accurate when you claim optimality of confidence-based deferral rule based on your analysis --- as by saying so, you assume that the classifier's softmax values are exactly the true posterior probabilities, which is obviously not the case in practice [21].
In Figure 2, first row, how do you explain the fact that plain confidence deferral oftentimes outperforms the two post-hoc methods Diff-01 and Diff-prob? Furthermore, Diff-prob in fact is consistently worse than the other methods in all 3 rows in Figure 2, can you explain why?
A minor comment: In Algorithm 1, when you write predict inside the loop, letting it be followed by a "break" command would make the scheme clearer.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback, and for recognizing the importance of our theoretical analysis. We appreciate the reviewer’s comments on the presentation. We will revise accordingly.
> The empirical coverage may be somewhat improved, e.g., by using practical datasets without the controlled modifications (such as the dog-specific classifier).
Thanks for the excellent suggestion. **We have confirmed that we obtain similar conclusions on a dataset with realistic label noise.**
We considered the Mini-ImageNet dataset (with noise rate 60%) from Jiang et al., Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels, ICML ‘20. In the rebuttal PDF, we have included deferral curves for various methods under a cascade of ResNet 10 models with widths 16 (small model) and 64 (large model). As in the body, Diff-01 significantly outperforms Confidence based deferral in this setting.
> Text below Lemma 4.1: you assume that the classifier's softmax values are exactly the true posterior probabilities,
To clarify, Lemma 4.1 and text below it **does not** assume that the first model’s softmax probabilities are exactly the true posterior probabilities. Rather, the key assumption is that we accurately estimate the **error probability** of model 1, which is a weaker condition.
In more detail:
Recall that $h^{(i)}$ denotes the i-th classifier where $i \in \\{1, 2\\}$. In line 153, we define $\eta_{h^{(i)}(x)}(x) := \mathbb{P}(y=h^{(i)}(x) | x)$. This can be interpreted as the _conditional_ accuracy of the classifier $h^{(i)}$ on an instance $x$. Intuitively, this quantity captures the agreement between the prediction $h^{(i)}$ and that of the Bayes’ classifier $\mathbb{P}(y | x)$ on $x$. The Bayes optimal deferral rule $r^{*}(x) =\boldsymbol{1}\left[\eta_{h^{(2)}(x)}(x)-\eta_{h^{(1)}(x)}(x)>c\right]$ given in Proposition 3.1 means that we should defer $x$ when the relative gain in conditional accuracy is larger than $c$.
Lemma 4.1 characterises when confidence-based deferral (i.e., based on only $\eta_{h^{(1)}(x)}(x)$) is optimal. While measuring the confidence with $\eta_{h^{(1)}(x)}(x)$ is ideal, in practice, we do not have access to it since it depends on the true conditional distribution $\mathbb{P}(y | x)$. The commonly used confidence measure $\max\_{y'} p^{(1)}\_{y'}( x )$ may be regarded as an estimator of $\eta\_{h^{(1)}(x)}(x)$.
A practical interpretation of Proposition 3.1 and Lemma 4.1 would require us to assume that $\max\_{y'} p^{(1)}\_{y'}( x )$ is close to $\eta_{h^{(1)}(x)}(x) = \mathbb{P}( y = h^{(1)}( x ) \mid x )$. In other words, the confidence $\max\_{y'} p^{(1)}\_{y'}( x )$ captures the conditional accuracy. However, this is not the same as stating that the classifier's softmax values are exactly the true posterior probabilities. The latter translates to $p_y^{(1)}(x) = \mathbb{P}(y | x)$ for $x \in \mathcal{X}, y \in [L]$, a strong condition that we do not assume. In fact, assuming so would have defeated the purpose of forming a cascade, since model 1 alone would suffice.
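To make the distinction concrete, here is a toy sketch (with our own illustrative values, not from the paper) contrasting the Bayes optimal rule of Proposition 3.1 with plain confidence thresholding:

```python
def bayes_optimal_defer(eta1, eta2, c):
    """Proposition 3.1's rule: defer x iff the gain in conditional
    accuracy, eta2(x) - eta1(x), exceeds the cost parameter c, where
    eta_i(x) = P(y = h_i(x) | x)."""
    return [1 if e2 - e1 > c else 0 for e1, e2 in zip(eta1, eta2)]

def confidence_defer(conf1, threshold):
    """Confidence-based rule: defer iff model 1's confidence falls
    below a threshold; it never looks at model 2's accuracy."""
    return [1 if cf < threshold else 0 for cf in conf1]

# Model 1 is equally confident on both instances, but model 2 only
# helps on the second (e.g., a specialist setting): the optimal rule
# defers selectively, while confidence thresholding cannot distinguish.
eta1, eta2, conf1 = [0.7, 0.7], [0.6, 0.95], [0.7, 0.7]
```

Here `bayes_optimal_defer(eta1, eta2, 0.1)` defers only the second instance, whereas any single threshold on `conf1` treats both identically.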
> In Figure 2, first row, how do you explain the fact that plain confidence deferral oftentimes outperforms the two post-hoc methods Diff-01 and Diff-prob
As described in Section 4.1 (lines 187-195), when $h^{(2)}$ is a specialist, it performs exceptionally well on a data sub-group $\mathcal{X}_{\mathrm{good}}$ that it specialises on, and performs poorly on the rest of the data $\mathcal{X}\_{\mathrm{bad}} = \mathcal{X} \setminus \mathcal{X}\_{\mathrm{good}}$. In other words, $\eta\_{h^{(2)}(x)}(x)$ is high on instances in $\mathcal{X}\_{\mathrm{good}}$ and low on instances in $\mathcal{X}\_{\mathrm{bad}}$.
A post-hoc deferral rule can perform well by learning to defer $x \in \mathcal{X}\_{\mathrm{good}}$ to $h^{(2)}$ and defer $x \in \mathcal{X}\_{\mathrm{bad}}$ to $h^{(1)}$. That is, the post-hoc rule can learn to identify the set of instances $x$’s for which $\eta\_{h^{(2)}(x)}(x) - \eta\_{h^{(1)}(x)}(x) > c$ (recall the optimal deferral rule in Proposition 3.1). This is the case in Figure 2(c) where $h^{(2)}$ is a dog specialist model.
However, in Figure 2(b), the fraction of non-dog training images for $h^{(2)}$ is 4% (compared to 2% in Figure 2(c)). In this case, $h^{(2)}$ becomes relatively more like a generalist, and $\eta\_{h^{(2)}(x)}(x) - \eta\_{h^{(1)}(x)}(x)$ is closer to 0, i.e., there is a smaller accuracy gap between the two base models. This makes it harder for a post-hoc deferral rule to learn which instances to defer, and is why Confidence can be better than the post-hoc rules in this case.
> Diff-prob in fact is consistently worse than the other methods in all 3 rows in Figure 2, can you explain why
This is an interesting question. While we don’t have a conclusive answer, our current hypothesis is that it is the strong assumption of Diff-Prob that causes such poor performance.
To start, recall the Bayes optimal deferral rule from Proposition 3.1: $r^{*}(x) =\boldsymbol{1}\left[\eta\_{h^{(2)}(x)}(x)-\eta_{h^{(1)}(x)}(x)>c\right]$. Diff-01 is based on the one-hot oracle in (3) derived as a one-sample estimator of $\eta_{h^{(i)}(x)}( x ) = \mathbb{E}\_{y \mid x}\left[ \boldsymbol{1}[y = h^{(i)}(x)] \right]$. This is a straightforward plug-in estimator of the optimal rule that requires no assumption on how close the i-th model $h^{(i)}$ is to the Bayes classifier $\mathbb{P}(y|x)$.
By contrast, Diff-Prob is based on the probability oracle in (4): $\hat{r}\_{{\rm prob}}(x) = \boldsymbol{1}\left[ p_y^{(2)}(x) - p_{y}^{(1)}(x) > c\right]$. Deriving this rule assumes that $\mathbb{P}(h^{(i)}(x) = y \mid x)$ is close to $p_y^{(i)}(x)$ for $i \in \{1,2\}$. This may not hold in practice.
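As an illustrative sketch of the two oracle targets contrasted above (hypothetical helper names, not from the paper; it assumes a labelled validation set together with hard predictions and softmax outputs from both base models):

```python
import numpy as np

def diff01_target(y, pred1, pred2):
    # One-hot oracle: a one-sample estimate of the conditional-accuracy
    # gap, 1[y == h2(x)] - 1[y == h1(x)]; needs no calibration assumption.
    return (pred2 == y).astype(float) - (pred1 == y).astype(float)

def diffprob_target(y, probs1, probs2):
    # Probability oracle: gap in the softmax probability assigned to the
    # true label, p2_y(x) - p1_y(x); implicitly assumes the softmax
    # outputs track P(h_i(x) = y | x), which may fail in practice.
    idx = np.arange(len(y))
    return probs2[idx, y] - probs1[idx, y]

y = np.array([0, 1])
probs1 = np.array([[0.7, 0.3], [0.4, 0.6]])
probs2 = np.array([[0.9, 0.1], [0.2, 0.8]])
pred1, pred2 = probs1.argmax(1), probs2.argmax(1)
print(diff01_target(y, pred1, pred2))      # [0. 0.]
print(diffprob_target(y, probs1, probs2))  # [0.2 0.2]
```

In this toy example both models classify every instance correctly, so the one-hot target is zero everywhere, while the probability target still reports a (possibly miscalibrated) confidence gap.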
> theoretical analysis for the Bayes optimal deferral rule for a cascade of K=2
We note that Lemma F.1 presents the optimal deferral rule for cascades of $K>2$ models. This supplements Proposition 3.1 in the main text that characterises the optimal deferral rule for cascades of $K=2$ models.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. My comments were addressed by the authors. | Rebuttal 1:
Rebuttal: We thank all the reviewers for constructive comments. We will revise our submission accordingly. In what follows, we clarify common points raised by multiple reviewers. *Please note that we have also attached a PDF to this rebuttal.*
> Meaning of “produce the same order” in Lemma 4.1 [Reviewer kqwH, Reviewer ahRx]
**We will make this statement more clear in the revised version.** First, we recall the statement of Lemma 4.1:
__Lemma 4.1__ The deferral rule $r_{\mathrm{conf}}$ produces the same deferral curve as the Bayes-optimal rule (2) __if__ $\eta_{h^{(1)}(x)}(x)$ and $\eta_{h^{(1)}(x)}(x) - \eta_{h^{(2)}(x)}(x)$ produce the same ordering over instances $x \in \mathcal{X}$.
For brevity, we write $\eta_{i}(x)$ for $\eta_{h^{(i)}(x)}(x)$.
* Define real-valued random variables $z := \eta_{1}(x)$ and $z^* := \eta_{1}(x)-\eta_{2}(x)$, where $x \sim \mathbb{P}_{x}$.
* Let $\gamma_{\alpha},\gamma_{\alpha}^*$ be the $\alpha$-quantile of the distributions of $z$ and $z^*$, respectively.
By “produce the same ordering over instances $x \in \mathcal{X}$”, we precisely mean that for any $x \in \mathcal{X}$ and $\alpha \in [0, 1]$, $\eta_1(x) < \gamma_\alpha \iff \eta_1(x) - \eta_2(x) < \gamma^*_\alpha$. We use this definition in the proof.
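As an illustrative numerical check of this ordering condition (`same_deferral_curve` is a hypothetical helper, not from the paper; it assumes samples of $z$ and $z^*$ with distinct values, so that the induced orderings are unambiguous):

```python
import numpy as np

def same_deferral_curve(z, z_star):
    """Empirical check of the ordering condition in Lemma 4.1: the two
    scores yield the same deferral curve iff they rank the instances
    identically, so every alpha-quantile threshold on z selects the
    same set of instances as the corresponding threshold on z_star."""
    order = np.argsort(np.asarray(z), kind="stable")
    order_star = np.argsort(np.asarray(z_star), kind="stable")
    return np.array_equal(order, order_star)

# Subtracting a constant eta_2 preserves the ordering of eta_1, so
# confidence-based deferral matches the optimal curve; flipping the
# sign reverses the ordering and breaks the condition.
z = np.array([0.9, 0.5, 0.7])
print(same_deferral_curve(z, z - 0.6))  # True
print(same_deferral_curve(z, -z))       # False
```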
Pdf: /pdf/709e6d523acd3206274f3c78898a039244b5092b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors study the problem of confidence-based cascade deferral for the classification task. They first proposed the formulation of the Bayes optimal deferral rule, which relies on both the base model and its successive model, and then proposed several baselines that work by mimicking the Bayes optimal rule. They further analyzed confidence-based methods and proposed the condition for their consistency. Given the fact that these baselines' computational costs are not acceptable, they instead propose a post-hoc method guided by the baselines and analyze its consistency under different conditions. Experimental results validated the performance of the proposed method.
Strengths: 1. The authors studied the optimality condition for cascade deferral rule and show the formulation of the Bayes optimal rule that relies on the confidence score of two models, which takes an intuitive form that implies the limitation of confidence-based deferral.
2. The authors gave several baselines to approximate the Bayes optimal rule, and further proposed the post-hoc versions for them to resolve the computational issues.
3. Theoretical analyses are conducted for both the confidence-based method and post-hoc deferral rule under various conditions.
Weaknesses: 1. Though the connection between this work and learning to defer/ classification with rejection is revealed, it is not thoroughly investigated in my opinion. In fact, learning to defer and classification with rejection are shown to be equivalent to the ordinary multi-class classification problem [1, 2]. In the post-hoc regime, I think it is worth trying to reduce the training of the cascade deferral rule to a multi-class classification problem by setting the maximum confidence of each model as a pseudo posterior probability.
2. As stated in the limitation section, the analyses are conducted on a case-by-case basis, indicating that there is still progress needed to develop a consistent and computationally friendly deferral rule. While the experimental results suggest that the proposed method is comparable, it is still a matter worth addressing. Conducting a finite-sample analysis could significantly alleviate this concern.
[1]. Mohammad-Amin Charusaie, Hussein Mozannar, David A. Sontag, Samira Samadi. Sample Efficient Learning of Predictors that Complement Humans. ICML 2022.
[2]. Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama. Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses. NeurIPS 2022.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are adequately analyzed in Appendix G.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> The analyses are conducted on a case-by-case basis. There is still progress needed to develop a consistent and computationally friendly deferral rule.
We reiterate that by Proposition 3.1, the Bayes-optimal deferral rule is based on thresholding $s^*( x ) = \eta_{h^{(1)}(x)}(x) - \eta_{h^{(2)}(x)}(x)$. By Lemma 4.1, confidence based deferral is optimal if $\eta_{h^{(1)}(x)}(x)$ produces the same order over input instances compared to $s^*( x )$. This one condition characterises the optimality of the confidence based deferral and thus can be seen as a unified (population level) analysis.
The three cases presented in Section 4.1 (i.e., specialist setting, label noise, distribution shift) are intuitive problem instances where the confidence-based deferral is not optimal. Note that in experiments on all these three settings, the same set of post-hoc approaches and the same training recipe are used (see Section C.2). In all cases, the post-hoc model is a small MLP composed of no more than 100 hidden units. The overhead in the inference cost from calling the post-hoc model is negligible, and thus we believe this approach is computationally-friendly.
> Conducting a finite-sample analysis could significantly alleviate this concern.
We agree with the reviewer that conducting a finite-sample analysis is of interest. As we demonstrate next, **the Diff-01 method readily admits standard generalisation bounds.**
### Finite sample analysis
Recall that for given base classifier $h^{(1)}$ and $h^{(2)}$ and deferral cost $c$, the optimal deferral rule takes the form
$r^*(x) = 1[g^*(x) > c]$,
where
$g^*(x) = \eta_{h^{(2)}(x)}(x)-\eta_{h^{(1)}(x)}(x)$.
We seek to approximate this optimal rule by constructing samples $S_{\rm{val}} := \{ ( x_i, z_i) \}$, where the labels $z$ are chosen such that $\mathbb{E}[ z | x] = g^*(x)$. We then pick a scorer $\hat{g}$ from a hypothesis class $\mathcal{G}$ that minimizes the empirical average squared loss on this sample:
$$\hat{R}\_{\rm sq}(g) \,= \frac{1}{| S\_{\rm{val}} |} \sum\_{( x_i, z_i ) \in S\_{\rm val}} ( z_i - g( x_i ) )^2,$$
and construct a deferral rule $\hat{r}(x) = 1[\hat{g}(x) > c]$.
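A minimal sketch of this construction (illustrative only, not the paper's implementation; the polynomial feature map is a hypothetical stand-in for the hypothesis class $\mathcal{G}$, and the noiseless targets are purely for demonstration):

```python
import numpy as np

def fit_posthoc_rule(X_val, z_val, c, degree=1):
    """Least-squares fit of a scorer g-hat over simple polynomial
    features (a toy stand-in for the class G), then threshold at the
    deferral cost c to obtain r-hat(x) = 1[g-hat(x) > c]."""
    feats = np.hstack([X_val ** d for d in range(degree + 1)])
    w, *_ = np.linalg.lstsq(feats, z_val, rcond=None)
    def rule(X):
        f = np.hstack([X ** d for d in range(degree + 1)])
        return (f @ w) > c
    return rule

rng = np.random.default_rng(0)
X_val = rng.normal(size=(200, 1))
z_val = 0.5 * X_val[:, 0]              # noiseless targets, demo only
rule = fit_posthoc_rule(X_val, z_val, c=0.2)
print(rule(np.array([[1.0], [-1.0]])).tolist())  # [True, False]
```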
We have the following result.
**Proposition**
Let $\mathcal{N}(\mathcal{G}, \epsilon)$ denote the covering number for $\mathcal{G}$ with the $\infty$-norm. Suppose for any $g \in \mathcal{G}$, the squared loss $(z - g(x))^2 \leq B, \forall (x, z)$.
Fix $\delta \in (0,1)$. Then with probability at least $1 - \delta$ over draw of $S_{\rm val}$,
$$R(\hat{r};h^{(1)},h^{(2)}) - R(r^*;h^{(1)},h^{(2)}) \leq \mathbb{E}\_x [ (\tilde{g}(x) - g^*(x))^2 ] + 4\cdot\inf\_{\epsilon > 0} \left( \epsilon + B\sqrt{ \frac{ 2\cdot\mathcal{N}(\mathcal{G}, \epsilon) }{|S\_{\rm val}|} } \right) + \mathcal{O}\left( \sqrt{\frac{\log(1/ \delta)}{|S\_{\rm val}|}} \right)$$
In the above finite-sample bound, the first term on the right-hand side (RHS) is an approximation error quantifying the distance between the best possible model $\tilde{g}$ in the class $\mathcal{G}$ to the optimal function $g^*$. The second and the third terms on RHS can be interpreted as an estimation error.
> Though the connection between this work and learning to defer/ classification with rejection is revealed, it is not thoroughly investigated
**The reviewer is completely correct that there are close links between model cascading and learning to defer.** However, as this is not the primary focus of the work, in the interest of space we “deferred” a detailed discussion of this topic to the cited work of [Narasimhan et al., 2022]. We are happy to add citations to the referenced works from the learning to defer literature.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed response. My concerns are all addressed, and I will keep the score. | null | null | null | null | null | null |
On the Variance, Admissibility, and Stability of Empirical Risk Minimization | Accept (spotlight) | Summary:
This paper proves that the variance of ERM enjoys a minimax rate. The findings indicate that in scenarios where ERM is not optimal, the source of suboptimality lies within the bias component. Furthermore, these insights are extended to encompass an admissibility-type theorem for both fixed and random design, as well as a stability result for ERM.
The paper's contributions can be summarized as follows:
1) Demonstrating the optimality of the variance term in both fixed and random design situations.
2) The obtained results automatically yield a stability result for ERM.
3) Presenting a simpler proof for the admissibility theorem in the fixed and random design setting.
4) Highlighting a counterintuitive outcome concerning the ERM estimator $\widehat{f}_n$: when $f'$ is close to $\widehat{f}_n$, it can lead to a large squared error.
Strengths: The paper studies an important problem, provides convincing conclusion. The problem is not only intellectually interesting, it also advances our understanding of ERM, which is arguably the most important estimator for modern machine learning.
Weaknesses: Typo: above Equation 1 $f^* \in f^*$.
Remark should be referred to in the appendix.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Given the density of results here, it would help the reader to better understand the paper if the theorems could be summarized in a table.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind report and for pointing out that typo. In the revised version, we will follow your suggestion and embed a tabular summary of the results. We will move many of the remarks to the appendix.
---
Rebuttal Comment 1.1:
Title: The rebuttal solves all my concerns
Comment: The rebuttal solves all my concerns | Summary: This paper explores the minimax optimality of ERM in terms of the bias and variance of the ERM method.
They find in some settings that the variance is always at the minimax rate, implying that suboptimality can only occur due to bias. This paper also explores stability of ERM, finding that almost-minimizers are close to the ERM w.r.t the population distribution, but not necessarily the empirical distribution.
Strengths: * Cool results and proofs in the fixed design setting
* The random design setting results are very interesting as well, albeit harder to digest all the quantities.
Weaknesses: * The independence relations should be clarified in the preliminaries. I assume that $\mathbf{\xi}$ is independent from $\mathbf{X}$ in the random design case, but I don't think that's mentioned.
Minor Typos:
* In the proof of Theorem 2, $\mathcal{G}$ and $\mathcal{G}_\star$ seem to be used interchangeably.
* The statement of Lemma 2 in the proof of Theorem 2 seems to have a typo? I think it should be $f$ instead of $f^\star$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Given that assumption 8 is necessary to get from a $\epsilon_V^2$ to an $\epsilon_\star^2$ bound, how restrictive is this assumption?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind report and your typo fixes. We have clarified that the noise vector is drawn independently of the data points.
Regarding Assumption 8: it is motivated by high-dimensional models, especially in the ``benign overfitting'' literature, where it is assumed that ERM interpolates the noisy observations. However, there are function classes which do not satisfy it. As mentioned below its statement, the assumption cannot hold if $\mathcal F$ is taken to be a compact class of functions on some compact domain $\mathcal X \subset \mathbb R^d$. For instance, if $\mathcal F$ is the class of $1$-Lipschitz functions, $d$ must increase at least logarithmically in $n$.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for addressing my questions. I will maintain my score. | Summary: This paper studies the suboptimality of Empirical Risk minimization (ERM) of the squared loss, or equivalently, Least Squares (LS), for convex function classes, in both fixed and random design settings. In the context of non-parametric statistics, necessary and sufficient conditions for the optimality of LS are not known (though sufficient conditions were given by Birge and Massart, '93).
Roughly speaking, the main message of this paper is showing, in both settings, that the suboptimality is due to the bias term, while the variance term enjoys minimax rates, under some reasonable assumptions.
Strengths: This paper studies a classic and fundamental question in statistics, understanding the suboptimality of Least Squares.
This paper makes nice progress in this direction. Besides the main result (mentioned in the summary), there are several other contributions, such as the (weak) admissibility property of the ERM in the random design setting; sometimes ERM can be optimal.
Moreover, one of the results shows that ERM is stable; all approximate minimizers of the squared loss are close to each other in function space. On the other hand, the converse does not hold: sometimes the ERM is an optimal estimate, and yet there exist functions close to it with high empirical error.
Also, the proof techniques (the isoperimetry approach for example) look interesting and elegant.
Weaknesses: I don't see any.
There is a chance that I didn't understand some details, this is not my main research field and some proof techniques were new to me.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Typos:
line 69: should be $f*\in F$.
line 16: should be "in detail".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see any.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind report and your typo fixes. | Summary: The paper considers the Empirical Risk Minimization (ERM) problem with squared loss and shows that the suboptimality of ERM is due to the bias rather than variance.
Strengths: This paper is quite theoretical and technical, and it provides useful insights from several perspectives. The paper finds that (1) the variance term of ERM agrees with the minimax rate of estimation in both fixed and random design, (2) ERM is stable, (3) the landscape of near-solutions for non-Donsker classes is irregular, and (4) there is a relation between isoperimetry and the variance of ERM.
Weaknesses: The main weakness is organization. Too many results are packed into nine pages, which makes the paper feel very crowded.
Some definitions and results could be explained in more detail. For example, what is meant by the minimax rate of ERM and of the variance, respectively? What is local optimality? What are Donsker and non-Donsker classes? Is there any specific application of the current results? The technical differences between fixed and random design could be explained at a very high level.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I found that this paper is more suitable for a conference like COLT instead of NIPS. An audience from the learning theory field might be more interested in this topic.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: More examples and applications should be included. Otherwise, it is merely a technical paper with no practical usage.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your report. We agree that the intuitive explanations of our assumptions and results are a bit briefer than we would like, which was forced upon us by the page limit for the submission. In the revised version, we will move Theorem 2 to the appendix and use the freed space, together with the additional page allowed for the revision, to provide fuller explanations and improve the accessibility of this manuscript.
- Applications: our results are applicable to the ERM under extremely mild assumptions on the function class, as well as to other estimators satisfying Lipschitz conditions, such as regularized least-squares. A wide variety of problems intensively studied in non-parametric statistics, from estimation of a bounded $L$-Lipschitz or convex function, to kernel regression, satisfy these assumptions.
- "I found that this paper is more suitable for conference like COLT....'' - At first glance, this paper may look very theoretical. However, our results reveal that, with regard to the large learning models commonly used in practice, if the ERM performs poorly then it does so because it is biased. In contrast to the setting of models with few parameters, efficient debiasing methods for ERM on these large models do not (to our knowledge) exist (see Item 1 in the general rebuttal). We believe that the main message of our work -- efficient debiasing methods are all you need to improve the statistical performance of ERM over large models -- is insightful for the entire learning community. We have moved this takeaway from the end of the paper to the introduction.
Rebuttal: We thank all reviewers for their thoughtful comments. We provide the following general remarks and comments for all reviewers and to the area chair:
1. The main contribution/message of the paper: Empirical Risk Minimization (ERM) with squared loss, or any "Lipschitz" loss in the observations, enjoys a minimax optimal variance error term. Therefore, if the ERM is minimax suboptimal, this is due to its bias. Beyond mere intellectual curiosity, our results demonstrate the salience of finding computationally efficient methods to debias ERM (see the remark below). This finding aligns with a few recent works that demonstrate that ERM (or MLE) suffers from high bias in certain classical models in their high dimensional settings, such as logistic regression or $l_1$ regression. We have proven that the potential high bias of ERM is a general phenomenon over rich function classes or in "high dimensions."
This takeaway appeared at the end of the manuscript (page 9); in the revised version, we improved the phrasing and placed it more prominently in the introduction section.
Remark: In the classical regime (i.e., classes with a low number of parameters), one may reduce the bias by using bootstrap methods (see, e.g., B. Efron, "Bootstrap methods: another look at the jackknife," Ann. Statist. 7(1): 1-26). Unfortunately, according to our knowledge, in the non-parametric case or with models with a large number of parameters (such as neural networks), such methods do not work.
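For readers unfamiliar with the technique referenced in the remark, here is a minimal sketch of classical bootstrap bias correction in the spirit of Efron (illustrative only; `bootstrap_debias` is a hypothetical helper, not code from the cited work or from this paper):

```python
import numpy as np

def bootstrap_debias(sample, estimator, n_boot=2000, seed=0):
    """Efron-style bootstrap bias correction: estimate the bias of
    estimator(sample) by the mean deviation of the estimator over
    resamples, and subtract that estimate."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(sample)
    boot = np.array([
        estimator(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_boot)
    ])
    return theta_hat - (boot.mean() - theta_hat)

# The plain variance estimator (dividing by n) is biased downwards;
# the bootstrap-corrected value moves it towards the unbiased one.
x = np.random.default_rng(1).normal(size=50)
corrected = bootstrap_debias(x, lambda s: s.var())
print(x.var(), corrected)
```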
2. Clarity and organization of the manuscript: We have enhanced the organization of the manuscript by adding more intuitive explanations for the quantities that appear in the random design, and moved some of the more technical results to the appendix. This change is aimed at improving the accessibility of the paper. Additionally, in line with the reviewers' suggestions, we have incorporated a concise summary of all our results, potentially presented in a table format, to provide readers with a quick overview.
3. Generality of our results: Although we presented our results specifically for Empirical Risk Minimization (ERM) with squared loss, otherwise known as the LS estimator, our findings are more broadly applicable. As indicated in our remarks, many of our results hold for estimators that exhibit certain stability properties, such as Lipschitzness in noise or data. This includes many variants of convex regularizers, such as ridge regression. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the classic problem of regression under noisy observations, with a convex and closed function class $\mathcal{F}$. In particular, the paper considers the performance of the Empirical Risk Minimizer (ERM) under the mean squared error, in both the fixed design and random design settings.
At a high level, the author(s) show that:
1. The variance of the ERM is comparable to the minimax rate of regression, implying that when ERM is not minimax-optimal, it must be due to large bias
2. ERM is admissible up to constant factors, via a simpler proof using fixed point theorems
3. ERM enjoys stability, in the sense that all almost-empirical-loss-minimizers also have similar expected loss as the actual empirical loss minimizer.
4. For "non-Donsker" function classes, the converse of 3 is false: there is always some ground truth function $f^* \in \mathcal{F}$ such that, with high probability over the samples and noise, there is a function $f_{\mathrm{bad}}$ whose expected loss is close to the ERM's, but whose empirical loss is much larger than the ERM's.
I did the following review as an emergency review, so I did not check the details/appendices carefully.
Strengths: The ERM is perhaps the most commonly used regression estimator. This paper furthers our understanding of its behavior, particularly for function classes where ERM isn't minimax-optimal. I found the introduction well-written, describing each result at a high level. I also find the observation that "non-minimax-optimality must be due to bias" to be an interesting result.
Weaknesses: While I liked the paper, and recommend a weak accept, I think there are the following writing/presentational issues that can be improved. Overall, my comment is that the paper currently perhaps reads better for experts who have worked on ERM (or at least, in empirical process theory), but not that easy to read for a more general learning theory audience.
- The paper reads a bit like a collection of related results, but without a "main point", and could maybe be strung together better as a story. For example, the "variance <= minimax-rate" result and the stability result seem somewhat disjoint in the introduction, even though later on in Theorem 1, they are shown together as one big result. I was also a bit confused about the writing in Section 2, in terms of the narrative. For example, I don't quite understand how Theorem 2 "complements" Theorem 1 (cf. Line 150), though Theorem 2 is a cool result itself with the local minimax optimality and is used to prove Corollary 1 (the admissibility result). Moreover, Theorem 3 also "complements" Theorem 1 (cf. Line 185), but there's a quadratic gap. Is the gap necessary?
- I also found that the technical Section 2 is a bit too heavy on definitions and assumptions and not enough interpretation. When I was reading, I felt that the author(s) tried hard to succinctly give the most general results that are shown, but in my opinion, for readability, it might be better to start with simpler cases (and perhaps even informal versions) of statements (particularly the assumptions), and provide more interpretation.
- Related to the above point, some of the assumptions/quantities in the results are stated without much interpretation or intuition (in the main body). There are also a few comments along the lines of "this assumption is considered mild in the literature/by some other authors" without much additional interpretation. This makes the paper not quite as self-contained as it could be. This issue is particularly prevalent in Section 2.2 (especially the isoperimetry assumptions), and it became quite hard to interpret the results. I can see that there are some remarks in the appendices, but I think a lot of them really should be in the main body for readability.
- The term non-Donsker was never defined in the main body, even though it is a key element in a main result. While Donsker classes are a basic object in empirical process theory, I don't think they should be assumed knowledge for the general theory reader. There is also a lack of discussion on whether Theorem 4 only holds for non-Donsker classes, or, more generally, on the significance of the assumption: whether it is a necessary condition for the result, or just the result that can currently be proved.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Minor questions and comments:
1. Please consider using the same number-counter across all of definitions/theorems/assumptions. It was hard to scroll through the paper looking at backward references when reading.
2. (Line 164) The author(s) mention that $\mathcal{G}_\ast$ in Theorem 2 can replace 2 with any other absolute constant. How does the replacement propagate to Theorem 2? Does it change the constants in the $\asymp$?
3. Theorem 6, the assumptions 1,2,4,6 have nothing to do with the instantiation of $\mathbf{X}$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed report. Following your comments as well as those of other reviewers regarding the density of results in the paper, in the revised version, we moved the relatively technical Theorem 2 to the appendix and inserted more explanations/intuition into the main body, which the more generous page limit for the revision enabled us to do.
Regarding some of the specific points raised by this review:
- The minimax optimality of the ERM's variance and its stability are conceptually distinct phenomena and thus explained separately in the introduction, but mathematically they are so closely linked that it is easier to present them together in Theorem 1.
- Theorem 2 complements Theorem 1 mainly in the sense that they are both needed to prove the ERM's admissibility. In any case, Theorem 2 will be relegated to the appendix in the revision.
- Regarding the quadratic gap between Theorem 3 and Theorem 1, we do not know if it is necessary, but we do not currently know how to improve it.
- "The paper reads a bit like a collection of related results, but without a "main point", and could maybe be strung together better as a story." - see Item 1 in the rebuttal for all reviewers. We also improved the organization of the paper so that the main points and the applications appear before the main results, which are more technical in nature.
- "The term non-Donsker was never defined in the main body" - we will add a definition and some examples for non-Donsker classes and move this result to the random design section, and provide some more motivation as well.
- "I also found that the technical Section 2.... Related to the above point, some of the assumptions/quantities in the results are stated without much interpretation'' - Again, we have moved some results to the appendix, which along with the extra page frees up room to add more explanations and concrete examples, especially in the random design setting.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I hope the authors will actually implement the changes described, since they all will make the paper a lot more readable and self-contained. My final comment is that the sentence "Beyond mere intellectual curiosity, our results demonstrate the salience of finding computationally efficient methods to debias ERM" from the overall response is useful to emphasize further in the final paper.
I will maintain my current score. | null | null | null | null | null | null |
Effectively Learning Initiation Sets in Hierarchical Reinforcement Learning | Accept (poster) | Summary: The paper considers an HRL setting with options (including sub-goal reward functions) in which termination conditions for options are provided, and the goal is to learn the option policies and initiation sets simultaneously. The main focus is on learning of initiation sets in an online scenario where option policies are changing. The authors highlight three possible issues with learning the initiation set with classifiers in such an online setting. Firstly, the data is non-stationary because the policy keeps changing during learning. Secondly, classifiers ignore the temporal structure of the RL task. Thirdly, the difficulty of expanding the initiation set arises due to the agent not being initialized (and consequently not visiting) states outside of the initiation set.
To alleviate the first two issues, they propose combining a classifier with initiation value function (IVF) weights (expected probability of successfully completing the task from a state). They additionally propose addressing the third issue by adding an exploration bonus to IVF. The experiments compare the performance with these proposed changes against ablations without (some of) these changes. The last experiment includes a comparison against a prior method where the proposed method is used in an existing HRL algorithm.
Strengths: - Originality:
- Submission proposes an original approach for learning initiation functions in the option setting that is considered. However, since inferring success probability is a problem that should span other parts of RL, I am not certain whether a similar approach already exists in other branches of RL.
- Clarity:
- Submission is well organized (proper separation into sections and subsections etc.)
- Data non-stationarity, pessimistic bias and temporal structure parts in section 3 are well explained
- Significance:
- Submission tackles a task of learning initiation sets. This task is an important problem that is being researched by the HRL community
Weaknesses: - Quality:
- I am not fully convinced that the submission is technically sound. In particular:
- I think that the problem setting described in the paper is somewhat confusing. The background section mentions SMDP and the options framework but adds an internal option reward function and discount (lines 66-67). As far as I am aware, these are not a standard part of the options framework. From lines 147-150 I understand that the option policy is trained to maximize this internal reward function. This makes the problem different from the one considered in the original options paper [1], which uses a single global reward function, but the paper does not elaborate on this. It is also unclear to me what the relationship between the success condition cumulant $c_0$ and the option reward function is in section 3.1. The former is not used to learn the option policy, but shouldn't reaching "success" (termination states) be something that the policy is rewarded for?
- When the IVF is introduced the timescale (discount) is set to 1 and the horizon is capped at $t=H_0$. However, there is no discussion about the effect of the $H_0$ on learning. As far as I can tell this seems to be a very important hyperparameter especially when discount is set to 1, since based on this value the size of initiation set should increase (for bigger $H_0$) or decrease (for smaller $H_0$)
- The paper does not include qualitative visualization of the learned initiation sets and IVF in the easy environment (four rooms). I think that such visualization should be in the paper to evaluate and justify proposed changes and help to understand their effect on learned initiation sets and IVF.
- It is unclear from the experiments whether (and in which cases) proposed adjustments help. The plots (in particular Figure 2), seem inconclusive to me.
- Authors do not discuss the limitations/weaknesses of their method
- In Figure 5., the prior work baseline seems to perform very poorly. While I understand that the setting was slightly adjusted, I am not sure whether the baseline was properly tuned.
- Clarity:
- The comparisons are sometimes not well explained (what are GVF baseline in Fig. 2?)
[1] Sutton, R. S., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In the abstract you mention that option policy and initiation set must be learned simultaneously? Could you elaborate on why this is the case?
- In line 92 you mention that "initiation function is used in the context of primitive actions". What does context of primitive actions mean here?
- Regarding data non-stationarity part in section 3. Wouldn't it be possible to discard old training samples based on the probability of current policy executing old trajectories?
- Is it possible to use IVF with thresholding directly instead of learning a weighted classifier? If yes, why haven't you included this baseline in the paper?
- Why does the option use internal reward function instead of a global one?
- Shouldn't reaching "success" (termination states) be something that the option policy is rewarded for?
- If the option maximizes internal reward function, is the problem of finding policy $\pi_0$ pretty much an MDP $<S,A,R_0, T, \gamma>$?
Comments:
- line 70 - you cite Feudal RL with "usually assume that all options apply everywhere"; AFAIK, this approach does not use options and was in fact proposed before options
- line 109 - It is probably better to cite the full paper Options of Interest: Temporal Abstraction with Interest Functions (Khetarpal et al. 2020) from AAAI 2020 as a reference to IoC
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Authors do not sufficiently discuss the limitations/weaknesses of their method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy that you found our method to be original and the problem setting to be important. We hope that we can clarify some misunderstandings and convince you of the value of our proposals to the HRL community.
## Weaknesses
> Inferring success probability is a problem that should span other parts of RL
We agree the problem is very general and we hope that our solution is also adopted by other parts of RL. However, we are not sure how that is a weakness of our work.
> The background section mentions SMDP and the options framework but adds internal option reward function and discount, which makes the problem different from the one in the original options paper, which relies on a single global reward function.
It is very common in HRL for options to have their own internal reward functions [1]; this was discussed in the HRL survey paper [3] (whose formalism we adopt in our Background section) and has been the case in option discovery research for several decades [2]. We can add more formalism around this in the background section if you want, but we hope that this is not a reason for rejection.
> What is the relationship between the success cumulant and the option reward function in section 3.1?
The option reward function is used to learn the option policy--it depends on the environment and is designed to enable sample-efficient policy learning. For example, in Robosuite, the reward function is dense and depends on the configuration of objects in the scene. The initiation cumulant, by contrast, is fixed and environment agnostic---it is always $+1$ for reaching the goal and $0$ otherwise. However, this cumulant should not always be used for policy learning; for example, you might want negative rewards for states to avoid, or shaped rewards for sample efficiency.
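To make the distinction concrete, here is a minimal Monte Carlo sketch (our own toy illustration with hypothetical names, not the authors' implementation) of the environment-agnostic success cumulant and the undiscounted, horizon-capped success probability that the IVF estimates; in the paper the IVF is learned with off-policy policy evaluation rather than fresh on-policy rollouts:

```python
def success_cumulant(state, goal):
    # Environment-agnostic cumulant: +1 on reaching the subgoal, 0 otherwise.
    return 1.0 if state == goal else 0.0

def monte_carlo_ivf(env_step, policy, start, goal, horizon, n_rollouts=100):
    """Estimate the probability that `policy` reaches `goal` from `start`
    within `horizon` undiscounted steps, by averaging rollout successes."""
    successes = 0
    for _ in range(n_rollouts):
        s = start
        for _ in range(horizon):
            s = env_step(s, policy(s))
            if success_cumulant(s, goal) == 1.0:
                successes += 1
                break
    return successes / n_rollouts
```

On a deterministic chain where the policy always steps toward the goal, this returns 1.0 when the horizon suffices and 0.0 when it does not, mirroring the horizon's role as an upper bound on the initiation set.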
> When the IVF is introduced the discount is set to 1 and the horizon is capped at H0. However, there is no discussion about the effect of the horizon on learning.
In our experiments, we treated the horizon as a hyperparameter but forgot to include them in the appendix. Here are the values: MiniGrid $50$, MontezumaRevenge $200$, RoboSuite $250$, AntMaze $200$ (all methods used the same horizon). These numbers were picked based on the overall time limit (max_steps/episode) for each domain and we did not tune them. We apologize for the omission and will include this information in the camera-ready if the paper is accepted.
Based on your suggestion, **we ran an experiment in which we swept over the option horizon $H_o$** in MiniGrid-FourRooms and recorded the accuracy and size of the learned initiation sets; the results are included in the accompanying PDF. The option horizon $H_o$ has a predictable effect on accuracy: as long as $H_o$ is not too small ($\geq 40$ timesteps), accuracy is high. Your intuition about the relationship between horizon and initiation set size is *mostly* correct: $H_o$ sets an upper bound on the size of the initiation set, but the final size also depends on the competence of the learned policy and the connectivity of the underlying MDP.
> The paper does not include qualitative visualization of the learned initiation sets and IVF in four rooms
Thank you for your suggestion, **we visualized the learned classifier and IVF** for MiniGrid-FourRooms; see Fig A in the accompanying PDF.
> The plots (in particular Figure 2), seem inconclusive to me.
Figure 2 shows that our method achieves \~$25$-$30$% increase in accuracy in MiniGrid and \~$50$% increase in accuracy in Montezuma’s Revenge; these results suggest that our techniques **conclusively** and significantly outperform the baseline approach to learning initiation sets. To be clear, the line labeled “Baseline Binary” is the only one that is not our contribution and corresponds to the method used in almost all prior work; the other three lines are novel to our paper.
> (Fig 5) I am not sure whether the baseline was properly tuned.
We used the author implementation of baseline DSC. We reproduced their reported results when the initiation was learned over the $(x, y)$ location of the ant, but the performance drops dramatically when the full state is used for initiation learning. We tried different values of gestation period $\{5, 10\}$ and learning rates $\{10^{-3}, 10^{-4}, 10^{-5}\}$; none of these settings could solve AntMediumMaze. Our algorithms (Weighted Clf and Initiation GVF) were also implemented in the author codebase for an apples-to-apples comparison. The DSC codebase is open-source, so if you want to try the experiment yourself, we are happy to share the run commands and configs we used.
## Questions
> Why should the option policy and initiation set be simultaneously learned?
The initiation set evaluates the reachability of the current policy, so as the policy changes in the continual learning setting, so should the initiation set, i.e., they depend on each other.
> Is it possible to use IVF with thresholding directly instead of learning a weighted classifier? If yes, why haven't you included this baseline in the paper?
We *do* use the IVF by itself – since it always outputs values between 0 and 1, it is much easier to threshold than the value function used for action-selection (see Section 3.1). We should clarify that the thresholded IVF is *not* a baseline, it is one of the two algorithms we propose. That line was mistakenly labeled as “GVF” in Fig 2, but it should read “IVF”.
> If the option maximizes internal reward function, is the problem of finding policy pretty much an MDP?
Yes, that is correct! In fact, M White et al 2017 formalized options as RL subtasks.
## References
[1] Precup, Temporal abstraction in RL, PhD Thesis (2000).
[2] McGovern & Barto. Automatic discovery of subgoals in reinforcement learning using diverse density (2001).
[3] Barto & Mahadevan. Recent Advances in Hierarchical Reinforcement Learning (2003).
[4] Sutton et al. Reward-respecting subtasks for model-based reinforcement learning (2022).
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your response.
- **We agree the problem is very general and we hope that our solution is also adopted by other parts of RL. However, we are not sure how that is a weakness of our work.**
In the setting that is considered (terminations are given, internal reward function), the problem becomes learning a policy in an MDP while also inferring whether a policy can reach a certain state in a set time horizon. While here it is mostly considered in the hierarchical setting, it is quite likely that a similar setting may arise in non-HRL areas of RL. My comment was not in the weaknesses section because I do not consider the originality of the work to be a weakness. I do think that there is an original contribution. This comment reflected my uncertainty about possible connections to prior related work outside of the HRL space.
- **It is very common in HRL for options to have their own internal reward functions...**
Thank you for the clarification. Could you please incorporate these references into the background section in the next revision?
- **In our experiments, we treated the horizon as a hyperparameter but forgot to ...**
I appreciate the additional results and have some comments/questions:
- When you calculate accuracy, shouldn't Figure A (c) result in 100%? I suppose that you get <100% for plots because you consider successes from samples for this metric, but wouldn't it be better to evaluate accuracy by comparing (c) with the true initiation set (thresholded true IVF or something similar)? Could you maybe elaborate on how accuracy is calculated?
- I don't think that the setting in which H=50 is the most relevant, because (as far as I can tell) simply using the trivial initiation set I=S would lead to the best performance (100% accuracy and set size). This is because it should take fewer than 50 steps to reach the goal from every state in S. Is there something I am misunderstanding? I think that using a lower H, where there is a region in which options should not be initiated, is much more interesting, since it imitates the real use case and requires your method to find I different from S.
- Are these plots in Figure A with H=50?
- Why does the performance of the algorithm drop with lower H? As far as I can tell, there should be no reason why the accuracy should drop if the classifier can predict states that can reach the goal correctly. Having plots similar to Fig. A with H<50 would probably help explain this?
- I think that it would be useful to have similar experiments (with changed H) for other settings to see the effect of this (important) hyperparameter
- Is the goal reachable from all initial states within H steps in montezuma's revenge?
- **We used the author implementation of baseline DSC. We reproduced their reported results when the initiation was learned over the
location of the ant, but the performance drops dramatically when the full state is used for initiation learning. ...**
- I think that it would be best if the results on the original domain were included in the paper (together with the new domain) for a fairer comparison, since tuned hyperparameters are available for that domain, unless there is an explicit argument for why the DSC baseline would not be able to handle the new setting.
---
Reply to Comment 1.1.1:
Title: Further clarifications
Comment: Thank you for continuing to engage with our work, we appreciate it!
> Could you please incorporate these references into the background section?
Absolutely!
> Could you maybe elaborate on how accuracy is calculated?
At each state $s\in\mathcal{S}_0$, we record the initiation decision made by the learning algorithm as $\hat{\mathcal{I}}_o(s;\theta)\in\\{0, 1\\}$. We then execute the option policy $\pi_o$ from that state and record whether or not the agent reached the option's subgoal as $Y_s\in\\{0,1\\}$. Accuracy at state $s$, for option $o$ is then given by the indicator $1(\hat{\mathcal{I}}_o(s;\theta)=Y_s)$. This process is repeated several times for all options $o\in\mathcal{O}$ and start states $s\in\mathcal{S}_0$. Detailed discussion in Section 4.1 and Appendix B.1.
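A toy sketch of this accuracy metric (hypothetical function names, assuming deterministic oracles for the learned decision and the rollout outcome; in practice the pairs are collected over several repeated rollouts):

```python
def initiation_accuracy(options, start_states, predict, rollout):
    """Fraction of (option, start state) pairs where the learned initiation
    decision matches the observed rollout outcome Y_s."""
    correct, total = 0, 0
    for o in options:
        for s in start_states:
            decision = predict(o, s)   # \hat{I}_o(s; theta) in {0, 1}
            outcome = rollout(o, s)    # Y_s in {0, 1}: did pi_o reach o's subgoal?
            correct += int(decision == outcome)
            total += 1
    return correct / total
```

When `predict` perfectly anticipates the rollout outcome the metric is 1.0; an initiation set that is always "true" is penalized exactly on the states from which the policy fails.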
> Wouldn't it be better to evaluate accuracy by comparing A(c) with the true initiation set?
Yes, evaluation would be easier if we had access to the true initiation set, but we don't; we only have samples from it. In FourRooms, the ground-truth initiation set might be true everywhere at the *end of training*, but we cannot always treat I=S as ground-truth because:
- We do not have similar prior knowledge in other domains where the observation space is high-dimensional and harder to reason about.
- The ground-truth depends on the reachability of the *current* policy: if the policy is newly initialized, its ground-truth initiation set is not true everywhere.
> The trivial initialization set I=S would lead to the best performance (100% accuracy and set size). Is there something I am misunderstanding?
Yes, you are confusing the initiation set of the *current* policy with that of the *optimal* policy: we approximate the former by framing the problem as policy evaluation. So setting $\mathcal{I}=\mathcal{S}$ in FourRooms would register low accuracy most of the time because the policy would be unable to reach the goal within H steps, i.e, the ground-truth initiation set would not be true everywhere. Similarly, the size of the true init set would not be 100% because in early stages of learning, the policy does not have 100% competence. The best we can hope to do with non-stationary policies is to *track* their initiation set, which is what our algorithms attempt to do.
> Are these plots in Figure A with H=50?
Yes, we used $H=50$ in Figure A because that was the hyperparameter setting we used in the main paper.
> Why does the performance of the algorithm drop with lower H?
For very small values of H, the initiation classifier decides that only states that are right next to the goal (say 2 steps away) should be inside the initiation set. However, the stochastic policy is still able to sometimes reach the goal from states that are outside the learned classifier (say 4 steps away), just not reliably. This is why the accuracy is both low and high variance for very small values of H.
We have plots similar to Figure A for different values of H, but could not include in the rebuttal PDF because of space constraints. We are happy to include it in the main paper.
> It would be useful to have similar experiments (with changed H)
We will be sure to include experiments about the option horizon in the main paper, thank you!
> Is the goal reachable from all initial states within H steps in Montezuma's Revenge?
A horizon of H = 200 steps is sufficient for an optimal policy to reach the goal from all initial states in Montezuma’s Revenge. However, with finite data, training time and function approximation, we unsurprisingly do not recover the optimal policy. Furthermore, the initiation set captures the set of states from which the policy can succeed *with high probability*. So, even if the policy might *sometimes* reach the goal from some states, they would not be in the initiation set because of our insistence on reliability.
> [DSC] I think that it would be best if the results on the original domain would be included in the paper
Baseline DSC uses privileged information about which state variables are necessary for initiation set learning: they use the (x, y) location of the ant; we do not. Comparing our algorithm to a version of DSC which has access to privileged information would not be an apples-to-apples comparison, which is why we did not include that result in Figure 5.
Figure 6 in [1] shows that the baseline DSC (with privileged information) gets a success rate of ~50%, our algorithm gets a success rate >80% in roughly the same number of episodes. So, not only does our method remove the need for privileged information about which state variables are relevant for initiation learning, it also gets higher reward. If you want, we can include the baseline's final reported result as a dotted line in our plot.
Thank you once again for your time, we appreciate it! **Are there any other questions you want answered during the discussion period to consider raising your score?**
[1] Robustly learning composable options in deep RL, IJCAI 2021. | Summary: The paper presents an approach for efficiently learning initiation sets of options for approaches that automatically learn temporal options for hierarchical planning. The paper introduces the concept of the initiation value function (a probability measure of successful option execution for a given initiation set) and uses the IVF to learn initiation sets of options while option policies are also changing. Lastly, the approach learns a neural network for predicting the probability of successful option execution using a classifier. The authors evaluate their approach in a variety of settings, including discrete spaces (MiniGrid settings) as well as continuous robotics settings.
Strengths: - The paper is well written. Details are highlighted nicely.
- Authors have thoroughly presented related work and mentioned differences with existing works.
- The experiments are thorough. Sufficient details are provided for experimental setup.
- The paper successfully motivates the readers about the technical challenges in learning initiation sets of options using the traditional binary-classifier-based approach, and provides clear references to what part of their approach tackles each problem. This makes understanding the paper and following the approach a lot easier.
Weaknesses: The only weakness I see in the paper is a lack of clarity on how the approach is used with an option discovery approach. The given approach requires a policy to compute the IVF and learn a neural network. However, problems that require option discovery do not come with these options or a learned policy. It is unclear from the paper how this was made possible without having a fixed policy.
It would be helpful to highlight how the approach was configured to be used with DSC.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please refer to the weakness mentioned above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors do not explicitly address the limitations of the paper.
In my opinion the rest of the paper (apart from not explicitly highlighting the limitations and the weakness mentioned about) is written nicely.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted that you thought that our paper was well written, thorough and easy to follow!
We understand your desire for more details about how our methods are incorporated into the option discovery algorithm, DSC. Section B.3 (including Algorithms 2 and 3) in the Appendix provides more details about the DSC algorithm and the experimental setup. A lot of these details are part of the DSC algorithm, which is not our contribution, so we deferred the discussion to the Appendix. However, we are happy to add some more details to the main paper so that Section 4.3 is easier to follow.
Based on your suggestion, we will also add a discussion about the limitations of our method to the camera-ready.
Thank you once again for your thoughtful review!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and acknowledging the question. | Summary: This paper addresses the problem of learning initiation sets in hierarchical reinforcement learning. Initiation sets define the states from which an option can be executed successfully. However, learning initiation sets is challenging because they depend on the option policy, which changes as the agent learns. Previous approaches suffer from data non-stationarity, pessimistic bias, and a lack of exploitation of temporal structure. To address these issues, the authors propose the use of the Initiation Value Function (IVF) as a predictive value function that estimates the probability of option success. The proposed method is evaluated in various domains, including grid worlds, robot manipulation, and maze navigation, and shows improved performance compared to existing methods.
Strengths: 1) Novelty: the paper presents a novel method for learning initiation sets from off-policy trajectories by incorporating the temporal structure of the MDP. It also introduces an original technique to leverage the strengths of classification together with TD learning.
2) The evaluation of the proposed method is comprehensive, with separate experiments conducted to systematically test and validate each claim.
3) The performance of the proposed method, particularly when incorporating the bonus correction, shows significant improvements compared to existing methods.
Weaknesses: 1) There are certain parts of the paper that lack clarity, such as the algorithm for selecting the goal for the DSC method. Additionally, Figure 3 is challenging to interpret due to the excessive use of similar colors.
2) The paper primarily focuses on the DSC algorithm and does not explore the application of the proposed method to other option frameworks where termination states are also learned simultaneously. Expanding the evaluation to different option frameworks would have strengthened the paper's overall contribution.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Is it possible to use TD learning with cross-entropy loss from the logits instead of regression? Would this approach outperform TD learning with regression?
2) Regarding the "Initiation Set Accuracy" plot (Fig 2), it would be valuable to have a similar plot that compares the different methods while choosing a single option policy learning shared by all methods, focusing solely on the variation in the initiation set learning. The current plot, although informative, compounds the evaluation of initiation set learning with the resulting option policy learning.
3) The abstract mentions that the termination condition is typically identified first. However, the algorithm could potentially work even when the termination policy changes. Could you provide insights into the performance of the method if the termination state is learned in parallel? Alternatively, how restrictive is the assumption?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and suggestions. We are glad that you found our work to be novel and our experimental approach to be comprehensive and systematic!
> There are certain parts of the paper that lack clarity, such as the algorithm for selecting the goal for the DSC method.
The algorithm for selecting goals for DSC is discussed in Section B.3 of the Appendix. Additionally, [1] discuss the nuances of goal-selection in DSC. Briefly, the subgoal state for the current option is the positive example (among those used to train the parent option's initiation set) with the highest IVF prediction from the current state, $\mathcal{V}_o(s_t)$.
> Figure 3 is challenging to interpret due to the excessive use of similar colors.
You are right, thank you, we will improve the presentation for the camera-ready if the paper is accepted.
> Expanding the evaluation to different option frameworks would have strengthened the paper's overall contribution.
We chose DSC because of its explicit dependence on initiation set learning. Having said that, the primary focus of our experiments is evaluating how accurate the initiation sets are, and we added a secondary experiment to show the effect of that on downstream learning. There are many option discovery methods and we expect this approach will help all of them similarly.
> Is it possible to use TD learning with cross-entropy loss from the logits instead of regression? Would this approach outperform TD learning with regression?
Yes, that is a good suggestion: using distributional RL might further improve initiation set learning. We used the traditional TD error in this paper because of its simplicity and because distributional RL is not the focus of our paper.
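As a concrete illustration of the "traditional TD error" mentioned above, here is a minimal tabular TD(0) sketch (our own toy example under assumed dynamics, not the authors' code) that regresses an undiscounted success probability on a two-state chain whose terminal transition yields the $+1$ cumulant:

```python
def td0_update(values, s, s_next, cumulant, done, alpha=0.5):
    # Regression-style TD(0) target with gamma = 1, as in the IVF.
    target = cumulant + (0.0 if done else values[s_next])
    values[s] += alpha * (target - values[s])

# Two-state chain: state 0 -> state 1 -> subgoal (cumulant +1 on success).
values = {0: 0.0, 1: 0.0}
for _ in range(50):
    td0_update(values, s=1, s_next=None, cumulant=1.0, done=True)
    td0_update(values, s=0, s_next=1, cumulant=0.0, done=False)
```

Both states converge toward a success probability of 1, and because the targets are bootstrapped, the estimates would track a changing policy; the cross-entropy variant would instead treat the bootstrapped target as a soft label for a logit.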
> The current initiation set accuracy plot (fig 2), although informative, compounds the evaluation of initiation set learning with the resulting option policy learning.
Initiation set learning is extensively studied and well understood when the option policy is fixed; in that setting, it is just binary classification. While we agree that learning initiation sets in tandem with policies complicates evaluation, that is exactly the focus of our paper. We introduced the "initiation set size" metric alongside accuracy specifically to understand the interplay between initiation set learning and policy improvement, which naturally arises during continual/online reinforcement learning.
> Could you provide insights into the performance of the method if the termination state is learned in parallel? Alternatively, how restrictive is the assumption?
DSC uses goal-conditioned policies to deal with non-stationary termination/subgoal regions, and our proposed methods seem to work very well with DSC, so in that sense we already have an experiment that does what you are asking for (Figures 5 and 7). Other than that, if the termination condition of the option changes, its policy will likely degrade; since our initiation value function (IVF) is learned using off-policy policy evaluation, it will adaptively lower the policy's probability of success. Over time, as the policy becomes more competent at the new subgoal, the IVF's predictions will also increase. In other words, as long as the policy learning adapts to the non-stationarity, we fully expect our initiation set learning to also adapt.
Thanks again for your thoughtful reviews! If you have any questions/concerns, please let us know.
[1] Bagaria, Akhil, et al. "Robustly learning composable options in deep reinforcement learning." Proceedings of the 30th International Joint Conference on Artificial Intelligence. 2021.
---
Rebuttal Comment 1.1:
Title: Any additional questions or concerns?
Comment: We hope that we have been able to resolve most of your concerns via our rebuttal. Are there any other questions/concerns you want addressed during the discussion period to consider raising your score? | Summary: This paper studies the problem of learning initiation sets, which are important in hierarchical reinforcement learning for indicating where a policy will succeed, and identifies three main challenges that arise with existing methods to learn these sets. These three challenges are data non-stationarity, temporal credit assignment, and pessimism. The authors introduce the Initiation Value Function (IVF), which predicts the probability that an option will succeed from a given state; it is learned using off-policy policy evaluation and adapts as the policy improves. The IVF is effective when used as an input to a weighted classifier and additionally includes states for which the option policy is likely to improve in order to address pessimistic bias. Experiments in Minigrid, Montezuma's Revenge, Robosuite, and a maze navigation problem are shown, and in these environments, the method improves upon the baseline of using a General Value Function.
Strengths: - The paper is generally well-written and has empirical evaluations on a wide range of domains.
- The paper brings up some good discussion on three challenges to address for learning initiation sets in hierarchical RL.
Weaknesses: -The paper involves adding a lot of complicated tricks into a hierarchical method, and it's unclear if they're worth it for the often small improvements. I'm not sure if people will want to adopt this method. It's also unclear which of the challenges is most important, and which of the tricks added is most important. Can you add some more analysis/discussion on this?
- It's a bit strange how the authors emphasize the importance of exploiting temporal structure to learn the IVF, in order to be more effective than a classifier, but then just use the IVF to weight a classifier. It seems like a lot of effort to learn a value function just to weight the classifier. It's not clear if the temporal aspect is actually meaningful in the classifier. For example, how does this compare to just using a binary classifier that has mixup regularization or other types of heavier regularization along with the optimism?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses. Additionally:
-In the Figure captions (or elsewhere), can you clearly define what each method is? Figure 2, for example, does not have a method called "IVF" so it's unclear which is yours. Also, it'd be helpful to add y-axis labels for all the plots.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not addressed limitations. It would be helpful to include discussions of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you found our paper to be well written and our empirical evaluations to be diverse. We hope that we can change your mind about the potential value of our paper to the HRL community.
> In the Figure 2, can you clearly define what each method is?
We accidentally labeled the IVF approach as “GVF”; we will fix that for the camera-ready if the paper is accepted.
Here is a brief description of the different lines in Figure 2:
1. **Baseline Binary.** Binary classifier used to learn the initiation set. This is what is used in essentially all prior work and is the only baseline method in this figure.
2. **GVF.** Threshold the Initiation Value Function (IVF).
3. **Optimistic GVF.** Threshold the sum of IVF and an exploration bonus.
4. **Weighted.** This is a weighted binary classifier where the weights are assigned by the IVF.
Again, (1) is a baseline; (2), (3), and (4) are our contributions.
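To make line (4) above concrete, here is a minimal sketch of an IVF-weighted binary classifier. Everything here is illustrative rather than the paper's actual implementation: the logistic-regression form, the `ivf` callable, and the weight clipping are all assumptions.

```python
import numpy as np

def fit_weighted_initiation_classifier(states, labels, ivf, lr=0.5, epochs=200):
    """Logistic-regression initiation classifier whose training examples
    are weighted by an Initiation Value Function (IVF).

    states : (n, d) array of option start states
    labels : (n,) array in {0, 1} -- did the option succeed from that state?
    ivf    : callable mapping states -> estimated success probabilities
    """
    w = np.zeros(states.shape[1])
    b = 0.0
    # Weight each example by the IVF, so states the current policy is
    # likely to succeed from count more; the clipping floor is arbitrary.
    weights = np.clip(ivf(states), 1e-3, 1.0)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(states @ w + b)))
        grad = weights * (p - labels)  # per-example weighted logistic gradient
        w -= lr * (states.T @ grad) / len(labels)
        b -= lr * grad.mean()
    return lambda s: 1.0 / (1.0 + np.exp(-(s @ w + b)))
```

In this sketch, passing a constant `ivf` recovers the plain binary-classification baseline of line (1).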
> The paper involves complicated tricks with small improvements. It is unclear which of the tricks added is most important.
We are not sure what the reviewer means here: our proposed methods are simple and they lead to substantial performance increases. Figure 2 aims to ablate the different contributions; are there specific “tricks” whose contribution you are unsure about?
Furthermore, we did not randomly add tricks to see if they increase accuracy/reward. Instead, we introduced modifications specifically designed to address an important, known issue in HRL: the collapse of initiation sets [1, 2].
> It's a bit strange how the authors emphasize the importance of exploiting temporal structure to learn IVF, but then just uses IVF to weight a classifier.
First, we show that thresholding the IVF learns better initiation sets than the baseline binary classification approach, so we do not *just* use the IVF to learn a classifier. Our second contribution is the IVF-weighted classifier, which adds a temporal dynamics-aware target to combine the strengths of the value function and the classification approaches.
In *Montezuma’s Revenge*, the pure IVF and weighted classifier approaches increase accuracy by $50$% and $40$% respectively over binary classification; in *AntMediumMaze*, they are the difference between solving the task reliably ($>80$% success rate) and not solving the task at all ($0$% success rate), so our approaches are certainly worth the effort.
Mixup regularization (or anything similar) is a good suggestion, but it cannot learn temporal structure; TD learning (or temporal bootstrapping in general) is required for that. To show the benefit of using TD for learning the IVF, **we have included a new experiment in the accompanying rebuttal PDF**: Figure B shows that even when a trajectory does not reach the goal, TD unsurprisingly assigns positive credit to states that are near the goal, thereby aiding more sample-efficient learning.
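The same point can be seen in a toy tabular TD(0) sketch (illustrative assumptions only: a small chain of states, a 0/1 cumulant on reaching the goal, and hand-picked step sizes; the paper's experiments use function approximation):

```python
import numpy as np

def td_zero(episodes, n_states, alpha=0.5, gamma=0.9, sweeps=50):
    """Tabular TD(0) with a 0/1 cumulant: reward 1 only when the next
    state is the goal (index n_states - 1). Bootstrapping propagates
    credit backwards through every visited state."""
    V = np.zeros(n_states)
    goal = n_states - 1
    for _ in range(sweeps):
        for ep in episodes:
            for s, s_next in zip(ep, ep[1:]):
                r = 1.0 if s_next == goal else 0.0
                target = r if s_next == goal else gamma * V[s_next]
                V[s] += alpha * (target - V[s])
    return V
```

With one successful episode `[0, 1, 2, 3, 4]` and one failed episode `[0, 1, 2, 3, 0]` on a 5-state chain, TD still assigns positive value to states 2 and 3 visited on the failure, because it bootstraps off values learned from the success; a Monte Carlo estimate computed from the failed episode alone would be zero everywhere.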
## References:
[1] Harb, Jean et al. When waiting is not an option: Learning options with a deliberation cost. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[2] Bagaria, Akhil, et al. "Robustly learning composable options in deep reinforcement learning." Proceedings of the 30th International Joint Conference on Artificial Intelligence. 2021.
---
Rebuttal Comment 1.1:
Title: Any additional questions or concerns?
Comment: It seems to us that your negative review was mostly because of:
- confusion regarding what the different lines in Figure 2 meant
- lack of clarity about the importance of using TD when learning the IVF
- missing discussion of the limitations of our work
We hope that we have been able to resolve these concerns in our rebuttal. If you have any lingering questions, we would be happy to address them in the remaining time. Thanks again for your time and engagement! | Rebuttal 1:
Rebuttal: We are grateful to all the reviewers for their thoughtful reviews and constructive feedback. We are happy to see that all reviewers found our work to be novel and our paper to be well written and clear in its presentation. We hope that our rebuttal helps resolve misconceptions and remaining reservations about our work.
## Misconceptions
Some reviewers seemed to think that the GVF approach in Fig 2 was distinct from the IVF approach in Section 3.1; they are the same – this approach thresholds our Initiation Value Function (IVF). Furthermore, thresholding the IVF is not a baseline, it is one of our two contributions (in addition to the weighted classifier). This is because the prevalent way to use value functions as initiation sets is to threshold the option value function that is also used for action-selection. But as we discussed in Sections 2 and 3.1, that is not a good idea because it either leads to difficulties in policy learning or involves complicated and fragile normalization tricks.
## Common Concern
A common cause for concern was our use of DSC as a skill-discovery algorithm in Section 4.3. We used DSC for two reasons:
- It is a recent option discovery algorithm that performs well in sparse-reward control problems, and it learns options whose subgoals are initiation sets of other options, i.e., its performance is heavily dependent on learning good initiation sets. Because learning high-quality initiation sets is central to DSC, it makes for a good test case for our proposed ideas.
- Furthermore, the experiments preceding that section are meant to evaluate initiation set accuracy directly. This one is to demonstrate initiation set accuracy's effect on downstream learning. Of course that depends on the skill discovery algorithm, of which there are many, so we picked a representative and recent one.
## Limitations of our Work
Reviewers suggested that we discuss limitations of our method: our proposed cumulant for the IVF in Section 3.1 only works for goal-reaching options, which, although quite general, is not universal. For example, if the option’s task is to maximize the velocity of a robot and there is no specific target velocity, then we could not write down a 0/1 cumulant that faithfully describes that subtask. We will add this discussion to the camera-ready if our paper is accepted.
## New Experiments
- Reviewer uZYk asked how option horizon impacts initiation sets, Figure C in the PDF aims to answer this question.
- Reviewer zsHA was unclear about the contribution of TD to the IVF learning problem: Figure B in the PDF shows how TD is exploiting temporal structure for more sample efficient learning.
- Reviewer uZYk asked for visualizations of the initiation sets and the Initiation Value Function (IVF) in FourRooms; this is now included in Figure A of the PDF.
Pdf: /pdf/7cb629c54a81662e94a7303e68cec0bd6c3b54aa.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper discusses three difficulties in effectively learning option initiation sets in reinforcement learning --- these are non-stationarity, temporal credit assignment, and pessimism --- and proposes a method that addresses these difficulties. The proposed method is evaluated empirically in a variety of environments. The results are positive.
Strengths: This paper makes a good contribution on a topic of interest. It is clear and well structured. The algorithmic approach is well motivated and sound. Experimental analysis is conducted on a range of environments and, importantly, includes the use of the proposed method within a specific skill discovery algorithm, with positive results. Overall, I believe the paper is a useful addition to the literature.
Weaknesses: It would be useful to see a discussion on the computational cost of the proposed approach.
The experimental evaluation can be broadened to include a detailed analysis of the utility of the weighting function (equation 1) and the discussion could include whether any other alternatives would be sensible. In addition, it would be useful to see experimental results with the proposed approach in the context of skill discovery methods other than deep skill chaining.
Additional Comments:
In the abstract, and later in the paper, the authors state that they identify and address three difficulties in effectively learning option initiation sets: non-stationarity, pessimism, and temporal credit assignment. In the introduction, I do not see any reference to temporal credit assignment.
In line 201, the bonus reward is the change in the value function. This was explored by Simsek & Barto (ICML, 2006, An Intrinsic Reward Mechanism for Efficient Exploration) earlier than the papers cited here.
In figure 2, the second plot from the right: either the plot title or the label of the vertical axis is wrong. The title is "size of the true initiation set" but the axis labels range from 0.2 to 0.9. The same holds for the last plot in the figure.
The writing is generally well structured and clear but at times repetitive. Reducing the repetition would allow the authors to include in the main paper (rather than the supplementary material) the results on initiation set accuracy and size in the robot manipulation environments, which are informative. If space cannot be found in the main paper for these plots, the results can still be summarised and discussed in the main paper. These plots show key performance variables.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you considered alternatives to the weighting function in Equation 1?
Note: I have read the rebuttal. Thanks to the authors for answering my question and their additional comments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I do not see a discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review. We are glad that you found our paper to be clear and our approach and experiments to be sound.
> It would be useful to see a discussion on the computational cost of the proposed approach.
The computational cost of the IVF approach is very similar to that of the baseline approach of training a binary classifier. The weighted classifier is more expensive since the IVF is used to weight each of the training examples for the classifier. However, it is worth noting that our new approaches lead to faster experiments in Section 4.3: when the agent reaches the goal in a given episode, that episode is shorter than 1000 steps, so the overall experiment finishes in far fewer environment steps than the baseline. In other words, although we add some compute, we drop sample complexity dramatically.
> Have you considered alternatives to the weighting function in Equation 1?
Eq 1 was the simplest way to incorporate the IVF into a weighting function. Experimentally, we verified that the weighting function adapted to a changing policy over time. Finally, the weighted classifier was accurate and performed well in the skill-discovery setting, so we did not feel the need to try more complex weighting functions.
> it would be useful to see experimental results with the proposed approach in the context of skill discovery methods other than deep skill chaining (DSC).
We chose DSC because (a) it shows strong performance in sparse-reward continuous problems and (b) its performance is sensitive to initiation set learning. It is possible that other skill-discovery algorithms would also benefit from better initiation set learning, but we leave that analysis to future work.
> In figure 2, the second plot from the right: either the plot title or the label of the vertical axis is wrong.
We apologize for the confusing axis label: the y-axis is the *normalized* size of the initiation set---it is the fraction of states inside the ground-truth initiation set (described in Section 4.1), so it is a real number between 0 and 1. We will change the axis labels to make this more clear.
> This was explored by Simsek & Barto (ICML, 2006, An Intrinsic Reward Mechanism for Efficient Exploration) earlier than the papers cited here.
You are right, thank you, we will certainly cite this paper in the camera-ready.
Thank you again for the positive review and thoughtful suggestions. | null | null | null | null | null | null |
Neural approximation of Wasserstein distance via a universal architecture for symmetric and factorwise group invariant functions | Accept (poster) | Summary: The authors propose a framework for learning Wasserstein distances and other `SFGI functions'. Two key ingredients in their approach are the characterization of all SFGI functions (a universality theorem), and a sketching mechanism which shows that the size of certain components in the networks need not depend on the number of points in a set, but only on the dimension of the points and the required approximation error. Empirical verification of the value of this approach is presented.
Strengths: The paper presents two nice ideas:
(a) Characterization of SFGI functions (not so surprising in my humble opinion, but still new)
(b) A `sketching' mechanism for the Wasserstein distance which is nice. I think it is worth noting in the paper that the fact that the (pseudo)inverse g of the encoding h can be computed is an advantage over existing encodings which can be proved to be invertible but for which we have no good understanding of how to invert them.
Additionally, the empirical results seem promising.
Weaknesses: The complexity of the proposed sketching algorithm does not depend on the number of points n, but does depend exponentially on the dimension of the points d. This means that the bounds obtained by the paper would improve upon the bounds obtained by fully injective mappings only for specific values of (d,n,epsilon), namely when epsilon^{-d} is small relative to n. Since in applications often d=3 and n is large, this regime is indeed of interest.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Suggestions:
Explain a bit more about why one would learn Wasserstein distances. Mention your results in the appendix which show better inference time than Sinkhorn.
In Theorem 2.1: the output dimension of phi has been considerably improved recently. The dimension can be 2kn+1 rather than (n+k) choose k. See Corollary 2.2 in [1] and Theorem 3.3 in [2] (full refs below). Presumably this can lead to improved complexity in the subsequent theorems you prove which rely on Theorem 2.1.
Line 127: not sure that the infinity norm is well defined, unless you specify that F_1 and F_2 are subsets of the same C(X,R).
Definition of W_p: D should be replaced by D^p. Also the index of w_n' should be w_m'.
Definition 3.1: specify the group G acts on X.
Lemma 3.2 and elsewhere: do you need to specify that f is uniformly continuous (rather than just continuous) if we are working on compact metric spaces?
Line 185: `then we have that'--> then we can choose
Line 191: not clear why the assumption of a topological embedding is strong, given Theorem 2.1. Is it because injectivity is not the same as a topological embedding? Or maybe the issue is the dimensionality?
Line 208: i.e., MLP should be e.g. MLP
Line 223 and Theorem 3.4: I would consider just saying W_p is the distance without introducing d_{fancy X} which is confusing since there is also d_X
I would view the construction in Theorem 3.4 as a smoothed version of one-hot encoding. You might want to mention this connection.
Line 238: M was N previously, better to be consistent.
Line 244: I'm not sure whether d_max was defined? (presumably it is the diameter of X)
The first inequality just under 254 could be explained more: you're basically saying that some matrix is a transport plan between the measures defined by S and g(h(S)).
The second line in said inequality has an explanation in the middle, this could be styled better.
In Corollary 3.5: you do not specify that delta depends on epsilon. The quantifiers should be reordered. Also note that for the f you are interested in, namely the W_p distance, f is actually 1-Lipschitz so you can take delta=epsilon.
Corollary 3.6: mention dependence on dimension not only on epsilon.
Line 354: wasn't completely clear whether this was referring to the permutation+orthogonal problem from the previous line or this was a new problem which I didn't fully understand
[1] Low Dimensional Invariant Embeddings for Universal Geometric Learning, https://arxiv.org/pdf/2205.02956.pdf
[2] Neural Injective Functions for Multisets, Measures and Graphs via a Finite Witness Theorem, https://arxiv.org/pdf/2306.06529.pdf
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Exponential dependence on dimension of points in the set should be mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your very helpful comments! We will accommodate those in our revised manuscript (including mentioning explicitly the dependence on (the intrinsic) dimension). Below we will just mention a couple more major ones.
- Regarding your comment on **exponential dependence on dimension of input points**: Indeed, that is the case. However, note that in general, this is the "intrinsic dimension" of the domain the points are sampled from, rather than the "ambient dimension". For example, if the input points are sampled from a hidden manifold of dimension $m$, then the covering number will depend only on $m$ and the curvature bound of the manifold. One could also get a bound on a collection of manifold pieces. In other words, it really depends on the intrinsic dimension of the hidden domain the points are sampled from, although for simplicity, we use the Euclidean dimension in our paper. We also note that in modern machine learning, we often assume (explicitly or implicitly) that data is sampled from a hidden space of low dimension (e.g., in the manifold hypothesis), even though the ambient dimension might be very high. We will make this point more clear and explicit in the revised manuscript.
- Thank you for your references regarding improved bounds for (multi)set functions. We will include them into the revised manuscript. Indeed, using results from the provided references, the dimension of the latent space in Lemma 3.3 becomes $2\cdot a \cdot m + 1$ and the dimension of the latent space in Corollary 3.4 becomes $2\cdot (a(\delta)) \cdot 2 + 1$.
- Regarding **Line 191**, what we meant is that a topological embedding requires injectivity, while in a sketch, one can collapse input objects as long as after decoding, we obtain an approximated object close to input.
---
Rebuttal Comment 1.1:
Comment: I am happy with the authors' rebuttal. Will retain my rating of 6. Thanks | Summary: The paper introduces the concept of symmetric and factor-wise group invariant functions (SFGI), which are continuous and symmetric product functions on complex objects like point sets and graphs. The authors propose a general neural network architecture for approximating SFGI functions and combine it with a sketching idea to develop a specific and efficient neural network that approximates the p-th Wasserstein distance between point sets. The key contribution is that the model complexity of the proposed neural network is independent of the input point set sizes, which is a novel result in the field. The paper demonstrates the effectiveness of the neural network architecture through empirical evaluations, comparing it with other models including a state-of-the-art Siamese Autoencoder. The proposed architecture outperforms existing models in terms of generalization and training speed. The authors highlight the potential application of their framework in solving various geometric optimization problems.
Strengths: 1. *Novel Contribution*: The paper introduces the concept of symmetric and factor-wise group invariant functions (SFGI) and presents a general neural network architecture for approximating these functions. This contribution addresses the need for distance functions on complex objects that are invariant to various group actions, such as permutation or rigid transformation.
2. *Bounded Model Complexity*: The authors demonstrate that their proposed neural network architecture for approximating SFGI functions has a bounded model complexity. This result is significant because it shows that there exists a neural network with the capacity to approximate the p-th Wasserstein distance without the complexity depending on the sizes of the input point sets.
3. *Integration of Sketching Ideas*: The paper integrates sketching ideas with the general neural network architecture to develop an efficient and specific neural network for approximating the p-Wasserstein distance between point sets. This integration adds a practical element to the theoretical framework and contributes to the overall efficiency of the approach.
4. *Potential Applications*: The authors discuss potential applications of their framework beyond Wasserstein distance estimation. They highlight its potential use in solving a broad range of geometric optimization problems, such as k-means in a metric space.
Weaknesses: 1. *Limited Comparison*: While the paper compares the proposed neural network architecture with a state-of-the-art Siamese Autoencoder and other models, the comparison may not be comprehensive enough. The authors consider only the evaluation of the accuracy of the 1-Wasserstein distance. It would be beneficial to also include the case p=2 to better understand the relative strengths and weaknesses of the proposed approach.
2. *Limited Generalizability*: While the proposed framework shows promising results for approximating the p-Wasserstein distance between point sets, the paper does not extensively explore its generalizability to other types of complex objects or distance functions. Further investigation into the applicability of the framework to a broader range of geometric matching problems would strengthen the overall contribution.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Section 'Weaknesses'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should devote more space to the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your comments!
Regarding your comments on **limited comparison**: Thank you, and we have now also carried out experiments w.r.t. the 2-Wasserstein distance. See our results in the attached PDF. As we reported at the beginning in the general comments to all reviewers, we can see that the improvements over all other ML methods in terms of both accuracy and speed are very similar to those for the 1-Wasserstein distance. Note that for the Sinkhorn distance, there is a tradeoff of time complexity versus accuracy via the regularization parameter $\epsilon$. We choose a sufficiently small $\epsilon$ so that Sinkhorn has high accuracy as shown in Table 1 ($\epsilon = 0.01$). However, as we show in Table 2, its speed is much slower than ours. In fact, if we increase the regularization parameter $\epsilon$ to $0.10$, Sinkhorn is still slower than ours even on small datasets, yet its accuracy is already much worse than that of our neural approximation $\mathcal{N}_{\mathrm{ProductNet}}$. On average, Sinkhorn with $\epsilon = 0.01$ is 20 times slower than our method on datasets with small input point set sizes (100-300) and 80 times slower on datasets with larger input point set sizes (400-600). Note that the gap grows with input size, as the Sinkhorn distance takes quadratic time to compute.
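For reference, the accuracy/speed tradeoff controlled by $\epsilon$ can be sketched with a plain-numpy Sinkhorn iteration (the experiments use an optimized library implementation; this hypothetical version omits log-domain stabilization and convergence checks):

```python
import numpy as np

def sinkhorn_cost(a, b, M, reg, n_iter=1000):
    """Entropy-regularized OT cost between histograms a and b with
    cost matrix M. A larger `reg` (epsilon) needs fewer iterations
    but biases the estimate away from the true Wasserstein cost;
    a smaller `reg` is accurate but slower."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return float((P * M).sum())
```

On two uniform two-point histograms with cost matrix [[0, 1], [1, 0]], `reg=0.01` recovers a near-zero cost (the true W1 is 0), while `reg=1.0` noticeably overestimates it.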
To further address your point regarding limited comparison, we also show the effectiveness of our method for higher-dimensional data. Here we take a single-cell gene expression dataset $P$ consisting of $4441$ cells, each represented by $4000$ genes (i.e., $4441$ points in $\mathbb{R}^{4000}$). We subsample 3000 pairs of point sets of size 20 to 200 for training, and the testing set consists of 300 pairs of point sets of size 20 to 200. As we can see, our neural estimation remains accurate (see the comparison with Sinkhorn). Unfortunately, due to constraints on computational resources and time, we were not able to generate comparisons to other ML methods (WPCE, $\mathcal{N}_{\mathrm{SDeepSets}}$) for RNAseq.
See the attached PDF for the additional experimental results, and we will add these results to the revised manuscript.
Regarding your comments on **limited generalizability**: First, we want to point out that just having an efficient and effective neural model to estimate the Wasserstein distance is itself of great interest, and it has already attracted various past work (in addition to what we cite in the Related Work section of our manuscript, several very recent pieces of work are emerging).
Our response to Reviewer CxcX is also relevant, which for completeness we include here as well:
In the appendix, we showed that the same framework can work for the Hausdorff distance (but with max pooling, similar to PointNet). In general, we think that we might be able to extend these results to geometric matching problems such as the Frechet distance for curves (and variants of the Frechet distance) without much change to the architecture. A more challenging next step is to compute the optimal Wasserstein distance between point sets under rigid transformations: while there are ways to handle rotation invariance (basis invariance, e.g., via the approach of [Lim et al., ICLR 2023]), how to do so with bounded model complexity (for the encoder) is still challenging; but we think this is achievable.
Developing such a neural estimation for graph distances is also very interesting -- with suitable distances for the space of graphs and proper assumptions, one might be able to show the existence of low dimensional sketch, but how to argue that the encoder needed has bounded model complexity is more challenging.
Finally, while it is not stated in the paper, our current framework can be directly extended to compute the Frechet variance, or the average distance to the 1-median of a set of $k$ point sets $P_1, \ldots, P_k$ (see the response to Reviewer 1U84 for the definition of Frechet mean/Frechet variance, or 1-median and the average distance to it). Roughly speaking, the Frechet mean of a set of point-sets can be thought of as their ``center'' minimizing the Frechet variance (which is the total squared distance from the mean pointset to each input pointset).
We will add more discussions in the revised manuscript. | Summary: The goal of this submission is to learn the (Wasserstein) distance between complex objects (e.g. point sets, graphs) within an arbitrary additive $\epsilon$-error.
This paper presents a general neural network architecture for approximating symmetric and factor-wise group invariant (SFGI) functions.
The proposed SFGI neural network could achieve universality with the number of parameters independent of the input point set size.
The experiment results show the proposed SFGI neural network could approximate the (best or comparable) Wasserstein distance between point sets.
Strengths: This paper presents a general neural network framework for approximating the Wasserstein distance between point sets.
The size of the proposed universal neural network approximator is independent w.r.t. the input size, which is new and important. The above advantage is demonstrated in Table 2 in the main paper.
The proposed neural network approximator is simple but may inspire dozens of downstream network designs.
The presentation is clear. Well written.
The code is provided.
I haven't checked the supplementary carefully.
Weaknesses: The proposed neural network uses sum-pooling so that the parameter size is independent of the input point-set size. I am looking forward to seeing more detailed discussions of it, e.g., how could the proposed network deal with varying input point-set sizes (which is common in many applications)?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Table 1, in the 'uniform' row, it seems Sinkhorn achieved a better Wasserstein distance estimate than the proposed model, so the highlight appears to be wrong.
Any insights for different distances and different complex objects (e.g., graphs)? Should the proposed model change accordingly?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your comments!
Regarding your question on the handling of **point sets of varying sizes** (from your comment ``I am looking forward to seeing more detailed discussions of it ...''): First, in our theoretical results, the fact that the parameter size (model complexity) is independent of the input point-set size essentially follows from the constructive proof of Theorem 3.4. The proof shows not only that a latent space of dimension independent of $n$ exists, but also that this can be achieved by a bounded set of functions $h_i$ (forming the encoder), for $i\in [1, a]$, each of which has a simple form and will later be replaced by a small MLP of constant size in our final neural model $\mathcal{N}_{ProductNet}$.
Note that a normalization is happening as the input is a weighted point set with total weight $1$ -- one can think that if there are $n$ input points in a point-set, each point receives weight $1/n$.
While we assume a maximum size of input pointsets for simplicity in formulating and stating our results, our final neural model can handle input pointsets of any size (similar to DeepSet, PointNet, or message-passing GNNs). Indeed, in our experiments, we often train on point-sets of small sizes (100 to 300 points) and show that the final model generalizes very well to much larger point sets unseen during training (see the second row for each dataset in Table 1): we tested up to around 2000 points, and since the accuracy degrades only very slightly with larger point sets, we expect it to perform similarly for much larger sizes as well.
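As a toy illustration (a minimal NumPy sketch, not our actual $\mathcal{N}_{ProductNet}$ implementation; all dimensions here are arbitrary), normalized sum-pooling over per-point features yields a fixed-size latent vector regardless of the input size $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed-size parameters: a tiny per-point "encoder" MLP (dimensions arbitrary).
W1 = rng.normal(size=(3, 16))   # input dim d = 3 -> hidden 16
W2 = rng.normal(size=(16, 8))   # hidden -> latent dim a = 8

def encode(points):
    """Each of the n points gets weight 1/n; sum-pooling then produces a
    latent vector whose size is independent of n."""
    n = points.shape[0]
    h = np.maximum(points @ W1, 0.0) @ W2   # per-point features, shape (n, 8)
    return h.sum(axis=0) / n                # pooled latent, shape (8,)

z_small = encode(rng.normal(size=(100, 3)))    # a 100-point set
z_large = encode(rng.normal(size=(2000, 3)))   # a 2000-point set, same model
```

The parameter count (here just $W_1, W_2$) is fixed while $n$ varies, which is why the same trained model can be evaluated on point sets far larger than those seen during training.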
We will make these points more clear in our revised manuscript.
Regarding ''Any insights for different distances and different complex objects (e.g., graphs) ...'': That is a very good question. In the appendix, we showed that the same framework can work for Hausdorff distance (but with max pooling, similar to PointNet). In general, we think that we might be able to extend these to geometric matching such as Fréchet distance for curves (and variance of Fréchet distance) without much change of the architecture. A more challenging next step is to compute optimal Wasserstein distance between point-sets under rigid transformations: While there are ways to handle rotation invariance (basis invariance, e.g., via the approach of [Lim et al., ICLR 2023]), how to do so with bounded model complexity (for the encoder) is still challenging; but we think this is achievable.
Developing such a neural estimation for graph distances is also very interesting -- with suitable distances for the space of graphs and proper assumptions, one might be able to show the existence of low dimensional sketch. However, arguing that the encoder needed has bounded model complexity is more challenging.
Finally, while it is not stated in the paper, our current framework can be directly extended to compute the Fréchet variance or the average distance to 1-median of a set of $k$ pointsets $P_1, \ldots, P_k$ (see the response to Reviewer 1U84 for the definition of Fréchet mean/Fréchet variance, or 1-median and average distance to it). Roughly speaking, the Fréchet mean of a set of point-sets can be thought of as their ''center'' minimizing the Fréchet variance (which is the total squared distance from the mean-pointset to each input pointset).
We will add more discussions in the revised manuscript. | Summary: The paper proposes a neural network architecture that efficiently approximates $p$-Wasserstein distance between point sets. This is achieved by exploiting the networks' capability to approximate functions that are symmetric and invariant to group actions componentwise. The highlight of the model is that its complexity remains independent of the input sample sizes, improving generalization capabilities. Supporting empirical evidence also showcases the time efficiency of the approach compared to SOTA architectures.
Strengths: The paper is well-written, with ample theoretical discussion regarding the problem and sound proofs. The empirical evidence provided supports the theoretical claims to a fair extent.
Weaknesses: The dimension of the range of functions, used to approximate uniformly continuous and permutation invariant maps $f(A, B)$, turn out to be independent of the size of input points, rather depend on the covering number of the input space. Under the assumption of compactness, this becomes straightforward as a finite cover is ensured. As such, the real challenge lies in showing the result under unbounded domains, perhaps with decaying tail conditions (e.g. sub-Exponential) on input distributions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is meant by '$1$-median' of a collection of point sets? [Line 39] Also, specify the notation $S$ in Definition 3.1.
2. Shouldn't it be $D_{i,j}=d_{X}(x_{i},x'_{j})^{p}$ instead in Line 153?
3. In dimensionality reduction or embedding problems it is often difficult to find continuous encoding maps. For example, the very problem of Autoencoders is mostly dedicated to ensuring that $d_{\mathcal{X}}(g \circ h(\cdot), \cdot) <\delta$. In that light, the assumption that sketches readily exist seems quite strong (as in Lemma 3.2). The paper itself expresses the concern in Line 287. Perhaps the authors can provide some examples to justify. This seems crucial since in most results regarding embedding onto Euclidean spaces (due to Johnson-Lindenstrauss, Bourgain, etc.) the optimal embedding dimension under a given level of distortion depends on the input sample size (cardinality of the finite input space).
4. The Wasserstein distance operates as a metric on the class of probability distributions defined on $X$ (say, $\mathcal{P}(X)$). So, is it perhaps more appropriate to define $\mathcal{X}$ as the set of weighted Dirac deltas corresponding to points from $X$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has no immediate negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
Regarding your main comment that "Under the assumption of compactness, this becomes straightforward as a finite cover is ensured.", we would like to clarify the following:
Note that there are **two spaces** involved in our problem. The first space $\Omega \subset (X, d_X)$ is the metric space from which input point sets are sampled -- in particular: we take $X = \mathbb{R}^d$, metric $\mathrm{d}_X$ to be the standard Euclidean distance $\| \cdot \|_2$, and $\Omega$ to be a compact set in $\mathbb{R}^d$.
The second space, denoted by $(\mathcal{X}, d_{\mathcal{X}})$ is the space of weighted point-sets (of maximum cardinality $N$, where points are from $\Omega \subset (X, d_X)$). This space, $(\mathcal{X}, d_{\mathcal{X}})$, is a factor of the metric space where our product function $f(A, B)$ is defined on; that is, $f: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. In our setting, if input points are from $\Omega \subset \mathbb{R}^d$, then $\mathcal{X} = \Omega^N \subset \mathbb{R}^{N d}$, and note that the metric $d_{\mathcal{X}} = W_p$, the $p$-th Wasserstein distance for point-sets.
While $\mathcal{X}$ is compact when $\Omega$ is compact, note that its dimension is $Nd$, which depends on the maximum size of an input point-set. We cannot directly use a bound on the covering number of this space $\mathcal{X}$ (which would naively depend on its dimension $Nd$) to define a model, as we want the number of parameters to be *independent* of the maximum input point-set size $N$. Instead, our Theorem 3.4 shows that when the metric for $\mathcal{X}$ is $d_{\mathcal{X}} = W_p$ (the $p$-th Wasserstein distance), we can find a sketch whose latent dimension depends on the covering number of $\Omega$.
Our notations of $(X, d_X)$ and $(\mathcal{X}, d_{\mathcal{X}})$ might have caused confusion. We will update them to $(\Omega, \ell_2)$ and $(\mathcal{X}, W_p)$ in the revised manuscript to make the distinction more clear.
Furthermore, the existence of a low-dimensional latent space **does not** necessarily imply that one can approximate the map from the original domain to this latent space via a model of bounded complexity.
To this end, note that our proof of Theorem 3.4 is constructive: the construction has a specific form using a bounded number of functions $h_i$ (for $i\in [1, a]$), each of which has a simple closed form and can be replaced by an MLP of complexity independent of the input point-set size.
Regarding "What is meant by '1-median' of a collection of point sets?":
Given a collection of $k$ objects $\{ Y_1, \ldots, Y_k\}$ sampled from a metric space $(\mathcal{Y}, d_{\mathcal{Y}})$, its {\it 1-median}, or geometric median, is defined as
$$ Y^* = \operatorname{argmin}_{Y} \sum_{i=1}^k d_{\mathcal{Y}}(Y, Y_i),$$
which, intuitively, is the center of these $k$ objects. For example, if each $Y_i$ is a point set, one can think of $Y^*$ as their average point-set. In our sentence (line 39), we in fact refer to the total distance to the 1-median (or, one can think of this as the cost of the 1-median), namely,
$$ \mathrm{cost}(Y_1, \ldots, Y_k) = \min_Y \sum_{i=1}^k d_{\mathcal{Y}}(Y, Y_i) = \sum_{i=1}^k d_{\mathcal{Y}}(Y^*, Y_i).$$
If we use the squared distance $d_{\mathcal{Y}}^2(Y, Y_i)$ in the above definitions, then we obtain the 1-mean (sometimes also called the Fréchet mean), and its corresponding cost is the Fréchet variance. Both the 1-median (and its associated cost) and the 1-mean (and the associated Fréchet variance) are commonly used in statistical analysis of collections of objects $Y_i$ (e.g., computing the mean / variance in the space of pointsets). While we didn't explicitly state it in the paper, our results / architectures also hold for computing the cost of the 1-median or Fréchet mean of a fixed set of pointsets sampled from a compact set in $\mathbb{R}^d$. In general, there is no closed-form solution for the 1-median or 1-mean in complex spaces, and they are often computed empirically by solving a non-convex optimization problem.
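To make the 1-median concrete in the simplest setting (ordinary points in $\mathbb{R}^d$ under the Euclidean metric, rather than point-sets under $W_p$), the classical Weiszfeld iteration computes it empirically. This is an illustrative sketch, not part of our framework:

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-9):
    """Iteratively re-weighted averaging converging to the geometric (1-)median."""
    y = points.mean(axis=0)   # initialize at the 1-mean (Fréchet mean)
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)   # guard against division by zero
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
median = weiszfeld(pts)   # pulled toward the cluster of three points
mean = pts.mean(axis=0)   # dragged toward the outlier at (10, 10)
```

By construction the 1-median minimizes the total distance, so its cost is never larger than the mean's; the same objective, with the Euclidean distance replaced by $W_p$, defines the 1-median of a collection of point-sets.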
Regarding your comment "In dimensionality reduction or embedding problems ...": We note that having a sketch for the space of point-sets (under Wasserstein distance or some other distance) is different from standard dimensionality reduction or embedding, which often aims to preserve the distance metric in the latent space. In our case, we only need the latent space to provide sufficient information so that the Wasserstein distance can be approximated **after decoding**. There is no distance preservation in the latent space.
At least for the case of Wasserstein distance, it turns out that it is sufficient to achieve this by mapping to $\mathbb{R}^p$ where $p$ depends only on the covering number of the input domain. A low covering number (and by extension, a lower dimension for the latent space $\mathbb{R}^p$) can be obtained if the domain $\Omega$ has low ``intrinsic dimension'', such as a low-dimensional Euclidean space $\mathbb{R}^d$ in the linear case, or a hidden non-linear manifold with low intrinsic dimension $d$ and bounded curvature (and the ambient dimension could be high), or a collection of constant number of manifold pieces.
In our paper, we use compact subsets of Euclidean space $\mathbb{R}^d$ for simplicity, but our results can be extended to points sampled from a hidden manifold of dimension $m$ with bounded curvature -- in which case the covering number is also bounded (e.g., using results of [Roër 2013]).
We note that the existence of a low-dimensional hidden structure in data is an important assumption for modern machine learning in many settings. For instance, while the space of natural images has very high dimension (size of an image), the intrinsic dimension of the hidden space where real images lie is far smaller.
Thank you for your suggestion of defining $\mathcal{X}$ as weighted Dirac measures. We will incorporate that in the revision.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I appreciate the detailed response by the authors. Taking into account the other reviews and corresponding rebuttals, I stick to my initial evaluation. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments and feedback.
We are happy that reviewers appreciate our contribution of a simple neural network model (constructed based on our theoretical insights) which can estimate Wasserstein distance accurately with bounded model complexity.
Indeed, identifying the right neural model for a given problem, especially for an optimization problem, is not often an easy task and the role of representation learning is crucial, especially for complex objects (point-sets in our case).
At a high level, we are interested in problems relating to geometric matching of complex objects, and we focus on the Wasserstein distance between point-sets in this paper.
Our theoretical analysis implies that a simple neural model consisting of a Siamese architecture **followed by MLP layers at the end (which is crucial)** turns out to be universal with respect to the Wasserstein distance problem and the model complexity is independent of input point sets (see Corollary 3.6 and Figure 1).
Empirically, we emphasize that our neural model outperforms both Sinkhorn and SOTA ML-based approaches for Wasserstein distance computation in terms of **both accuracy and speed**, and also generalizes significantly better than SOTA ML-based approaches.
To further demonstrate the effectiveness of our model, as suggested by Reviewer xMX9, we also:
(1) carry out experiments evaluating the accuracy of our approximation for $W_2$, i.e., $p$-th Wasserstein distance for $p = 2$; and
(2) further tested our accuracy on a high dimensional real RNAseq dataset (in 4000 dimensions), which are gene expression data.
See the attached pdf for results.
For the case of the 2-Wasserstein distance, Table 1 in the pdf shows a similar trend to the 1-Wasserstein distance results originally reported in our paper. In fact, the improvement in accuracy over other ML models is even bigger.
Note that for the Sinkhorn distance, there is a tradeoff between time complexity and accuracy via the regularization parameter $\epsilon$. We choose a sufficiently small $\epsilon$ so that Sinkhorn has high accuracy, as shown in Table 1. However, as we show in Table 2, it is much slower than ours. In fact, if we increase the regularization parameter $\epsilon$ to $0.10$, Sinkhorn is still slower than ours even on small datasets, yet its accuracy is already much worse than our neural approximation $\mathcal{N}_{ProductNet}$. On average, Sinkhorn with $\epsilon = 0.01$ is 20 times slower than our method on datasets with small input point-set sizes (100-300) and 80 times slower on datasets with larger input point-set sizes (400-600). Note that the gap increases with input size, as the Sinkhorn distance takes quadratic time to compute.
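To illustrate the $\epsilon$ tradeoff described above, here is a minimal self-contained NumPy sketch of the standard Sinkhorn iteration (not the exact implementation we benchmarked; the point sets and sizes are arbitrary):

```python
import numpy as np

def sinkhorn_cost(a, b, M, eps, iters=1000):
    """Entropy-regularized OT: smaller eps approaches the exact Wasserstein
    cost but requires more iterations (the accuracy/speed tradeoff)."""
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return (P * M).sum()

rng = np.random.default_rng(0)
x, y = rng.random((50, 2)), rng.random((50, 2))      # two point sets in [0,1]^2
M = np.linalg.norm(x[:, None] - y[None, :], axis=2)  # pairwise cost matrix
a = b = np.full(50, 1 / 50)                          # uniform weights (1/n per point)
loose = sinkhorn_cost(a, b, M, eps=0.5)              # fast, but blurred plan
tight = sinkhorn_cost(a, b, M, eps=0.05)             # slower, closer to exact W_1
```

With larger $\epsilon$ the transport plan is closer to the independent coupling, so the reported cost overshoots the true Wasserstein distance; shrinking $\epsilon$ tightens the estimate at the price of more (and numerically more delicate) iterations.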
We also point out that a neural approximation of Wasserstein distance itself, or more generally, optimal transport, is interesting on its own and has already attracted extensive past work (see our related work section), as well as recent new work.
In our Supplementary material, we show that our neural model can be easily extended to Hausdorff distance. Interesting future work will include extending the model to estimate Fréchet distance (or variants) for curves, or optimal Wasserstein / Hausdorff distance under rigid transformations.
Pdf: /pdf/dc98a8614bc4d09665a8fcd051cf1e4c89365f9b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning to Receive Help: Intervention-Aware Concept Embedding Models | Accept (spotlight) | Summary: The intervenability of concept bottleneck models has been taken for granted thus far. There have been numerous methods that tried intervening or propose policies to intervene. However, previous methods did not go so far as to optimize models for interventions. This work proposes intervention-aware training and extends Concept Embedding Models for this purpose. With experiments across MNIST, CUB, and CelebA, the authors demonstrate that IntCEM achieves on-par downstream performance compared to CEMs, and investigate the intervention performance with two sets of automated studies.
**After the rebuttal:** I read and acknowledged the authors' rebuttal. I am leaning towards acceptance after the rebuttal, given that the authors clarified my questions. I find the paper's message interesting and the presentation clear and well-written; the reason I am not giving a higher score is the lack of more complex, real-world settings to test the method, and of settings where the models do not need concept annotations at training time.
Strengths: 1. Optimizing a model for intervention awareness is an interesting and strong idea. Many works did indeed attempt interventions, whereas it is infrequent to optimize the model for this purpose and the intervenability has been taken for granted. I do have a question about whether the current form of this is useful (Q1), yet I still believe the idea is worth sharing with the community.
2. I believe having an intervention policy has implications beyond interventions per se. By looking at the Rollout Loss, I could imagine the policy revealing locally important / less important concepts and performing selective inference. While this is a hypothesis that needs verification, this could be an interesting direction to think about.
3. (Minor) I appreciate the clarity in the writing. Formalization is clear and easy to digest even though there are nuanced tricks to make the method work.
Weaknesses: 1. In my understanding, there is a disconnect between the promise/message of the paper and the experiments. Concretely;
- 1.1 In many parts of the paper, there is a theme around getting the models ready for expert feedback, which can make sense. However, I do not fully understand how we can infer that the current method achieves something related to a user or an expert.
- 1.2 In this regard, a major weakness is the lack of a user study. There are many arguments in the paper that touch upon optimizing the model for user interventions, whereas there are no experiments that verify this argument. An experiment where the users used these models, intervened on them, and IntCEM performed better than baselines would support the argument here.
- 1.3 Furthermore, the selection policy encodes a certain belief: That the users will choose concepts that would improve downstream performance. An alternative user metric would be showing that users prefer the concepts chosen by the selection policy. This may or not may not be the case. One could have also hypothesized that the policy can pick concepts that the users would also like to intervene on more than other policies, but again, there is no supporting evidence for this argument either.
In general, I believe this is an important disconnect that needs addressing.
2. The usability of the method suffers from the requirement to have concept annotations at training time, as do vanilla CBMs and CEMs. However, there are still concept-based models that could be worth exploring in this context that are already cited in the paper as [26, 27] which are Post-hoc CBMs and Label-Free CBMs that also scale to ImageNet/COCO. The authors make the argument that the method would complement those architectures as well, and I can believe this hypothesis, but this needs verification.
3. The above weakness also leads to another one, which is the lack of experimental settings closer to the real-world such as ImageNet, COCO, or medical datasets that were tested before in the above papers. I do think experiments would benefit from expanding beyond CUB and CelebA. Personally, I believe this would also allow impactful user studies.
4. It is hard for me to interpret the numbers in Sections 4.2 and 4.3. This is also connected to the 3rd point above – but the improvements in MNIST or CUB seem 1-2%. Further, as a minor comment, while there are Figures, all of the y-axes are in different scales, and I cannot really tell the numbers.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In terms of the story, while the premise of intervention awareness sounds interesting at face value, I still think the case needs to be made for such a need. I believe it could help to give a concrete use case before laying out the method. E.g. L118-119 authors mention an interesting example, “may be hard for the model to predict on its own”. Can authors make it more concrete, give real world examples / formalize this? When would it hurt to not be intervention aware? When would we benefit from being intervention aware?
2. I would like to ask the authors’ reasoning about the first weakness I raised in the Weaknesses section. If intervention awareness is indeed to improve the interaction between the model and the human, how should we think about the current results? Is there a way to deduce that models are more ready for humans? Can we infer that humans would prefer this notion of intervention awareness over previous methods (or not having it)?
3. In my understanding, according to the loss function after L194, the selection policy $\psi$ is optimized to select the concept that would increase the probability of the ground-truth label being predicted the most, upon the intervention. Would this not bias the selection policy towards more typical interventions for the class (e.g. where $P(Y|C)$ is high)? For instance, say that a bird specie X is 95% of the time red but 5% of the time blue, whereas other birds are more often blue. Then, wouldn’t this loss function incentivize the selection policy to pick the color concept for blue X birds?
4. Similar to the above, while Section 4.3 studies the performance improvement from the policies, it is not clear what these policies pick. Do the authors have any analyses that explain what the chosen concepts are? For instance, are these the concepts that are specific to a class? Are these the most common concepts for the class, are there any other patterns there?
5. (Minor) I would appreciate having the Tables in addition to the figures, I’m sorry if I missed them but I could not find them in the Appendix either. Numbers could be hard to read from the Tables.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the most significant limitation is the lack of an actual real-world study to verify the claims. In my understanding, it is not possible to infer this from the current results. I would appreciate hearing the authors' reasoning about this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 4KHx,
Thank you for your feedback and valuable questions. We appreciate that you found our work novel and clear. Below we answer the questions you raised.
### Message of the paper and experiment design
We believe there may be a misunderstanding of the methodology underlying our paper, particularly when seen in relation to prior work. IntCEMs are designed to better uptake user/expert feedback at test time (i.e., via “concept interventions”). However, we do not change how we think of concept interventions compared to prior work; intervention awareness is simply – yet powerfully – a training-time change that models that support concept interventions (in particular embedding-based models) can make use of to improve their receptiveness to interventions. We evaluate concept interventions in the same way as prior works in this area (e.g., [1-5]). The underlying assumption of our and their evaluation is that an expert can correctly answer a query of the form “What is the correct binary value of concept X?”; our evaluation is just a way to simulate such queries when we know the ground truth concepts. While there are important grounds for questioning these assumptions (see [5]) – in this first instantiation, we believe our experiments are substantial towards demonstrating the potential value of IntCEMs and follow the way the community has been evaluating concept interventions.
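Concretely, the simulated-intervention protocol used in these works replaces predicted concept values with their ground-truth values, one concept at a time, and re-evaluates the label predictor. A toy sketch (with a made-up linear-threshold label head, purely for illustration) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 6
c_true = rng.integers(0, 2, size=(n, k)).astype(float)
y_true = (c_true.sum(axis=1) > k / 2).astype(int)   # toy label derived from concepts
flip = rng.random((n, k)) < 0.3                     # simulate 30% concept prediction errors
c_pred = np.where(flip, 1 - c_true, c_true)

def label_acc(c):
    """Accuracy of a toy threshold 'label head' applied to concept values."""
    return (((c.sum(axis=1) > k / 2).astype(int)) == y_true).mean()

accs = []
c = c_pred.copy()
for j in range(k):          # a fixed intervention order: concept 0, 1, ...
    c[:, j] = c_true[:, j]  # the simulated expert supplies the true value
    accs.append(label_acc(c))
# Once every concept has been corrected, label accuracy is exactly 1.0.
```

An intervention policy such as $\psi$ changes only the order in which concepts are corrected (and hence how quickly the curve `accs` rises), not the protocol itself.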
### Selection policy
We emphasize that there is an important difference between what a policy may suggest and what the user may opt to intervene on. There is no assumption in our work dictating that a user must provide a concept label when requested by $\psi$. In fact, in Section 4.2 we show that our model significantly outperforms competing state-of-the-art methods even in the case where a user randomly selects which concepts to intervene on. Nevertheless, we agree that one may want to encode a user’s preferences into the intervention policy. This is an exciting direction and one we are keen to explore next with user studies (we will mention these two in a new future work section as suggested by reviewer 23Ef).
We highlight, however, that in this work we operate under the reasonable and common assumption that expert queries can be asked and an expert will answer them correctly. This is no different to assumptions made in the active learning and active feature acquisition communities, both popular and mature fields of study.
### Intervention policy and concept typicality
Notice that our learnt policy is dynamic (i.e., it is a function of the current sample and the previous interventions) rather than static (i.e., the concept intervention order is fixed a priori). As such, IntCEM's policy $\psi$ does not pick the same sequence of concepts for all samples (see Figure 1 in the rebuttal supplement for an example). Hence, it would not necessarily always pick the colour concept if the IntCEM is, say, very confident that the input sample is blue.
### Extension to other kinds of concept models and datasets
As you correctly mentioned, the principles underlying IntCEMs could be applied to other models. We include details on possible extensions for traditional CBMs in our Appendix (see A.8) and analyse those results in our discussion section. As for even more recent concept-based architectures such as post-hoc CBMs and label-free CBMs that are applicable to larger-scale datasets lacking complete concept annotations (e.g., ImageNet and COCO), we learnt about them at ICLR 2023 only a couple of weeks before NeurIPS' deadline. This near overlap between their publication and the NeurIPS paper submission deadline meant that we could not sensibly include them as part of our experiments and that these works fell within the two-month NeurIPS baseline exclusion period. Because of this, we focused on evaluating variants of CBMs and CEMs on datasets with a complete set of concept annotations across all samples. Nevertheless, we find these works very interesting and exciting, which is why we suggested as part of future work that one may explore applying IntCEM's core design principles to these sorts of models. Further verification is sensible for future work, but we do not think this is essential for affirming the validity of our intervention-aware design. Finally, note that, unlike CBMs, IntCEM does not require a complete set of concept annotations because of its use of embeddings.
### IntCEM improvements and presentation of results
We agree that the scales in our figures might make exact values hard to determine when analysing the gains obtained with IntCEMs. To address this, we have followed your suggestion and made a simplified table version of Figures 3 and 4 that we will include in a new appendix. Due to lack of space, we show only the table corresponding to Figure 3 (and only for a fixed set of fractions of intervened concepts) in our attached supplement as Figure 1.
Looking at this new table and Figure 3, we see a significant difference between IntCEMs and CEMs across all datasets once interventions are performed. For example, in CUB we observe a difference of 6.5% absolute improvement for IntCEM vs CEM when randomly intervening on 50% of the available concepts while in CelebA we observe a difference of ~13.20%. This table also shows that by intervening following IntCEM’s learnt policy $\psi$, IntCEM can achieve nearly perfect accuracy (99.17%) after intervening on 50% of the concepts. This boost comes with negligible test time computational costs and without the need to (re)learn any further policies.
## References
[1] Koh et al. "Concept bottleneck models." ICML 2020.
[2] Kazhdan et al. "Now you see me (CME): concept-based model extraction." arXiv:2010.13233 (2020).
[3] Espinosa Zarlenga et al. "Concept embedding models" NeurIPS 2022.
[4] Chauhan et al. "Interactive concept bottleneck models." AAAI 2023.
[5] Collins et al. "Human Uncertainty in Concept-Based AI Systems." arXiv e-prints (2023).
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I thank the authors for their detailed rebuttal.
> Message of the paper and experiment design
Thank you for the discussion. I do believe now I better understand the notion of intervention here.
> Policy
Thank you for the clarifications. Of course, I see that the policy is dynamic and not a static one. My typicality question was a very simple and empirical one - in practice what concepts end up being intervened on, and are they simply the typical concepts? This should be a simple statistic compute, if I am not mistaken. If the answer is yes, I think this has important implications.
> Extension to other kinds of concept models and datasets
Thank you for this discussion. I find the argument authors raised to be completely fair. Due to the overlap in the exclusion period and moving forward I will not consider the inclusion of P-CBM or Label-Free CBM results. I would still find it informative to add a (short) discussion of this in the limitations. Similarly, I still find the lack of larger datasets to be a limiting factor to demonstrate how much the method would transfer to real world problems.
> IntCEM improvements and presentation of results
Thank you for creating this table.
> Overall Comment
I thank the authors for an informative rebuttal. I find several of the authors' arguments convincing and will revise my score accordingly. I would still appreciate if it were possible for the authors to answer the simple question I asked above.
To clarify, while I am leaning towards an accept position and find the ideas interesting, the reason I do not increase my score further is due to the lack of real-world settings (larger datasets, more complex problems that would clarify that results transfer beyond MNIST/CUB) and user studies which would strengthen the significance of the work.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your reply and for leaning towards acceptance after our rebuttal. We really appreciate the time taken to reply to our rebuttal and your careful answer. Below we reply to your specific concerns. Please let us know if there are any further concerns and/or questions after our replies below.
### Typicality
Apologies on our end for misunderstanding your concern. We now see what you mean, and we agree with you. Empirically, we observe that certain concepts are selected with a higher probability at earlier steps of rollouts of our policy $\psi$. For example, we observed that "breast pattern" and "wing shape" were two concepts that were often selected in the first few steps by $\psi$ for the IntCEM we used to generate the plots in Figure 1. Intuitively, these seem to be highly informative concepts for some underrepresented classes. This is indeed an interesting observation, and we will discuss it in Section 4.3 while also including a histogram showing these statistics as part of the appendix in our updated manuscript. Nevertheless, for the sake of fairness, we are not allowed to include this histogram as part of this reply, and therefore we apologize that we cannot provide this figure at this moment.
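The per-concept selection statistic discussed above is indeed cheap to compute; as a hedged illustration (the `policy_rollouts` input, a list of concept-index sequences chosen by $\psi$, is our assumption and not taken from the paper's codebase), one could count how often each concept is picked in the first few intervention steps:

```python
from collections import Counter

def early_selection_frequency(policy_rollouts, first_k=3):
    """Count how often each concept index appears within the first
    `first_k` intervention steps across rollouts of a policy."""
    counts = Counter()
    for rollout in policy_rollouts:
        counts.update(rollout[:first_k])
    return counts

# Toy rollouts: concept 4 is picked early in every rollout.
rollouts = [[4, 1, 7], [4, 2, 5], [3, 4, 1]]
freq = early_selection_frequency(rollouts)
assert freq[4] == 3 and freq[1] == 2
```

A histogram of such counts is exactly the kind of statistic the reviewer asked for.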
### Large datasets and user studies
Our evaluation, consisting of studies on task performance, concept performance, receptiveness to different intervention policies, and a study of our own learnt intervention policy across five tasks (three of which are real-world datasets), unfortunately left us with very little room for adding even more baselines, datasets, or well-crafted user studies. Nevertheless, we hope that the publication of our method in an easy-to-use PyTorch-based installable library will enable future research to be carried out easily regardless of the task or setup. As you suggest, however, we will make sure to include a short discussion of the need for future larger-scale studies, as well as user studies, in our limitations subsection. | Summary: The authors proposed a modified version of CEM that is better at receiving human test-time interventions by explicitly incorporating interventions into the training stage. Specifically, an intervention prediction module is trained to behavior-clone an optimal-greedy intervention policy (Skyline). The CEM is trained to optimize for the correctness of the initial guess as well as the prediction after the predicted intervention. The idea of the method is simple and straightforward. Extensive experiments explore design choices and show strong results in practical settings.
Strengths: * The paper is well-written. Extremely easy read and all concerns I have about the method are covered in experiments.
* Simple and effective idea anchored on the premise that models not explicitly trained to receive intervention might not handle intervention well (empirical support in Fig 3, 4).
* The questions posed in P6L221 - 230 effectively guide the reader through the work and the authors' thought process.
* The authors care about how this method could realistically be applied in real-world settings.
* Heuristics for tuning hyperparameters for the losses are provided.
* Considers realistic settings of applying IntCEM where human-intervention might not be greedy-optimal and show empirically that IntCEM still performs better than baselines under non-optimal interventions (even random intervention).
* Observes desirable properties for application in the real world (P7L265).
Weaknesses: * The only weakness is lacking human study but this could be conducted rather straightforwardly. Given the extensive study on intervention policies, I believe this to be merely nitpicking.
* This is obviously a work that would induce plenty of follow-up work. Perhaps adding a section for what types of design choices have not been explored yet (e.g. other RL methods besides BC) could help stimulate the research community.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * How severe is information leakage for CEMs? This is tangential to this work's contribution, but since this work is based on the CEM framework, perhaps the authors could comment on whether IntCEM succeeds because of leakage of downstream task information into the concept embeddings?
* Typo:
* P2L39 artefact -> artifact
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 23Ef,
Thank you so much for the very encouraging review and the valuable feedback and suggestions. We are glad you found our work very easy to read, well-motivated, and friendly for reproduction/real-world use. Below we discuss some of the points you raised in your review, including answers to your questions (which led to what we believe are some important new additions to our manuscript!).
### Section detailing future work
We agree this is a useful addition. We will include this section in our updated manuscript outlining some of the future directions we have discussed in our rebuttal with you (e.g., user studies and a better understanding of leakage) and the other reviewers (e.g., evaluation of large-scale concept-based models).
### Lack of user study
Please see our global rebuttal reply for a reply to this concern.
### Information leakage and IntCEMs
We believe that your intuition on how leakage may play a part in IntCEM is correct and we present some evidence in favour of it in our global reply for this rebuttal. These results will be included as a new appendix in our updated manuscript.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for the response. I do agree that information leakage may or may not be (un)desirable and explicitly stating this in the paper is a good way to inform readers about the potential catch with this method working. | Summary: The authors in this paper proposed a novel method of improving test-time interventions for Concept Bottleneck Models (CBMs). Although many CBM works showcase their ability to do intervention, none of them explicitly motivate the learned model to do well on intervention during training phase, hence hindering their ability and reliability to do intervention correctly. This work builds on top of concept embedding models (CEMs) and constructs an additional probability distribution (parameterized by neural networks) to learn what concepts to intervene on. To learn this distribution, the authors proposed a composite of three losses, where the first loss (Rollout loss) follows a greedy strategy of selecting concepts that yield the maximum probability of the given class, the second (task prediction loss) penalizes wrong predictions with many interventions, in which the penalty scales with the number of intervention performed. Finally, the third loss concept loss is just binary cross entropy to promote the correctness of the concept prediction. Empirical results on MNIST and CUB (and their variants) show that the proposed formulation performs competitively against state-of-the-art CEMs/CBMs that also utilizes interventions to improve performance. The authors also argued that the sampling from the learned distribution is much more scalable than current state-of-the-art methods.
Strengths: The paper is well-written and organized in a chronological and clear manner. The introduction sufficiently motivates the problem formulation. The background and related work are written neatly so that the paper is sufficiently self-contained. The description of the method and notation is also clear and easy to follow. The experiments have clear objectives, and the empirical results clearly demonstrate the effectiveness of the method.
Given that CBMs are a very popular topic, and that there is a lack of work in the current field focusing on improving test-time interventions in CBMs, I consider the formulation proposed in this work to be significantly novel. It definitively addresses a limitation of CBMs/CEMs and proposes a good formulation that addresses that limitation.
Weaknesses: There are no major weaknesses in this paper. But a few things that I suggest the authors to add for the sake of completeness:
As the main selling point of interpretable-by-design methods is interpretability, it feels a bit odd that the authors did not add any examples of interventions on a particular sample. Even though the metrics clearly show the method is effective, a few concrete test samples and uses of intervention could better illustrate what interventions do, how the posterior of the task prediction changes before/after interventions, etc. Such illustrations could help readers who may be from a different field.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. I understand that “the single MC sampling per training step” is valid, but what is the main motivation for doing so? Is it to improve training speed? And do you speculate, (even better, show with ablation studies) that improving this estimation can improve convergence when optimizing your objective?
2. In Figure 3, can you explain why intervention could cause a drop in performance for Joint CBM-Logit? Since the proposed objective explicitly optimizes for CE after intervention, would you agree that your method is trained to prevent such a drop in performance after intervening?
3. How long does it take to train a typical model with the proposed objective? I would imagine the rollout loss can be slow when T is large. Does restricting T to be small heavily impact the performance of the trained model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations of this work are addressed in the main manuscript and are well stated. The answers to my questions above could potentially be brought up as limitations, and if so, I would suggest the authors add them to the limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer oMUi,
Thank you so much for the very encouraging review and the extremely valuable feedback that came with it. We are very glad that you found our work significantly novel, well-written, and clearly motivated. Below we answer the questions you raised in your review.
### Motivation for single MC sample
As you rightly suggested, the primary motivation for a single MC sample is efficiency at training time (as this has to be done for each training step). During early experimentation with our method, we did some informal studies on how the number of samples affected our model’s performance. We noticed that the gains from more MC samples are marginal in practice yet come at a significant cost to wall-clock training times. Intuitively, we believe this may be because a sample is seen multiple times during training (across different epochs), and therefore there is not much benefit in adding more mask samples during a specific epoch if the same sample will eventually be seen again in a future epoch (where we will sample a different initial intervention mask). One way to think of this effect is that it is akin to how increasing the number of MC samples during a VAE’s training yields marginal improvements while making training times longer. As such, we opted for a single MC sample.
### Detrimental interventions in Joint-CBM-Logit
This is an excellent and interesting question. Our hypothesis is that traditional CBMs can react negatively after an intervention is performed because of *concept leakage* (see our global reply for more information regarding this). Leakage occurs when a CBM-like model learns to exploit a concept’s continuous nature to encode unnecessary information as part of a concept’s representation. Such impurities can have detrimental effects when intervening especially if the intervention operation results in a destructive state of this continuous space (e.g., setting a concept probability to $1$ or $0$ as done when intervening in a Joint-CBM). This is because such an operation would destroy any extra information that the downstream label predictor might’ve otherwise exploited to predict the output class, leading to detrimental concept interventions. When using logits over sigmoidal bottlenecks, we allow much more flexibility in the concept activations and therefore the chance for concept leakage is higher (and also a possible reason why the same drop in Figure 3 is not observed for Joint-CBM-Sigmoid). Notice furthermore that when the set of concept annotations at training time is incomplete w.r.t. the downstream task (as in CUB-Incomp), the chances of leakage in traditional CBMs are much higher as the model’s bottleneck will be underprovided and therefore the model will have to make trade-offs between task accuracy and concept purity/accuracy.
Such leakage occurs in IntCEMs; however, as you mention, it is unlikely to have a detrimental effect on the model’s performance after the intervention is performed due to how IntCEM is trained and how our model operates when intervened on. This is because (1) we incentivise our model to be reactive to interventions at train time with the hope that they will learn to be more accurate with very little external feedback at test time, and, more importantly, (2) interventions in IntCEMs are **not destructive** as we only change the embedding that we pass into the downstream label predictor, allowing the model to still exploit extra information encoded in the learnt embeddings even after the intervention has been performed.
We realise this could have been better explained in our experimentation section and will include this leakage-motivated discussion and hypothesis in our updated discussion section.
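The non-destructive intervention contrasted above can be illustrated with a minimal sketch, assuming CEM-style per-concept positive/negative embeddings mixed by a probability (the function and variable names here are ours for illustration, not from the paper's code):

```python
import numpy as np

def intervene_embeddings(c_plus, c_minus, p_hat, c_true, mask):
    """Non-destructive, differentiable concept intervention (CEM-style sketch).

    c_plus, c_minus: (k, m) positive/negative embeddings per concept
    p_hat:           (k,) predicted concept probabilities
    c_true:          (k,) ground-truth concept labels in {0, 1}
    mask:            (k,) 1 where the expert intervenes, 0 elsewhere
    """
    # Replace predicted probabilities with ground truth only where intervened.
    p = mask * c_true + (1 - mask) * p_hat
    # Mix the two embeddings; information stored in them survives intervention.
    return p[:, None] * c_plus + (1 - p)[:, None] * c_minus

# Intervening on concept 0 (true label 1) anchors it at its positive
# embedding; concept 1 keeps its predicted mixture.
emb = intervene_embeddings(
    np.ones((2, 3)), np.zeros((2, 3)),
    p_hat=np.array([0.2, 0.9]),
    c_true=np.array([1.0, 0.0]),
    mask=np.array([1.0, 0.0]),
)
assert np.allclose(emb[0], 1.0) and np.allclose(emb[1], 0.9)
```

Because the intervention only shifts the mixing weight rather than overwriting activations with hard 0/1 values, the label predictor still receives a full embedding after intervention.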
### Training time of IntCEM and effect of $T$
We studied the effect of IntCEM’s regularisers on its training times in Appendix A.5 (particularly Table A.2). To summarise, we observed that the training times in IntCEMs increase by about 60% compared to CEMs. However, it is worth noting that we observed a large variance in training times (with even some IntCEM runs converging faster than CEM runs!) and that the extra training costs of IntCEMs amortise in practice as running the learnt intervention policy is much more efficient than equivalent post-hoc intervention policies (see table A.3 for details). Training time computations are also implementation-specific so they could be further optimised in the future.
As for the value of $T$, you are correct in stipulating that this affects both training times and performance. We included an ablation study showing the effect of $T$ in our model’s performance in Appendix A.6.3 (particularly Figure A.3). There, we observe significant performance differences only when T is very small (e.g., $T=1$) relative to larger values of $T$ (although notably, IntCEM is still better than CEM in this instance after a larger number of interventions). In practice, we observe that increasing $T$ above a small value (around $5$) results in negligible gains. Some intuition for this result is that one has fewer “interesting” concepts for a given sample to select from after long trajectories, leading to diminishing returns as we increase the value of $T$.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for addressing my questions. As of now, I do not have any further questions. | Summary: The authors introduce IntCEMs, an extension of Concept Embedding Models designed
specifically to react correctly to external interventions to the learned concepts.
Compared to regular CEMs, IntCEMs feature two additional elements: a policy that,
essentially, guesses what interventions a human expert would do on the model for
any given input, and a penalty that regularizes the model to achieve high accuracy
under interventions. The idea is to ensure the model performs well even after
interventions - under the assumption the model requires interventions to ``ask
for help'', i.e., to obtain the ground-truth label of certain concepts at test
time. Experiments show that in fact IntCEMs outperform CEMs and CBMs on four
data sets at test time under interventions, even from a different policy.
**Post-rebuttal update**: Increasing the score by one point under the assumption
the authors will fairly highlight leakage as a limitation of CBMs/CEMs/IntCEMs
and its interaction with interventions in the paper. I am still a bit concerned about
impact, but the contents of the paper are good quality overall.
Strengths: + English is good, narrative is generally good but parts of it are confusing, see below
+ Motivation is sensible
+ Proposed approach is also sensible
+ Empirical evaluation is generally positive, with only some minor drops in concept quality depending on choice of \lambda_roll
+ Related work is sufficient
Weaknesses: - One issue is limited significance, in the sense that the authors tackle a problem that affects only a certain type of operation (namely, a *specific* type of interventions - not all possible interventions) performed on a certain type of model (CEMs, CBMs). It is true that the performance gain of IntCEM vs CEM under interventions can be substantial (CelebA in figure 3), but it is generally more modest (CUB, CUB-Incomp; the diff in perf for MNIST-Add and MNIST-Add-Incomp are somewhat biased by the difference at Groups Intervened = 0, which is not due to extra robustness to interventions). This is not a deal breaker, and indeed my score is positive, but it explains why I decided not to go above weak accept.
- Writing is a bit confusing. It took me a while to understand where the ``learning to receive help'' fits into the abstract and introduction. The idea is that models can ask users to help them at test time by - essentially - requesting the ground-truth label of certain concepts, which means that this work is closer in spirit to concept-level active learning (but at test time) rather than to interventions proper. These have a causal connotation and can be used for changing the concepts to *any* value, not only to their ground-truth values. See for instance all the literature on algorithmic recourse. The target task should be made more explicit, at the bare minimum in the introduction. I also feel the text abuses greek letters (all of section 3) a little, hindering readability. Every time I see, say, \eta, \psi, \omega... in the text, I have to look back at their definition. The $c$ letter also appears in too many variants. More generally, the text feels quite compressed.
- Evaluation is only carried out against CEMs and CBMs. Other self-interpretable models exist and have been published recently. Considering the focus on CBMs, this is not a major issue, but it is an issue nonetheless.
Minor issues worth fixing
-------------------------
- line 102: the range {0, 1}^k is wrong, \tilde c_i is set to 0.5.
- Considering \mu_i also appears in the definition of \tilde c_i, Eq 1 is probably too complex for what it needs to achieve. Is it possible to simplify it?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. It was shown by:
Mahinpei et al. "Promises and pitfalls of black-box concept learning models", 2021.
Margeloiu et al. "Do Concept Bottleneck Models Learn as Intended?", 2021.
that CBMs suffer from "concept leakage", whereby learned concepts encode information from unintended sources (like unobserved concepts), with negative consequences for interpretability. I am wondering 1) how IntCEMs fare in this regard; specifically, I am only aware of two self-interpretable models that specifically address this, namely:
Havasi et al. "Addressing Leakage in Concept Bottleneck Models", 2022.
Marconato et al. "GlanceNets: Interpretable, Leak-proof Concept-based Models", 2022.
A brief comment on IntCEMs vs leakage would be welcome - this is an important issue. 2) Whether robustness to interventions can prevent leakage - the link is that the model is specifically trained to be robust against replacing a (possibly leaky) concept prediction with a (presumably non-leaky) concept annotation.
Q2. Why did you compare only against CEMs and CBMs? There are other self-interpretable models out there, like the two aforementioned works, part-prototype networks, concept whitening, self-explainable neural networks, and many others. They all support interventions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think so, in section 5 there is a brief discussion on some limitations (e.g. time complexity). The issue of concept leakage is not mentioned though.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Zgwp,
Thank you for your very insightful feedback. We are glad you found our paper’s motivation, evaluation, and relevant work interesting and generally positive. Below we address your questions.
### More explicit definition of “interventions”
This is a great point. We want to clarify that, as you correctly mentioned, the term “intervention” here is not used as in the causality literature and is more related to the field of active feature acquisition (e.g. [1]). This aligns with your intuition regarding seeing this as a form of concept-level active learning at test time. Nevertheless, the concept learning community widely uses the term “concept intervention” for the operation in which an expert provides a binary concept label at test time for a CBM-like model. Hence we maintained the term to avoid confusion within this paper’s “host” field of study.
To address possibly confusing future readers from other subfields, we will state what we mean with the term “concept interventions” early on in our introduction. Furthermore, we will elaborate on how these differ from their causal counterparts and how they are related to active feature acquisition in our background section.
### IntCEM’s performance gains
The scale in some tasks in Figures 3 and 4 is large as we needed to fit baselines underperforming w.r.t. IntCEM. However, looking at CUB’s performance values in a table view of these figures (see Table 1 of our supplement), we observe a significant absolute boost of 6.5% for IntCEM vs CEM when randomly intervening on 50% of concepts (we achieve almost perfect performance after randomly intervening on 75% of the concepts). What is more important, however, is that by intervening following $\psi$ rather than performing just random interventions, IntCEM’s absolute gains in CUB when intervening on 50% of the concepts shift to ~9.5% (with an almost perfect accuracy of 99.17%). This boost comes with negligible test time computational costs and without the need to (re)learn any further policies.
As for the MNIST tasks, these tasks showcase how our training procedure can lead to richer embeddings that allow **performance boosts even without interventions**. We will add a small sentence to highlight this in Section 4.1. Moreover, to make this analysis easier for external readers, we will include Table 1 of our rebuttal supplement in a new Appendix.
### Evaluation baselines
To the best of our knowledge, the main non-graph-based prior works that have evaluated concept interventions are CBMs (they introduced concept interventions formally), Concept-based Model Extraction (CME) [2], CEMs [3], and leakage-free CBMs [4]. Leakage-free CBMs avoid using soft representations that are prone to leakage by using hard representations for their bottleneck together with an optional side channel and training procedure that intentionally “flows/leaks” information not captured by the hard known concepts. Nevertheless, although these additions are noteworthy and interesting, this model effectively operates like a CBM in practice and the extra leakage bypass it offers is a constrained version of the bypass allowed by embeddings in CEMs. Similarly, CMEs were shown to underperform vanilla CBMs when intervened on [2] and are post-hoc explainability methods rather than interpretable architectures. Because of these reasons, we focused on CEMs and CBMs as they capture the key mechanisms in all of the methods mentioned above.
We indicate why we did not include other concept-based interpretable methods as baselines for our evaluation:
- GlanceNets: Although GlanceNets support concept interventions, these go beyond the original scope of the architectural changes between GlanceNets and CBMs. The lack of intervention experiments in GlanceNet’s paper supports this. Furthermore, concept interventions in GlanceNets are not too different from those in Hybrid-CBMs [3], which are known to underperform w.r.t CEMs.
- Concept Whitening (CW): To the best of our knowledge, Concept Whitening does not support concept interventions as it is unclear how one would set entire feature maps at test-time to trigger a concept intervention. This difficulty is even greater when one considers that CW usually performs a reduction over these maps to determine a concept’s “activation level”. Notice that their manuscript does not allude to concept interventions as they slightly precede CBMs.
- Self-explaining Neural Networks (SENNs) and Prototypical part networks (ProtoPNets): Both of these methods work in a concept unsupervised manner (they do receive feedback from task labels but not from concept labels). Therefore it is not immediately clear how to perform concept interventions given that one needs to assign semantics to the discovered concepts first. Such a task is non-trivial to evaluate and goes beyond the scope of our own work.
- Other more recent methods (label-free CBMs, post-hoc CBMs): There are very recent methods that do support concept interventions (e.g., label-free CBMs and post-hoc CBMs). However, these were published less than two months before the submission deadline (at ICLR 2023), which meant that we could not reasonably include them as part of our evaluation.
We will clarify these choices in our evaluation section by justifying our baselines as done above.
### Misc minor fixes
Thanks for finding this typo! As for Eq 1, it could be simplified by using a piecewise definition; however, our original intent in expressing Eq 1 as it is was to make it clear how one may write a concept intervention operation in a differentiable manner.
## References
[1] Li et al. "Active feature acquisition with generative surrogate models." ICML 2021.
[2] Kazhdan et al. "Now you see me (CME): concept-based model extraction." arXiv:2010.13233 (2020).
[3] Espinosa Zarlenga et al. "Concept embedding models" NeurIPS 2022.
[4] Havasi et al. "Addressing leakage in concept bottleneck models." NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: > More explicit definition of “interventions”
I do appreciate this change, thank you.
> IntCEM’s performance gains
Thank you for the clarification.
> Leakage-free CBMs [...] effectively operates like a CBM in practice and the extra leakage bypass it offers is a constrained version of the bypass allowed by embeddings in CEMs.
Based on your general reply, would I be correct in thinking that a "less leaky" model is likely to see smaller benefit from better chosen interventions? I'm just curious.
> concept interventions in GlanceNets are not too different from those in Hybrid-CBMs [3], [...]
My understanding is that Hybrid-CBMs are CBMs with additional unsupervised concepts. Presumably these are used for prediction. GlanceNets do not use unsupervised concepts for prediction. In what sense are their "concept interventions" similar? Also, what sense Hybrid-CBMs "underperform"?
> Concept Whitening, SENNs, ProtoPNets
I see. I think these points are valid, at least at a high level. Thank you for the explanation.
Edit: fixed the formatting.
---
Reply to Comment 1.1.1:
Comment: Thank you for replying to our rebuttal; we really appreciate your feedback and the thorough review. Below we reply to the questions raised in the comment above:
> "Would I be correct in thinking that a 'less leaky' model is likely to see smaller benefit from better chosen interventions?"
This is a really good question. We think a thorough study of this may be needed to answer your question. However, if we were to speculate, our intuition from this work is that models that allow more information through their bottlenecks, and that are able to maintain that information after an intervention is performed (i.e., the intervention is not destructive), may be able to perform better after interventions. This is because the label predictor has more information it is able to exploit after an intervention is performed.
Nevertheless, we emphasize this is **not** a sufficient condition, as seen in Hybrid-CBMs. In these models, as one increases the amount of extra unsupervised capacity in the bottleneck, interventions become less effective due to (1) the model learning possible shortcuts directly from the input features to the task prediction via the unsupervised neurons, and (2) the lack of clarity on how one should modify these unsupervised neurons when one intervenes on a specific known concept. Notice that CEMs and IntCEMs circumvent this shortcutting by introducing training-time regularisers that implicitly encourage the distribution of concept embeddings to be distinct when the concept is "on" versus when it is "off".
> Hybrid-CBMs and GlanceNets
You raised a good point, and we acknowledge that GlanceNets do not use the extra unsupervised "concepts" for prediction, while Hybrid-CBMs use the unsupervised capacity in their predictive process. We meant that they were similar in that, during interventions, only the supervised concepts are modified (without touching the unsupervised neurons in the bottleneck). This leads to "ill-defined" states during interventions, as the change intended by the intervention is not propagated to all relevant neurons in the bottleneck. The original CEM paper empirically showed that this leads to Hybrid-CBMs being significantly less receptive to interventions than all other baselines they compared against (including CBMs and CEMs). If GlanceNets used the unsupervised concepts during prediction, we would expect a similar situation to develop. More importantly, however, precisely because GlanceNets do not use the unsupervised concepts for prediction and use them only for reconstruction in open-set recognition, we expect interventions to suffer significantly when the set of training concepts is incomplete w.r.t. the downstream task, a common setup in real-world tasks (e.g., $\texttt{MNIST-Add-Incomp}$, $\texttt{Cub-Incomp}$, $\texttt{CelebA}$). Because of this, and equally importantly because GlanceNets were not designed or even evaluated to receive interventions, we decided to focus our evaluation on models that are very likely better at receiving interventions, such as CEMs.
Rebuttal: We thank the reviewers for their very insightful feedback and for taking the time to read our work carefully. Their feedback has certainly improved the quality of our manuscript. We hope to address your concerns in this rebuttal and its corresponding supplementary document. We reply to questions shared by two or more reviewers in this general rebuttal, and address questions specific to individual reviewers in their respective rebuttals.
# Summary of Supplement
In the rebuttal’s supplement (see PDF attached below), we include the following figures and tables:
- Table 1 is a (simplified) tabular version of Figure 3 that helps analyse each model’s performance when a fixed fraction of concepts is intervened on. This table will be added to the Appendix of our updated manuscript.
- Figure 1 depicts an application of IntCEM’s learnt policy to two CUB test samples. This figure will be added to the main body of our updated manuscript in Section 4.3, where we discuss IntCEM’s learnt policy $\psi$.
- Table 2 shows the Oracle Impurity Score (a measurement of leakage described below) for our jointly-trained baselines across all datasets. This constitutes a **new experiment** introduced in this rebuttal, the purpose of which is to understand better how leakage works in IntCEM and the role it may play in concept interventions with embedding-based models. We will include this table in a new Appendix section with a brief summary of its main results included in our discussion section.
# Answers to common questions
### Concrete example of IntCEM’s policy at test-time (reviewers oMUi and 4KHx)
In Figure 1 of the supplement, we show two concrete examples of intervening on an IntCEM in CUB to help clarify how our model works and how following its policy $\psi$ can change its posterior label distribution in practice. These examples will be included in Section 4.3 where we discuss how $\psi$ performs.
### Leakage in IntCEM’s embeddings (reviewers Zgwp and 23Ef)
As mentioned by reviewers Zgwp and 23Ef (and also related to one of reviewer oMUi’s questions), leakage [1] is a known issue in traditional CBMs. In fact, [2] empirically showed that leakage may lead to detrimental interventions: a CBM’s downstream label predictor may learn to rely on information leaked from concept representations, and this information is necessarily destroyed when performing interventions in CBMs. This is because, to intervene on a CBM’s bottleneck, one must overwrite a neuron’s activation with the ground-truth value of the concept that neuron represents (e.g., a bottleneck’s neuron may be set to $0$ if the concept it represents is known to be “off” for the current sample, removing all information that could have been encoded in the continuous value previously outputted by that neuron).
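To make the destructive nature of these hard interventions concrete, the following minimal NumPy sketch (function and variable names are hypothetical, not code from our paper) shows how intervening on a CBM clamps a bottleneck neuron to its ground-truth value, discarding whatever continuous activation (and leaked information) it previously carried:

```python
import numpy as np

def intervene_cbm(bottleneck, intervened_idxs, true_concepts):
    """Hard intervention on a vanilla CBM bottleneck.

    Overwrites each intervened neuron with the ground-truth concept
    value (0 or 1), destroying the continuous activation, and any
    leaked information, the neuron previously carried.
    """
    out = bottleneck.copy()
    out[intervened_idxs] = true_concepts[intervened_idxs]
    return out

# Example: concept 1 is known to be "off"; its soft activation 0.83
# (which may encode leaked task information) is clamped to 0.
bottleneck = np.array([0.10, 0.83, 0.45])
truth = np.array([0.0, 0.0, 1.0])
fixed = intervene_cbm(bottleneck, [1], truth)
# fixed -> array([0.10, 0.0, 0.45])
```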
This issue does not arise in CEMs and IntCEMs because **embedding-based concept interventions are not destructive**. Indeed, as described in Section 2 of our paper, when intervening on either of these models, one essentially performs a swap between embeddings, allowing the model to maintain any leaked information from the input/other concepts/task labels as part of the embedding of each concept. Therefore, we do not expect interventions to be significantly affected by leakage in concept representations as long as a concept’s positive ($\hat{\mathbf{c}}^{(+)}$) and negative ($\hat{\mathbf{c}}^{(-)}$) embedding distributions are easy for the downstream label predictor to discriminate.
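As an illustration of why embedding swaps are non-destructive, here is a minimal NumPy sketch of a CEM-style intervention (shapes and names are illustrative assumptions, not our actual implementation): each concept contributes a mixture of its learnt positive and negative embeddings, and intervening only updates the mixture weight, so the learnt embeddings (and any information they carry) are preserved:

```python
import numpy as np

def intervene_cem(c_pos, c_neg, probs, intervened_idxs, true_concepts):
    """Embedding-based intervention as an embedding swap.

    Each concept i contributes a mixture
        e_i = p_i * c_pos[i] + (1 - p_i) * c_neg[i]
    to the bottleneck. Intervening sets p_i to the ground truth, so
    the model receives the *learnt* positive or negative embedding
    (which may retain leaked information) rather than a hard 0/1.
    """
    p = probs.copy()
    p[intervened_idxs] = true_concepts[intervened_idxs]
    # Mix per-concept positive/negative embeddings with updated weights.
    return p[:, None] * c_pos + (1.0 - p[:, None]) * c_neg

rng = np.random.default_rng(0)
c_pos = rng.normal(size=(3, 4))   # 3 concepts, 4-dim embeddings each
c_neg = rng.normal(size=(3, 4))
probs = np.array([0.9, 0.6, 0.2])
truth = np.array([1.0, 0.0, 1.0])
emb = intervene_cem(c_pos, c_neg, probs, [1], truth)
# Concept 1's slot is now exactly its learnt negative embedding.
assert np.allclose(emb[1], c_neg[1])
```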
As keenly pointed out by reviewer 23Ef, we hypothesize that higher leakage may contribute to IntCEMs’ better performance when being intervened on. To evaluate this hypothesis, we measure the Oracle Impurity Score (OIS) [2] of concept representations learnt by CBMs, CEMs, and IntCEMs across all tasks. This score, between 0 and 1, essentially measures how much extra information, on average, each learnt concept representation captures from other possibly unrelated concepts (with higher scores representing higher impurities and therefore more leakage between concepts). Our results, shown in Table 2 of the rebuttal’s supplement, indicate that (1) as one would expect, there is significantly more leakage in CEMs than in CBMs, and (2) more importantly, IntCEM embeddings capture more impurities than CEM’s embeddings across all datasets we tested except $\texttt{CUB-Incomp}$, providing evidence towards our hypothesis. Given that this preliminary leakage study may open the door for potentially useful and interesting future research, we will include this experiment in a new appendix (with a brief discussion of it introduced in our paper’s discussion). Our updated discussion will highlight that, contrary to common assumptions, leakage may not always be undesired and could be a healthy byproduct of more expressive concept representations in models that accommodate such expressivity.
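For intuition on what an inter-concept impurity probe measures, the sketch below is a simplified illustration using scikit-learn, not the exact OIS implementation of [2]: it trains a linear probe from each concept's learnt representation to every ground-truth concept label, so high off-diagonal AUCs indicate leakage between concepts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def impurity_matrix(reprs, labels):
    """Simplified inter-concept impurity probe (illustration only).

    reprs:  (n_samples, n_concepts, emb_dim) learnt concept representations
    labels: (n_samples, n_concepts) binary ground-truth concept labels
    Entry (i, j) is the AUC of a linear probe predicting concept j from
    concept i's representation; high off-diagonal entries suggest leakage.
    """
    n, k, _ = reprs.shape
    m = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            probe = LogisticRegression(max_iter=1000)
            probe.fit(reprs[:, i, :], labels[:, j])
            scores = probe.predict_proba(reprs[:, i, :])[:, 1]
            m[i, j] = roc_auc_score(labels[:, j], scores)
    return m

# Synthetic example: concept 0's representation leaks concept 1's label.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=(200, 2)).astype(float)
reprs = np.zeros((200, 2, 2))
reprs[:, 0, 0] = labels[:, 0] + 0.1 * rng.normal(size=200)
reprs[:, 0, 1] = labels[:, 1] + 0.1 * rng.normal(size=200)  # leaked info
reprs[:, 1, 0] = labels[:, 1] + 0.1 * rng.normal(size=200)
reprs[:, 1, 1] = rng.normal(size=200)
m = impurity_matrix(reprs, labels)
# The probe detects the leaked concept-1 information in concept 0's slot.
assert m[0, 1] > 0.9
```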
### User studies (reviewers 23Ef and 4KHx)
We agree that a human user study could reveal very interesting results and complement our paper's extensive experiments and main contributions. We have, however, left this for future work, as we believe the exciting research directions arising from a formal user study of concept interventions would merit a whole paper on their own. We are indeed very excited and optimistic about exploring these future avenues.
Following Reviewer 23Ef’s suggestion of including a future work section for our manuscript, we will update our paper to suggest user studies as part of future work.
### References
[1] Mahinpei et al. "Promises and pitfalls of black-box concept learning models." Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI at ICML (2021).
[2] Espinosa Zarlenga et al. "Towards robust metrics for concept representation evaluation." AAAI (2023).
Pdf: /pdf/22abd8506892efbac769bcba242becba62941967.pdf