title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RDumb: A simple approach that questions our progress in continual test-time adaptation | Accept (poster) | Summary: Test-time adaptation (TTA) techniques seek to adapt a model to new unlabelled samples without access to the original training data. However, TTA approaches are typically not evaluated for long runtimes. This work proposes a new benchmark, Continuously Changing Corruptions (CCC): a stream of corrupted ImageNet images with gradual changes and varying degrees of difficulty. Previous approaches fail on CCC when compared to the pretrained model as well as to a simple new baseline method dubbed RDumb. RDumb is shown to achieve high performance across several experimental settings.
Strengths: Strengths:
- The overall message of the paper is easy to follow and the experiments are illustrative of a serious shortcoming in the evaluation of TTA methods.
- The baseline is simple and satisfies all the TTA assumptions, and should be considered a baseline in future work in this area
- Experiments are very extensive and thorough
Weaknesses: Weaknesses:
- There are a few issues related to clarity that should be addressed in the next version of the paper. See Questions
- I would encourage the authors to add error bars to their tables, so that it is clear that the differences are statistically significant
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Questions:
1) The mathematical notation in Eq. 1 is a bit difficult to parse. In particular, it is unclear what is meant by $cos(y_t, \bar{y}_t)$. Is this cosine similarity or cosine distance? I would encourage the authors to clarify this and also provide a clear description of the equation in the text.
2) Can the authors precisely define "collapse"? This term is used but is never defined.
3) In Figures 3 and 4, if RDumb is being reset to the pretrained model weights, then why do the blue and purple lines never intersect?
4) It would be interesting to test CLIP image encoders [1] that are naturally robust and have high zero-shot accuracy on OOD samples. In future work, I would be curious to see how such models behave on the proposed CCC when TTA techniques are applied.
References:
- Learning Transferable Visual Models From Natural Language Supervision, Radford et al. 2021
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We’re happy you found our paper to be easy to follow, and the experiments to be extensive. We want to address your questions:
> I would encourage the authors to add error bars to their tables, so that it is clear that the differences are statistically significant
Good point. We updated the paper with a new version of the table:
| Adaptation method | CIN-C | CIN-3DCC | CCC-Easy | CCC-Medium | CCC-Hard | Average |
|--------------------------|----------------|----------------|----------------|----------------|----------------|-----------------|
| Pretrained (He et al.) | 18.0 $\pm$ 0.0 | 31.5 $\pm$ 0.0 | 34.1 $\pm$ 0.22 | 17.3 $\pm$ 0.21 | 1.5 $\pm$ 0.02 | 20.5 |
| BN (Schneider et al.) | 31.5 $\pm$ 0.02 | 35.7 $\pm$ 0.02 | 42.6 $\pm$ 0.39 | 27.9 $\pm$ 0.74 | 6.8 $\pm$ 0.31 | 28.9 |
| Tent (Wang et al.) | 15.6 $\pm$ 3.5 | 24.4 $\pm$ 3.5 | 3.9 $\pm$ 0.58 | 1.4 $\pm$ 0.17 | 0.51 $\pm$ 0.07 | 9.2 |
| RPL (Rusak et al.) | 21.8 $\pm$ 3.6 | 30.0 $\pm$ 3.6 | 7.5 $\pm$ 0.83 | 2.7 $\pm$ 0.36 | 0.67 $\pm$ 0.14 | 12.5 |
| SLR (Mummadi et al.) | 12.4 $\pm$ 7.7 | 12.2 $\pm$ 7.7 | 22.2 $\pm$ 18.4 | 7.7 $\pm$ 9.0 | 0.66 $\pm$ 0.57 | 11.0 |
| CPL (Goyal et al.) | 3.0 $\pm$ 3.3 | 5.7 $\pm$ 3.3 | 0.41 $\pm$ 0.06 | 0.22 $\pm$ 0.03 | 0.14 $\pm$ 0.01 | 1.9 |
| CoTTA (Wang et al.) | 34.0 $\pm$ 0.68 | 37.6 $\pm$ 0.68 | 14.9 $\pm$ 0.88 | 7.7 $\pm$ 0.43 | 1.1 $\pm$ 0.16 | 19.1 |
| EATA (Niu et al.) | 41.8 $\pm$ 0.98 | 43.6 $\pm$ 0.98 | 48.2 $\pm$ 0.6 | 35.4 $\pm$ 1.0 | 8.7 $\pm$ 0.8 | 35.5 |
| ETA (Niu et al.) | 43.8 $\pm$ 0.33 | 42.7 $\pm$ 0.33 | 41.4 $\pm$ 0.95 | 1.1 $\pm$ 0.43 | 0.23 $\pm$ 0.05 | 25.8 |
| RDumb (ours) | **46.5 $\pm$ 0.15** | **45.2 $\pm$ 0.15** | **49.3 $\pm$ 0.88** | **38.9 $\pm$ 1.4** | **9.6 $\pm$ 1.6** | **37.9** |
> The mathematical notation in Eq. 1 is a bit difficult to parse. In particular, it is unclear what is meant by $\cos(y_t, \hat{y}_t)$. Is this cosine similarity or cosine distance? I would encourage the authors to clarify this and also provide a clear description of the equation in the text.
We apologize for not making this clear: $\cos$ denotes cosine similarity. We have made this clearer in the text.
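For concreteness, here is a one-line sketch of the similarity in question (variable names are illustrative, not from the paper):

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two prediction vectors; ranges in [-1, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```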
>Can the authors precisely define "collapse"? This term is used but is never defined.
We consider a model collapsed when it classifies images worse than the pretrained, non-adapting model. Thanks for raising this clarity issue; we will define the term explicitly in the next version.
>In Figures 3 and 4, if RDumb is being reset to the pretrained model weights, then why do the blue and purple lines never intersect?
Each point on the graph averages 781 iterations (around 50k images seen), so that we do not have to plot millions of points. Although RDumb resets, it very quickly becomes better than the pretrained model (see also Figure 5a), which is why the lines don’t intersect.
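To make the resetting schedule concrete, a minimal sketch of RDumb's control loop might look as follows (the `adapt_step` callback and the state layout are illustrative assumptions, not the paper's implementation):

```python
import copy

def rdumb_stream(model_state, adapt_step, stream, reset_interval=1000):
    """Sketch of RDumb: adapt continually, but restore the pretrained
    weights every `reset_interval` batches so errors cannot accumulate."""
    pretrained = copy.deepcopy(model_state)  # keep the original weights around
    preds = []
    for t, batch in enumerate(stream):
        if t > 0 and t % reset_interval == 0:
            model_state = copy.deepcopy(pretrained)  # the "dumb" reset
        # e.g. one ETA-style weighted-entropy adaptation step on the batch
        model_state, y = adapt_step(model_state, batch)
        preds.append(y)
    return preds
```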
>It would be interesting to test CLIP image encoders [1] that are naturally robust and have high zero-shot accuracy on OOD samples. In future work, I would be curious to see how such models behave on the proposed CCC when TTA techniques are applied.
We think so too, which is why we think CCC is an important benchmark for future work.
We hope that our response clears up any questions you had, and if it doesn't, we’d be glad to add further clarifications.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the clear rebuttal, all of my concerns are addressed so I have raised my score. I would recommend the authors to address why the lines do not intersect in the next version of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Dear Reviewer,
Thank you for your responses. | Summary: This paper proposes a new benchmark for continual test-time adaptation (CTTA): CCC (Continually Changing Corruptions).
Experiments for existing state-of-the-art approaches using their proposed benchmark demonstrate that even a non-adapting model performs better than existing approaches for this benchmark.
Further, they propose a simple baseline approach, named $\textit{RDumb}$, which periodically resets the model to its pre-trained state and performs better than the SOTA approaches.
Strengths: 1. This paper proposes a more challenging benchmark for CTTA.
2. Combining a pair of corruptions is a novel idea.
3. Experimental results demonstrate that several state-of-the-art approaches perform poorly on this benchmark.
4. A simple periodic reset mechanism with weighted entropy (as in ETA) adaptation loss outperforms the SOTA CTTA approaches.
Weaknesses: 1. In CoTTA (Wang et al., CVPR 2022), there is a setting of gradually changing the corruptions (CIFAR10-to-CIFAR10C gradual). So the idea of gradually changing corruption is not entirely new, even though CCC has a combination of two corruptions.
2. This paper does not analyze the reasons for the performance degradation of the SOTA TTA approaches, such as overfitting, error accumulation, catastrophic forgetting, or interference, and does not provide any insights or suggestions on mitigating or preventing it.
3. Experimental results on CIFAR10-to-CIFAR10C, CIFAR100-to-CIFAR100C classification tasks, along with their CCC variants, are missing.
Minor Comment: Line 62: use `` (LaTeX opening quotes) before RDumb
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In CoTTA (Wang et al., CVPR 2022), there is a setting of gradually changing the corruptions (CIFAR10-to-CIFAR10C gradual). CoTTA, as well as PETAL [1], reports performance improvement in this gradual setting, so it would be interesting to see how they would work on the CCC variant of CIFAR10-to-CIFAR10C. Do the authors have any intuition behind this?
2. As the authors point out, the ImageNet-C dataset is large and other benchmarks are much smaller; a smaller dataset can also make the task more challenging, since there would be less data to learn from for a domain shift. It would be interesting to see the performance on CCC variants of CIFAR10-to-CIFAR10C and CIFAR100-to-CIFAR100C.
3. Also, with regards to the semantic segmentation task, do the authors have some idea as to how this dataset/benchmark building mechanism would be applicable?
*References*
1. Brahma, Dhanajit, and Piyush Rai. "A Probabilistic Framework for Lifelong Test-Time Adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. Since the state-of-the-art approaches are being evaluated on a new benchmark dataset, it is very important to tune their hyperparameters optimally.
2. The details regarding hyperparameter tuning for various SOTA approaches are missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We’re glad you found the dataset novel and challenging, and our experimental results meaningful! We addressed all mentioned weaknesses below and replied to your questions.
> In CoTTA, there is a setting of gradually changing the corruptions [...]. So the idea of gradually changing corruption is not entirely new, even though CCC has a combination of two corruptions.
We would like to point out three important differences and contributions CCC makes over this setting:
First, the setting used in the CoTTA paper is substantially easier than CCC and fails to show a collapse to chance level. We ran an experiment on the concatenated CIFAR10-C dataset for more than 100 million images with no sign of collapse (Fig. 4, Rebuttal PDF). We also ran a similar experiment for 25 million images on the gradual CIFAR10 task (Fig. 5). While the fluctuations in performance increase, the setup still fails to produce the collapse to chance level we observe at ImageNet scale with CCC. This highlights the difference in difficulty from our “real world sized” CCC dataset.
Second, “gradual” here means a transition from clean to corrupted images. CCC corruptions have a much larger combinatorial space due to the simultaneous application of multiple corruptions.
Third, the C10-C tasks are less well controlled, similar to the issues we discuss for concatenated ImageNet-C. You can see this in Figure 6 in the rebuttal PDF, where performance fluctuates greatly between noises. This makes the dataset unsuitable for controlled evaluation of models in a scientific setting.
We are happy to discuss this further and agree with you that we should point out these features of CCC more prominently.
> This paper does not analyze the reasons for the performance degradation of the SOTA TTA approaches, such as overfitting, error accumulation, catastrophic forgetting, or interference, and does not provide any insights or suggestions on mitigating or preventing it.
This is a fair point, and we have now added such an analysis.
We constructed a simple 2d Gaussian binary classification example, a domain shift which slightly rotates the data and adds Gaussian noise, and a model which consists of a batch norm layer followed by a logistic regression. When running entropy minimization on the adaptation layer of the batch norm, simple cases emerge where the loss does or does not result in collapse, mainly depending on the relation of signal and noise variances and directions (Fig. 1-2).
The toy example furthermore predicts that the adapted parameters of a model should grow in the long run, and indeed we found exactly this effect when running ETA on a ResNet50 on CCC-Medium (Fig. 3, rebuttal PDF), suggesting that our minimal setup reproduces the relevant aspects of the large-scale case. However, the weight explosion becomes apparent only after the collapse happens, hence weight regularization is not enough to avoid the collapse (Fig. 3b).
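A minimal numpy sketch of a setup in this spirit (the rotation angle, noise scale, and learning rate below are illustrative choices, not the exact values from the rebuttal PDF) already shows the adapted scale parameter growing under entropy minimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 2-D Gaussian classes, then a domain shift: a slight rotation plus
# added Gaussian noise (constants are illustrative).
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (500, 2)),
               rng.normal([2.0, 0.0], 1.0, (500, 2))])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X_shift = X @ R.T + rng.normal(0.0, 1.5, X.shape)  # "test-time" distribution

w, b = np.array([1.0, 0.0]), 0.0  # frozen "pretrained" logistic regression
g = np.ones(2)                    # adaptable batch-norm scale (TTA parameters)

def entropy_step(g, X, lr=0.1):
    """One step of test-time entropy minimization on the batch-norm scale."""
    Xn = (X - X.mean(0)) / X.std(0)          # batch-norm statistics
    z = (g * Xn) @ w + b                     # logits
    p = 1.0 / (1.0 + np.exp(-z))
    dz = -z * p * (1.0 - p)                  # d(entropy)/d(logit)
    grad_g = (dz[:, None] * Xn * w).mean(0)  # chain rule through z
    return g - lr * grad_g

norms = [np.linalg.norm(g)]
for _ in range(200):
    g = entropy_step(g, X_shift)
    norms.append(np.linalg.norm(g))
# The adapted scale grows step after step: entropy minimization rewards ever
# more confident predictions, mirroring the weight growth seen at scale.
```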
Thank you for raising this issue, we think that this analysis adds to the paper and will include it in the camera ready version. We would be happy to hear your thoughts on this.
> Experimental results on CIFAR10-to-CIFAR10C, CIFAR100-to-CIFAR100C classification tasks, along with their CCC variants, are missing.
Continual test-time adaptation methods make claims about the performance of real-world-scale vision models during deployment. CIFAR satisfies none of the desiderata for a dataset to be predictive of model performance in this setup, as we discussed above.
Hence, we refrained from running CIFAR-scale experiments; the minimum scale for meaningful evaluation should be ImageNet-C, variants of it, or CCC. These experiments are sufficiently quick to run on typical research hardware.
However, if we missed a good argument on how CIFAR experiments are useful in providing a signal for building continual adaptation models, we are happy to discuss further. In particular, if CIFAR scale experiments would meaningfully change your assessment of our work, we could discuss a suitable setup for hyperparameter search on all models for evaluation (as CoTTA, EATA, etc. lack details on the CIFAR evaluation protocol and how hyperparameter search was performed).
> Also, with regards to the semantic segmentation task, do the authors have some idea as to how this dataset/benchmark building mechanism would be applicable?
While possible with our construction mechanism, building a “Segmentation-CCC” is well beyond the scope of the current work. However, we think that any model should pass the CCC test first before being scaled up to even more involved computer vision problems.
Otherwise, we agree that this could be an interesting future work, the limiting factor is the huge compute budget needed to calculate the calibration set.
> Since the state-of-the-art approaches are being evaluated on a new benchmark dataset, it is very important to tune their hyperparameters optimally.
We agree, and we performed hyperparameter searches on both EATA (Appendix C) and CoTTA (see reply to reviewer h5ar). For CoTTA, we ran a hyperparameter search using the same protocol we used for RDumb (Section 6), and found that CoTTA still collapses on every level of CCC.
Please note that the only additional hyperparameter we consider is the resetting interval. All other parameters in RDumb are taken from previous work.
If you want us to run any other hyperparameter search, we would be more than happy to do so, but we doubt it would change our key results (as we saw with CoTTA and EATA).
> The details regarding hyperparameter tuning for various SOTA approaches are missing.
Please see our reply above, and also Section 6 and App. C in the paper. We are happy to add additional information.
We fully agree with your sentiment of the importance of hyperparameter tuning in TTA, but feel that the practice in our paper matches or exceeds the practiced standards in the field.
We are happy to discuss further, though.
---
Rebuttal Comment 1.1:
Title: Some of the concerns addressed with additional experiments
Comment: Dear authors,
I want to thank the authors for responding to my comments and questions.
Some of the questions asked have been addressed.
The experiment with toy 2d Gaussian binary classification is interesting.
However, the smaller benchmark datasets, such as CIFAR10-to-CIFAR10C and CIFAR100-to-CIFAR100C, can also make the task more challenging, since there would be less data to learn from for a domain shift. This point has not been addressed. Experimental results on these benchmarks would make the case stronger for the proposed approach.
Based on the response, I do not have any other queries at this point in time.
Thank you.
---
Reply to Comment 1.1.1:
Title: Addressing your remaining questions
Comment: Dear reviewer xRL5,
Thanks a lot for getting back to us. We are happy that you also find the 2D example interesting, and will add it to the paper. Thanks again for pushing in the direction to add analysis, we think this improves the paper a lot.
We would like to address the remaining questions now. In summary, we now show that RDumb significantly outperforms CoTTA on both CIFAR10-C and CIFAR100-C, plus add some further discussion on CIFAR below. Thanks for suggesting these experiments which now further corroborate our claims.
In more detail, we would first like to clarify a possible misunderstanding, as you write:
> since there would be less data to learn from for a domain shift [in CIFAR10/100-to-CIFAR10/100C].
On the mentioned CIFAR->CIFAR-C task in the CoTTA paper, each corruption has 10,000 images which amounts to 156 adaptation steps with our current hyperparams. RDumb reaches on average 84.6% of its performance gain on each holdout noise already after 156 steps (Figure 5a, paper). Therefore, we think that our existing experiment already answers your original question of adaptation behavior with limited data per domain shift. However, please let us know if we missed something / please clarify what you meant originally!
That being said, we still ran the experiments to address your remaining point:
> Experimental results on these benchmarks [CIFAR10/100-to-CIFAR10/100C] would make the case stronger for the proposed approach.
We can now share full results for RDumb, CoTTA, BN, and a pre-trained net on C10-C and C100-C for the setting you outline. We go beyond the CoTTA paper by running 10 permutations so that we can test for statistical significance and report error bars, as suggested by Reviewer Gw5H. Note that CoTTA uses augmentations that resemble the corruptions in CIFAR-C (e.g., brightness, Gaussian noise, contrast, blurring), which we removed to facilitate a meaningful comparison to RDumb.
RDumb significantly outperforms CoTTA in this setting on both CIFAR10 and CIFAR100, either when using the parameters reported in the paper, or when being re-tuned using the same protocol as RDumb (Table 1, below). Note, CoTTA also takes more than 11x as long to run as RDumb on both C10-C and C100-C (Table 2, below).
Overall, we appreciate your encouragement to run this experiment --- it shows that RDumb outperforms CoTTA in a comparable setup also on CIFAR10 and CIFAR100, while being considerably faster.
As a side-note, the optimal reset parameter found by our hyperparameter search is larger than the whole dataset, i.e., the model never resets on this run --- this is exactly the intended optimal behavior we anticipated in [our original response](https://openreview.net/forum?id=VfP6VTVsHc&noteId=hp6DGsSHib) (CIFAR-C does not result in a collapse asymptotically, cf. Figure 4, [rebuttal PDF](https://openreview.net/forum?id=VfP6VTVsHc&noteId=SZYBZnJCYQ)). Our theoretical model of collapse proposes a possible explanation for this behavior: the dataset might be too easy (20.43% or 35.37% error after correcting BN stats vs. 57.4--93.2% error in CCC-Easy to -Hard, cf. Table 1, paper). For sufficiently high signal-to-noise ratios, collapse can be avoided (cf. Figure 1b, [rebuttal PDF](https://openreview.net/forum?id=VfP6VTVsHc&noteId=SZYBZnJCYQ)).
Please let us know what you think and whether this addresses your remaining concern. We are also happy to further discuss your original motivation for us to run this experiment.
---
*Table 1: Performance comparison between CoTTA and RDumb on the CIFAR→CIFAR-C tasks. We report error rates in % (mean +/- empirical standard deviation) over n=10 permutations of noise sequences. Differences between models are statistically significant (ANOVA with Tukey HSD post-hoc testing; RDumb vs. CoTTA at p<0.0001 in all settings).*
| | CIFAR10-C | CIFAR100-C |
|--------------------------|---------------------|---------------------|
| Baseline | 43.28 +/- 0.00 | 46.66 +/- 0.00 |
| Batch Norm | 20.43 +/- 0.00 | 35.37 +/- 0.00 |
| CoTTA (reported params, Wang et al., 2022) | 18.02 +/- 0.214 | 33.20 +/- 0.247 |
| CoTTA (validated params, ours) | 18.31 +/- 0.308 | |
| RDumb (validated params, ours) | 17.27 +/- 0.196 | 31.84 +/- 0.195 |
---
*Table 2: Runtime comparison for a full run through the dataset (150k samples in total) for each method. RDumb is 14.3x faster than CoTTA on CIFAR10, and 11.9x faster than CoTTA on CIFAR100. Note that a WRN-28 is used on CIFAR10 and an AugMix ResNeXt is used on CIFAR100 in line with Wang et al. (2022).*
| | CIFAR10 | CIFAR100 |
|------------|---------------|-----------------|
| Baseline | 60.9s | 33.5s |
| Batch Norm | 80.6s | 36.1s |
| CoTTA | 1909.7s | 856.2s |
| RDumb | 133.5s | 71.9s | | Summary: The paper introduces a new benchmark dataset for Continual Test-Time Adaptation (CTTA) named Continually Changing Corruptions (CCC), and suggests a simple technique: repeatedly re-initialize the learned model weights during CTTA. CCC is composed of interpolated corruption data, and its data scale varies at each timestep. The difficulty of CCC can be controlled by changing the degree of interpolation between two different corruptions.
Strengths: The paper suggests a new large-scale continual test-time adaptation benchmark dataset, considering corruption interpolation, scale variation, and repetition.
Weaknesses: The suggested method is too naive and not attractive in view of knowledge transfer. Periodically resetting the trainable weights to the initial state indicates simply discarding obtained knowledge and adaptive representation regardless of their importance and relevancy. Even though this remedy outperforms baselines, there is a lack of quantitative and qualitative analysis of why it happens and of the reason RDumb behaves differently from baselines. Additionally, the decision of resetting iteration is heuristic.
The suggested idea is surprising but I strongly recommend more rigorous analyses and validation of the suggested method and CCC dataset since they are counterintuitive in some sense.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Why does RDumb consistently outperform a pre-trained model during TTA? Shouldn't its performance reach the pre-trained model's performance when re-setting the model weights to the initially pre-trained ones?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review! We agree with your assessment that our results are quite surprising. We hope to clarify your concerns and address the remaining weaknesses below:
> “Even though this remedy outperforms baselines”
This sounds like a possible misunderstanding. While RDumb indeed outperforms baselines (pre-training, batchnorm), it also outperforms (or matches) *all published state-of-the-art test-time adaptation methods*, including CoTTA and EATA, while being conceptually simpler and easier to analyze. Due to its technical simplicity, our paper argues that RDumb should serve as a baseline in follow-up work, and questions the way we assess TTA methods.
> The suggested method is too naive
Technical complexity by itself is not a criterion for acceptance at NeurIPS: the principle of Occam's razor remains pertinent. Our results suggest that an extremely simple method performs just as well or *better* than more complex ones.
Furthermore, beyond the RDumb method, there is a lot of technical depth in the construction of CCC: unlike previous datasets, which simply stack a few existing corruptions, we propose a well-designed setup that is orders of magnitude bigger than current evaluation setups in the number of noises (210 vs. 15) and severities (441 vs. 5).
It enables the benchmarking of continual methods for at least 10 times as long as current benchmarks, without repeating images (and longer, if required in the future). The complexity of the benchmark allows for evaluating methods on a controlled baseline accuracy, which wasn’t possible in previous work.
With CCC, we pose a new, challenging evaluation setting which highlights the shortcomings of current TTA methods. This evaluation setting is arguably the first true “continual test-time adaptation” setting, as previous benchmarks miss important hallmarks of continual adaptation (as we argue).
This is a key contribution of this work, and its depth is easy to miss, as a lot of the work of constructing the benchmark is hidden in the supplementary material.
We’re happy to discuss this point further in case we missed what you meant.
> … not attractive in view of knowledge transfer. Periodically resetting the trainable weights to the initial state indicates simply discarding obtained knowledge and adaptive representation regardless of their importance and relevance.
We respectfully disagree.
The fact that such a method outperforms others suggests that *none* of the previously used methods are effective at knowledge transfer, and that is a very valuable insight (as you point out) that was missed in previous work. The title of our paper exactly reflects this line of thought.
While other methods are allowed to benefit from many images seen, they are still not as effective as a method that resets itself every 1,000 steps.
This is a very surprising result given claims in previous papers [1,2]. Don’t you agree? Happy to discuss further.
> lack of quantitative and qualitative analysis of why it happens
This is a fair point. We plan to add the following analysis that will further strengthen the paper:
We constructed a simple 2d Gaussian binary classification example, a domain shift which slightly rotates the data and adds Gaussian noise, and a model which consists of a batch norm layer followed by a logistic regression. When running entropy minimization on the adaptation layer of the batch norm, simple cases emerge where the loss does or does not result in collapse, mainly depending on the relation of signal and noise variances and directions (Fig. 1-2, Rebuttal PDF).
The toy example furthermore predicts that the adapted parameters of a model should grow in the long run, and indeed we found exactly this effect when running ETA on a ResNet50 on CCC-Medium (Fig. 3, rebuttal PDF), suggesting that our minimal setup reproduces the relevant aspects of the large-scale case. However, the weight explosion becomes apparent only after the collapse happens, hence weight regularization is not enough to avoid the collapse (see Fig. 3b).
Thank you for raising this issue, we think that this analysis adds to the paper and we will include it in the camera ready version. We would be happy to hear your thoughts on this experiment.
> the decision of resetting iteration is heuristic.
This is not the case; please see Section 6. The reset interval is obtained by cross-validation on the holdout noises, which to our knowledge is best practice in the field. However, we are happy to re-evaluate with another selection strategy if you have suggestions (though we doubt it would change the core message of our paper).
As an aside, we would argue that a resetting interval is a more interpretable hyperparameter than a resetting fraction (in CoTTA) or regularizer weights (in EATA).
We’d be happy to discuss this point further with you.
> Q: Why does RDumb consistently outperform a pre-trained model during TTA? Shouldn't its performance reach the pre-trained model's performance when re-setting the model weights to the initially pre-trained ones?
You’re right that the pretrained model and RDumb have similar accuracies every time RDumb is reset. In Figure 2, each point is an average over 781 batches (approx. 50k images, i.e., the size of the ImageNet validation set); otherwise there would be too many points on the graph. RDumb learns quickly and therefore surpasses the pretrained model within those 781 batches, as can also be seen in Figure 5a. We will make this clearer in the next draft; thank you for bringing this to our attention.
We hope this addresses your concerns, and would be happy to discuss further.
[1] Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International conference on machine learning. PMLR, 2022. | Summary: The paper proposes a new benchmark (dubbed CCC) for test-time adaptation, which generalizes previous corrupted ImageNet benchmarks. Specifically, CCC draws a sequence of corruptions (e.g. Gaussian noise or motion blur), and gradually interpolates between two consecutive corruptions, creating a stream without hard boundaries, akin to many realistic settings. By considering long sequences of corruption pairs, CCC yields a stream longer than previous benchmarks, which sheds new light on how previous methods fare in such settings. Indeed, it is shown that given a long enough stream, methods collapse to a performance worse than a non-adapted model. The authors then propose a new baseline, Rdumb, which resets to the default parameters of the pretrained model at regular, fixed intervals. Over both CCC and previous benchmarks, it is shown that Rdumb, despite its simplicity, works well.
Strengths: 1. The proposed benchmark is well-designed. I liked the use of calibration with a pretrained model to generate different stream difficulties, and the fact that the overall stream generator is flexible and can accommodate smooth transitions.
2. The proposed baseline, Rdumb, is extremely simple and should be considered as a standard baseline for TTA.
3. Overall the paper is well written and easy to follow.
Weaknesses: 1. The idea of resetting the model to its original pretrained state in order to combat accumulating degradation in TTA is not new (as correctly stated in the paper). Indeed, it was shown in [45] that resetting after each task boundary helps the network. For a fair comparison with COTTA, how would such a method perform if the probability of resetting the weights was determined by a similar cross-validation technique as used in Section 6?
2. COTTA also has some experiments with gradually changing corruptions (see Table 3). It would be good for this to be mentioned in the paper.
minor comments :
1. Regarding Figure 2, I would use the color red for the most severe corruptions, and yellow for the least severe.
2. Optimal resetting interval: I think it should be RDumb, and not GDumb (lines 189-190)
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. My understanding of TTA's use-cases is mostly for online settings; the deployed model receives data for which to output predictions, and this data may encounter distribution shifts. In such a setting, how would one determine the frequency of when to reset the model ?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review. We’re happy you found our dataset interesting, and our paper easy to read. We want to address the weaknesses and questions below.
> For a fair comparison with COTTA, how would such method perform if the probability of resetting the weights was determined by a similar cross-validation technique as used in Section 6?
This is a good suggestion. To recap, CoTTA has a standard parameter of p=0.001 (on ImageNet), which we used in the experiments so far. We have now applied our full evaluation protocol to CoTTA, checking different values of p on the holdout set:
| resetting param (p) | 0.001 (default) | 0.005 | 0.01 | 0.05 | 0.1 | RDumb |
|--------------------|----------------|-------|------|------|-----|-------|
| CIN-C Holdout Avg. Acc | 35.73 | 36.77 | 38.04| 37.86| 37.41| 46.7 |
We find an optimal parameter of p=0.01 which we apply to our test set:
| | CCC-Easy* | CCC-Medium | CCC-Hard |
|------------------|-------------------|------------|----------|
| Pre-Trained | 34.04 | 17.3 | 1.5 |
| CoTTA (p=0.001) | 14.9 | 7.7 | 1.1 |
| CoTTA (p=0.01) | 27.8 | 15.6 | 1.1 |
| RDumb | 49.15 | 38.9 | 9.6 |
*\*Note: All runs except for one are finished on CCC-Easy, therefore we take out the unfinished run from the metrics of all models to have a fair comparison. We will of course update this table with the finished run as soon as possible.*
While the results improve slightly, note that CoTTA is still worse than a pretrained net on all benchmark datasets. We suggest including this analysis in the supplementary material and using the published hyperparameters (which are also present in the CoTTA codebase) in the main paper, as the message remains consistent.
Does this clarify your concern? Thanks again for this good suggestion.
> COTTA also has some experiments with gradually changing corruptions (see Table 3). it would be good for this to be mentioned in the paper.
Yes, you are right; thanks for pointing this out. We now call this out in the discussion of previous work.
For completeness, however, we would like to point out three important differences and contributions CCC makes over this setting:
First, the setting used in the CoTTA paper is much easier than CCC and fails to show a collapse to chance level. We ran an experiment on the concatenated CIFAR10-C dataset for more than 100 million images (vs. 150k images in the CoTTA paper) with no sign of collapse (Figure 4, Rebuttal PDF). We also ran a similar experiment for 25 million images on the gradual CIFAR10 task (Figure 5, Rebuttal PDF). While the fluctuations in performance increase, the setup still fails to produce the collapse to chance level we observe at ImageNet scale with CCC. This highlights the difference in difficulty compared to our "real world sized" CCC dataset.
Second, “gradual” here means a transition from clean to corrupted images. CCC corruptions have a much larger combinatorial space due to the simultaneous application of multiple corruptions.
Third, the C10-C tasks are less well controlled, which is similar to the issues we discuss for concatenated ImageNet-C. You can see this in Figure 6 in the rebuttal PDF, where performance fluctuates greatly between noises. This makes the dataset unsuitable for the controlled evaluation of models in a scientific setting.
The point you raise will help to position CCC even better within the existing literature, thanks for the suggestion.
> My understanding of TTA's use-cases is mostly for online settings; the deployed model receives data for which to output predictions, and this data may encounter distribution shifts. In such a setting, how would one determine the frequency of when to reset the model ?
Resetting is not conceptually different from any other hyperparameter in TTA models; your concern would hence also apply to the hyperparameters in CoTTA or EATA.
However, in contrast to previous work, the resetting parameter has the advantage of being easily interpretable. As it tends to 0, the model gets more conservative, and approaches the performance of batch norm adaptation. As it tends to infinity (no resetting), the model approaches the performance of a (collapsing) TTA algorithm.
We show (cf. Table 2) that the model performance on CCC is fairly robust to the exact setting. In real-world scenarios, the parameter could be easily set by cross-validation (like any other hyperparameter in TTA) on a suitable development set (which are now very easy to generate for different difficulty levels and noise settings thanks to CCC).
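To make the resetting scheme concrete, here is a minimal, framework-agnostic sketch (our illustration, not the paper's actual implementation): parameters are a plain dict, `adapt_step` stands in for any TTA update such as entropy minimization, and the pretrained weights are restored every `reset_interval` batches.

```python
import copy

def rdumb_loop(model_params, batches, adapt_step, reset_interval=1000):
    """Adapt online, but restore the pretrained weights at fixed intervals."""
    pretrained = copy.deepcopy(model_params)    # frozen copy of the pretrained state
    predictions = []
    for t, batch in enumerate(batches, start=1):
        predictions.append(dict(model_params))  # stand-in for "predict with current weights"
        adapt_step(model_params, batch)         # any TTA update (e.g. entropy minimization)
        if t % reset_interval == 0:             # periodic reset to the pretrained state
            model_params.clear()
            model_params.update(copy.deepcopy(pretrained))
    return predictions
```

As `reset_interval` shrinks toward 1, the loop approaches a non-adapting model; with no resetting it reduces to the underlying (potentially collapsing) TTA method, matching the interpretation of the two limits described above.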
We hope this addresses your concern, and we are happy to answer any questions that might still remain.
---
Rebuttal Comment 1.1:
Title: Answer to Rebuttal
Comment: Thank you for the rebuttal.
After reading the other reviewers' comments and your rebuttal,
- I agree that lacking technical complexity is not a negative attribute in itself; simple methods that work well should be prioritized.
- I agree that an analysis as to *why* existing methods suffer from such performance degradation would be a good addition to the paper. I thank the authors for the additional experiments, and for making some progress towards answering this.
While my expertise is not directly in continual test time adaptation, I do believe that well executed papers which revisit the current state of a field of research (and its evaluation protocols), as well as propose simple methods which perform well, can be of value to the research community. I believe that my previous score and confidence reflect my overall assessment of the paper, and will therefore leave as-is.
Thank you.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Dear Reviewer,
Thank you for your responses. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. We are glad to hear that reviewers found our paper easy to follow, our proposed dataset interesting, and our method to be simple and effective.
We commented on all reported weaknesses and addressed the reviewers' questions in the individual responses below.
Concerning new results to be added to the paper, we report new results for [additional hyperparameter tuning on CoTTA](https://openreview.net/forum?id=VfP6VTVsHc&noteId=zMGLcCwMDu), an [updated Table 1](https://openreview.net/forum?id=VfP6VTVsHc&noteId=rZV3x9TDSH) with error bars, and extensive new analysis of the collapsing behavior, for which we attach additional figures in the PDF here.
We hope that our replies clarify all reviewer concerns, and are happy to engage in further discussion and experiments.
Pdf: /pdf/65397934f92f8c62a8c679e6c5026601b0a05c01.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Conformal Prediction for Uncertainty-Aware Planning with Diffusion Dynamics Model | Accept (poster) | Summary: The paper combines diffusion modeling and conformal prediction to predict state-action trajectories with uncertainty quantification. The proposed approach then performs uncertainty-aware model-based planning with strong results on several established offline RL benchmarks.
Strengths: The approach has fairly strong results on several established offline RL benchmarks and outperforms several state-of-the-art baselines. The results show that uncertainty-aware planning does perform better than simply rolling out the underlying diffusion model alone.
The introduction/abstract are well-written and motivate the approach well. Most of the writing is quite good (some feedback on parts I think are unclear below). The figures are also well done.
To my knowledge, the proposed approach is novel and technically sound. I think working to incorporate more ideas from uncertainty quantification into offline RL methods is worth investigating deeper as well.
Weaknesses: There are no comparisons to other uncertainty quantification methods. Even within the offline RL literature, several existing methods [1][2][3] rely on uncertainty estimates and could be compared to. I think at the very least a comparison to popular uncertainty estimation methods in the model-based RL literature (e.g. bootstrapping disagreement with ensembles, predicting variances) could be helpful.
I struggle with Sections 3.3 and 3.4 a bit and think they could use some improvements clarity-wise.
While the approach is novel, it is a fairly straightforward application of conformal prediction to offline model-based planning. While I don't think the result is groundbreaking, it is certainly interesting enough to merit acceptance in my view.
[1] Zhan, Xianyuan, Xiangyu Zhu, and Haoran Xu. "Model-based offline planning with trajectory pruning." arXiv preprint arXiv:2105.07351 (2021).
[2] Yu, Tianhe, et al. "Mopo: Model-based offline policy optimization." Advances in Neural Information Processing Systems 33 (2020): 14129-14142.
[3] Kidambi, Rahul, et al. "Morel: Model-based offline reinforcement learning." Advances in neural information processing systems 33 (2020): 21810-21823.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am confused about the role that the expert trajectories play in this approach. The approach is evaluated on datasets without labeled expert trajectories, i.e. the datasets include suboptimal demonstration data. But the Algorithm 1 block references expert trajectories, and I'm not sure why? It seems to me that the approach doesn't really assume optimal demonstrations.
Related to the above, I'm not sure what Algorithm 1 is actually showing. The "result" line indicates that the output is a prediction $\tau$ with associated uncertainty $C$, but L1-9 describe the complete training loop and there aren't really details on inference in this figure.
I think a brief definition of the $Quantile$ operator would be useful when it's first introduced in Equation 9.
I feel like Equation 5 and the accompanying description of transformers is not necessary to include in the main body, and working to include more of the actual experimental results (currently in the appendix) would be a better use of space in my opinion.
===
I have read the rebuttal and still lean towards acceptance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: Comparisons to other uncertainty quantification methods Zhan et al [1], Yu et al. [2], Kidambi et al [3].
A1: Thanks for pointing this out. We will make the writing of Sections 3.3 and 3.4 more concise with clearer expression.
Zhan et al. [1], Yu et al. [2], and Kidambi et al. [3] have made significant contributions to model-based offline RL. These works are noteworthy because even without utilizing techniques like conformal prediction, they are able to measure uncertainty effectively.
**We compare our method with Zhan et al. [1], Yu et al. [2], and Kidambi et al. [3]. The results are shown below as the attached Table 3 (please refer to the attached .pdf in the Summary Rebuttal)**. The results show that our method achieves smaller uncertainties. We can see that the performances (rewards) of these baselines are lower because they do not use a diffusion model as the dynamics model. Although MOPO (Yu et al. [2]) and MOReL (Kidambi et al. [3]) have an explicit parametric representation of uncertainty, their uncertainty (std and interval) is still large. MOPO and MOReL also share a limitation: they assume that the data conforms to a Gaussian distribution.
>Q2: The role that the expert trajectories play in this approach
A2: We do not assume optimal demonstrations and the datasets include suboptimal demonstration data, which is the same setting as offline reinforcement learning. Different from imitation learning, an offline reinforcement learning algorithm applied to a dataset collected by a suboptimal non-learning-based algorithm can still result in a reasonable policy (sometimes even outperforms the behavior agent used to collect the data).
>Q3: Clarification of Algorithm 1
A3: Sorry for the confusion. We get predicted trajectories in L 3 (training) and L 11 (calibration). We get associated uncertainty in L 5 (training) and L 13 (calibration). During inference, we also get trajectories predicted by our planner and compute the associated uncertainty with Eq. 15.
>Q4: A brief definition of the Quantile operator
A4: Thanks for the suggestions.
We follow the same mathematical definition of the traditional Quantile function:
The quantile function, also known as the inverse cumulative distribution function, is a mathematical function that maps a probability to the corresponding value in a random variable's distribution. Formally, for a random variable $X$ with cumulative distribution function $F(x)$, the quantile function $Q(p)$ is defined as:
$$
Q(p) = \inf\lbrace x : F(x) \ge p \rbrace,
$$
where $p$ is a probability between 0 and 1, and "inf" denotes the infimum, or greatest lower bound, of the set of values $\lbrace x: F(x) \ge p \rbrace$.
In other words, the quantile function gives the value $x$ such that the probability of observing a value less than or equal to $x$ in the distribution of $X$ is at least $p$.
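As an illustration, the empirical version of this quantile over a finite sample, together with the standard split-conformal finite-sample adjustment, can be sketched as follows (our sketch; the paper's soft, differentiable variant is more involved):

```python
import math

def quantile(scores, p):
    """Empirical Q(p) = inf{x : F_hat(x) >= p} over a finite sample."""
    xs = sorted(scores)
    k = math.ceil(p * len(xs))   # smallest k such that k/n >= p
    return xs[max(k, 1) - 1]

def conformal_quantile(cal_scores, alpha):
    """Split-conformal threshold: the ceil((n+1)(1-alpha))/n quantile of the
    calibration nonconformity scores (finite-sample correction)."""
    n = len(cal_scores)
    level = min(1.0, math.ceil((n + 1) * (1 - alpha)) / n)
    return quantile(cal_scores, level)
```

For example, with 100 calibration scores and alpha = 0.1, the threshold is the 91st smallest score, which guarantees at least 90% marginal coverage on exchangeable test points.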
>Q5: Equation 5 and the accompanying description of transformers
A5: Thanks for the suggestions. We will update this part in the new version. We will include more of the actual experimental results instead of Equation 5 and the accompanying description of transformers.
---
Rebuttal Comment 1.1:
Comment: I appreciate the additional experimental results and clarifications from the authors. I am still mostly in favor of acceptance. My main concern with the submission is still essentially the same (e.g. mediocre novelty), but I still think this submission has a decent technical contribution.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply! We appreciate your recognition of our decent technical contribution. Our focus lies in finding better solutions for reducing uncertainty in sequential decision-making, which is particularly critical for safety-critical applications like robotics.
In our view, **scientific novelty** does not refer to complexity, technical difficulty, or surprise. We value simplicity over unnecessary complexity, as long as it brings usefulness and value to our community. | Summary: In this paper the authors study how to measure uncertainty in a planner with a generative diffusion model, and how to reduce uncertainty during prediction. Results are presented on several MDPs.
Strengths: + coupling planners to diffusion models is an interesting idea worth exploring.
+ measuring and reducing uncertainty is a good goal.
Weaknesses: - I found this paper really hard to read and follow. Although the paper appears to address planning, the text refers to expert trajectories the "planner" is supposed to reconstruct. It is not clear if the task is planning or imitation learning. Various aspects are not well defined, for example, how is a quantile directly a loss function? How do you optimize a quantile? At the same time there is a lot of repetition of reconstruction nonconformity and reward conformity. Multiple experimental tasks, such as "test time flexibility", are introduced without adequate discussion of why this approach should be applied to them. The results are in tables that are almost unreadable and there is not enough discussion of the meaning of the results.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: n/a
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: not an adequate discussion
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: It is not clear if the task is planning or imitation learning.
Thanks for pointing this out. We aim to address planning with uncertainty-aware diffusion models. Our task involves generating a sequence of actions to achieve a desired goal or optimize an objective, i.e., decision-making and determining the optimal course of action based on a given model or environment. Although we use expert data, our setting is closer to offline reinforcement learning (RL) than to imitation learning. Planning algorithms often utilize search algorithms, optimization techniques, or reinforcement learning methods to find the best action sequence. For more details about this kind of planner, please refer to the prior work Diffuser (Janner et al., "Planning with Diffusion for Flexible Behavior Synthesis").
>Q2: How is a quantile directly a loss function? How do you optimize a quantile?
We aim to minimize the size of the quantile, concentrating the probability mass into a smaller area and thereby reducing uncertainty. After adopting differentiable ranking and sorting techniques (Cuturi et al., "Differentiable Ranking and Sorting Using Optimal Transport", NeurIPS 2019), the soft quantile function is differentiable, so we can use gradient descent to update the dynamics model.
Specifically, let $X$ be a random variable with a smooth density function $f$, and let $w = w(p)$ be the $p$-th quantile. Then the first derivative of the quantile function is
$$
\frac{dw}{dp}=\frac{1}{f(w(p))}
$$
provided that the denominator is not zero. The second derivative is
$$
\frac{d^2w}{dp^2}=-\frac{f'(w)}{f(w)}(\frac{dw}{dp})^2=\frac{-f'(w)}{f(w)^3}
$$
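These derivatives can be sanity-checked numerically. As our own check (not the paper's code), for a standard normal distribution we can compare finite differences of the quantile function against the analytic expressions, using Python's stdlib `statistics.NormalDist` (note that for the standard normal, $f'(w) = -w f(w)$, so the second derivative simplifies to $w / f(w)^2$):

```python
from statistics import NormalDist

nd = NormalDist()            # standard normal; w(p) = nd.inv_cdf(p), f = nd.pdf
p, h = 0.7, 1e-4
w = nd.inv_cdf(p)

# First derivative: central finite difference vs. analytic dw/dp = 1 / f(w(p))
fd1 = (nd.inv_cdf(p + h) - nd.inv_cdf(p - h)) / (2 * h)
an1 = 1.0 / nd.pdf(w)

# Second derivative: finite difference vs. -f'(w)/f(w)^3 = w / f(w)^2
fd2 = (nd.inv_cdf(p + h) - 2 * w + nd.inv_cdf(p - h)) / h**2
an2 = w / nd.pdf(w) ** 2
```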
>Q3: Repetition of reconstruction nonconformity and reward conformity.
A3: Thanks for pointing this out. We will make the writing clearer and more concise. We'll also organize the paper better to make it easier to read and understand.
>Q4: Discussion of why this approach should be applied to experimental tasks such as "test time flexibility".
A4: Thanks for pointing this out. We will update the writing to add the following explanation. In order to evaluate the ability to generalize to new test-time goals, we run a "test time flexibility" evaluation. "Test time flexibility" is useful for evaluating the capability of the planner, as the Diffuser paper (Janner et al.) shows.
Long-horizon multi-task planning is an important capability of the planner that we need to evaluate experimentally.
Offline reinforcement learning allows us to evaluate the capacity of our method to recover an effective single-task controller from heterogeneous data of varying quality, which is useful for planning.
Hence, we demonstrate that our framework has a number of useful properties and is particularly effective in offline control settings that require long-horizon reasoning and test-time flexibility.
>Q5: Interpretation of the results in tables.
The "Rewards" serve as metrics for evaluating the model. The "mean" and "standard deviation (std)" are statistical measures of the rewards. A higher mean indicates better performance, while a smaller std reflects lower uncertainty. The "Uncertainty" is represented by the prediction interval in conformal prediction: the interval is the upper bound of the uncertainty minus the lower bound. A smaller interval indicates reduced uncertainty. Additionally, a higher "overall interval" is desired, as it signifies a higher reward prediction.
The concept of "Coverage" ("Validity"), discussed in Section 4.6 of the paper, evaluates the validity of a prediction sample in the test dataset ($D_{test}$) by checking whether it falls within the uncertainty interval computed using the calibration dataset ($D_{cal}$). A higher coverage is desirable, indicating better performance. The "Credibility", defined as the p-value of a prediction set, is obtained by conducting a t-test between the calibration trajectory and the predicted trajectory. The following equation calculates the t-value,
$$
t = \frac{\mu_1-\mu_2}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}
$$
where $\mu_1$ is the mean of calibration trajectory samples, $\mu_2$ is the mean predicted trajectory samples, $\sigma^2_1$ is the variance of calibration trajectory samples, $\sigma^2_2$ is the variance of predicted trajectory samples, $n_1$ is the sample size of calibration trajectory samples, and $n_2$ is the sample size of predicted trajectory samples. Higher credibility values are preferred, indicating better performance.
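For illustration, the coverage check and the t-statistic above could be computed as follows (our sketch of the metrics as described in this reply, not the authors' code):

```python
import math

def coverage(test_scores, lo, hi):
    """Fraction of test nonconformity scores that land inside the
    calibration-derived interval [lo, hi]."""
    return sum(lo <= s <= hi for s in test_scores) / len(test_scores)

def t_value(cal, pred):
    """Welch-style t-statistic between calibration and predicted samples,
    following the formula above (sample variances, unequal sizes)."""
    n1, n2 = len(cal), len(pred)
    m1, m2 = sum(cal) / n1, sum(pred) / n2
    v1 = sum((x - m1) ** 2 for x in cal) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in pred) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

A t-value near zero (hence a large p-value) indicates the predicted trajectory is statistically indistinguishable from the calibration trajectories, which is the sense in which higher credibility is preferred.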
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Some of my concerns are addressed and I will modify my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply! We appreciate your willingness to modify the paper score. Let me know if you have any other concerns. | Summary: This work addresses the challenge of uncertainty estimation for planning. The authors propose the use of diffusion models for learning dynamics, which have demonstrated effectiveness in overcoming challenges such as multi-modal action distributions. To quantify the uncertainty of these dynamics models, they employ Conformal Prediction (CP), a technique for constructing prediction sets with valid coverage. They introduce PlanCP, a framework that connects conformal prediction to planning and optimizing the model explicity to minimize uncertainty. The effectiveness of uncertainty sets is evaluated through coverage and optimization performance, and the algorithm is tested in D4RL benchmarks and block stacking problem, showcasing reduced uncertainty and outperforming prior algorithms. The authors highlight the flexibility of their method, as it can be combined with different model-based planning approaches and provides uncertainty estimates of the dynamics model.
Strengths: - The paper provides a clear problem statement that is well motivated and clearly outlines the proposed approach with intuitive evaluation metrics. Overall I found the paper well written and easy to follow.
- The proposed methodology presents a robust framework for quantifying and mitigating uncertainties within diffusion-based dynamics models, which have garnered significant attention in recent times.
- The quantitative results on D4RL, although not SOTA (in terms of rewards), consistently outperform the baseline without uncertainty quantification. It is important to note that the limited performance gains could be attributed to the constraints of the D4RL dataset itself, which may be reaching its performance ceiling. Nevertheless, the outlined approach consistently outperforms the baseline methods across the identified metrics.
Weaknesses: - The computational complexity of the proposed framework, especially during training and calibration, could hinder its scalability and applicability in real-time or resource-constrained scenarios. It would be beneficial if the authors provided more details regarding the computational aspects of training and inference.
- The handling of partial observability in the framework is not adequately explained. Naive extensions to partially observed Markov Decision Processes (POMDPs) may lead to unreliable estimations in diffusion models. Further elaboration on how the framework addresses partial observability would be valuable.
- The paper lacks clarity on how uncertainty estimates will manifest in highly stochastic domains. It would be insightful to include experiments on a benchmark dataset with increasing levels of stochasticity to demonstrate the framework's performance under such conditions.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I was wondering if the authors had some thoughts on what adapting CP framework to other conditioning approaches will entail, such as temporal condition guidance mechanisms.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: More details regarding the computational aspects of training and inference.
A1: Thanks for the suggestion! Yes, the framework with conformal prediction is slightly slower than the framework without it. Regarding the computational aspects of training and inference, as the supplementary material states, we train our model for 200 epochs with a batch size of 256, utilizing a single GTX 1080 Ti GPU for computation. We measured that adding conformal prediction increases training time by 18% over the framework without it, due to the additional cost of computing the reconstruction nonconformity. At inference time, we use the same amount of time as the other frameworks and do not add extra overhead.
>Q2: Further elaboration on how the framework addresses partial observability.
Thanks for pointing this out. Addressing partial observability in the CP framework applied to diffusion models requires careful modeling of observability, employing appropriate state estimation techniques, and potentially incorporating informative features. By considering these approaches, the framework can better handle partial observability and provide more reliable estimations and uncertainty assessments.
We model partial observability by incorporating relevant information into the model. This involves integrating additional features and conditions that capture observable aspects of the diffusion process into the diffusion model. By incorporating these observations, the model can better account for the uncertainties arising from partial observability.
By considering the uncertainty associated with partial observability, the resulting predictions would be more reliable and better reflect the inherent limitations of the diffusion models in such scenarios. This would provide decision-makers with a better understanding of the confidence and uncertainty associated with the predictions, particularly in situations where partial observability can significantly impact the accuracy of predictions.
>Q3: How uncertainty estimates will manifest in highly stochastic domains?
Good suggestions. Actually, the Maze2D environment is a stochastic environment. The data is generated by selecting goal locations at random. We have already shown the results for Maze2D in **Tables 1 and 2 in the main text**.
Now, we modify the D4RL environments to introduce random noise $\epsilon_1 \sim N(0, 1)$ and $\epsilon_2 \sim N(0, 0.5)$ added to the observations. **The framework's performance is shown in the attached Table 2 (please refer to the attached .pdf in the Summary Rebuttal).** We can see that the stochastic D4RL is more challenging, but PlanCP is still effective and can still quantify the uncertainty.
>Q4:Some thoughts on adapting the CP framework to other conditioning approaches, such as temporal condition guidance mechanisms.
A4: Thanks for the suggestions. Adapting the conformal prediction (CP) framework to other conditioning approaches, such as temporal condition guidance mechanisms, would require careful consideration of the specific requirements and characteristics of the conditioning approach. To adapt the CP framework to temporal condition guidance mechanisms, one approach could be to incorporate the temporal information into the conformal score calculation. Another approach could be to use a hybrid approach that combines CP with other methods that are specifically designed for temporal condition guidance, such as Kalman filters or particle filters. These methods are commonly used in control and tracking applications, where the goal is to estimate the state of a system based on noisy measurements and temporal dependencies.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for additional experimental results and clarifications. Most of my concerns were addressed in the rebuttal. I am still in favor of acceptance as the work addresses an important aspect of sequential decision making, i.e. uncertainty while in contrast to purely optimizing for benchmark performance.
---
Reply to Comment 1.1.1:
Comment: Thanks for the encouragement! We wholeheartedly agree with the significance of conducting uncertainty studies in contrast to solely focusing on optimizing benchmark performance. This is particularly crucial when it comes to safety-critical applications such as robotics, autonomous driving, and spacecraft. | Summary: The work proposes uncertainty quantification for learned dynamics model and imitation learning. The authors incorporates an uncertainty statistic as part of the loss to train their dynamics model that uses a diffusion model architecture. They show that doing so brings performance improvement empirically on common RL simulation environments. The method also performs conformal prediction on the learned reward forecast.
Strengths: Using UQ methods and optimizing to reduce uncertainty in dynamics modeling is a great idea. The authors make a first step towards incorporating conformal statistics directly into the loss function, and are able to show a (slight) improvement in performance.
Weaknesses: There are many areas that the paper needs improvement.
1. English. There are grammatical errors, awkward sentence structures, and confusing statements throughout the paper. Although you don't need perfect English for a CS paper, improving on writing will help you get your message across. Using the abstract as an example, I would edit:
- line 3-5: "to overcome the xx, yy, and zz challenges" -> to overcome challenges such as xx, yy, and zz.
- line 14-15: "Furthermore, during the test, PlanCP can also measure the model uncertainty" -> Unclear what you are trying to say. Do you mean that model uncertainty is also used for planning during test time?
- line 20-21: "Our method is highly flexible and can combine .... and produces ..." -> Our model can be combined with ... to produce .... And what do you mean by flexible? is it with regard to tasks, or underlying models?
- Figure 1: "Dynamic model" -> dynamics model
2. There have been various recent works that use CP for planning (see list below). You can argue that your setting is different from theirs, but it's important to cite them to provide context.
- Lindemann, Lars, et al. "Safe planning in dynamic environments using conformal prediction." IEEE Robotics and Automation Letters (2023).
- Strawn, Kegan J., Nora Ayanian, and Lars Lindemann. "Conformal Predictive Safety Filter for RL Controllers in Dynamic Environments." arXiv preprint arXiv:2306.02551 (2023).
- Tonkens, Sander, et al. "Scalable Safe Long-Horizon Planning in Dynamic Environments Leveraging Conformal Prediction and Temporal Correlations." ICRA (2023)
3. More explanation is needed for the experiments section.
- The term "Validity %" should be changed to "Coverage %". Validity is a binary metric; a method either satisfies the validity condition, or not. [1]
- On the topic of validity - you chose $\alpha = 0.1$ (line 261), does that mean that your coverage should be at least 90%? (or should it be 80% since you are choosing $[Q^{1-\alpha}, Q^{\alpha}]$ in Eq. 15?) Does that mean for all of your experiments, the CP uncertainty interval is _invalid_? That's problematic and means that your data violates the assumptions you are making. You can't say your method captures uncertainty in this case.
- You need to say in your table captions that bold font means better performance, or the meaning is unclear.
- How exactly is credibility calculated? it's not explained in either the main text or the appendix. What is the theoretical basis for using the $p$-value as metric?
4. Lastly, the term "learning to reduce uncertainty" is unsound. The "reconstruction nonconformity scores" are calculated on the training set, which violates the algorithm of conformal prediction [1]. The $\mathcal{L}(\theta)_{uncertainty}$ in equation 13 is hence not a measure of uncertainty, but just a training heuristic. The only valid CP component is the interval for rewards, which is not used anywhere in training.
5. Contribution is not significant. From my understanding, the authors used a common CP technique on an existing diffusion dynamics model. The uncertainty results are not new (in CP literature), and improvements in reward are rather insignificant.
[1] Angelopoulos, Anastasios N., and Stephen Bates. "A gentle introduction to conformal prediction and distribution-free uncertainty quantification." arXiv preprint arXiv:2107.07511 (2021).
I think the direction the authors are going in is interesting and insightful; with some work this paper can be a great contribution to the community. However, in its current state, the paper is not yet fit for publication at NeurIPS.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses 3, 4, and 5.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See weaknesses. I do not think the discussion & limitation section fully address the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: English
A1: Thanks for pointing this out. We have fixed the grammatical errors, awkward sentence structures, and confusing statements throughout the paper, and will include these corrections in the updated version.
**Is model uncertainty used for planning during test time?**
No. Currently, PlanCP can measure model uncertainty during testing. We have not used model uncertainty for planning during testing.
>Q2: Cite recent works to provide context.
A2: Thanks for mentioning these papers! In our NeurIPS submission, we already cited and discussed Lindemann, et al. RA-L (2023) in the related work section. Strawn et al. and Tonkens et al. were posted online after our NeurIPS submission, and are not peer-reviewed. We will cite and discuss these papers in the related work section in the updated version. Lindemann, et al. RA-L (2023), Strawn et al., and Tonkens et al. have made significant contributions to safe planning. These works are noteworthy because they effectively measure uncertainty with conformal prediction, even if it cannot be reduced by optimization.
>Q3: More explanation for the experiments.
A3: Thanks for pointing these out. We will fix them in the updated version.
3.1. We will change the term "Validity \%" to "Coverage \%".
3.2. We choose $\alpha = 0.1$. Theoretically, the coverage will be exactly 80\% only in expectation. In CP, the empirical coverage follows a beta distribution with mean at the desired coverage (see, e.g., Angelopoulos and Bates, p. 14). Empirically, our coverages are closely distributed around 80\%, consistent with a beta distribution, which satisfies the assumptions.
3.3. We will add "Bold font means better performance." to the table captions.
3.4. As we explained in our paper *Section 4.6 Credibility*: "Credibility is defined as the p-value in a prediction set". Previously, we believed this to be a trivial mathematical definition and, due to page limitations, did not explain it in detail. Mathematically, we obtain credibility by performing a t-test between the calibration trajectory and the predicted trajectory:
$$
t = \frac{\mu_1-\mu_2}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}
$$
where $\mu_1$ is the mean of the calibration trajectory samples, $\mu_2$ is the mean of the predicted trajectory samples, $\sigma^2_1$ is the variance of the calibration trajectory samples, $\sigma^2_2$ is the variance of the predicted trajectory samples, $n_1$ is the number of calibration trajectory samples, and $n_2$ is the number of predicted trajectory samples.
**What is the theoretical basis for using the p-value as a metric?** The use of the p-value as a metric is based on the principles of probability theory and statistical inference. The p-value is the probability of obtaining test results at least as extreme as the result actually observed. We follow Giovannotti et al., Transformer-based conformal predictors for paraphrase detection, PMLR 2021, in using the $p$-value as a metric.
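For concreteness, the t-statistic defined above can be computed as follows. This is only an illustrative sketch with synthetic reward samples (the variable names `calib` and `pred`, sample sizes, and distributions are our own stand-ins, not the authors' data or code); SciPy's Welch test recovers the same statistic together with the p-value used as the credibility score.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical stand-ins for the calibration-trajectory and
# predicted-trajectory reward samples described above
calib = rng.normal(loc=0.0, scale=1.0, size=50)   # mu_1, sigma_1^2, n_1 = 50
pred = rng.normal(loc=0.3, scale=1.2, size=60)    # mu_2, sigma_2^2, n_2 = 60

# Welch's t-statistic, written out term-by-term as in the formula above
t_manual = (calib.mean() - pred.mean()) / np.sqrt(
    calib.var(ddof=1) / len(calib) + pred.var(ddof=1) / len(pred)
)

# SciPy's Welch test yields the same statistic plus the p-value
# that the rebuttal uses as the "credibility" score
t_scipy, p_value = stats.ttest_ind(calib, pred, equal_var=False)
```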
>Q4: Explanation of the term "learning to reduce uncertainty"
A4: Thanks for the suggestion. We understand that using the training data to calibrate invalidates the guarantees of conformal prediction. That's why we split the data into a separate calibration set and test set. When we mention "learning to reduce uncertainty," we are not referring to reducing $L_{uncertainty}$ on the training set. Instead, it means that our model exhibits lower reward uncertainty on the test set. We observed that our algorithm produces smaller reward intervals, which are calculated without relying on the training set. Our method carefully follows the underlying requirements of CP (exchangeability of calibration data, etc.), and inherits the guarantees of CP.
>Q5: Clarification of contributions
A5: Thanks for the suggestion.
5.1. The novelty of this work lies in using conformal prediction to (i) measure the uncertainty for the conditional diffusion planner using calibration data and (ii) reduce the uncertainty of the planner by including a quantile loss term during training. Previous CP literature can only measure the uncertainty but cannot reduce it. For experimental results, we emphasize that we accomplished reducing uncertainty without sacrificing performance.
5.2. The application of CP to the diffusion dynamics model is novel and provides insights into the uncertainty associated with the predictions of the model. This can be useful for decision-making in scenarios where accurate predictions are critical. There is no prior work that attempts to introduce conformal prediction on diffusion models.
5.3. The experimental results also show that we can reduce the uncertainty of the model without compromising performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response. I have increased my score to reflect the authors' edits and additions. If this paper is accepted, please edit the introduction to include the clarification of contributions from this rebuttal.
I appreciate the explanation for the p-value - this is very cool. Though I feel because it is a new(ish) concept for the CP community, it might be beneficial to include some of this explanation in the paper as well.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply! We appreciate your willingness to update the paper score. If this paper is accepted, we will update the introduction to include the clarification of contributions from this rebuttal.
Rebuttal: We are grateful to the reviewers for their valuable feedback on our work. Thank you for the many positive comments: (i) acknowledging the novel use of conformal prediction in diffusion models (all reviewers), (ii) noting the reasonably well-written and clear exposition (gfdz, wAeC, BQNc), (iii) recognizing the importance of a robust framework for quantifying and reducing uncertainties in diffusion dynamics models (all reviewers), and (iv) noting that our approach consistently outperforms the baseline methods across the metrics identified (zTTG, wAeC, BQNc). We are happy to hear the reviewers find our direction interesting and potentially impactful. As noted by Reviewer zTTG, we hope, with some work, this paper can be a great contribution to the community.
Below are some important questions:
>Q1: Clarification of Contributions
A1:
1. Instead of developing a method for sequential decision-making that achieves the highest performance (reward) on D4RL, we are **the first** to augment the `diffusion` RL algorithm with uncertainty awareness.
2. The progress in the D4RL benchmark is rapidly changing, but at the time of our paper submission, Diffuser (Janner et al.) was the state-of-the-art open-source Diffusion RL algorithm. The original Diffuser algorithm was evaluated only 100 times, which is insufficient to capture uncertainty. To address this limitation, we re-run 1000 evaluations of the Diffuser algorithm on our setup and present the performance results.
3. We focus on finding better solutions for reducing the uncertainty of sequential decision-making than before.
4. The novelty of this work lies in using conformal prediction to (i) accurately measure the uncertainty for conditional diffusion planners, and **(ii) reduce the uncertainty of diffusion planners by including a quantile loss term in the training**. Previous CP literature can only measure the uncertainty but cannot reduce it. For experimental results, we emphasize that we accomplished **reducing uncertainty without sacrificing performance**.
5. The application of CP to the diffusion dynamics model is novel and gives an accurate quantification of uncertainty for the predictions of the model with statistical guarantees. This can be useful for decision-making in scenarios where accurate predictions are critical. **There is no prior work that attempts to introduce conformal prediction on diffusion models**.
6. The experimental results also show that we can **reduce the uncertainty** of the model without compromising performance.
>Q2: More explanation for the experimental results
A2: *We have added 3 tables (attached to the Summary Rebuttal). Please refer to the attached .pdf file for more details.*
Table 1: Comparison of our approach with MOPO. PlanCP has lower uncertainty than previous methods for most tasks.
Table 2: Uncertainty of Stochastic D4RL. We modify the D4RL environment by introducing random noise $\epsilon_1 \sim N(0, 1)$ and $\epsilon_2 \sim N(0, 0.5)$ added to the observations. The framework's performance is shown in the attached Table 2. We can see that the stochastic D4RL is more challenging, but PlanCP is still effective and can still quantify the uncertainty.
Table 3: Comparison of our approach with other model-based RL methods. PlanCP has lower uncertainty than previous methods for most tasks.
>Q3: The definition of the Quantile operator
A3: We follow the same mathematical definition of the traditional Quantile function:
The quantile function is a mathematical function that maps a probability to the corresponding value in a random variable's distribution. Formally, for a random variable $X$ with cumulative distribution function $F(x)$, the quantile function $Q(p)$ is defined as:
$$
Q(p) = \inf\lbrace x : F(x) \ge p \rbrace,
$$
where $p$ is a probability between 0 and 1, and "inf" denotes the infimum, or greatest lower bound, of the set of values $\lbrace x: F(x) \ge p \rbrace$.
In other words, the quantile function gives the value $x$ such that the probability of observing a value less than or equal to $x$ in the distribution of $X$ is at least $p$.
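Applied to the empirical CDF of a finite sample, the infimum definition above reduces to picking the smallest order statistic whose empirical-CDF value reaches $p$. A minimal sketch (our own illustration, not part of the paper):

```python
import numpy as np

def quantile_inf(samples, p):
    """Empirical Q(p) = inf{x : F_hat(x) >= p} for the empirical CDF F_hat."""
    xs = np.sort(np.asarray(samples, dtype=float))
    n = len(xs)
    # F_hat(xs[k-1]) = k/n, so the infimum of {x : F_hat(x) >= p}
    # is the k-th order statistic with k = ceil(p * n)
    k = max(int(np.ceil(p * n)), 1)
    return xs[k - 1]

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
# sorted: [1, 1, 2, 3, 4, 5, 6, 9]; F_hat(2) = 0.375 < 0.5 while
# F_hat(3) = 0.5 >= 0.5, so Q(0.5) = 3.0
median = quantile_inf(data, 0.5)
```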
>Q4: How to update the dynamics model and differentiate through the Quantile function?
A4: After adapting the differentiable ranking and sorting techniques (Cuturi et al. Differentiable ranking and sorting using optimal transport, NeurIPS 2019), the soft quantile function is differentiable. We can use gradient descent to update the dynamics model.
>Q5: Discussion of why this approach should be applied to experimental tasks such as "test time flexibility".
A5: Thanks for pointing this out. We will update the writing to add the following explanation. In order to evaluate the ability to generalize to new test-time goals, we run a "test time flexibility" evaluation. "Test time flexibility" is useful for evaluating the capability of the planner, as the Diffuser paper (Janner et al.) shows.
Long-horizon multi-task planning is an important capability of the planner that we need to evaluate experimentally.
Offline Reinforcement Learning allows us to evaluate the capacity of our method to recover an effective single-task controller from heterogeneous data of varying quality, which is useful for planning.
Hence, we demonstrate our framework has a number of useful properties and is particularly effective in offline control settings that require long-horizon reasoning and test-time flexibility.
Pdf: /pdf/ac746ac38fa18f6434806a61b29e10ac521bc55d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper describes a method for learning a dynamics model that uses conformal prediction for explicit representation of the uncertainty. The dynamics model is used for sequential decision making such as planning in a maze or learning control in one of the D4RL problems.
Strengths: * The primary strength of this paper is the novel use of conformal prediction. The authors correctly describe several possible benefits of conformal prediction as a representation of uncertainty, including not needing a commitment to a particular representation or parameterisation of the uncertainty in the dynamics.
* The authors give a complete algorithm --- it is (mostly) clear how all the pieces work.
* The paper is reasonably well written and clear.
Weaknesses: * The primary weakness of the paper is that it is not clear what exactly the authors have accomplished. Developing a new method for sequential decision making should either show that we can solve problems that we could not solve before (or find better solutions than was possible), or alternatively, provide some understanding and analysis of the value of a dynamics model that encodes uncertainty using conformal prediction. Unfortunately, this paper does not show that it can solve problems that could not be solved before, or outperform the state of the art. The only comparison is to Diffusion RL, which is not the best performing RL algorithm according to the original Diffusion RL paper (Janner et al) or the current leader in the [D4RL benchmarks](https://paperswithcode.com/sota/gym-halfcheetah-expert-on-d4rl).
* It's fine to not have the best performer on D4RL, but then what is conformal prediction buying for the algorithm? A far better comparison would have been to a model-based algorithm that had an explicit parametric representation of uncertainty, and an analysis of what is actually being learned. What is the effect of the loss functions in 12-14 on $P(\theta)$?
* The experimental results in general are not very compelling, at least in part because there is no systematic evaluation of the uncertainty. The paper needed to remind the reader of the uncertainty models in the D4RL and Maze engines, and even better, show that these uncertainties are difficult to capture with parametric models. Even better would be to show that conformal prediction outperforms non-parametric dynamics models such as Ko and Fox (2009, 2011) in important ways.
* Additionally, while the paper overall is well-written, it is not careful about the distinction between planning and RL. This paper appears to be fundamentally an RL paper, and not a planning paper in the sense that if the model changes or the reward changes, there is no optimisation that can be used to recover from this change. If the loss function was solely equation 13, and did not include equation 14, then at least the model would be robust to changes in the reward function, and a new sequence of actions could be optimised over.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The definition of calibration is very unclear. Why is a separate calibration data set is used for the reward conformity loss function, rather than use the training data.
* What is the exact definition of the Quantile function?
* Line 7 of Algorithm 1 says "update the dynamics model" but it is not explained in the text how to do this. Figure 1 suggests the update is done through gradient descent ($\nabla_\theta \mathcal{L}_{reward}$, etc.), but it is not obvious to me that you can differentiate through the Quantile function. How is this done? This is a minor point.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors provide a limitation section, but I disagree with them that the limitation of the approach is the absence of real robot experiments. Instead, I think the primary limitation is the lack of understanding and careful analysis of what conformal prediction achieves relative to the large body of work in planning under uncertainty, including non-parametric models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: What did this paper accomplish?
A1: Thanks for the comments! Instead of developing a method for sequential decision-making that achieves the highest performance (reward) on D4RL, we are the first to augment the `diffusion` RL algorithm with uncertainty awareness. The progress in the D4RL benchmark is rapidly changing, but at the time of our paper submission, Diffuser (Janner et al.) was the state-of-the-art open-source Diffusion RL algorithm. The original Diffuser algorithm was evaluated only 100 times, which is insufficient to capture uncertainty. To address this limitation, we re-run 1000 evaluations of the Diffuser algorithm on our setup and present the performance results. Since our PlanCP is a modification of the Diffuser algorithm, we compare PlanCP to Diffuser and experimentally prove that our method has lower uncertainty. We focus on finding better solutions for reducing the uncertainty of sequential decision-making than before.
>Q2: What is conformal prediction buying?
**Comparison with a model-based algorithm that had an explicit parametric representation of uncertainty.**
We compare with a model-based algorithm MOPO that had an explicit parametric representation of uncertainty. We can see that the performance (reward) of MOPO is lower because it does not use the diffusion model as a dynamics model and the uncertainty (std and interval) is still large, as attached Table 1 shows (Summary Rebuttal). Also, MOPO has a limitation: it assumes that the data conforms to a Gaussian distribution.
**What is actually being learned?**
What the model is actually learning in conformal prediction is a measure of how well a data point fits. This is captured by the conformal score, which is used to construct the prediction intervals or regions.
**The effect of the loss functions in 12-14 on $P(\theta)$?**
Eq. (12) represents the reconstruction loss used to improve the accuracy of diffusion models by guiding the diffusion planner $P_\theta$ in predicting dynamics more effectively.
Eq. (13) presents the joint learning objective for training the planner $P_\theta$, aiming to enhance both accuracy and reduce uncertainty in the learned planners.
Eq. (14) measures the $L_2$ loss between the predicted reward and the actual reward, helping to optimize the reward model as the condition $w(\cdot)$ in the conditional diffusion planner $P_\theta$, as shown in Eq. (4).
>Q3: How to interpret the experimental results?
A3: GP-BayesFilters (Ko and Fox, 2009) show how GP prediction and observation models can be combined with particle filters (GP-PF), and extended Kalman filters (GP-EKF). Bayesian filters and conformal prediction, although both involving probability and uncertainty, have distinct goals and applications. Bayesian filters estimate system states based on observations, while conformal prediction provides confidence intervals for predictions. The main advantage of conformal prediction over Bayesian filtering is that it does not rely on prior knowledge of the data distribution, making it more robust in uncertain or unknown probability distributions. Additionally, conformal prediction offers a validity measure for each prediction, which is valuable in applications where the impact of incorrect predictions can be significant.
We compare with Ko and Fox, 2009, a non-parametric dynamics method, and MOPO. The result is shown as attached Table 1 (attached to the Summary Rebuttal). The findings show that conformal prediction outperforms non-parametric dynamics models Ko and Fox, 2009, and these uncertainties are challenging to capture with parametric models like MOPO.
>Q4: Is this an RL paper or a planning paper?
A4: This is a planning paper. Eq. (14) is used to learn a better reward model, which guides the conditional diffusion planner $P_\theta$. Unlike the RL paradigm, the planner $P_\theta$ is learned through expert data, allowing for re-planning if the model or reward changes. Planning and RL are two different approaches to sequential decision-making problems. Planning involves constructing a model of the environment and using it to simulate future outcomes, assuming complete knowledge of the environment. In contrast, RL involves learning from feedback in the form of rewards or penalties without complete knowledge of the environment and its dynamics. We build on Diffuser (Janner et al.), which calls itself a planning method.
>Q5: Why is a separate calibration dataset used for the reward conformity loss function?
A5: In conformal prediction, calibrating with the training data invalidates the statistical guarantees of the method. Specifically, it results in overly confident (too small) prediction intervals that do not accurately reflect performance variation at test time. Therefore, separating training and calibration data is standard practice in CP.
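The standard split-conformal recipe the authors refer to can be sketched with synthetic nonconformity scores (an illustration of the generic procedure under our own assumptions about the score distribution, not the paper's code). With $\alpha = 0.1$ and exchangeable calibration and test data, the average empirical coverage lands near the 90% target:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
n_cal, n_test, n_trials = 200, 500, 200
coverages = []
for _ in range(n_trials):
    # synthetic nonconformity scores (e.g. |reward - predicted reward|),
    # drawn exchangeably for the calibration and test sets
    cal_scores = np.abs(rng.standard_normal(n_cal))
    test_scores = np.abs(rng.standard_normal(n_test))
    # split-conformal quantile with the finite-sample correction
    k = int(np.ceil((n_cal + 1) * (1 - alpha)))
    q = np.sort(cal_scores)[k - 1]
    coverages.append(np.mean(test_scores <= q))
mean_coverage = float(np.mean(coverages))
```

Calibrating on the training data instead would make `q` systematically too small for a fitted model, which is exactly the overconfidence the answer describes.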
>Q6: What is the exact definition of the Quantile function?
A6: We follow the same mathematical definition of traditional Quantile function: Formally, for a random variable $X$ with cumulative distribution function $F(x)$, the Quantile function $Q(p)$ is defined as:
$$
Q(p) = \inf\lbrace x: F(x) \ge p\rbrace
$$
where $p$ is a probability between 0 and 1, and "inf" denotes the infimum of the set of values $\lbrace x: F(x) \ge p \rbrace$.
>Q7: How to update the dynamics model and differentiate through the Quantile function?
A7: After adopting the differentiable ranking and sorting techniques (Cuturi et al. Differentiable ranking and sorting using optimal transport, NeurIPS 2019), the soft quantile function is differentiable. We can use gradient descent to update the dynamics model.
Specifically, let $X$ be a random variable with a smooth density function $f$. Let $w = w(p)$ be the $p$-th quantile. Then the first derivative of the quantile function is
$$
\frac{dw}{dp}=\frac{1}{f(w(p))}
$$
provided that the denominator is not zero. The second derivative is
$$
\frac{d^2w}{dp^2}=-\frac{f'(w)}{f(w)}(\frac{dw}{dp})^2=\frac{-f'(w)}{f(w)^3}
$$
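The identity $dw/dp = 1/f(w(p))$ is easy to sanity-check numerically, for instance for the standard normal distribution (an illustrative check of the formula, assuming SciPy is available; the choice $p = 0.7$ is arbitrary):

```python
from scipy.stats import norm

p = 0.7
w = norm.ppf(p)                # the quantile w(p)
analytic = 1.0 / norm.pdf(w)   # dw/dp = 1 / f(w(p))

# central finite difference of the quantile function in p
h = 1e-6
numeric = (norm.ppf(p + h) - norm.ppf(p - h)) / (2.0 * h)
```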
---
Rebuttal Comment 1.1:
Comment: As the deadline for the author-reviewer discussion period is approaching, we kindly request that you review our response at your earliest convenience. This will allow us to address any further questions or concerns you may have before the discussion period concludes.
Additionally, we appreciate your suggestions for improving the limitation section of our work, and we agree with their importance. In our updated paper, we will provide a more comprehensive overview of the limitations by incorporating the **careful analysis of what conformal prediction achieves (Q2)** and including the **results of non-parametric models (Q3 and Table 1)**. This addition will enhance the clarity and completeness of our limitations section, allowing readers to better understand the scope and potential challenges of our work.
We greatly appreciate your time and effort in reviewing our work. Thanks! | null | null | null | null | null | null |
Efficient Algorithms for Generalized Linear Bandits with Heavy-tailed Rewards | Accept (poster) | Summary: In this paper, the authors study the problem of generalized linear bandits with heavy tailed rewards. They propose two algorithms based on truncation and mean of medians. The algorithms both achieve near optimal regret bound of $\tilde{O}(dT^{\frac{1}{1+\epsilon}})$. These regret bounds improve upon previous results in the sense of regret bound or computational complexity. Finally, the authors run simulations to support their claims.
Strengths: 1. The problem of generalized linear bandits with heavy-tailed rewards is well-motivated.
2. Both algorithms achieve near optimal regret bounds, the proof looks correct to me.
3. The writing is clear, the related work part is really helpful.
Weaknesses: 1. According to the introduction of previous methods for generalized linear bandits and bandits with heavy-tailed rewards, the algorithms are direct combinations of previous techniques. Therefore the algorithmic novelty is limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. For CRMM, since the assumption of symmetric distribution is used to guarantee that the median is 0, I am curious about whether it is possible to replace the condition with the median of the distribution equals 0.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your time and effort in reviewing our work. We have carefully considered your concerns and our responses are provided as follows.
----
**Weaknesses: According to the introduction of previous methods for generalized linear bandits and bandits with heavy-tailed rewards, the algorithms are direct combinations of previous techniques. Therefore the algorithmic novelty is limited.**
In fact, all existing heavy-tailed bandit algorithms are combinations of bandit algorithms' fundamental framework and heavy-tailed strategies (truncation and mean of medians) (Bubeck et al., 2013; Medina and Yang, 2016; Shao et al., 2018; Xue et al., 2020). The challenge is to find an appropriate approach to apply the heavy-tailed strategies for the problems at hand, and provide rigorous analysis for the proposed algorithms.
Our proposed algorithms differ from existing algorithms (Shao et al., 2018; Xue et al., 2020) in two main aspects. First, our work integrates heavy-tailed strategies into the Online Newton Step (ONS) method, whereas existing research has applied heavy-tailed strategies to Least Square Estimation (LSE), resulting in fundamentally different analytical techniques. Second, our algorithms not only achieve nearly optimal regret bounds but also offer significant improvements in efficiency compared to existing heavy-tailed algorithms (Shao et al., 2018; Xue et al., 2020). Specifically, CRTM reduces the computational complexity from $O(T^2)$ to $O(T)$ when compared to existing truncation-based algorithms (Shao et al., 2018; Xue et al., 2020). CRMM reduces the number of estimators required per round from $O(\log T)$ to only $1$ when compared to existing median-of-means-based algorithms (Shao et al., 2018; Xue et al., 2020). To attain such improvements, we have developed novel approaches for applying heavy-tailed strategies and introduced new analytical techniques. We provide a detailed introduction as follows.
**Truncation-based algorithm:** CRTM differs from existing algorithms TOFU (Shao et al., 2018) and BTC (Xue et al., 2020) in the truncated terms. Both TOFU and BTC have to store the historical rewards and truncate all these rewards at each epoch, resulting in a computational complexity of $O(T^2)$. In contrast, CRTM achieves online learning by truncating only the reward of current round, whose computational complexity is $O(T)$. The main innovation in CRTM's analysis is the adoption of inductive method. Please refer to the **response for reviewer QvG4** for more details.
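The computational difference can be illustrated with a deliberately simplified mean-estimation sketch (the fixed truncation level `b`, the reward distribution, and the horizon are hypothetical simplifications of ours, not the actual CRTM/TOFU/BTC updates):

```python
import numpy as np

rng = np.random.default_rng(0)
T, b = 2000, 5.0                         # horizon and a hypothetical truncation level
rewards = rng.standard_t(df=2, size=T)   # heavy-tailed rewards (infinite variance)

# Online scheme (CRTM-style): truncate only the current round's reward,
# a constant-time update per round, i.e. O(T) total work
running_sum = 0.0
for t in range(T):
    running_sum += float(np.clip(rewards[t], -b, b))
online_mean = running_sum / T

# Batch scheme (TOFU/BTC-style): store all rewards and re-truncate the
# whole history at each update, i.e. O(T) work per round and O(T^2) overall
batch_mean = float(np.mean(np.clip(rewards, -b, b)))
```

With a fixed truncation level the two estimates coincide; in the actual algorithms the truncation level grows with $t$, which is precisely why re-truncating the stored history each round is costly and why truncating only the current reward requires a different (inductive) analysis.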
**Mean-of-medians-based algorithm:** CRMM differs from MENU (Shao et al., 2018) and BMM (Xue et al., 2020) in the order of the "mean" and "medians" operations. MENU and BMM first calculate $O(\log T)$ "means" using multiple historical reward sequences, and then take the median of these "means". CRMM reverses the order of these two operations to reduce the number of "means", as the calculation of "means" is time-consuming. Specifically, CRMM first takes the median of multiple rewards and then calculates a single "mean" using the median rewards, which reduces the number of estimators required per round from $O(\log T)$ to only $1$. The analysis has been adapted to prove that such an exchange can achieve the nearly optimal regret bound. Two important properties of the median are utilized in our proof. The first property is that scaling a set of variables does not alter the index of the median term, which is employed in the proof of Lemma 10 (Lines 144-153). The second property is the upper bound of the median's $(1+\epsilon)$-th moment, as established in Lemma 8.
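The robustness motivation behind taking the median of the rewards first can be illustrated with synthetic heavy-tailed noise (our own toy numbers; CRMM itself additionally aggregates these per-round medians through the ONS update rather than a plain average):

```python
import numpy as np

rng = np.random.default_rng(0)
# symmetric heavy-tailed noise: Student-t with df = 1.5 has a finite
# (1+epsilon)-th moment only for epsilon < 0.5, and infinite variance
samples = rng.standard_t(df=1.5, size=(10_000, 15))

# CRMM-style first step: take the median of the r = 15 rewards of each round
per_round_medians = np.median(samples, axis=1)
# naive alternative: average the r rewards of each round
per_round_means = np.mean(samples, axis=1)

# the median-based statistic is far less volatile under heavy tails
spread_of_medians = float(np.std(per_round_medians))
spread_of_means = float(np.std(per_round_means))
```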
----
**Questions: For CRMM, since the assumption of symmetric distribution is used to guarantee that the median is 0, I am curious about whether it is possible to replace the condition with the median of the distribution equals 0.**
Yes, replacing the condition with the median of the distribution being equal to $0$ is sufficient to conduct the nearly optimal regret bound. The symmetric distribution is utilized in the proof of Lemma 10, and the details can be found in Lines 156-158 of the supplementary material, where we mention that the symmetric distribution is employed to ensure that the median is 0. We will provide a more comprehensive discussion of this assumption in the revised paper. | Summary: This paper considers Generalized Linear Bandit (GLB) with heavy-tailed rewards, i.e., the rewards $y_t=\mu(\langle x_t,\theta^\ast\rangle)+\eta_t$ only allows a bounded $(1+\epsilon)$-order moment. By utilizing the truncation and the mean of medians technique (both for handling heavy-tailed r.v.s), the authors establish efficient algorithms for GLBs with infinite arms and heavy-tailed rewards.
Strengths: 1. Compared to previous works on LB with heavy-tailed rewards, this paper considers Generalized LB instead of standard Stochastic LB, where the function $\mu$ can be any Lipschitz and uniformly bounded function.
2. This paper has a lower computational complexity: previous truncation-based works achieving $\sqrt T$-style regret require performing an iteration over all previous observations every time they update the confidence set, which costs $\mathcal O(T)$ time. In contrast, by utilizing the ONS step used in the $\text{OL}^2\text{M}$ algorithm, this algorithm only needs $\mathcal O(d^2)$ time for each round.
3. The presentation is pretty clear, and the algorithms are easy to understand.
4. The results are supported by various numerical illustrations.
Weaknesses: I am not sure what the main technical contribution of this paper is. For example, the CRTM algorithm looks much like equipping the $\text{OL}^2\text{M}$ algorithm by Zhang et al. (2016) with the previous truncation techniques by Shao et al. (2018) or Xue et al. (2020). Without any part in the main text devoted to explaining the technical hardness of showing Theorems 1 & 2, it is hard to evaluate the contribution of this paper. The same also holds for the mean-of-medians method CRMM.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See Weaknesses. I am willing to increase my score if the authors can address it convincingly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks sincerely for your time and effort in reviewing our work. We have carefully considered your concerns, and are very happy to respond to further questions during the rolling discussion.
----
**Q1: The CRTM algorithm looks much like equipping the OL$^2$M algorithm by Zhang et al. (2016) with the previous truncation techniques by Shao et al. (2018) or Xue et al. (2020). The same also holds for the mean-of-medians method CRMM.**
To deal with the heavy-tailed GLB problem, it is essential to incorporate the GLB algorithm's fundamental technique (the Online Newton Step method) with heavy-tailed strategies (truncation and mean of medians). However, our algorithms not only attain the nearly optimal regret bounds but also offer significant improvements in efficiency compared with existing algorithms (Shao et al., 2018; Xue et al., 2020). Specifically, CRTM reduces the computational complexity from $O(T^2)$ to $O(T)$, and CRMM reduces the number of estimators required per round from $O(\log T)$ to only $1$. To achieve such improvements, we have developed novel approaches for applying heavy-tailed strategies and new analytical techniques. We give a detailed introduction as follows.
----
**Q2: Technical contribution of the truncation-based method CRTM.**
Compared with the existing truncation-based algorithms TOFU (Shao et al., 2018) and BTC (Xue et al., 2020), CRTM differs in two aspects. First, both TOFU and BTC have to store the historical rewards and truncate all of them at each epoch, resulting in a computational complexity of $O(T^2)$. In contrast, CRTM achieves online learning by truncating only the reward of the current round, which reduces the computational complexity to $O(T)$. Second, TOFU and BTC are designed for the SLB model and calculate the estimator via Least Square Estimation (LSE), whereas CRTM is designed for the GLB model and updates the estimator using the Online Newton Step (ONS) method, which makes the analytical techniques different.
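The complexity gap between the two truncation styles can be sketched in a few lines of Python (a schematic illustration of the difference described above, not the actual CRTM code; the zero-out convention below is one common truncation choice):

```python
def truncate(y, b):
    """Discard a reward whose magnitude exceeds the threshold b
    (one common truncation convention for heavy-tailed rewards)."""
    return y if abs(y) <= b else 0.0

def batch_truncation_ops(T):
    # TOFU/BTC-style: at round t, all t stored historical rewards are
    # re-truncated, giving 1 + 2 + ... + T = O(T^2) operations total.
    return sum(t for t in range(1, T + 1))

def online_truncation_ops(T):
    # CRTM-style: only the current round's reward is truncated,
    # giving O(T) operations in total and no reward storage.
    return T
```

For a horizon of $T=100$ rounds, the batch style already performs 5050 truncations against 100 for the online style, and the gap grows quadratically in $T$.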
The main innovation in CRTM's analysis is the adoption of an inductive method. We first highlight the key differences in the analysis between TOFU and CRTM. For TOFU, which employs LSE, the confidence region for the inherent parameter $\theta_\*$ is
\begin{equation}
\lVert\hat{\theta}\_{t+1}^{LSE}-\theta\_\*\rVert_{\widetilde{V}\_{t+1}} \leq \lVert \underbrace{\widetilde{V}\_{t+1}^{-\frac{1}{2}} A\_t\(Y\_t-A\_t^\top\theta\_\*\)}\_{A} \rVert\_2+\lVert\theta\_\*\rVert\_{\widetilde{V}\_{t+1}^{-1}}
\end{equation}
where $A\_t=[x\_1,x\_2,\ldots,x\_t]\in\mathbb{R}^{d\times t}$ is the matrix composed of selected arm vectors, $\widetilde{V}\_{t+1}=A\_tA\_t^\top+I_d$ and $Y\_t=[y\_1,y\_2,\ldots,y\_t]\in\mathbb{R}^{t\times 1}$ is the historical reward vector. For CRTM employing the ONS method, the confidence region for the inherent parameter $\theta\_\*$ is
\begin{equation}
\lVert\hat{\theta}\_{t+1}^{ONS}-\theta\_\*\rVert\_{V\_{t+1}}^2\leq\underbrace{\sum\_{\tau=1}^t 2x\_\tau^\top\left\(\hat{\theta}\_\tau^{ONS}-\theta\_\*\right\) \left\(y\_\tau-\mu\(x\_\tau^\top\theta\_\*\)\right\)}\_{B}+\sum\_{\tau=1}^t \lVert x_\tau\rVert\_{V\_\tau^{-1}}^2 \left\(y\_\tau-\mu\(x\_\tau^\top\theta\_\*\)\right\)^2+O\(\log t\).
\end{equation}
Although both of the above equations include a linear combination of the historical rewards ($A$ and $B$), there is an essential difference between these two terms. Specifically, $A$'s coefficients $\widetilde{V}_{t+1}^{-\frac{1}{2}}A_t$ are known, and the agent can truncate the scaled rewards to reduce the impact of extreme noises. However, $B$'s coefficients $\\{x\_\tau^\top\(\hat{\theta}\_\tau^{ONS}-\theta\_*\)\\}\_{\tau=1}^t$ are unknown, which makes the existing technique of TOFU invalid.
To address this issue, we adopt an inductive method. The first step is to relax term $B$ as follows:
\begin{equation}
\left|x\_\tau^\top\(\hat{\theta}^{ONS}\_\tau-\theta\_\*\)y\_\tau\right|\leq\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}\cdot\lVert x\_\tau\rVert\_{V\_\tau^{-1}}\cdot|y\_\tau|.
\end{equation}
For $\tau=1$, it is evident that $\lVert\hat{\theta}\_1^{ONS}-\theta\_\*\rVert\_{V\_1}^2\leq \gamma$. Then, we assume $\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}^2\leq \gamma$ for $\tau=1,2,\ldots, t$, which replaces the unknown parameters $\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}$ with $\gamma^{1/2}$. Since $\lVert x\_\tau\rVert\_{V\_\tau^{-1}}$ is known, CRTM can truncate the term $\lVert x\_\tau\rVert\_{V\_\tau^{-1}}|y\_\tau|$ to reduce the impact of extreme values. A delicate analysis of this approach for applying the truncation strategy demonstrates that $\lVert\hat{\theta}\_{t+1}^{ONS}-\theta\_\*\rVert\_{V\_{t+1}}^2\leq\gamma$, which concludes the proof of the confidence region. Further details can be found in Lemma 5 of the supplementary material.
----
**Q3: Technical contribution of the mean-of-medians method CRMM.**
Compared with the existing median-of-means algorithms MENU (Shao et al., 2018) and BMM (Xue et al., 2020), CRMM differs in two aspects. The first is the estimator, as introduced in Q2. The second is the order of the "mean" and "medians" operations. MENU and BMM first calculate $O(\log T)$ "means" using multiple historical reward sequences, and then take the median of these "means". CRMM reverses the order of these two operations to reduce the number of "means" to $1$, as the calculation of "means" is time-consuming. Specifically, CRMM first takes the median of multiple rewards and then calculates a single "mean" using the median rewards.
The analysis has been adapted to prove that such an exchange can achieve a near-optimal regret bound. Two important properties of the median are utilized in our proof. The first property is that scaling a set of variables does not alter the index of the median term, which is employed in the proof of Lemma 10 (Lines 144-153). The second property is the upper bound of the median's $(1+\epsilon)$-th moment, as established in Lemma 8.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I'm glad to recommend an acceptance. It would be nice if the authors can include more discussions on these differences in the main text of the final version.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 7N78,
Thank you very much for your kind reply! We will highlight the technical contributions in the main text of the final version.
Best,
Authors | Summary: This paper studies the problem of generalized linear bandits with heavy-tail rewards. Due to the heavy-tailedness, methods and algorithms for linear bandits with sub-gaussian rewards cannot be directly applied. To handle such issues, existing works have developed certain strategies, two of which are the truncation strategy and the mean-of-medians strategy. To this end, this paper proposes two novel algorithms, CRTM and CRMM, which utilize the aforementioned strategies and achieve sublinear regret bounds. CRTM reduces the computational complexity of previously best known truncation-based methods, while CRMM reduces the number of estimators required. Furthermore, CRTM does not require the reward to have symmetric distribution, contrasting existing works. Experimental results demonstrate the low regret and computational complexity of the proposed methods.
Strengths:
**1. Clarity**
The paper has a clear presentation. The GLB bandit problem and heavy-tailedness are rigorously defined.
**2. Meaningful results for an important problem**
Heavy-tailedness is an important aspect due to its broad application in many real-world scenarios. This paper proposes computationally efficient near-optimal algorithms for such a setting, which is a meaningful result.
**3. Experiments**
The experimental study covers the regret bound as well as the computational complexity.
Weaknesses: There is no major technical weakness/flaws in this paper, as far as I can tell.
However, I do challenge the technical contribution of this paper: in terms of the theoretical analysis, does this paper improve over existing results, or does it develop novel analytical techniques to facilitate the analysis? For example, in line 209, this paper mentions that the theoretical analysis is fundamentally different from that of TOFU/BTC. However, it seems to me that this is not elaborated in the later part of the paper, so I have to assume that the technical contribution is incremental compared to existing works. Therefore, I am concerned about the technical contribution of this paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In the theoretical proof, what are the major differences from previous works? For example, are there any non-trivial ideas applied?
More details in the Weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your time and effort in reviewing our work. We have carefully considered your concerns, and are very happy to respond to further questions during the rolling discussion.
----
**Questions: In the theoretical proof, what are the major differences from previous works TOFU/BTC? For example, are there any non-trivial ideas applied?**
We first highlight the key differences in the analysis between TOFU and CRTM. For TOFU, which employs the least square estimation (LSE), the confidence region for the inherent parameter $\theta_*$ is given by
\begin{equation}
\lVert\hat{\theta}\_{t+1}^{LSE}-\theta\_\*\rVert_{\widetilde{V}\_{t+1}} \leq \lVert \underbrace{\widetilde{V}\_{t+1}^{-\frac{1}{2}} A\_t\(Y\_t-A\_t^\top\theta\_\*\)}\_{A} \rVert\_2+\lVert\theta\_\*\rVert\_{\widetilde{V}\_{t+1}^{-1}}
\end{equation}
where $A\_t=[x\_1,x\_2,\ldots,x\_t]\in\mathbb{R}^{d\times t}$ is the matrix composed of selected arm vectors, $\widetilde{V}\_{t+1}=A\_tA\_t^\top+I_d$ and $Y\_t=[y\_1,y\_2,\ldots,y\_t]\in\mathbb{R}^{t\times 1}$ is the historical reward vector. For CRTM, which employs the Online Newton Step (ONS) method, the confidence region for the inherent parameter $\theta\_*$ is
\begin{equation}
\lVert\hat{\theta}\_{t+1}^{ONS}-\theta\_\*\rVert\_{V\_{t+1}}^2\leq\underbrace{\sum\_{\tau=1}^t 2x\_\tau^\top\left\(\hat{\theta}\_\tau^{ONS}-\theta\_\*\right\) \left\(y\_\tau-\mu\(x\_\tau^\top\theta\_\*\)\right\)}\_{B}+\sum\_{\tau=1}^t \lVert x_\tau\rVert\_{V\_\tau^{-1}}^2 \left\(y\_\tau-\mu\(x\_\tau^\top\theta\_\*\)\right\)^2+O\(\log t\).
\end{equation}
Although both of the above equations include a linear combination of the historical rewards ($A$ and $B$), there exist essential differences between these two terms. Specifically, $A$'s coefficients $\widetilde{V}_{t+1}^{-\frac{1}{2}}A_t$ are known, and the agent can truncate the scaled rewards to reduce the impact of extreme noises. However, $B$'s coefficients $\\{x\_\tau^\top\(\hat{\theta}\_\tau^{ONS}-\theta\_*\)\\}\_{\tau=1}^t$ are unknown, which makes the existing technique of TOFU invalid.
To address this issue, our main analytical contribution is the adoption of an inductive method. We first relax term $B$ as follows:
\begin{equation}
\left|x\_\tau^\top\(\hat{\theta}^{ONS}\_\tau-\theta\_\*\)y\_\tau\right|\leq\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}\cdot\lVert x\_\tau\rVert\_{V\_\tau^{-1}}\cdot|y\_\tau|.
\end{equation}
For $\tau=1$, it is evident that $\lVert\hat{\theta}\_1^{ONS}-\theta\_\*\rVert\_{V\_1}^2\leq \gamma$. Then, we assume $\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}^2\leq \gamma$ for $\tau=1,2,\ldots, t$, which replaces the unknown parameters $\lVert\hat{\theta}\_\tau^{ONS}-\theta\_\*\rVert\_{V\_\tau}$ with $\gamma^{1/2}$. Since $\lVert x\_\tau\rVert\_{V\_\tau^{-1}}$ is known, CRTM can truncate the term $\lVert x\_\tau\rVert\_{V\_\tau^{-1}}|y\_\tau|$ to reduce the impact of extreme values. A delicate analysis of this approach for applying the truncation strategy demonstrates that $\lVert\hat{\theta}\_{t+1}^{ONS}-\theta\_\*\rVert\_{V\_{t+1}}^2\leq\gamma$, which concludes the proof of the confidence region. Further details can be found in Lemma 5 of the supplementary material.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my concern. I modified my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer QvG4,
Thank you very much for your kind reply! We will revise our paper to highlight the technical contributions.
Best
Authors | Summary: This paper proposes two novel algorithms which improve computational complexity and regret bounds over previous algorithms for generalized linear bandits with heavy-tailed rewards by combining the truncation strategy (or means of medians strategy) with the online Newton step.
Strengths: This work improves the computational complexity and regrets bound of the generalized linear contextual bandits with heavy-tailed rewards which are closely related to the real-world application.
Weaknesses: 1. Among the input values for the algorithm, $S$, $\epsilon$ and $v$ are not known in practice.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can dependency on $\kappa$ in the regret bound be relaxed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. This work is mostly theoretical and potential negative societal impact is unseen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your time and effort in reviewing our work. We have carefully considered your concerns and our responses are provided as follows.
----
**Weaknesses: Among the input values for the algorithm, $S$, $\epsilon$ and $v$ are not known in practice.**
Huang et al. [2022] proposed an algorithm for heavy-tailed MAB models that eliminates the dependence on $\epsilon$ and $v$ by utilizing the FTRL framework and the doubling trick. This provides some insight into removing the dependence on $\epsilon$ and $v$ in the GLB model. However, our GLB algorithm is based on the Online Newton Step method, which is fundamentally different from FTRL, and we will strive to address this issue in future research. As for the parameter $S$, most existing online GLB algorithms require prior knowledge of $S$ [Zhang et al., 2016; Jun et al., 2017; Lu et al., 2019]. We will try to remove this limitation in the future.
-----
**Questions: Can dependency on $\kappa$ in the regret bound be relaxed?**
[1] introduced an algorithm for the logistic model that improved the dependency on $\kappa$ from $O(\kappa^{-1})$ to $O(\kappa^{-1/2})$. However, their algorithm is offline, as it relies on the maximum likelihood estimate. Consequently, directly applying [1]'s technique to our algorithms would not yield a similar improvement, since our algorithms are online. In future research, we aim to improve the dependency on $\kappa$ by employing a variant of [1]'s technique. The revised version of our paper will provide a more comprehensive discussion of this issue.
References
[1] Improved Optimistic Algorithms for Logistic Bandits. Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their helpful responses. My questions are resolved. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | Accept (poster) | Summary: This work presents an efficient adapter design for transfer learning of large pretrained models. The key idea is to enable the adapter to be shared across layers and to impose a low-rank constraint, so that the overall number of trainable parameters can be reduced. Given the re-composed linear adapter, existing reparameterization methods can further be used to avoid extra computation cost.
Strengths: - Parameter efficient transfer learning is an important approach for making large pertained model useful in many applications.
- The paper is written to be easy to read.
Weaknesses: First of all, the motivation of this work is weak -- existing adapters are already lightweight, e.g., less than 0.5% of the pretrained model, so the scope for further reducing the size of the adapter is marginal. As such, this adapter-efficiency problem is not significant.
In the introduction, it is hard to discern the novelty of this method, since closely related works such as LoRA are not discussed or compared at all.
Limited novelty of the proposed method: the low-rank constraint is not new, as it is already used in LoRA [24] (although not in the same way), and parameter sharing across layers is also not a novel idea. This work combines the two in a single place, which could be considered not significant.
Limited performance gain:
- In most cases, the proposed method cannot achieve a good margin over previous methods (e.g., SSF).
- Another important metric, FLOPs, would be useful to report.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weaknesses above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your dedicated efforts in reviewing our work. While we value your contribution, we would like to address a few factual errors that have come to our attention. Your understanding and collaboration in rectifying these inaccuracies would be immensely valuable to us.
**1. Regarding this adapter efficiency problem is not significant.**
**Re:** Our response to this particular concern is available in Joint Response 1. To facilitate access to our reply, we are reproducing it here. We acknowledge the valid point regarding the limitations on reducing the absolute count of adaptation parameters. In light of this, we are revising our tone to convey a more balanced perspective on parameter reduction. It is essential, however, to underscore that our approach demonstrates improved adaptation performance even when operating with a reduced parameter count. It is important to note that our contribution extends beyond this efficiency aspect, as we introduce innovative insights into the domain of low-rank adaptation strategies.
**2. Regarding it is hard to read out the novelty of this method since more related works such LoRA is not discussed and compared at all.**
**Re:** We respectfully disagree with this assertion. Within our introduction, we thoroughly reviewed and analyzed pertinent parameter-efficient pre-trained adaptation methods, as demonstrated in lines 32 to 41. Notably, references [7,8] employed low-rank adapters within the context of vision applications. Additionally, we accentuate the key distinctions that set our approach apart from these existing solutions, elaborated upon from lines 42 to 48. Moreover, to provide a more comprehensive understanding of these distinctions, we offer a visual summary of typical parameter-efficient pre-trained model adaptation techniques in Fig. 1 of the original paper. This illustration is complemented by an in-depth analysis of existing methodologies. Given these comprehensive efforts, we respectfully contend that the criticism regarding the perceived novelty of our work is unfounded.
**3. Regarding concerns about limited novelty with the proposed method: Low-rank constraint is not new, as already used in LoRA [24] although not in the same way**.
**Re:** We want to clarify that our assertion has never been to claim the low-rank design of adapters as our contribution. In fact, we explicitly state our approach's novelty from line 43 to line 48 of the Introduction, where we outline, "We adopt a low-rank design for the adapter using a bottleneck operation but propose a novel approach. Unlike other methods that place the adapter in different layers and directly learn different parameters for each adapter to cater to layer-wise variation, we propose sharing the down/up projections in the low-rank adapter across different layers and simply learning low-dimensional re-scaling coefficients to re-compose the linear projections into layer-adaptive adapters." This clarifies that our innovation indeed lies in providing a fresh perspective within the domain of low-rank adaptation strategies. While the low-rank constraint itself might not be new, our distinctive approach and insights set our method apart from existing ones, such as LoRA [24].
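As a rough illustration of the sharing-plus-re-composing idea quoted above (a minimal numpy sketch based on that description, not ARC's actual implementation; the dimensions, initialization, and all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, L = 768, 8, 12          # feature dim, bottleneck rank, layers

# Down/up projections shared across ALL transformer layers.
W_down = rng.standard_normal((D, r)) * 0.02
W_up = rng.standard_normal((r, D)) * 0.02

# Per-layer low-dimensional re-scaling coefficients (r values each)
# re-compose the shared projections into layer-adaptive adapters.
coeffs = [np.ones(r) for _ in range(L)]

def arc_adapter(x, layer):
    """x: (N, D) token features -> low-rank adapter output."""
    return (x @ W_down) * coeffs[layer] @ W_up

# Sharing replaces L independent down/up pairs with one shared pair
# plus only L*r extra re-scaling parameters.
per_layer_params = L * 2 * D * r       # unshared low-rank adapters
arc_params = 2 * D * r + L * r         # shared projections + coeffs
```

Under these illustrative ViT-B-like sizes, the unshared design would learn 147,456 projection parameters while the shared-plus-re-scaled form learns 12,384, which is where the parameter savings and the layer-adaptivity both come from.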
**4. Regarding concerns about limited novelty with the proposed method: parameter sharing across layers is also not novel idea**.
**Re:** We respectfully disagree with the characterization that parameter sharing across layers lacks novelty. While the concept of parameter sharing is indeed a broad one, the crux lies in how this sharing is executed and whether it contributes fresh perspectives to the field. In our case, we delve into adaptation matrix sharing within the context of the low-rank design of adapters, specific to parameter-efficient pre-trained adaptation. As far as we are aware, our work is the inaugural exploration of this approach, imbuing the community with novel insights into the realm of low-rank-based adaptation methods. The uniqueness lies not in the general notion of parameter sharing, but in the inventive application and implications of this concept within our proposed framework.
**5. Regarding limited gains: In most cases, the proposed method cannot achieve a good margin over previous methods (e.g., SSF).**
**Re:** We appreciate your perspective on this matter. In the original paper, we executed our method with standard data augmentations and additionally re-implemented SSF under the same augmentation conditions. The findings, as detailed in Tables 1 and 2, reveal that our ARC method exhibits an improvement of approximately 3% over SSF – a substantial enhancement. Furthermore, in response to input from other reviewers, we proceeded to retrain our ARC method while integrating the data augmentation strategies outlined in the original SSF paper [9]. The outcomes, meticulously presented in Table 1 of the provided rebuttal_pdf, consistently underscore our method's superior performance compared to SSF across various datasets. These results affirm the efficacy of our approach and its propensity to yield consistent advancements over previous methodologies.
**6. Regarding FLOPs**.
**Re:** The majority of FLOPs emanate from our ARC modules, which are meticulously designed with a bottleneck structure. Consequently, the additional FLOPs introduced by our ARC approach align closely with those of LoRA and Adapter methods. Moreover, it's noteworthy that our supplementary adapter modules entail a linear mapping, enabling us to re-parameterize the ARC modules within the original pre-trained model framework without incurring additional FLOPs overhead. A detailed discussion on this aspect can be found in lines 170-176 of our paper. | Summary: This paper introduces a parameter-efficient transfer learning method named Adapter Re-Composing (ARC), which mainly focuses on investigating the reusability of adapted parameters. The authors propose to apply a shared adapter to all the layers (blocks) of the pre-trained model, and they use different Re-Scaling Coefficients (diagonal matrices) in different layers to ensure the diversity of parameters in different layers.
The motivation behind this design is the adapter module's low-rank property, shown in Fig.3 in the main text.
Moreover, the authors conduct extensive experiments to demonstrate the effectiveness of their designs, where they train fewer parameters and achieve competitive or even better performance compared to prior arts.
Strengths: 1: This paper is well-written and well-organized.
2: The ARC design in the paper is well-motivated.
3: Although the technique is simple and easy to implement, the performance is impressive.
4: The experiments are comprehensive.
Weaknesses: 1: The ARC leverages learnable re-scaling coefficients in different layers to maintain diversity. However, the authors did not discuss the numerical differences among the re-scaling coefficients across different layers. If they are also similar, a shared-weight adapter could replace the ARC.
2: In Tab. 1 and Tab. 2, the SSF* applies additional techniques (e.g., data augmentation) during training and gains non-trivial improvements. Why did the authors not use those techniques to improve the performance of ARC further? Will these techniques further benefit the ARC?
3: I wonder about the computational overhead of the ARC, e.g., the GPU memory usage during training, because the number of learnable parameters may not necessarily be positively correlated with the GPU memory usage. Could authors provide the additional GPU memory usage comparison between ARC and other related works?
4: How to use ARC in Hierarchical Vision Transformers such as Swin Transformer is unclear in Lines 277-279. Adding more details here may be helpful.
Overall, my major concern is weakness 1. Happy to raise my rating if my concerns are well addressed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors disscuss the limitations and boarder impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We value the positive feedback you have shared and address the raised concerns as follows:
**1. Regarding the diversity of re-scaling coefficients.**
**Re:** We appreciate the thoughtful insights you've provided. To comprehensively address this matter, we conducted an extensive analysis of the re-scaling coefficients across various layers. The outcomes, specific to the DTD dataset within the VTAB-1k dataset, have been visually captured in Fig. 2 of the provided rebuttal_pdf. This analysis reveals substantial dissimilarity among the scaling factors across different layers. This outcome unequivocally underscores the distinctiveness of our proposed ARC structure, making it non-trivial to substitute with a shared adapter. Furthermore, we conducted the experiment you suggested by sharing adaptation matrices while excluding re-scaling parameters. The corresponding results are presented as "Sharing-Adapter" in Table 1 of the provided rebuttal_pdf. The considerable decline in performance observed in this scenario serves as compelling evidence attesting to the indispensable role played by the re-scaling coefficients. This exploration reaffirms the crucial significance attributed to the re-scaling mechanism in our approach.
**2. Regarding run the paper's method under SSF's data augmentation**.
**Re:** In response to your insightful suggestion, we have rigorously re-trained our ARC method while implementing the data augmentations featured in SSF [9]. The results of this endeavor have been presented in Table 1 of the provided rebuttal_pdf, and a comprehensive explanation can be found in Joint Response 2. Notably, our ARC method consistently exhibits improved performance compared to SSF [9] when subjected to the same set of augmentations, reaffirming the efficacy and generalizability of our approach.
**3. Regarding GPU usage**.
**Re:** The utilization of GPU resources during model training is influenced by multiple factors, including the model size, intermediate variables generated during the forward process, and gradient information in back propagation. Notably, intermediate variables tend to consume a substantial portion of GPU resources, surpassing the model size in significance. In alignment with the LoRA and Adapter designs, our method fine-tunes the low-rank bottleneck structure within each layer. Consequently, the accumulation of additional intermediate variables during training is akin to these methods, resulting in similar GPU memory usage for our ARC approach. In the case of SSF, a concise breakdown of the GPU usage for intermediate variables can be approximated as O(L\*m\*N\*D), where L signifies the number of layers, m denotes the count of SSF modules embedded in each layer, N represents the token count, and D stands for feature dimensions. On the contrary, the GPU usage associated with intermediate variables produced by our ARC adheres to O(L\*N\*D), markedly smaller than the SSF approach. To validate this analysis, we conducted experiments, training different models under identical hardware conditions and gauging GPU memory consumption. The outcomes confirm that the GPU utilization of our method closely aligns with the Adapter and LoRA methods, notably outperforming the SSF method in terms of resource efficiency.
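The activation-memory comparison above can be sanity-checked with a toy calculation (purely illustrative; the function names, the value of $m$, and the ViT-B-like sizes are assumptions, and the counts are floats stored, not bytes):

```python
def ssf_activation_floats(L, m, N, D):
    # SSF stores extra activations for each of its m modules per
    # layer: O(L * m * N * D) intermediate values.
    return L * m * N * D

def arc_activation_floats(L, N, D):
    # ARC's bottleneck adds roughly one extra D-dimensional
    # activation per layer: O(L * N * D) intermediate values.
    return L * N * D

L, m, N, D = 12, 4, 197, 768  # layers, SSF modules/layer, tokens, dim
ratio = ssf_activation_floats(L, m, N, D) / arc_activation_floats(L, N, D)
```

Under this approximation the extra intermediate storage of the SSF style is about $m$ times that of the ARC style, matching the asymptotic comparison above.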
**4. Regarding clarification of how to use ARC in Hierarchical Vision Transformers such as Swin Transformer**.
**Re**: We regret any confusion caused by our previous description. To clarify, the Swin Transformer comprises distinct stages, each with a different feature dimension, so global sharing across stages is infeasible. Within each stage, however, all transformer blocks have uniform feature dimensions. In light of this architecture, we introduce shared adaptation matrices between blocks within the same stage.
---
Rebuttal Comment 1.1:
Comment: Thanks for the comprehensive reply. I read the rebuttal and all other reviews.
My major concern is that the proposed framework is an incremental modification of the LoRA.
First, the low-rank optimization isn't novel, given the existence of LoRA (the authors also admit that this operation isn't claimed as one of the contributions).
Second, the "sharing" operation is very sensitive to where it is placed, which makes the proposed method seem somewhat trivial. From the supplied Table 1 in the pdf, the "sharing adapter across all layers" results are significantly worse than those of "sharing adapter across layer."
Based on the above two facts, the ARC's contribution is limited to the "sharing down-/up-projections," which sounds a little tricky given the current results. Suppose the authors would like to claim the "sharing operation" as the main contribution. In that case, the paper should pay more attention to what makes this operation work and develop a more in-depth analysis (e.g., which layers share projections or the effect of residual connections, etc.).
To this end, I decide to downgrade both my score and confidence to (5,4). I'm still waiting for further replies from the authors and would like to hear the opinions of other reviewers on the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 4kbb,
We are grateful for your prompt response to our work. However, ***we must address a misunderstanding that has arisen from your feedback.***
In response to your original comment, "discuss the numerical difference among the re-scaling coefficients across different layers," we have taken a comprehensive approach to address this concern. We have presented a detailed analysis of the disparity among the re-scaling coefficients across layers, as evidenced by the findings showcased in Figure 2 of the attached rebuttal PDF. It is clear from this analysis that the re-scaling coefficients do indeed exhibit variations, which underscores the rationale behind our decision to learn layer-specific re-scaling coefficients. In our efforts to address your query more explicitly, we conducted experiments wherein the re-scaling coefficients were removed. The results of these experiments are meticulously documented as "Sharing-Adapter" in Table 1 of the rebuttal PDF. As anticipated, these results demonstrated a significant decline in performance. This outcome validates our assertion that the absence of layer-specific re-scaling coefficients fails to account for the nuanced layer-wise variations, leading to a lack of sufficient model capacity to adapt effectively to downstream tasks.
Given these clarifications, it is crucial to emphasize that the contribution of ARC extends beyond merely "sharing down-/up-projections." The true innovation lies in the strategic acquisition of layer-specific adapters—a process that efficiently utilizes resources while delivering significant impact. While we do indeed utilize the concept of shared down-/up-projections, the heart of our innovation revolves around the careful integration of these components through the use of layer-specific re-scaling coefficients. ***It is important to understand that the re-composed adapters should not be mistakenly interpreted as a simple "sharing adapter across layer," as suggested in your feedback. Our approach involves a thoughtful arrangement that optimizes model capacity and performance.***
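As an illustrative sketch of the recomposition described above (the dimensions, variable names, and residual connection are for exposition only and do not reproduce our released code):

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, L = 768, 8, 12               # feature dim, bottleneck rank, layers (assumed)

W_down = rng.normal(size=(D, r))   # down-projection shared across ALL layers
# Layer-specific re-scaling coefficients: the only per-layer adapter parameters.
scales = [rng.normal(size=r) for _ in range(L)]

def arc_adapter(x, layer):
    """Recompose a layer-specific adapter from the shared projections."""
    z = (x @ W_down) * scales[layer]   # shared down-projection + per-layer re-scaling
    return x + z @ W_down.T            # transpose reused as the up-projection

x = rng.normal(size=(197, D))          # one token sequence (197 tokens, assumed)
y = arc_adapter(x, layer=3)
print(y.shape)                         # (197, 768)
```

The per-layer cost is only r coefficients, yet each layer still obtains a distinct adaptation matrix, which is the sense in which the recomposed adapters differ from a plain "sharing adapter across layer."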
We trust that this clarification better communicates the essence of our work and the distinctiveness of our contribution. We remain committed to addressing any further inquiries or uncertainties you may have.
Thank you for your thoughtful consideration. | Summary: The paper explores ARC, which is a novel parameter-efficient fine-tuning method which uses a similar architecture as adapters but introduces inter- and intra- layer weight sharing. Some down- and up- projection weights are shared but every adapter position uses an independent set of per-channel scaling factor on the channel-reduced intermediate features. The proposed method obtains competitive results on various vision transformer adaptation benchmarks in terms of recognition performance and number of parameters. Moreover it can be re-parameterized into adjacent fully-connected layers so no overhead is incurred during inference.
Strengths: * The paper is generally well written. The figures are clear and the text is easy to follow.
* The experiments are comprehensive and cover a wide range of visual recognition datasets and the results are competitive.
Weaknesses: * The effectiveness of the proposed method is not well justified. Section 3.3 does not make much sense to me: Having long-tailed singular values only implies that the weight difference can be approximated well by a low-rank matrix and thus justifies the bottleneck design. Shared projections further requires the adapters to have largely overlapped kernel and image spaces, which, instead of the singular values themselves, are determined by the direction of the singular vectors corresponding to the top singular values. It also doesn't theoretically justify the intra-block sharing design (i.e., using the transpose of down-projection as up-projection).
* In multiple places (e.g., caption of Table 1 and Line 229), the paper's claim of using simple augmentations on baselines for 'fair comparison' is questionable. Due to the vast differences in nature of different methods, it is expected that their optimal training configurations are different, and each method, including this paper's own, should have the right to choose its optimal training configuration as long as it does not violate some principal rules of machine learning (e.g., leak of test data): Conversely, it is also improper to request that the paper's method being run under SSF's data augmentation for 'fair comparison'.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: * The paper cited [7] in multiple tables for visual adapter results but [7] seems to not include any results on computer vision tasks (actually the paper title is *Parameter-efficient transfer learning for **NLP***). Are those results reproduced by the authors using the same architecture as [7]? If so, could the authors point to a place where more detailed settings (e.g., the bottleneck dimension, the activation function, the training hyper-parameters) of this baseline can be found?
* In Table 3, it is strange to me why the proposed methods obtain lower performance on ViT-Huge than ViT-Large. Is it possible that the proposed method or some baselines are over-fitting in extremely large backbones?
* Following weakness 1, an understanding of why the proposed method works fairly well can also be provided by experiments: For example, are features from adjacent layers similar due to the identity connections? What if the projections are progressively shared in groups of increasing sizes among consecutive layers before reaching the global sharing setting as reported in the paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: There are no unmentioned limitations in the paper to my mind.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive comments. We address the concerns as follows:
**1. Regarding the motivation and justification of shared adaptation matrices in weakness 1**.
**Re:** We appreciate your feedback. In response, we have incorporated the suggested analysis to provide a robust rationale for the motivation behind our proposed adaptation matrix sharing strategy. For a comprehensive understanding of this matter, we invite you to refer to Joint Response 1, where you will find a detailed elaboration on this issue.
**2. Regarding running the paper's method under SSF's data augmentation for 'fair comparison' in weakness 2**.
**Re:** In response to your insightful suggestion, we have rigorously re-trained our ARC method while implementing the data augmentations featured in SSF [9]. The results of this endeavor have been presented in Table 1 of the provided rebuttal_pdf, and a comprehensive explanation can be found in Joint Response 2. Notably, our ARC method consistently exhibits improved performance compared to SSF [9] when subjected to the same set of augmentations, reaffirming the efficacy and generalizability of our approach.
**3. Regarding the results of [7] in question 1**.
**Re:** We wish to clarify that we did not independently replicate the results from the literature [7] in the realm of computer vision. Instead, we directly adopted the experimental outcomes presented in the VPT paper [6]. For precise details pertaining to the settings employed, we kindly direct your attention to the VPT paper [6].
**4. Regarding inferior performance of ViT-Huge comparing to ViT-Large**.
**Re:** We wish to highlight that upon a thorough comparison between Table 3(a) and Table 3(b) in our paper, it becomes evident that a number of methods, including Full Fine-tuning, Adapter, VPT-Deep, and LoRA, exhibit inferior performance on ViT-Huge compared to ViT-Large. It is noteworthy that these methods possess trainable parameters that increase in correspondence with the number of layers, potentially contributing to overfitting stemming from model expansion. This phenomenon aligns with the observations delineated in the Visual Prompt Tuning (VPT) paper and harmonizes with the outcomes depicted in Fig. 4 of the VPT paper. Conversely, methods such as VPT-Shallow, which feature learnable parameters independent of layer count, or methods with a modest count of learnable parameters (e.g., Bias), tend to display less pronounced manifestations of this phenomenon.
**5. Regarding question 3**.
**Re:** We appreciate your suggestions. In response, we conducted a thorough investigation into the input features of adjacent layers within the network. Our findings indicate that there are no noteworthy similarities in this regard. It's worth noting that the visualization and analysis elucidating the correlation between adaptation matrices across layers, as presented in Joint Response 1, sufficiently addresses the concerns surrounding the justification of adaptation matrix sharing.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses from the authors and the other reviewers.
The new experimental results (the proposed method using SSF's data augmentation) seem quite strong to me, which is the main reason why I'm raising the rating to a positive one.
The drawback, however, is that the analysis of why the proposed method works effectively is still somewhat limited. For the rebuttal pdf, it is not very clear to me how the correlation between two sets of (singular) vectors are defined, and what is the reference correlation value of two random matrices. More generally, it still looks to me that the experimental or theoretical analysis part can be systematically enhanced (e.g., covering a range of tasks from easy to difficult and observing more fine-grained results in a series of progressively sharing settings) which might be too much to cover in a rebuttal phase.
Based on the views above I'm raising the rating to a borderline accept. | Summary: This paper proposes to further reduce the parameters of the adapter by introducing a weight-sharing scheme between different layers. To accommodate the variations across different layers, re-scaling coefficients are learned to re-compose the layer-adaptive adaptation matrices. Experiments are conducted on 24 downstream image classification tasks using various Vision Transformer variants.
Strengths: + The observation that “learned adaptation matrices naturally exhibit low-rank characteristics” as shown in Fig. 3 is quite interesting, making the motivation for the weight-sharing design compelling.
+ This paper is well-written and easy to follow. The figures are well-prepared and illustrate the core idea clearly.
+ The experiments are extensive.
Weaknesses: - The comparisons to SSF shown in Tables 1 and 2 are re-implemented by the authors with data augmentations removed. The originally reported performance of SSF is much higher than that of the proposed approach. It would be better if the authors reported the performance of the proposed approach with these advanced data augmentations.
- Fig. 3 shows the singular value distribution of the original MHA and FFN adapter. It would be better to show how the proposed approach alleviates such a problem by visualizing the value distribution after using the proposed approach.
- Though the observation that “learned adaptation matrices naturally exhibit low-rank characteristics” and the proposed weight-sharing scheme are very interesting, the motivation to further reduce the parameters of the adapter is not very compelling, as the parameters of the adapter are already relatively small.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your diligent review of our paper and your acknowledgment of the strengths within our work. We have taken your feedback seriously and have formulated a detailed response to the specific issues you raised, which we present as follows:
**1. Regarding incorporating advanced data augmentations in SSF.**
**Re:** In response to your insightful suggestion, we have rigorously re-trained our ARC method while implementing the data augmentations featured in SSF [9]. The results of this endeavor have been presented in Table 1 of the provided rebuttal_pdf, and a comprehensive explanation can be found in Joint Response 2. Notably, our ARC method consistently exhibits improved performance compared to SSF [9] when subjected to the same set of augmentations, reaffirming the efficacy and generalizability of our approach.
**2. Regarding how the proposed approach alleviates a problem shown in Fig. 3.**
**Re:** We greatly appreciate your attention to this aspect. In Fig. 3, we offer a compelling insight into the remarkable low-rank attributes exhibited by the adaptation matrices. This distinctive observation underscores the feasibility of employing a shared basis for the reconstruction of these matrices. Drawing from this pivotal discovery, we introduce our ARC method, which not only shares down- and up-projection matrices across all layers but also learns low-dimensional re-calibration coefficients to reconstruct the adapters. This approach contributes to enhanced parameter efficiency. To further substantiate the rationale behind our parameter-sharing strategy, we provide a visual representation of the correlation between adaptation matrices across layers in Fig. 1 of the provided rebuttal_pdf. This visual analysis underscores a robust correlation among the adaptation matrices spanning layers, thus solidifying the justification for our parameter-sharing paradigm.
**3. Regarding the concern that the motivation to further reduce the parameters of the adapter is not very compelling, as the parameters of the adapter are already relatively small.**
**Re:** Your insightful comment is greatly appreciated. We have provided a comprehensive response to this query in Joint Response 3. Thank you for raising this pertinent issue.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the rebuttal and the clarification. The responses address most of my concerns. Therefore, I would retain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your review and comments
Comment: Dear Reviewer,
Thank you for your feedback and for taking the time to review our rebuttal. We're pleased to hear that the responses have effectively addressed your concerns. | Rebuttal 1:
Rebuttal: **This concludes the Joint Response to all reviewers.**
We appreciate the diligent efforts undertaken by both the reviewers and the ACs in thoroughly reviewing our manuscript. It is noteworthy that a consensus has been reached among four out of the five reviewers regarding the novelty and efficacy of our proposed methods. Within this context, we diligently address the shared concerns highlighted by the reviewers as **Joint Response**, and in addition, provide individualized responses to the specific inquiries raised by each reviewer.
**1. Regarding the motivation for adaptation matrix sharing across layers.**
**Re:** The key innovation underpinning our paper resides in our departure from learning low-rank adapters in isolation for each layer, and instead, reconstituting these adapters by recalibrating shared down- and up-projection matrices. This approach is partially substantiated in Fig. 3 of the original paper, where the adaptation matrices exhibit pronounced low-rank characteristics, facilitating the use of a common basis to reconstruct the low-rank adapters. To fortify the rationale for adaptation matrix sharing, as suggested by Reviewer 4jX4, we present a visual depiction of the interlayer correlation among adaptation matrices in Fig. 1 of the rebuttal_pdf. Specifically, we apply Singular Value Decomposition (SVD) to the down- and up-projection matrices within the adapters of every layer, gauging the alignment of the right singular vectors of down-projection matrices and the left singular vectors of up-projection matrices across layers. From the discernible patterns in Fig. 1 (a) and Fig. 1 (b) of the rebuttal_pdf, a conspicuous correlation is evident among the low-rank adapters spanning layers, substantiating the principle of shared projection matrices within our method. Furthermore, we expound the rationale for employing transposed down-projection matrices as up-projection matrices in Fig. 1 (c) of the rebuttal_pdf, where the strong correlation between the right and left singular vectors within each layer underscores the justification for this approach.
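One concrete way to define this correlation is the mean cosine of the principal angles between the top singular subspaces of two layers' adaptation matrices. The sketch below uses synthetic matrices, not our trained adapters, and the construction of `shared`, `A`, `B`, and the random reference `C` is purely illustrative:

```python
import numpy as np

def top_subspace_alignment(A, B, k=8):
    """Mean cosine of the principal angles between the top-k left singular
    subspaces of A and B (1.0 = identical subspaces, ~0 = unrelated)."""
    Ua, _, _ = np.linalg.svd(A, full_matrices=False)
    Ub, _, _ = np.linalg.svd(B, full_matrices=False)
    # Singular values of Ua_k^T Ub_k are the cosines of the principal angles.
    cos = np.linalg.svd(Ua[:, :k].T @ Ub[:, :k], compute_uv=False)
    return cos.mean()

rng = np.random.default_rng(0)
shared = rng.normal(size=(768, 8))
A = shared + 0.1 * rng.normal(size=(768, 8))  # two "layers" perturbed from
B = shared + 0.1 * rng.normal(size=(768, 8))  # a common low-rank basis
C = rng.normal(size=(768, 8))                 # unrelated random reference

print(top_subspace_alignment(A, B) > top_subspace_alignment(A, C))  # True
```

The random reference matrix `C` gives the baseline alignment value one would see between uncorrelated subspaces, against which the cross-layer alignment can be judged.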
**2. Applying the data augmentations in SSF [9] to the proposed ARC method.**
**Re:** In our original paper, we employed standard data augmentations for the proposed method. However, recognizing the advanced data augmentations used by SSF [9], we re-implemented SSF and presented results from both the original [9] and our re-implementation. In response to the insightful suggestions of Reviewers 4jX4 and 4kbb, we undertook re-training of the proposed ARC method using the same data augmentations as in SSF [9], and subsequently showcased the outcomes across the 19 datasets of VTAB-1k. Remarkably, our results indicated that with the integration of these advanced data augmentations, the proposed ARC method surpassed SSF [9] on 14 out of the 19 datasets. This noteworthy observation provides further compelling evidence underscoring the robust applicability and efficacy of our proposed method.
**3. Regarding the concern that the motivation to further reduce the parameters of the adapter is not very compelling, as the parameters of the adapter are already relatively small.**
**Re:** We acknowledge the point that the scope for reducing the absolute count of adaption parameters is constrained. Therefore, we will adopt a more measured tone concerning parameter reduction. However, it's crucial to emphasize that our approach attains enhanced adaptation performance even with a reduced parameter count. Importantly, our contribution extends to providing novel insights into the realm of low-rank adaptation strategies.
Pdf: /pdf/7ac5b35e355099d7b43c0b438dbb3383c9a45215.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel parameter-efficient fine-tuning method called Adapter Re-Composing (ARC). ARC effectively reuses parameters across different layers, resulting in remarkable improvements in performance across 24 image classification datasets while utilizing fewer learnable parameters.
The experimental evaluation conducted on various downstream datasets provides compelling evidence of ARC's superiority. It outperforms existing methods and establishes a new benchmark for parameter-efficient fine-tuning techniques.
Strengths: 1. The proposed ARC method is simple yet effective, achieving superior performance on multiple downstream datasets while utilizing fewer trainable parameters.
2. The experiments conducted provide compelling evidence, as they encompass various datasets, attention-based architectures, and ablations, ensuring the robustness and reliability of the findings.
Weaknesses: 1. In Table 3, it is observed that the performance of ViT-Huge is lower than ViT-Large. It would be beneficial if the authors could provide an explanation for this disparity.
2. The utilization of symmetric matrices ($W_{up} = W_{down}^T$) in the bottleneck design helps reduce the number of learnable parameters. However, it would be interesting to explore whether further improvements in performance can be achieved by making the downsampling and upsampling matrices independent. It would be valuable if the authors could provide a comparison of performance and parameter statistics to address this potential enhancement.
3. The experiments conducted have demonstrated the effectiveness of the proposed method. However, it would greatly enhance the strength of this paper if the authors could supplement these empirical results with theoretical analysis.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognition of our work strength. We extend our sincere appreciation for the invaluable guidance you have provided. We respond to your concerns as follows.
**1. Regarding the performance of ViT-Huge is lower than ViT-Large.**
**Re:** We wish to highlight that upon a thorough comparison between Table 3(a) and Table 3(b) in our paper, it becomes evident that a number of methods, including Full Fine-tuning, Adapter, VPT-Deep, and LoRA, exhibit inferior performance on ViT-Huge compared to ViT-Large. It is noteworthy that these methods possess trainable parameters that increase in correspondence with the number of layers, potentially contributing to overfitting stemming from model expansion. This phenomenon aligns with the observations delineated in the Visual Prompt Tuning (VPT) paper and harmonizes with the outcomes depicted in Fig. 4 of the VPT paper. Conversely, methods such as VPT-Shallow, which feature learnable parameters independent of layer count, or methods with a modest count of learnable parameters (e.g., Bias), tend to display less pronounced manifestations of this phenomenon.
**2. Regarding making the downsampling and upsampling matrices independent.**
**Re:** To offer clarity on the matter of making the symmetric matrices in the bottleneck structure independent of each other, we direct attention to Table 5(c) within our paper. In the last row of the aforementioned table, we elucidate the outcomes stemming from this adjustment, which regrettably fail to yield discernible enhancements in performance, despite introducing an elevation in parameter count. It is important to underscore that our rationale for adopting a symmetric structure in the design of adapters is expounded upon in Joint Response 1.
**3. About theoretical analysis about the method.**
**Re:** The key innovation underpinning our paper resides in our departure from learning low-rank adapters in isolation for each layer, and instead, reconstituting these adapters by recalibrating shared down- and up-projection matrices. To fortify the rationale for adaptation matrix sharing, we present a visual depiction of the interlayer correlation among adaptation matrices in Fig. 1 of the rebuttal_pdf. Specifically, we apply Singular Value Decomposition (SVD) to the down- and up-projection matrices within the adapters of every layer, gauging the alignment of the right singular vectors of down-projection matrices and the left singular vectors of up-projection matrices across layers. From the discernible patterns in Fig. 1(a) and Fig. 1(b) of the rebuttal_pdf, a conspicuous correlation is evident among the low-rank adapters spanning layers, substantiating the principle of shared projection matrices within our method. Furthermore, we expound the rationale for employing transposed down-projection matrices as up-projection matrices in Fig. 1(c) of the rebuttal_pdf, where the strong correlation between the right and left singular vectors within each layer underscores the justification for this approach.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I have meticulously reviewed the rebuttal and taken into consideration the comments provided by the other reviewers. The majority of my concerns have been addressed, and as a result, I am inclined to maintain my initial score.
---
Reply to Comment 1.1.1:
Title: Thank you for your review and comments
Comment: Dear Reviewer,
We sincerely thank you for your thorough review of our paper and for dedicating your time to assess our rebuttal and the comments from other reviewers. We are delighted to learn that the majority of your concerns have been satisfactorily addressed, and we truly appreciate your positive feedback on our work. | null | null | null | null | null | null |
Correlation Aware Sparsified Mean Estimation Using Random Projection | Accept (poster) | Summary: The authors propose a compression algorithm that optimizes the accuracy in a setting where each client can send $k+O(1)\ll d$ values to the server. The algorithm leverages random projections and proposes a way to make the resulting estimate unbiased, which is valuable for DME.
Strengths: + DME is an important and well-studied problem, so advances are welcomed.
+ Leveraging correlations between clients' vectors leads to significant improvement in accuracy in some cases.
+ The setting of $n\cdot k \ll d$ is challenging and interesting.
Weaknesses: - The decoding time at the server is not presented, may be prohibitively high in some cases, and must be discussed upfront.
- The authors compare only with Rand-$k$ and Rand-$k$-Spatial. There are other solutions that leverage correlations between clients' vectors (e.g., New Bounds For Distributed Mean Estimation and Variance Reduction, ICLR 2021) and it is unclear to me whether the Rand-$*$ approaches are better or what are the tradeoffs. The authors do not cite the paper or compare with such approaches even qualitatively.
- It is unclear to me that these approaches, in which each client sends $k$ floats, are better than quantization methods that allow sending, e.g., $32k$ values with one bit each. One example that can be used as a comparison point is [33].
- The authors discuss a case where $n\cdot k \ll d$; however, probably due to time complexity, even running it for $d \gg 1$ seems unlikely. The evaluation only shows $d\le 1024$ and no runtime numbers are provided.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Do you view Rand-Proj-Spatial as being orthogonal to quantization techniques? For example, what would be the impact of quantizing the $k$ values sent by each worker?
* Am I correct to understand that the server's decoding requires $\Theta(d^2\cdot k\cdot n)$ time? If not, what is it?
* You suggest that each client $i$ would use a different projection $G_i$. This may be important for the error, but also requires the server to compute the pinverse of each client's message separately. Instead, consider using a single projection $G$ that all clients use, and thus the server could compute a single pinverse after summing the messages. How would the error bounds change in such an implementation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W2 \& W3 \& Q1: See common response $\textbf{Quantization vs. Sparsification}$.
W1 \& W4 \& Q2: See common response $\textbf{Computation Time}$. Also, we included additional results on comparing the encoding and decoding wall-clock time of different estimators (see the pdf attachment in common response). We will make the computational time for decoding clearer and stress it as a limitation.
Q3: As we discussed in Appendix A.1, applying the same $\mathbf{G}$ random projection across the clients does not lead to any improvement of the MSE compared to that of Rand-$k$. The reason is that rotation does not change the $\ell_2$ norm.
Furthermore, we note that when $\mathbf{G_i}$ are different for each client, it is not true that our Rand-Proj-Spatial "requires the server to compute the p-inverse of each client's message separately".
Rand-Proj-Spatial only requires the server to compute p-inverse once, i.e., the p-inverse of $\mathbf{S} = \sum_{i=1}^{n}\mathbf{G}_i^T \mathbf{G}_i$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers and the runtime experiments.
I will be raising my rating to weak accept but no higher as I view the decoding time as a significant bottleneck of the current approach, which limits its practicality.
I encourage the authors to explore methods that could give a lower decoding time.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for providing valuable feedback. Yes, improving the decoding time of the current approach and deriving the optimal tradeoffs between computation time, communication cost and the error for sparsification techniques with correlation information can be interesting future directions. | Summary: This paper studies the distributed mean estimation (DME) problem. In particular, the paper proposes a new DME technique called Rand-Proj-Spatial. In Rand-Proj-Spatial, each client uses SRHT for dimensionality reduction and sends the transformed lower-dimensional vector to the server. The server then recovers the mean estimate by computing a formula derived from an optimization problem designed to minimize the MSE given the client transforms and considering possible correlations among client vectors. Rand-Proj-Spatial is also unbiased, which is desired in the context of DME.
The main contribution of Rand-Proj-Spatial is that it both improves with increased correlation among client vectors and utilizes SRHT for dimensionality reduction instead of subsampling (like rand-k), achieving lower MSE than previous rand-k-based DME techniques.
Strengths: 1. The new DME technique improves upon previous rand-k-based DME techniques.
2. The reconstruction technique is interesting and not symmetric, putting the computational burden on the server.
3. Rand-Proj-Spatial leverages possible correlations among client vectors which can be expected and useful in some distributed scenarios.
Weaknesses: 1. The worst-case MSE of Rand-Proj-Spatial appears to be $O(d/n)$. Namely, unless $k = \Theta(d)$, the MSE of Rand-Proj-Spatial grows linearly with the dimension of the problem and may cause an asymptotic increase in the number of required optimization rounds in many (S)GD-based scenarios (e.g., neural network training).
2. No comparison to strong DME baselines, e.g., [1][2][3]: In the paper’s evaluation, the authors consider scenarios with d=1024, 21-51 clients, and k=4-40. These regimes can be readily compared to, e.g., [1][2][3] with 1-2 bits per coordinate. Moreover, some of these support sub-bit regimes as well.
3. Insufficient evaluation: the evaluation considers low-dimensional (mostly convex) scenarios. It would be more convincing to demonstrate the advantage of Rand-Proj-Spatial over, e.g., neural networks with sufficient dimension and demonstrate that Rand-Proj-Spatial results in better performance than existing DME techniques considering all aspects of the algorithm (i.e., error-to-bandwidth tradeoff and computational overhead).
4. The encoding and decoding time of Rand-Proj-Spatial are not evaluated and compared to previous DME techniques. These times are of major importance in practical scenarios.
[1] Suresh, Ananda Theertha, et al. "Distributed mean estimation with limited communication." International conference on machine learning. PMLR, 2017.
[2] Davies, Peter, et al. "New Bounds For Distributed Mean Estimation and Variance Reduction." International Conference on Learning Representations, 2021.
[3] Vargaftik, Shay, et al. "Eden: Communication-efficient and robust distributed mean estimation for federated learning." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors address the concerns pointed out in weaknesses (1)-(4)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors point out a possible direction for future work. I would suggest adding disclaimers about the computational overhead and the asymptotic error-bandwidth tradeoffs compared to existing DME techniques.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
W1: The worst-case MSE of prior works on sparsification techniques, e.g., [1] and [2], is also on the order of $O(d/n)$.
Note we did not claim that our approach improves the asymptotic accuracy bounds. Just like [1], our method focuses on utilizing practically available side information to improve the tradeoff between communication cost and estimation accuracy.
------
[1] Divyansh Jhunjhunwala, Ankur Mallick, Advait Harshal Gadhikar, Swanand Kadhe, and Gauri Joshi. ``Leveraging spatial and temporal correlations in sparsified mean estimation''. NeurIPS 2021.
[2] Jakub Konecny and Peter Richtárik. ``Randomized distributed mean estimation: Accuracy vs. communication''. Frontiers in Applied Mathematics and Statistics, 4:62, 2018.
------
W2:
See common response $\textbf{Quantization vs. Sparsification}$.
W3 \& W4: We included additional results comparing the encoding and decoding wall-clock times of different estimators (see the PDF attachment in the common response). We note our experimental setting mostly follows that of the prior work [1]. One potential way to make the decoding process more efficient is to divide the dimension $d$ into chunks, and to encode and decode each chunk separately. This is similar to layer-wise compression of NNs. We found that running Rand-$k$ on NNs requires some hyperparameter tuning, or the optimization diverges. We will make the computational time for decoding clearer and stress it as a limitation.
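For illustration, the chunking idea could look as follows (a hypothetical sketch that applies plain Rand-$k$ per chunk rather than the full Rand-Proj-Spatial encoder/decoder; the sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, chunk, k = 1024, 256, 16   # toy sizes: per-chunk rand-k instead of one global rand-k

def rand_k(v, r):
    # unbiased rand-k on one chunk: keep k coordinates, scale by chunk_size/k
    out = np.zeros_like(v)
    idx = r.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

x = rng.normal(size=d)
# Encode/decode each chunk independently; the decoding cost then scales
# with the chunk size rather than with the full dimension d
est = np.concatenate([rand_k(c, rng) for c in np.split(x, d // chunk)])
```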
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers and the new experiments. I will be raising my score from 4 to 5. Stressing the limitation of the decoding time is important as it limits the practicality of this solution. Also, giving a broader introduction to the DME problem to include a unified view of sparsification and quantization DME solutions may help the reader to better understand the positioning of the work.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for providing valuable feedback. Yes, we will stress the limitation of the decoding time and give a broader overview of sparsification and quantization techniques to better position this work. | Summary: This work considers the problem of distributed mean estimation, wherein each node in a set of distributed nodes contains a vector, and the goal of the parameter server is to estimate the mean of those vectors. Unlike some other works, no distributional assumption is assumed over the vectors, and the error metric is the mean squared error between the true mean and the estimated mean. Each node sends a compressed version of its vector to the parameter server, and this is the sole source of estimation error. Any randomness in the estimation error arises from the randomness of the compression algorithm used at the nodes. This setup is motivated from distributed optimization / federated learning setups where the mean of the gradients is estimated at the parameter server at every iteration of the optimization algorithm. Since modern machine learning tasks deal with very high dimensional models and gradients, it becomes necessary to compress them before sending it to the parameter server, so as not to overwhelm the communication requirements between nodes and parameter server (which is often a bottleneck).
This particular work focuses on the effect of correlation between the vectors at different nodes, and considers exploiting this correlation information to design better compressors that achieve a lower mean squared error than correlation-agnostic compressors. Prior work (Jhunjhunwala et al. (2021)) proposed the **rand-$k$ spatial family** of mean estimators that used correlation information. They did so by first compressing the vector at each node using a rand-$k$ compressor. Rand-$k$ essentially subsamples $k$ coordinates out of a $d$-dimensional vector to get a compressed $k$-dimensional vector, zeroing out the remaining coordinates and yielding a compression factor of $\frac{d}{k}$. These compressed vectors are transmitted to the parameter server, which then decodes the mean by appropriately scaling the received vectors in a way that ensures an unbiased estimate of the mean. When correlation information is available, the server can decode the received vectors with a different scaling mechanism, yielding a smaller estimation error.
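The rand-$k$ step described above can be illustrated with a minimal numpy sketch (variable names are ours; averaging many independent compressions empirically confirms the unbiasedness of the $\frac{d}{k}$ scaling):

```python
import numpy as np

d, k = 10, 3

def rand_k(x, r):
    # keep k of d coordinates, scale by d/k so the estimate is unbiased
    out = np.zeros(d)
    idx = r.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=d)
# Averaging many independent compressions recovers x (unbiasedness); a single
# compression has an error governed by the (d/k - 1) variance factor
est = np.mean([rand_k(x, rng) for _ in range(50000)], axis=0)
```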
The primary contribution of this work is a modification of the encoding strategy to incorporate random projections in the encoding step at the clients. Instead of directly applying rand-$k$ to the vector, the vector is first projected on a random subspace (more specifically, the range space of a subsampled randomized Hadamard matrix) and then subsequently rand-$k$ compressor is applied.
Following this idea, the authors in this work propose a correlation-aware encoding and decoding strategy for distributed mean estimation under communication constraints, analytically derive upper bounds on the estimation error of their proposed scheme, and finally, numerically evaluate the performance of their scheme relative to existing benchmarks. Please let me know if I have missed anything in my understanding of the contributions of the paper. I would be more than happy to rectify any misunderstandings on my end.
Strengths: The following are the primary contributions of the paper:
1. The authors propose the *rand-proj-spatial* family of mean estimators as a generalization of the previously proposed *rand-$k$-spatial* estimators, that uses subsampled randomized Hadamard transform to first project the vectors onto a random subspace and subsequently apply a rand-$k$ compressor with appropriate decoding at the server.
2. Several numerical experiments on distributed power iteration, distributed $k$-means, and distributed linear regression have been performed to demonstrate the superiority of the proposed strategy over existing correlation-aware and correlation agnostic strategies.
One of the interesting observations made by the authors is in Appendix A.1, which states that applying random projection + rand-$k$ directly to the vectors may not yield any benefit. In other words, if we apply the $\frac{d}{k}$ scaling individually for each vector with the hope of obtaining an unbiased estimate for each node's vector at the server, the expected estimation error is the same as that obtained by rand-$k$ compression without the projection. However, lower MSE can be obtained if the vectors received from each of the nodes are jointly decoded, as is done in the *rand-proj-spatial* estimator proposed in eq. (5) of the paper. At first glance, this seems a little counter-intuitive since (as the authors point out in the introduction) the motivation to take random projections stems from the idea that random projections equalize the coordinate magnitudes.
However, a careful study reveals that when the error metric is the $\ell_2$-estimation error, random projections do not really help, because the $\ell_2$-norm of the vectors remains approximately preserved when an orthonormal projection is applied. However, I do believe the benefit will be apparent if other error metrics are considered, such as the $\ell_{\infty}$ estimation error (more remarks on why this might be the case in the *limitations* section).
Weaknesses: I have one critical concern: the title *Correlation Aware Distributed Vector Mean Estimation* suggests a very generic and broad approach, whereas the contributions of the paper are focused on exploiting correlation for communication compression for a very specific class of rand-$k$ compression strategies, and it is not immediately obvious how this correlation-aware compression strategy can be extended to other classes of compressors such as top-$k$ (instead of rand-$k$), which is a biased compressor -- but is very popular and has been extensively used for distributed optimization / federated learning applications. For example, the paper by Suresh et al. (2022) (reference [10]) also exploits correlation for quantization purposes. Please note that this is not a drawback or a comment on the technical contributions of the paper. However, the title can be slightly misleading in the sense that the paper does not focus on the benefits of correlation beyond rand-$k$ compression. For instance, correlation can be exploited for client selection, or for designing correlation-aware private mean estimation algorithms. It is my personal opinion, and it would be highly appreciated, if the authors chose a more descriptive title.
In addition to this, the *rand-proj-spatial* family of estimators proposed in this paper is also a very specific design, and there is no indication whether it is the best strategy. In other words, given that our underlying task is communication compression, is *rand-proj-spatial* the best way to exploit correlation? I understand this is not an easy question to answer since lower bounds on the estimation error have not been derived in this paper. However, when the authors introduce the optimization problem in eq. (4), it seemed to me that this optimization problem was formulated with a solution in mind -- a solution which is analogous to (and a generalization of) the *rand-$k$ spatial* family of estimators from prior work. A natural question to ask here is why the problem was not initially formulated more generally as follows:
$\hat{\mathbf{x}} = argmin_{\mathbf{W}, \mathbf{x}} \mathbb{E}\left\lVert \overline{\mathbf{x}} - \sum_{i=1}^{n}\mathbf{W}_i\mathbf{G}_i\mathbf{x}_i \right\rVert_2^2$
where $\overline{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i$, and $\mathbf{W} = (\mathbf{W}_1, \ldots, \mathbf{W}_n)$ are the individual decoding matrices over which optimization is done as well. Furthermore, one can solve this optimization problem subject to the unbiasedness constraint and potentially come up with a better decoding strategy than the one proposed in this paper. It would be appreciated if the authors discussed how (or under what assumptions) the optimization problem of eq. (4) relates to this general formulation above, and what the challenges are that prevent one from solving the above general optimization problem directly. Moreover, in the above formulation, the encoding strategy is still the same, i.e., rand-proj + rand-$k$ -- one could also replace the encoding function by a more general class of (potentially biased) contractive compressors (such as top-$k$) and come up with a more holistic approach to the problem of *correlation aware compression for distributed mean estimation* (which would justify this title).
Please note once again that this is not a weakness on the technical contributions of the paper as such -- just a suggestion that I genuinely believe would help highlight the contributions of the paper in a better context.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I have a few questions and suggestions for the authors, and would highly appreciate if they would take them into consideration. I would be more than happy to re-evaluate my review contingent on their response. The following are some of my concerns:
The idea of random projection + compression like rand-$k$ is not novel by itself (although its implications in the presence of correlation, as explored in this paper, are, to the best of my knowledge). Accordingly, there are some very relevant references missing from the related works section, and it would be highly appreciated if the authors included those:
1. R. Saha, M. Pilanci and A. J. Goldsmith, "Efficient Randomized Subspace Embeddings for Distributed Optimization Under a Communication Budget," in IEEE Journal on Selected Areas in Information Theory, vol. 3, no. 2, pp. 183-196, June 2022, doi: 10.1109/JSAIT.2022.3198412 (This work adopts a unified approach for studying the effects of random transforms on compressors -- it poses randomized Hadamard transform as a computationally efficient relaxation of Democratic (Kashin) representations, and also studies rand-$k$ compression (among others) with both kashin + near-kashin -- eventually showing that they attain information-theoretically optimal convergence rates for optimization).
2. Mher Safaryan and others, Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor, Information and Inference: A Journal of the IMA, Volume 11, Issue 2, June 2022, Pages 557–580, https://doi.org/10.1093/imaiai/iaab006 (This work studies a general class of contractive compressors which includes random-projections based compressors -- they focus on Kashin compression and study different variants of accelerated distributed optimization algorithms).
These properties are not specific to subsampled Randomized Hadamard transforms -- they extend broadly to the class of randomized matrices that belong to Johnson-Lindenstrauss embeddings and/or satisfy the Restricted Isometry Property (RIP). These results might perhaps be helpful to the authors in deriving bounds on the MSE of *rand-proj-spatial* for other classes of matrices (such as random Haar orthonormal matrices). Once again, this is not a critical drawback, but a suggestion that will help place the work better in context with respect to existing literature.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I have some other minor concerns and would be happy if the authors addressed / discussed them:
1. The authors mention on line $138$ *"Each client also sends a random seed to the server, which conveys the subspace information, and can usually be communicated using a negligible amount of bits."* -- Is this really negligible? Can $\mathbf{G}$ be conveyed between the nodes and server with less than $d$ bits of information? Since each node shares an independently sampled random matrix, can correlation help improve this shared randomness generation?
2. The statement of Theorem 4.3 is a high probability result but equation (9) holds in expectation -- this is slightly odd. It would be better if the authors stated the result purely as a high probability statement or purely in terms of expectation (the latter might be simpler since the failure probability is $o(1)$). If this is not possible, they should at least explicitly specify in the statement of the theorem over which random variables the expectation is computed, and over which random variables the high probability bound holds. Moreover, $o(1)$ is not explicitly defined -- is it with respect to the dimension / number of nodes? Please specify.
It would also be highly appreciated if the authors had a separate limitations section that discusses these and the other concerns raised in *weaknesses* and *questions*, which are my primary concerns. I would be more than happy to re-evaluate my assessment if these concerns are addressed satisfactorily.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: We indeed had a lot of discussions on the title before the submission. It is hard to give a precise yet succinct title. Other possible title candidates are "Projection Based Correlation Aware Distributed Vector Mean Estimation", "Unbiased Sparsification Induced Distributed Vector Mean Estimation with Correlation", etc. We are happy to take suggestions from the reviewers.
W2:
The proposed optimization problem might need a bit of clarification: where is the variable $\mathbf{x}$ in the problem?
One thing to note is that to utilize cross-client correlation information, we need to design a decoding scheme that uses all clients' information in decoding. For example, in Rand-$k$-Spatial [1], such cross-client information is the number of clients who sent the $j$-th coordinate. In our Rand-Proj-Spatial, this information can be thought of as encoded in the eigenvalues of the random matrices $\mathbf{G}_i$ each client uses during encoding. Each decoded vector is
$\hat{\mathbf{x}}_i = (T(\sum_{j=1}^{n}\mathbf{G}_j^T \mathbf{G}_j))^{\dag} \mathbf{G}_i^T\mathbf{G}_i \mathbf{x}_i$,
where $\sum_{j=1}^{n}\mathbf{G}_j^T\mathbf{G}_j$ carries the cross-client information.
Since the above optimization problem considers a single decoding matrix $\mathbf{W}_i$
for each client, solving this problem might still lead to individual decoding of each client, which does not use potential cross-client correlation information.
Another potential form of the optimization problem (Problem 2) we considered is solving
$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\frac{1}{n}\sum_{i=1}^{n}\mathbf{G}_i \mathbf{x}- \frac{1}{n}\sum_{i=1}^{n} \mathbf{G}_i \mathbf{x}_i\|_2^2$.
However, through simulations, the solution to Problem 2 is consistently worse than the one presented in the paper.
We can add the discussion to the Appendix.
Q1: Note both references mentioned here belong to the orthogonal line of work on quantization. See the difference between quantization and sparsification in the common response $\textbf{Quantization vs. Sparsification}$. But we are happy to include the two references in our related work. While random projection + compression and leveraging correlation information across clients are well explored in quantization, these ideas are less explored in sparsification. To the best of our knowledge, the only prior work that explores cross-client correlation is [1]. And as we show in Appendix A, unlike in quantization, applying a simple rotation in sparsification does not lead to improvement.
As discussed in Section 4.3 and Appendix B.2, we considered analysis tools such as J-L embeddings, but the analysis of SRHT (and other random matrices) in the current literature mainly concerns asymptotic properties. Yet, in our case we need an explicit distribution of the eigen-decomposition of the random matrix in Rand-Proj-Spatial (which does not exist in the literature) to derive a tight MSE analysis and to compare against those of Rand-$k$-Spatial and Rand-$k$.
L1: The randomness of $\mathbf{G}_i = \frac{1}{\sqrt{d}}\mathbf{E}_i \mathbf{H} \mathbf{D}_i$ comes from two sources: 1) the subsampling matrix $\mathbf{E}_i$, and 2) the diagonal matrix $\mathbf{D}_i$ with Rademacher random variables on the diagonal. $\mathbf{E}_i$ and $\mathbf{D}_i$ can both be generated by some pseudorandom function (PRF), and one only needs a seed (i.e., a number) to describe the PRF. The length of the seed is related to the strength of pseudorandomness and can be independent of $d$. It is possible to send the seed of the PRF with only a constant number of bits. Since the server and the clients have shared randomness, each client only needs to send an additional constant number of bits to the server so that the server can generate $\mathbf{E}_i$ and $\mathbf{D}_i$, and then reconstruct $\mathbf{G}_i$.
See [1], for example, for how such a PRF seed is applied in their distributed algorithm.
-----
[1] ``Locally Differentially Private Sparse Vector Aggregation''. Mingxun Zhou, Tianhao Wang, T-H. Hubert Chan, Giulia Fanti, Elaine Shi. IEEE S\&P 2022.
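For illustration, the seed-based reconstruction described above could be sketched as follows (a hypothetical sketch that uses numpy's seeded `Generator` as a stand-in for the PRF):

```python
import numpy as np

d, k = 16, 4

# Walsh-Hadamard matrix built by doubling (d must be a power of two)
H = np.array([[1.0]])
while H.shape[0] < d:
    H = np.block([[H, H], [H, -H]])

def srht_from_seed(seed):
    # E_i and D_i come from a PRNG, so one integer seed fully determines
    # G_i = (1/sqrt(d)) E_i H D_i on both the client and the server side
    r = np.random.default_rng(seed)
    signs = r.choice([-1.0, 1.0], size=d)       # diagonal of D_i (Rademacher)
    rows = r.choice(d, size=k, replace=False)   # rows kept by E_i
    return (H * signs)[rows] / np.sqrt(d)

seed = 12345                      # the only description of G_i sent over the wire
G_client = srht_from_seed(seed)   # built at the client
G_server = srht_from_seed(seed)   # rebuilt at the server from the same seed
```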
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Dear authors,
Sincere apologies for the late reply. Thank you very much for your thoughtful rebuttal. Regarding the optimization problem, apologies again for not being clearer. This is what I had in mind -- The decoded estimate at the PS for worker $i$ is $\mathbf{W}_i\mathbf{G}_i\mathbf{x}_i$, where $\mathbf{W}_i$ is a decoding matrix. The mean estimated at the server is
$$\frac{1}{n}\sum_{i=1}^{n} \mathbf{W}_i\mathbf{G}_i\mathbf{x}_i$$
Consequently, solving the following optimization problem:
$$\mathbf{W}^* = argmin_{\mathbf{W}} \mathbb{E}\left\lVert \overline{\mathbf{x}} - \frac{1}{n}\sum_{i=1}^{n}\mathbf{W}_i\mathbf{G}_i\mathbf{x}_i \right\rVert_2^2$$
would give us the optimal decoding matrices, using which we can estimate the mean as: $\widehat{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{W}_i\mathbf{G}_i\mathbf{x}_i$. Additionally, we can impose the unbiasedness constraint:
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[ \mathbf{W}_i\mathbf{G}_i\mathbf{x}_i \right] = \overline{\mathbf{x}}$,
and solve the constrained minimization problem. My original question was, is this formulation in any way equivalent to the formulation in the paper? Also, yes, a discussion in the paper on this or the optimization problem mentioned in the rebuttal would be informative. More specifically, an answer to the question -- **Why is minimizing the sum of squared norms of the random projections a good strategy to exploit correlation for mean estimation?** This should yield answers to natural questions like -- why not the sum of $\ell_2$-norms (not necessarily squared), or the $\ell_\infty$-norm? While I understand that other choices might not yield nice closed-form expressions which can be analyzed, if this is the reason, then it should be mentioned that this particular choice of optimization problem makes the analysis tractable. And possibly later, when the authors (or someone else) come up with lower bounds for this problem, it will reveal the optimality of any strategy.
Secondly, I agree that quantization and sparsification are not entirely different lines of work. A practical implementation will always employ quantization (even if it is the $32$ or $64$-bit single / double precision format). I believe it is important to discuss this work in the context of quantization as well, because if the eventual goal of any work is to be employed for communication compression, both quantization and sparsification would likely be employed and might involve tradeoffs. I agree this is not the core contribution of this work, but in my opinion, it does warrant a discussion -- for example, is a simple concatenation of this strategy with quantization good enough (it's quite likely not optimal)? So future work might involve coming up with a joint strategy. It is my opinion that elaborating on and acknowledging this connection would help put the work in better context -- but once again, this is not critical and the authors may or may not agree with this opinion.
Thirdly, regarding Limitation 1, I agree a lot of existing works do consider the existence of perfect shared randomness while doing the analysis, but practical implementations would require PRNG seeds to simulate randomness (which cannot be perfect unless at least $\log_2 d$ bits are exchanged -- which is the entropy of $\mathbf{H}$). I understand the gap between theory and practical implementation is not straightforward to resolve, and just ask the authors to point this out as a limitation/assumption of perfect shared randomness.
Fourthly, yes, it is my opinion that a more descriptive paper title is important so as not to be misleading. Something that brings out the fact that correlation between the random projections are exploited for the purpose of compression for mean estimation.
Finally, do the authors have any response to limitation 2? Is it possible to state the high probability result purely in terms of expectations over all sources of randomness? If I understand correctly, currently in Thm 4.3 the expectation is over the randomness in rand-$k$, and the $o(1)$ probability is over the construction of the random projections $\mathbf{G}_i$.
---
Reply to Comment 1.1.1:
Comment: Sorry for posting this response late. Let me respond to Limitation 2: $1-o(1)$ probability bound vs. in expectation bound first.
The $1-o(1)$ high probability part comes from the fact that the MSE stated in Theorem 4.3 holds when $\text{rank}(\textbf{S}) = nk$, where $\textbf{S} = \sum_{i=1}^{n} \textbf{G}_i^T \textbf{G}_i$ and the $\textbf{G}_i$'s are the random SRHT matrices generated by each client.
With a slightly tighter analysis, we can get rid of the $o(1)$ part and obtain an in-expectation bound on MSE(Rand-Proj-Spatial) under max client correlation as follows. Let $\delta$ be the probability that $\textbf{S}$ does not have full rank (i.e., $\text{rank}(\textbf{S}) < nk$), and let $\delta_c$ be the probability that $\textbf{S}$ has rank $c \in \\{k, k+1,\dots, nk-1 \\}$. Note that $\delta = \sum_{c=k}^{nk-1} \delta_c$. Now, following similar steps as in the proof of Theorem 4.3 in Appendix C:
$\textbf{Computing $\bar{\beta}$}.$
First, to ensure that our estimator $\widehat{\textbf{x}}$ is unbiased, we need $\bar{\beta} \mathbb{E}[\mathbf{S}^{\dagger}\mathbf{S} \mathbf{x}] = \mathbf{x}$. Consequently,
$
\mathbf{x} = \bar{\beta} \mathbb{E}[\mathbf{U} \Lambda^{\dagger} \mathbf{U}^T \mathbf{U} \Lambda \mathbf{U}^T] \mathbf{x}
$
$
= \bar{\beta} \left[ \sum_{\mathbf{U} = \Phi} \Pr[\mathbf{U} = \Phi] \mathbf{U} \mathbb{E}[\Lambda^{\dagger}\Lambda \mid \mathbf{U} = \Phi] \mathbf{U}^T \right] \mathbf{x}
$
$
\overset{(a)}{=} \bar{\beta} \left[ \sum_{\textbf{U} = \Phi} \Pr[\textbf{U} = \Phi] \textbf{U} \mathbb{E}[ \text{diag}(\mathbf{m}) \mid \textbf{U} = \Phi] \textbf{U}^T \right] \textbf{x}
$
$
\overset{(b)}{=} \bar{\beta} \sum_{\textbf{U} = \Phi} \Pr[\textbf{U} = \Phi] \textbf{U} \left[ (1-\delta) \frac{nk}{d} \mathbf{I}_d + \sum_{c=k}^{nk-1} \delta_c \frac{c}{d} \mathbf{I}_d \right] \textbf{U}^T \textbf{x}
$
(For $c < nk$, $\mathbb P(\text{rank}(\text{diag}(\mathbf{m})) = c) = \delta_c$ and $\sum_{c=k}^{nk-1} \delta_c = \delta$)
$
= \bar{\beta} \left[ (1-\delta) \frac{nk}{d} + \sum_{c=k}^{nk-1} \delta_c \frac{c}{d} \right] \textbf{x}
$
$
\Rightarrow \bar{\beta} = \frac{d}{(1-\delta) n k + \sum_{c=k}^{nk-1} \delta_c c}
$
where in $(a)$, $\mathbf{m} \in \mathbb{R}^d$ is such that $\mathbf{m}_j = 1$ if $\Lambda_{jj} > 0$ for $j \in \\{1,2,\dots,d\\}$ and $\mathbf{m}_j = 0$ otherwise.
Also, by construction of $\mathbf{S}$, $\text{rank}(\text{diag}(\mathbf{m})) \leq nk$. Further, $(b)$ follows by symmetry across the $d$ dimensions.
Since $\delta k \leq \sum_{c=k}^{nk-1} \delta_c c \leq \delta (nk-1)$, we have $\frac{d}{(1-\delta) n k + \delta (nk-1)} \leq \bar{\beta} \leq \frac{d}{(1-\delta) n k + \delta k}$.
$\textbf{Computing the MSE.}$ Next, we use the value of $\bar{\beta}$ derived above to compute the MSE of Rand-Proj-Spatial.
$
MSE = \mathbb{E}[\lVert\widehat{\textbf{x}} - \bar{\textbf{x}}\rVert_2^2]
= \mathbb{E}[\lVert \bar{\beta}\mathbf{S}^{\dagger}\mathbf{S} \textbf{x} - \textbf{x}\rVert_2^2]
$
$
= \bar{\beta}^2\mathbb{E}[\lVert\mathbf{S}^{\dagger}\mathbf{S} \textbf{x} \rVert_2^2] + \lVert\textbf{x}\rVert_2^2 - 2\Big\langle \bar{\beta} \mathbb{E}[\mathbf{S}^{\dagger} \mathbf{S}\textbf{x}], \textbf{x} \Big\rangle
$
$
=\bar{\beta}^2\mathbb{E}[\lVert\mathbf{S}^{\dagger}\mathbf{S} \textbf{x} \rVert_2^2] - \lVert\textbf{x}\rVert_2^2
$
(Using unbiasedness of $\widehat{\textbf{x}}$)
$
= \bar{\beta}^2\textbf{x}^T\mathbb{E}[\mathbf{S}^T (\mathbf{S}^{\dagger})^T \mathbf{S}^{\dagger}\mathbf{S}] \textbf{x} - \lVert\textbf{x}\rVert_2^2
$
Using $\mathbf{S}^{\dagger} = \textbf{U}\Lambda^{\dagger}\textbf{U}^T$,
$
\textbf{x}^T \mathbb{E}[\mathbf{S}^T (\mathbf{S}^{\dagger})^T \mathbf{S}^{\dagger}\mathbf{S}] \textbf{x}
= \textbf{x}^T \mathbb{E}[\textbf{U} \Lambda \textbf{U}^T \textbf{U} \Lambda^{\dagger} \textbf{U}^T \textbf{U} \Lambda^{\dagger} \textbf{U}^T \textbf{U} \Lambda \textbf{U}^T] \textbf{x}
$
$
=\textbf{x}^T\mathbb{E}[\textbf{U} \Lambda (\Lambda^{\dagger})^{2} \Lambda \textbf{U}^T] \textbf{x}
$
$
= \sum_{\textbf{U} = \Phi} \textbf{x}^T\textbf{U} \mathbb{E}[\Lambda (\Lambda^{\dagger})^{2} \Lambda] \textbf{U}^{T} \textbf{x} \cdot \Pr[\textbf{U} = \Phi]
$
$
= \sum_{\textbf{U} = \Phi} \textbf{x}^T \textbf{U} \left[ (1-\delta) \frac{nk}{d} \mathbf{I}_d + \sum_{c=k}^{nk-1} \delta_c \frac{c}{d} \mathbf{I}_d \right] \textbf{U}^{T} \textbf{x} \cdot \Pr[\textbf{U} = \Phi]
$
$
= \left[ (1-\delta) \frac{nk}{d} + \sum_{c=k}^{nk-1} \delta_c \frac{c}{d} \right] \lVert\textbf{x}\rVert_2^2 = \frac{1}{\bar{\beta}} \lVert\textbf{x}\rVert_2^2
$
Therefore,
$
MSE(\text{Rand-Proj-Spatial}) = (\bar{\beta} - 1) \lVert\textbf{x}\rVert_2^2 \leq \left[ \frac{d}{(1-\delta) n k + \delta k} - 1 \right] \lVert\textbf{x}\rVert_2^2
$ | Summary: In distributed learning, computing the mean of the vectors sent by the clients is an important subtask. Motivated by this, the distributed mean estimation problem is studied in this paper. The two main techniques commonly used for these problems are quantization and sparsification. Rand-$k$ was one prominent sparsification method used earlier. In [9], this was generalized to Rand-$k$-Spatial to exploit the spatial correlations of the data using a server-side decoding procedure. A more general encoding scheme, Rand-Proj-Spatial, is proposed in this paper, which utilizes cross-client correlation information. This uses the Subsampled Randomized Hadamard Transform as random linear maps. They prove that this improves mean estimation under some assumptions and provide experiments supporting their claim.
Strengths: The paper is well presented. The problem generalizes the existing works and the results look interesting.
Weaknesses: Their techniques are not adaptive to the client vectors. More motivation on why studying the correlation information of clients is practical would be welcome.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is Rand-Proj-Spatial optimal or close to optimal, given the correlation information? Some intuition will be good.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: We gave the reason why we want our algorithm to be $\textit{non-adaptive}$ to client vectors in Section 1, lines 82-89.
The motivation for studying correlation information of clients is well explained and experimentally demonstrated in the prior work [1]. For example, in distributed optimization, clients that are geographically close often have similar data, leading to similar model parameters.
Q1:
When there is no correlation, the proposed Rand-Proj-Spatial with SRHT is very likely to be the optimal $\textit{unbiased}$ and $\textit{non-adaptive}$ encoder. Recall in this case, Rand-Proj-Spatial recovers Rand-$k$.
The evidence of Rand-$k$ being the optimal $\textit{unbiased}$ and $\textit{non-adaptive}$ sparsification-based encoder comes from Section 6 ``Optimal Encoders'' in [2]. If one constrains the probabilities $p_{ij}$ across the clients and coordinates in the optimization problem in Section 6, which finds the best unbiased estimator that minimizes MSE, to be the same (i.e., $\textit{non-adaptive}$), then the solution to this problem is exactly Rand-$k$.
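For concreteness, the Rand-$k$ encode/decode loop can be sketched as follows (an illustrative sketch of the standard scheme, not the paper's code; the $d/k$ rescaling is what makes the estimate unbiased):

```python
import numpy as np

def rand_k_encode(x, k, rng):
    # Client side: keep k uniformly sampled coordinates of x, send (indices, values).
    idx = rng.choice(len(x), size=k, replace=False)
    return idx, x[idx]

def rand_k_decode(idx, vals, d, k):
    # Server side: rescale the kept coordinates by d/k so that E[x_hat] = x.
    x_hat = np.zeros(d)
    x_hat[idx] = (d / k) * vals
    return x_hat
```

Averaging such decoded vectors over clients gives the unbiased Rand-$k$ mean estimate; Rand-$k$-Spatial and Rand-Proj-Spatial change only the server-side decoding.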
When there is max correlation, the proposed Rand-Proj-Spatial is also very likely to be optimal, since by Eq. (6) in our paper, the error of the estimator in this case essentially depends on the rank of $\mathbf{S}$. Rand-Proj-Spatial with SRHT makes $\mathbf{S}$ achieve the maximum rank, i.e., $nk$.
When the correlation is in between max and none, it is hard to derive a closed form expression for MSE. We do not know whether the current way of incorporating varying degrees of correlation by applying a transformation function $T$, which interpolates between max and no correlation, on the eigenvalues of $\mathbf{S}$, is optimal. However, the prior work [1] shows $T$ is the optimal transformation in the special case of our estimator, i.e., in Rand-$k$-Spatial.
Hence, we believe the transformation function $T$ in Rand-Proj-Spatial, which uses a similar way to address the degrees of correlation in between max and none, should be at least close to optimal.
---------
[1] Divyansh Jhunjhunwala, Ankur Mallick, Advait Harshal Gadhikar, Swanand Kadhe, and Gauri Joshi. ``Leveraging spatial and temporal correlations in sparsified mean estimation.'' In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
[2] Jakub Konecny and Peter Richtárik. ``Randomized distributed mean estimation: Accuracy vs. communication.'' Frontiers in Applied Mathematics and Statistics, 4:62, 2018. | Rebuttal 1:
Rebuttal: 1. $\textbf{Quantization vs. Sparsification.}$ There are two major techniques to reduce the communication cost of distributed vector mean estimation (DME): vector quantization and sparsification. The two techniques are orthogonal to each other. Vector quantization reduces the number of bits used to represent each coordinate, while vector sparsification reduces the number of coordinates each client sends. As a result, the communication cost of quantization is on the order of $O(d)$, while the communication cost of sparsification is on the order of $O(k)$ for some chosen $k \ll d$. In short, sparsification reduces the communication cost more aggressively than quantization. Note that Rand-$k$-Spatial and the proposed Rand-Proj-Spatial fall under sparsification, where the focus of algorithm design is on reducing the number of coordinates, while the works mentioned by the reviewers, i.e., [1], [2] ([33] in the draft) and [3], all fall under quantization, where the focus is on reducing the number of bits per coordinate. We can definitely include more quantization works in the citations and highlight them as an orthogonal line of work.
In practice, one can apply a combination of quantization and sparsification techniques to reduce communication cost. Quantizing the $k$ values sent by each worker, for example, further reduces the number of bits per coordinate, while the communication cost remains $O(k)$, and incurs additional communication-accuracy tradeoffs.
To the best of our knowledge, the SOTA $\textit{unbiased}$ and $\textit{non-adaptive}$ sparsification methods are Rand-$k$ and Rand-$k$-Spatial, which serve as two baselines of our proposed Rand-Proj-Spatial. In the experiments, we also compare Rand-Proj-Spatial against two SOTA $\textit{unbiased}$ and $\textit{adaptive}$ sparsification techniques.
-----
[1] Peter Davies, Vijaykrishna Gurunathan, Niusha Moshref, Saleh Ashkboos. ``New Bounds For Distributed Mean Estimation and Variance Reduction''. ICLR 2021.
[2] Shay Vargaftik, Ran Ben Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, Michael Mitzenmacher. ``EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning''. ICML 2022.
[3] Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, H. Brendan McMahan. ``Distributed mean estimation with limited communication.''. ICML 2017.
-----
2. $\textbf{Computation Time. }$
The encoding time of Rand-Proj-Spatial is $O(kd)$. The decoding time is $O(d^2 \cdot n \cdot k)$, which is the time to compute the eigendecomposition of the $d \times d$ matrix $T(\sum_{i=1}^{n} \mathbf{G}_i^T \mathbf{G}_i)$ of rank at most $nk$; this is the computational bottleneck of the decoding process. In practice, the server has far more computational power compared to the clients (e.g., edge devices), and so it can afford to spend more time on decoding.
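To make the encoder concrete, here is a minimal SRHT construction (our own sketch, assuming the dimension $d$ is a power of two; the sign-diagonal/Hadamard/row-subsampling structure is standard, but the variable names are ours):

```python
import numpy as np

def hadamard(d):
    # Sylvester construction of the d x d (unnormalized) Hadamard matrix; d must be a power of two.
    H = np.ones((1, 1))
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

def srht(d, k, rng):
    # S = sqrt(d/k) * R H D: D is a random sign diagonal, H the orthonormal
    # Hadamard matrix, and R keeps k uniformly sampled rows, so E[S^T S] = I_d.
    signs = rng.choice([-1.0, 1.0], size=d)      # D
    H = hadamard(d) / np.sqrt(d)                 # orthonormal H
    rows = rng.choice(d, size=k, replace=False)  # R
    return np.sqrt(d / k) * H[rows] * signs      # shape (k, d)
```

In practice the fast $O(d \log d)$ Hadamard transform avoids ever materializing $H$; the dense matrix here is only for clarity.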
We will make the computation time clearer and stress this as a limitation.
We also attached additional experimental results on the encoding and decoding time for power iteration and linear regression. As one can observe, when the number of clients $n$ is large, the $\textit{adaptive}$ encoder Rand-$k$-Wangni takes a longer time to encode compared to the other $\textit{non-adaptive}$ encoders.
Pdf: /pdf/db23725a3da1e87999472709598c39ff667703e8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper discusses the problem of communication-efficient distributed vector mean estimation and proposes a new estimator called Rand-Proj-Spatial that improves upon existing techniques. The paper highlights the challenges of distributed optimization and Federated Learning, where communication cost can be a bottleneck, and explains how sparsification techniques can reduce this cost. The authors then introduce the Rand-k sparsification technique and show how it can be used to reduce communication cost in distributed vector mean estimation. They also propose the Rand-Proj-Spatial estimator, which combines Rand-k sparsification with a Subsampled Randomized Hadamard Transform to improve upon the performance of existing techniques. The paper concludes with experimental results that demonstrate the effectiveness of the Rand-Proj-Spatial estimator in various scenarios. Overall, the contributions of this paper include a new estimator for distributed vector mean estimation that is more communication-efficient than existing techniques, as well as insights into the challenges of communication-efficient distributed optimization and Federated Learning.
Strengths: In terms of originality, the paper proposes a new estimator called Rand-Proj-Spatial that combines existing techniques in a novel way to improve upon the performance of distributed vector mean estimation. The use of Subsampled Randomized Hadamard Transform in the encoding-decoding procedure is a creative combination of existing ideas that has not been explored in this context before.
In terms of quality, the paper is well-written and clearly explains the challenges of communication-efficient distributed vector mean estimation and the proposed solutions. The authors provide rigorous theoretical analysis and experimental results to support their claims, which adds to the credibility of their contributions.
In terms of clarity, the paper is well-organized and easy to follow. The authors provide clear explanations of the technical concepts and use visual aids to help readers understand the proposed techniques.
In terms of significance, the paper addresses an important problem in the field of distributed optimization and Federated Learning. The proposed Rand-Proj-Spatial estimator has the potential to significantly reduce communication cost in these settings, which can have practical implications for real-world applications.
Weaknesses: 1. While the paper compare the proposed Rand-Proj-Spatial estimator with Rand-k-Spatial and other sparsification techniques, it would be beneficial to compare the proposed estimator with other state-of-the-art techniques for distributed vector mean estimation.
2. While the paper briefly mentions that the Subsampled Randomized Hadamard Transform can be computed efficiently, it would be helpful to provide a more detailed analysis of the computational complexity of the proposed estimator.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the key difficulty/novelty in the design and analysis?
2. Can you provide more detailed analysis of the trade-off between communication cost and estimation accuracy for the proposed estimator and other sparsification techniques?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: See common response $\textbf{Quantization vs. Sparsification}$.
W2: See common response $\textbf{Computation Time}$.
Q1: $\textit{Two key novelties:}$ 1) the design of a general encoding-decoding algorithm that generalizes sparsification techniques and enables one to better use correlation among the client vectors to improve MSE compared to the prior work. 2) Previously, SRHT was mostly applied to reduce run time or to homogenize vector coordinates (i.e., to reduce variance). We propose a novel application of SRHT to better leverage cross-client correlation.
$\textit{Key difficulty in analysis:}$ As we have discussed in Section 4.3 and in Appendix B.2, it is hard to derive closed-form expression of MSE for Rand-Proj-Spatial with SRHT in the general case when the cross-client correlation is in between max correlation and completely orthogonal.
This is because the analysis requires a closed form expression for the non-asymptotic distributions of eigenvalues of SRHT, which is a hard problem by itself.
To the best of our knowledge, previous analyses of SRHT, rely on the asymptotic properties of SRHT, such as the limiting eigen spectrum, or concentration bounds on the singular values, to derive asymptotic or approximate guarantees. But we need an exact non-asymptotic analysis; or the resulting MSE bound is too loose.
In Appendix B.2, we also included several ideas we considered but did not work for deriving a closed-form MSE expression.
Q2: $\textit{Comparison to biased sparsification based estimators.}$
It is in general not fair to compare unbiased estimators, including our proposed Rand-Proj-Spatial, against biased estimators, such as Top-$k$. As noted in the prior distributed optimization literature [1] and mentioned in Section 1, line 81, unbiased estimators are preferable to biased ones.
$\textit{Comparison to unbiased sparsification based estimators.}$
When the client vectors have max correlation and have no correlation, we give detailed analysis in Section 4.1 and Section 4.2 to compare the communication cost-estimation accuracy tradeoffs of our estimator against that of the known sparsification based unbiased estimators, e.g., Rand-$k$ and Rand-$k$-Proj.
When the client vectors have a correlation in between max and none, as we responded in Q1, the analysis is difficult, and so we empirically compared the communication-estimation accuracy tradeoffs in Figure 3. Note that Figure 3 presents the estimation error with the communication cost fixed across all estimators. One can see that the error of the proposed Rand-Proj-Spatial is the lowest, which implies Rand-Proj-Spatial has the best communication-accuracy tradeoff.
----------
[1] Samuel Horváth and Peter Richtarik. ``A better alternative to error feedback for communication- efficient distributed learning.'' In International Conference on Learning Representations, 2021.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for addressing my concerns. I will be raising my score from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for providing valuable feedback! | null | null | null | null | null | null |
Score-based Data Assimilation | Accept (poster) | Summary: This paper proposes an approach to solve data-assimilation tasks using techniques from score-based generative modeling. Given observations and trajectories from the data distribution, a posterior distribution over an entire state-trajectory is to be computed. The score function of the posterior is estimated using a learned prior-score network and a Gaussian approximation to the observation model. To make the score-based approach scale to long state trajectories, the authors propose to estimate the prior score over the entire trajectory as a combination of scores computed on local time windows. The theoretical groundwork is backed by a set of experiments in simulated environments.
Strengths: - The presentation of the method is clear and understandable
- This approach to data assimilation is unlike any I have seen before. It combines two established techniques in a natural way without bending one or the other out of shape. The method yielded by said combination appears natural and it seems valuable to test it on real-world applications.
- The authors provide clean code in their supplementary material
- The limitation section answered open questions I had while reading the paper. It lists approximations made and work that is left to do in the future.
- My feeling is that this is the ground work to a fairly interesting and useful approach to inference in dynamical systems
Weaknesses: - From my understanding, the method has yet to be tested on (settings on the scale of) actual real-world problems. Concretely, the experiments shown -- though convincing -- are rather small-scale, considering that data assimilation regularly has to tackle problems with several millions of state variables. This is not necessarily to be expected when presenting a novel technique, but the Kolmogorov flow experiment left me wondering if there was something standing in the way of choosing a grid that is larger than 64 x 64.
(Note: This is not asking for an additional experiment, but rather my opinion that a statement regarding scalability of the approach would be a good addition to the manuscript.)
- I suspect that -- as is -- the amount of training data that would have to be generated for very large-scale tasks could be computationally quite infeasible
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Is the approach scalable to massive-scale real-world problems, such as weather forecasting?
- Is it feasible to generate as many training trajectories as would be needed for such complex dynamics?
- Was the performance of the method studied under dynamics that develop turbulences or even shocks (such as, e.g., Burgers' equation)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - Limitations were stated in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your enthusiastic review and for acknowledging the code provided in the supplementary material. We hope that the following answers will satisfy you.
**Weaknesses & Limitations**
> From my understanding, the method has yet to be tested on (settings on the scale of) actual real-world problems. [...] a statement regarding scalability of the approach would be a good addition to the manuscript.
This is a good point. We propose to add the following paragraph in Section 5.
* Concerning our experiments, although the Kolmogorov system is already high-dimensional (tens of thousands of dimensions) with respect to what is approachable with classical posterior inference methods, it remains small in comparison to the millions of dimensions of some operational DA systems. Whether SDA would scale well to such applications is an open question and will obviously present serious engineering challenges.
Nevertheless, we would like to mention that operational DA is usually performed on clusters with hundreds to thousands of CPUs/GPUs, while training and inference in our experiments were performed with a single GPU. We are actively working on/looking for collaborations with weather and oceanic research centers to apply SDA on operational data.
> I suspect that -- as is -- the amount of training data that would have to be generated for very large-scale tasks could be computationally quite infeasible.
In most dynamical systems, many symmetries and inductive biases (e.g. equivariances) can be exploited to reduce the amount of data necessary for training. This is even more true with SDA, as we train on short segments instead of complete trajectories. In fact, for the Kolmogorov system the full training set consists of 820 trajectories of 64 states, which is ridiculously small in comparison to the amounts of data typically used to train image/video generation models.
**Questions**
> Was the performance of the method studied under dynamics that develop turbulences or even shocks (such as, e.g., Burgers' equation)?
The Kolmogorov system considers the state of a two-dimensional turbulent fluid. We did not apply SDA on systems presenting shocks, but this would be an interesting application.
---
Rebuttal Comment 1.1:
Comment: Many thanks for your response and for acknowledging the open question of scalability in your added paragraph. Based on my current assessment I continue to argue in favor of acceptance and will maintain my initial score. | Summary: This paper presents a technique to tackle data assimilation problems (inverse problems involving partially observed time series with a dynamical model acting as a prior over the reconstructed states) using a score based model to estimate the prior distribution $p(x)$ from data. The method used, at its core, leverages a now well-known capability of unconditional score based models to address inverse problems by incorporating the score of the likelihood into the generating process. This makes the approach appealing in zero-shot problems, where a new inverse problem with the same dynamical prior can be solved without any retraining. The paper is the first (to my knowledge) to adapt these techniques for data assimilation. It further proposes a few technical contributions on score based modeling : i) the capacity to exploit the Markov structure of dynamical systems data to train the score based model by small time chunks rather than a global one for the whole trajectories ii) A new approximation technique to estimate the score of the likelihood term $p(y|x(t))$, which is no trivial task iii) and a predictor corrector sample technique to improve data generation by incorporating a few Langevin Monte Carlo sampling iterations in between the steps of the SDE solver. The method is then tested on various assimilation tasks on two datasets coming from the Lorenz-63 system (in a quantitative way since the posterior can be arbitrarily well approximated by a Bootstrap Particle Filter) and Kolmogorov fluid flow (in a qualitative way).
Strengths: - The paper gives a clear exposition of score based methods and their use to solve inverse problems. It also reviews/improves a number of techniques used for efficient training of the model and incorporation of the inverse problem likelihood into the score function.
- This seems to be the first application of score based models for posterior sampling in data assimilation, and it shows nice potential for those applications (emulating the physical model without the need to differentiate through the whole dynamical model and a solver, and reusing it for any assimilation problem using the same dynamics). The applications tackled, although relatively simple (in their dynamics), lead to think this could be applied to more ambitious situations in the future.
- The paper presents three additional technical contributions useful for the generative modeling community as a whole, improving sample generation/training local score based models adapted to Markov chains/improving the approximation of the score of the likelihood. These contributions and the associated claims are globally well supported by the experiments.
Weaknesses: - Section 3.1 should be a bit expanded, I believe. In particular, using the blanket to compute equations (11) and (12) could be a bit more detailed, and the link between the result of appendix A and the unbiasedness of the estimate (13) could be a bit explicited. It would be nice also to have an idea of how the value of k affects training time. However, the idea is sound and the approximation looks okay in the results on the Lorenz system. So overall, the blanket is good.
- More generally, there is a lot of emphasis in the paper on making the model easier to train/less expensive, less memory consuming or less prone to technical/computational difficulties (e.g. differentiating through the physical model and its associated solver), but none of this is actually quantified. For example, it would be quite interesting to test the effect of the value of $k$ on the training time compared to a naive global model, to quantify the effect of the number of correction steps $C$ on running time instead of just the quality of the solutions.
- In addition, the paper would benefit from the comparison to more standard data assimilation techniques. This is done on the Lorenz data since a BPF is used to sample from the true posterior, but it would be nice to have an idea of the quality of the SDA output with respect to say, a classical variational assimilation with differentiation through the model (both in performance and running time). The training time should be reported, as well as the inference time, to be compared with the computation cost of optimizing a variational assimilation cost function.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors highlight the difference/added value between their predictor-corrector sampler and the one in [24] ? The rationale seems very similar even though the equations differ. Please explain.
- In figure 4, would it be possible to quantify the closeness to the observations quantitatively, since this is the qualitative way of assessing the result on the Kolmogorov flow experiment? And compare it to what a competitor technique would output.
- Similarly, it would be interesting to show several posterior samples on one of these experiments to have an idea of whether the model exhibits a multimodal posterior in this situation (I would expect this e.g. in the super-resolution and extrapolation applications). An analog of figure 3 for the second case study, basically.
- Maybe a more detailed explanation of Tweedie’s formula and the computation of equations (9) and (10) would be nice to make the paper more self-contained.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - To me the main limitation is the lack of information about the practical impact of the contributions on the Markov blanket/correction step on the running time/memory in addition to simply their approximation capability. See above for more detailed remarks on this.
- As mentioned by the authors, the dynamical model is assumed to be perfectly known or at the very least fixed. However, this is not true for the observation model, which makes the proposed method versatile as long as one is interested in various inverse problems with the same dynamical model. An interesting next step would be to see how to enable fitting/fine tuning model parameters, but since here the model is never used but rather emulated, this seems like a difficult endeavor. By the way, going through an emulator of the dynamics rather than having to run/differentiate the model could also be an advantage for complex models that cannot be easily made differentiable (as is often the case for operational models), provided they can be emulated successfully by a score based model.
- On a related note, another score based model training is required in case of change in dynamical model (but this is the same for variational data assimilation techniques)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your in-depth review and feedback. Most of your concerns are sensible and will be addressed in the manuscript.
**Weaknesses & Limitations**
> In particular, using the blanket to compute equations (11) and (12) could be a bit more detailed, and the link between the result of appendix A and the unbiasedness of the estimate (13) could be a bit explicited. [...] Maybe a more detailed explanation of Tweedie’s formula and the computation of equations (9) and (10) would be nice to make the paper more self-contained.
We will motivate and explain these equations more explicitly in the manuscript.
> There is a lot of emphasis in the paper on making the model easier to train/less expensive, less memory consuming or less prone to technical/computational difficulties, but none of this is actually quantified.
Indeed, the primary goal of Section 3.1 is to make the training of the prior score easier/less expensive. If we were to train a (global) score network on trajectories $x_{1:L}$, we would perform amounts of computation proportional to $L$, which could be huge depending on the dimensionality of the states $x_i$. Conversely, training a local score network only requires segments $x_{i-k:i+k}$ instead of complete trajectories $x_{1:L}$. An immediate consequence is that the cost of training is drastically reduced; roughly by a factor $\frac{L}{2k + 1}$. A more subtle consequence is that the decomposition of the global score into local scores imposes a form of translational equivariance, which reduces the amounts of training data required.
We propose to replace (lines 143-144)
"Still, in some settings where each element $x_i$ is high-dimensional, building and training an FCNN for the full chain $x_{1:L}$ could be technically difficult. As a more tractable alternative, we propose to train a local score network"
with
"Importantly, if the receptive field is $2k + 1$, the network can be trained on segments $x_{i-k:i+k}$ instead of the complete chain $x_{1:L}$, thereby drastically reducing training costs. More generally, we can train a local score network"
to make the benefits less ambiguous.
At inference, the decomposition of the global score into local scores additionally allows for better parallelization, which could be significant depending on available hardware. Concerning the number of corrections, $C$ (see Algorithm 4) has a linear impact on the number of network evaluations at inference, and would have the same impact with a global score network. Hence, doubling the number of corrections roughly doubles the inference time, but not the memory consumption, which stays constant.
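As a schematic of the local-score idea (our own toy stitching rule; the paper's exact composition in Eqs. (11)-(12) differs):

```python
import numpy as np

def compose_score(x, local_score, k):
    # Approximate the score of a full trajectory x of shape (L, D) with a
    # local score network trained on windows of length 2k+1: each state's
    # score is read off a window (roughly) centred on it, clipped at the
    # boundaries. Illustrative only; the paper's stitching rule differs.
    L = x.shape[0]
    s = np.zeros_like(x)
    for i in range(L):
        lo = min(max(0, i - k), L - (2 * k + 1))
        window = x[lo:lo + 2 * k + 1]
        s[i] = local_score(window)[i - lo]
    return s
```

Only windows of $2k+1$ states ever enter the network, which is what reduces training and memory cost relative to a global score over all $L$ states, and the per-window evaluations are embarrassingly parallel.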
> In addition, the paper would benefit from the comparison to more standard data assimilation techniques.
We 100% agree with you. Unfortunately, we were only able to find ad-hoc implementations of 3D-Var and 4D-Var, with barely any documentation. If you point us to a relatively easy-to-use implementation, we'd be happy to oblige. We considered implementing 4D-Var ourselves, but we quickly realized that this would be very hard (which highlights the usefulness of SDA) and beyond the scope of this work.
> As mentioned by the authors, the dynamical model is assumed to be perfectly known or at the very least fixed. [...]
This is correct, but we invite you to consult the clarifications concerning this limitation in the global rebuttal.
**Questions**
> Can the authors highlight the difference/added value between their predictor-corrector sampler and the one in [24] ?
The difference between our predictor-corrector sampler and the one from Song et al. [24] (Algorithms 2 and 3) is the predictor. We employ a predictor called exponential integrator introduced by Zhang et al. [26]. Note that we do not claim our predictor-corrector sampling to be a novel contribution.
> Would it be possible to quantify the closeness to the observations quantitatively [...] ?
Because the observation process $p(y | x_{1:L}) = \mathcal{N}(y | \mathcal{A}(x), \Sigma_y)$ is known, we can measure the likelihood of an observation with respect to an inferred trajectory. However, the likelihood of a single pair $(x_{1:L}, y)$ cannot be interpreted in isolation. Instead, as is done for the Lorenz 63 system (see Figure 2), we can compute the expected log-likelihood $E_{q(x_{1:L} | y) p(y)} [ \log p(y | x_{1:L}) ]$, which should be equal to minus the entropy of $\mathcal{N}(0, \Sigma_y)$. In the case of the low and high-frequency observation processes, the entropies are respectively $\frac{9}{2} \log(2 \pi e \cdot 0.05^2) \approx -14.191$ and $\frac{65}{2} \log(2 \pi e \cdot 0.25^2) \approx 2.122$, which matches what we observe in Figure 2. It should be noted, however, that this equality is a necessary but not sufficient condition for exact posterior inference.
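The quoted entropy values follow from the closed form for a Gaussian, $H = \frac{d}{2}\log(2\pi e \sigma^2)$; a quick numerical check with the dimensions (9 and 65) and noise levels (0.05 and 0.25) given above:

```python
import math

def gaussian_entropy(d, sigma):
    # Differential entropy of N(0, sigma^2 I_d): (d / 2) * log(2 * pi * e * sigma^2)
    return 0.5 * d * math.log(2 * math.pi * math.e * sigma ** 2)

print(gaussian_entropy(9, 0.05))   # ≈ -14.191
print(gaussian_entropy(65, 0.25))  # ≈ 2.122
```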
We will add a new section in Appendix computing this expected likelihood for the Kolmogorov system.
> It would be interesting to show several posterior samples on one of these experiments to have an idea of whether the model exhibits a multimodal posterior in this situation [...]
We will add the suggested examples in the Appendix. | Summary: The authors propose a score-based data assimilation framework that relies
on score based generative modeling for trajectory inference/state estimation
of a dynamical model described by an SDE.
To make the procedure efficient, the authors employ three novelties, partly
adopted from existing literature.
They train a score based generative model on short segments of trajectories, and decompose the posterior score into a prior score and a likelihood score, and approximate the latter with a Gaussian assuming a Gaussian observation process.
For approximating the prior score, they rely on the Markovian nature of the system and employ local (in time)
prior score approximations.
They present their proposed approach on two non-trivial model systems.
I find the contribution interesting; however, the novelty and the overall ambition
of the project are slightly limited for a conference like NeurIPS. In particular,
I would consider it more reasonable to try to both identify
the dynamics and estimate the state of the model, since in most practical
scenarios it is the underlying model dynamics that are unknown. In that respect, I find the
use of the word inference throughout the text misleading, since the actual problem that
is solved here is not model identification but state estimation and prediction.
Overall, I find the paper quite well written and technically sound. However, the novelty and the contribution relative to the
existing literature are relatively low compared to existing approaches.
Strengths: - the approach does not require simulation through the model equations, that can in principle become too computationally demanding for large systems.
- well written manuscript and nice presentation.
- the proposed method provides an approximation to the entire posterior of the path of the system instead of a point estimate, that in principle allows for further Bayesian inference of the underlying dynamical model.
Weaknesses: - The observation model employed is rather trivial (Gaussian), yet the derivation relies on this assumption. Have you tested the approach with different observation models to test how well this Gaussian approximation holds?
- To my understanding, the trained system is able to generate only
trajectories in equilibrium, and it would fail in non-equilibrium scenarios.
- The authors mention that the introduced approximations introduce errors in the system
but I would expect them to at least quantify the resulting errors
numerically, if theoretical predictions cannot be formulated.
- The authors assume known physical model (both the parametric form and the parameters) that
considerably limits the applicability of the framework for practical applications.
- To my understanding, the noise employed in the numerical experiments is very small, and thereby one could treat the system as deterministic.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - the system under consideration is assumed to be stochastic; however, in Fig. 2C the trajectories seem
pretty deterministic to me. I assume that the individual non-opaque trajectories are individual realisations.
Is the employed noise amplitude 0.025? I consider this negligible noise, especially considering the dynamic range of the system.
- can you quantify either theoretically or empirically the errors incurred by each of the approximations employed in the proposed solution.
- have you tried to predict non-stationary trajectories with the proposed framework? I would think that already the score function estimator
would struggle considerably in this scenario.
- doesn't Eq. 12 only hold in expectation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - the proposed framework assumes an already known dynamical model which is a rather limiting scenario considering both state of the art frameworks and practical applicability.
- according to my understanding, the employed noise in the experiments is weak and the system could be effectively treated as deterministic
- the method is limited only to Gaussian observation model settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and for reading the manuscript in detail. We hope that the following answers will satisfy you.
**Weaknesses & Limitations**
> [...] However, the novelty and the contribution in the existing literature is relatively low compared to existing approaches.
We will address the lack of discussion about the significance of the contributions in Section 5 (Conclusion). We invite you to consult the global rebuttal regarding the novelty of the contributions.
> The authors assume known physical model (both the parametric form and the parameters) that considerably limits the applicability of the framework for practical applications.
We believe this concern is due to some misleading statements in Section 5 (Conclusion), which will be addressed. We invite you to consult the global rebuttal regarding the assumption of a known physical model.
> The observation model employed is rather trivial (Gaussian), yet the derivation relies on this assumption.
Indeed, the observation process is assumed to be Gaussian. However, we argue that this assumption is by no means trivial. It covers all observations that can be expressed as "measurement + Gaussian noise", which corresponds to many real-life scenarios, including some of the most challenging non-linear ones. In fact, an overwhelming amount of literature in the fields of state estimation and Bayesian filtering (Kalman filtering) is based on that Gaussian assumption. In the field of conditional score-based modeling, this is also a very common assumption [19, 24, 34-37] and we are not aware of any literature assuming non-additive measurement noise.
> To my understanding, the trained system is able to generate only trajectories in equilibrium, and it would fail in non-equilibrium scenarios.
In our experiments, we leverage the statistical stationarity of the dynamics to make the local score network index-agnostic, as mentioned at line 153. If stationarity cannot be assumed, for instance in a periodic environment (e.g. seasons), the local score network would simply need to be conditioned with respect to the indices of the states.
However, your comment raises a legitimate point: the score network can only be valid for dynamics it has been trained for, which leads to open questions regarding distribution shifts (e.g. climate change), as mentioned in Section 5.
> The authors mention that the introduced approximations introduce errors in the system but I would expect them to at least quantify the resulting errors numerically, if theoretical predictions cannot be formulated. [...] Can you quantify either theoretically or empirically the errors incurred by each of the approximations employed in the proposed solution?
We already provide quantitative results for Lorenz 63, for which the true posterior is available via the bootstrap particle filter (BPF). We empirically demonstrate that SDA quantitatively converges to the true posterior as $C$ and $k$ increase (see Figure 2). For the Kolmogorov Flow system, the true posterior is intractable with classic posterior inference or data assimilation techniques, and therefore, cannot be quantitatively compared to.
As for theoretical estimation, the impact of the approximations (13) and (15) remains to be quantified, which we leave for future work, as stated in Section 5.
> To my understanding, the noise employed in the numerical experiments is very small, and thereby one could treat the system as deterministic.
Indeed, the noise used for both systems is not large. However, because the systems are chaotic, very small changes in the initial state or small stochasticities can lead to vastly different trajectories. Using more noise would only make the system more chaotic and make inference over long time horizons pointless ($x_L$ would be independent of $x_1$). In fact, increasing the noise levels would make the pseudo-Markov approximation (13) more accurate, as it would reduce the mutual information between far-away states. To paraphrase, the "low" amount of noise in our experiments is actually a means to challenge our methods and assumptions. Note that, in practice, physical models considered in data assimilation are often deterministic, or present very low amounts of noise.
**Questions**
> The system in consideration is assumed to be stochastic, however in Fig. 2C the trajectories seem pretty deterministic to me. [...]
In Figure 2C, trajectories are not independent. They are realizations sampled from the true and estimated posteriors. All trajectories are close to each other because the observation $y$ makes it possible to infer a narrow posterior. Conversely, in Figure 3, the observation (red crosses) leads to a multi-modal posterior, and trajectories diverge even when they start at the same initial states. If we were to represent the prior $p(x_{1:L})$ instead of the posterior $p(x_{1:L} | y)$, one could observe a wide variety of trajectories.
> [...] Is the employed noise amplitude 0.025? I consider this negligible noise, especially considering the dynamic range of the system.
$\Delta = 0.025$ is the variance of the noise at each step. The standard deviation is $0.5$, which is not negligible with respect to the system's domain.
> Have you tried to predict non-stationary trajectories with the proposed framework?
We are actively working on/looking for collaborations with weather and oceanic research centers to apply SDA on operational data, which typically have periodic (daily/yearly) dynamics.
> Doesn't Eq. 12 only hold in expectation?
This equation is valid. We can note that $\nabla_{x_i} \log p(x_{j}) = 0$ and, by definition of a Markov blanket, $p(x_i | x_{\neq i}) = p(x_i | x_{b_i})$.
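The blanket identity invoked here can be sanity-checked numerically on a toy Gaussian Markov chain (an illustrative example of ours, not the paper's system): the score of $x_1$ computed from the full joint coincides with the score computed from the factors involving $x_1$'s Markov blanket alone.

```python
import numpy as np

# Joint log-density (up to constants) of a toy Gaussian Markov chain x1 -> x2 -> x3
A = 0.9  # transition coefficient of the toy chain

def log_p(x):
    x1, x2, x3 = x
    lp = -0.5 * x1**2                  # log p(x1)
    lp += -0.5 * (x2 - A * x1)**2      # log p(x2 | x1)
    lp += -0.5 * (x3 - A * x2)**2      # log p(x3 | x2)
    return lp

def grad_fd(f, x, i, eps=1e-6):
    """Central finite-difference gradient of f with respect to x[i]."""
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

x = np.array([0.3, -1.2, 0.7])
# Score of x1 from the full joint ...
full = grad_fd(log_p, x, 0)
# ... equals the score from p(x1) p(x2 | x1) alone, since {x2} is x1's Markov blanket
blanket = lambda x: -0.5 * x[0]**2 - 0.5 * (x[1] - A * x[0])**2
assert abs(full - grad_fd(blanket, x, 0)) < 1e-6
```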
We believe that this rebuttal addresses most of your concerns and, therefore, kindly ask you to reconsider your score.
---
Rebuttal Comment 1.1:
Comment: I read the rebuttal to my and the other reviewers' comments and I updated my score accordingly.
I consider the contribution of the paper interesting and the paper well written.
My previous concerns regarding the simplicity of the Gaussian observation model are partly unreasonable, if we take into account the rest of this work. However, as a comment to the authors, there is existing work that identifies latent stochastic dynamics with non-Gaussian observation models, e.g., Poisson.
Further, the clarification of the authors regarding a known physical model eliminated my concerns about the limited contributions of the submitted work. | Summary: Under a data assimilation setting (where the hidden state of a dynamical system is only accessed through noisy observations), the authors consider the problem of inferring the latent state as well as learning a faithful generative model. In order to accomplish this, they take a score-based diffusion model approach by placing a diffusion model prior over latent trajectories. They reveal many practical problems that arise due to the complexity of training the model with (possibly and practically) very high dimensional latent trajectory samples.
Strengths: The paper is well written, and I think the flow and order of presentation was great. Those who may only be somewhat familiar with denoising diffusion models and latent variables should be able to follow along for the most part.
To make training a diffusion model that generates high-dimensional latent trajectories practical at scale, the authors propose pragmatic solutions to the problems that arise:
They propose to take advantage of the temporal smoothness of a diffusion model prior by approximating the score function with respect to particular time indices of the perturbed trajectories with the score function of an artificially imposed Markov blanket around that index; they are able to show that in practice, only a few neighbors are needed to create a large enough artificial Markov blanket.
They propose to approximate the conditional likelihood of an observation given a sample from a point in the chain; unlike other proposed methods, they do not have to rescale the residuals.
The experiments were interesting, especially, the experiment where the observations are taken to be coarse grained versions of the velocity field; the trajectory sampled from SDA is qualitatively similar to a ground truth sample. Showing a sample generated from SDA resembles feeding that sample through the true model was a good demonstration.
Weaknesses: Weaknesses:
It is mentioned in the paper, but by far, the biggest weakness would be that the true underlying dynamics are assumed to be known (or at the very least, can be easily sampled from). This does slightly bring down the experimental results, in my opinion, since learning the underlying dynamics of an unknown dynamical system is very difficult.
While the paper is well written, and the problem considered is both important and challenging, there is not too much in the way of technical advances; therefore, I think there should have also been more focus on the empirical results. While I am not certain if many classes of models/methods exist for solving the exact same problem — I am wondering if the authors could have compared their method to other denoising diffusion models that operate on time series e.g. audio or video.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Can the method only generate trajectories of length L?
Coming from a probabilistic inference and signal processing background, I was wondering whether the authors consider data assimilation models to be a superset of models that contains state-space models? Prior to this, I had never encountered this interesting field of data assimilation, and I felt like tying this model back to simpler latent variable models such as those would help increase its scope (mentioned a little at the end, but I think moving this up/fleshing it out could help).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As mentioned, the biggest limitation is assumption of the underlying dynamics.
There are not many comparisons against other methods made.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and the legitimate concerns you have raised. We hope that the following answers will satisfy you.
**Weaknesses & Limitations**
> [...] the biggest weakness would be that the true underlying dynamics are assumed to be known (or at the very least, can be easily sampled from).
We believe this concern is due to some misleading statements in Section 5 (Conclusion), which will be addressed. We invite you to consult the global rebuttal regarding the assumption of a known physical model.
> [...] there is not too much in the way of technical advances;
We will address the lack of discussion about the significance of the contributions in Section 5 (Conclusion). We invite you to consult the global rebuttal regarding the novelty of the contributions.
> [...] I think there should have also been more focus on the empirical results.
We respectfully disagree with this statement. Three pages of the manuscript as well as a few pages in the Appendix are dedicated to empirical results, with both quantitative and qualitative assessments. Besides, we already condensed Sections 2 and 3 to the limit and cannot afford to remove any more content.
> [...] I am wondering if the authors could have compared their method to other denoising diffusion models that operate on time series e.g. audio or video.
Indeed, diffusion models have been extensively used for sequence data, including text, audio and video. Some of these methods could have been applied to the first system, Lorenz 63. However, the purpose of Section 4.1 is to compare our approximations against the ground-truth posterior (which is accessible) and study their convergence with respect to hyper-parameters ($k$ and $C$). The comparison with other approximations would have required a lot of space, without added value.
For the second system, Kolmogorov flow, the only contenders would have been video diffusion models (VDMs) [19]. However, as briefly mentioned in Section 5, these models generate sequences autoregressively from "left-to-right" (past-to-future), which prevents conditioning on and sampling all frames simultaneously. Hence, they cannot be used for trajectory posterior inference. We could have built our own vanilla diffusion model for image sequences, but it would have been very expensive to train and irrelevant as a point of comparison.
**Questions**
> Can the method only generate trajectories of length L?
Training requires sequences of length $2k + 1$, but the length $L$ of the generated trajectories is chosen arbitrarily (albeit longer than $2k+1$) at inference.
> I was wondering if the author considers data assimilation models to be a superset of models that contain state-space models?
Indeed, state-space models are a special case of dynamical systems. We chose to frame the paper within the context of data assimilation because of its generality.
We believe that this rebuttal addresses most of your concerns and, therefore, kindly ask you to reconsider your score.
---
Rebuttal 2:
Comment: The authors have addressed some of my concerns, so I have raised my score to reflect this. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the quality and pertinence of their reviews. All reviewers declare that the methods are sound, interesting and well presented.
The main concerns of reviewers DaY5 and bwmG regard the assumption of a known physical model and the novelty of the contributions. We believe these concerns are due to some misleading statements in Section 5 (Conclusion) and a lack of discussion about the significance of the contributions, which we propose to address with minor changes in the manuscript. Other comments will be addressed in the per-reviewer rebuttals.
**Assumption of a known physical model**
The introduction starts with the assumption of a known physical model because data assimilation (DA) methods typically and routinely rely on physical models to make up for the lack of data in geophysical systems. In these settings, purely data-driven approaches are simply not applicable. However, SDA is not limited to that assumption and can be applied as long as segments $x_{i-k:i+k}$ ($k \ll L$) are available for training. These segments are not necessarily synthetic and we can apply SDA from data alone. For instance, in cellular biology, the state of some systems can be observed with incredible accuracy, but their dynamics are still relatively unknown.
What we tried to convey at lines 273-280 was that
1. All trajectories are assumed to share the same dynamics, known or unknown.
2. If a physical model is used to generate the training data, one can only expect SDA to be as accurate as the physical model.
We propose to clarify these points in the manuscript by
1. replacing (line 273)
> Another limitation of our approach is the assumption that the parameters of the physical model are known.
with
> Another limitation of our approach is the assumption that the dynamics of the system are shared by all trajectories. In particular, if a parametric physical model is used, all trajectories are assumed to share the same parameters.
2. replacing (line 276)
> Similarly, SDA relies on the assumption that the physical model is an adequate description of reality. Although we show that it can generalize well to unlikely scenarios (see Figure 5), we can only expect our approach to be as accurate as the physical model itself.
with
> Additionally, if a physical model is used to generate synthetic training data, instead of relying on real data, one can only expect SDA to be as accurate as the model itself.
**Novelty of the contributions**
As noted by reviewers RNbZ and RS1N, our work presents several novel contributions, in both the fields of data assimilation and score-based generative modeling. As the significance of these contributions is not explicitly discussed in the manuscript, we propose to add the following sub-section in Section 5.
> **Significance** In addition to its contributions to the field of data assimilation, this work presents new technical contributions to the field of score-based generative modeling.
>
> First, we provide new insights on how to exploit the structure of sets of random variables to build and train score-based generative models. Based on these findings, we are able to generate/infer simultaneously all the states of arbitrarily long Markov chains $x_{1:L}$, while only training score models on short segments $x_{i-k:i+k}$, thereby reducing training costs and the amounts of training data required. The decomposition of the global score into local scores additionally allows for better parallelization at inference, which could be significant depending on available hardware. Importantly, the pseudo-blanket approximation (13) is not limited to Markov chains, but could be applied to any set of variables $x_{1:L}$, as long as some structure is known.
>
> Second, we motivate and introduce a novel approximation (15) for the perturbed likelihood $p(y | x(t))$, when the likelihood $p(y | x)$ is assumed (linear or non-linear) Gaussian. We find that computing the likelihood score $\nabla_{\! x(t)} \log p(y | x(t))$ with this new approximation leads to accurate posterior inference, without the need for stability tricks [37]. This contribution can be trivially adapted to many tasks such as inpainting, deblurring, super-resolution or inverse problems in scientific fields [35-37].
We would also like to emphasize that several papers whose sole purpose was to approximate the likelihood score were published in major venues [34-37]. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry | Accept (poster) | Summary: The paper presents an analysis of the latent structure of diffusion models using differential geometry. The authors propose a method to define a geometry in the latent space by pulling back the Euclidean metric from the U-Net bottleneck space *H* via the network encoder. This approach enables the identification of directions of maximum variation in the latent space. The paper also explores the application of the proposed latent structure guidance for image editing. Finally, the evolution of the geometric structure over time steps and its dependence on text conditioning are investigated.
Strengths: - I found the analysis presented in the paper interesting. It both uncovers unknown details about diffusion models (effect of text prompt and complexity of the dataset on the latent space) and confirms some previous observations (e.g., coarse-to-fine behaviour). This exploration can potentially reveal new capabilities of diffusion models, contributing to the advancement of the field.
- The paper is technically sound, and the claims made by the authors in Section 4 are supported by experiments. This experimental validation enhances the credibility of the proposed approach.
Weaknesses: - The paper lacks comparisons with other diffusion-based image editing techniques, like [7,18]. Including such comparisons would have provided a more comprehensive evaluation and demonstrated the advantages of the proposed method.
- The presentation and clarity of the paper could be improved. For example, the abstract contains too much detail, making it challenging to understand upon initial reading. Also the explanation of the image editing technique could be improved: what is DDIM (section 4)? what is epsilon in Equation 4? Finally, Figure 1 is not sufficiently clear to me, it may hampers the reader's comprehension.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: To make the paper applicable to real-world scenarios right away, the authors should include a comparative analysis with other image editing techniques.
**Some potentially interesting references**
- Pan, Xingang, et al. "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold." arXiv preprint arXiv:2305.10973 (2023).
- Brooks, Tim, Aleksander Holynski, and Alexei A. Efros. "Instructpix2pix: Learning to follow image editing instructions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
- Meng, Chenlin, et al. "Sdedit: Guided image synthesis and editing with stochastic differential equations." arXiv preprint arXiv:2108.01073 (2021).
**Typos**
- line 20: double citation
- line 160 editted
- Figure 2 caption: editing
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our strengths:
- uncovering the effect of text prompt and dataset complexity on the latent space.
- confirming coarse-to-fine behavior of diffusion models (DMs).
- enhancing credibility through experimental validation.
---
> [W1] The paper lacks comparisons with other diffusion-based image editing techniques.
> [Q1] To make the paper applicable to real-world scenarios right away, the authors should include a comparative analysis with other image editing techniques.
[W1, Q1] Thanks for the great suggestion. We conduct qualitative comparisons with text-guided image editing methods. Our state-of-the-art baseline methods include: (i) [SDEdit], (ii) [Pix2Pix-zero], (iii) [PnP], and (iv) [Instruct Pix2Pix]. All comparisons were performed using the official code. Please refer to the global rebuttal paragraph 3 and Fig. R3 in the global rebuttal PDF for the results.
---
> [W2] The presentation and clarity of the paper could be improved. [W2-a] For example, the abstract contains too much detail, making it challenging to understand upon initial reading. [W2-b] Also the explanation of the image editing technique could be improved: [W2-b-i] What is DDIM (section 4)? [W2-b-ii] What is epsilon in Equation 4? [W2-b-iii] Finally, Figure 1 is not sufficiently clear to me, it may hamper the reader's comprehension.
[W2-a] Thank you for your constructive suggestion. We revise the abstract to be concise and digestible as follows:
>> Despite the success of diffusion models (DMs), we still lack a thorough understanding of their latent space. To understand the latent space $\mathbf{x}_t \in \mathcal{X}$, we analyze them from a geometrical perspective. Our approach involves deriving the local latent basis within $\mathcal{X}$ by leveraging the pullback metric associated with their encoding feature maps. Remarkably, our discovered local latent basis enables image editing capabilities by moving $\mathbf{x}_t$, the latent space of DMs, along the basis vector at specific timesteps.
>>
>> We further analyze how the geometric structure of DMs evolves over diffusion timesteps and differs across different text conditions. This confirms the known phenomenon of coarse-to-fine generation, as well as reveals novel insights such as the discrepancy between $\mathbf{x}_t$ across timesteps, the effect of dataset complexity, and the time-varying influence of text prompts. To the best of our knowledge, this paper is the first to present image editing through $\mathbf{x}$-space traversal, editing only once at specific timestep $t$ without any additional training, and providing thorough analyses of the latent structure of DMs.
[W2-b] Thank you for pointing out the clarity issue.
[W2-b-i] To make the image editing process easier to understand, we created a summary subsection and an overview figure for the whole process. Please refer to global rebuttal paragraph 2 and Fig. R2 in the global rebuttal PDF for more details. In our method, [DDIM] is used to invert the image into the initial noise $\mathbf{x}_T$, and again for denoising to generate the image.
[W2-b-ii] $\epsilon$ is a function representing the pretrained diffusion model. For clarity, in the revised manuscript we will write it as $\epsilon^{\theta}$, and state it explicitly as below :
>> (L166) ... where $\epsilon^{\theta}$ is the denoising function of the pretrained diffusion model, and $\gamma$ is ...
[W2-b-iii] Thank you for the valuable suggestion. Please refer to global rebuttal paragraph 1 and Fig. R1, R2 in the PDF for improvements to Fig. 1. Below is an excerpt from the content of the global rebuttal paragraph 1, extracted to enhance readability for the reviewers:
In order to make Fig. 1 more comprehensible, we divide Fig. 1 into two separate figures. Please refer to Fig. R1 and R2 in the global rebuttal PDF. First, **Figure R1** conceptually visualizes the local basis found through the pullback metric and provides a summary of the process for obtaining a local basis. Second, **Figure R2** provides an overview of the image editing method using the discovered local basis and briefly presents its results.
---
> Some potentially interesting references
Thank you for your valuable references. We will add those suggested references to the revised manuscript.
> Typos
Thank you for the careful comments. We will address and incorporate these revisions in the upcoming manuscript.
---
**References**
[SDEdit] : SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng et al., 2021
[Pix2Pix-zero] : Zero-shot Image-to-Image Translation, Parmar et al., 2023
[PnP] : Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation, Tumanyan et al., 2022
[Instruct Pix2Pix] : InstructPix2Pix: Learning to Follow Image Editing Instructions, Brooks et al., 2022
[DDIM] : Denoising Diffusion Implicit Models, Song et al., 2020
---
Rebuttal Comment 1.1:
Comment: We sincerely thank you for your first review once again.
As the discussion phase ends soon, we remain enthusiastic about receiving additional feedback from you. We are ready to accommodate your needs if you find our revised response requires additional clarifications and suggestions.
Thank you. | Summary: In this submission, the authors probe the latent space, xt ∈ X, of diffusion models (DMs) from a geometric perspective, utilizing the pullback metric to identify local latent basis in X and corresponding local tangent basis in H. To confirm their findings, they edit images via latent space traversal. The authors provide a two-pronged analysis, investigating the evolution of geometric structure over time and its variation based on text conditioning in Stable Diffusion. Notably, they discovered that in the generative process, the model prioritizes low-frequency components initially, moving to high-frequency details later, and that the model's dependence on text conditions reduces over time. The paper introduces image editing through x-space traversal and to offer comprehensive analyses of the latent structure of DMs, with a specific emphasis on the use of the pullback metric and the SVD of the Jacobian in computing a basis.
Strengths:
The paper presents a distinctive idea that provides an alternative method for editing in diffusion models, as well as enhancing comprehension of the latent space dynamics. By utilizing a geometric perspective, the authors make use of the pullback metric to investigate the latent space, offering insights into its structure and operation. The exploration of the evolution of geometric structure over time and its response to various text conditions, which offers additional insights into the dynamics of the latent space, is interesting.
Weaknesses:
The first area where the paper could see improvement is in terms of the clarity of its analysis. Given its nature as an analysis paper, it's crucial that the analysis presented is as comprehensible as possible. However, the method and notation used in this work can lead to some confusion. For instance, Section 3, in its current form, may not be as accessible to all readers as it could be, and it could benefit from being revised for clearer communication of the ideas contained therein. Additionally, Fig. 1, which is presumably intended to illustrate key concepts, is perhaps too dense with information. Dividing Fig. 1 into two separate figures could make it easier to digest, enabling a clearer explanation of the approach.
A second aspect that could be improved upon is the overall presentation and proofreading of the paper. While the approach is relatively simple, its translation into the written form has not been as clear as one would hope. The text could benefit from a thorough proofreading to ensure that it is not just grammatically correct, but also that it conveys the authors' ideas in a way that is accessible to the wider machine learning community. As it stands, the paper's usefulness to this community may be hindered by its presentation.
Lastly, the paper could do more to address the computational implications of its approach. The authors use the power method to approximate the Jacobian, which, while effective, can be computationally costly. It would be beneficial if the authors were more transparent about this fact, allowing readers to fully understand the computational demands of the approach and evaluate whether or not it would be feasible in their own applications. Being upfront about such limitations can help to build a more honest and comprehensive understanding of the paper's methodologies and implications.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: What is the complexity of computing the Jacobian? What is the actual runtime (in seconds) of the approach compared to other editing methods? I think answering these questions can provide a good context for readers.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Societal impact has been properly addressed but I would like to see a deeper analysis of the computational complexity and runtime of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the strengths of our paper: the distinctive idea for editing in diffusion models (DMs), and enhancing comprehension of the latent space dynamics, e.g., the evolution of geometric structure over time and the influence of various text conditions.
---
> [W1] ... the method and notation used in this work can lead to some confusion. For instance, [W1-a] Section 3, in its current form, may not be as accessible to all readers as it could be, and it could benefit from being revised for clearer communication of the ideas contained therein. [W1-b] Additionally, Fig. 1, which is presumably intended to illustrate key concepts, is perhaps too dense with information. Dividing Fig. 1 into two separate figures could make it easier to digest, enabling a clearer explanation of the approach.
[W1] Thank you for the constructive suggestion.
[W1-a] To clarify the image editing process in Section 3, we added a new subsection summarizing the overall procedure. This includes a visual overview in Fig. R2 and clear textual explanations. Please refer to global rebuttal 2 for a more detailed description.
[W1-b] To make Fig. 1 more comprehensible, we divide it into two figures, Fig. R1 and R2. Please refer to the global rebuttal 1 for more detailed improvements.
---
> [W2] A second aspect that could be improved upon is the overall presentation and proofreading of the paper. While the approach is relatively simple, its translation into the written form has not been as clear as one would hope. The text could benefit from a thorough proofreading to ensure that it is not just grammatically correct, but also that it conveys the authors' ideas in a way that is accessible to the wider machine learning community. As it stands, the paper's usefulness to this community may be hindered by its presentation.
[W2] Thank you for the advice on readability. We made the following improvements to help readers understand more easily:
+ We relocate the discussion on parallel transport (P.T.) from section 3.3 to the final subsection of section 3. This alteration was implemented to prevent any potential misunderstanding that using P.T. is the default approach in our editing method. By introducing P.T. as a special case after concluding the explanation of the image editing method, we aim to mitigate such misconceptions.
+ We consolidate all elements related to the image editing method within section 3. For instance, the process of generating initial noise $\mathbf{x}_T$ through DDIM inversion, previously outlined in section 4 (L178-182), has been now moved to the newly introduced subsection referenced in global rebuttal 2. This organization ensures that readers can fully comprehend the entire editing process by solely referring to section 3.
+ We make adjustments to Figure 8. Specifically, we have relocated Fig. 8(b) and the content spanning from L243 to L248 to the appendix. This decision was based on the rationale that Figure 8(b) primarily serves to validate the findings in (a) and (c). Including this information in the appendix was deemed a more suitable arrangement.
We will also carefully identify further shortcomings and revise them in the camera-ready version.
---
> [W3] Lastly, the paper could do more to address the computational implications of its approach. The authors use the power method to approximate the Jacobian, which, while effective, can be computationally costly. It would be beneficial if the authors were more transparent about this fact, allowing readers to fully understand the computational demands of the approach and evaluate whether or not it would be feasible in their own applications. ...
> [L1] I would like to see a deeper analysis of the computational complexity and runtime of the approach.
[W3, L1] In Appendix A, we present the computation time required for approximating the Singular Value Decomposition (SVD) of the Jacobian using the power method. The runtime of this power method varies based on the parameter $n$, which denotes the low-rank approximation of the original tangent space. Smaller values of $n$ lead to shorter computation times. For instance, when $n=3$, the process takes approximately 10 seconds. In contrast, for $n=50$, the computation time extends to around 3 to 4 minutes, particularly in the context of Stable Diffusion.
Notably, when only a single basis vector is needed, as is the case in scenarios like text conditional editing, the time taken to approximate the SVD of the Jacobian is remarkably brief—approximately 2.5 seconds.
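The power-method approximation discussed above can be illustrated with a small NumPy sketch. This is our own toy example (the function `top_singular_vector`, the toy Jacobian, and the iteration count are assumptions, not the paper's implementation); the key point is that the Jacobian is never materialized and is only accessed through Jacobian-vector and vector-Jacobian products:

```python
import numpy as np

def top_singular_vector(jvp, vjp, dim, n_iters=50, seed=0):
    """Leading right singular vector of a Jacobian J that is only
    accessible through Jacobian-vector products (jvp) and
    vector-Jacobian products (vjp): power iteration on J^T J."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = vjp(jvp(v))              # one application of J^T J
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(jvp(v))   # corresponding singular value
    return v, sigma

# A small dense matrix stands in for the (never materialized) Jacobian.
J = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 0.5],
              [1.0, 0.0, 1.0]])
v, sigma = top_singular_vector(lambda u: J @ u, lambda u: J.T @ u, dim=3)

# Reference solution from a dense SVD, for comparison.
_, S, Vt = np.linalg.svd(J)
```

Each power iteration costs one forward and one backward pass through the network, and extracting more basis vectors requires proportionally more such iterations, which is consistent with the observation above that smaller $n$ leads to shorter computation times.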
---
> [Q1] What is the complexity of computing the Jacobian? What is the actual runtime (in seconds) of the approach compared to other editing methods? I think answering these questions can provide a good context for readers.
[Q1] Thank you for your valuable question. We conduct all comparisons on the Nvidia 3090. To ensure a fair comparison, we set $n=1$ and performed 50 steps of the DDIM algorithm. The time taken by each method is as follows:
| Image Edit Method | running time | Preprocessing |
|:-----------------:|:-------------:|:-------------:|
| Ours | 11 sec |N/A |
| SDEdit | 4 sec |N/A |
| Pix2Pix-zero | 25 sec |4 min* |
| PnP | 10 sec |40 sec** |
| Instruct Pix2Pix | 11 sec |N/A |
(*: generating 100 prompts with GPT, obtaining embedding vectors, **: storing feature vectors, queries, and key values)
For a more detailed discussion, please refer to global rebuttal paragraph 3.
---
**References**
[SDEdit] : SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng et al., 2021
[Pix2Pix-zero] : Zero-shot Image-to-Image Translation, Parmar et al., 2023
[PnP] : Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation, Tumanyan et al., 2022
[Instruct Pix2Pix] : InstructPix2Pix: Learning to Follow Image Editing Instructions, Brooks et al., 2022
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their answers. Given the new details provided by the authors in terms of computational complexity and runtime of their approach I cannot update my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for dedicating time to share your thoughts.
Firstly, we would like to mention that the main contribution of our paper is the geometric interpretation of the latent space and feature space. To the best of our knowledge, the intricate interplay between semantics and the geometric structure within the latent space of diffusion models remains unexplored. We believe that our paper will significantly enrich future research. \
(Also, please consider the other points written in the global comment.)
Moreover, we highlight that our method achieves a comparable complexity, even though efficiency was not our primary focus. \
(We would like to kindly clarify that the 100 seconds mentioned in Reviewer vuQy's rebuttal were used for computing all 50 local bases for analysis. Additionally, we wish to mention again that the reported 11 seconds include the computation time for the 1st local basis. Aside from SDEdit, which is the oldest and most basic baseline and performs essentially no additional computation, our method has computational complexity comparable to the other methods.)
Would you reconsider these points? | Summary: This paper studies the geometry of latent spaces of diffusion models (Dms) using the pullback metric. In the analyses, they mainly examine change of the frequency representations in latent spaces over time and the change of the structure based on text conditioning.
After the rebuttal:
I checked all reviewer comments and responses.
I agree with the other reviewers regarding the limited algorithmic novelty of the work. Therefore, I keep my original score.
Strengths: - The paper is well written in general (there are several typos but they can be fixed in proof-reading).
- The proposed methods and analyses are interesting. Some of the theoretical results highlight several important properties of diffusion models.
Weaknesses: 1. Some of the statements and claims are not clear as pointed in the Questions.
2. The results are given considering the Riemannian geometry of the latent spaces and utilizing the related transformations (e.g. PT) among tangent spaces on the manifolds. However, vanilla DMs do not employ these transformations. Therefore, it is not clear whether these results are for vanilla DMs or the DMs utilizing the proposed transformations.
3. A major claim is that the proposed methods improve effectiveness of the DMs. However, the employed transformations can increase the footprints of DMs.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: It is stated that “To investigate the geometry of the tangent basis, we employ a metric on the Grassmannian manifold.” However, the space could be identified by another manifold as well. Why and how did you define the space by the Grassmannian manifold?
It is claimed that “the similarity across tangent spaces allows us to effectively transfer the latent basis from one sample to another through parallel transport”. How this improves effectiveness was not analyzed. In general, how do the proposed methods improve training and inference time? Indeed, the additional transformations can increase training and inference time. Could you please provide an analysis of the footprints?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Some of the limitations were addressed but potential impacts were not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [W1] Some of the statements and claims are not clear as pointed in the Questions.
> [Q1] It is stated that "we employ a metric on the Grassmannian manifold.” Why and how did you define the space by the Grassmannian manifold?
[Q1] The tangent spaces $\mathcal{T_{\mathbf{x}}}$ and $\mathcal{T_{\mathbf{h}}}$ are subspaces (as vector spaces) assigned at the points $\mathbf{x} \in \mathcal{X}$ and $\mathbf{h} \in \mathcal{H}$ of diffusion models. Given that the Grassmannian manifold is characterized as a manifold comprising subspaces, it appears well-suited for representing the manifold of $\mathcal{T_{\mathbf{h}}}$. Additionally, the geodesic metric defined on this manifold quantifies the separation between two subspaces based on their principal angles. Hence, we posit that this geodesic metric offers a robust means of evaluating the similarity between different $\mathcal{T_{\mathbf{h}}}$s.
> [Q2-a] It is claimed that “the similarity across tangent spaces allows us to effectively transfer the latent basis from one sample to another through parallel transport”. How this improves effectiveness was not analyzed. [Q2-b] In general, how do the proposed methods improve training and inference time? (...) Could you please provide an analysis of the footprints?
[Q2-a] Figure 7 shows the impact of employing parallel transport (P.T.). The transported vectors can edit distinct samples while manipulating the same attributes. We will modify L227-229 as follows to better reflect our intent:
>> (L227-229) $\rightarrow$ ... us to successfully transfer the latent basis vector from one sample to another through parallel transport. Figure 7 shows that the transported vectors can edit distinct samples while manipulating the same attributes.
[Q2-b] We would like to emphasize that our approach does not require any form of training. Furthermore, the utilization of P.T. is not a default procedure in the editing process; its application is limited to specific instances where local basis vectors from other samples are used. The details regarding the typical editing footprint in the absence of P.T. are outlined in the global rebuttal 3. Here, for your reference, we focus solely on presenting the result table:
|Image Edit Method|running time|Preprocessing|
|:-:|:-:|:-:|
|Ours| 11 sec |N/A|
|[SDEdit]|4 sec|N/A|
|[Pix2Pix-zero]|25 sec|4 min|
|[PnP]|10 sec|40 sec|
|[Instruct Pix2Pix]|11 sec|N/A|
The time taken for editing using P.T. can be broken down into three components: 1) DDIM inversion and generation, 2) identification of the local basis, and 3) the parallel transport process. For optimal results when employing P.T., it is advisable to use a sufficiently large value of $n$ to mitigate distortion. However, this choice necessitates more computation time for generating the local basis. In our case, we opt for $n=50$, with the number of DDIM steps set at $100$, and utilizing an unconditional diffusion model trained on CelebA-HQ.
|DDIM inversion + generation|Identification of local basis|Parallel transport|
|:-:|:-:|:-:|
| 10 sec | 100 sec | 0.002 sec |
---
> [W2] The results are given ... utilizing the related transformations (e.g. PT) among tangent spaces on the manifolds. However, vanilla DMs do not employ these transformations. Therefore, it is not clear whether these results are for vanilla DMs or the DMs utilizing the proposed transformations.
[W2] We wish to emphasize that our method works on frozen vanilla DMs without necessitating any fine-tuning or architectural modifications. It offers unsupervised image editing capabilities that are applicable to both unconditional DMs and conditional DMs. Furthermore, as explained in [Q2], it is important to note that the inclusion of P.T. is not a default step in our editing process. This technique is selectively employed in particular cases where local basis vectors are transferred for editing other samples.
> [W3] A major claim is that the proposed methods improve the effectiveness of the DMs. However, the employed transformations can increase the footprints of DMs.
[W3] To provide clarification, our primary focus centers on image editing utilizing DMs, without an emphasis on augmenting the performance of DMs themselves.
When performing image editing with a single latent basis vector (i.e., $n=1$), our editing process consumes approximately 11 seconds. This represents roughly 15% of the time required for vanilla inversion and reconstruction. Furthermore, even when employing Jacobian approximation with $n=1$, the computational overhead associated with our approach remains comparable with other state-of-the-art editing methods, as illustrated in the aforementioned table.
> [L1] Some of the limitations were addressed but potential impacts were not addressed.
[L1] We appreciate your observation and would like to address this concern by incorporating a societal impact and ethics statement into the revised manuscript, which is presented below:
>> **Societal Impact / Ethics Statement.** Our research endeavors to unravel the geometric structures of the diffusion model and facilitate high-quality image editing within its framework. While our primary application resides within the creative realm, it is important to acknowledge that image manipulation techniques, such as the one proposed in our method, hold the potential for misuse, including the dissemination of misinformation or potential privacy implications. Therefore, the continuous advancement of technologies aimed at thwarting or identifying manipulations rooted in generative models remains of utmost significance.
**References**
[SDEdit] : SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng et al., 2021
[Pix2Pix-zero] : Zero-shot Image-to-Image Translation, Parmar et al., 2023
[PnP] : Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation, Tumanyan et al., 2022
[Instruct Pix2Pix] : InstructPix2Pix: Learning to Follow Image Editing Instructions, Brooks et al., 2022
---
Rebuttal Comment 1.1:
Comment: We sincerely thank you again for your review.
As the discussion phase ends soon, we would welcome any additional feedback. We would be glad to provide further clarification on any part of our response.
Thank you. | Summary: The paper proposes a study on the latent space of diffusion models and on how to manipulate it. It takes advantage of an observation made by previous work [22] on the flatness and semantic structure of the U-Net model used in DDIM and uses pullback metric from the latent space of the U-Net to the space of diffusion to measure some properties of the latter under different conditions.
The paper also proposes a method to manipulate the diffusion space through the different time steps, so as to carry out semantically meaningful edits.
Strengths: This paper is one of the first to study the behavior of the space of diffusion models. It presents some interesting studies on the behavior of the process over time, showing that the early stages convey lower frequencies while the last steps are more concerned with higher frequencies. Another interesting observation is that tangent spaces of different samples tend to be more aligned at T=1, while they diverge toward T=0 (end of the generative process).
The paper also shows (even if it had already been observed in [22]) that the pullback metric is effective in transferring the shift along the semantically meaningful principal components of the U-Net latent space into the diffusion process, thus resulting in meaningful edits of the generated image, whose frequency content depends on the time at which the edit was performed.
Weaknesses: I may have misunderstood or missed some important information, but the method described is not really clear. Specifically, it is not really clear to me how the editing process works (sections 3.3 and 3.4):
- In 3.3, the letter v is used to indicate elements of both T_x and T_h, so it is not always clear to which space they are referring.
- In general, it is not clear why the idea expressed in 3.3 is useful and where it was adopted.
- In eq 4 what is the epsilon function? In general, isn’t the vector toward which to shift selected from T_H (so it should be u) and then transferred to T_x?
Another concern is about the generalization of the proposed method to other diffusion techniques, or with other score models (i.e. not UNet). I think that this point needs more discussion.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I was wondering what would happen if, instead of moving along one of the principal axis of T_H, you use directly the principal axis of T_x.
I would also discuss method [22] a bit more in the related work, since it seems related to the proposed method.
Writing issues:
- Check the sentence at rows 74-75
- Row 152: we aims
- general grammar check
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I don’t foresee any particular negative societal impact. A discussion on how the proposed study may generalize to other domains and architectures would be of value.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the strengths of our paper: first to study the behavior of the space of diffusion models (DMs), e.g., the coarse-to-fine behavior, the divergence of tangent space across different samples, and the meaningful image editing with pullback metric.
---
> [W1] It is not really clear to me how the editing process works (sections 3.3 and 3.4)
[W1] Please refer to the global rebuttal 2.
> + [W1-a] In 3.3, the letter v is used to indicate elements of both T_x and T_h, so it is not always clear to which space they are referring.
[W1-a] We thank you for identifying the typo, $\mathbf{v} \in \mathcal{T_\mathbf{h}}$, in Section 3.3 Line 148. It should be corrected to $\mathcal{T_\mathbf{h}} \rightarrow \mathcal{T_\mathbf{x}}$. The vector notations, $\mathbf{v}\in\mathcal{T_\mathbf{x}}$ and $\mathbf{u}\in\mathcal{T_\mathbf{h}}$ are specifically employed for each space.
> + [W1-b] In general, it is not clear why the idea expressed in 3.3 is useful and where it was adopted.
[W1-b] Parallel transport (P.T.) is a concept in differential geometry that involves transporting a vector between spaces while maintaining its direction relative to the space. We use the P.T. for two purposes:
* First, we use it for image editing. Considering the inherent characteristics of the unsupervised image editing method, it becomes imperative to manually inspect the semantic relevance of the latent basis vector within the edited results. Let us consider a scenario where our aim is to edit ten images of straight hair into curly hair. If we do not use P.T., we have to manually find a straight-to-curly basis vector for individual samples. P.T. allows transporting a straight-to-curly basis vector in one sample to all other samples to edit all images with only one manual inspection.
* Second, we employ it to verify the similarities among local geometrical structures across various samples as shown in Fig. 7. This empirical demonstration substantiates that the geodesic distance among local geometrical structures holds tangible significance rather than being merely a conceptual measure as depicted in Fig. 6.
> + [W1-c-i] In eq 4 what is the epsilon function? [W1-c-ii] In general, isn’t the vector toward which to shift selected from T_H (so it should be u) and then transferred to T_x?
[W1-c-i] $\epsilon$ is the denoising function of the pretrained DM. For clarity, we will explicitly state this as $\epsilon^{\theta}$ in the revised manuscript:
>> (L166) ... where $\epsilon^{\theta}$ is the denoising function of the pretrained DM, and $\gamma$ is ...
[W1-c-ii] Through the decomposition of the Jacobian, we can simultaneously obtain both $\mathbf{v}$ and its corresponding $\mathbf{u}$. However, we use only $\mathbf{v}$ during the editing procedure.
---
> [W2] Another concern is about the generalization of the proposed method to other diffusion techniques, or with other score models (i.e. not UNet). I think that this point needs more discussion.
> [L1] A discussion on how the proposed study may generalize to other domains and architectures would be of value.
[W2, L1] Thank you for raising this important issue. We add the discussion in the revised manuscript as follows:
>> Our approach exhibits broad applicability in cases where a feature space within the DM possesses a Euclidean metric, as exemplified by $\mathcal{H}$. This characteristic has been observed in the context of U-Net within [Asyrp]. The question of whether alternative architectures, such as those resembling the structures of [DiT] or [MotionDiffusion], could also manifest a Euclidean metric, presents an intriguing avenue for future investigation.
---
> [Q1] I was wondering what would happen if, instead of moving along one of the principal axis of T_H, you use directly the principal axis of T_x.
[Q1] As previously mentioned in [W1-c-ii], we directly manipulate the latent variable $\mathbf{x}_t$ with the discovered latent basis vector $\mathbf{v}$. This approach offers several advantages over the editing of $\mathcal{H}$ as described in [Asyrp]. First, direct control over $\mathbf{h}_t$ disrupts the coherence between inner features and skip connections within the U-Net, which can lead to artifacts, particularly when making substantial adjustments [DiffStyle]. As a result, [Asyrp] implements gradual changes across multiple timesteps. In contrast, the manipulation of $\mathbf{x}_t$ enables more substantial alterations within a single timestep. Second, unlike $\mathbf{u}$, which exclusively affects the deepest feature map, $\mathbf{v}$ exerts influence not only on the latent variable $\mathbf{x}_t$ but also on all the feature maps inside the DM.
---
> [Q2] I would also discuss a bit more method [25] in the related work since it seems related to the proposed method.
[Q2] To enhance the distinction between our paper and [Asyrp], we modified the related work as follows:
>> ... Kwon et al. [25] demonstrated that the bottleneck of the U-Net, $\mathcal{H}$, can be used as a semantic latent space. Specifically, they used CLIP to identify directions within $\mathcal{H}$ that facilitate genuine image editing. ... In contrast to their method, our editing approach involves the direct manipulation of latent variables within the latent space. Furthermore, we autonomously identify editing directions in an unsupervised manner.
---
> [Q3] Writing issues:
> + Check the sentence at rows 74-75
> + Row 152: we aims
> + general grammar check
[Q3] Thank you for the careful comments. We will address and incorporate these revisions in the upcoming manuscript.
**References**
[Asyrp] : Diffusion Models already have a Semantic Latent Space, Kwon et al., 2022
[DiT] : Scalable Diffusion Models with Transformers, Peebles et al., 2022
[MotionDiffusion] : Human Motion Diffusion Model, Tevet et al., 2022
[DiffStyle] : Training-free Style Transfer Emerges from h-space in Diffusion models, Jeong et al., 2023
---
Rebuttal Comment 1.1:
Title: discussion
Comment: Dear Authors,
thanks for taking the time to answer my doubts. I'm satisfied with the rebuttal and with the changes you promised to make in the revised manuscript.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your feedback and your great efforts. Any further questions/suggestions would be also appreciated.
Thank you. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable advice. Here, we compile reviews we want to share with all the reviewers. Please see our responses addressing the specific concerns below:
---
### 1. Improving Fig. 1 for clarity
**Reviewers *6jtZ* and *iEgM* suggested modifying Fig. 1 since it is too dense with information.**
To enhance the comprehensibility of Fig. 1, we divide Fig. 1 into two figures. Please refer to Fig. R1 and R2 in the attached PDF file. **Figure R1** gives a conceptual visualization of the local basis derived from the pullback metric. This figure succinctly outlines the methods involved in obtaining the local basis. **Figure R2** delivers an overview of the image editing method using the discovered local basis. It also provides a concise presentation of the outcomes.
---
### 2. Additional subsection to overview the whole image editing process
**Reviewers *cKEc*, *6jtZ*, and *iEgM* asked for a clearer explanation of the editing process.**
We create a new subsection in section 3 that summarizes the entire procedure of image editing. This subsection provides clear explanations and also visually illustrates the method in Fig. R2. The detailed contents are as follows:
>> ### 3.4. The overall process of image editing
>> In this section, we summarize the entire editing process with five steps: 1) The input image is inverted into initial noise $\mathbf{x}_T$ using DDIM inversion. 2) $\mathbf{x}_T$ is gradually denoised until $t$ through DDIM generation. 3) Identify local latent basis $\{ \mathbf{v}_1, \cdots, \mathbf{v}_n \}$ using the pullback metric at $t$. 4) Manipulate $\mathbf{x}_t$ along one of the basis vectors using the $\mathbf{x}$-space guidance. 5) The DDIM generation is then completed with the modified latent variable $\tilde{\mathbf{x}}_t$. Figure R2 illustrates the entire editing process.
>>
>>In the context of a text-to-image model, such as Stable Diffusion, it becomes possible to include textual conditions while deriving local basis vectors. This aligns all the local basis vectors with the conditioning text. Comprehensive experiments can be found in Section 4.1.
>>
>>It is noteworthy that while our approach involves moving the latent variable within a single timestep, it achieves semantically meaningful image editing. In addition to the image manipulation within a single timestep, the direct manipulation of the latent variable, $\mathbf{x}_t$, of diffusion models is a pioneering approach to the best of our knowledge.
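The five-step procedure quoted above could be organized as in the following skeleton. All functions here are hypothetical stubs acting on toy arrays, intended only to show the control flow; they are not the paper's implementation and would need to wrap real DDIM and model code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stubs -- each would wrap real DDIM / model code.
def ddim_invert(image):                   # 1) image -> initial noise x_T
    return image + 0.1 * rng.standard_normal(image.shape)

def ddim_denoise_until(x_T, t):           # 2) partial DDIM generation
    return 0.9 * x_T

def local_latent_basis(x_t, n=1):         # 3) pullback-metric basis (stub)
    v = rng.standard_normal(x_t.shape)
    return [v / np.linalg.norm(v)]

def x_space_guidance(x_t, v, gamma=1.0):  # 4) shift along a basis vector
    return x_t + gamma * v

def ddim_generate_from(x_t, t):           # 5) finish DDIM generation
    return x_t

def edit(image, t=0.5):
    x_T = ddim_invert(image)
    x_t = ddim_denoise_until(x_T, t)
    v1 = local_latent_basis(x_t)[0]
    return ddim_generate_from(x_space_guidance(x_t, v1), t)

edited = edit(np.zeros(4))
```

Note that only step 4 modifies the latent variable, and it does so within a single timestep, matching the description above.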
---
### 3. Comparative experiment to other state-of-the-art (SoTA) editing methods
**Reviewers *iEgM* and *6jtZ* provided constructive discussion about including comparative analysis for qualitative comparisons and runtime aspects.**
We conduct qualitative comparisons with text-guided image editing methods. Our state-of-the-art baseline methods include: (i) [SDEdit], (ii) [Pix2Pix-zero], (iii) [PnP], and (iv) [Instruct Pix2Pix]. All comparisons were performed using the official code. **Please refer to Fig. R3 in the PDF for the qualitative results.**
We also compare the time complexity of each method. For a fair comparison, we only identify the first singular vector $\mathbf{v}_1$, i.e., $n=1$, and set the number of DDIM steps to 50. All experiments were conducted on an Nvidia RTX 3090. The runtime for each method is as follows:
| Image Edit Method | running time | Preprocessing |
|:-------------------------:|:------------------:|:-------------------:|
| Ours | 11 sec |N/A |
| SDEdit | 4 sec |N/A |
| Pix2Pix-zero | 25 sec |4 min* |
| PnP | 10 sec |40 sec** |
| Instruct Pix2Pix | 11 sec |N/A |
(*: generating 100 prompts with GPT, obtaining embedding vectors, **: storing feature vectors, queries, and key values)
The computation cost of our method remains comparable to other approaches, although the Jacobian approximation takes around 2.5 seconds for $n=1$. This is because we only need to identify the latent basis vector once at a specific timestep. Furthermore, our approach does not require additional preprocessing steps like generating 100 prompts with GPT and obtaining embedding vectors (as in Pix2Pix-zero), or storing feature vectors, queries, and key values (as in PnP). Our method also does not require finetuning (as in Instruct Pix2Pix). This leads to a significantly reduced total editing process time in comparison to other methods.
**References**
[SDEdit] : SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng et al., 2021
[Pix2Pix-zero] : Zero-shot Image-to-Image Translation, Parmar et al., 2023
[PnP] : Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation, Tumanyan et al., 2022
[Instruct Pix2Pix] : InstructPix2Pix: Learning to Follow Image Editing Instructions, Brooks et al., 2022
Pdf: /pdf/cf38aa665fab3bd02d850b821b1976a970df7c16.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Batchnorm Allows Unsupervised Radial Attacks | Accept (poster) | Summary: This paper shows that for batch normalized deep image recognition architectures, intermediate latents that are produced after a batch normalization step by themselves suffice to produce adversarial examples using an intermediate loss solely utilizing angular deviations, without relying on any label. The success of the proposed method implies that leakage of intermediate representations may create a security breach for deployed models, which persists even when the model is transferred to downstream usage. The proposed attack also empirically works even for transformer architectures (ViT) that employ Layernorm over batchnorm.
Strengths: 1. The proposed method is well motivated and technically sound.
The proposed loss is motivated by the geometry of batch normed representations and their concentration of norm on a hypersphere and distributional proximity to Gaussians. Through theoretical analysis, the authors show the contribution of batch normalization to an unsupervised radial attack and the optimal layers for the latent to lie near the end of the network. These theoretical findings are also supported by their experimental results. Therefore, the proposed method is well motivated and technically sound.
2. The experimental results are promising.
Comparing to single-stage attacks, the proposed method can obtain higher attack success rates, which shows the effectiveness of the proposed attack method and loss function.
Weaknesses: 1. The significance of the proposed method is not clear.
While the proposed method can get rid of the reliance on label information, as shown in their experiments, the obtained attack success rates are inferior to well-recognized attacks, i.e., PGD. Therefore, it is unclear what is the significance of the proposed method. Since the authors consider white-box settings, attackers can use white-box attacks, like PGD, with predicted labels of the target images, which can also avoid the usage of ground-truth label information. A potential application of the proposed method seems to be the transfer learning setting. However, from the experimental results, i.e., Table 10, the proposed method is also inferior to PGD. Therefore, it is unclear what the proposed method can be used for.
2. Some claims are not validated or clearly explained.
1) The authors failed to clearly explain the meaning of Figure 1, which is closely related to their motivation. The notation of Figure 1 is different from that in the text, which can be confusing.
2) The authors claimed that “the 2-step process is key.” However, they failed to explain why the proposed 2-step process is better than 1-step methods.
3) The meaning of lines 171-173 is confusing, can the authors further clarify their claims?
4) When we have a different latent Zj from a point Xj of a different label from Xi, the authors proposed to use this latent in their loss function to generate adversarial samples. However, this scheme is inferior to the proposed method. Can the authors explain why?
3. Presentations can be improved.
1) The authors split the attack results of their method and baseline methods into different tables, i.e., Tables 1-4 and Table 8, which makes it inconvenient to compare different attacks. The authors can merge these tables for better readability.
2) The authors claimed one of their contributions as “To improve the supervised attack case when labels are available by using our loss in conjunction with the loss from the true label.” However, the corresponding experimental results are put into the appendix. This reviewer would suggest the authors put these experimental results into the main paper to support their claim.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
1. What can the proposed method be used for?
2. Why is the proposed 2-step process better than 1-step methods?
3. The meaning of lines 171-173 is confusing, can the authors further clarify their claims?
4. When we have a different latent Zj from a point Xj of a different label from Xi, the authors proposed to use this latent in their loss function to generate adversarial samples. However, this scheme is inferior to the proposed method. Can the authors explain why?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations:
The authors have acknowledged the limitations of their work that it cannot be easily extended to networks utilizing alternative forms of normalization such as LayerNorm or networks that employ no normalization whatsoever. It is acceptable that this study focuses on a specific but widely used module in modern neural networks. Please see the weaknesses section for the other limitations of this work.
The authors have discussed the potential negative societal impact of their work. The authors also proposed a mitigation strategy to alleviate potential harm of this study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. Our responses to the weaknesses are as follows:
Weakness 1: The proposed method is most suitable for cases where a feature extractor (known to us) has been used to train a model (unknown to us). In these cases, the first K layers of the new fine-tuned model are generally left unchanged. Hence, we must attack the model without knowing anything except what the first K layers are. In fact, not even a predicted label is usable here, as the feature extractor emits representation vectors rather than labels. This case arises naturally in scenarios such as:
- Knowing a model is fine-tuned from e.g. Resnet => the first few layers are likely to be unchanged, thus this applies
- Knowing a model uses a ready-made representation model and then trains on top of it
Our attack is successful in that scenario whenever these first K layers utilize batch norm or layer norm, which covers a significant fraction of all models. We believe this scenario will become increasingly common as pre-trained and fine-tuned models become the norm instead of training from scratch. Note that in this case, PGD is not applicable, as we do not know the full new model and cannot even obtain soft labels. In fact, we may not even know the new set of labels: if we pre-train on ImageNet and are given a new model fine-tuned for CIFAR-10, we can use our attack without knowing that the downstream task is CIFAR-10 or that it has 10 categories, whereas the soft-label approach requires knowing the label structure of the downstream dataset.
2.1) We apologize for the oversight. The notation is different as the complete notation had not been introduced at this point in the paper, however, we believe that the figure should be as early as possible for introducing visual intuition to the reader. We will move the notation earlier and align the caption going forward.
2.2) For our attack, we explicitly state why the simple 1-stage solution does not work under angular similarity - If the angular loss is used directly, the exact hyper-spherical condition has a differentiability problem (Section 2.1). Of course, *other* attacks might be 1-stage, but they are empirically worse, as we show in our comparison to single-stage attacks (appendix O)
2.3) Regarding lines 171-173: Suppose we are in a simple regime where we have an input $X$ with output either 0 or 1. For simplicity, say $X_1$ maps to 0. We are told that all valid inputs $X$ lie on a hypersphere, that opposing points on the hypersphere are the most dissimilar, and that given $X_1$, we must find an $X_2$ with the other label, i.e., 1. In the batch-normed regime, the center of the hypersphere is the origin. In the non-batch-normed regime, there is not enough information to solve the problem, but in the batch-normed regime we can reflect $X_1$ to get $X_2 = -X_1$ as our candidate for label 1. The point of these lines is to show that batch norm gives us one extra piece of information, which can be enough leakage to weaken robustness even in toy cases.
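The reflection argument can be checked numerically: for points on an origin-centred hypersphere (as batch norm induces), the reflection $-X_1$ attains cosine similarity $-1$ with $X_1$, while random same-sphere candidates stay far from that minimum. This is a toy illustration of the geometry, not the attack itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=64)
x1 /= np.linalg.norm(x1)  # a point on the unit hypersphere

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1000 random points on the same hypersphere as alternative candidates
candidates = rng.normal(size=(1000, 64))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
best_random = min(cos_sim(x1, c) for c in candidates)

reflection = cos_sim(x1, -x1)  # -1: the most dissimilar point on the sphere
```

In moderately high dimension, even the best of 1000 random candidates is far from the reflection's similarity of $-1$, which is why the reflected target is a strictly better movement direction than a randomly drawn point of another class.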
2.4) Consider the toy case of 2.3, and suppose the labels are one of 0, 0.01, and 1 (assume that the similarity of the labels reflects the numerical gap). In this case, given a point of label 0, a random point of a different label might be one with label 0.01. Due to the high similarity of the two labels, there is no effective movement, and when points of a different label are drawn at random, there is no way to guard against this. Thus, in the case where $X_2 = -X_1$ is the most dissimilar point to $X_1$ and all similarity metrics are angular, a differently labeled $X_3$ might yield only a small angular movement compared to moving towards $X_2 = -X_1$. Especially when $X_3$ and $X_1$ belong to closely related classes (e.g., cat and dog vs. airplane), the reflection $X_2 = -X_1$ is a more effective target than a randomly selected element of another class.
3.1) We will certainly merge these tables. Thank you for the catch.
3.2) These results will be put in the main paper instead.
We hope this addresses the questions / weaknesses. Please let us know if these address your concerns. Note that the questions have been answered under weaknesses (Question 1 as weakness 1, Q2 as 2.2, Q3 as 2.3, Q4 as 2.4).
With respect to your "Limitations" section, we would also like to note that Layernorm is empirically vulnerable to our attack and analyzed in the appendix for our method.
We hope we have addressed your concerns in full and resolved your issues with the paper - if not, please let us know.
---
Rebuttal Comment 1.1:
Title: Follow-up response
Comment: Dear reviewer, please let us know if the above rebuttal satisfied your queries, since the discussion period will now draw to a close. We would like to respond to any further queries or clarifications before then.
---
Rebuttal Comment 1.2:
Comment: Thanks for your detailed response. Most of my questions have been clarified.
However, I think that the claimed use case of the proposed method is still unrealistic. The attacker needs to know the exact feature extractor that the victim uses. During fine-tuning, the victim also needs to keep the feature extractor unchanged. More importantly, in the transfer learning setting (Table 10), the proposed method is also **inferior** to PGD. Besides, I think the authors should add all these clarifications and experiments requested by the reviewers to their paper to improve its contribution and presentation. Therefore, I will keep my score.
---
Reply to Comment 1.2.1:
Title: Reply to the response
Comment: Dear reviewer - knowing the exact feature extractor (up to the initial layers) is not at all an unrealistic scenario and is much easier to guess/leak than the actual weight matrices of the feature extractor, which is infinitely more complicated. A large fraction of vision models use Resnet, ViT etc. as a feature extractor in the sense of a fine-tuned base, and in NLP, the same had happened with BERT embeddings and now GPT models. (In fact, for recent large language models, one can even now *directly ask* the model if it is a language model trained by OpenAI, etc. to leak what kind of model it was initially fine-tuned from). Note that our adversarial examples also transfer well (appendix J) and it will suffice to simply know the family (i.e. Resnet) over the specific model (i.e. Resnet-18) which is quite a plausible leakage model.
Moreover, during fine-tuning, we are *not* saying that the entire feature extractor is unchanged, but that the first $k$ layers of the model used remain unchanged and frozen, which is actually in line with what happens, as generally only the last layers are fine-tuned in the pre-train setting.
Finally, the PGD advantage in transfer learning arises in the case where the full model is known. PGD is therefore not directly comparable to our method at all: it uses the full model, while we do not. It was not included as a fair competitor in the partial-knowledge scenario discussed above, but only to demonstrate a "ceiling". So there is in fact no inferiority to PGD: the two attacks occupy different roles and require different information.
We will put all this in the paper in terms of content. We hope this further clarifies your doubts.
---
Rebuttal 2:
Title: Reminder from AC
Comment: Dear reviewer,
The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper.
Thanks, Area Chair | Summary: In this paper, the authors proposed a label-free attack utilizing a portion of all layers, which does not require gradient access to the full model; the generated adversarial examples also generalize to the case where the model is fine-tuned afterwards. These results are relevant at the intersection of the theory and practice of adversarial robustness and the growing study of batch normalization and its weaknesses and drawbacks.
Strengths: The proposed attack method is label-free, and it is quite a strong one. Moreover, the authors use a diagram to demonstrate the idea of their proposed method, which is very clear. The paper is well written and easy to follow. Furthermore, the authors extended their work to the layer normalization case, showing that the proposed method is quite generic.
Weaknesses: The proposed method only tested in the white-box setting, which means the adversary has the access to all parameters of the model, while in practice, this might be not practical, as usually the service provider only provides the api for the service. I was wondering how the model performs under the black box setting.
The paper mainly presents experimental results, but lacks theoretical insight into why the method would intuitively work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weaknesses. Besides, I was wondering how the initial stage affects the attack results: say we have a total budget of gradient calls of $L$; to best use this method, how should we trade off $t_{\textrm{init}}$ and $t_{\textrm{radial}}$?
In figure 2, it shows that the method does not work well on fixup resnet. I was wondering if the authors could conduct more experiments on different architectures to have more evidence.
Some typos: for example
Line: the line above Line 145 should read $\frac{\langle z, z' \rangle}{\|z\| \, \|z'\|},$ and Line 145 should start with `where`.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your in-depth review. We address your concerns below :
Regarding weakness 1: This is right; sometimes the service provider does not provide details. However, we often know enough about the model's initial layers, as they are frequently based on some available public model. For example, Resnet, used here, has been used to initialize many different CV models. If the CV model is only provided via a service but we know it is Resnet-based, we can attack it with this method. Note that the adversarial examples exhibit good transfer properties (appendix section J), and simply knowing it is a Resnet leaks enough information for us to attack it.
Regarding weakness 2: In theoretical ML, the claim that BN induces norm concentration is currently a conjecture with strong partial evidence behind it, as shown in our key citation of Daneshmand et al. In fact, we believe this work (https://arxiv.org/abs/2106.03970 - NeurIPS 2021) to be sufficiently motivating from a theoretical point of view for our attack, as we argue in section 2.2. Due to space constraints in the main paper, we could not explain ourselves fully; we promise to expand and build on this connection. We would urge the reviewer to look at the orthogonality-gap concept in the linked paper, which motivates our attack, especially its section 4.1 and appendix F, which conjectures that the Gaussianization hypothesis extends to networks which are not linear, i.e., the class we consider. **Despite being aware of these theoretical motivations, we did not wish to simply repeat a different paper's contents, and so only cited Daneshmand.** For lack of space, these connections were cut from the main text. If you feel it is more appropriate to re-tell this story in the main paper, we will do so and write out the motivation following the linked paper to explain why the attack already has a strong partial theoretical motivation in the literature.
To be clear, fully (not partially) proving the Daneshmand et al. conjecture would be a separate work in its own right, in theoretical ML. We focus on the empirical side, showing that the partial conjecture is enough to create a good attack. But we want to make clear that the theoretical motivation does exist, only in a different paper that we have cited. We will summarize its main content in the paper going forward to make this connection very clear.
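The norm-concentration phenomenon behind the conjecture is easy to observe empirically. The sketch below (an illustration with arbitrary synthetic activations, not the paper's experiment) applies per-feature batch normalization to a random batch and checks that the per-sample norms cluster tightly around $\sqrt{d}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 256, 512
# arbitrary un-normalised activations with non-zero mean and large scale
x = rng.normal(loc=3.0, scale=5.0, size=(B, d))

# batch normalization: per feature, over the batch
x_bn = (x - x.mean(axis=0)) / x.std(axis=0)

norms = np.linalg.norm(x_bn, axis=1)
rel_spread = norms.std() / norms.mean()
# norms concentrate: mean near sqrt(d), relative spread only a few percent
```

After BN each feature has zero mean and unit variance across the batch, so each sample's squared norm is a sum of $d$ roughly independent unit-variance terms, giving $\|z\| \approx \sqrt{d}$ with relative fluctuations of order $1/\sqrt{d}$; this is the hyperspherical structure the attack exploits.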
Regarding questions :
1) We have performed some $t_{init}$ and $t_{radial}$ ablations, and attach them appropriately.
2) We have already performed experiments on several other architectures beyond Resnet as well, like ViT, Efficientnet, VGG16, and have put the results in the paper and the appendix. If you could name the *specific* architecture you wish to see that is not already present, that would be helpful - we will provide it.
3) Thank you for catching the typos, we will certainly correct them !
**Results upon varying the initial and angular loss tradeoff** :
All results were obtained on the vanilla Resnets. The difference is expressed relative to the original case (20 initial iterations). The number of iterations varied below is the number of such initial iterations. Positive values indicate a stronger attack. There is no clear trend, except that 20 is actually not the optimal value (15 is slightly stronger). Changes in top-1 accuracy appear as the main value and top-5 in parentheses, for all these Resnets with $\epsilon = 0.03$, as that $\epsilon$ avoids accuracy going to $0$, making the changes clearly visible.
| Resnet | 5 | 10 | 15 | 20 | 25 | 30 |
|--------|------------|-----|-----|----|-------|----|
| 18 | -0.5(-0.6) | 0.1(1.3) | -0.2(0.5) | 0.0 | 0.6(-1.4) | 1.7(0.1) |
| 34 | 1.5(0.4) | -1.5(-0.5) | 0.8(0.3) | 0.0 | -1.3(1.1) | -1.9(1.7) |
| 50 | 1.9(-1.6) | -1.3(-0.4) | 1.5(1.7) | 0.0 | -0.1(-1.4) | 0.5(-0.5) |
| 101 | 1.3(-0.4) | 0.1(1.3) | 1.8(-1.0) | 0.0 | -1.1(1.3) | -0.8(1.6) |
| 152 | 0.6(-0.3) | 1.6(1.1) | 1.2(0.2) | 0.0 | -0.1(-1.3) | -0.3(-0.5) |
Let us know if you have any other concerns. We hope we have addressed your issues regarding this paper appropriately.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Hi, please let us know if the above points adequately address all your concerns with this paper. If not, please let us know so that we can respond and clarify for any further issues. | Summary: The authors present and evaluate an algorithm to construct adversarial examples without labels by minimizing the cosine similarity between intermediate layers. The authors show that the attack only works with BatchNorm.
Strengths: - The paper presents strong evidence that the attack is successful on multiple datasets and with multiple architectures
- Despite not achieving state-of-the-art attack success rate when compared to attacks with labels, the paper demonstrates that attacks on BatchNorm architectures are possible without knowledge of the labels, which is required by previous methods
- Paper is generally well written and thorough
Weaknesses: - There are some clarity issues regarding the explanation for the attack, see Questions
- The authors make the assumption that norm is concentrated, and make an argument that it should be concentrated based on statistical assumptions about the data, but it seems that this should be verified empirically by plotting the distribution of the norms, rather than relying on an imprecise argument.
- The assumptions aren't verified to be necessary. It would be helpful to include an ablation where the norm concentration is broken.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - I don't fully understand why the attack relies on concentration of norm. It's true that without concentration of norm then for a latent representation Z that there is no unique "most dissimilar point", but presumably points opposite Z would still be dissimilar without this assumption.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Authors address limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. We would first like to note that you state "The authors show that the attack only works with BatchNorm." Empirically, the attack is successful on other architectures - we carry out experiments on Layernorm architectures (ViT).
Regarding weaknesses :
Weakness 1 : Moved to question, and addressed under questions.
Weakness 2 : Our assumption on the concentration of norm is actually tested in the appendix (page 27) in Tables 49 and 50 empirically. While plots can provide visual intuition, statistical measures are more definitive and we show that FixUp resnets do not concentrate to the same degree normal ones do.
Weakness 3 : Please note that Appendix L contains ablations of non-concentrated layers unrelated to batch norm linearly inside the batch normalized architectures already in table 44, and might fit your needs. Another relevant ablation appears in page 12 of the appendix (table 28)
In general, regarding the necessity of concentration: it is very difficult to break norm concentration in a way that does not also change other aspects of the setup, making it difficult to test this assumption beyond what we have done in appendix L and with FixUp. We have provided evidence that FixUp, for instance, breaks both the attack and the concentration (see also the point above). While this **could** result from other attributes of FixUp, the same criticism applies to any intervention that removes norm concentration. If we remove batch norm and use no normalization at all, we break concentration, but also everything else that batch norm provides to the model. As such, we believe appendix L should meet your requirements.
**If you are not satisfied with the ablations provided, you may provide details about what you would like to see in this ablation, we can try to test it and report here in time.**
We would also note that in general, if an attack proves to be successful outside the conditions we think are necessary, i.e. a concentrated norm, it makes the attack *stronger*, not weaker, from a purely empirical point of view as it allows attacking non-normalized models at least some of the time - even though it does mean the causation might be more complicated.
Re - Question : To be clear, we are **not** saying that the attack necessarily fails in the absence of norm concentration. We are saying that a simple, easily available piece of information like "This network uses batch norm" allows us to conclude concentration of norm is likely, and thus guess our attack might work. Whereas, in the absence of such a guarantee, we are simply less certain about its success. We are not saying it will fail in such a case. In other words, if our attack fails, the norm likely is not concentrated, but if the norm is not concentrated, it does not mean the attack is sure to fail. For example, our attack transfers to VGG16 (Appendix, page 16), which is batch-norm-free.
We hope this addresses your concerns with the paper, while clarifying and resolving any outstanding issues.
---
Rebuttal Comment 1.1:
Comment: I had missed these in the appendix, thanks for pointing this out. Having read the rebuttals and the other reviews, I would like to keep my current score, which is to recommend acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: Thank you for the reply. We are glad you found the appendix useful and relevant. | Summary: This paper proposes an adversarial attack (angular attack) method that does not require any label information and works by only accessing the network's first part (up to a specific layer). The attack is based on the assumption that the BN layer converges and forms a hyperspherical latent space, where an angular loss is applied to guide the direction of the adversarial updates.
Strengths: 1. The proposed angular attack requires no labels, and only partial access to the network.
2. The hyperspherical assumption about the geometry of BN makes sense. Both positive and negative results (e.g., angular attack on Fixup ResNet) support this assumption.
Weaknesses: 1. Although Figure 2 is straightforward and informative, the other table results, including those in the appendix, are unclear. For example, it is unclear whether the numbers in Table 9 are absolute mean correlation or the fall in absolute correlation. Table 4 and Table 47 in the appendix are identical tables that only report the “max” over seven baseline methods. I think it makes more sense to include the full results of each baseline.
2. Some ablation studies are missing that I think should be included in the experiment section: (a) how the number of iterations affects the method (currently, the experiments only use a default of 40 iterations); (b) the current angular loss is computed by averaging the loss over the last two layers; I'd like to see results from computing losses over more layers or over just the last layer.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. Table 10 (b) shows that the targeted attack performs worse than the original angular attack. Does that mean the proposed attack method cannot be used for targeted attacks (when some label information is available)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hi, thank you for the review. We address the points raised as follows:
Weakness 1 : Table 9 denotes the absolute correlation. It is meant to demonstrate that this value falls monotonically.
Yes, Tables 4 and 47 are identical; we apologize for this confusion. We have broken out the results of each method in the common rebuttal and will include this in the appendix. Due to lack of space, we used the max approach for the main paper's table.
Weakness 2: Yes, we chose 40 iterations as this is a commonly used default across various papers, e.g., the original work on PGD. We attach ablations with different iteration counts.
The averaging only occurs over the last 2 layers due to our preference of keeping things simple - we could certainly optimize over the parameter. Our point was to show that even without this optimization, the attack performs well. We attach ablations over this.
Question :
No, it does not mean that we cannot use the label information. It only means that when we randomly pick an instance of a different label, the angular attack often does better. But, the *combination* of label and angular attack can do better than either attack, so having an instance of a different label can still be very helpful. We provide some cases where our loss is provided in ensemble with the targeted loss and these perform better than either.
All ablations below were performed on vanilla Resnets with $\epsilon=0.03$, as this is where the accuracy fluctuates the most; at a higher epsilon, many accuracies go directly to zero.
**Table of attacks**
In the order in which they are cited (appendix O, lines 219-220). Each value in a column represents the added delta from the lowest (i.e., the gap by which a method is worse). All tests use $\epsilon = 0.03$; top-1 as the main value, top-5 in brackets. "-" denotes the optimal method.
| ResNet | | | | | | | |
|--------|----------|----------|----------|----------|----------|----------|----------|
| 18 | - | 6.8(2.23) | 6.97(6.54)| 3.96(2.33)| 3.6(2.49) | 1.08(1.1) | 3.78(2.55)|
| 34 | 3.9(6.31)| 3.07(3.36)| 3.03(1.73)| 2.26(3.2) | 1.95(1.96)| 2.77(4.63)| - |
| 50 | 0.32(0.04)| 5.55(7.57)| - | 1.97(0.33)| 3.58(4.16)| 0.63(7.49)| 2.37(4.65)|
| 101 | 3.18(6.04)| 3.27(3.6) | 5.55(6.22)| 7.55(8.65)| 1.05(1.68)| - | 3.14(5.01)|
| 152 | 3.34(6.15)| 2.45(3.64)| 3.24(5.28)| 1.18(1.95)| 6.91(9.91)| 1.06(2.62)| - |
**Table of iterations ablated**
Negative values indicate worse performance relative to the baseline (40 iterations). There is some gain in moving to 50 iterations; however, standard evaluation practice uses 40, so we did not optimize over this step. As above, we use vanilla ResNets on ImageNet, $\epsilon = 0.03$, top-1 (top-5 in brackets).
| ResNet | 30 | 40 | 50 |
|--------|--------------|--------------|--------------|
| 18 | -0.52(-0.43) | 0.0(0.0) | 0.38(1.97) |
| 34 | -1.36(-0.7) | 0.0(0.0) | 0.3(0.87) |
| 50 | -1.16(-0.34) | 0.0(0.0) | 0.28(0.02) |
| 101 | -0.21(-0.02) | 0.0(0.0) | 0.63(0.74) |
| 152 | -1.56(-0.19) | 0.0(0.0) | 0.11(1.86) |
**Number of layers averaged over**
As above, negative values are worse relative to the baseline (averaging over 2 layers). Vanilla ResNets on ImageNet, $\epsilon = 0.03$, 40 iterations, top-1 with top-5 in brackets.
| ResNet | 1 | 2 | 3 | 4 |
|--------|---------------|----------|--------------|--------------|
| 18 | -1.37(-1.21) | 0.0(0.0) | 1.09(1.79) | 3.49(2.93) |
| 34 | -2.64(-3.08) | 0.0(0.0) | 2.95(2.67) | 3.47(0.93) |
| 50 | -2.79(-2.95) | 0.0(0) | 2.8(2.15) | 3.8(3.18) |
| 101 | -3.27(-3.55) | 0.0(0.0) | 0.47(0.31) | 0.39(0.5) |
| 152 | -2.74(-2.98) | 0.0(0.0) | 0.66(0.45) | 0.83(0.33) |
We can clearly see that 2-layer averaging is quite helpful. Bigger gains are possible with 3 or 4 layers, but at the cost of more evaluations.
**Ensembling**
These gains are reported as above: vanilla ResNets on ImageNet, $\epsilon=0.03$, 40 iterations, top-1 with top-5 in brackets. Positive values indicate gains. We simply combine the targeted and the angular loss to produce these results.
| ResNet | Gain |
|--------|---------------|
| 18 | 3.38(1.62) |
| 34 | 0.88(2.05) |
| 50 | 0.64(3.81) |
| 101 | 0.89(1.79) |
| 152 | 0.99(2.1) |
Please note that a similar ablation with PGD also appears in the appendix (Appendix N).
We hope that our rebuttal answers all your queries and resolves your concerns with the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. The additional results answered my questions, and I am happy to raise my score.
However, I'd like to add that ablation studies are essential to show how sensitive the proposed method is to different hyper-parameters and to help readers understand the method better, so I can't entirely agree with the authors' argument for why they didn't conduct them in the first place.
In addition, I agree with Reviewer ykjk that the presentation of this paper should be improved. Currently, the numbers in the tables (including those in the rebuttal) are not straightforward to understand. I personally don't think it's a good idea to report only the "delta" value without the baseline number: if the baseline number is small, even a small delta can represent a large relative difference. It's also hard to compare different methods because they are not in the same table.
---
Reply to Comment 1.1.1:
Title: Reply to the reviewer
Comment: We are glad to hear that the rebuttal addresses your concerns, and glad that you have raised your score. Thank you!
Yes, we should have clarified this point. Our initial intent in fixing the hyperparameters was to show that we had not cherry-picked them; the ablation studies now make this explicit.
Due to the large number of models being compared, it is hard to convey all the figures in one table, but we will certainly make efforts in this direction. We used delta values rather than raw values because we felt they were clearer; if the reverse is true, we will go back to raw values. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank you for your reviews, and have responded to each of you directly rather than via a common rebuttal. Please let us know whether your concerns have been fully addressed. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Continuous-Time Functional Diffusion Processes | Accept (poster) | Summary: In this work the authors propose an extension to the diffusion models framework for function spaces.
In particular, given a Hilbert space $H$, they introduce a noising process based on a Wiener process on $H$. Depending on some assumptions on the type of noise, they then show the existence of the time-reversal process. Then they derive an ELBO via an extension of the Girsanov theorem.
Later, akin to the classical Shannon-Nyquist sampling theorem, they identify a subclass of functions and grids for which a function is uniquely determined by its values on the grid.
Eventually, the process is discretised at the evaluation points, yielding finite dimensional noising and denoising processes.
They apply their model to the CelebA dataset, and show that with relatively few parameters they are able to obtain a low FID.
Strengths: - I found it quite easy to follow the submission, especially Section 2 which I think is well fleshed out.
- The proposed methodology is rigorous and sound.
- Theorem 2 is nice and potentially useful as it gives sufficient conditions to recover the original function.
Weaknesses: - Perhaps the main weakness is the limited experimental setups and lack of ablation studies. As such it is hard to really understand the practical benefits of this specific approach.
- In particular, one advantage of working with functional data is the ability to handle data discretised at arbitrary resolutions or on varying and irregular grids, yet this work practically focuses on the CelebA dataset. It is interesting to see the parameter efficiency gained by representing the data as functions, but there is no hope of beating methods that assume a fixed-grid setting and can rely on specialised architectures (e.g. U-Net). It may be of interest to tackle tasks such as upscaling, data with missing values, irregular time series, etc.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - line 98: Why does $R$ have to be diagonal?
- line 105: Is it correct to interpret 'trace-class' as follows: we have $\mathrm{Tr}(R) = \sum_i \lambda_i = \mathrm{Var}(W_t)$, i.e. the trace of the operator is the sum of its eigenvalues, which is itself the variance of the $R$-Wiener process, so this can be seen as a finite-variance constraint? That seems pretty natural.
- Equation 3: How can we define a density $\rho_t$ since there is no 'Lebesgue-like' (translation invariant) measure on infinite dimensional Hilbert space? Is it w.r.t. a Gaussian measure?
- Table 1: It is nice to see that FDP can be parameter efficient. Is the reason for not trying larger architectures computational? If so, what is the bottleneck? Would $\infty$-Diff with 1M parameters perform as well?
- line 333: The MLP architecture seems very deep, have you tried something perhaps a bit shallower yet wider (e.g. 512 neurons)?
- Section 6: What noise $R$ was used for this experiment? From a Karhunen-Loeve perspective would make sense to use the correlation of the data process see [Angus et al 2022].
Spectral Diffusion Processes, Phillips, Angus and Seror, Thomas and Hutchinson, Michael and De Bortoli, Valentin and Doucet, Arnaud and Mathieu, Emile, 2022
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Something that has not been explored, yet can be left for future work, is conditional sampling, which I believe is often of interest, e.g. in the neural processes literature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Perhaps the main weakness is the limited experimental setups and lack of ablation studies. As such it is hard to really understand the practical benefits of this specific approach.
In particular, one advantage of working with functional data is the ability to handle data discretised at arbitrary resolutions or on varying and irregular grids, yet this work practically focuses on the CelebA dataset. It is interesting to see the parameter efficiency gained by representing the data as functions, but there is no hope of beating methods that assume a fixed-grid setting and can rely on specialised architectures (e.g. U-Net). It may be of interest to tackle tasks such as upscaling, data with missing values, irregular time series, etc.*
We do agree with the reviewer that additional experimental validation could strengthen the work. Please refer to the shared answer, as well as the additional pdf, where extra experiments are provided.
*Questions: line 98... has to be diagonal?*
We do apologize for the imprecision: to define a "diagonal" operator one must specify a basis. There is no requirement that the operator $R$ be diagonal; it is simply a choice that simplifies the design of the SDE's coefficients. The only restriction we consider when dealing with Hilbert spaces of integrable functions is trace-class noise, which, as correctly guessed by the reviewer, simply implies a finite-variance condition.
*line 105: Is it correct to interpret the 'trace-class' as the following... Which seems pretty natural.*
This is indeed correct (see also the point above) and is an important requirement: if it is not satisfied, the existence of valid diffusion processes can be hindered, a matter not explored by some of our competitors. We will clarify this in the text of the revised manuscript.
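To make the equivalence concrete, the standard Karhunen-Loève computation for an $R$-Wiener process with eigenpairs $(\lambda_i, e_i)$ of $R$ reads (a textbook identity, not a new result of ours):

```latex
W_t = \sum_{i} \sqrt{\lambda_i}\,\beta_i(t)\, e_i
\qquad\Longrightarrow\qquad
\mathbb{E}\,\|W_t\|_H^2
  = \sum_{i} \lambda_i\,\mathbb{E}\!\left[\beta_i(t)^2\right]
  = t \sum_{i} \lambda_i
  = t\,\mathrm{Tr}(R) < \infty,
```

where the $\beta_i$ are independent scalar Brownian motions; trace-class $R$ is exactly what makes this variance finite.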
*Equation 3: How can we define a density since there is no 'Lebesgue-like' (translation invariant) measure on infinite dimensional Hilbert space? Is it w.r.t. a Gaussian measure?*
We apologize for the technical confusion. It is indeed true that on infinite-dimensional spaces there is no equivalent of the Lebesgue measure, which is why, e.g. in [Lim2023], it is only possible to define the ratio w.r.t. some reference measure (such as a Gaussian one).
In our case, our compact notation indicates the conditional density $\rho^{(d)}_t(x^i | x^{ j\neq i})$. This, on the contrary, is a one-dimensional density which does exist (it is the same object considered in [Pidstrigach2023]). In the submitted version of the manuscript (lines 110-111) we make this explicit: “To avoid cluttering the notation, we shorten ...”. We understand, however, that this is an important technical point, and we will expand the notation in the final version of the manuscript.
*Table 1: It is nice to see that FDP can be parameter efficient. Is the reason for not trying larger architecture computational? If so what's the bottleneck? Would inf- DIFF with 1M parameter be performing as well?*
We explored deeper MLP architectures and found no substantial benefits. Concerning Infty-Diff, as stressed in the main paper, the considered architectures are complex combinations of functional and non-functional blocks (such as convolutional U-Nets and transformers). Consequently, it is not straightforward to reduce the number of parameters without completely re-designing the whole architecture.
*line 333: The MLP architecture seems very deep, have you tried something perhaps a bit shallower yet wider (e.g. 512 neurons)?*
Yes, and we did not obtain better results. See also the shared answer above.
*Section 6: What noise $R$ was used for this experiment? From a Karhunen-Loeve perspective it would make sense to use the correlation of the data process, see [Angus et al 2022].*
We thank the reviewer for this interesting question. All details about the actual values of b and r are reported in the supplementary material. Our procedure for selecting the coefficients of b and r is indeed intimately related to the power spectral density of the data. The design should satisfy the following requirements: the signal-to-noise ratio (on a frequency-by-frequency basis) should decrease gracefully down to a low-SNR regime (which corresponds, roughly speaking, to the uninformative Gaussian steady state), and the selection of coefficients must satisfy the requirements of Corollary 1. We think an interesting complement to the paper will be to make explicit the connection between our methodology and the one described in [Angus2022].
Spectral Diffusion Processes, Phillips, Angus and Seror, Thomas and Hutchinson, Michael and De Bortoli, Valentin and Doucet, Arnaud and Mathieu, Emile, 2022
*Limitations:
Something that has not been explored, yet can be left for future work, is conditional sampling, which I believe is often of interest, e.g. in the neural processes literature.*
In the submitted version of the manuscript we included some conditional sampling experiments, such as deblurring, inpainting and colorization. Moreover, in this rebuttal we include some super-resolution experiments conditioned on lower-resolution images. Class-conditional experiments, possible with the proposed method with minor modifications to the considered architectures, are an interesting direction for future work.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thanks for the clarifications!
> We explored deeper MLP architectures and found no substantial benefits
Would you have any idea why? Can the architecture of Infty-Diff be used here?
---
Reply to Comment 1.1.1:
Comment: *Would you have any idea why?*
We observed that deeper INR architectures suffer from meta-learning instabilities. Transformer-based architectures, on the other hand, are more stable when increasing model depth, and show better performance.
*Can the architecture of Infty-Diff be used here?*
While it is certainly possible to consider a more specialized architecture, in the current work we focus on simple and modality agnostic architectures.
We consider a complete analysis of the different existing trade-offs an interesting topic for future works. | Summary: This work proposes *functional diffusion processes (FDPs)*, which generalizes the SDE-based continuous-time framework for diffusion models for data living in Hilbert spaces. The work builds on the theory of infinite-dimensional SDEs and their time reversals, as well as an application of the infinite-dimensional Girsanov theorem to derive an ELBO objective. A practical implementation, via discretization and implicit neural representations, is proposed and evaluated on the Celeb-A dataset.
Strengths: - The paper is generally clear and well-written throughout.
- This work adds to the growing literature on function-space diffusion models (a quite active area) and will likely be of significant interest to the machine learning community. The main theoretical portion of this work places continuous-time, SDE-based functional diffusion models on solid theoretical grounds.
- All theoretical claims throughout the paper are precisely stated and rigorously justified. To the best of my knowledge, the proofs of the claims (contained in the appendix) are correct.
Weaknesses: - The experiments (section 6) are the weakest part of the paper.
- The proposed methodology obtains significantly worse FID scores than standard (Euclidean) diffusion models. However, note that this is achieved with far fewer parameters.
- It was also somewhat unclear how the "complexity" column in Table 1 was derived, as this seems like a highly subjective measurement
- Also in Table 1, it was unclear why FID scores were being compared against FID-CLIP scores (for the model of Bond-Taylor & Willcocks, 2023).
- It was unclear if the parameter counts in Table 1 also included the parameters for the INRs, or merely the parameters for the score network.
- The relationship between this work and the various other concurrent function-space diffusion model works could be better clarified.
#### Minor comments
- The notation on line 169 was somewhat unclear to me on a first read, particularly what $\hat{\mathbb{Q}}_0, \hat{\mathbb{P}}_0$ represent
- There may be a small typo on line 169: should it read $d \hat{\mathbb{P}}^{\chi_T} = d \hat{\mathbb{P}}^{\rho_T} \frac{d \chi_T}{d \rho_T}$?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The work restricts its attention to classes of functions which can be reconstructed exactly via finite sets of samples (Sect. 3). If my understanding is correct, the observation points may vary across functions. Does the proposed practical implementation allow for this? If so, how would you expect performance of the model to change if one were to e.g. train on one discretization and sample on another, such as an image super-resolution task?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations of the work are adequately addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *The experiments (section 6) are the weakest part of the paper.
The proposed methodology obtains significantly worse FID scores than standard (Euclidean) diffusion models. However, note that this is achieved with far fewer parameters.*
Indeed, even if the FID scores obtained with FDPs are not SOTA, the variant of the score network implemented as an INR achieves remarkably good image quality with several orders of magnitude fewer parameters than SOTA methods, which can be important in some application scenarios (e.g. limited-resource devices). Moreover, our new set of experimental results, obtained with Transformer-based score networks, substantially improves FID scores at the expense of a larger number of parameters.
Note also that the FID metric is not entirely reliable, despite being widely adopted in the literature: it is prone to being heavily influenced by small perturbations that are imperceptible to humans [Gaurav]. We include some examples of non-curated samples obtained with a different functional architecture (Transformers), showing that the quality of generated images is higher than the simple FID analysis would suggest.
*It was also somewhat unclear how the "complexity" column in Table 1 was derived, as this seems like a highly subjective measurement*
We do agree that a single complexity metric could be seen as subjective. In the camera-ready version of the paper we will include details about the architectures of all the considered competitors, instead of simply using general adjectives, because this is what we meant by "complexity". For example, the work discussed in "Bond-Taylor & Willcocks, 2023" requires multiple blocks, like classical U-Net architectures.
*Also in Table 1, it was unclear why FID scores were being compared against FID-CLIP scores (for the model of Bond-Taylor & Willcocks, 2023)*
The classical FID score is not available for these competitors, so we wanted to point that out to the reader.
*It was unclear if the parameter counts in Table 1 also included the parameters for the INRs, or merely the parameters for the score network.*
We clarified this in the shared answer: the implicit neural representation network IS the score network, so this is the total number of parameters in our implementation. Using an INR to implement the score network is possible due to the alignment between the goal of score matching (a denoising task) and the intrinsic denoising capabilities of INRs (Kim et al., 2022a).
*The relationship between this work and the various other concurrent function-space diffusion model works could be better clarified.*
We agree; see also our answer to reviewer GA6N. We will dedicate more space to this analysis in the camera-ready version of the paper.
*Minor comments
The notation on line 169 was somewhat unclear to me on a first read*
We apologize for the confusion; we have added some extra English wording to the paper to clarify this.
*There may be a small typo on line 169*
Yes, thank you for spotting the error.
*The work restricts its attention to classes of functions which can be reconstructed exactly via finite sets of samples (Sect. 3). If my understanding is correct, the observation points may vary across functions. Does the proposed practical implementation allow for this? If so, how would you expect performance of the model to change if one were to e.g. train on one discretization and sample on another, such as an image super-resolution task?*
As explained in the shared part of the answer, we have many new results on this theme, with a positive answer to the questions raised.
Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. "On aliased resizing and surprising subtleties in gan evaluation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank your for the detailed response and clarifications.
After reading the other reviewer's comments, I still believe that this paper has strong contributions and will be of significant interest to the NeurIPS community. My score remains a 7 (accept). | Summary: The paper introduces a novel continuous-time diffusion-based generative model on function space. Unlike previous works in this area, which primarily focused on discrete-time formulations, the authors concentrate on stochastic differential equations (SDE) in their approach.
The paper begins by defining the forward process, also known as the noising process, as a stochastic differential equation (SDE) on function spaces. The paper subsequently demonstrates the existence of corresponding backward processes under certain conditions, similar to the formulation of finite-dimensional score-based generative models. Notably, two constraints on the forward process are required to guarantee the existence of the reverse diffusion process, which distinguishes the current work from its finite-dimensional counterparts. Firstly, the perturbation noises should be cylindrical Wiener processes, including the $R$-Wiener process with a trace-class covariance operator $R$. Secondly, an operator $\mathcal{A}$ can be present in the drift term of the forward process, which generates a strongly continuous semigroup; the semigroup serves as a contraction map.
The paper highlights the novelty of its results by emphasizing that previous approaches have not fully explored the limits of discretization. Therefore, the provided proof offers theoretical support for the utilization of various types of integrators, unlike previous approaches.
Furthermore, the authors introduce an evidence lower bound (ELBO) on the reverse diffusion process, leveraging an extension of the Girsanov theorem. This result provides a theoretical foundation for optimal guarantees, in contrast to previous approaches that relied on heuristics to minimize score matching objectives on function space.
The paper also extends the sampling theorem, enabling the reconstruction of the original function from its evaluations at a countable set of points. This extension justifies the training of function-space models with finite observations. Consequently, the authors exploit the sampling theorem to employ implicit neural representations for the model parameterizations, unlike previous methods that predominantly relied on neural operators.
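As a toy illustration of the flavor of such sampling results, here is a minimal numpy sketch of classical Shannon-Whittaker interpolation, i.e. the textbook theorem the paper's result is akin to, not the paper's own construction; the test function, grid, and all names are illustrative.

```python
import numpy as np

T = 0.05                                   # sampling period (well above Nyquist for 2 Hz)
grid_t = np.arange(-4.0, 8.0, T)           # wide grid so truncation effects stay small
f = lambda t: np.sin(2 * np.pi * 2 * t)    # bandlimited test function (2 Hz sine)
samples = f(grid_t)

def sinc_reconstruct(t):
    # Shannon-Whittaker interpolation from uniformly spaced samples:
    # f(t) ~= sum_k f(k*T) * sinc((t - k*T) / T), with np.sinc(x) = sin(pi x)/(pi x)
    return float(np.sum(samples * np.sinc((t - grid_t) / T)))

t_query = np.linspace(0.5, 3.0, 7)         # evaluation points (some off-grid)
recon = np.array([sinc_reconstruct(t) for t in t_query])
```

The interpolated values agree with the true function at arbitrary query points, which is exactly the kind of "the function is determined by its grid values" statement the sampling theorem formalizes.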
Finally, the paper demonstrates the effectiveness of the proposed method on various image generation benchmark datasets. In the experiment, they use Fourier basis functions inversely proportional to the square of the basis index for the perturbation noise.
Strengths: I believe that the paper's theoretical results will make significant contributions to the machine learning community for several reasons.
First, the existence of the reverse process ensures the optimality of common model parameterizations (Markovian), specifically those that model only the drift term of the reverse process, including score modeling.
Secondly, the SDE formulation sheds light on the utilization of various integration methods for sample generation, such as DDIM or exponential integrators.
Furthermore, the derivation of the evidence lower bound (ELBO) using Girsanov results in Equations 12 and 13. In these equations, the covariance operator $R$ is presented in the norm of the $R^{1/2}(\mathcal{H})$ space. The inclusion of this additional $R$ helps address the problem encountered when defining the Kullback-Leibler (KL) divergence for previous generative models based on discrete-time diffusion in function space.
Lastly, the sampling theorem introduces intriguing open questions regarding the parameterizations of score-based generative models in function space. This opens up the possibility of employing new network architectures beyond neural operators.
Weaknesses: Most of the content in the paper is good. However, some improvements are needed in the experiment.
In particular, the defining characteristic of a function-space model is its resolution invariance. However, an analysis of this aspect is missing from the paper's results. I believe that at least a brief analysis of the resolution-invariant properties of the learned models should be included. In such an analysis, I don't think it is necessary to rely strictly on an implicit neural representation.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Unlike neural operators, implicit neural representations are powerful, but achieving discretization invariance with them is challenging. What do you think is the best way to approach this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Please refer to the comments provided in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Weaknesses:
Most of the content in the paper is good. However, some improvements are needed in the experiment.
In particular, the characteristic of the function-spaced model is its resolution invariance. However, the analysis of this aspect is missing in the paper's results. I believe that some analysis regarding the resolution-invariant property of the learned models should be included, even if it's just a small amount. In such analysis, I don't think it is necessary to strictly rely on an implicit neural representation.*
Please refer to the common remark above.
*Questions:
Unlike neural operators, implicit neural implementation is powerful, but achieving discretization invariance is challenging. How do you think is the best way to approach this?*
In principle, INRs and meta-learning are, by design, resolution invariant. Indeed, the modulations are obtained by minimizing the average squared distance between the reconstructed and original image, an operation that is by construction agnostic to the resolution or spacing of the grid at which points are evaluated. We include in the rebuttal new experiments showcasing how INRs can be used to produce higher-resolution images.
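This grid-agnostic property can be sketched in a few lines of numpy: a coordinate network takes arbitrary $(x, y)$ points, so the same fixed weights can be queried on a coarse grid or a denser one. The SIREN-like sine layer, the random weights, and all names are illustrative placeholders, not our trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 2)) * 2.0   # toy coordinate-MLP weights
W2 = rng.standard_normal((3, 32)) * 0.1   # 3 output channels (e.g. RGB)

def inr(coords):
    # coords: (N, 2) points in [0, 1]^2 -> (N, 3) function values;
    # a SIREN-like sine layer, evaluable at ANY set of coordinates
    return np.sin(coords @ W1.T) @ W2.T

def grid(res):
    # uniform res x res grid over [0, 1]^2, flattened to (res*res, 2)
    xs = np.linspace(0.0, 1.0, res)
    xx, yy = np.meshgrid(xs, xs)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)

low = inr(grid(16)).reshape(16, 16, 3)    # "training" resolution
high = inr(grid(64)).reshape(64, 64, 3)   # same weights queried at 4x resolution
```

Because no weight depends on the grid size, upsampling amounts to evaluating the same function on more points, which is the basis of the super-resolution experiments mentioned above.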
In practice, one can expect the meta-learning procedure to present instabilities when tested at resolutions different from the training ones, as the various numerical approximation errors can accumulate and push the meta-learning procedure into instability-prone regions. | Summary: The authors propose a modeling approach called Functional Diffusion Processes (FDPs) that generalizes score-based diffusion models to infinite-dimensional function space. The authors derive the reverse-time dynamics and sampling theorems that identify a subset of functions recoverable from a countable set of samples without losing information. The authors demonstrate their approach with a multi-layer perceptron.
Strengths: 1. The theoretical parts of the paper appear to be well structured. However, the paper itself does not appear to be self-contained and requires many references to the appendices to be understood. For example, the paper makes many references to the assumptions being made, but these are never stated in the main paper and appear only in the appendix.
Weaknesses: 2. The authors claim their approach creates a new breed of generative models in function space which do not require specialized network architectures and can work with any kind of continuous data. This is in itself an interesting idea. However, after reading the introduction and looking at the experimental results, I cannot see what applications this may benefit, nor how much of an improvement a purely functional domain provides. I believe this could be better motivated.
3. This may be because I didn't fully understand the experiment in Section 6, which does not appear to be self-contained. I understand that there is a page limitation; however, I believe the experiment section is a bit compressed and difficult to understand. Having said that, the authors do have additional experiments in the appendix, which appear to be much better structured.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The theoretical parts of the paper is quite dense and I cannot say I have fully understood the paper to come up with any meaningful questions.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *The theoretical parts of the paper appears to be well structured. However, the paper itself does not appear to be self contained and has many references to the appendices to understand. For example, the paper makes many references to the assumptions being made, but they are never mentioned in the main paper and are only mentioned in the appendix.*
Thank you for raising your concern about our paper being self-contained. We understand the importance of making the paper accessible to readers.
While we aimed to maintain a concise presentation within the given page limit, we acknowledge that striking a balance between brevity and comprehensiveness is challenging. To improve the readability of our paper, we plan to incorporate a summary of the key assumptions in the main paper. This will provide readers with a more coherent understanding of the underlying concepts without relying heavily on appendices.
*The authors claim their approach is to create a new breed of generative models in function space, which do not require specialized network architectures, and that can work with any kind of continuous data. This itself is an interesting idea. However, after reading the introduction and looking at the experimental results. I cannot see what applications this may benefit and how much of an improvement it has to have a purely functional domain. I believe this could be better motivated.*
The motivations for adopting a functional perspective are multiple. First, we claim that working in a functional domain allows greater parameter efficiency and simpler architectural design, as already shown in the submitted version of the manuscript. Moreover, the approach allows using the same architectures for different modalities. We performed some preliminary experiments on audio datasets, using the exact same architectures, which we will include in the camera ready.
Finally, the functional approach allows the design (and use) of architectures which are dimension agnostic. This claim is supported in the rebuttal by the new set of experiments concerning super-resolution tasks.
*This may be because I didn't understand the experiment in Section 6 and does not appear to be self contained. I can understand that there is a page limitation, however, I believe that the experiment section is a bit compressed and difficult to understand. Having said that, the authors do have additional experiments in the appendix which appears to be much better structured.*
While it is true that we left the experimental details (such as the selection of the parameters b and r) to the appendix, we clearly stated the considered datasets, the competitors, and the metrics on which the different methods were compared. We are convinced that the information in the main paper and the appendix is sufficient to understand our experiments. Moreover, if the paper is accepted, we will release the full source code of the software implementation of our work, with instructions to reproduce the experiments we did, and additional details for practitioners interested in using our approach. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their thorough analysis of our work and for their very positive feedback on our paper.
We here provide some general comments which are common to all reviewers, and clarify the minor technical points directly in the messages to individual reviewers.
*Implicit Neural Representation: clarifications, discussion of limitations and considered alternatives*
First, we clarify that the considered INR effectively implements the score network; they are the same object (see lines 292-293).
In our experiments, we tried a variety of architectural choices for the INR (deep and narrow vs. shallow and wide architectures) and found, in accordance with literature results [Dupont2022b], that the former design choice consistently provides better results.
The reviewers correctly point out a possible drawback of the meta-learning approach for INRs, which requires local adaptation steps. It is however important to stress that our design goal is to use as few inner steps as possible (we obtain our results with only 3 inner steps). This is in line with the approach described in [Dupont2022b].
Note that we view the design of the score network as an “engineering” choice. In the submitted paper, we list possible alternatives and present results with an INR, which has the clear benefit of requiring an extremely small number of parameters. In addition, we have new original results where the score network is implemented using a simple Transformer architecture (interpreted as a mapping between Hilbert spaces, as discussed in [Cao2021]). These new results, which we will add to a possible camera-ready version of the paper, indicate improvements in terms of image quality (FID score = 11, Fig. 2 in the additional pdf) and do not require meta-learning steps, but require more parameters (O(40M)) than the INR variant. In our experiments with Transformers, we adopted the UViT backbone [Bao]. This backbone treats all inputs, whether time or noisy image patches, as tokens. Instead of using UViT's learned positional embeddings, we modified it to incorporate 2D sinusoidal positional embeddings as described in [Fan].
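As a rough, hedged illustration of the kind of fixed 2D sinusoidal positional embedding mentioned above (a generic sin/cos grid encoding; the embedding dimension, grid size, and base frequency are illustrative placeholders, not the configuration from [Fan] or the paper):

```python
import numpy as np

def sincos_2d(h, w, dim):
    """Fixed 2D sinusoidal embedding for an h x w token grid.

    dim must be divisible by 4: half the channels encode the row index,
    half the column index, each split into sin and cos components.
    """
    def sincos_1d(pos, d):
        # geometric frequency ladder, as in standard Transformer embeddings
        omega = 1.0 / 10000 ** (np.arange(d // 2) / (d // 2))
        angles = np.outer(pos, omega)               # (num_tokens, d/2)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

    rows = np.repeat(np.arange(h), w)               # row index of each token
    cols = np.tile(np.arange(w), h)                 # column index of each token
    return np.concatenate([sincos_1d(rows, dim // 2),
                           sincos_1d(cols, dim // 2)], axis=1)

pe = sincos_2d(4, 4, 16)                            # 16 tokens, 16-dim embedding
assert pe.shape == (16, 16)
# every grid position receives a distinct embedding
assert len({tuple(np.round(row, 6)) for row in pe}) == 16
```

Such embeddings are deterministic functions of the grid coordinates, which is what makes them usable at resolutions not seen during training.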
*Resolution invariance/ Different modalities*
A shared request among the reviewers is an experiment to illustrate the benefits of FDPs for an example task which can leverage the intrinsic dimensionality agnostic property of a functional representation of data.
We present new experimental results on the task of super-resolution. We demonstrate how the same INR trained in the submitted version of the paper can be seamlessly applied to increase the resolution of the generated data points (refer to Fig. 1 in the additional pdf). This is a practical approach that leverages the properties of INRs.
Moreover, we have preliminary results on different data modalities (audio waveforms) using the very same Transformer architectures considered for image data, grounding our claim that FDPs simplify the design of the score network across a variety of application domains.
In principle, nothing prevents using FDPs on datasets where data has been collected on irregular grids. Indeed, the sampling theorem described in Section 3 does not require regularity of the grid, but only that the covering is “sufficiently fine grained”. Moreover, score networks based on either INRs or Transformers have no problem dealing with irregularly spaced data (to the best of our knowledge, NFOs instead require regular spacing). The major technical problem to address in the case of irregular data is the numerical simulation of the infinite dimensional SDEs, for which efficient integration schemes are elusive. In our future work, we plan to investigate such specialized integration schemes.
*Bao, Fan, et al. "All are worth words: A vit backbone for diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.*
*Fan, Haoqi, et al. "Multiscale vision transformers." Proceedings of the IEEE/CVF international conference on computer vision. 2021.*
Pdf: /pdf/8c931c7948c8c02f3a0bd8e2b203981d51c90d5a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors introduce a framework for infinite dimensional diffusion models in Hilbert spaces, including deriving a reverse process and novel ELBO loss objective. The authors introduce a novel score model network architecture.
Strengths: - This is a timely paper with a lot of interest in the community. There are many concurrent works also looking into similar topics e.g. [1] (I do not expect detailed comparison given they were only public recently).
- Although performance is not great compared to standard diffusion methods in terms of generative modelling, the results are better than Neural Processes and similar point wise methods (with the exception of [2], concurrent so cannot criticise for worse performance)
- The use of implicit optimization based encoder, g, in equation 16, is interesting. I have some concerns detailed below but nevertheless it is interesting.
- Theoretical results seem correct but have not been checked thoroughly
[1] Infinite-Dimensional Diffusion Models for Function Spaces, Pidstrigach et al, 2023
[2] Infinite resolution diffusion with subsampled mollified states., Bond-Taylor et al, 2023
Weaknesses: - The implicitly defined INR encoder, g, detailed in equation 16, considers gradient descent in an inner layer. Will this not be quite slow and hence generation / sampling would also be quite slow?
- Empirical performance / FID scores are quite bad in comparison to other standard methods. Although interesting, is there an application where this method would be more useful than standard methods?
- Functional diffusions could be interesting for dimension agnostic diffusions, super resolution, irregular data. There are limited experiments looking into these other applications.
- Complexity in table 1 is quite subjective. I would argue parameter count does not matter if it can be run on a single GPU in reasonable wall-clock time. The implicit layer, g, with gradient descent for equation 16 is sequential, so I imagine this slows down generation significantly.
- Calling other concurrent / prior work heuristic without details. "However, these works do not formally prove the existence of a backward process and their score approximation and score matching optimization objective is heuristic". Does not [1] prove existence in Theorem 1? (I do not expect detailed comparison given they were only public recently, but given the authors claim this concurrent work is heuristic I would be interested to know more).
[1] Infinite-Dimensional Diffusion Models for Function Spaces, Pidstrigach, 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The implicitly defined INR encoder, g, detailed in equation 16, considers gradient descent in an inner layer. How many steps are taken? Is the gradient through the score network taken through these gradient descent steps? Is this not quite slow? Have the authors considered other point-wise encoders?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have a brief paragraph on limitations regarding performance, more challenging experiments, and comparisons to other approaches like NFO, but these could be further discussed. See questions and weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Weaknesses:
The implicitly defined INR encoder, g, detailed in equation 16, considers gradient descent in an inner layer. Will this not be quite slow and hence generation / sampling would also be quite slow?*
Please refer to the common remark for all reviewers, above.
*Empirical performance / FID scores are quite bad in comparison to other standard methods. Although interesting, is there an application where this method would be more useful than standard methods?*
Indeed, we obtain a FID score that is larger than the state of the art, but with several orders of magnitude fewer parameters! In addition, we have now a set of new results where the FID score is much lower (see Fig. 2, additional pdf). These new results have been obtained with an alternative implementation of the score network, based on a simple Transformer architecture. In general, the FID metric should be taken with a grain of salt, as it is not necessarily representative of the true underlying quality of the generated data [Gaurav].
Despite a rather involved mathematical formulation, the functional approach is interesting in particular for practitioners. Indeed, in principle, it allows working on any data modality using the same architecture for the score network. Standard methods, on the other hand, require careful design and fine tuning, which can be lengthy and expensive tasks. We have additional preliminary results on audio data, whereby FDPs use the same score network as for image data.
*Functional diffusions could be interesting for dimension agnostic diffusions, super resolution, irregular data. There are limited experiments looking into these other applications.*
Indeed, our goal in this paper is to develop in detail a new methodology: this requires a careful, rigorous and lengthy analysis, which sacrifices space for empirical validation.
Please refer also to the common remark above for additional comments and results.
*Complexity in table 1 is quite subjective. I would argue parameter could does not matter if it can be ran on a single GPU in reasonable wall clock time. The implicit layer, g, with gradient descent for equation 16 is sequential so I imagine this slows down generation significantly.*
First, we would like to clarify that by “complexity” we refer to the (subjective) effort a practitioner needs to design the score network architecture. Parameter count is listed separately from “complexity”. For example, the work InftyDiff requires a cascade of multiple models (NFO, kNN interpolators, convolutional U-Nets), whereas the architectures we consider in our work are much simpler (INRs are literally vanilla MLPs with sinusoidal activation functions, the new Transformer-based score network has no convolutional layers, etc.).
As a reminder, we consider the implementation of the score network as an engineering choice: INR-based score nets are small and simple, but require meta-learning, the newly introduced Transformer-based score nets are larger, but do not require meta-learning.
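To make the “vanilla MLP with sinusoidal activations” description concrete, here is a minimal numpy sketch of a SIREN-style INR forward pass mapping 2D coordinates to a scalar function value; the layer sizes, `omega0`, and coordinate grid are illustrative placeholders, not the configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, first=False, omega0=30.0):
    # SIREN-style initialization: uniform in +-1/fan_in for the first layer,
    # +-sqrt(6/fan_in)/omega0 for subsequent layers.
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega0
    return rng.uniform(-bound, bound, (fan_in, fan_out)), np.zeros(fan_out)

def siren_layer(x, W, b, omega0=30.0):
    # one INR layer: affine map followed by a sinusoidal activation
    return np.sin(omega0 * (x @ W + b))

# tiny INR: 2D coordinates -> 1 channel (e.g. grayscale intensity)
W1, b1 = init_layer(2, 64, first=True)
W2, b2 = init_layer(64, 64)
W3, b3 = init_layer(64, 1)

xs = np.linspace(-1.0, 1.0, 28)
coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)  # 28x28 grid

h = siren_layer(siren_layer(coords, W1, b1), W2, b2)
out = h @ W3 + b3            # final layer is linear (no sine)
assert out.shape == (28 * 28, 1)
```

Because the network is queried at continuous coordinates, the same weights can be evaluated on a denser grid, which is the property exploited for super-resolution.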
*Calling other concurrent / prior work heuristic without details. "However, these works do not formally prove the existence of a backward process and their score approximation and score matching optimization objective is heuristic". Does not [1] prove existence in Theorem 1? (I do not expect detailed comparison given they were only public recently, but given the authors claim this concurrent work is heuristic I would be interested to know more).*
We apologize if the message was not clear. In the concurrent work [1], the existence of the backward process is proven and the score functional is introduced. However, at the time of submission of the manuscript, and to the best of our knowledge, we are the first to show that the score matching objective is not merely a heuristic but a sound variational bound. We achieve this thanks to the infinite dimensional generalization of the Girsanov theorem.
As requested by another reviewer as well, we will expand on how our work differs from related work, clarifying these details.
[1] Infinite-Dimensional Diffusion Models for Function Spaces, Pidstrigach, 2023
*Questions:
The implicitly defined INR encoder, g, detailed in equation 16, considers gradient descent in an inner layer. How many steps are taken? Is the gradient through the score network taken through these gradient descent steps? Is this not quite slow? Have the authors considered other point-wise encoders?*
Please refer to the common remark above. In short, yes INR requires meta-learning which can be challenging, and yes, we considered different implementations of the score nets using simple Transformer architectures.
Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. "On aliased resizing and surprising subtleties in gan evaluation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Personally I would drop the complexity argument. It is highly subjective and somewhat misleading. One could argue that given Unets are so common across fields now, that it has the lowest "engineering" complexity, indeed practically there are many open source libraries for Unets used in score based generative models.
Regarding using a transformer model and data agnostic approaches. There are a number of papers doing this now, one such paper [1] takes a related but somewhat heuristic approach using a transformer (perceiver) architecture and applied across modalities and for super-resolution tasks. This should also be compared (according to open review it was published in February, neurips guidance states that only work within 2 months of submission can be ignored as concurrent [2] ).
In light of improved experiments and including audio experiments, I have increased my score to weak accept.
[1] Diffusion Probabilistic fields, https://openreview.net/forum?id=ik91mY-2GN
[2] https://neurips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the increased score and the feedback!
*Personally I would drop the complexity argument. It is highly subjective and somewhat misleading. One could argue that given Unets are so common across fields now, that it has the lowest "engineering" complexity, indeed practically there are many open source libraries for Unets used in score based generative models.*
This is indeed a valid argument, which we will include and expand in our discussion.
*Regarding using a transformer model and data agnostic approaches...*
Thanks for the suggestion, we will include the suggested reference [1] in the final version of the manuscript, clarifying the differences w.r.t. our work. | null | null | null | null | null | null |
LogSpecT: Feasible Graph Learning Model from Stationary Signals with Recovery Guarantees | Accept (poster) | Summary: This paper studies the problem of graph learning from stationary signals. The authors propose an algorithmic framework, LogSpecT, which addresses the shortcomings of an existing body of work that relies on graph learning using spectral templates.
Strengths: 1. The problem setting is graph learning for stationary graphs using spectral templates. This is a problem of interest in the graph signal processing domain and is motivated by practical applications. The major technical contribution of the paper is to solve the problem by introducing log barrier in the objective function. Introduction of log barrier facilitates tackling the graph learning scenarios that were not feasible by existing algorithms in this domain.
2. The paper is well written and the technical content is easy to follow.
3. The experiments demonstrate that LogSpecT outperforms the previous versions of spectral templates and two other baselines from the literature in the task of graph learning. Moreover, it is demonstrated that previous versions of graph learning algorithms may encounter infeasibility scenarios on real datasets with high likelihood.
Weaknesses: There may be a minor concern that the technical contributions of the paper are narrowly focused on overcoming the shortcomings of graph learning with spectral templates.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you comment on the computational complexity of LogSpecT? In particular, I am curious whether the operations involving $C_n$ in Section 5 could be computationally prohibitive for high dimensional datasets.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We now provide responses to the questions and concerns you have raised.
**Q1:** There may be a minor concern that the technical contributions of the paper are narrowly focused on overcoming the shortcomings of graph learning with spectral templates.
**Answer:** The requirement that the graph signals be stationary may appear strict. However, this assumption has gained popularity as the extension of stationarity from the regular domain to graph data [G15]. Since then, many GSP techniques have been developed under this assumption, e.g. graph learning under stationarity [SM$^+$17], low-passness detection under stationarity [ZHW23], and multi-attribute graph signal processing under stationarity [ZHW22]. Alongside these works, many real datasets have been verified to be stationary or nearly stationary. For example, [G15] showed that a part of the weather data can be explained by stationary graph signals, and [PV17] showed that the well-known USPS dataset and the CMU PIE set of cropped faces are nearly stationary. In a nutshell, the stationarity assumption has become a cornerstone in the GSP community. Moreover, it is quite insightful and interesting to push beyond the stationarity assumption and tackle the case where the graph signals are non-stationary. This is one of our future directions.
**Q2:** Could you comment on the computational complexity of LogSpecT? In particular, I am curious whether the operations involved in Section 5 could be computationally prohibitive for high dimensional datasets.
**Answer:** We consider the per-iteration complexity in the regime where the number of nodes $m$ is larger than the number of observations $n$. In the update of $Z$ and $q$, the complexity is determined by the calculation of $C_nS^{(k)}$. This can be written as $\frac{1}{n}X(X^\top S^{(k)})$, where $X \in \mathbb{R}^{m\times n}$ is the data matrix. Hence, the complexity of this step is in the order of $nm^2 + nm$. In the update of $S$, the complexity is determined by the calculation of $C^{(k)}$. The calculation of $C_n\Lambda$ can be conducted similarly, yielding a complexity of $nm^2 + nm$. For $S^{(k)}C_n^2$, we may decompose it as $\frac{1}{n^2}(S^{(k)}X)(X^\top X)X^\top$; the complexity of this step is in the order of $nm^2+n^2m+nm$. For $C_nS^{(k)}C_n$, we may calculate it as $\frac{1}{n^2}X(X^\top S^{(k)}X)X^\top$, which is in the order of $nm^2 + mn^2$. The update of the dual variables can be decomposed into sums of components already calculated in the update of the primal variables. In summary, the total complexity of each iteration is in the order of $nm^2 + n^2m + nm$. The constants can be improved by storing common components, e.g. $S^{(k)}X$. This would be quite efficient even for high dimensional datasets.
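The factorizations described above can be checked numerically. The sketch below (illustrative sizes; it assumes $C_n = \frac{1}{n}XX^\top$, consistent with the decompositions in the answer) verifies that each factored product matches the explicit one while avoiding any $O(m^3)$ matrix chain:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20                       # nodes, observations (m > n regime)
X = rng.standard_normal((m, n))     # data matrix
S = rng.standard_normal((m, m))     # current iterate S^(k)

C = (X @ X.T) / n                   # explicit C_n, formed only for checking

# C_n S == (1/n) X (X^T S): cost O(n m^2) instead of O(m^3)
CS_fact = (X @ (X.T @ S)) / n
assert np.allclose(C @ S, CS_fact)

# S C_n^2 == (1/n^2) (S X) (X^T X) X^T
SC2_fact = ((S @ X) @ (X.T @ X) @ X.T) / n**2
assert np.allclose(S @ C @ C, SC2_fact)

# C_n S C_n == (1/n^2) X (X^T S X) X^T
CSC_fact = (X @ (X.T @ S @ X) @ X.T) / n**2
assert np.allclose(C @ S @ C, CSC_fact)
```

Grouping the parentheses this way is exactly what yields the stated $O(nm^2 + n^2m)$ per-iteration cost.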
[G15] Girault, B. Stationary graph signals using an isometric graph translation. In Proceedings of the
23rd European Signal Processing Conference (EUSIPCO 2015), pages 1516–1520. IEEE, 2015.
[SM$^+$17] Segarra, S., et al. Network topology inference from spectral templates. IEEE Transactions on Signal and Information Processing over Networks, 3(3):467–483, 2017.
[ZHW22] Zhang, C., et al. Product graph learning from multi-attribute graph signals with inter-layer coupling. arXiv preprint arXiv:2211.00909, 2022.
[ZHW23] Zhang, C., et al. Detecting low pass graph signals via spectral pattern: Sampling complexity and applications. arXiv preprint arXiv:2306.01553, 2023.
[PV17] Perraudin, N and Vandergheynst, P. Stationary signal processing on graphs. IEEE Transactions on Signal Processing, 65(13):3462–3477, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a detailed response.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time.
---
Rebuttal 2:
Comment: Hi,
Could you please acknowledge (at least) the authors' rebuttal, and engage in the discussion if you still have concerns?
Thanks in advance for helping NeurIPS reviewing process,
Best,
AC | Summary: In this paper, the authors considered the problem of learning graphs from stationary signals. In order to overcome the infeasibility issue of an existing method called rSpecT, the authors proposed a novel formulation by introducing the log barrier term to learn graphs without isolated nodes. The feasibility can be guaranteed by the new formulation. Furthermore, the recovery guarantees of the new formulation was established, and an efficient algorithm based on linearized ADMM was proposed to solve the formulation.
Strengths: This paper is very interesting, with novel ideas to solve an issue of a well-known method in graph signal processing. The authors proposed a novel formulation to overcome the infeasibility issue of rSpecT and established the theoretical recovery guarantees. Furthermore, an efficient algorithm based on linearized ADMM was designed to solve the formulation. Numerical results demonstrated the stability and superiority of the new method.
Weaknesses: 1. The descriptions of the experiments conducted on real networks require further improvement. Specifically, please provide more details regarding the number of observations, the nature of these observations, and what the edges represent within the networks.
2. As observed in Figures 2 and 4, the F-measure does not increase to 1, even when given a sufficiently large number of observations. The authors should provide commentary on this phenomenon to address any potential concerns.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please answer questions from Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your positive comments and advice. We hope the following clarification can answer your questions.
**Q1:** The descriptions of the experiments conducted on real networks require further improvement. Specifically, please provide more details regarding the number of observations, the nature of these observations, and what the edges represent within the networks.
**Answer:** The experiments on real networks are conducted on graphs from the Protein database [BOS$^+$05] and the Reddit database [YV15]. The Protein database is a bioinformatics dataset, where nodes are secondary structure elements (SSEs) and there is an edge between two nodes if they are neighbors in the amino-acid sequence or in 3D space. The Reddit database is a social network dataset. In this dataset, each graph corresponds to an online discussion thread, where nodes correspond to users, and there is an edge between two nodes if at least one of them responded to the other's comments. The observed graph signals are then generated synthetically according to the generative model (1), with the graph filter chosen as $H(S) = \exp\left(\frac{1}{2}S\right)$. This ensures that the observed graph signals are stationary. For the experiments with real signals, we refer the reviewer to our global response, where we discuss the experiments on the handwritten digits USPS dataset, which is verified to be nearly stationary in [PV17].
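As a hedged illustration of this generative pipeline (a minimal sketch with an illustrative random graph, not the paper's actual code), the snippet below filters white noise through $H(S) = \exp\left(\frac{1}{2}S\right)$; the resulting model covariance $\exp(S)$ commutes with $S$, which is the stationarity property being exploited:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 1000                          # nodes, observed signals

# random symmetric 0/1 adjacency matrix without self-loops
A = (np.triu(rng.random((m, m)), k=1) > 0.5).astype(float)
S = A + A.T

# H(S) = exp(S/2) via eigendecomposition (S is symmetric)
lam, U = np.linalg.eigh(S)
H = U @ np.diag(np.exp(lam / 2.0)) @ U.T

W = rng.standard_normal((m, n))         # white excitation
Y = H @ W                               # observed stationary graph signals

# the model covariance H H^T = exp(S) commutes with S: the hallmark of
# stationarity; the sample covariance Y Y^T / n approximately does too
assert np.allclose(H @ H @ S, S @ H @ H)
```

Any analytic function of $S$ used as the filter would give the same commutation property; the exponential is simply one convenient choice.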
**Q2:** As observed in Figures 2 and 4, the F-measure does not increase to 1, even when given a sufficiently large number of observations. The authors should provide commentary on this phenomenon to address any potential concerns.
**Answer:** Figure 2 presents the results of rLogSpecT on BA graphs. As rLogSpecT is an approximation of LogSpecT, its performance approaches that of LogSpecT as the sample size grows. However, on BA graphs even LogSpecT cannot achieve an F-measure of 1. Hence, the approximation model rLogSpecT cannot achieve an F-measure of 1, no matter how large the sample size is. The phenomenon observed in Figure 4 is similar: the ideal model cannot achieve perfect recovery on any of the real networks, let alone the approximation model rLogSpecT. It remains an open and intriguing problem in GSP how broad the class of networks on which LogSpecT can achieve perfect recovery is. This is also one of our future directions.
[BOS$^+$05] Borgwardt, K., et al. Protein function prediction via graph kernels. Bioinformatics, 21:47–56, 2005.
[YV15] Yanardag, P and Vishwanathan, S. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 1365–1374, 2015.
[PV17] Perraudin, N and Vandergheynst, P. Stationary signal processing on graphs. IEEE Transactions on Signal Processing, 65(13):3462–3477, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for your insightful and thorough response. Your detailed explanations have not only effectively addressed the concerns that were raised, but have also provided valuable clarity on the key points of the study.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time. | Summary: This paper addresses the problem of learning a graph from stationary graph signals. To this end, the authors introduce two new optimisation problems called LogSpecT and rLogSpecT. They prove some recovery guarantees on the optimal value of (r)LogSpecT and give a convergence rate when the signals are sub-Gaussian. They also provide an efficient solver based on L-ADMM that admits closed solutions at each substep. Finally, some empirical results highlight the performance of their methods.
Strengths: 1\ The paper is very well written and easy to follow
2\ The problem is well-motivated and the solution seems theoretically sound
3\ The paper is solid and almost all aspects encountered in the graph learning problem are covered (optimization, recovery guarantees, convergence,...)
Weaknesses: 1\ There are no error bars. They should be added to really show the results of each method and to be able to compare them.
2\ A table with all the measurements reported could be a good thing. For example, in all the experiments, only the F measure is given, but precision and recall are also important for assessing the quality of the learned graphs.
3\ An illustration comparing the real graph with the learned graph (for example by plotting the adjacency matrix) could be interesting.
4\ The complexity and the time until convergence are not given for the methods. An additional experiment on this aspect should be added.
=======
Remark
the step size tau is never defined
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1\ What is the intuition behind the need for a delta greater than a certain value in Theorem 4.1?
2\ In the GSP literature, there are many more recent methods than the one of Kalofolias for learning the graph. Why choose to compare your methods only to the one of Kalofolias?
3\ Furthermore, why exclude this method from the comparison in the results of the section 6.3?
4\ Experiments are made with a maximum of 60 nodes. What is the maximum number of nodes that can be learned in a reasonable time? if possible, can we have some examples?
=====
More open questions:
5\ Is it possible to link the optimization problem to a maximum likelihood estimate?
6\ It seems that the filter h only has an impact on the covariance estimate. Is it true? If the answer is "no", would it be a good idea (if possible) to include information about the h filter in the optimization problem?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and advice. We hope that the clarification may help to clear your concerns.
**Q1:** - There are no error bars. They should be added to really show the results of each method and to be able to compare them. - A table with all the measurements reported could be a good thing. For example, in all the experiments, only the F measure is given, but precision and recall are also important for assessing the quality of the learned graphs. - An illustration comparing the real graph with the learned graph (for example by plotting the adjacency matrix) could be interesting.
**Answer:** Thank you for your helpful advice on presenting the results. We showed error bars in the real data experiments for fair comparison. However, we omitted the error bars in the synthetic data experiments because either both models achieve nearly perfect recovery or the difference between them is already obvious. We will present all the measurements in the appendix, together with an illustration comparing the ground-truth graph with the learned graph.
**Q2:** - The complexity and the time until convergence are not given for the methods. An additional experiment on this aspect should be added. - The step size tau is never defined.
**Answer:** For the running time and complexity of our proposed L-ADMM method, we refer the reviewer to our global response and our response to Reviewer U8JP, where we discussed the complexity in detail and added an experiment to present the residuals until convergence. Also, thank you for your kind reminder. $\tau$ in the paper is defined as the parameter in the linearization process and can be viewed as a step size in updating $S$. We will add these clarifications to our paper.
**Q3:** What is the intuition behind the need for a delta greater than a certain value in Theorem 4.1?
**Answer:** The intuition behind requiring $\delta$ to be larger than a certain value is that $\delta$ reflects our view on the error of estimating the covariance matrix. We can tolerate overestimating this error, while underestimating it is unacceptable. From a theoretical viewpoint, $\delta$ should be set to ensure that the feasible set of rLogSpecT at least contains the optimal solutions to LogSpecT; this condition is required in our proof. Although it appears stringent, our analysis in Corollary 4.8 shows that if $\delta$ is set to decrease at a proper rate, the condition holds with high probability.
**Q4:** In the GSP literature, there are many more recent methods than the one of Kalofolias for learning the graph. Why choose to compare your methods only to the one of Kalofolias?
**Answer:** We compared only with Kalofolias' method in the experiments and omitted other models because Kalofolias' method is the only graph learning model with a log barrier. Since our model is another graph learning model with a log barrier, it is natural to compare against it. We did not compare with more recent models because they are based on probabilistic graphical models instead of the stationarity assumption. However, for a fair comparison, we present the experimental results of ALPE, NGL-SCAD, and NGL-MCP on the Protein and Reddit datasets in the global response.
**Q5:** Furthermore, why exclude this method from the comparison in the results of the section 6.3?
**Answer:** We excluded Kalofolias' method from section 6.3 because SpecT is the only efficient model for graph learning from stationary signals, whereas Kalofolias' method is designed for smooth graph signals rather than stationary ones. However, for a fair comparison, we add experiments with Kalofolias' method on 50-node ER graphs with different sparsity levels in the global response. The results show that this model cannot learn graphs from stationary signals.
**Q6:** Experiments are made with a maximum of 60 nodes. What is the maximum number of nodes that can be learned in a reasonable time? if possible, can we have some examples?
**Answer:** The experimental results on 100-node graphs are shown in the global response; learning a graph takes around 20 seconds. We note that the capability of our proposed models and algorithms depends not only on the hardware, but also on how the algorithms are implemented. As L-ADMM is the first algorithm designed for our proposed LogSpecT, we will investigate more efficient algorithms based on it in the future.
**Q7:** Is it possible to link the optimization problem to a maximum likelihood estimate? It seems that the filter h only has an impact on the covariance estimate. Is it true? If the answer is "no", would it be a good idea (if possible) to include information about the $h$ filter in the optimization problem?
**Answer:** The two additional open questions are quite insightful and interesting. The first is a well-known open problem in GSP. Understanding why LogSpecT performs well is essential for designing more efficient models for graph learning from stationary signals and for pushing beyond the stationarity assumption; it is indeed one of our future directions. For the second question, the impact of $h$ is twofold. On the one hand, it affects the estimated covariance matrix $C_n$, which may consequently affect the efficiency of our proposed algorithm. On the other hand, prior knowledge of $h$ may help enlarge the class of graphs that can be perfectly recovered by the spectral template. A rigorous understanding of the influence of $h$ deserves further investigation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clear and thorough response.
---
Reply to Comment 1.1.1:
Comment: You are welcome. Hope our response can clear up all your concerns. | Summary: In this paper, the authors improve upon an existing method, rSpecT, to infer graphs from stationary signals. The latter frames the recovery of the (weighted) adjacency matrix as a convex optimization problem with constraints, namely the commutativity of the covariance matrix and the adjacency matrix, as well as requiring the first row sum to equal 1 in order to avoid the trivial zero solution. Here, the authors show that this formulation is infeasible in a number of cases. Consequently, the authors propose to replace the row-sum constraint with a log barrier. They show that the resulting problem is always feasible (however, the solution ceases to be unique). They then show through a set of experiments that their solution outperforms state-of-the-art approaches in recovering the underlying graph.
Strengths: First, a disclaimer: although familiar with GSP, I am not an expert in the graph recovery literature. My comments should thus be taken with caution.
Overall, the paper is clearly written and motivated.
The discussion on the infeasibility of the current state of the art method, rSpecT, was interesting. The solution that the authors bring is therefore well motivated.
Weaknesses: As highlighted before, I am not an expert in the field and I am not able to put this paper in context of the general literature on the topic.
A few questions however:
- the authors claim that their method is more computationally efficient with the use of ADMM. Do they have plots showing the running time? ADMM, although fast at each iteration, can be slow to converge.
- the experiments could have been expanded: in particular, the authors only present ER and BA graphs, but they could have varied the parameters to try and assess whether their method is robust across all sorts of topologies.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The method could have been more extensively evaluated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and advice. We hope the clarification can clear your concerns.
**Q1:** The authors claim that their method is more computationally efficient with the use of ADMM. Do they have plots showing the running time? ADMM, although fast at each iteration, can be slow to converge.
**Answer:** For the running efficiency of our proposed L-ADMM, we refer the reviewer to our global response and our response to Reviewer U8JP, which discusses the per-iteration complexity. In the experiments, we show that L-ADMM reaches an accuracy of $10^{-6}$ within 3500 iterations for a graph with 100 nodes. Also, we observe local linear convergence for L-ADMM on our model, which indicates that the algorithm runs efficiently. We will add a detailed discussion to our paper.
**Q2:** The experiments could have been expanded: in particular, the authors only present ER and BA graphs, but they could have varied the parameters to try and assess whether their method is robust across all sorts of topologies.
**Answer:** To show the robustness of LogSpecT, we investigate its performance as the sparsity level of the underlying graphs changes. Specifically, we vary the edge probability used to generate ER graphs from 0.1 to 0.5. The number of nodes is set to 50 and the graph filter is chosen as QUA: $H(S) = S^2 + S + I$. The average results of 10 independent replicates are shown in Figure 1 of the global response. This figure demonstrates the robustness of LogSpecT.
---
Rebuttal 2:
Comment: Hi,
Could you please acknowledge (at least) the authors' rebuttal, and engage in the discussion if you still have concerns?
Thanks in advance for helping NeurIPS reviewing process,
Best,
AC | Rebuttal 1:
Rebuttal: We thank all the reviewers for your insightful comments and helpful advice. In the global response, we explain the additional experiments and results shown in the attached pdf. We hope our clarifications can help to clear up reviewers' concerns.
**1. Stability of LogSpecT and comparison with Kalofolias' method.**
We vary the probability that an edge exists when generating 50-node ER graphs and use the QUA graph filter to generate stationary graph signals. LogSpecT and Kalofolias' method are run on these signals. We present the average F-measure over 10 independent replicates in Figure 1. The parameter of Kalofolias' method is chosen from $\{0.01, 0.1, 1, 10, 100, 1000\}$ as the value that yields the highest F-measure. The results show that LogSpecT performs well and stably regardless of the sparsity level, whereas Kalofolias' method fails on stationary graph signals.
**2. Comparison with more recent models on real networks and synthetic signals.**
We test ALPE [YCP21], NGL-MCP, and NGL-SCAD [YCP20] on the Protein and Reddit datasets with synthetic stationary graph signals generated from the low-pass EXP filter $H(S) = \exp(S)$. The true covariance matrix is assumed to be available. The mean and standard deviation of the F-measure for these models are collected in Table 1. The results indicate that the tested algorithms, which are based on probabilistic graphical models, perform poorly on these two datasets with stationary graph signals. A possible reason is that the stationarity assumption does not fit the probabilistic graphical model.
**3. Efficiency of the L-ADMM algorithm.**
Since no customized algorithm exists for rLogSpecT, we compare our proposed L-ADMM with CVXPY [SS16], using the MOSEK solver. We conduct the experiments on 100-node BA graphs with 10000 stationary graph signals generated by the low-pass EXP filter. $\delta$ is set to $10\sqrt{\frac{\log n}{n}}$. For the L-ADMM algorithm, the accuracy is set to $10^{-6}$ and the initialization is zero. L-ADMM takes around $20$ seconds, while the solver takes over $110$ seconds. Note that the per-iteration complexity of L-ADMM can be further improved with the matrix-multiplication treatment mentioned in our response to Reviewer U8JP. We plot the primal and dual residuals against the iteration index in Figure 2. The results corroborate the efficiency of our proposed algorithm. Furthermore, local linear convergence can be observed from the residuals, which leads to fast convergence to high accuracy; proving this property theoretically is one of our future directions.
**4. Experiments on real datasets: the handwritten digits USPS dataset.**
As shown in [PV17], the USPS dataset is nearly stationary with respect to the nearest-neighbour graph. This dataset collects images of handwritten digits. In the experiment, each pixel is viewed as a node and the value at that pixel is viewed as the graph signal. We follow the approach in [PV17] to construct the 20-nearest-neighbour graph, on which the data is verified to be nearly stationary. More specifically, we pick the 1296 images of the digit 1. The weights between two nodes are determined by a Gaussian radial basis function. The stationarity measure $s := \Vert\operatorname{diag}(U^\top C_n U)\Vert_2 / \Vert U^\top C_n U\Vert_F$ equals 0.78 on this dataset. Here $U$ is the eigenvector matrix of the constructed graph and $C_n$ is the covariance matrix of the observed graph signals. We treat the 20-nearest-neighbour graph as the ground truth and compare the graph learned by rLogSpecT against it. Figure 3 shows, for each graph, the subgraph consisting of the 10 nodes with the largest numbers of neighbours. The F-measure between these two subgraphs is 0.96. This result corroborates the effectiveness of our proposed rLogSpecT model.
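As an illustration, the stationarity measure $s$ defined above can be computed as in the following minimal sketch. It assumes the matrix $M = U^\top C_n U$ has already been formed from the graph's eigenvector matrix and the sample covariance; that precomputation is outside the sketch.

```python
import math

def stationarity_measure(M):
    """s = ||diag(M)||_2 / ||M||_F for M = U^T C_n U (list of lists).

    s equals 1 exactly when M is diagonal, i.e. when the covariance is
    diagonalized by the graph's eigenvectors (perfect stationarity);
    smaller values indicate a departure from stationarity.
    """
    diag_norm = math.sqrt(sum(M[i][i] ** 2 for i in range(len(M))))
    frob_norm = math.sqrt(sum(x ** 2 for row in M for x in row))
    return diag_norm / frob_norm
```

On this reading, the reported value of 0.78 means the off-diagonal mass of $U^\top C_n U$ is small but not negligible, i.e. the USPS signals are nearly, not exactly, stationary.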
[PV17] Perraudin, N and Vandergheynst, P. Stationary signal processing on graphs. IEEE Transactions on Signal Processing, 65(13):3462–3477, 2017.
[YCP20] Ying, J., et al. Nonconvex sparse graph learning under Laplacian constrained graphical model. Advances in Neural Information Processing Systems, 33:7101–7113, 2020.
[YCP21] Ying, J., et al. Minimax estimation of Laplacian constrained precision matrices. In International Conference on Artificial Intelligence and Statistics, pages 3736–3744. PMLR, 2021.
[SS16] Diamond, S. and Boyd, S. CVXPY: A Python-embedded modeling language for convex optimization. The Journal of Machine Learning Research, 17(1):2909–2913, 2016.
Pdf: /pdf/98c40806372e3d72547180e3f98dd550d10b2a0e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This is an emergency review; regrettably, the paper is outside of my expertise.
Review:
This is a theoretical work concerned with graph learning from signals. The work identifies an issue with the feasibility of a previous well-known model in the field, SpecT, then introduces a novel model, LogSpecT, for which this issue does not occur.
While I regrettably could not comprehend the main content, the topic, general presentation, and references make me believe that this paper is topically not suited for NeurIPS. I concede that the CfP of NeurIPS is somewhat vague and broad, but based on the common topics in the field, I believe this is very much a niche topic for the conference, one to which the vast majority of attendees would not be able to connect.
Superficial observations
- There isn't any mention of the word "neural"
- Of the 45 references, the vast majority are in signal processing. A single one is at NeurIPS
- The intro is exceedingly dry to read, with no motivating example, nor any illustration. Especially, what are examples of signals considered for graph learning? What would the stationarity assumption mean in these examples?
I understand that reading a superficial review like this may be frustrating to the authors, but I can do no better. I would strongly suggest the authors consider a specialist forum, where this type of content might be better understood and appreciated.
Strengths: see above
Weaknesses: see above
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments. Here are some illustrations based on your questions/suggestions.
**Q1:** There isn't any mention of the word "neural"
**Answer:** Graph learning from signals, as studied in this paper, is highly relevant to the machine learning and neuroscience communities.
For example, it has broad applications in bioinformatics [RX$^+$21].
More specifically, we consider the task of graph topology inference from observed data, which has been applied to recognize functional connectivity related to neural activity patterns. References in this field include [DE06, EO09, GX$^+$21], to name just a few. Due to page limits, we focus only on the modeling issue and its theoretical guarantees in this paper.
**Q2:** From 45 references, the vast majority is in signal processing. A single one is at NeurIPS
**Answer:** We work on a type of data that was recently proposed in the signal processing community and has not yet gained much attention in the machine learning community. However, the task we address has long been a focus of machine learning. For example, [K16, KP18] studied how to learn a large graph from smooth graph signals, and [YCP20, YCP21, VYP22] investigated this task from the probabilistic graphical model perspective.
**Q3:** The intro is exceedingly dry to read, with no motivating example, nor any illustration. Especially, what are examples of signals considered for graph learning? What would the stationarity assumption mean in these examples?
**Answer:** Thank you for your comments on the introduction. In practice, the meaning of signals and graphs depends on the scenario and the model. The signals can be understood as nodal features in the graph learning task. For instance, in the financial market, companies can be modeled as nodes and the signals are the stock returns of each company; the graph learning task is then to investigate the relationships between the companies' stock returns. The relevance of the stationarity assumption has been demonstrated on several real datasets in the existing literature. [G15] showed that part of a weather dataset can be explained by stationary graph signals. [PV17] showed that the well-known USPS dataset and the CMU PIE set of cropped faces are close to stationary. [SM$^+$17] compared the performance of learning protein structures with and without the stationarity assumption and validated the assumption. [ZHW22] applied the stationarity assumption to US Senate roll calls and achieved good performance. In a nutshell, the stationarity assumption is a theoretical property that stems from stationary time series and has been verified on several real datasets; it is best understood from a theoretical viewpoint rather than from intuition.
[RX$^+$21] Rui, L., et al. Graph signal processing, graph neural network and graph learning on biological data: a systematic review. IEEE Reviews in Biomedical Engineering, 2021.
[DE06] Bassett, D. and Bullmore, E. Small-world brain networks. The Neuroscientist, 12(6):512–523, 2006.
[EO09] Bullmore, E. and Sporns, O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186–198, 2009.
[GX$^+$21] Gao, S., et al. Smooth graph learning for functional connectivity estimation. NeuroImage, 239:118289, 2021.
[K16] Kalofolias, V. How to learn a graph from smooth signals. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), pages 920–929. PMLR, 2016.
[KP18] Kalofolias, V. and Perraudin, N. Large scale graph learning from smooth signals. In International Conference on Learning Representations, 2018.
[YCP20] Ying, J., et al. Nonconvex sparse graph learning under Laplacian constrained graphical model. Advances in Neural Information Processing Systems, 33:7101–7113, 2020.
[YCP21] Ying, J., et al. Minimax estimation of Laplacian constrained precision matrices. In International Conference on Artificial Intelligence and Statistics, pages 3736–3744. PMLR, 2021.
[VYP22] Vinícius, J., et al. Learning bipartite graphs: Heavy tails and multiple components. Advances in Neural Information Processing Systems, 35:14044–14057, 2022.
[G15] Girault, B. Stationary graph signals using an isometric graph translation. In Proceedings of the 23rd European Signal Processing Conference (EUSIPCO 2015), pages 1516–1520. IEEE, 2015.
[PV17] Perraudin, N and Vandergheynst, P. Stationary signal processing on graphs. IEEE Transactions on Signal Processing, 65(13):3462–3477, 2017.
[SM$^+$17] Segarra, S., et al. Network topology inference from spectral templates. IEEE Transactions on Signal and Information Processing over Networks, 3(3):467–483, 2017.
[ZHW22] Zhang, C., et al. Product graph learning from multi-attribute graph signals with inter-layer coupling. arXiv preprint arXiv:2211.00909, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comprehensive response.
---
Reply to Comment 1.1.1:
Comment: You are welcome. Hope our response can clear up all your concerns. | null | null | null | null | null | null |
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery | Accept (poster) | Summary: The paper presents an approach to optimize hard text prompts for generative models through efficient gradient-based optimization. The method automatically generates hard text-based prompts for both text-to-image and text-to-text applications, allowing users to easily generate, discover, and mix and match image concepts without prior knowledge on how to prompt the model. The paper highlights the advantages of hard prompts over soft prompts and demonstrates the effectiveness of the approach in tuning language models for classification.
Strengths: 1. The paper's approach is efficient and can be optimized on smaller language models and then transferred to other, much larger models.
2. The method performs consistently across all four datasets and outperforms other gradient-based optimization baselines. It can achieve similar performance to CLIP Interrogator. However, the proposed method only uses the CLIP model for prompt discovery and 8 tokens in total, demonstrating its simultaneous simplicity and strength.
3. The proposed approach can also be easily adapted to style transfer. Given several examples that share the same style, the method can extract their shared style characteristics into a single hard prompt and use this prompt to apply the style to new objects or scenes (Page 5).
Weaknesses: 1. While the method can generate relevant tokens and phrases, the prompts are not always coherent English. This suggests that while the optimization process is finding relevant words to the task, it lacks the ability to create full sentences.
2. Longer prompts do not necessarily produce better results when generating with Stable Diffusion, even though they strictly reduce the loss on the CLIP image encoder. Long prompts thus overfit and are less transferable, so the method may require hyperparameter tuning, which can be heavy and tedious.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: You mentioned that longer prompts do not necessarily produce better results and can lead to overfitting. Could you elaborate on the mechanisms behind this phenomenon and suggest potential strategies to mitigate this issue while maintaining the quality of the generated prompts?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: As the author has discussed, the generated prompts may still contain several un-interpretable tokens. Also, it may lack the ability to create full sentences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below, we address specific points you raised:
> You mentioned that longer prompts do not necessarily produce better results and can lead to overfitting. Could you elaborate on the mechanisms behind this phenomenon and suggest potential strategies to mitigate this issue while maintaining the quality of the generated prompts?
Upon extending the prompt length, we observed an increase in gibberish and Unicode tokens within the optimized prompts, which led to diminished transferability compared with shorter prompts. A transfer is happening here: we optimize against CLIP but test on text-to-image diffusion models, so if we over-optimize a prompt against CLIP, it fails to transfer to the diffusion model.
We believe that introducing a constraint, such as enforcing optimized tokens to adhere to the English language, has the potential to alleviate the overfitting issue. To validate this notion, we conducted a preliminary assessment. When utilizing the keyword bank as a constraint during optimization, we noted a notable improvement in mitigating the overfitting concern, as illustrated in the table below. We will include more experiments with different constraints in the future version.
| Prompt length | 4 tokens | 8 | 16 | 32 | 64 |
|:------------------:|:--------:|:-----:|:-----:|:-----:|:-----:|
| PEZ | 0.686 | 0.697 | 0.699 | 0.689 | 0.670 |
| PEZ + Keyword Bank | 0.684 | 0.699 | 0.712 | 0.716 | 0.708 |
Thank you for your feedback on this submission. Hope our additional experiment resolves your question. Please let us know if you have other questions and comments that we can address.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Thank you for the rebuttal. My questions were addressed, so I keep my score. Good luck! | Summary: The paper presents an easy-to-use approach to automatically obtain hard prompts for images. The work introduces PEZ, a gradient-based approach to obtain hard prompts for images. The experiments comparing against a popular baseline show improved CLIP score. Finally, the qualitative results show that the method can distill prompts into shorter sequences and concatenate prompts.
Strengths: The paper is easy to follow. The algorithm is well explained and the results are quite promising. The results for prompt inversion appear to be fair and outperform the baseline. The analysis is quite clear and thorough. Overall, the work presents a straightforward method to get prompt tokens for images.
Weaknesses: The contribution of the work is limited: the work is mostly an application of existing work that discretizes soft prompts to tokens [a,b]. There is no significant difference between the proposed work and existing related work. The authors should clarify, in-depth, the main differences between their work and other work. Further, like AutoPrompt, the work suffers from gibberish prompts according to the authors (see Figure 2, “uuuu romantic canvas impressionist …”). This limitation makes the hard prompts harder to interpret. The explanation regarding the emoji appearing in the hard prompt (Figure 2 and line 174) is not entirely convincing. It would be really helpful to also know how often the model produces gibberish.
Limited Evaluation: the experiments do not overwhelmingly support that PEZ is better than CLIP Interrogator. On two out of the four datasets under consideration, CLIP Interrogator achieves a higher CLIPScore. It is also unclear why $AutoPrompt_\text{SGD}$ is not included in Table 1. Wouldn't this be a good baseline?
Negative Qualitative Examples: The paper could benefit from showing negative results where the method doesn’t perform as well.
Nit:
It would be helpful to look at the results along with Tables/Figures on the same page. At the moment, style transfer, prompt concatenation, and prompt distillation do not appear on the same page.
[a] Gradient-Based Constrained Sampling from Language Models. EMNLP 2022.
[b] Toward Human Readable Prompt Tuning: Kubrick’s The Shining is a good movie, and a good prompt too? ArXiv 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please address the questions in the weakness section.
Minor: since you are using cosine distance as a loss function, does it run into instability issues? Is there a particular reason you are using cosine distance as a loss function?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The method requires the model weights to produce hard prompts. Given that recent methods use APIs, it would be useful to highlight this as a potential limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address each of your points below:
> 1. The contribution of the work is limited.
The changes between **PEZ** and FluentPrompt may appear subtle, but they are important for performance; the comparison can be found in the appendix. **PEZ's** main difference is in how the rounding is done: it can be thought of as a combination of soft-prompt optimization and $AutoPrompt_\text{SGD}$. $AutoPrompt_\text{SGD}$ updates the set of tokens at every step, using these same tokens to obtain the gradient information for the update. Soft prompts optimize a single continuous set of embeddings $P$, which is used for every forward and backward pass, with the update step $P = P - \eta\nabla_{\mathbf{P}}\mathcal{L}$. FluentPrompt adds a decaying noise term during the optimization process.
In contrast, **PEZ** optimizes a soft prompt, but for each gradient update the information comes from the nearest-neighbor rounded prompt $P' = \text{Proj}(P)$ instead of the soft prompt $P$. The update is then applied to $P$, not $P'$: $P = P - \eta\nabla_{\mathbf{P'}}\mathcal{L}$, where $P' = \text{Proj}(P)$. Thus, the process stores a continuous set of embeddings used for the forward pass. This means that although PEZ may suffer from stochastic rounding issues in the earlier steps, where the updated embeddings project back to the same set of tokens, it will eventually find a new set of tokens, since $P$ in effect accumulates the gradient information at every step. A problem with FluentPrompt and $AutoPrompt_\text{SGD}$ is that they are sensitive to learning rates, whereas PEZ is quite robust under different learning rates, as shown in Figure 5a. Conceptually, this addresses the problem that certain models require non-obvious learning rates that can vary greatly from model to model.
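To make the distinction concrete, here is a minimal toy sketch of a PEZ-style update. It is not the paper's implementation: real PEZ uses CLIP token embeddings and a CLIP-based cosine loss, whereas the 2-D vectors, three-word vocabulary, and squared-distance loss below are purely illustrative.

```python
def nearest_vocab(e, vocab):
    # nearest-neighbour rounding Proj(.): snap a soft embedding to the
    # closest vocabulary embedding (Euclidean distance in this toy)
    return min(vocab, key=lambda v: sum((a - b) ** 2 for a, b in zip(e, v)))

def pez_step(P, vocab, grad_fn, lr):
    # One PEZ-style update:
    #   1) project the soft prompt P to hard tokens P' = Proj(P)
    #   2) evaluate the gradient of the loss at the *projected* prompt P'
    #   3) update the *soft* prompt:  P <- P - lr * grad L(P')
    P_hard = [nearest_vocab(e, vocab) for e in P]
    grads = grad_fn(P_hard)
    return [[a - lr * g for a, g in zip(e, ge)] for e, ge in zip(P, grads)]

# Toy setup (illustrative only): drive one token embedding toward a target.
target = [1.0, 0.0]
vocab = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]

def grad_fn(P_hard):
    # gradient of sum ||e - target||^2 with respect to each embedding e
    return [[2 * (a - t) for a, t in zip(e, target)] for e in P_hard]

P = [[0.4, 0.4]]  # soft prompt: a single token embedding
for _ in range(5):
    P = pez_step(P, vocab, grad_fn, lr=0.1)
```

The key point the sketch captures is that the gradient is taken at the projected (hard) prompt while the update accumulates in the continuous prompt, so repeated projections to the same token do not stall the optimization.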
Additionally, we have now conducted a series of experiments comparing the efficiency between PEZ and AutoPrompt$_\text{SGD}$ using varying numbers of steps, as outlined in the table below. Our findings indicate that PEZ achieves faster convergence compared to AutoPrompt.
| Method | 100 Steps | 200 | 400 | 600 | 800 | 1000 |
|:--------------:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| AutoPrompt | 0.642 | 0.654 | 0.656 | 0.672 | 0.677 | 0.684 |
| PEZ | 0.668 | 0.674 | 0.675 | 0.684 | 0.690 | 0.695 |
We believe that both properties of PEZ are very important to the prompt optimization applications mentioned in the paper.
>Limited Evaluation: the experiments do not overwhelmingly support that PEZ is better than CLIP interrogator.
We have updated our local draft to note that the CLIP Interrogator is a purpose-built tool for generating fitting prompts for digital art images from a word bank of possible artistic styles and expressions. That the proposed generic algorithm is able to match the performance of this tool is a strong statement of its general capability. We will include these updates in our camera-ready version.
The issues with using the CLIP Interrogator in any context other than generating captions for art are highlighted by the difference we see on the Celeb-A dataset, where PEZ outperforms the CLIP Interrogator because these images are "OOD" for both the BLIP model and the small word bank. Additionally, due to the compressibility of PEZ prompts, we can combine different prompts together, as highlighted in Sections 4.3 and 4.4; under the current functionality of the CLIP Interrogator, this is not possible. In short, PEZ outperforms in recreating images that fall outside the CLIP Interrogator's prior and offers additional functionality, while matching this purpose-built tool on its exact task.
A full table of these comparisons can be found in Table 2 in the Supplementary Material.
> Negative Qualitative Examples.
We agree that it is helpful to show some failure examples, and we have added failure generations to the rebuttal PDF. As explained in the experiments section, the results on the Celeb-A dataset are not as good as on other datasets. This may be because generating human portraits is difficult, especially the fine details of the face. As shown in Figure 1 of the rebuttal PDF, the prompts from PEZ can still produce human faces that resemble the target images, but not exactly.
Meanwhile, we provide two other typical failure cases in Figure 1: 1) instances where the generated images exhibit distinct styles from the target image (the second-to-last example); and 2) cases where the prompt captures abstract concepts rather than specific objects (the final example). It is important to note, however, that both scenarios can be mitigated by optimizing with various initializations.
We will include more failure examples and a comprehensive discussion and investigation of these cases in our future version.
> Since you are using cosine distance as a loss function, does it run into instability issues?
CLIP was trained with cosine similarity, so we did not encounter instability issues, as we use the same objective function the model was trained with.
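As a rough sketch of why this objective is stable, assuming (as is standard for CLIP-style training) that the loss is the negative cosine similarity between normalized text and image features, so it is bounded in [-1, 1]:

```python
import numpy as np

def cosine_similarity_loss(text_feat, image_feat, eps=1e-8):
    """Negative cosine similarity between feature vectors; normalizing both
    sides bounds the loss in [-1, 1], which avoids magnitude blow-ups."""
    t = text_feat / (np.linalg.norm(text_feat) + eps)
    v = image_feat / (np.linalg.norm(image_feat) + eps)
    return -float(t @ v)
```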
> The method requires the model weights to produce hard prompts. Given that recent methods use APIs, it would be useful to highlight this as a potential limitation.
This is a great point, and we have extended our discussion of this limitation. However, many API models, for example Midjourney, are based on publicly available CLIP checkpoints. A great example of this can be found in Section 5, where we show that we can in fact bypass the content filters of the Midjourney API using optimized prompts created with open-source CLIP weights.
Thank you again for your thoughtful review. We made a significant effort to address your feedback including multiple paper edits, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Title: Reply to the Authors
Comment: Thank you for the detailed response. I really appreciate you uploading additional negative results in the PDF. I also liked your response to Reviewer LSBN where you have included prompts with more tokens (8 to 64). It would be awesome if you could include this ablation in the paper as well.
I do have a few concerns:
1. The additional table included in the rebuttal shows that PEZ converges faster than AutoPrompt. This is a positive result. I would include this detail in the paper. But, as pointed out in my review, Table 1 in the paper is incomplete without AutoPrompt. It might be the case that AutoPrompt performs better than PEZ on all the other datasets. If the work is about efficiency compared to AutoPrompt, then you might have to highlight the tradeoffs between PEZ and AutoPrompt in terms of efficiency vs. performance.
2. Minor: could you clarify what you mean by the following statement?
> It's important to note, however, that these two scenarios can be mitigated by optimizing with various initializations.
Overall, I think the authors have put in a lot of effort to improve their paper. However, I feel the paper could be positioned better to highlight the practical benefits in terms of efficiency compared to existing work. I think the work would benefit from another round of submission. I will gladly increase my scores if other reviewers feel strongly about the paper.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Regarding 1, we included the results for AutoPrompt in Table 2 of the Appendix. PEZ shows improvement over AutoPrompt. We have updated our working draft to incorporate these findings, as well as the efficiency results like the table below, into the main paper and will include them in the camera-ready version.
| prompt length | lr | method | 100 steps | 200 steps | 400 steps | 600 steps | 1000 steps |
|:-------------:|:---:|:----------:|:---------:|:-----:|:-----:|:-----:|:-----:|
| 4 | 0.1 | AutoPrompt | 0.461 | 0.515 | 0.536 | 0.551 | 0.578 |
| | | PEZ | 0.610 | 0.635 | 0.653 | 0.666 | 0.666 |
| | 1 | AutoPrompt | 0.614 | 0.639 | 0.663 | 0.667 | 0.670 |
| | | PEZ | 0.649 | 0.657 | 0.663 | 0.660 | 0.668 |
| | 10 | AutoPrompt | 0.643 | 0.649 | 0.663 | 0.665 | 0.662 |
| | | PEZ | 0.645 | 0.653 | 0.672 | 0.681 | 0.680 |
| 8 | 0.1 | AutoPrompt | 0.516 | 0.553 | 0.582 | 0.598 | 0.614 |
| | | PEZ | 0.632 | 0.656 | 0.671 | 0.677 | 0.678 |
| | 1 | AutoPrompt | 0.641 | 0.654 | 0.656 | 0.672 | 0.683 |
| | | PEZ | 0.667 | 0.673 | 0.674 | 0.684 | 0.690 |
| | 10 | AutoPrompt | 0.644 | 0.662 | 0.671 | 0.671 | 0.676 |
| | | PEZ | 0.670 | 0.677 | 0.679 | 0.679 | 0.682 |
| 16 | 0.1 | AutoPrompt | 0.551 | 0.595 | 0.628 | 0.631 | 0.639 |
| | | PEZ | 0.646 | 0.665 | 0.679 | 0.686 | 0.695 |
| | 1 | AutoPrompt | 0.663 | 0.672 | 0.680 | 0.680 | 0.687 |
| | | PEZ | 0.657 | 0.686 | 0.678 | 0.690 | 0.691 |
| | 10 | AutoPrompt | 0.665 | 0.665 | 0.672 | 0.682 | 0.679 |
| | | PEZ | 0.678 | 0.682 | 0.681 | 0.681 | 0.686 |
Regarding 2, for certain failure cases, we observed that users can achieve a more effective prompt by restarting the optimization with a different initialization for the hard prompt. This situation is similar to the "+5 seeds" scenario depicted in Table 1. We recognize that this phrase was ambiguous, so we have clarified it in our draft; thank you for pointing that out.
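The restart strategy could be sketched as follows; `optimize_with_restarts`, `init_fn`, and `optimize_fn` are hypothetical names standing in for the actual prompt-optimization pipeline, and the objective is assumed to be a score where higher is better.

```python
def optimize_with_restarts(objective, init_fn, optimize_fn, num_seeds=5):
    """Run the prompt optimization from several initializations and keep
    the candidate with the best (highest) objective score ("+5 seeds")."""
    best_prompt, best_score = None, float("-inf")
    for seed in range(num_seeds):
        prompt = optimize_fn(init_fn(seed))  # fresh initialization per seed
        score = objective(prompt)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score
```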
Thank you again for your detailed and constructive feedback. We have incorporated each of your suggestions into our draft, and we would appreciate it if you would consider increasing your score accordingly. Do you have any other questions we can address? | Summary: This paper works on hard prompt optimization with gradient methods, especially a discrete text prompt is discovered using CLIP, and optimized to prompt stable diffusion.
Strengths: 1. Without hand-crafted design of the hard prompt, the proposed solution directly discovers and optimizes prompts with gradient descent, leading to very efficient prompt engineering.
2. The learned prompt works well on Stable Diffusion and achieves effective style transfer, which further demonstrates the superiority of the proposed solution.
Weaknesses: The proposed solution works on discrete text prompt optimization without constraints on the meaningfulness of the text, making it hard to directly understand the effectiveness of prompt optimization.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The ablation study shows performance saturation at intermediate prompt lengths. More analysis is needed to explain this result.
2. Prompt distillation seems interesting to explain the prompt length issue, however, how the distillation ratio is correlated with the quality of the generated samples should also be explained further.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address each of your points below:
#### Weaknesses
> The proposed solution works on discrete text prompt optimization without constraints on the meaningfulness of the text, making it hard to directly understand the effectiveness of prompt optimization.
While we do experiment with fluency constraints, showing that it is possible to add such constraints (see experiments in the Supplementary Material), this is not actually our primary goal. We are not concerned with the fluency of the generated prompts. In all applications, a user would be able to provide any series of tokens to a generation API, and there is no need for the optimized prompt to be fluent. We agree that this is a departure from some of the literature on prompt optimization, which is concerned with finding interpretable prompts, but for the use cases we consider, interpretability is not required.
For the CLIP experiments, our study also provides interesting prompts that hint at a secret language within the diffusion models. We further highlight this "secret language" in Section 5 (Safety Concerns), where we are able to bypass the Midjourney content filters. In safety settings, the ability to bypass a word filter with a non-interpretable prompt is a core necessity of the attack.
#### Questions:
Regarding Q1, our suspicion is rooted in overfitting, evident from the decreasing loss accompanied by a lower CLIP Score. Additionally, upon extending the prompt length, we observed an increase in the occurrence of model-specific unicode token usage within the optimized prompts. Consequently, this led to diminished transferability in comparison to the usage of shorter prompts.
We believe that introducing a constraint, such as enforcing optimized tokens to adhere to the English language, has the potential to alleviate the overfitting issue. To validate this notion, we have now conducted a preliminary assessment. When utilizing the keyword bank as a constraint during optimization, we noted a notable improvement in mitigating the overfitting concern, as illustrated in the table below.
| | 4 tokens | 8 | 16 | 32 | 64 |
|:------------------:|:--------:|:-----:|:-----:|:-----:|:-----:|
| PEZ | 0.686 | 0.697 | 0.699 | 0.689 | 0.670 |
| PEZ + Keyword Bank | 0.684 | 0.699 | 0.712 | 0.716 | 0.708 |
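One way such a keyword-bank constraint could be implemented is to restrict the nearest-neighbor projection to embeddings drawn from the bank rather than the full vocabulary. This is only a sketch under that assumption, with illustrative names and a toy Euclidean projection:

```python
import numpy as np

def constrained_project(soft_prompt, vocab_embeddings, keyword_ids):
    """Project each soft token onto the nearest embedding drawn only from
    a keyword bank (a subset of the vocabulary), not the full vocab."""
    bank = vocab_embeddings[keyword_ids]                   # (K, dim)
    dists = ((soft_prompt[:, None, :] - bank[None, :, :]) ** 2).sum(-1)
    nearest_in_bank = dists.argmin(axis=1)                 # (num_tokens,)
    return bank[nearest_in_bank], [keyword_ids[i] for i in nearest_in_bank]
```

Constraining the projection set acts as a regularizer: tokens outside the bank (e.g., model-specific unicode tokens) can never be selected, which is consistent with the reduced overfitting observed above.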
For Q2, we explore this question in Figure 8 in the Supplementary Material. We find that there is a step down in performance when moving from a distillation ratio of 0.3 to 0.1.
Thank you again for your thoughtful review. We made a significant effort to address your feedback including experiments, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address? | Summary: This work proposes a new paradigm which optimizes discrete prompts simply by gradient projection to bridge the gap between relatively “easy-to-optimize” soft prompts and their hard counterparts. Their methodology is straightforward and effective in downstream text-to-image task requirements. And more specifically, comparing to existing gradient-based approaches, such as AutoPrompt, their approach is less hyper-parameter sensitive equipping with their gradient projection. All these advantages offer a new lens on how to optimize discrete prompts for text-to-image generation models, and LLMs likewise.
Strengths: 1. This framework is easy to implement, by simply gradient projection in the soft embedding space, and the performance seems promising compared to popular caption-based "CLIP interrogator". It does contribute another approach for such optimization problem.
2. The experiments do show the advantages of such framework, such as "the insensitiveness to hyper-parameters".
3. The paper organization is good, and the writing is well-polished.
Weaknesses: 1. I think adding more baselines in your main table would definitely be a plus! Currently, you mainly compare this to a heuristic-based yet popular method called "CLIP interrogator". Can you compare your approach with more diversified baselines, e.g., "AutoPrompt with different LRs", "RLPrompt", etc.. These can let readers gain a basic intuitive understandings of your method's performance against others.
2. In your main table, the improvements of yours compared to CLIP interrogator seems to be marginal, where your performance is almost the same level of performance with CLIP interrogator, even though you only include fewer tokens.
3. Besides, can you show some quantitative and qualitative examples, directly showing on the same set of images, what the optimized prompts by the CLIP interrogator are, and what the optimized prompts by yours are (and perhaps, their word overlap after distillation), such that we can enhance our understanding that your approach identifies a "secret" language, like the initial work [1] (in addition to possible application contributions, the secret language could also contribute to scientific research if your pipeline is good at identifying such phenomena). Otherwise, regarding your contribution that your identified prompt is shorter, we may speculate that the prompts in the CLIP interrogator can also be distilled into the same short length, and even with the same key tokens as your pipeline. Therefore, the secret language observation would be weakened. Besides, the CLIP interrogator does not even require any training for a similar level of performance and even a similar level of short lengths through distillation, so why not use their easier pipeline? --- Currently, figure 6 further enhances my confidence on such point as well, as your prompts include a lot of natural tokens in accordance with our common-sense understandings, such as cat images should use "cat" tokens.
4. Additionally, practitioners would be interested in your optimization efficiency as well for such algorithm to identify well-crafted prompts, comparing to many other soft/hard prompt optimization work. In terms of this, if you have better efficiency, that is also another good point, where I am also interested.
[1] Discovering the hidden vocabulary of DALLE-2, arxiv 2022
So right now, I am more viewing this work's contribution as a different perspective/approach to prompt optimization for text-to-image generation models, and even more models. Of course, that is interesting, and useful/insightful to the community. However, a key limitation would be such utilities are not quite clear alongside all current prompt optimization approaches. And the performance improvement seems to somewhat limited compared to CLIP interrogator.
Hopefully, my suggestions would be useful for you to improve your content.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address each of your points below:
>1. I think adding more baselines in your main table would definitely be a plus! Currently, you mainly compare this to a heuristic-based yet popular method called "CLIP interrogator". Can you compare your approach with more diversified baselines, e.g., "AutoPrompt with different LRs", "RLPrompt", etc.
We include AutoPrompt (SGD) with different learning rates in Figure 5(a). Additionally, we believe RLPrompt is infeasible for this particular set of experiments, as each experiment (a single image in our case) takes between 1.5 and 4 hours according to [1]. Also, please see Table 2 in the Supplementary Material for more baselines.
[1] Deng, Mingkai, et al. "Rlprompt: Optimizing discrete text prompts with reinforcement learning." arXiv preprint arXiv:2205.12548 (2022).
>2. In your main table, the improvements of yours compared to CLIP interrogator seems to be marginal, where your performance is almost the same level of performance with CLIP interrogator, even though you only include fewer tokens.
The CLIP Interrogator is a purpose-built tool with a much stronger prior than PEZ; that we are able to match it with a general optimization scheme indicates to us that our much more general-purpose optimizer works quite well.
Specifically, the CLIP Interrogator uses the BLIP image-to-text model and a word bank of predefined artistic phrases to search over. Problems with using this tool in any context other than generating captions for art are highlighted by the difference we see on the Celeb-A dataset, where PEZ outperforms the CLIP Interrogator because these images are "OOD" for both BLIP and the small word bank. Additionally, due to the compressibility of PEZ prompts, we can combine different prompts together, as highlighted in Sections 4.3 and 4.4; under the current functionality of the CLIP Interrogator, this is impossible. In short, PEZ outperforms in recreating images that fall outside the CLIP Interrogator's prior and offers additional functionality, while matching this purpose-built tool on its exact task.
>3. Besides, can you show some quantitative and qualitative examples of directly showing on the same set of images, what the optimized prompts by the CLIP interrogator, and what the optimized prompts by yours (and perhaps, their word overlapping by distillation), such that we can enhance our understandings that your approach identifies a "secret" language, like initial work [1] (in addition to possible application contributions, the secret language could also contribute to scientific research if your pipeline is good at identifying such phenomenon). Otherwise, to one of your contributions that your identified prompt is shorter, we may speculate that the prompts in the CLIP interrogator can also be distilled into the same short length, and even with the same key tokens as your pipeline. Therefore, the secret language observation would be weakened. Besides, the CLIP interrogator even does not require any training with a similar level of performance and even a similar level of short lengths through distillation, why not use their easier pipeline? --- Currently, figure 6 further enhances my confidence on such a point as well, as your prompts include a lot of natural tokens in accordance with our common-sense understandings, such as cat images should use "cat" tokens.
Do you mind clarifying the latter part of this statement? We are a little confused by what exactly you are suggesting. Thank you!
Regarding the other questions, note the differences between CLIP interrogator that we mention above. We do evaluate CLIP Interrogator for shorter prompt lengths, see Table 1.
> 4. Additionally, practitioners would be interested in your optimization efficiency as well for such an algorithm to identify well-crafted prompts, compared to many other soft/hard prompt optimization work. In terms of this, if you have better efficiency, that is also another good point, where I am also interested.
Thank you for the suggestion. Prompted by your feedback, we have now conducted a series of experiments comparing the efficiency between PEZ and AutoPrompt$_\text{SGD}$ using varying numbers of steps, as outlined in the table below. Our findings indicate that PEZ achieves faster convergence compared to AutoPrompt.
| Method | 100 Steps | 200 | 400 | 600 | 800 | 1000 |
|:--------------:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| AutoPrompt | 0.642 | 0.654 | 0.656 | 0.672 | 0.677 | 0.684 |
| PEZ | 0.668 | 0.674 | 0.675 | 0.684 | 0.690 | 0.695 |
Thank you again for your thoughtful review. We made a significant effort to address your feedback including experiments. We have updated our draft accordingly and will include these updates in our camera-ready version, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Title: Reviewer Lu7o Replies
Comment: <1. baselines>
- From your rebuttal, I have now understood the key advantage you want to show. Precisely, that is the **efficiency** compared to other baselines (with a similar level of performance as many baselines), such as AutoPrompt, RLPrompt, etc. It would be much better for you to consider adding efficiency comparison visualizations/discussions. Additionally, I am not sure whether RLPrompt would fail in this case, as image-level/instance-level optimization takes less time, which deserves some experiments as well. If it is weaker/much slower than your approach in a few cases, it would be good to include this and highlight it several times in your paper, to make your key advantages more clear. Moreover, from your new AutoPrompt experiments, it would be good for you to add one **rigorous** figure on the efficiency comparison results, e.g., using the best setup and also less good setups (such as other LRs, prompt lengths in AutoPrompt), to position your approach. This would definitely add some value to your paper.
<2. CLIP interrogator>
- Thanks for your clarification. It does seem that interrogator performs much weaker with fewer tokens. But as you said, it would be good for you to incorporate some efficiency studies as well. That can address my concerns on your approach vs. CLIP interrogator.
<3. Secret Language>
- This part is just for one of your claims. You said you identify a secret language of text-to-image diffusion models [1]. But in the initial paper, the lexical items that they used are purely gibberish tokens, w/o any significant real-world meanings, such as "Apoploe vesrreaitais" for birds. But in your results, and also your new pdf results, you always include some natural tokens, for instance, "butterfly emoji + users', 'cruise + green render lights', "balloon relationships", “cat” for cat image, and many more shown in your paper figures. I am curious about more fine-grained studies on your generated prompts. For instance, others might suspect that when deleting your natural meaningful tokens, your generated results would fail completely, which may indicate a weaker claim of "secret language". Or in other words, perhaps, most of your used nonsense tokens are mainly search artifacts. Therefore, my comments are to say it would be better for you to provide some rigorous studies.
I believe this point is also mentioned by reviewer DXf6, for which I agree with him/her.
[1] Discovering the Hidden Vocabulary of DALLE-2
<4. efficiency results>
- See 1, it would be better for you to provide rigorous studies to position your approach.
For current rating, I am saying that this work should be further improved to be an interesting paper published in this conference. So I select the borderline rating, in which I would like to wait for other reviewers or ACs for final judgements.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your positive and thoughtful response.
> <1. baselines> and <4. efficiency results>
We appreciate this suggestion. We have updated our draft to include a rigorous comparison of CLIP score vs. wall time. Due to the rebuttal rules, we can only present a markdown table here, but we will include the associated figure in the camera-ready version, also including RLPrompt, which is omitted here due to its very high runtime.
| prompt length | lr | method | 100 steps | 200 steps | 400 steps | 600 steps | 1000 steps |
|:-------------:|:---:|:----------:|:---------:|:-----:|:-----:|:-----:|:-----:|
| 4 | 0.1 | AutoPrompt | 0.461 | 0.515 | 0.536 | 0.551 | 0.578 |
| | | PEZ | 0.610 | 0.635 | 0.653 | 0.666 | 0.666 |
| | 1 | AutoPrompt | 0.614 | 0.639 | 0.663 | 0.667 | 0.670 |
| | | PEZ | 0.649 | 0.657 | 0.663 | 0.660 | 0.668 |
| | 10 | AutoPrompt | 0.643 | 0.649 | 0.663 | 0.665 | 0.662 |
| | | PEZ | 0.645 | 0.653 | 0.672 | 0.681 | 0.680 |
| 8 | 0.1 | AutoPrompt | 0.516 | 0.553 | 0.582 | 0.598 | 0.614 |
| | | PEZ | 0.632 | 0.656 | 0.671 | 0.677 | 0.678 |
| | 1 | AutoPrompt | 0.641 | 0.654 | 0.656 | 0.672 | 0.683 |
| | | PEZ | 0.667 | 0.673 | 0.674 | 0.684 | 0.690 |
| | 10 | AutoPrompt | 0.644 | 0.662 | 0.671 | 0.671 | 0.676 |
| | | PEZ | 0.670 | 0.677 | 0.679 | 0.679 | 0.682 |
| 16 | 0.1 | AutoPrompt | 0.551 | 0.595 | 0.628 | 0.631 | 0.639 |
| | | PEZ | 0.646 | 0.665 | 0.679 | 0.686 | 0.695 |
| | 1 | AutoPrompt | 0.663 | 0.672 | 0.680 | 0.680 | 0.687 |
| | | PEZ | 0.657 | 0.686 | 0.678 | 0.690 | 0.691 |
| | 10 | AutoPrompt | 0.665 | 0.665 | 0.672 | 0.682 | 0.679 |
| | | PEZ | 0.678 | 0.682 | 0.681 | 0.681 | 0.686 |
> <2. CLIP interrogator>
The CLIP Interrogator consistently requires the same amount of time regardless of the length of the prompt: it calculates the CLIP scores between the target image and all tokens in the prior bank, then selects the top-k to fill approximately 70 tokens. However, it is crucial to highlight that our method offers competitive prompts with significantly fewer tokens than the CLIP Interrogator. This provides users with increased flexibility, allowing them to concatenate tokens or use the prompt as a style guide. Such flexibility stands as a notable advantage of our pipeline.
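The bank-scoring procedure described above could be sketched roughly as follows; the function and variable names are illustrative, and the real tool scores multi-token phrases with a CLIP text encoder rather than raw feature vectors.

```python
import numpy as np

def interrogator_style_prompt(image_feat, bank_feats, bank_phrases, k):
    """Score every phrase in the word bank against the image and
    concatenate the top-k highest-scoring phrases into one long prompt."""
    sims = bank_feats @ image_feat           # (bank_size,) similarity scores
    top = np.argsort(-sims)[:k]              # indices of the k best phrases
    return ", ".join(bank_phrases[i] for i in top)
```

Note that the cost is dominated by scoring the entire bank, which is why the runtime does not depend on the requested prompt length.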
Furthermore, quantifying the efficiency of the CLIP interrogator in terms of the number of steps can be challenging, so we have added an efficiency comparison based on wall time to our working draft and will include it in our camera-ready version.
> <3. Secret Language>
Prompted by your feedback, we have now conducted an experiment to see if all gibberish tokens are indeed search artifacts (not meaningful for generation). We present an example here and have added others to our working draft, including both instances where gibberish tokens are valuable and ones where they are not.
Consider the following text, "translucent abyss assaulted surfing featured regrann nbappinterest", which produces a surfer in a wave tunnel, found in Figure 9 of the Appendix. We find that some tokens like "nbappinterest" and "assaulted" are not necessary for good generations. However, "regrann" is crucial, despite sounding unrelated to the image at hand and despite "regrann" not being a real word. Nonetheless, "regrann" is critical for producing sensible images in numerous ways (e.g., not producing a second person, keeping the surfboard in one piece, etc.). Such tokens contribute to the image in ways that are non-obvious at first glance. We believe that this example and our other examples suggest that the language model extracts meaning from tokens from which a human would not. We do agree that adding such examples is valuable, and we have now additionally clarified in our working draft that we are referring to the extraction of meaning from tokens from which a human would not extract meaning, or from which a human may extract vastly different meaning.
Thank you again for engaging and for your helpful suggestions. We would greatly appreciate it if you would consider improving your score in light of our detailed response. Do you have any other questions we can address? | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and thoughtful feedback. We have attached a PDF containing additional figures, showing explicit failure cases of the approach as requested by one reviewer.
We would like to emphasize the significance of this work. Our proposed, general-purpose prompt optimization method achieves comparable results to the purpose-built art prompt tool, the CLIP Interrogator, without any prior domain-specific knowledge, image-to-caption models, or keyword banks. Additionally, PEZ demonstrates that it is possible to break content filters in deployed generative AI models such as Midjourney, a new vulnerability that we uncover immediately, without even tuning the optimization scheme.
Meanwhile, the differences between PEZ and other baselines like AutoPrompt may appear subtle, but they are crucial for performance. As shown in Figure 5 (a), PEZ is much more robust under different learning rates. Also, our additional experiments, presented in the table below, suggest that PEZ converges much faster than the baseline under the same learning conditions. We believe that the flexibility and efficiency of PEZ are very important for applications like prompt optimization, really making "hard prompts easy" to use. This has been reflected in the usage and implementation of this method in a number of applications, as well as in GUI tools for hobby users of diffusion models.
| Method | 100 Steps | 200 | 400 | 600 | 800 | 1000 |
|:--------------:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| AutoPrompt | 0.642 | 0.654 | 0.656 | 0.672 | 0.677 | 0.684 |
| PEZ | 0.668 | 0.674 | 0.675 | 0.684 | 0.690 | 0.695 |
Pdf: /pdf/d2822148fb2be0bda547545c6f397062dff8d9da.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Federated Learning via Meta-Variational Dropout | Accept (poster) | Summary: The submission proposes a novel Bayesian meta-learning approach, MetaVD, for federated learning. MetaVD learns to predict client-dependent dropout rates via a hypernetwork, helping to address model personalization and the limited non-i.i.d. data problem. At the same time, MetaVD compresses the model parameters, alleviating overfitting and reducing communication costs.
Strengths: MetaVD encompasses a new posterior aggregation strategy to consolidate local models into a global one. In addition, MetaVD predicts the dropout rates of parameters via a hypernetwork, enabling parameter compression. This not only alleviates the overfitting problem but also reduces the communication costs of exchanging model parameters.
Weaknesses: In general, this article is written in a fluent, simple, and easily understandable manner. However, there are three main issues that need to be addressed:
1. The proposed method in the article is straightforward and intuitive, but many aspects lack theoretical guarantees and analysis.
2. In the past years, Bayesian federated learning has made significant progress, but many relevant works have not been discussed or compared in the article.
3. If my understanding is right, the proposed method has the risk of data leakage.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. “Recently, the Bayesian learning paradigm was introduced to the FL to tackle overfitting by considering the uncertainty of the model parameters [28–30]. However, they could also struggle with diverging local models if the data from different clients exhibit significant statistical variability.” This statement is incorrect. In recent years, Bayesian federated learning has made significant advancements. Many Bayesian methods have been employed not only to address overfitting and introduce uncertainty but also to tackle personalized scenarios. I list some works as reference below.
2. "Moreover, previous Bayesian FL methods are not developed for model personalization, and the performance can be degraded in non-i.i.d. client data." This statement is incorrect for the above reasons.
3. Why was the variational distribution in Equation 3 designed in such a form? I understand that this is an engineering approach, but can this parameterized form effectively approximate the posterior distribution of the weights? Is there any theoretical analysis available? Furthermore, why is the variance parameter $\alpha_m$ used to model the dropout rate? Is there any inherent connection between the two? While this may be an engineering approach, is there any theoretical or intuitive analysis supporting it? It is important to note that while engineering approaches are often driven by empirical performance, they can also be guided by theoretical insights or intuitions.
4. In eq(5), what is the definition of $g_m$? How does the expression of $r_k^m$ come out? Any derivation?
5. In the server-side optimization, when you update the parameters $\psi,e^m$, you are trying to optimize the $L_{ELBO}^m$ w.r.t. $\psi$ and $e^m$, but $L_{ELBO}^m$ requires the data on each client, doesn't this imply data leakage?
6. In the experiment section, as I mentioned earlier, this article lacks a thorough investigation into Bayesian federated learning (FL), as it fails to mention and compare many Bayesian FL approaches specifically designed for personalized scenarios. To name a few examples (not exhaustive):
[1] Personalized federated learning with gaussian processes, NIPS 2021.
[2] Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process. H Yu et al. 2022.
[3] Personalized federated learning via variational bayesian inference, ICML, 2022.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitations of their work, but they did not address the potential negative societal impact it may have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for insightful comments.**
Here are answers to your questions:
**1. Bayesian PFL work is not discussed.**
When we started our research last year, we were aware of only a few papers [17,15] dealing with model personalization using Bayesian methods. Meanwhile, we realized that measuring an algorithm's performance on clients unseen during training is an important task [5]; in real FL applications, most clients may not participate in training. The pFL benchmark paper [1] also extensively analyzed the gap between participating and non-participating clients across various pFL scenarios. Thus, we adopted its pFL evaluation protocols for our experiments.
On the other hand, pFedBayes [15] did not show results for OOD clients. They assume an independent local model for each device while learning a shared global model. In most of their experiments, the local models performed better than the global model, but there are no local models for unseen clients. pFedGP [17] learns a shared kernel for all clients and infers a personalized classifier for each client. However, the generalization gap in their Figure 4 seems large (compared to our results in Table 2). Also, the inducing point (IP) approximation used to speed up inverting the large kernel matrix did not outperform the original pFedGP [17] in their OOD experiment.
For this reason, we chose meta-learning-based PFL methods for most of our baselines, as this line of work has been studied for out-of-distribution (OOD) tasks. Also, we adopted the product-of-posteriors strategy for global model aggregation, which accounts for the uncertainty of the weights during aggregation. In contrast, pFedBayes and pFedGP update the global parameter (or kernel) similarly to FedAvg. That is why we primarily cited Bayesian FL works related to posterior aggregation.
We had no intention of disregarding the existing Bayesian PFL works. We will add a thoughtful discussion paragraph about them and report the results of pFedGP [17] as a baseline (we could not find official code for the other papers). Only one arXiv article [19] addresses PFL with Bayesian meta-learning (updated in July 2023 and not yet officially published). Please note that our work is still novel in this direction, as it is the first approach to utilize variational dropout uncertainty in model aggregation for Bayesian PFL.
**2. The introduction about Bayesian FL.**
The “previous Bayesian FL” refers to the approaches mentioned in the same paragraph. We will change the sentence or add an explanation accordingly (e.g., “Recently, Bayesian PFL methods [17,15,18] have been introduced in this field and shown to be effective in dealing with non-IID data. However, these approaches still have limitations (e.g., strong constraints in pFedBayes and high computational cost in pFedGP), and their model aggregation rules do not specifically consider the model uncertainty.”)
**3. The variational distribution in equation 3.**
As in L123, we extended the variational dropout (VD) [10, 21, 12, 22] approach for the local posterior. VD is a regularization technique for neural networks (NNs). It randomly turns off NN parameters during training by applying discrete Bernoulli random noise, a method initially used to prevent over-fitting [C].
Later, it was found that continuous noise sampled from Gaussian distributions works similarly to Bernoulli dropout due to the central limit theorem [A, B]. The $m$-th client's weight is interpreted as a Gaussian posterior distribution $q(w^m) = \mathcal{N}(w^{m} \vert \theta, \alpha^{m} \theta^2)$, with mean equal to the global NN's weight $\theta$ and variance equal to the square of the weight (i.e., $\theta^2$) times the client-specific dropout variable $\alpha^m$.
[A] Fast dropout training. ICML, 2013.
[B] Regularization of neural networks using dropconnect. ICML, 2013.
[C] Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 2014.
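To make the multiplicative-noise view above concrete, here is a minimal NumPy sketch (function name and toy values are ours, not from the paper) of sampling from the Gaussian dropout posterior $q(w^m) = \mathcal{N}(w^m \vert \theta, \alpha^m \theta^2)$ via $w = \theta \cdot \epsilon$ with $\epsilon \sim \mathcal{N}(1, \alpha^m)$:

```python
import numpy as np

def sample_local_weights(theta, alpha, rng):
    """Sample w^m ~ N(theta, alpha * theta^2) as multiplicative Gaussian noise.

    Equivalent to w = theta * eps with eps ~ N(1, alpha), matching the
    Gaussian dropout posterior described above.
    """
    eps = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=np.shape(theta))
    return theta * eps

rng = np.random.default_rng(0)
theta = np.array([0.5, -1.2, 2.0])   # global weights (toy values)
alpha = 0.1                          # client-specific dropout variable
w = sample_local_weights(theta, alpha, rng)
```

Averaging many such samples recovers the global mean $\theta$, while the per-coordinate variance scales with $\alpha \theta^2$, i.e., larger weights carry larger absolute uncertainty.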
**4. The derivation of the aggregation rule in equation 5.**
$g^m$ is a weighting factor for each client, proportional to its local data size (mentioned in L56 and L108).
We presented the derivation of the aggregation rule in the general comment.
**5. The risk of data leakage?**
In the FL system, the server and client only exchange the parameters (summarized statistics) instead of raw data, thus reducing the risk of data leakage. In our study, MetaVD only exchanges the NN parameter $\theta$ and dropout parameter $\alpha^m$. In addition, the dropout is applied to the weight of just one layer before the output layer (L202). Therefore, the risk of data leakage is relatively small or negligible compared to other existing FL baselines.
Computing the gradient w.r.t. $\psi$ and $e^m$ happens on the server after it receives $\theta^m_*$ and $\alpha^m_*$. The expression in L171, $\nabla_\psi \mathcal{L}^m_{\text{ELBO}}(\alpha^m) \approx (\nabla_\psi \alpha^m)^{T} \Delta \alpha^m$, is an approximation of the gradient w.r.t. $\psi$. $\nabla_\psi \alpha^m$ is the gradient of the hypernetwork's output w.r.t. $\psi$, and $\Delta \alpha^m = \alpha_*^m - \alpha^m$ is the change in the local dropout variable, approximating the vector-Jacobian product. For this update rule, we followed the recently proposed PFL algorithm pFedHN [5] (see their Section 3.2). In this work, we extended the hypernetwork to approximate the dropout variable (or conditional variance) to fully utilize the weight uncertainty in the (aggregation, regularization, and personalization of) FL.
**6. The investigation into Bayesian PFL.**
(Please see "1" above and the Bayesian PFL in the general comment)
**Reference**
(Please use the reference list in the general comment) | Summary: This paper proposes a new federated learning (FL) framework to address the various issues of FL. Specifically, this framework (a) leverages Bayesian FL to address the issue of non-i.i.d. data among clients, (b) instantiates its BFL model with a variational dropout posterior to efficiently handle a large number of clients, and (c) applies meta-learning adaptation strategies (e.g., MAML and Reptile) on the client side to personalize the local models. The paper demonstrates the performance of this approach on several vision datasets, with ablation studies to confirm the effectiveness of proposed components.
Strengths: The proposed method makes sense and addresses various relevant problems in the field of FL. I think the paper did an excellent job combining various ideas, and presenting them as one coherent framework. In addition, the empirical results positively improve over existing baselines, which suggests strong practical merits. I also like that the paper is really rigorous with conducting ablation studies.
Weaknesses: The most apparent weakness of this paper is the lack of technical novelty. Bayesian FL, Variational Dropout, and Meta-learning are all very well-known strategies. While I am not opposed to a creative combination of ideas, such idea should have a scientific merit that outweighs the sum of its individual components. Here, each component is behaving exactly as expected and there seems to be no technical challenge in terms of integrating them. I think the novel adjustment here is the hypernetwork (although I am not sure if this strategy has been previously adopted in the context of variational dropout --- e.g., hierarchical prior is sort of similar, but not exactly the same).
Some other weaknesses:
- Lack of error bar/standard deviation in any of the reported results.
- I would prefer some sort of performance vs. communication rounds to demonstrate convergence. Result tables are acceptable, but very uninformative.
- It was previously claimed that the hypernetwork helps with large number of clients, especially when some of them have limited data. I think this should be supported by an ablation study showing the effect of increasing no. clients/ decreasing client data (the current ablation study only compares MetaVD to NaiveVD/EnsembleVD on the default client setting).
Some other minor issues:
- Various typos (e.g., FEMINIST, line 187; Hetrogenity, table 2 header ...)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - What is the benefit of the hyper-network, as opposed to directly learning the client-specific dropout rates ?
- What is the overhead computation cost of the proposed framework? I believe both local meta-learning and training a server-side hypernetwork is quite costly, and it is quite unfair to compare 1000 comm rounds of MetaVD to 1000 comm rounds of FedAvg.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have discussed some limitations of the work in the conclusion. I think the method is purely theoretical and there is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the helpful review!**
In this work, we extended the hypernetwork to approximate the dropout variable of the NN's weights, enabling full utilization of the weight uncertainty in the aggregation, regularization, and personalization of FL. Our efficient composition is also implemented on top of various meta-learning-based baselines and evaluated on PFL benchmark datasets.
Here we briefly summarize the contribution of our works.
**Contribution**
* In FL, we show that a new hypernetwork-based Bayesian NN approach is feasible.
* Our method is the first to employ variational dropout uncertainty in posterior aggregation for PFL.
* We show that conditional Gaussian (dropout) noise injection into NN weights is a versatile approach that can be combined with any optimization-based meta-learning method.
* Experimentally, we have verified the performance of our work on various PFL benchmarks. We release the experiment code for all baselines for reproducibility. We put much effort into implementing stable gradient propagation to the hypernetwork and the global parameters of the meta-learning algorithms using PyTorch's functorch library.
**Lack of error bar/standard deviation.**
We presented the results of baselines with the best parameter settings optimized via a hyperparameter search tool, Optuna. We performed an extensive analysis of various different FL scenarios.
**An ablation study showing the effect of increasing no. clients/ decreasing client data.**
In the non-IID setting, a portion of clients have small datasets, and their number increases with a higher alpha (Dirichlet parameter). The reason we reported only the simple ablation study is that EnsembleVD cannot actually learn the independent dropout parameters well; technically, EnsembleVD reduces to FedAvg with Dropout(0.1).
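For context, a common recipe for generating such non-IID splits (an assumption about the setup here; the paper's exact protocol may differ) draws per-class client proportions from a Dirichlet distribution with concentration alpha:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split sample indices across clients using per-class Dirichlet(alpha) draws.

    A standard non-IID FL benchmark recipe: each class's samples are divided
    among clients according to proportions sampled from Dirichlet(alpha),
    so the concentration alpha controls how skewed client label
    distributions (and client dataset sizes) become.
    """
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.repeat(np.arange(10), 100)   # toy 10-class dataset, 1000 samples
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5, rng=np.random.default_rng(0))
```

Every sample is assigned to exactly one client, and individual client dataset sizes vary with the Dirichlet draws.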
**1. The benefit of the hypernetwork.**
In our experiment, the dropout rates in EnsembleVD were not well optimized due to restricted client participation. In contrast, the joint optimization of MetaVD with the hypernetwork converges well, which can be observed via the changes in the KL divergence loss term. This demonstrates that MetaVD's hypernetwork offers a more data-efficient approach to learning client-specific model uncertainty than EnsembleVD.
**2. What is the overhead computation cost of the proposed framework?**
We found that all models converged within 1000 rounds in our experiment. The only architectural difference between MetaVD and FedAvg is one layer before the last layer. Since we applied the conditional dropout to only that layer, the increase in computation cost is quite small.
The actual (approximate) training time on the CIFAR-10 dataset is 4 hours for FedAvg, 5 hours for MetaVD, and 6 hours for PerFedAvg (pFedGP in our experiment seems to take almost 12 hours). | Summary: This paper proposes a novel Bayesian personalized federated learning approach using meta-variational dropout. The proposed approach employs a shared hypernetwork to predict the client-dependent dropout rates for each model parameter, enabling effective model personalization and adaptation in limited non-i.i.d. data environments. The effectiveness of this approach is demonstrated empirically on various FL datasets, including CIFAR-10, CIFAR-100, FEMNIST, and CelebA, as well as multi-domain FL datasets.
Strengths: 1. The paper is clearly written and well organized.
2. The proposed method in the paper is well-motivated and technically correct.
3. The Bayesian approach to FL is interesting and seems suitable for personalization. The introduced MetaVD seems to work well in practice.
4. The method is extensively tested on a variety of FL datasets.
Weaknesses: 1. The discussion with related work needs to include other Bayesian treatments for personalized FL, e.g. [1], [2]
2. The experiments lack a comparison with relevant methods mentioned above [1-2], as well as the baseline in [3].
[1] Personalized Federated Learning via Variational Bayesian Inference. ICML 2022.
[2] Fedpop: A Bayesian approach for Personalised Federated Learning. NeurIPS 2022.
[3] Personalized Federated Learning using Hypernetworks. ICML 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Page 3, line 115: i.e., $\phi=\phi^1,..,\phi^M$ should be $\phi=\{\phi^1,..,\phi^M\}$.
2. Page 3, line 132: add a definition of $\mathbf{1}$ for $\epsilon^m \sim \mathcal{N}(\mathbf{1}, \alpha^m)$ .
3. In equation (5), how was the aggregation rule derived? It is recommended to provide more details (referring to [4] is possible).
4. It would be clearer to specify the specific sections of the Appendix. For example, on page 6, "More details are in Appendix."
5. The sparsity in Table 7 seems to only consider the personalized layer (the last fully connected layer). I am curious about the proportion of discarded parameters to the total parameters of the model.
6. I hope the authors can provide a discussion on the communication complexity of the additional hypernetwork.
7. Algorithm 1 seems confusing. It would be clearer to separate Algorithm 1 into two distinct algorithms.
[4] A Bayesian Federated Learning Framework with Online Laplace Approximation.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for the positive and encouraging feedback!**
As the review suggested, the recommended recent (Bayesian) PFL works will be thoroughly compared in the Background section.
We will update the difference between the recent PFL approaches [1,2,3,4] and ours.
We will also add the experimental results of pFedGP [17] as a Bayesian PFL baseline, as we could not find officially released codes for the other methods.
Here are answers to your questions:
**1. The correction for L115.**
Thank you for the correction.
**2. A definition of for $\epsilon \sim N(\mathbf{1}, \alpha)$ in L132.**
$\mathbf{1}$ is a K-dimensional all-ones vector, where K is the dimension of the NN's weight. ($\vec{\mathbf{1}}$ might be more appropriate.)
**3. The derivation of the aggregation rule.**
Thank you for the suggestion! Please see "the derivation of the aggregation rule" in the general comment.
**4. Specify the specific sections of the Appendix.**
We will add the section number for all the references to Appendix.
**5. The sparsity in Table 7.**
Yes, it is applied to the personalized layer before the last layer. Here is the computation of the proportion: 12900 (params of the personal FC layer) * 0.8 (dropout rate) / 416612 (total params) * 100 ≈ 2.48 (\%). This shows the possibility of incorporating the additional model uncertainty parameters in Bayesian PFL without increasing the model size.
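The arithmetic above can be verified in a few lines (the parameter counts are the ones quoted in this reply):

```python
# Parameter counts quoted in the reply above.
personal_fc_params = 12900   # parameters of the personal FC layer
total_params = 416612        # total model parameters
dropout_rate = 0.8           # fraction of the personal layer pruned

# Fraction of all model parameters that get pruned, in percent.
pruned_fraction = personal_fc_params * dropout_rate / total_params * 100
print(f"~{pruned_fraction:.2f}% of all parameters are pruned")  # ~2.48%
```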
**6. Communication complexity of the additional hypernetwork.**
The server only sends the global parameter and the approximated dropout variable to the client, while keeping the hypernetwork parameters on the server. Hence, if we applied dropout to all NN layers, the communication complexity of our work would be $\mathcal{O}(2K)$, where K is the dimension of the full NN parameters. This doubles the cost compared to FedAvg, which is due to the modeling of uncertainty in the Bayesian framework, similar to prior Bayesian methods. However, our MetaVD applies dropout to just a single layer before the output layer and can also prune some of the parameters, so the increase in communication cost is negligible.
**7. Algorithm 1 seems confusing. It would be clearer to separate Algorithm 1 into two distinct algorithms.**
Thank you for the suggestion. We will try to optimize the readability of Algorithm 1.
**Reference**
(Please use the reference in the general comment)
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Considering other reviewers have also mentioned concerns regarding the readability and details, I will maintain the current score. | Summary: The paper introduces a novel approach called meta-variational dropout (MetaVD) for addressing challenges in federated learning (FL). Traditional FL faces issues such as model overfitting and divergence of local models due to non-i.i.d. data across clients. MetaVD leverages Bayesian meta-learning to predict client-specific dropout rates using a hyper network. Extensive experiments conducted on various FL datasets demonstrate that MetaVD achieves good performance.
Strengths: 1. The proposed approach is novel.
2. Authors performed experiments on a variety of tasks and datasets and showed promising results.
3. The proposed method showed consistent improvements over the considered baselines with a good margin.
Weaknesses: A primary weakness is that the authors have not compared their results with existing state-of-the-art personalized FL algorithms, e.g., [1,2] and the related work/baselines referenced within them.
Other notes and requests for clarification:
1. In L66, where the authors discuss the convergence guarantees of the original FL algorithm, it is important to provide a citation.
2. The authors have written about the “Challenges of FL” in Section 2. They should briefly describe how the proposed approach addresses these challenges.
3. Figure 1 requires additional details to aid understanding. The authors should clarify the meaning of the "x" in the modules of the figure and explain how it illustrates the difference in aggregation between MetaVD and FedAvg.
4. In Table 3, the signs of "+" and "-" should be reversed to ensure accurate representation.
5. Table 4 should explicitly mention that the results presented are for out-of-distribution (OOD) clients to provide a clear context for the findings.
6. Figure 1 is not referenced or discussed in the paper. It should be either referred to in the main text or removed if it does not contribute significantly to the paper's content.
7. Additional information regarding the computation of gradients for the hypernetwork parameters should be included.
8. The authors should provide an explanation as to why the PerFedAvg+MetaVD+DP combination in Table 7 leads to improved performance despite dropping almost 80% of the parameters.
9. Regarding equation 5, the authors should elaborate on the implications of the inverse dependence of the aggregation weights on the square of the model weight. This information would enhance the understanding of the aggregation process and its impact on the final results.
[1] “Fusion of Global and Local Knowledge for Personalized Federated Learning”, TMLR 2023.
[2] “FedALA: Adaptive Local Aggregation for Personalized Federated Learning”.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (see "weaknesses" above).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors haven’t discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the encouraging feedback on our work!
We appreciate the reviewer introducing the important PFL works (FedSLR, FedALA). We will cite them in the paper.
**PFL Baseline**
We summarize the baselines and related works **presented** in the submitted paper
|**Method**|category|status|
|-----------|--------|------|
|FedAvg|FL|baseline|
|FedProx|FL|baseline|
|FedAvg+FT [1]|Personalized FL|baseline|
|(pFed) Reptile [2]|Personalized FL|baseline|
|(pFed) MAML [3]|Personalized FL|baseline|
|PerFedAvg [4]|Personalized FL|baseline|
|pFedHN [5]|Personalized FL|cite|
|FedAG |Bayesian FL|cite|
|FedBE |Bayesian FL|baseline|
|FedPA [11]|Bayesian FL|cite|
|FedAVG + SNIP [9]|FL + Pruning|baseline (in appendix)|
|MetaVD (ours)|Bayesian PFL|ours|
* Reptile [2], MAML [3], and PerFedAvg [4] are the SOTA PFL baselines extended from few-shot/meta-learning approaches. We chose these meta-learning-based PFL approaches as our key baseline since they perform well on out-of-distribution (OOD) clients. A summary of baselines is presented in section D of the appendix.
* We extensively analyzed all baselines to find the best parameter settings using a hyperparameter search tool, Optuna. We conducted experiments on various FL algorithm evaluation scenarios following the recent FL benchmark paper, pFL-Bench, which makes our approach comparable to the 20 existing SOTA pFL methods evaluated in the pFL-Bench paper. For example, our results in Table 4 on FEMNIST are comparable to Table 2 of the pFL-Bench paper. In the CIFAR-10 and CIFAR-100 experiments, the sample sizes for participating and non-participating clients are slightly different; otherwise, they are almost the same.
Here are answers to your questions:
**1. L66.**
In L22, we cited two references; one discusses the convergence rate of FedAvg with IID data, and the other is a survey paper.
**2. “Challenges of FL” in Section 2.**
* Heterogeneity of client data: MetaVD is a technique of modulating a global NN parameter (of FedAvg, or (pFed) Reptile/MAML) with a conditional dropout technique. Predicting the client-specific dropout variable via a hypernetwork enables reconfiguring a single NN for various different tasks. We also utilized the client-specific dropout rate as a weighting factor in the global model aggregation.
* Sparse connectivity (or client’s participation in training): Table 5 in our experiment illustrates that reducing the participating client size does not degrade the prediction scores when we applied MetaVD. We hypothesize this is partially due to the conditional dropout preventing the overfitting into a small subset of clients.
* Limited data: The initial parameters learned in meta-learning help predict a local model with only a few adaptation steps; thus, meta-learning-based PFL is advantageous for clients with limited data. The conditional dropout also prevents local model overfitting by additionally regularizing (or conditioning) the initial parameters.
* Communication cost: We demonstrated the possibility of pruning the model parameter in FL communication.
**3. Figure 1 requires additional details.**
We will update Figure 1 as follows:
* We will add the meaning of “X” in Figure 1: it represents the pruned parameters in each communication round.
* When aggregating local parameters, MetaVD utilizes the model’s weights as well as the weights’ uncertainty to find its global optimum. This corresponds to the global posterior mean of the product of two local Gaussian posteriors. In contrast, FedAVG only considers the mean of each local model. We will add this description with more details to the figure to explain the difference in the global aggregation between them.
**4. The reversed sign in Table 3.**
Thank you! We will reverse the sign.
**5. Specify OOD clients in Table 4.**
We will update the phrase (L261) in the caption.
**6. Reference Figure 1.**
(see “3.” above)
**7. The computation of gradients for the hypernetwork parameters.**
(Please see ``the update rule for hypernetwork'' in the general comment)
**8. Why the PerFedAvg+MetaVD+DP combination in Table 7 leads to improved performance.**
The performance enhancement after pruning our model aligns with Occam's Razor, which favors simpler explanations and helps prevent overfitting. This principle explains why pruning or dropout training can increase performance, a phenomenon also observed in existing works [9, 13]. A comparison with the SNIP pruning algorithm [9] is included in the Appendix on page 9.
**9. The implications of the inverse of $\theta^2$ in Eq 5.**
In our work, the $m$-th client's posterior distribution is defined as the Gaussian dropout posterior [10,12] of the form $q(w^m) = \mathcal{N}(w^{m} \vert \theta, \alpha^{m} \theta^2)$, with mean equal to the global NN's weight $\theta$ and variance equal to the square of the weight (i.e., $\theta^2$) times the client-specific dropout variable $\alpha^m$. Thus, $\theta^2$ emphasizes the effect of larger weights on the estimated variance $\sigma^2 = \alpha^m \theta^2$.
The $\alpha^m(\theta^m)^2$ in Eq 5 describes the estimated uncertainty in the model parameters for the $m$-th client. The inverse of this term is used to compute the aggregation weights $r^m$. This means that parameters with more uncertainty (larger variance or higher dropout rates) have less influence on the aggregated global model parameter $\theta^{\text{agg}}$. With this technique, the federated learning system can adaptively adjust each local model's contribution to the global model, which helps reduce the impact of outliers or anomalous clients whose model parameters deviate due to unique local data distributions or noise. Also, in statistical learning theory, the Fisher information is directly proportional to the precision (or inverse variance) of a Gaussian distribution.
**Reference**
(Please use the reference in the general comment) | Rebuttal 1:
Rebuttal: # General comment to all reviewers
We sincerely thank the reviewer for our work's positive and encouraging feedback.
Here, we summarize some common answers to the reviewers.
## Bayesian PFL
Since we hear the reviewers' suggestion of discussing some recent Bayesian PFL works, we will add a thoughtful discussion paragraph about the recent development in Bayesian PFL works.
We will remove Table 1 in the Background section and add a paragraph to distinguish our work.
We will report the results of pFedGP[17] as a baseline in the experiment (we could not find the official codes for the others).
We found only one paper [19] on arXiv addressing PFL with Bayesian meta-learning (updated in July 2023 and not officially published); our work is still rare in this direction. Also, our work is the first approach to utilize the conditional variational dropout uncertainty in model aggregation for Bayesian PFL.
|**Method**|Category|Status|
|-----------|-----------|-------|
|pFedGP [17]|Bayesian PFL|→ baseline|
|pFedBayes [15]|Bayesian PFL|→ cite|
|FedBNR [18]|Bayesian PFL|→ cite|
|Fedpop [16]|Bayesian PFL|→ cite|
|FedPPD |Bayesian PFL|→ cite|
|FedABML [19]|Bayesian Meta PFL|→ cite|
|MetaVD (ours)|Bayesian Meta PFL|ours|
## The aggregation rule in Eq 5
Eq 5 is derived from the product of Gaussian local posteriors.
In general, the mode of the product of $M$ Gaussians, $\prod_{m=1}^{M} \mathcal{N}(\mu_m, \sigma^2_m)$, simplifies to $\mu_{\text{agg}} = \sum_{m=1}^M r_m \mu_m$ where $r_m = (\sigma^2_m)^{-1} / \sum_{m'=1}^M (\sigma^2_{m'})^{-1}$.
This is a standard statistical result [23] (mentioned in L160).
The difference in our approach is that we utilize the Gaussian dropout posterior, $q(w^m) = \mathcal{N}(w^{m} \vert \theta^m, \alpha^{m} (\theta^m)^2)$, for each m-th client, where $\theta^m$ and $\alpha^m$ are locally updated NN's weights and dropout parameters, respectively. If we consider the mean $\mu_m = \theta^m$ and variance $\sigma_m^2 = \alpha^m (\theta^m)^2$, we can get Eq 5 as an approximation of the mean of aggregated posterior.
In Eq 5, $g^m$ is a weighting factor for each client, proportional to its local data size (mentioned in L56 and L108). This formulation is directly adapted from Eq 3 of FedPA [11] and from [6,7]. In fact, minimizing the local posteriors using the KL divergence in L112 is equivalent to maximizing the logarithm of the product of weighted posteriors, as described in Eq 8 of [7]. With heterogeneous data, the product rule yields a more concentrated aggregated posterior than a mixture of Gaussian posteriors. A similar derivation can be found in the continual learning paper [8].
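A minimal NumPy sketch of this precision-weighted aggregation (array shapes, function name, and toy values are ours; the paper's Eq 5 is the reference):

```python
import numpy as np

def aggregate(thetas, alphas, g, eps=1e-12):
    """Precision-weighted aggregation of local Gaussian dropout posteriors.

    Assumes each client m holds q(w^m) = N(theta^m, alpha^m * (theta^m)^2),
    and the aggregation weight r^m is proportional to g^m times the inverse
    variance, normalized over clients.
    thetas: (M, K) local weights; alphas: (M, K) dropout variables;
    g: (M,) data-size weights.
    """
    precision = g[:, None] / (alphas * thetas**2 + eps)   # g^m / sigma_m^2
    r = precision / precision.sum(axis=0, keepdims=True)  # normalize over clients
    return (r * thetas).sum(axis=0)                       # theta_agg = sum_m r^m theta^m

thetas = np.array([[1.0, 2.0], [3.0, 2.0]])   # two clients, two parameters
alphas = np.array([[0.1, 0.5], [0.1, 0.5]])
g = np.array([0.5, 0.5])
theta_agg = aggregate(thetas, alphas, g)
```

With equal dropout variables and data sizes, the weight with smaller magnitude (hence smaller variance $\alpha \theta^2$) dominates the aggregate, illustrating the inverse-variance weighting discussed above.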
## The update rule for hypernetwork
The expression in L171 is an approximation of the gradient w.r.t. $\psi$: $\nabla_\psi \mathcal{L}^m_{\text{ELBO}}(\alpha^m) \approx (\nabla_\psi \alpha^m)^{T} \Delta \alpha^m$, where $\nabla_\psi \alpha^m$ is the gradient of the hypernetwork's output w.r.t. $\psi$ and $\Delta \alpha^m = \alpha_*^m - \alpha^m$ is the change in the dropout variable, with $\alpha_*^m$ the dropout rate locally updated using the local data and $\alpha^m$ the dropout variable initially predicted by the hypernetwork. $\Delta \alpha^m$ is an approximation of the vector-Jacobian product, inspired by pFedHN [5] and LOOKAHEAD [14]. Given the difference $\Delta \alpha^m$, the update rules for $\psi$ and $e^m$ are:
For $\psi$:
$\Delta \psi = (\nabla_\psi \alpha^m)^{T} \Delta \alpha^m$
$\psi \leftarrow \psi + \eta \frac{1}{M} \sum_m g_m (\nabla_\psi \alpha^m)^{T} \Delta \alpha^m$, using the weights $g_m$ (defined in L56)
For $e^m$:
$\Delta e^m = (\nabla_{e^m} \alpha^m)^{T} \Delta \alpha^m$, using the chain rule given $\alpha^m = h_\psi(e^m)$
$e^m \leftarrow e^m + \eta (\nabla_{e^m} \alpha^m)^{T} \Delta \alpha^m$
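To make the update rules concrete, here is a toy NumPy sketch with a scalar linear hypernetwork $h_\psi(e) = \psi \cdot e$ (our simplification, not the paper's architecture); for this $h$, $\nabla_\psi \alpha^m = e^m$ and $\nabla_{e^m} \alpha^m = \psi$, and the learning rate is purely illustrative.

```python
import numpy as np

def server_update(psi, e, delta_alpha, g, eta=0.1):
    """One aggregation step for psi and the client embeddings e^m,
    using the toy hypernetwork alpha^m = h_psi(e^m) = psi * e[m].

    delta_alpha[m] = alpha_*^m - alpha^m is the change in the dropout
    variable after local training (given here as plain numbers).
    g[m] is the data-size weight of client m.
    """
    grad_psi_alpha = e                          # nabla_psi alpha^m = e^m
    psi_new = psi + eta * np.mean(g * grad_psi_alpha * delta_alpha)
    e_new = e + eta * psi * delta_alpha         # chain rule: nabla_e alpha^m = psi
    return psi_new, e_new

# Hypothetical two-client round.
psi, e = 1.0, np.array([2.0, -1.0])
psi_new, e_new = server_update(psi, e, np.array([0.5, 0.2]),
                               np.array([1.0, 1.0]))
```

The real method replaces the explicit gradients above with automatic differentiation through the hypernetwork, but the shape of the update is the same.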
## Reference
[1] pFL-bench: A comprehensive benchmark for personalized federated learning. NeurIPS, 2022.
[2] Federated meta-learning with fast convergence and efficient communication. arXiv preprint, 2018.
[3] Improving federated learning personalization via model agnostic meta learning. arXiv preprint, 2019.
[4] Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. NeurIPS, 2020.
[5] Personalized federated learning using hypernetworks. ICML, 2021.
[6] Asymptotically exact, embarrassingly parallel MCMC. arXiv preprint, 2013.
[7] A Bayesian Federated Learning Framework with Online Laplace Approximation. arXiv preprint, 2021.
[8] Overcoming catastrophic forgetting by incremental moment matching. NeurIPS, 2017.
[9] Snip: Single-shot network pruning based on connection sensitivity. ICLR, 2018.
[10] Variational dropout and the local reparameterization trick. NeurIPS, 2015.
[11] Federated learning via posterior averaging: A new perspective and practical algorithms. ICLR, 2021.
[12] Variational bayesian dropout with a hierarchical prior. CVPR, 2019.
[13] Learning both weights and connections for efficient neural network. NeurIPS, 2015.
[14] Lookahead Optimizer: k steps forward, 1 step back. NeurIPS, 2019.
[15] Personalized federated learning via variational bayesian inference. ICML, 2022.
[16] Fedpop: A bayesian approach for personalised federated learning. NeurIPS, 2022.
[17] Personalized federated learning with gaussian processes. NeurIPS, 2021.
[18] Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process. arXiv preprint, 2022.
[19] Personalized Federated Learning via Amortized Bayesian Meta-Learning. arXiv preprint, 2023.
[20] What do we mean by generalization in federated learning?. ICLR, 2022.
[21] Variational Bayesian dropout: pitfalls and fixes. ICML, 2018.
[22] Dropout as a bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016.
[23] Products and convolutions of Gaussian probability density functions. Tina-Vision Memo, 2003. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Graph of Circuits with GNN for Exploring the Optimal Design Space | Accept (poster) | Summary: Automatic circuit design and optimization is an active field of research. Many different algorithms have been proposed with different design objectives, e.g., layout optimization, size optimization, and topology generation. Prior works adopted GNNs to represent circuit topologies, but the authors suggest a different approach in this work. After constructing a dataset consisting of sample circuits and their labels, each sample circuit is mapped to a node in a graph. Then, a GNN model is employed to predict the labels of unlabeled samples. Two different optimization algorithms are proposed, and they exhibit relatively good performance in experiments.
Strengths: - This is a new approach to applying GNNs to circuit design.
- The method can be applied to a wide range of circuit topologies (in theory).
- The surrogate model works relatively well in experiments.
Weaknesses: - Probably the most interesting idea in this work is how to use graphs for circuit optimization. The way of embedding circuits into a node of a graph looks novel. However, the performance of the surrogate model is quite low, as displayed in Tables 1 and 2. It might be slightly better than prior methods, but its accuracy is still too low, so the method is not very practical. Can this really substitute conventional SPICE simulations when optimizing the design? There is no guarantee that this algorithm can produce an equally good circuit compared to SPICE-based approaches.
- Since the GNN models and optimization algorithms are not entirely new, this work might not see much interest in the machine learning community, especially considering prior works have been mostly published in EDA conferences and journals (e.g., DAC, ICCAD, and TCAD).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - The optimized two-stage OTAs display good performances in Table 4. However, analog circuits often show totally different behavior in transient simulations. Were all the final designs verified in transient simulations?
- 2-stage OTA and ring oscillator are very simple circuits. Did the algorithm work well for more complex circuits?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Not a practical substitute for SPICE models
- Only tested on simple circuits
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1, L1**: Thanks for your comments.
**Accuracy of the surrogate model**:
The model under consideration exhibits somewhat modest performance on metrics such as Gain (0.6) and UGB (0.6). However, a contrasting trend emerges for metrics like GM, PM, Noise, Power, Frequency, and Delay, where the $R^2$ scores consistently surpass 0.8. This performance has been achieved in a very limited-label scenario (comprising merely 500 labeled samples), a dataset size that stands in stark contrast to the larger datasets employed by comparable studies.
It is worth noting that performance can be further enhanced by introducing a small number of additional labeled samples. We also intend to explore encoding of the design space to further improve the $R^2$ score.
Hence we believe that the proposed surrogate model can act as an effective proxy for SPICE simulations during optimization.
**Substituting SPICE simulations during optimization**:
The pre-trained surrogate model was employed during optimization, and we compared actual (simulated) vs. predicted values for the performance metrics of the most optimal design parameters. We observed that the predictions from the surrogate model closely resemble the simulated values across most metrics, providing strong evidence that the proposed model can act as a proxy for SPICE simulations.
**W2**: Thanks for your comments. Although graphs have been introduced in the circuit design problem before, this work is the first to incorporate a semi-supervised learning framework, which significantly reduces the reliance on extensive labeled datasets. Furthermore, our work yields the following benefits:
(1) Formulating a node regression problem by considering each node to be a circuit.
(2) Incorporating technology and topology information to facilitate knowledge transfer.
(3) Since the graph obtained is homogeneous it allows any GNN architecture to be easily integrated as opposed to prior works which generate heterogeneous graphs.
(4) Incorporating semi-supervised learning framework, which has reduced the reliance on extensive labeled datasets.
(5) Creating static and online frameworks that incorporate graph-based surrogate models to obtain the most optimal parameters.
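To illustrate point (1), here is a minimal sketch of treating each circuit as a node and performing semi-supervised node regression. We use plain neighbor averaging (label propagation) in place of a trained GNN, and the k-NN edge rule and features are our illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def knn_graph(X, k=3):
    """Each circuit (a row of design parameters X) becomes a node;
    edges connect the k nearest neighbors in parameter space,
    yielding a homogeneous graph."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # no self-edges
    return np.argsort(d, axis=1)[:, :k]       # neighbor index array

def propagate(nbrs, y, labeled, iters=50):
    """Semi-supervised node regression: labeled nodes keep their
    simulated metric; unlabeled nodes repeatedly take the mean of
    their neighbors' current estimates."""
    est = np.where(labeled, y, y[labeled].mean())
    for _ in range(iters):
        est = np.where(labeled, y, est[nbrs].mean(axis=1))
    return est

# Hypothetical data: 20 one-parameter circuits, metric = 2 * parameter,
# with only every other node "simulated" (labeled).
X = np.linspace(0.0, 1.0, 20)[:, None]
y = 2.0 * X[:, 0]
labeled = np.arange(20) % 2 == 0
est = propagate(knn_graph(X), y, labeled)
```

Replacing the averaging step with a learned layer (e.g. GraphSAGE) gives the trainable version; the key point is that unlabeled circuits get predictions without extra SPICE runs.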
**Relevance to the Machine Learning Community**:
Though the algorithms underpinning our contributions draw upon existing knowledge, we integrate them in a distinctive way and provide an efficient and accurate end-to-end pipeline for the circuit design problem.
The integration of existing algorithms has become a highly intriguing and well-received area of research, capturing considerable attention in top-tier conferences like **NeurIPS** [2,3], **AAAI**[1], **ICLR** [6] and **ICML** [4,5] in **2022-23**. This integration is especially focused on circuit optimization, covering both analog and digital circuits.
**References**:
[1] Cao, Benosman, Zhang, and Ma. Domain Knowledge-Based Automated Analog Circuit Design with Deep Reinforcement Learning. 2022.
[2] Yang, Yang, Li, Zhang, Zhang, Song, and Hao. Versatile Multi-stage Graph Neural Network for Circuit Representation. 2022.
[3] Wang, Yang, Lee, and Han. Learning to Design Circuits. 2020.
[4] Krylov, Khajeh, Ouyang, Reeves, Liu, Ajmal, Aghasi, and Fox. Learning to Design Analog Circuits to Meet Threshold Specifications. 2023.
[5] Zhang, He, and Katabi. Circuit-GNN: Graph Neural Networks for Distributed Circuit Design. 2019.
[6] Dong, Cao, Zhang, Tao, Chen, and Zhang. CktGNN: Circuit Graph Neural Network for Electronic Design Automation. 2023.
**Q1**: Thanks for asking this question; it is indeed a very important point. In this work we simulated the designs using the optimized parameters obtained from our algorithm. The results shown in Tables 4 and 5 of the main paper, as well as Tables 1 and 6, are generated purely from simulations using the optimized parameters. Further, we also performed a transient step-response analysis of a unity-gain amplifier that uses our designed op-amp. From this step response we observed that the system is stable and that the phase margin of the circuit should lie somewhere between 50 and 55 degrees (there is only one visible peak in the ringing response before it settles), so it meets the desired performance. Kindly refer to the supplemented simulation results.
**Q2, L2**: Thanks for asking this question. Although the two-stage amplifier and the ring oscillator are simple circuits, they cover a wide variety of circuit behavior. The two-stage amplifier is an example of a circuit that operates mostly in the linear region, where small-signal analysis is valid and operating points are calculated only once. The ring oscillator, in contrast, operates mostly in the large-signal regime, where transistors move through different regions of operation and analysis is performed using periodic steady states, with operating points changing continuously in a repetitive fashion. Further, the two-stage OTA processes voltage-domain signals, while for the ring oscillator the time-domain information of the signal is important. Therefore, even though these circuits may look simple, they cover different and important categories of analog circuits, and most analog circuits can be thought of as extensions and combinations of these simple building blocks.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ response. Unfortunately, many concerns have not been addressed yet, despite additional details provided in the rebuttal. I will not change the score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. To the best of our understanding, we have tried to answer your concerns. We will do our best to address your remaining concerns, but it would be very helpful if you could point us to the issues that need to be better resolved. | Summary: - This paper proposes a GNN-based framework, dubbed Graph of Circuits Explorer (GCX), to optimize circuit design by predicting the performance parameters of the nodes in circuits, e.g., Gain, Bandwidth, Noise, etc.
- This paper 1) employs a semi-supervised learning framework for the graph-based surrogate model, and 2) creates a comprehensive feature vector that integrates information about various technology nodes and topologies, emphasizing generalizability in zero-shot and few-shot learning frameworks.
- With the proposed techniques, this work can enhance circuit performance prediction accuracy and efficiency while reducing the reliance on extensive labelled datasets.
- It demonstrates the proposed framework's effectiveness via simulation under 180nm CMOS technology, with model generalizability tested on 65nm and 45nm CMOS process nodes.
Strengths: - The presentation of this paper is very solid and clear, both in terms of the text description and the figure illustration.
- This work conducts comprehensive validation across different GNN model variants, other circuit performance prediction methods, and different ablation settings. The improvements shown are consistent.
Weaknesses: - The contribution and distinction over the inherent characteristics of GNNs could be emphasized more. This paper extensively leverages GNNs' ability to predict unknown nodes' results from their surrounding neighbors. However, this has been widely researched in the transductive learning scenario of GNNs and in the label propagation of earlier graph processing works. The authors could better highlight these inherent (vanilla) techniques' limitations and what they do specifically to address them.
- The practical significance of the results' improvements could be supplemented. Although the evaluations consistently show improvements, it is a little unclear what the technique enables in practical EDA tasks where existing methods are lacking. The significance of the tested metrics in actual applications could also be added.
- Is there any potential limitation of the proposed methods?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please address the comments in the weakness section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have not yet included the limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: Thanks for the feedback. We performed the following set of experiments highlighting the robustness of our proposed algorithms to some commonly encountered problems with vanilla GNN architectures:
(1) **Over-smoothing in GNNs**: Over-smoothing is an inherent problem with deeper GNNs and results in performance degradation.
**Proposed solutions**: (1) Generate sparse graphs: the formulation discussed in the paper provides sparsity control over the graph, so over-smoothing is curbed. (2) Use variants of GNNs: GCX(SAGE), as proposed in the paper, provides the highest resilience to over-smoothing (refer to the tables for experimental results).
| GCX(.) | Gain ($p=$50%) | Gain ($p=$70%) | UGB ($p=$50%) | UGB ($p=$70%) | GM ($p=$50%) | GM ($p=$70%) | PM ($p=$50%) | PM ($p=$70%) | Noise ($p=$50%) | Noise ($p=$70%) | Power ($p=$50%) | Power ($p=$70%) |
|-------|------------|------------|-----------|-----------|----------|----------|----------|----------|-------------|-------------|-------------|-------------|
| SAGE | 0.33 | 0.58 | 0.30 | 0.55 | 0.76 | 0.81 | 0.65 | 0.79 | 0.73 | 0.93 | 0.90 | 0.92 |
| GAT | 0.07 | 0.11 | 0.05 | 0.06 | 0.64 | 0.78 | 0.63 | 0.66 | 0.32 | 0.40 | 0.50 | 0.51 |
| GCN | 0.03 | 0.09 | -0.05 | -0.01 | 0.09 | 0.33 | 0.13 | 0.13 | 0.08 | 0.10 | 0.40 | 0.45 |
**Table 1: $R^2$ scores with different GNN architectures with deeper layers - Sparse Graph: (Average Degree- 3.5)**
| GCX(.) | Gain ($p=$50%) | Gain ($p=$70%) | UGB ($p=$50%) | UGB ($p=$70%) | GM ($p=$50%) | GM ($p=$70%) | PM ($p=$50%) | PM ($p=$70%) | Noise ($p=$50%) | Noise ($p=$70%) | Power ($p=$50%) | Power ($p=$70%) |
|-------|------------|------------|-----------|-----------|----------|----------|----------|----------|-------------|-------------|-------------|-------------|
| SAGE | 0.36 | 0.52 | 0.44 | 0.49 | 0.36 | 0.70 | 0.78 | 0.83 | 0.82 | 0.88 | 0.79 | 0.89 |
| GAT | 0.00 | 0.00 | 0.00 | 0.00 | 0.28 | 0.52 | 0.44 | 0.52 | 0.12 | 0.18 | 0.38 | 0.40 |
**Table 2: $R^2$ scores with different GNN architectures with deeper layers - Dense Graph: (Average Degree- 10.1)**
(2) **Rigidity to specific topologies**: Incorporating multiple topologies to facilitate generalizability has not been widely explored.
**Proposed solution**: We attempt to perform knowledge transfer across different technology files and topologies, and we observe encouraging results for the test case under study (refer to the following table).
| Model GCX(SAGE) | 3-S,FS (65,45nm) $p$=350 | 3-S,FS (65,45nm) $p$=450 | 3to5-S,FS (65,45nm) $p$=350 | 3to5-S,FS (65,45nm) $p$=450 |
|----------------|------------------|------------------|-------------------|-------------------|
| **Frequency** | 0.73 | 0.86 | 0.45 | 0.78 |
| **Delay** | 0.76 | 0.85 | 0.66 | 0.72 |
| **Power** | 0.81 | 0.88 | 0.67 | 0.80 |
| **Average** | **0.77** | **0.86** | **0.59** | **0.76** |
**W2**:
**Practical Significance**:
Our framework minimizes the dependence on extensive labeled datasets, consequently reducing the duration of the design cycle.
**Comparison with practical EDA tools**: (1) Current EDA tools rely entirely on a supervised learning framework, which requires extensive simulations, making the process resource- and time-intensive. Our proposed method employs semi-supervised learning to act as a proxy for simulations in limited-label scenarios. (2) The usage of graphs and graph-based surrogate models has not been widely explored in EDA techniques, and current works are very restrictive in terms of generalizability.
**Significance to Applications**: We consider different metrics like power consumption, Noise, PM, etc., which contribute to the stability and efficient performance of the circuit. The demonstrated results validate that our proposed algorithm balances the trade-off between performance (metrics like Gain, UGB, Noise, PM) and resource consumption (Power).
**W3**: After thorough experimentation across various paradigms we observe our proposed approach has following limitations:
**Low $R^2$ score**: We currently observe that the $R^2$ scores corresponding to metrics like Gain and UGB are low, which results in incorrect predictions. We intend to explore techniques like encoding of the design space to further elevate the score.
**Limited success in knowledge transfer**: Our current approach shows promising results in facilitating knowledge transfer. However, to realize a more comprehensive replacement of SPICE simulations, it is imperative to enhance the implementation of our surrogate model to ensure its accuracy and reliability. | Summary: This paper proposes a graph based semi supervised framework that is capable of driving analog design
Strengths: * The paper is generally well written and organized.
* The figures are nicely drawn to make understanding the necessary concepts easier.
* The proposed method is well illustrated; the idea of using semi-supervised methods looks interesting and promising.
* Experimental results are well demonstrated and explained, comparison against existing methods look promising.
Weaknesses: Please see questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * On the conclusion: from what I understand, the result is more optimal than other methods; how do you determine that it yields the most optimal parameters?
* For different technologies, does it need separate training, or is the knowledge transferable across technologies?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * The authors discuss some of the limitations in knowledge transfer.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: This is a very important question and thanks for pointing it out.
**Algorithmic viewpoint**: The optimality of the design parameters depends on how well the objective function is optimized. For the test cases we considered, the FOM was chosen as the objective function (details are discussed in the supplementary). When the stopping criterion (discussed in Q3 of reviewer LVTs) is met, the algorithm outputs the most optimal set of design parameters.
**Designer's viewpoint**: The significance assigned to each metric varies across circuits and applications. For an amplifier, gain might reign supreme, whereas Phase Margin (PM) might command precedence elsewhere; conversely, lower power consumption could be paramount in other scenarios. In each case the definition of optimality changes and results in a different set of design parameters. For our case, following the designer's recommendation, we assign equal importance to all the specifications under consideration and the algorithm provides the optimal parameters. Notably, our algorithm can easily incorporate such additional preferences and output the parameters accordingly.
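For instance, the kind of weighted objective described here could look like the following sketch. The metric names, targets, and "higher is better" flags are illustrative assumptions, not the paper's exact FOM.

```python
def figure_of_merit(metrics, targets, higher_is_better, weights=None):
    """Fold several specs (e.g. Gain, PM, Power) into one scalar.

    Each spec is normalized by its target; cost-like specs (Power,
    Noise) use the inverted ratio so that a larger FOM is always
    better. Equal weights mirror the 'equal importance' choice
    described in the text.
    """
    if weights is None:
        weights = {k: 1.0 for k in metrics}
    total = sum(weights.values())
    score = 0.0
    for k, v in metrics.items():
        r = v / targets[k] if higher_is_better[k] else targets[k] / v
        score += weights[k] * r
    return score / total

# Hypothetical amplifier meeting every spec exactly -> FOM of 1.0.
fom = figure_of_merit(
    metrics={"gain_db": 60.0, "pm_deg": 60.0, "power_mw": 2.0},
    targets={"gain_db": 60.0, "pm_deg": 60.0, "power_mw": 2.0},
    higher_is_better={"gain_db": True, "pm_deg": True, "power_mw": False},
)
# fom == 1.0
```

Changing the `weights` dictionary is how a designer would encode that, say, gain matters more than power for a particular application.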
**Q2**: Thanks for asking this question. As discussed in the paper, we attempted few-shot and zero-shot learning by generating a comprehensive feature vector that facilitates knowledge transfer across technology nodes and topologies. Our proposed approach achieved encouraging performance in few-shot learning; the results are as follows:
| Model GCX(SAGE) | 3-S,FS (65,45nm) $p$=350 | 3-S,FS (65,45nm) $p$=450 | 3to5-S,FS (65,45nm) $p$=350 | 3to5-S,FS (65,45nm) $p$=450 |
|----------------|------------------|------------------|-------------------|-------------------|
| **Frequency** | 0.73 | 0.86 | 0.45 | 0.78 |
| **Delay** | 0.76 | 0.85 | 0.66 | 0.72 |
| **Power** | 0.81 | 0.88 | 0.67 | 0.80 |
| **Average** | **0.77** | **0.86** | **0.59** | **0.76** |
So, by fine-tuning our pre-trained model on the newer set of samples, our approach facilitates knowledge transfer.
**L1**: After thorough experimentation across various paradigms we observe our proposed approach has following limitations:
(1) **Low $R^2$ score**: We currently observe that the $R^2$ scores corresponding to metrics like Gain and UGB are low, which results in incorrect predictions. We intend to explore techniques like encoding of the design space to further elevate the score.
(2) **Limited success in knowledge transfer**: Our current approach shows promising results in facilitating knowledge transfer. However, to realize a more comprehensive replacement of SPICE simulations, it is imperative to enhance the implementation of our surrogate model to ensure its accuracy and reliability. | Summary: This article proposes a graph-based circuit design framework to help address the issue of label scarcity in circuit design. The paper is well-written, virtually free of errors or inconsistencies. Information is presented accurately and timely, creating a smooth narrative flow between the main paper and supplementary content. However, I do have some queries regarding the specific technical aspects of the paper. I would be immensely grateful if the authors could provide satisfying answers.
Strengths: (1) The technology presented is quite innovative, successfully merging popular Graph Representation Learning with the semi-supervised nature of GNNs and the fundamental characteristics of circuits.
(2) The writing exhibits smooth logic, with technical details clearly expressed.
Weaknesses: (1) The citations in this paper primarily focus on circuit design and system-related conferences or journals. I would like to see more analysis related to GNNs, as GNNs typically require deeper layers for larger circuits (components of which should be quite complex). However, an overly deep receptive field might lead to an over-smoothing issue. Is there a chance of over-smoothing occurring when each device is learning within the network?
(2) While this paper presents many circuit-related simulation experiments, it lacks validation in real-world scenarios. To my knowledge, circuit design in practical settings is costly, which may present challenges for the authors in testing on specific circuits. Nevertheless, I am still interested in querying this issue and hope the authors could offer some related thoughts.
(3) The importance of each device in the circuit might vary. In certain situations, some key devices are crucial to the entire circuit. Therefore, I believe the authors' discrete encoding approach may be unsuitable.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Please include more detailed analysis of the specific GNN networks, especially addressing potential issues such as over-smoothing.
(2) Is it feasible to conduct experiments in real circuits? If so, please include such tests, which could involve smaller, simpler circuits. If not, it would be helpful to clearly explain the reasons for not testing in real-world situations.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: (1) Please include tests on various GNN frameworks.
(2) Please clarify in the experiment why only simulation results are presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1, Q1, L1**: Thanks for raising the concern. We experimented with deeper GNNs and the results are as follows:
| GCX(.) | Gain ($p$=50%) | Gain ($p$=70%) | UGB ($p$=50%) | UGB ($p$=70%) | GM ($p$=50%) | GM ($p$=70%) | PM ($p$=50%) | PM ($p$=70%) | Noise ($p$=50%) | Noise ($p$=70%) | Power ($p$=50%) | Power ($p$=70%) |
|-----------|----------------|----------------|---------------|---------------|--------------|--------------|--------------|--------------|----------------|----------------|----------------|----------------|
| SAGE | 0.33 | 0.58 | 0.30 | 0.55 | 0.76 | 0.81 | 0.65 | 0.79 | 0.73 | 0.93 | 0.90 | 0.92 |
| GAT | 0.07 | 0.11 | 0.05 | 0.06 | 0.64 | 0.78 | 0.63 | 0.66 | 0.32 | 0.40 | 0.50 | 0.51 |
| GCN | 0.03 | 0.09 | -0.05 | -0.01 | 0.09 | 0.33 | 0.13 | 0.13 | 0.08 | 0.10 | 0.40 | 0.45 |
**Table 1: $R^2$ scores with different GNN architectures with deeper layers - Sparse Graph: (Average Degree- 3.5)**
| GCX(.) | Gain ($p$=50%) | Gain ($p$=70%) | UGB ($p$=50%) | UGB ($p$=70%) | GM ($p$=50%) | GM ($p$=70%) | PM ($p$=50%) | PM ($p$=70%) | Noise ($p$=50%) | Noise ($p$=70%) | Power ($p$=50%) | Power ($p$=70%) |
|-----------|----------------|----------------|---------------|---------------|--------------|--------------|--------------|--------------|----------------|----------------|----------------|----------------|
| SAGE | 0.36 | 0.52 | 0.44 | 0.49 | 0.36 | 0.70 | 0.78 | 0.83 | 0.82 | 0.88 | 0.79 | 0.89 |
| GAT | 0.00 | 0.00 | 0.00 | 0.00 | 0.28 | 0.52 | 0.44 | 0.52 | 0.12 | 0.18 | 0.38 | 0.40 |
**Table 2: $R^2$ scores with different GNN architectures with deeper layers - Dense Graph: (Average Degree- 10.1)**
GCN exhibits the worst performance when subjected to deeper layers. The initial experiments (best-performing models) were conducted with GCN (2 hidden layers), GraphSAGE (1 hidden layer) and GAT (1 hidden layer); to demonstrate over-smoothing we added an extra layer to the corresponding architectures.
We acknowledge the occurrence of "over-smoothing" as GNNs become deeper, potentially compromising surrogate model accuracy. To counter this, we adopted strategic measures during algorithm development to effectively curb over-smoothing's adverse effects:
**Construct a sparse graph**: The optimization formulation suggested in the paper controls the sparsity of the graph, thus reducing the effect of over-smoothing.
**Use variants of GNNs**: Architectures such as SAGE and GAT exhibit enhanced resilience to over-smoothing. Our proposed GCX(SAGE) model showcases superior performance in mitigating the over-smoothing challenge.
By adopting the above-mentioned preventive techniques we obtain significant resilience to the over-smoothing problem with deeper GNNs.
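The over-smoothing effect quantified in the tables can also be demonstrated directly: under repeated mean-neighborhood propagation (GCN-style aggregation without learned weights), node features collapse toward a common value. The small ring graph below is purely illustrative, not the paper's circuit graph.

```python
import numpy as np

# Row-normalized adjacency (with self-loops) of an 8-node ring graph.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

# Random node features, then 10 'layers' of propagation H <- A_hat @ H.
H = np.random.default_rng(0).normal(size=(n, 4))
var_before = H.var(axis=0).mean()
for _ in range(10):
    H = A_hat @ H
var_after = H.var(axis=0).mean()
# var_after is a small fraction of var_before: the node features have
# over-smoothed, which is why controlling graph sparsity and choosing
# SAGE-style layers matters for deeper models.
```

On a sparser graph (lower average degree) the mixing is slower, so the feature variance survives more propagation steps, matching the trend reported in the tables.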
**W2, Q2, L2**: Thanks for asking this question. It is true that the performance of a circuit design is ultimately tested on silicon. However, designing, fabricating, and validating a chip in the lab is a costly and time-consuming affair, so designs are first validated through extensive simulation to ensure performance. Since the models used in these simulations are obtained from real validation on silicon, they closely predict the behavior of circuits realized on silicon. We have simulated all the optimized circuits using the same models (180nm CMOS process TSMC foundry models, which are made available by the foundry) to ensure correctness, and our results are shown in the supplementary material. We plan to validate on silicon as well; however, this will require another year, as we are currently in the design process and fabrication plus getting the chip back will take at least a year. Hence at this stage we provide simulation results, which in the circuit community are a close confirmation of the validity of a circuit.
**W3**: Thanks for the comments. Our present encoding technique efficiently consolidates crucial circuit information into a single feature vector. Nonetheless, specific components within a circuit might hold greater significance than others. While our current encoding methodology assigns uniform weight to all components, our approach offers the advantage of tailoring weights to specific design parameters. This adaptability enhances the comprehensibility of the feature encoding and enables a finer grasp of the overall circuit configuration.
Rebuttal: **Q1)** **Reviewer-5fbs**: Transient Analysis Simulation results
Pdf: /pdf/793cd3be4e4ac128448190e6509a1ac73e210ca8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The design automation of analog circuits poses significant challenges in terms of the large design space, complex interdependencies between circuit specifications, and resource-intensive simulations. To address these challenges, this paper presents an innovative framework called the Graph of Circuits Explorer. GCX enables the creation of a surrogate model that facilitates efficient exploration of the optimal design space within a semi-supervised learning framework which reduces the need for large labelled datasets. The proposed approach comprises three key stages. The effectiveness of the proposed approach is demonstrated through simulated performance evaluation of various circuits, using derived parameters in 180 nm CMOS technology. Furthermore, the generalizability of the approach is extended to higher-order topologies and different technology nodes such as 65 nm and 45 nm CMOS process nodes.
Strengths: - This paper introduces a novel and innovative approach to optimize circuit design by leveraging graph representation and graph-based semi-supervised learning. The approach is designed to enhance accuracy and efficiency while reducing the reliance on extensive labelled datasets.
- To achieve this goal, a semi-supervised learning framework is employed for the graph-based surrogate model. a new method is introduced to create a comprehensive feature vector that integrates information about various technology nodes and topologies, emphasizing generalizability in zero-shot and few-shot learning frameworks. By integrating these approaches, two new optimization methods on graph-based surrogate models are proposed: Efficient Analog Sizing via Constrained Optimization (EASCO) and Analog Sizing through Real-time Online Graphs (ASTROG).
Weaknesses: - This paper leverages a GNN; more experiments on how to extract the features should be performed, e.g., with a transformer
- The training process is not clearly described. The paper claims "Semi-Supervised Learning on Graph of Circuits"; more details need to be added, such as the semi-supervised learning procedure.
- How are the node weights of the graph defined? During training, is the relation between two nodes necessary?
- "Stopping criteria are predefined, and the optimal design is outputted if these criteria are met; otherwise, the process moves to the next step.": the definition of "stopping criteria" is not clearly stated.
- The details of the dataset are not clearly stated.
- "Given the constraints on labelled data availability, zero-shot learning was found to be ineffective in achieving satisfactory performance. However, there was some promising progress observed with few-shot learning, which indicates the potential of utilizing a small amount of labelled data to enhance the learning process." More details and analysis need to be provided.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: as written in Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: as written in Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: Thanks for the feedback. We experimented with the Graph Transformer architecture across different performance metrics with appropriate hyper-parameter tuning. The loss function shows a swift decline; however, the obtained $R^2$ scores are very poor.
With our current time constraints in mind, our focus remains on investigating the implementation details to better understand the underlying factors influencing our outcomes.
**W2**: Thanks for raising the concern. We hope the following discussion provides a more detailed description:
(1) Start by generating a detailed dataset (details described in W4) and simulating labels for $l$ samples, where $l \ll n$.
(2) Next, create a graph of circuit instances, denoted $X=(X_l, X_u)$ and represented as a weighted adjacency matrix ($W$).
(3) Within this graph, each circuit instance $i$ corresponds to a node, and its circuit-level parameters form a feature vector $X_i \in \mathbb{R}^{k}$.
(4) This graph, $G(V,E,W,X_u,X_l,Y_l)$, termed the Graph of Circuits Explorer (GCX), is integrated with Graph Neural Networks (GNNs) to facilitate label propagation across unlabeled samples.
(5) The semi-supervised GNN framework accurately learns the underlying graph function while constraining the loss computation solely to the available labeled samples.
(6) We propagate labels for performance metrics such as Gain, UGB, GM, and PM across the unlabeled circuit instances ($|X_u|$ nodes).
(7) We consider two labeled-data scenarios, $p = 30\%$ and $p = 50\%$, for training the GNNs.
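As an illustrative analogue of steps (4)-(6), here is a plain label-propagation sketch over a weighted circuit graph (this is not the paper's actual GNN; the function name and the toy graph are hypothetical, and labeled nodes are simply clamped to mimic a loss restricted to labeled samples):

```python
import numpy as np

def propagate_labels(W, y_l, labeled_idx, n_iters=50):
    """Propagate a scalar performance label (e.g., Gain) over the graph.

    Labeled nodes are clamped to their simulated values each iteration,
    mirroring a loss computed only on the l << n labeled samples.
    """
    P = W / W.sum(axis=1, keepdims=True)  # row-normalize the adjacency
    y = np.zeros(W.shape[0])
    y[labeled_idx] = y_l
    for _ in range(n_iters):
        y = P @ y                  # each node averages its neighbors
        y[labeled_idx] = y_l       # clamp the labeled instances
    return y

# Toy graph: nodes 0-2 are similar circuit instances, nodes 3-4 another group.
W = np.array([[0.0,  1.0, 1.0, 0.05, 0.0],
              [1.0,  0.0, 1.0, 0.0,  0.0],
              [1.0,  1.0, 0.0, 0.0,  0.0],
              [0.05, 0.0, 0.0, 0.0,  1.0],
              [0.0,  0.0, 0.0, 1.0,  0.0]])
y_hat = propagate_labels(W, y_l=np.array([60.0, 20.0]), labeled_idx=[0, 3])
```

Unlabeled nodes inherit the label of the group they are strongly connected to, which is the intuition behind restricting the loss to labeled nodes while still supervising the whole graph.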
**W3**: **Node weights in a graph**: Thanks for raising the concern. The weights in the graph are assigned based on the proximity of design parameters among different circuit instances: when the Euclidean distance between the design parameters is small, the corresponding instances receive higher edge weights. This produces more pronounced connections between circuit instances that share similar design parameters within the $d$-dimensional space, in contrast to instances with distinct parameters. The resulting weights form a weighted adjacency matrix, obtained by solving the optimization formulation outlined in the paper.
**Significance of node weights**:
(1) **Sparsity in the graph**: computational time during training is reduced, and the over-smoothing problem can be controlled.
(2) **Identifying prominent neighbors**: edge weights remove unnecessary connections of a node, so information via message passing is obtained only through important neighbors.
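The proximity-based weighting and prominent-neighbor sparsification described above can be sketched as follows (a Gaussian kernel with a top-$k$ mask is our illustrative stand-in for the paper's optimization-based formulation; the function name and toy data are hypothetical):

```python
import numpy as np

def circuit_adjacency(X, sigma=1.0, k=2):
    """Edge weights from design-parameter proximity, sparsified to each
    node's k strongest neighbors and symmetrized."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                 # close params -> high weight
    np.fill_diagonal(W, 0.0)
    keep = np.argsort(W, axis=1)[:, -k:]                 # k largest weights per row
    M = np.zeros_like(W)
    M[np.arange(len(X))[:, None], keep] = 1.0
    W *= np.maximum(M, M.T)                              # symmetric sparsity mask
    return W

# Two instances with close design parameters, one far away in design space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W = circuit_adjacency(X, sigma=1.0, k=1)
```

The top-$k$ mask is what yields the sparsity benefit (1), and the surviving high-weight edges realize the prominent-neighbor selection (2).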
**Stopping criteria**: For both the test cases we exclusively define our objective function as Figure of Merit (FOM). Details about FOM can be found in the supplementary material.
The stopping criterion is met when the optimum value of the FOM is reached, i.e., $[(FOM)_n - (FOM)_{n-1}] \leq \epsilon$, where $n$ is the iteration index in both the EASCO and ASTROG algorithms. Each algorithm is run ten times to ensure reproducibility (the specifications are satisfied in every run).
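A minimal sketch of this convergence check (the name `fom_converged` and the optimizer loop are our own; reading the criterion as the absolute change in FOM is an assumption on our part):

```python
def fom_converged(fom_history, eps=1e-3):
    """True once the FOM improvement between consecutive optimizer
    iterations drops to eps or below, i.e. |FOM_n - FOM_{n-1}| <= eps."""
    return len(fom_history) >= 2 and abs(fom_history[-1] - fom_history[-2]) <= eps

# Hypothetical optimizer loop: keep proposing sizings until the FOM plateaus.
history = []
for fom in [1.20, 1.45, 1.52, 1.5204]:
    history.append(fom)
    if fom_converged(history):
        break
```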
**W4**: Thanks for raising the concern. Design-space bounds, performance metrics, and specifications are carefully chosen following the designer's recommendations (mentioned in the experiment section). Features (design-space points) $X = (X_u, X_l)$ are generated by uniformly sampling points across the $d$-dimensional space, forming a matrix $X \in \mathbb{R}^{n \times d}$. Since we employ semi-supervised learning, we simulate labels for only $l$ samples, where $l \ll n$ (samples are chosen randomly to avoid bias). The resulting dataset is $(X_l, X_u, Y_l)$. Refer to the table below:
| Samples | Design parameters | Performance metrics |
|---------|-------------------|--------------------|
| 1 | $X_1$ | $Y_1$ |
| 2 | $X_2$ | $Y_2$ |
| - | - | - |
| $l$ | $X_l$ | $Y_l$ |
| $l+1$ | $X_{l+1}$ | $\times$ |
| - | - | - |
| $n$ | $X_n$ | $\times$ |
**Sample Distribution**: $X_{1-l}, Y_{1-l}$ are the labeled samples; $X_{(l+1)-n}$ constitutes the unlabeled set $X_u$.
Two-stage Miller Compensated OTA: Design parameters ($X_i$) are: Reference current ($I_{ref}$), Reference Voltage ($V_{dd}$), Transistor widths and length ($W_{1-8}, L_{1-8})$, Capacitance ($C_c,C_L$) and load resistance ($r_L)$. Performance metrics ($Y_i$) are : Gain, UGB, GM,PM, Noise, Power.
Three-stage Ring Oscillator: Design parameters ($X_i$) are: Reference Voltage ($V_{dd}$), Transistor widths and length ($W_{1,2}, L_{1,2})$ and Capacitance ($C_1$). Performance metrics ($Y_i$) are: Frequency, rms Jitter, Delay, Power.
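The dataset-generation procedure above can be sketched as follows (the function name, the 2-D toy design space, and its bounds are hypothetical; real labels $Y_l$ would come from circuit simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_design_dataset(n, l, lower, upper):
    """Uniformly sample n design points in the d-dimensional box
    [lower, upper] and randomly mark l << n of them for simulation
    (random choice avoids bias in which points receive labels)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    X = rng.uniform(lower, upper, size=(n, lower.size))
    labeled = np.zeros(n, dtype=bool)
    labeled[rng.choice(n, size=l, replace=False)] = True
    return X[labeled], X[~labeled]   # (X_l, X_u); Y_l comes from simulating X_l

# Hypothetical 2-D design space: C_c in [1e-12, 1e-11] F, V_dd in [0.9, 1.8] V.
X_l, X_u = make_design_dataset(n=1000, l=300,
                               lower=[1e-12, 0.9], upper=[1e-11, 1.8])
```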
**W5**: Thanks for raising the concern. We performed two sets of experiments with $p = 350$ and $p = 450$ labeled samples, transferring from $180\,nm$ to the $65\,nm$ and $45\,nm$ technology nodes:
Technology transfer: $180\,nm \rightarrow 65\,nm$ and $45\, nm$ for 3-stage.
Technology + Topology transfer: $180\,nm \rightarrow 65\,nm$ and $45\, nm$ from 3-stage to 5-stage.
| Model GCX(SAGE) | 3-S,FS (65,45nm) $p$=350 | 3-S,FS (65,45nm) $p$=450 | 3to5-S,FS (65,45nm) $p$=350 | 3to5-S,FS (65,45nm) $p$=450 |
|----------------|------------------|------------------|-------------------|-------------------|
| **Frequency** | 0.73 | 0.86 | 0.45 | 0.78 |
| **Delay** | 0.76 | 0.85 | 0.66 | 0.72 |
| **Power** | 0.81 | 0.88 | 0.67 | 0.80 |
| **Average** | **0.77** | **0.86** | **0.59** | **0.76** | | null | null | null | null | null | null |
Hyperbolic Space with Hierarchical Margin Boosts Fine-Grained Learning from Coarse Labels | Accept (poster) | Summary: The paper adresses the issue of coarse to fine learning, _i.e._ coarse labels are present during training but the metric are computed on fine-grained labels. The paper introduces a method Poincaré embeddings with hierarchical cosine margins (PE-HCM). The paper suggests embedding the image representation in the hyperbolic space for coarse label classification to take advantage of its natural hierarchical organisation. It has a second branch that clusters with k-means instances that share the same coarse labels to simulate fine-grained pseudo-labels. It uses a hierarchical loss that aims at making the cosine distance between pairs match their relation, i.e. same instance label (same instance with different data augmentation), same cluster label (assigned by k-means), same coarse label (supervision), different coarse label (supervision). The paper validates experimentally the method on several benchmarks, CIFAR-100 and BREEDS - a collection of 4 datasets. The paper conducts ablation studies of the different components of the method.
Strengths: S1. The method is well justified. The adaptive hierarchical distance is interesting and works well experimentally.
S2. Experimental results are encouraging.
S3. Ablation studies are present.
Weaknesses: W1. Some works are missing in the related works: MaskCon (MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset, CVPR 2023) and Grafit (Grafit: Learning fine-grained image representations with coarse labels, ICCV 2021) that both tackle the coarse to fine task. And UnHyp (Unsupervised Hyperbolic Metric Learning, CVPR 2021) that uses hyperbolic space to create a hierarchy.
W2. The paper lacks comparison to state-of-the-art methods such as MaskCon and Grafit. It could conduct experiments on the large scale iNaturalist-18 dataset.
W3. The paper lacks comparison to unsupervised baselines, such as UnHyp or more recent state-of-the-art self-supervised-learning methods, such as Dino (Emerging Properties in Self-Supervised Vision Transformers, ICCV 2021), iBot (iBOT: Image BERT Pre-Training with Online Tokenizer, ICLR 2022) etc.
Minor:
W4. The paper could remind the reader, in the benchmark datasets section, what classes are present in the different datasets, and what are the coarse classes on the different datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. Line 145: The paper gives notations for coarse and fine-grained labels. Could the method take advantage of a hierarchy of classes rather than solely a coarse label?
Q2. What is the impact of the granularity of the coarse labels, _i.e._ how much will the performance deteriorate when the coarse labels become more and more broad and on the contrary what happens when given the fine-grained labels?
Q3. Line 227: the paper defines a distance that is used in the HCM module. It uses the Euclidean cosine distance, which is surprising as the paper states that it compares instances in the hyperbolic space. The paper should elaborate on this design choice.
Q4. Table 2: reports results on the BREEDS benchmark. I am not sure to understand what are the results indicated as “5-way” and “all-way”.
Typos:
- Table 1: Coarase —> Coarse
- Table 5: European —> Euclidean
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Thank you for the positive comments. Below please find our responses to some specific comments.
### Comment_1:
> *Additional related works*
Thanks for your kind reminder. These works do share some connections with our method. We did not notice them because they target a different task from ours and are seldom discussed in the prior few-shot fine-grained literature. We will cite the recommended works and add the discussions (as detailed in the following responses).
### Comment_2:
> *Comparison to SOTA methods such as MaskCon and Grafit*
Thank you for the suggestion. Since MaskCon and Grafit are proposed for a different task (i.e., retrieval), we conduct a comparison on both our few-shot fine-grained (FSFG) task and the retrieval task with CIFAR-100, as shown in the table below.
| Methods | 5-way (FSFG) | All-way (FSFG)|Recall@1 (Retrieval) | Recall@5 (Retrieval) |
|-|-|-|-|-|
| Grafit (Reported in paper) | - |- |60.57 | 82.32 |
| MaskCon (Reported in paper) | - |- |**65.52** | **83.64** |
| MaskCon (Our Replication) | 81.05 |40.81 | 65.35 | 83.57 |
| Ours | **85.45** |**47.76** |62.11 | 80.68 |
**Analysis:** As shown, under FSFG, our method still achieves superior results over MaskCon (almost 7% improvement on all-way FSFG). Under the retrieval scenario (their specialized task), our method already surpasses Grafit but is lower than MaskCon. The reason might be: MaskCon's self-supervised design emphasizes the finest granularity and thus is favored by instance-based retrieval. In contrast, our approach harmonizes embeddings across multiple granularities and thus is favored by FSFG.
Given the time constraints, we couldn't finish the experiment on the large-scale iNaturalist-18 dataset in 7 days, but we pledge to incorporate results on this dataset in the final version.
### Comment_3:
> *Comparison to unsupervised baselines and more recent SOTA SSL methods*
Thank you. We compare the two most recent unsupervised methods (DINO and iBot with their publicly released model) in the table below. It is observed that our method consistently outperforms both DINO and iBot across different metrics and datasets by a large margin.
| Methods| CIFAR-100 || Living17|| Non-Living26||
|-|-|-|-|-|-|-|
|| 5-way| All-way| 5-way| All-way| 5-way| All-way|
| Dino-ResNet50 | 51.23 | 12.67 | 83.62 | 44.31 | 72.58 | 33.34 |
| iBOT-ViT-Base16 | 46.72 | 10.68 | 75.41 | 31.38 | 74.89 | 32.95 |
| Ours-ResNet50 | **81.42** | **36.28** | **90.94** | **53.09** | **89.97** | **50.12** |
### Comment_4:
> *The detailed classes in different datasets*
Thanks. We adopt the dataset-split protocol of the original papers. Briefly:
- Living-17, rooted at "living thing" in WordNet with coarse-grained classes at a depth of 5, includes salamander, turtle, dog, bear, and monkey.
- NonLiving-26, rooted at "non-living thing" and also at a depth of 5, encompasses bag, boat, car, digital computer, and ship.
- Entity-13, originating from "entity" at a depth of 3, features bird, reptile, and mammal.
- Entity-30, also rooted in "entity" but at depth 4, includes classes like serpentes, building, footwear, and fruit.
Since the number of classes is large, we do not have enough space to enumerate all the class names. Please kindly refer to the original paper for details.
### Comment_5:
> *Could the method take advantage of the hierarchy rather than solely a coarse label?*
Our method already utilizes an estimated hierarchy. Specifically, given the coarse labels, we estimate the finer-grained classes through unsupervised clustering (Ln. 49) and form an estimated hierarchy (with inevitable noises). If we have access to the ground truth of fine-grained labels, the results are expected to be better.
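The hierarchy-estimation step can be sketched as follows (a minimal k-means and the (coarse class, cluster id) labeling scheme are our illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def kmeans_assign(Z, k, iters=25, seed=0):
    """Tiny Lloyd's k-means returning cluster assignments
    (any off-the-shelf k-means would serve the same purpose)."""
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(iters):
        a = ((Z[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([Z[a == j].mean(0) if (a == j).any() else C[j]
                      for j in range(k)])
    return a

def fine_pseudo_labels(Z, coarse, k_per_class=2):
    """Cluster embeddings within each coarse class; the (coarse class,
    cluster id) pair acts as an estimated fine-grained pseudo-label."""
    fine = np.zeros(len(Z), dtype=int)
    for c in np.unique(coarse):
        m = coarse == c
        fine[m] = c * k_per_class + kmeans_assign(Z[m], k_per_class)
    return fine

# Two coarse classes, each containing two well-separated sub-clusters.
Z = np.array([[0.0, 0], [0.1, 0], [10.0, 0], [10.1, 0],
              [0.0, 10], [0.1, 10], [10.0, 10], [10.1, 10]])
coarse = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fine = fine_pseudo_labels(Z, coarse)
```

Points that share a sub-cluster receive the same pseudo-label, while sub-clusters within one coarse class are separated, which is the estimated (and inevitably noisy) hierarchy the method trains on.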
### Comment_6
> *Impact of the granularity of the coarse labels*
Thanks for this good question. Generally, the finer-grained the given labels, the better the results. During the rebuttal, we adjusted the granularity by splitting the dataset into different numbers of classes (a larger number means finer granularity).
The results are summarized below.
| | Living17 | | | NonLiving26 |||
|-|-|-|-|-|-|-|
| Num of Splitted Classes | 9 | 17 | 34 | 13 | 26 | 52 |
| 5-way | 82.84 | 90.94 | 90.58 | 82.53 | 89.97 | 88.91 |
| All-way | 43.65 | 53.09 | 63.94 | 42.51 | 50.12 | 54.78 |
| Intra-class | 50.74 | 57.11 | 68.46 | 53.06 | 58.63 | 62.68 |
Thank you for pointing out this new aspect. We believe this investigation provides valuable insight into our method, as well as the FSFG task itself.
### Comment_7
> *Using Euclidean cosine distance in the hyperbolic space*
The property of conformality, discussed in Ln. 181, preserves the angles between feature vectors when they are mapped into the hyperbolic space: two feature vectors that form a given angle in the Euclidean space form the same angle in the hyperbolic space. Consequently, employing the Euclidean cosine distance is a valid choice. More importantly, during our experiments we observed that optimizing with the Euclidean cosine distance yielded more stable gradient computations than the hyperbolic cosine distance. This empirical observation guided our choice, as it aligned with our objectives and offered more robust training. In the final version, we will follow your suggestion and include a more detailed explanation.
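A small sketch of why this is valid in the origin-centered case: the standard exponential map at the origin of the Poincaré ball rescales norms via tanh but preserves directions, so Euclidean cosine distances are unchanged by the mapping (function names are ours; `expmap0` follows the usual hyperbolic-network formulation, which may differ in detail from the paper's implementation):

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincaré ball with
    curvature -c: shrinks the norm into the unit ball, keeps direction."""
    n = np.linalg.norm(v)
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def cos_dist(u, v):
    """Euclidean cosine distance between two vectors."""
    return 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.3, 1.0, 1.0])
d_euclidean = cos_dist(u, v)
d_mapped = cos_dist(expmap0(u), expmap0(v))  # same angle after mapping
```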
### Comment_8
> *The meaning of "5-way" and "all-way"*
"5-way" refers to tasks that involve 5 randomly selected fine-grained categories. "All-way", encompasses all available fine-grained categories in the dataset.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer tnZ4,
We would like to appreciate you again for your valuable comments. Moreover, it is important for us to know whether our responses have addressed your concerns, and we look forward to receiving your further feedback. In any case, we remain available to answer your questions.
Best Regards,
The authors | Summary: This paper proposes a method developed in the hyperbolic space to embed visual embeddings to deal with the task of fine-grained learning from coarse labels. By discovering the advantages of hyperbolic space, the authors design the hierarchical cosine margin manner and an adaptive hierarchical cosine distance, which favour modelling fine-grained objects. Experiments are conducted on five benchmark datasets. Evaluations show that the proposed method achieves superior recognition accuracy over competing solutions on these datasets.
Strengths: + The proposed method is novel and makes sense, which has great technical contributions, especially for the introduction of hyperbolic space into the fine-grained learning from coarse labels.
+ The fine-grained learning from coarse labels task has good practical potential, which also is a challenging problem in computer vision and machine learning. Such a task deserves further exploration.
+ The hierarchical cosine margins and the adaptive hierarchical cosine distance are interesting. These modules are crucial for the whole framework of the proposed method and they are also tailored designs for the task studied in this paper.
+ The experiments are comprehensive and convincing. Overall, they can validate the effectiveness of the proposed method, as well as the proposed modules.
+ The paper is well-written and easy to follow.
Weaknesses: Although the motivation of Figure 1 is relatively clear, it can be further explained.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - As stated in the section of Hyperparameters, different numbers of clusters lead to different accuracy. How to choose an appropriate hyperparameter for a specific task for users?
- Some minor issues of paper writing should be fixed, e.g., “Fig” should be fixed as “Fig.” or “Figure”.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This paper does not explicitly discuss its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Thank you for the positive comments. Below please find our point-to-point responses.
### Comment_1:
> *Explain the motivation of Figure 1 in more details*
Thanks for your suggestion. Figure 1 primarily illustrates the properties of the Poincaré disk space and the impact of cosine distance constraints on the mapped space. One intrinsic characteristic of this mapping, critical for our work, is the Conformality property derived from Riemannian geometry. It ensures that angles between curves or vectors in the Euclidean space remain consistent when projected onto the Poincaré disk.
By emphasizing this property in Figure 1, we intend to provide a clearer comprehension of the principles underpinning our method. Detailing this geometric phenomenon underscores the robust theoretical base of our approach and lends deeper insight into our motivation. In our revised manuscript, we will ensure this aspect is delineated more prominently.
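For reference, the conformality property can be stated concretely using standard Poincaré-ball facts (this formulation is ours, not quoted from the paper): on the ball of curvature $-c$, the Riemannian metric is a pointwise scalar multiple of the Euclidean metric,

$$g_x = \left(\lambda_x^c\right)^2 g^E, \qquad \lambda_x^c = \frac{2}{1 - c\|x\|^2},$$

and because the conformal factor $\lambda_x^c$ is a scalar, angles measured under $g_x$ coincide with Euclidean angles at every point $x$.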
### Comment_2:
> *Selection of appropriate hyperparameter*
Thank you for pointing out the importance of choosing appropriate hyperparameters. In our experiments across five datasets, we found a recurring pattern that offers a guideline. Typically, when clustering coarse-grained categories, the number of clusters that yields optimal results is roughly twice the actual number of fine-grained subclasses within that coarse category.
While this heuristic emerged consistently across our tested datasets, we acknowledge that it may not be universally optimal. However, it does provide a good starting point for users. In practice, fine-tuning based on validation performance is always recommended to ascertain the best hyperparameter for a specific task. We will clarify this heuristic in our revised manuscript to guide potential users of our method.
### Comment_3:
> *Minor issues in paper writing*
Thank you for highlighting this inconsistency. We will align the use of "Fig." and "Figure" throughout the paper in the revised version. We appreciate your attention to detail and will carefully proofread the manuscript.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: After reviewing the author's response and considering the feedback from other reviewers, I have decided to maintain my initial score of "strong accept". I believe this is a solid paper. | Summary: The paper presents a novel method, PE-HCM, for fine-grained learning from coarse labels. The authors propose the use of the hyperbolic space for sample embedding and introduce a hierarchical cosine margins manner to enhance discriminative ability. The method combines supervised learning with coarse-grained labels and instance-level labels obtained through data enhancement and clustering. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed approach.
Strengths: * The paper addresses an important problem in the field of fine-grained learning, which is the challenge of learning from coarse labels. Fine-grained recognition is crucial in various applications, and the proposed method offers a solution to improve the performance in this context. The significance of the problem adds value to the research presented in the paper.
* The paper introduces several novel concepts and methods. The use of the hyperbolic space for sample embedding is a unique approach that has shown promise in other related works. Additionally, the introduction of the hierarchical cosine margins manner to enhance discriminative ability is an innovative strategy that contributes to refining the hierarchical divisions for fine-grained recognition. The combination of these concepts and methods in the proposed PE-HCM method demonstrates the originality of the research.
* The paper exhibits good writing quality, characterized by clarity and soundness. The paper effectively communicates the motivation, methodology, and experimental results to the readers. The descriptions of the proposed method and the experimental setup are clear and well-presented. The writing style contributes to the overall understanding of the research and enhances the readability of the paper.
* The paper demonstrates a high level of experimental rigor. The authors provide detailed descriptions of the benchmark datasets used, including their characteristics and sizes. They also compare the proposed method with several state-of-the-art models, covering a wide range of relevant approaches in the field of embedding learning. By conducting extensive experiments on multiple benchmark datasets, the authors validate the effectiveness of their proposed method and show its superiority over existing techniques. The inclusion of statistical measures such as mean accuracy and confidence intervals further enhances the reliability of the experimental results.
* The paper showcases the adaptive ability of the proposed method through the introduction of the Adaptive Hierarchical Cosine Distance (AHCD) strategy. The authors track the update processes of the distance parameters and show that the method adjusts these parameters during training to align with the actual data distribution. Furthermore, the paper provides a thorough analysis of the trade-off hyperparameter α, demonstrating the sensitivity of the method to its value and offering insights into finding an appropriate balance between discriminative hierarchical embedding and generalization ability.
Weaknesses: * While the paper includes theoretical illustrations of sample pair distances, it would be beneficial for the authors to provide empirical evidence by visualizing the feature distances between real samples.
* The paper focuses on the evaluation of the proposed method on specific benchmark datasets but lacks a broader discussion on the generalizability of the results. Providing insights into the transferability of the proposed approach to other fine-grained recognition tasks or datasets would strengthen the overall impact of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper introduces the use of the hyperbolic space for embedding samples and applies the hierarchical cosine margins to enhance the discriminative ability. While the results demonstrate improved performance in fine-grained recognition, I would like to inquire about the interpretability of the learned embeddings. Can the authors provide more insights into the interpretability of the hyperbolic embeddings and how the hierarchical cosine margins contribute to the separability of fine-grained categories? It would be helpful to include visualizations or case studies that illustrate how the learned embeddings capture meaningful hierarchical relationships and contribute to the discriminative power.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper concludes by discussing potential directions for future research, highlighting the importance of refining category-specific semantic hierarchical distance constraints. This suggestion indicates the authors' awareness of the limitations of the proposed method and opens up opportunities for further exploration and improvement in fine-grained learning from coarse labels.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Thank you for the positive comments. Below please find our point-to-point responses.
### Comment_1:
> *Visualization and interpretability*
Thank you for your suggestion. We have provided the visualization by t-SNE on CIFAR-100 in the global rebuttal file. As shown, it can be observed that compared to training the model using only coarse labels, our method leads to a more concentrated distribution of sample points for each category, while also accentuating the distinctions between different classes. The visualization validates the effectiveness of our method of contributing to the discriminative power from the qualitative aspect. We will also add it in the final version.
### Comment_2:
> *Broader discussion*
Thanks for this suggestion. Our proposed approach, rooted in the principles of hyperbolic embeddings and hierarchical cosine margins, is intrinsically designed to capture hierarchical relationships. Such hierarchies are ubiquitous in nature and span various tasks beyond fine-grained recognition, encompassing a broad spectrum of datasets and domains.
While this study focuses on specific benchmarks, we acknowledge the reviewer's point regarding the potential applicability of our method to a diverse set of tasks influenced by natural hierarchies. In our future work, we intend to explore the versatility of our method in tasks characterized by hierarchical structures. By expanding our experimentation horizon, we will delve into the broader potentials and possible limitations of our approach.
We genuinely appreciate your feedback and believe that broadening the applicability of our method in subsequent research will shed light on its capabilities in handling tasks with inherent hierarchical relationships, thereby increasing its impact and significance in the broader scientific community.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. I would also maintain my initial score of accept. | Summary: This paper addresses the challenge of learning fine-grained embeddings from coarse labels, where detailed distinctions required for fine-grained tasks are often lacking. The authors propose a novel method that embeds visual embeddings into a hyperbolic space and enhances their discriminative ability using a hierarchical cosine margins approach. The hyperbolic space offers advantages such as capturing hierarchical relationships and increased expressive power, making it suitable for modeling fine-grained objects. By enforcing relatively large/small similarity margins between coarse/fine classes in the hyperbolic space, the proposed hierarchical cosine margins approach is introduced. While enforcing similarity margins in the regular Euclidean space is popular for deep embedding learning, extending it to the hyperbolic space is non-trivial and valuable for coarse-to-fine generalization. Experiments are conducted on five benchmark datasets.
Strengths: 1. Novel Method: The paper introduces a novel method, Poincare embedding with hierarchical cosine margins (PE-HCM), which addresses the challenging task of fine-grained learning from coarse-grained labels. This innovative approach bridges the gap between coarse-grained and fine-grained labels, enabling the transfer of knowledge from coarse-grained categories to fine-grained recognition tasks.
2. Technical Contribution: The proposed method incorporates key technical contributions. It utilizes the hyperbolic space to capture hierarchical relationships and provide increased expressive power. Additionally, the hierarchical cosine margins enforce proximity relationships among sample pairs, enabling the model to learn fine-grained features and distinctions. The adaptive strategy for updating target distances further enhances the model's ability to capture underlying relationships and adapt to the fine-grained characteristics of the data.
3. Paper Writing: The paper is well-written and effectively communicates the motivations, challenges, and proposed solutions. The introduction provides a clear background and problem statement, leading to a comprehensive description of the proposed method. The technical details, including the use of hyperbolic space and hierarchical cosine margins, are explained concisely, making the paper accessible to readers.
4. Experimental Performance: The proposed method is thoroughly evaluated on five benchmark datasets, demonstrating its superior performance compared to competing solutions. By achieving state-of-the-art recognition accuracy, the experimental results highlight the effectiveness and potential practical applications of the proposed method in fine-grained recognition tasks.
Weaknesses: 1. Limited Analysis of Failure Cases: The paper could include a discussion of the failure cases of the proposed method. Analyzing scenarios where the method does not perform well would provide insights into potential weaknesses and opportunities for future improvements.
2. Minor Issues: There are several typos throughout this paper, e.g., in Table 1, Coarase should be Coarse.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am curious about how does the proposed method perform when the available coarse-grained labels are noisy or contain inaccuracies?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Thank you for the positive comments. Below please find our responses to some specific comments.
### Comment_1:
> *Limited analysis of failure cases*
Thanks for this suggestion. We fully concur that a deep dive into the scenarios where our model underperforms can provide valuable insights and pave the way for future refinements.
During our experiments, we observed a typical failure case: when classes at the same level of the hierarchy have very different scopes (e.g., some classes cover a very large scope while others cover a very small one), our method tends to achieve relatively low results. We infer this is because our method uses the same margin for every class at a given level. We will add this discussion to the manuscript. We also believe that such an investigation will help guide our future research.
### Comment_2:
> *Several typos*
Thank you for pointing out the typographical errors. We will meticulously review the manuscript to correct these typos, including the one you've highlighted in Table 1, and ensure the clarity and professionalism of the revised submission.
### Comment_3:
> *How does the proposed method perform when the available coarse-grained labels are noisy or contain inaccuracies*
Thank you for this good question. Adding noise to the coarse-grained labels considerably compromises the accuracy. During the rebuttal, we randomly added 20% label noise to the ground-truth labels on the CIFAR-100 dataset and observed a decrease of 10.28% in 5-way accuracy. Investigating the label-noise problem is of practical value and may point to a new direction for our future work.
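For concreteness, a symmetric label-corruption protocol like the one described above can be sketched as follows. This is a minimal numpy illustration under our own assumptions (the function name and toy data are hypothetical, not taken from the paper):

```python
import numpy as np

def corrupt_labels(labels, noise_rate, num_classes, seed=0):
    """Randomly reassign a fraction of labels to a *different* class (symmetric noise)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_noisy = int(round(noise_rate * len(labels)))
    # Choose which samples to corrupt, without replacement.
    noisy_idx = rng.choice(len(labels), size=n_noisy, replace=False)
    for i in noisy_idx:
        # Draw a replacement label uniformly from the other classes.
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

# Toy coarse labels: 20 classes, 50 samples each.
clean = np.repeat(np.arange(20), 50)
noisy = corrupt_labels(clean, noise_rate=0.20, num_classes=20)
```

Because every corrupted sample is forced onto a different class, the realized corruption rate equals the requested `noise_rate` exactly.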
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I would maintain my initial score of accept. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments. We provide point-by-point responses to each reviewer, as well as a supplementary PDF with some visualization results.
Pdf: /pdf/5ee7862cbc57d4c6968ec57cd5a690899734a3aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
UP-DP: Unsupervised Prompt Learning for Data Pre-Selection with Vision-Language Models | Accept (poster) | Summary: This paper investigates the task of data pre-selection by learning a better representation from the joint feature space of both vision and text in an unsupervised manner. The paper focuses on training text prompts to extract joint features with enhanced representation, specifically with the BLIP-2 parameters kept fixed. The aim is to achieve a diverse cluster structure that encompasses the entire dataset.
Strengths: **[New task]** This paper tackles data pre-selection for labelling without accessing the information of downstream tasks, which is quite new to the community.
**[Well-illustrated figures]** The figures shown in this paper are clear enough for better understanding.
**[Good presentation]** The paper is well-written and easy to follow.
Weaknesses: **[Unconvincing illustration]** In Figure 1, BLIP-2 is pre-trained with prompt and image together. This explains why using only image features yields poor performance. Consequently, the evidence presented does not convincingly demonstrate the superiority of multimodal features.
**[Need in-depth analysis]** (i) It is unclear why the self-trained model can be used for sample selection. (ii) The motivation of medoid selection is not given. It would be nice to see the rationale.
**[Missed ablation studies]** The ablation study for two hyperparameters are not given.
**[Disorganized reference format]** Please reformat the references as per some published papers.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer X1Uz,
We would like to thank you for your valuable comments. Below is our response to your questions:
Q1: “BLIP-2 is pre-trained with prompt and image together. This explains why using only image features yields poor performance. Consequently, the evidence presented does not convincingly demonstrate the superiority of multimodal features.”
A1: You're correct in noting that BLIP-2 is pre-trained using both prompts and images, and I appreciate your concern regarding the demonstration of the superiority of multimodal features. The results presented in Figure 1d and Tables 1 & 2 already illustrate the superiority of using multimodal features.
Quantitatively, Table 1 demonstrates that our multimodal approach ("Ours") surpasses the use of Image features (“USL-I”) in data pre-selection tasks. Table 2's KNN classification results further emphasize the superior quality of multimodal features. Specifically, utilizing Flowers102_Prompt for multimodal extraction outperforms image features by an impressive 10% in absolute value. Moreover, Figure 1d visually reinforces this advantage. It clearly displays how multimodal features offer a more distinct and scattered distribution compared to image features. This distinction significantly enhances our ability to differentiate between classes, ultimately improving the task of data preselection and KNN classification.
Additionally, our method not only offers improved multimodal features but also provides an enhanced joint sampling strategy. These improvements collectively enhance the performance of the data pre-selection task, as demonstrated in Table A in the rebuttal.
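As a side note, the KNN classification used above to probe feature quality can be sketched generically as follows. This is a plain numpy illustration of majority-vote KNN on frozen features (the toy data and function name are ours, not from the paper):

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k=5):
    """Majority-vote k-nearest-neighbour classification on frozen features."""
    # Pairwise Euclidean distances, shape (n_test, n_train).
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k closest train points
    votes = train_y[nearest]                 # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy check on two well-separated clusters.
train_x = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0)])
train_y = np.array([0] * 10 + [1] * 10)
test_x = np.array([[0.1, 0.1], [9.9, 9.9]])
preds = knn_predict(train_x, train_y, test_x, k=5)
```

With better-separated features (as in our multimodal setting), the neighbour votes become more consistent, which is exactly what the KNN probe measures.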
Q2: “It is unclear why the self-trained model can be used for sample selection. The motivation of medoid selection is not given. It would be nice to see the rationale.”
A2: As demonstrated in recent literature [1][2], the first step in data selection is to obtain lower-dimensional and semantically meaningful features, which are usually acquired by models [3][4][5] using self-supervised learning. Consequently, self-trained models play a pivotal role in the data selection task.
In our approach, as shown in Table 1, certain datasets (e.g. EuroSAT) prove to be effectively out-of-distribution (OOD) for BLIP-2, resulting in random guessing during zero-shot performance. To address this, we employ contrastive learning techniques to fine-tune BLIP-2's feature representation specifically for these previously unseen datasets. The enhanced feature representation subsequently leads to more accurate clustering, facilitating the identification of representative instances for the entire dataset.
The concept of utilizing a medoid (or centroid) to identify a representative sample within a cluster is a widely adopted practice [6][7]. This approach is recognized for its effectiveness in selecting a sample that lies at the center of the cluster, ensuring a faithful representation. Furthermore, as highlighted in Table A, we integrate the output of the cluster head to directly sample the representative instance from each cluster, thereby adding an additional dimension to our approach. Further details can be found in the "global" response to all reviewers.
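For illustration, the medoid-selection step described above can be sketched as follows. This is a minimal numpy version that assumes cluster assignments are already available (the feature matrix, labels, and function name are hypothetical stand-ins, not the paper's implementation):

```python
import numpy as np

def select_medoids(features, labels):
    """For each cluster, return the index of the real sample closest to its centroid."""
    medoids = {}
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        centroid = features[idx].mean(axis=0)
        # The medoid is the actual cluster member nearest the centroid.
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        medoids[int(k)] = int(idx[np.argmin(dists)])
    return medoids

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))      # stand-in for extracted multimodal features
labels = rng.integers(0, 5, size=200)      # stand-in cluster assignments
medoids = select_medoids(features, labels)
```

Unlike the raw centroid, the medoid is guaranteed to be an actual dataset instance, so it can be sent directly for labeling.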
Q3: “The ablation study for two hyperparameters are not given.”
A3: We appreciate your observation regarding the concern about the ablation study for the two hyperparameters. We recognize this aspect and have addressed it by conducting the ablation study (Table D). Further details can be found in our response to reviewer ZwSf (A4).
[1] Wang, Xudong, Long Lian, and Stella X. Yu. "Unsupervised selective labeling for more effective semi-supervised learning." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[2] Xia, Xiaobo, et al. "Moderate coreset: A universal method of data selection for real-world data-efficient deep learning." The Eleventh International Conference on Learning Representations. 2022.
[3] Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: International conference on machine learning. pp. 1597–1607. PMLR (2020)
[4] He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9729–9738 (2020)
[5] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. "Representation learning with contrastive predictive coding." arXiv preprint arXiv:1807.03748 (2018).
[6] Max Welling. Herding dynamical weights to learn. In ICML, pp. 1121–1128, 2009.
[7] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." Advances in Neural Information Processing Systems 35 (2022): 19523-19536.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. It has solved most of my concerns so I decide to raise my score. | Summary: The paper addresses data pre-selection (akin to active learning) problem using the highly successful vision-language models (VLMs) of late. In relation to existing approaches, the proposed approach has a few advantages, e.g., no need to have a small initial set of labeled data, no need to have multiple rounds of selection, labeling and retraining, once selected the data can be used for multiple future, unknown downstream tasks etc. The authors start with a BLIP-2 model and an unlabeled set of data $D$. The BLIP-2 model is extended to have learnable context/prompt and a few MLPs, namely instance-level and cluster-level heads. The instance-level head is employed to produce a contrastive training between two views of each unlabeled instance. Cluster-level head, on the other hand, first assigns cluster memberships to the instances and then helps to train the model with a cluster-level contrastive loss. After training for a few epochs using these two losses, cluster-level MLP along with the learned contexts/prompts are used to get the cluster assignment of unlabeled data (which can come from different downstream dataset). Finally, the medoids from each cluster form the representative selections for active learning. Experiments performed on linear probing and domain generalization show the efficacy of the proposed approach over the state-of-the-arts on benchmark datasets.
Strengths: 1. The use of VLMs for unsupervised active learning is appreciable. VLMs nowadays are known for good zero-shot transfer, and the already well-learned representations can and did help the active-learning cause.
2. The use of learnable contexts/prompts has been shown to be useful for few-shot transfer. The use of these for active learning is interesting.
3. The use of cluster-level contrastive loss cleverly avoids the use of any initial labeled set that is required in traditional cluster level losses in getting the initial clustering (akin to group contrastive loss in semisupervised literature e.g., [a]).
4. Experimental analysis and ablations show the efficacy of the proposed approach compared to sota approaches and the importance of different components of the approach as well.
[a] Singh et al., Semi-supervised action recognition with temporal contrastive learning, CVPR 2021.
Weaknesses: 1. One important ablation that could be useful is running the approach without the learnable prompts/contexts. What I mean is updating only $g_I$ and $g_C$ but not employing $V$ in Algorithm 1. This will help gauge the importance of the contexts/prompts vis-à-vis the instance and cluster level MLPs. Does ‘Initi_Prompt’ row in Table 2 do this?
2. Line 260+: This is more of a clarification query. When the datasets are described, I don’t see any mention of which dataset is used for pre selection. I am assuming these 7 datasets are downstream task datasets. The question is coming from Table 1. While in Table 2, it seems that the first column tells what is the dataset on which the prompts are learnt, in table 1, it is not clear. Is it that the learning is done on the same datasets on which the linear probing performances are shown for Table 1?
3. Line274+: I am not getting what is meant by 'with the learned prompts' in the baseline using USL. Does it mean everything else here is same as BLIP-2, but in addition a few prompts are learned also?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions are already asked above (in the ‘Weaknesses’ section). Those are mostly queries for further clarification. Here in addition, let me list a few presentation related issues (typos mainly).
- Line 32, 78, Figure 1 caption: Will these be ‘data efficient learning’ instead of ‘data efficiency learning’?
- Line 231: ‘map’ -> ‘maps’
- Line 244: ‘optimizing’ -> ‘optimization’
- Line 246: ‘combine’ -> ‘combination’
- In Figure 2a, which feature extractor is used? Is it anything different from BLIP-2?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are described well in the paper. At the same time, the authors tried to address the limitation in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer uJMt,
We would like to thank you for your valuable comments. Below is our response to your questions:
Q1: “One important ablation that could be useful is running the approach without the learnable prompts/contexts. What I mean is updating only $g_I$ and $g_C$ but not employing $V$ in Algorithm 1. This will help gauge the importance of the contexts/prompts vis-à-vis the instance and cluster level MLPs. Does ‘Initi_Prompt’ row in Table 2 do this?”
A1: Thank you for the insightful suggestion to run an ablation study without the learnable prompts/contexts. We have conducted the suggested study, and the results are presented in Table F. These findings clearly demonstrate the necessity of training the prompts. The motivation behind training the prompts/contexts is to enable the model to extract features more suited to specific datasets. If we only train g_I and g_C without adjusting the prompts/contexts, the image features would remain static, potentially slowing convergence and lowering performance.
Q2: “Line 260+: This is more of a clarification query. When the datasets are described, I don’t see any mention of which dataset is used for pre selection. I am assuming these 7 datasets are downstream task datasets. The question is coming from Table 1. While in Table 2, it seems that the first column tells what is the dataset on which the prompts are learnt, in table 1, it is not clear. Is it that the learning is done on the same datasets on which the linear probing performances are shown for Table 1?”
A2: Thank you for bringing up this clarification query. Your assumption is correct. In Table 1, the same dataset is used both for pre-selection and evaluation in the linear probing experiments. Let me explain the process using the DTD dataset as an example: We first train a prompt for BLIP-2 using images from the DTD dataset. This trained prompt is then used to extract features and conduct data pre-selection within the same DTD dataset. Finally, we perform a linear probe to evaluate the quality of the selected instances within the DTD dataset itself. Table 2 serves a different purpose, as it evaluates the generalization of our method by using prompts learned from one dataset to make pre-selections in another. For instance, the intersection of Caltech101_Prompt and DTD means that we use the prompt learned from the Caltech101 dataset to extract features from the DTD dataset and select instances within DTD.
Q3: “Line274+: I am not getting what is meant by 'with the learned prompts' in the baseline using USL. Does it mean everything else here is same as BLIP-2, but in addition a few prompts are learned also?”
A3: To perform the USL method, it requires image features. USL-I uses the image features extracted directly from the BLIP-2 model's image part without any additional training. USL-M uses multimodal features that have been learned through our proposed methods.
---
Rebuttal Comment 1.1:
Title: Post rebuttal comments
Comment: Thanks, authors, for the detailed response. I had a careful read of the responses to my concerns as well as to my fellow reviewers’. My clarification queries as well as the request for the ablation were addressed very well. The responses to my fellow reviewers’ queries are also apt, e.g., the new experiments on additional tasks, additional backbones, or linear probing results. I was already positive about the work and see no particular reason to change.
Strengths: I believe the strength of the paper is as follows:
- The integration of vision and language models for data pre-selection holds promise due to the added benefits of leveraging text modality.
- This paper introduces a novel approach that promotes diverse and clustered representations, addressing limitations in the current state-of-the-art BLIP2 model through the lens of prompt learning.
- The proposed method is lightweight and more efficient in terms of training costs. The paper makes a significant contribution, evident in its clear presentation and compelling results.
Weaknesses: While the paper's results are strong, there are two notable aspects that could be addressed:
1. Missing CLIP baseline: Comparing the proposed approach with a CLIP baseline would provide a fundamental point of reference. Since the authors employ contrastive loss functions and prompt learning, which can be applied to any vision and language model, including CLIP as a baseline would enhance the comparative analysis.
2. Lack of integrability: A limitation of the prompt tuning method is the lack of interpretability. While prompt tuning improves model performance, it does not provide a clear explanation of why and how the model works in the combined language model embedding space. Addressing this limitation would enhance the understanding and justification of the proposed approach.
3. Performance/training-time trade-off: The paper utilizes the best-performing model of BLIPV2, which has over 7 billion parameters. However, it does not explore the performance and training-time trade-off with different BLIPV2 models. Investigating the performance of the proposed approach on BLIPV2 models with fewer parameters would provide insights into its scalability and suitability for models of varying sizes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Following questions address the weaknesses and limitations mentioned earlier. Asking the authors these questions could help improve the paper and provide a more comprehensive understanding of the proposed approach:
1. Could you compare your proposed method when built on top of CLIP as well? Including a comparison with CLIP as a baseline would provide valuable insights into the effectiveness of your approach and its advantages over a widely used vision and language model.
2. Could you provide some visualizations, such as attention weight visualizations, to explain why the trained prompts lead to better data preselection and generalization? Visualizations would enhance the interpretability of your method and shed light on the mechanisms behind its improved performance.
3. Have you replicated your proposed method on other variants of the BLIPv2 model? It would be insightful to see how your approach's performance changes when applied to BLIPv2 models with varying levels of expressiveness, particularly when the model size decreases. This analysis would help assess the scalability and adaptability of your method to different model configurations.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are not any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer soCe,
We would like to thank you for your valuable comments. Below is our response to your questions:
Q1: “Could you compare your proposed method when built on top of CLIP as well? Including a comparison with CLIP as a baseline would provide valuable insights into the effectiveness of your approach and its advantages over a widely used vision and language model."
A1: Thank you for the suggestion to include a comparison with CLIP as a baseline. The decision not to utilize CLIP for data pre-selection in our study stems from the specific characteristics of CLIP's interaction between vision and language components, as outlined in our response to reviewer ZwSf (A5).
Q2: “Could you provide some visualizations, such as attention weight visualizations, to explain why the trained prompts lead to better data preselection and generalization? Visualizations would enhance the interpretability of your method and shed light on the mechanisms behind its improved performance.”
A2: Thank you for your insightful recommendation. In fact, we have already taken steps in this direction by providing visualizations and analyses of the learned prompts, as detailed in Appendix A, Table 3. We observed that certain learned words like "wood" in the EuroSAT dataset (related to the forest class), specific numbers in FGVCAircraft (possibly representing aircraft codes), and words such as "plane" and "butterfly" in different contexts demonstrate relevance to their respective tasks. However, many of the learned words lack a coherent connection to the tasks, leading us to hypothesize that the encoded meanings may extend beyond the existing vocabulary's scope. Interestingly, we also found shared words across different datasets, including "learn," "saint," "add," and "attend." These commonalities could explain the substantial generalizability of the learned prompts across various tasks.
Q3: “Have you replicated your proposed method on other variants of the BLIPv2 model? It would be insightful to see how your approach's performance changes when applied to BLIPv2 models with varying levels of expressiveness, particularly when the model size decreases. This analysis would help assess the scalability and adaptability of your method to different model configurations”
A3: Thank you for the valuable suggestion. We conducted additional experiments with different versions of BLIP-2 (v1 built with ViT-L and v2 built with ViT-G). The results are presented in Tables C and D. Our method outperforms other baselines using both versions, with the larger BLIP-2 v2 (with ViT-G) demonstrating the best performance.
Even though BLIP-2 boasts over 7 billion parameters, our method exclusively trains a few specific components: the prompt parameters (4 vectors), the instance head, and the cluster head. Consequently, the number of trainable parameters remains below 0.25 million, requiring less than 60 minutes of training on a single GPU (RTX 3090). Changing the BLIP-2 variant does not alter the trainable parameters, ensuring straightforward scalability.
Here are more details regarding the variants of BLIP-2. It comprises two variants for the image encoder component: ViT-G and ViT-L, as well as four variants for the large language models side: OPT_2.7B, OPT_6.7B, FlanT5_XL, and FlanT5_XXL. For our data pre-selection task, we only require the Image encoder component without the language generation part, which narrows our options down to BLIP-2 with ViT-G and ViT-L.
---
Rebuttal Comment 1.1:
Title: Post rebuttal comments
Comment: Thanks, authors for their detailed clarifications and for providing the experiment I requested. I have also reviewed the comments made by other reviewers and the authors' responses to them. The authors' rebuttal effectively addressed my concerns, so I will not be lowering my score. | Summary: The paper studies the problem of data pre-selection, which aims to select instances for labeling from an unlabeled dataset to enhance performance for downstream tasks with a limited annotation budget. The authors suggest that combining visual and textual features in a joint space can result in a better representation for data pre-selection. They introduce UP-DP, an unsupervised prompt learning approach that adapts vision-language models(specifically BLIP-2) for data pre-selection. The proposed approach outperforms the state-of-the-art on benchmark datasets and exhibits generalizability.
Strengths: 1. The paper introduces a novel approach for data pre-selection that incorporates unsupervised prompt learning in the vision-language model. This approach effectively exploits the multimodal features and enhances the discrimination among classes.
2. The authors provide a clear motivation for the task of data pre-selection and highlight the unique challenges it poses compared to semi-supervised learning and active learning.
3. The paper compares with the state-of-the-art on multiple benchmark datasets, demonstrating its effectiveness and superior performance. Also, it highlights the generalizability of the learned prompts across different datasets.
Weaknesses: 1. The authors claim that the purpose of the data pre-selection is to optimize performance for undefined diverse downstream tasks; however, they only conduct experiments on the image classification task. This limited scope is insufficient to demonstrate the effectiveness of UP-DP for various downstream tasks such as detection or segmentation. The paper would benefit from conducting more experiments on other tasks. And on top of this, it would be interesting to investigate the generalizability of the learned prompts across different tasks.
2. In Table 1, the "Zero-Shot BLIP-2" setting is unreasonable. It lacks a justifiable rationale to use the prompt for the CLIP model to evaluate the zero-shot performance of the BLIP-2 model. If the authors intend to use this baseline, they should train the learnable prompt for BLIP-2 from scratch.
3. In Table 1, the authors do not include a baseline that utilizes only image features extracted from BLIP-2. By comparing with this baseline, the authors could demonstrate the efficiency of the proposed method.
4. In Table 1, the authors compare "Random", "USL-I/M", but it is suggested to compare with more approaches in the field. Including additional comparisons would strengthen the paper's evaluation and provide a better context for understanding the performance of the proposed approach.
5. The ablation studies of the proposed method are limited. It would be beneficial to conduct more comprehensive experiments to analyze the performance of UP-DP under different settings. For instance, the authors should provide a detailed analysis of the impact of the instance-level and cluster-level contrastive loss.
6. All the experiments are carried out using the BLIP-2 model during the data pre-selection stage. However, it remains unclear whether the proposed method is exclusively effective for this particular model. It is important to consider other visual-language models, such as CLIP, to establish the efficacy of the approach.
7. The paper contains several grammar issues. For example, on page 5, line 191, it states "presents an efficient pre-training," and on page 6, line 237, it states "Thus we can from a positive pair." These sentences require revision for improved clarity and grammatical correctness.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: 1. Is it fair to use the prompt for CLIP model to assess the zero-shot performance of the BLIP-2 model? It's supposed to train the prompt for BLIP-2 from scratch.
2. Please demonstrate that UP-DP is valid and effective for other tasks like detection and segmentation.
3. Table 1 requires revision and additional information. It should incorporate a baseline that utilizes only image features extracted from BLIP-2. Also, it is beneficial to compare with more approaches in the field.
4. Please include more ablation studies. At least, demonstrate the impact of the instance-level and cluster-level contrastive loss.
5. During the data pre-selection stage, all the features are extracted from BLIP-2 model. However, it remains unclear whether the method is exclusively effective for this specific model. It is recommended that the authors demonstrate the effectiveness of using other vision and language models as well.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors need to analyze the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer ZwSf,
We would like to thank you for your valuable comments. Below is our response to your questions:
Q1: “Is it fair to use the prompt for CLIP model to assess the zero-shot performance of the BLIP-2 model? It's supposed to train the prompt for BLIP-2 from scratch.” [Question context: “It lacks a justifiable rationale to use the prompt for the CLIP model to evaluate the zero-shot performance of the BLIP-2 model. If the authors intend to use this baseline, they should train the learnable prompt for BLIP-2 from scratch.”]
A1: As discussed between lines 280 to 289, our primary goal isn't to benchmark against BLIP-2's zero-shot performance. Instead, we aim to emphasize the necessity of adapting BLIP-2 for data pre-selection. This is especially relevant for certain datasets like EuroSAT, which is an Out-Of-Distribution (OOD) dataset for BLIP-2, leading it to make random classifications under the zero-shot settings.
Using the prompt designed for CLIP with BLIP-2 is suitable since these prompts aren’t model-specific but dataset-specific, and are crafted by human experts for general zero-shot classification. For example, as shown in Appendix C, Table 6, the prompt for the OxfordPets dataset is “a photo of a [CLASS], a type of pet.”
Q2: “Please demonstrate that UP-DP is valid and effective for other tasks like detection and segmentation.”
A2: Thank you for your insightful suggestion. In response, we have conducted additional experiments specifically for the segmentation task utilizing the PASCAL VOC dataset. As demonstrated in Table B, our UP-DP method indeed outperforms the other baselines in semantic segmentation tasks. More details can be found in the "global" response to all reviewers.
Q3: “Table 1 requires revision and additional information. It should incorporate a baseline that utilizes only image features extracted from BLIP-2. Also, it is beneficial to compare with more approaches in the field.”
A3: Thank you for the suggestions. USL-I in Table 1 is the baseline that uses only image features extracted from BLIP-2; the details are described in lines 276-279 and 307-308. This comparison shows that our method outperforms the method using only image features.
We certainly recognize the importance of comprehensive comparisons with various approaches. However, as elaborated in the Related Works section, other data-efficiency approaches such as active learning-based methods don't align with the specific requirements of data selection in our context, as they usually necessitate an initial labeled set and predefined downstream models. To the best of our understanding, USL is the most relevant approach for comparison in this study.
Q4: “Please include more ablation studies. At least, demonstrate the impact of the instance-level and cluster-level contrastive loss”
A4: We greatly appreciate your request for additional ablation studies, particularly regarding the impact of both instance-level and cluster-level contrastive losses. Following your valuable suggestion, we are currently conducting more in-depth ablation studies that delve into the following key aspects:
1. Different Sampling Strategies (Table A)
2. Different Versions of BLIP-2 (Table C)
3. Balance and Influence of Loss Weights (Table D)
4. Impact of Varied Annotation Budgets (Table E)
5. BLIP-2 Training with and without Prompt (Table F)
6. Difference Length of Learned Prompt (Table 4 in Appendix)
As illustrated in Table D, we experimented with a range of weight ratios, spanning from 1:3 to 3:1, for the two distinct losses within different versions of BLIP-2. The consistency of the results highlights the significance of both loss components during training. Moreover, your suggestion proved fruitful: employing a 3:1 ratio instead of a 1:1 ratio, as showcased in Table D, demonstrated a further performance enhancement for our approach.
Q5: “During the data pre-selection stage, all the features are extracted from BLIP-2 model. However, it remains unclear whether the method is exclusively effective for this specific model. It is recommended that the authors demonstrate the effectiveness of using other vision and language models as well.”
A5: We appreciate the recommendation to explore the effectiveness of our method with various vision and language models. The primary reason for utilizing BLIP-2 is its strong interaction between vision and language parts at a fine-grained level (namely, the tokens of image patches, and text). Through slight training adjustments to the prompt, we were able to efficiently enhance the extracted features, thereby making BLIP-2 a suitable model for data pre-selection. However, regarding “old” vision-language (V-L) models like CLIP, the interaction is limited to a cosine similarity score between the embeddings of the whole image and the full text, and this prevents us from extracting multi-modal features with fine-grained interaction between image patches and text tokens. Although we would like to extend our method to more advanced V-L models beyond BLIP-2, it currently represents the most recent and suitable V-L model. | Rebuttal 1:
Rebuttal: We extend our gratitude to all the reviewers for their meticulous comments and constructive suggestions. We are heartened by the reviewers' keen interest in our work and their recognition of its novelty, both in terms of the task and methodology.
We wish to emphasize the significance of our approach. Our proposed method not only enhances the feature representation of the dataset but also trains a cluster head capable of jointly selecting the most representative instances while encompassing the entire dataset for data pre-selection. We substantiate this through our supplementary experiment, Table A, included in the attached one-page PDF.
Appreciating the reviewers' input, we achieved further performance gains through the requested ablation studies. By exploring various sampling strategies and fine-tuning hyperparameters for the two losses, we elevated our method's performance, as evidenced in Table A and D.
Moreover, in response to reviewers' suggestions, we have conducted an experiment on the semantic segmentation task utilizing the Pascal VOC dataset [1]. We have also added more ablation studies for the classification task. These experiment results are available in the attached one-page PDF, organized as follows:
1. UP-DP Sampling Variant with Learned Heads (Table A), as requested by Reviewer X1Uz.
2. Semantic Segmentation Task Experiment (Table B), as requested by Reviewer ZwSf.
3. Impact of Different Versions of BLIP-2 (Table C), as requested by Reviewer soCe.
4. Linear Probe Result with BLIP-2 (Tables C-F), as requested by Reviewer ucTW.
5. Impact of Weights between Instance-level and Cluster-level Loss (Table D), as requested by Reviewers ZwSf and X1Uz.
6. Impact of Annotation Budget (Table E), as requested by Reviewer ucTW.
7. Impact of Training without Prompt (Table F), as requested by Reviewer uJMt.
Here is additional information about these experiments:
Table A: Beyond employing the multimodal features from 'f' to select medoids after unsupervised prompt learning, we explored multiple variants. Table A reveals that 'g_I' utilizes the projected feature from the instance head to identify medoids, while 'g_C' employs the cluster probability predicted by the cluster-level head. The latter approach selects the instances with the highest confidence score for each cluster, resulting in a significant performance enhancement.
Table B: We employed the Pascal VOC dataset for the semantic segmentation task, using variants of the latest DINOv2 [2] as the segmentation models. The total number of training instances was 1454, and we set the annotation budget at 100. Table B demonstrates our method's consistent superiority over the other baselines in segmentation tasks.
Tables C-F: To accommodate time constraints, we chose the relatively challenging DTD dataset (which has the fewest instances) for our ablation study. The annotation budget for Tables C, D, and F was set at 100, with a sampling strategy utilizing the distribution peak from the cluster output by 'g_C', as previously mentioned.
[1] Everingham, Mark, et al. "The pascal visual object classes challenge: A retrospective." International journal of computer vision 111 (2015): 98-136.
[2] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." arXiv preprint arXiv:2304.07193 (2023).
Pdf: /pdf/324515802defd3bcf8154ec01f23a70fd0f0d5d7.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper presents a novel method to perform data preselection for the task of image classification. Data preselection refers to the task of finding the images to annotate with labels and then use for training. The paper builds around the powerful visual-language model BLIP, and proposes learnable prompts as inputs to BLIP to help perform unsupervised clustering for data preselection. The paper then presents results on seven image classification benchmarks, showing the superb performance of the proposed method.
Strengths: + The presented method is a novel application of visual language model to data preselection.
+ I find the proposed learnable prompting in conjunction with unsupervised clustering novel, and as suggested by experiment effective.
+ The presented method is effective in data preselection, as demonstrated by the comparison against baseline methods.
Weaknesses: - The decision to annotate 200 images per benchmark (LINE 265 - 273) seems arbitrary. Why this number? It would be great if the number can be varied and then plot the model performance accordingly to understand the effect of annotated data set size on model performance.
- USL-M, which shares the same multimodal features from BLIP-2 as the proposed UP-DP method, isn't really outperforming the baseline USL-L on EuroSAT (Table 1). Such a result contradicts the claimed effectiveness. What is the explanation?
- No results on linear probe on BLIP-2 using Random Sampling. This is needed in order to showcase the effectiveness of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer ucTW,
We would like to thank you for your valuable comments. Below is our response to your questions:
Q1: “The decision to annotate 200 images per benchmark (LINE 265 - 273) seems arbitrary. Why this number? It would be great if the number can be varied and then plot the model performance accordingly to understand the effect of annotated data set size on model performance.”
A1: Thanks for your suggestions. We included additional results for annotation budgets from 50 to 300. As demonstrated in Table E, our method consistently outperforms other baselines regardless of the annotation budget size, thereby underscoring the robustness and effectiveness of our approach.
The reason for uniformly setting it at 200 images for most datasets is that, within the context of data pre-selection, we lack knowledge about the downstream task (e.g., prediction categories). Therefore, our sole controllable factor is the annotation budget. By considering varying class numbers and dataset sizes, this uniform budget spans a wide range of downstream task difficulties, with an average of 2 to 5 images per class.
Q2: “USL-M, which shares the same multimodal features from BLIP-2 as the proposed UP-DP method, isn't really outperforming the baseline USL-L on EuroSAT (Table 1). Such a result contradicts the claimed effectiveness. What is the explanation?”
A2: Thanks for noting the USL-M and USL-L performance on EuroSAT in Table 1. In some instances, USL-L slightly surpasses USL-M, but given the high standard deviation of USL, the difference is not statistically significant. For instance, in the case of ViTG-14, the p-value is 0.77. Furthermore, when we examine the average performance across all seven datasets, USL-M consistently outperforms USL-L across five different architectures.
Q3: “No results on linear probe on BLIP-2 using Random Sampling. This is needed in order to showcase the effectiveness of the proposed method.”
A3: Thank you for the suggestions. We've incorporated additional experiments that provide the performance of the linear probe on BLIP-2 (refer to BLIP-2 ViTL and BLIP-2 ViTG in Table C-F). These experiments illustrate that our approach still surpasses other baselines. It's important to note that using a linear probe on BLIP-2 contradicts the premise of data preselection, as we cannot assume that the downstream task uses the same model as the one employed in data pre-selection. This consideration is the primary reason behind our decision to utilize a linear probe on CLIP rather than on BLIP-2 in the main paper. | null | null | null | null | null | null |
Regret Minimization via Saddle Point Optimization | Accept (poster) | Summary: This paper focuses on regret minimization in sequential decision-making under uncertainty. It introduces the average-constrained decision-estimation coefficient, a saddle-point objective that characterizes the worst-case regret, enabling optimization of the information trade-off directly by the algorithm. Moreover, the paper presents a version of the Estimation-To-Decisions (E2D) algorithm (ANYTIME-E2D), with practical implementation details, improved bounds for high-dimensional linear bandits, and the first empirical results of the E2D algorithm.
Strengths: The paper offers several results. The regret results are new since they are based on (3).
Weaknesses: It is unclear how innovative the paper is. The bulk of the mathematics in the paper is drawn from existing prior results. Many mathematical derivations rely on standard Lagrangian arguments (equations (3)-(7) and the proof of Lemma 1).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: What is the importance and impact of the new algorithm? Why is it better, more effective?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. We provide clarifications on the contributions and impact below.
**Importance and impact of the new algorithm:**
- Anytime algorithms are of great importance in practice as the horizon is often not known in advance. A theoretically sound anytime version of the E2D algorithm has not been proposed before as far as we know.
- The anytime analysis and bounds lead to improved regret in linear bandit models in the regime $n < d^2$ (resp. up to $n < d^4$ in linear feedback models), which is relevant in high-dimensional and kernel bandits - we are not aware of a similar result in the literature (Remark 2). Further note that the E2D algorithm by Foster et al (2021) uses a fixed trade-off parameter and therefore cannot achieve the optimal regret rate on different horizons simultaneously (and we are not aware of any other approach that achieves this)
- The algorithm differs from prior work in that the trade-off parameter is chosen for the current estimated model $\hat f_t$; the suggested choice by Foster et al (2021) is optimized for the worst case, which can lead to suboptimal performance, *as we demonstrate numerically in Appendix E*.
**Technical innovations of the paper:**
While our results build on the framework by Foster et al (2021; 2023) to which we give full credit, we make several important and practically relevant contributions:
- The anytime analysis and bounds are novel. The anytime upper bound deviates from the proof by Foster et al (2021) in a non-obvious way.
- We explicitly derive the E2D objective for linear bandits, avoiding a computational dependence on the (infinite) model space (Lemma 6) - this has not been stated before and is important to note in particular when comparing E2D to established approaches such as UCB.
- We prove bounds on the estimation error for regularized least squares in general linear feedback models (Theorem 2). We believe that it is important to highlight that a standard least squares estimator is feasible as an estimation oracle in the E2D framework.
- Lemmas 3 and 4 bound the DEC via the information ratio (IDS), the decoupling coefficient (DC), and the PAC-DEC with a direct and simple argument. A consequence of this is the improved bound for linear bandits (Lemma 5).
*To conclude, we strongly believe that we make several novel and valuable contributions to the E2D framework proposed by Foster et al (2021, 2023).*
We will update the paper to reflect these points better.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the answers to comments. I am staying with my score since I still believe there isn't much innovative in the paper. For example, the entire derivation of Lagrangian is standard. | Summary: The authors consider a framework to solve the bandit problem by means of the minimax problem, which has been rapidly developed in recent years.
Among them, the paper focuses on the decision-estimation coefficient (DEC).
The DEC was developed in a series of studies by Foster+ and is known to characterize the upper and lower bound of the worst-case regret in a general class of bandit problems.
The Estimation-to-Decisions (E2D) framework, an algorithm using DEC, determines action selection probabilities based on DEC.
However, the algorithm based on offset DEC, which is an existing DEC, is expected to have conservative performance because it needs to determine the tradeoff parameter based on the regret upper bound.
Furthermore, the recently developed constrained DEC does not require a tradeoff parameter, but has the problem that the strongly duality of a Lagrangian saddle point fails.
To solve these problems, the authors propose average-constrained DEC, a formulation in which the feasible region of the max player in the constrained DEC formulation is relaxed.
This eliminates the need to determine the tradeoff parameter based on the regret upper bound while maintaining the good properties of the optimization problem, resulting in an anytime algorithm.
Furthermore, by considering a variant of average-constrained DEC, the authors show that an improved rate can be obtained for high-dimensional linear bandits, and this is confirmed through actual numerical experiments.
Strengths: - Overall, the paper is written in a very clear manner. In particular, it clearly summarizes the characteristics of the DECs proposed so far.
- It clearly points out the problems of existing algorithms and presents simple solutions to solve them.
- The authors devise a specific procedure for obtaining a regret upper bound not only when the model is discrete, but also for linear bandits (possibly with side observations).
- While no numerical experimental evaluation of the DEC has been done so far, the effectiveness of the DEC is demonstrated through numerical experiments on high-dimensional linear bandits.
Weaknesses: - The abstract and introduction are slightly misleading, as they make it appear that the average-constrained DEC results can be used directly to obtain results for high-dimensional linear bandits.
- Although partial monitoring is mentioned in l120-125, the authors are actually solving the problem in a setting where rewards are also observed (or can be assumed to be included), as described in Example 1.2. The authors do not explain to what extent there is a gap with partial monitoring, and it is not clear to what extent the proposed algorithm is applicable to partial monitoring.
- As mentioned by the authors, there are other approaches to characterizing sequential decision problems by minimax problems, such as IDS and ExpByOpt. A brief explanation of why the focus is on DEC would be desirable.
Minor issues and typos:
- l31: $M_f$ should be $M_{f^*}$
- The meaning of "localization" was unclear; it would be better to briefly explain it.
- In footnote 3, $\max_{f \in co(\mathcal{M})}$ is $\max_{f \in \mathcal{F}}$?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - While the original DEC is formulated based on the Hellinger distance, this paper uses KL divergence for simplicity. What are the advantages and disadvantages that arise from these?
- (e.g.,l169) The meaning of telescope was unclear. Can you explain the meaning of this?
- (after l244) For the part of inequality (i), shouldn't the second term of max contains $\mathrm{dec}^{ac}_{\epsilon_t}(\hat{f}_t)$? (The order and the last equation are correct.)
- (l226) Why is the assumption "$\max_f \{ \epsilon^2 \mathrm{dec}^{ac}_{\epsilon} (f)\}$ is non-decreasing in $\epsilon$" reasonable? Is there any exception?
- Foster+2022 also devised DEC in the adversarial setting. Is the same kind of reparameterization possible in adversarial setting as in the paper? Is there any exception?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and questions. We address the points raised below:
**High-dimensional bandits:**
- We believe that the average-constrained DEC results do allow us to directly obtain results for high-dimensional linear bandits as shown in Remark 2. We will happily improve the clarity of our explanation if further comments are provided. The existing results on E2D imply a similar bound but only for a fixed horizon, whereas the anytime algorithm is optimal simultaneously for any stopping time. *This is a non-trivial finding not implied by prior works*.
- The main results (Theorem 1 and also Theorem 2) do not require the rewards to be observed, and therefore apply to partial monitoring (this is the same for the existing results on E2D). Example 1.2 does not require the reward to be directly observed: It is sufficient that the reward vector is in the span of the observations. We will clarify this in the updated version of the paper. Note that Assumption 1 is only used in Lemmas 2-4.
- We focus on the stochastic setting, where the DEC was shown to tightly characterize the sample complexity of regret minimization [Foster et al. 2021, 2023]. The relation to IDS is apparent in Lemma 2, i.e. IDS, or rather the information ratio, certifies an upper bound on the DEC objective. In other words, without changing the proof, the bounds for the DEC are always at least as good as for IDS. ExpByOpt is related, but designed for the adversarial setting. We will make sure to clarify this further in the updated version of the paper.
**Minors:** Thank you for pointing out minor issues and typos - these will be fixed and clarified in the updated version of the paper.
**Questions:**
- Hellinger vs KL: The (squared) Hellinger distance was used by Foster at al to prove near-matching upper and lower bounds. To what extent and in which cases the KL can be used instead of the Hellinger distance is currently unclear to us. In the examples we consider, the KL is sufficient. Note that the choice of divergence is independent of the main contribution of the paper - proving anytime bounds.
From a practical perspective, the main advantage of the KL is the somewhat simpler closed-form for Gaussian distributions, the upper bounds for least-squares (Theorem 2), and the fact that the exponential weights algorithm directly bounds the KL. On a technical level, the KL upper bounds the squared Hellinger distance. Consequently, bounds on the estimation error for the KL imply bounds for the estimation error for the Hellinger distance. On the other hand, using the Hellinger distance might make it hard to prove upper bounds on the DEC.
- By telescoping we mean that the sum over instantaneous estimation errors is bounded by the estimation error EST_n. This fact is directly used in our regret upper bound and in the analysis by Foster et al (2021), whereas Foster et al (2023) require a stronger result, which is achieved by a more complicated refinement procedure.
- The second term in the max in (i) is obtained by using the inequality $\lambda_t \leq \epsilon_t^{-2} \text{dec}^{\text{ac}}_{\epsilon_t}(\hat f_t)$ (Lemma 1). Plugging in this upper bound cancels out the first $\text{dec}^{\text{ac}}_{\epsilon_t}(\hat f_t)$ term so that only $\frac{1}{\epsilon_t^2} \text{dec}^{\text{ac}}_{\epsilon_t}(\hat f_t) \mu_t I_{\hat f_t} e_{f^*}$ remains.
- The assumption was added by mistake and is not needed to derive equation (9) from Theorem 1.
- To what extent the results apply to the adversarial setting as in (Foster et al 2022) is an interesting question for future work. Since in this case, the trade-off parameter also corresponds to the learning rate in exponential weights, achieving an anytime result might require a different proof, i.e. an anytime analysis of FTRL.
We will update the paper to reflect these points better.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I have checked the replies to the questions and they are all reasonable.
> Example 1.2 does not require the reward to be directly observed: It is sufficient that the reward vector is in the span of the observations.
The assumption in Example 2.2 is $\phi_{\pi} \phi_{\pi}^\top \preceq M_{\pi}^\top M_{\pi}$.
If this is related to the assumption that the reward vector is a span of observations, why not use it as an assumption in Example 2.2?
In the reviewer's view, it would be easier to understand for readers as it matches the definition of observability used in partial monitoring.
---
Reply to Comment 1.1.1:
Comment: We do agree that introducing observability conditions would be ideal. For the current work, we focused on a simple condition that allows us to relate the DEC to the decoupling coefficient and the PAC-DEC.
There are trivial ways to relax the condition, e.g. by bounding the ratio $\|\phi_\pi \phi_\pi^\top\| / \|M_\pi^\top M_{\pi}\|$ in the spectral norm.
It is less clear if existing analysis in finite/linear partial monitoring imply bounds on the DEC. There are subtle technical differences on how the information ratio is defined in the frequentist setting, which prevent us from directly applying the existing results, e.g. [1]. Currently, we do not know if this is just a technical obstacle, or some deeper insight is required.
[1] Kirschner, Johannes, Tor Lattimore, and Andreas Krause. "Linear Partial Monitoring for Sequential Decision-Making: Algorithms, Regret Bounds and Applications." arXiv preprint arXiv:2302.03683 (2023). | Summary: This paper studies the estimation-to-decisions framework for sequential decision-making problems with structured observations. They propose the ANYTIME-E2D algorithm, improving precious approaches with a novel bound for linear bandits. Numerical simulations are presented to show the performance of their algorithm against the baselines.
Strengths: - This paper is in general well-written and the overall structure is nicely organized. The proofs seem to be theoretically rigorous.
- The significance of the paper is relatively high, since the proposed algorithm can be applied to linear bandit settings and is achieving improved rates in the high-dimensional regime, which is getting increasing attention in the literature.
Weaknesses: - My first concern is the originality and significance of this work over the existing E2D papers. It is claimed that they introduce the new objective $\operatorname{dec}_\epsilon^{ac}$, which enables optimization of the information trade-off directly rather than through the regret bound. However, no clear evidence is presented to validate the superiority of the new objective. I wonder if there are any special cases where the original objective fails to balance the trade-off, while the new one could.
- Secondly, the authors could provide more compelling motivation for the notion of 'anytime' algorithms. As a new reader of the E2D literature, I don't get the advantage of an anytime algorithm over its original version.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As is discussed in the Weakness part, I hope the authors could provide more explanations and examples about the advantage of the proposed new objective and the anytime variant algorithm.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see any limitations or potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and valuable comments. We address the points raised below:
**Significance and originality compared to prior work:**
The average-constrained DEC objective and the corresponding anytime analysis have several advantages compared to the approach proposed by Foster et al:
- Our analysis leads to practically important anytime bounds not achieved by prior work. The anytime upper bound deviates from the proof by Foster et al (2021) in a non-obvious way.
- In linear bandits, we prove improved regret in the regime $n < d^2$ (resp up to $n < d^4$ in linear feedback models). While the existing E2D approach achieves the same bound, it does so *only for the horizon fixed before running the algorithm*. The anytime algorithm achieves the optimal trade-off simultaneously for all horizons.
- *We validate these claims numerically in Appendix E*
- It is true, however, that the E2D approach by Foster et al (2021) achieves (near-)optimal worst-case bounds for fixed horizon and appropriately chosen trade-off parameter. In this sense, the existing E2D paper does not “fail”.
- Another significant difference is that the trade-off parameter in the average-constrained DEC objective is chosen for the current estimate, whereas the choice suggested by Foster et al (2021) is essentially a conservative worst-case bound derived from the analysis. Choosing the trade-off parameter adaptively can lead to improved empirical performance as we show in Appendix E.
*To conclude, we strongly believe that we make several novel and valuable contributions to the E2D framework proposed by Foster et al (2021, 2023).*
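For context, the fixed-horizon offset DEC from Foster et al (2021) that the average-constrained objective is being contrasted with can be written roughly as follows (notation adapted from the E2D literature and only approximate; the submission's own definitions should be taken as authoritative):

```latex
\operatorname{dec}_{\gamma}(\mathcal{M}, \widehat{M})
  = \inf_{p \in \Delta(\Pi)} \sup_{M \in \mathcal{M}}
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_{M}) - f^{M}(\pi)
      - \gamma \, D_{\mathrm{H}}^{2}\!\left(M(\pi), \widehat{M}(\pi)\right)
    \right]
```

Here $f^{M}(\pi_{M}) - f^{M}(\pi)$ is the instantaneous regret of playing $\pi$ under model $M$, and $D_{\mathrm{H}}^{2}$ is the squared Hellinger distance to the current estimate $\widehat{M}$. The offset parameter $\gamma$ plays the role of the information trade-off that, in the fixed-horizon analysis, must be tuned to the horizon in advance, which is exactly what the average-constrained variant avoids.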
**Motivation for anytime algorithm:**
The main advantage of the anytime algorithm is that it does not require the horizon as an input parameter. This is important for several reasons:
- The horizon might simply not be known before running the algorithm, e.g. when the experimenter wishes the algorithm to keep running, possibly until some external termination criterion is met.
- Fixing the horizon as in E2D (Foster et al, 2021) leads to optimization of the trade-off parameter for the given horizon, and the performance can be suboptimal on smaller and larger horizons - as we demonstrate theoretically (Remark 1) and numerically in Appendix E.
We will update the paper to reflect these points better.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns. I will keep my score because the superiority of the proposed algorithm versus Foster et al (2021, 2023) is only validated in a numerical way. | Summary: This paper focuses on regret minimization in sequential decision-making through min-max optimization. The authors introduce an anytime variant of the estimation-to-decisions algorithm that utilizes the average-constrained decision-estimation coefficient. The proposed algorithm is shown to effectively balance exploration and exploitation. The algorithm achieves a slightly improved regret performance in the context of high-dimensional linear bandits.
Strengths: The paper proposes an anytime variant of the estimation-to-decisions algorithm and shows that the algorithm could potentially improve regret for linear bandits over a given fixed period.
Weaknesses: 1) A major concern is the contribution of the paper compared to the existing literature, particularly the work by Foster et al. in 2021 and 2023. The concept of average constrained DEC appears to be very similar to the (constrained) DEC introduced in Foster et al.'s papers. It is unclear what distinguishes the introduction of the average constrained DEC in this paper and what advantages it brings.
2) The min-max problem in the E2D algorithm is inefficient to solve in general: the paper needs to find the dual variable via grid search, and the choice of step size could be an issue.
3) While the paper states that the formulation of the average DEC leads to a practical algorithm, it would be more convincing to provide numerical results to support this claim. Comparisons with Foster et al.'s work from 2021 and 2023, as well as classical methods like UCB, Thompson sampling, and Information Direct Sampling, would justify the practicality and effectiveness of the proposed algorithm.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and valuable comments. We address the points raised below:
**Contributions compared to the existing literature:**
We discuss the relation to the existing E2D literature in detail in Section 3.1. While our results build on the framework by Foster et al (2021; 2023), we make several non-trivial and practically relevant contributions:
- The anytime analysis and bounds are novel. The anytime upper bound deviates from the proof by Foster et al (2021) in a non-obvious way.
- The algorithm is different from the prior work in that the trade-off parameter is chosen for the current estimated model $\hat f_t$; the suggested choice by Foster et al (2021) is optimized for the worst case, which, as we show, can lead to suboptimal performance (see Appendix E for numerical results).
- The anytime analysis and bounds lead to improved regret in linear bandits models in the regime $n < d^2$ (resp. up to $n < d^4$ in linear feedback models), which is relevant in high-dimensional and kernel bandits - and we are not aware of a similar result in the literature (Remark 2). Note further that the E2D algorithm by Foster et al (2021) uses a fixed trade-off parameter, therefore cannot achieve the optimal regret rate on different horizons simultaneously (again - we are not aware of any other approach that achieves this).
- We explicitly derive the E2D objective for linear bandits, avoiding a computational dependence on the (continuous) model space (Lemma 6). This is important because so far it was left unclear whether E2D can be implemented even in simple cases like linear bandits.
- *We implement our approach, the E2D baseline and UCB algorithm in Appendix E* - these are the first empirical results for E2D that we are aware of.
- We prove bounds on the estimation error for regularized least squares in linear feedback models (Theorem 2). We believe that it is important to highlight that a standard least squares estimator is feasible as an estimation oracle in the E2D framework.
*To conclude, we strongly believe that we make several novel and valuable contributions to the E2D framework proposed by Foster et al (2021, 2023).*
**Solving the minimax problem**
- It is correct that solving the minimax problem requires a one-dimensional line search (which we show is feasible at least in smaller examples, see Appendix E). It is currently unclear whether exactly solving the minimax problem is efficient.
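The one-dimensional line search mentioned above can be as simple as a grid search over the dual variable. The following is a generic sketch, not the submission's actual solver; `inner_value`, a callable evaluating the inner maximization for a given dual value, is a hypothetical placeholder:

```python
import numpy as np

def grid_line_search(inner_value, lo=1e-3, hi=1e3, num=50):
    """Generic one-dimensional grid search for a dual variable.

    Evaluates the (hypothetical) `inner_value` objective on a
    log-spaced grid and returns the minimizing grid point.
    """
    grid = np.logspace(np.log10(lo), np.log10(hi), num)
    values = [inner_value(lam) for lam in grid]
    return grid[int(np.argmin(values))]
```

In practice, the grid resolution trades off the accuracy of the dual variable against the cost of re-solving the inner problem at each grid point.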
**Numerical results**
- While our focus is on the anytime analysis, we do provide numerical results in Appendix E (including E2D from Foster et al (2021), Thompson Sampling and UCB).
We will update the submission to reflect these points better.
We further note that this review gives low scores for presentation and soundness. In addition to the contributions highlighted above, we kindly ask the reviewer to clarify further issues with the presentation and soundness of the paper. We will be happy to provide further clarification.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their response. The paper argues that the main contribution of anytime E2D is that "the anytime upper bound deviates from the proof by Foster et al (2021) in a non-obvious way." Can the authors discuss in detail the technical challenges with "replacing T with t" and how this paper addresses them? As acknowledged in the response, it seems we can just use the "anytime argument" in Foster et al (2021) to establish similar results.
---
Reply to Comment 1.1.1:
Comment: > As acknowledged in the response, it seems we can just use the "anytime argument" in Foster et al (2021) to establish similar results.
Foster et al (2021) do **not** present an anytime analysis; nor do we say so in our response. What is true, however, is that for any **fixed** horizon, both algorithms recover the same results. The point, however, is to achieve optimal regret uniformly over time, which, as we show, matters (perhaps surprisingly) even in (high-dimensional/kernel) linear bandits.
> Can the authors discuss in detail the technical challenges with "replacing T with t" and how this paper addresses them?
The main insight is to use the $\epsilon$-parameterized average-constrained DEC in the analysis. The technical steps are as follows: 1) re-parameterize the saddle point problem in terms of $\epsilon$, 2) use Sion's theorem to swap the inner min/max, 3) modify the regret upper bound to include the new average-constrained quantity (which is perhaps a more natural quantity to analyze than the offset DEC, while being computationally more tractable than the hard-constrained DEC from Foster et al (2023)), and 4) use Lemma 1 to bound $\lambda^*$ in terms of the average-constrained DEC.
This result is not directly implied by the existing work. Moreover, there are several further contributions in the paper (efficient computation for linear bandits, new bounds on estimation error, experiments, improved bounds for high-dimensional linear bandits), as we have already detailed above.
Rebuttal: We would like to thank all reviewers for their time and valuable inputs.
We respond to each review individually below.
We further point out that our submission contains an experimental evaluation of the proposed approach on two simple test cases in Appendix E. While the focus of the paper is on deriving the anytime bound, the experimental results include a direct comparison of E2D, TS and UCB and corroborate the claims made in Lemma 5 and Remark 2. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Test-Time Distribution Normalization for Contrastively Learned Visual-language Models | Accept (poster) | Summary: This paper reveals a mismatch between the pre-training objective of contrastively trained vision-language models and their downstream usage. The authors propose Distribution Normalization (DN) to solve this problem. The results on a wide variety of downstream tasks show the effectiveness of the proposed method.
Strengths: 1. The finding about the misalignment between pretraining and downstream tasks for contrastively trained vision-language models is important.
2. The proposed method, DN, is simple but effective, and also conveniently implemented in practice.
3. The experiments on various tasks show the effectiveness of the proposed method, and also there are sufficient ablation studies.
Weaknesses: Overall, I think this paper is well-presented and reveals an important problem in contrastively trained vision-language models. However, I still have the concerns below:
1. A recent work, TPT [1], is missing, which aims to learn prompts for vision-language models at test time. I recommend that the authors compare DN with TPT and integrate DN into TPT’s framework.
2. I suggest that the authors change “visual-language” to “vision-language” in the next version.
3. Another concern is that because DN needs a small amount of unlabeled data, its setting is actually not a zero-shot one, so I suggest the authors refine the related descriptions, e.g. Line 175.
[1] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. NeurIPS 2022.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately discussed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1, Add TPT baseline
Thanks for providing the related work, and we will include the comparisons in a revised version. Here are the results of TPT in our setting and a comparison with our results (Top1/Top5). We found that our CLIP+TTA+DN* achieves a comparable top-1 accuracy and a higher top-5 accuracy compared to TPT, while at the same time being much more efficient, as TPT needs to do gradient updates for each test sample.
| |imagenet1k | Cifar100 | SUN397 | Stanford Cars | Caltech101 | Flowers102 |avg |
|-----------|---------------|------|--------|---------------|------|--------|---|
| TPT | 63.5/87.1 | 65.2/88.1 | 59.4/88.8 | 61.5/90.2 | 83.2/96.0 | 64.5/81.3 | 66.2/88.6 |
| CLIP+TTA+DN* | 63.2/88.9 | 67.1/90.7 | 58.1/90.7 | 61.5/92.2 | 83.1/95.5 | 63.5/84.4 | 66.1/90.4 |
### 2, Change to vision-language models and refine related descriptions
Thanks for the reminder. We will change to “vision-language models” and refine the related descriptions to include the need for extra unlabeled samples. | Summary: This paper proposes distribution normalization (DN) for contrastively trained vision-language models. The idea is motivated by an analysis of the InfoNCE loss. The authors identify that the common practice of taking dot product for zero-shot inference is only a zero-order approximation of the InfoNCE loss. The proposed method improves the dot product inference by using unlabeled test data to make a first-order approximation.
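The zero-order vs. first-order distinction described in this summary can be sketched roughly as follows (a hypothetical NumPy illustration based on the reviewers' description of DN, i.e. subtracting the mean of reference test features before the dot product; it is not the paper's actual implementation):

```python
import numpy as np

def dn_similarity(image_feats, text_feats, ref_image_feats, ref_text_feats):
    """Hypothetical sketch of Distribution Normalization (DN).

    Subtracts the mean embedding, estimated from a small unlabeled
    reference batch, from both modalities before the dot product.
    All inputs are (n, d) arrays of (ideally L2-normalized) features.
    """
    mu_img = ref_image_feats.mean(axis=0)
    mu_txt = ref_text_feats.mean(axis=0)
    # Zero-order practice would be `image_feats @ text_feats.T`;
    # DN centers both sides first.
    return (image_feats - mu_img) @ (text_feats - mu_txt).T
```

Only a handful of unlabeled reference samples are needed to estimate the two means, which is why the rebuttals describe the method as working with as few as 10 samples.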
Strengths: 1. The analysis in Section 3.2 is intuitive and motivates the proposed method.
2. The method is efficient: it only needs to subtract the mean of the text/image features of the test data.
3. The experiment section evaluates the method on multiple tasks and various contrastively pre-trained models.
Weaknesses: 1. There are some missing baselines for CLIP zero-shot classification. For example, [1] proposes test-time prompt tuning for CLIP, which is less efficient than DN, but it works on a single test sample. [2] uses multiple test samples from the same distribution to do logit normalization, which is similar to the proposed method.
[1]. Shu, Manli, et al. "Test-time prompt tuning for zero-shot generalization in vision-language models.", NeurIPS 2022.
[2]. Allingham, James Urquhart, et al. "A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models.", ICML 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Regarding performance, I find DN's improvement margins on CLIP to be smaller than on other models. Any idea what would cause this discrepancy between different contrastively-trained models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. As mentioned by the author, as a limitation of this work, DN requires access to multiple test samples and assumes they are all from the same distribution. Such an assumption may not hold in practical settings.
2. Many existing works have explored the idea of using test data for normalization at test time [2, 3, 4]. The authors should consider citing them and discussing their connections and differences. In addition, there is another work that also analyzes the loss functions for test-time adaptation [5].
[3]. Schneider, Steffen, et al. "Improving robustness against common corruptions by covariate shift adaptation." NeurIPS 2020.
[4]. Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization.", ICLR 2021.
[5]. Goyal, Sachin, et al. "Test-time adaptation via conjugate pseudo-labels.", NeurIPS 2022.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1, Two more baselines missing
Thanks for providing the related work, and we will include the comparisons in a revised version. Here are the results of TPT in our setting and a comparison to our results. (top1/top5). We found that our CLIP+TTA+DN* achieves a comparable top-1 accuracy and a higher top-5 accuracy compared to TPT, while at the same time being much more efficient as TPT needs to do gradient updates for each test sample.
| |imagenet1k | Cifar100 | SUN397 | Stanford Cars | Caltech101 | Flowers102 |avg |
|-----------|---------------|------|--------|---------------|------|--------|---|
| TPT | 63.5/87.1 | 65.2/88.1 | 59.4/88.8 | 61.5/90.2 | 83.2/96.0 | 64.5/81.3 | 66.2/88.6 |
| CLIP+TTA+DN* | 63.2/88.9 | 67.1/90.7 | 58.1/90.7 | 61.5/92.2 | 83.1/95.5 | 63.5/84.4 | 66.1/90.4 |
For Zero-shot Prompt Weighting (ZPW), we agree with the reviewer that this would be an interesting baseline to have, but we would also like to emphasize that our proposed method is, in principle, meant to be auxiliary to ZPW. Specifically, since ZPW only weighs the importance of the dot-product similarity of each prompt template from a prompt pool, DN can still be applied by subtracting the mean of these embeddings from the logits. This also does not interfere with the weighted average of logits cross-product with prompt scores. Further, results from the ZPW paper are actually tested on a slightly different dataset split and we have previously looked into the released code for this paper but found that it is currently still in development with several unresolved dependencies. Should the stable version of the code be released prior to submission, we would be happy to include the performance of ZPW + DN as an additional baseline.
### 2, Are Improvements on CLIP smaller than other methods?
Yes, your observation is correct. We have provided a general response on this issue and kindly ask the reviewer to refer to the “Performance Gain” general response for this concern.
### 3, Differences with other related works.
Thanks for providing these related works. [3] estimates statistics for batch normalization within the model. This method is not applicable to CLIP, because CLIP does not use batch normalization (avoided so that information about the batch is not leaked during pre-training). [4] and [5] involve minimizing a loss over all test samples. They require more unlabeled samples, are less efficient than DN, and cannot handle general multimodal alignment tasks such as cross-modal retrieval and image caption metrics. We will add detailed comparisons with those works in a revised version.
[3]. Schneider, Steffen, et al. "Improving robustness against common corruptions by covariate shift adaptation." NeurIPS 2020.
[4]. Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization.", ICLR 2021.
[5]. Goyal, Sachin, et al. "Test-time adaptation via conjugate pseudo-labels.", NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response and additional results. I have carefully read the author’s reply and other reviewers’ comments. My questions have been properly answered, and I find the additional evaluation (including those suggested by other reviewers) to further support the proposed method. Therefore, I’m updating my recommendation to weak accept.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for seeing the value of the work and appreciate the reviewer for raising the score! | Summary: CLIP is trained using the InfoNCE loss, where positive and negative pair alignment is done during training. However, at inference, we simply take the dot product with text embeddings, which is neither optimal nor similar to the way pretraining was done. The authors propose to rectify this misalignment between inference and the training objective, based on a first-order approximation of the InfoNCE loss. Specifically, they subtract the mean representation (calculated over a validation set) from the test image and label/caption representations before applying the dot product. The empirical evaluations of the proposed approach, however, do not give strong enough gains, especially on CLIP.
Strengths: - Approach is easy to implement.
- Paper writing is generally smooth and clear. However, the section around Eqn 8 (L130 to L133) is not clear and should be improved.
Weaknesses: - Any reason why the proposed approach does not work well on MSCOCO image-to-text retrieval?
- Empirical gains are pretty nominal, especially for the CLIP models. The authors average their gains over CLIP and other less widely used models (TCL and ALBEF). However, the gains seem to be much lower on CLIP itself.
- Moreover, as mentioned in L252, it seems that there are larger gains with models pretrained on smaller datasets. This suggests that as the pretraining corpus size is increased, say we use CLIP pretrained on LAION5B, there may not be any gains. Can the authors test their approach on CLIP pretrained on larger datasets like DataComp XL ( https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K ) or LAION5B?
- Authors should test their approach on larger and more standard CLIP architecture like ViT-B-16 and ViT-L, which have higher zeroshot accuracies.
- Table 2, authors report 61.0 as the zeroshot accuracy of CLIP ViT-B-32, which is however 63.2 (https://arxiv.org/pdf/2103.00020.pdf). Can the authors please clarify this.
- Can the authors provide finetuning experiments on image classification datasets as well like ImageNet or StanfordCars/Flowers/etc. considered in Table 2.
- (typo) Eq 5, D_s should be D_t in expectation.
- The supplementary material is supposed to be submitted as a separate PDF and not part of the main paper, to the best of my knowledge.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
### 1, proposed approach does not work well on MSCOCO retrieval image to text.
In the setting of MSCOCO image-to-text retrieval, while DN does not perform well with CLIP-B32, it has shown effectiveness with other CLIP variants such as CLIP-B16 and CLIP-L14 (see the tables under point 3 below). A method's performance may depend on the specific characteristics of the dataset and the pre-trained model, and it’s important to highlight that DN has shown improvement in the vast majority of settings. This indicates that DN is generally a reliable enhancement for multimodal models.
### 2, Improvements not strong, especially on CLIP
We have provided a general response on this issue and kindly ask the reviewer to refer to the “Performance Gain” general response for this concern.
### 3, Improvements on CLIP are smaller compared to ALBEF and TCL. What about CLIP-B16, CLIP-L, and CLIP pre-trained on DataComp XL?
We have conducted experiments on CLIP-B16 and CLIP-L14 and found a similar, if not larger, improvement of DN for those larger models compared to the CLIP-B32 used in the paper. Although the improvement is smaller for CLIP pre-trained on LAION for the classification task, its improvement on cross-modal retrieval is still significant and comparable to the improvement for the original CLIP. This shows that DN is most effective when the downstream task resembles the contrastive pre-training procedure, as cross-modal retrieval datasets contain diverse image-caption pairs similar to the pre-training dataset, and the purpose of DN is to align the downstream use with the pre-training objective.
Furthermore, we believe the large improvement of DN on ALBEF and TCL is due to the large distribution shift from the small pre-training dataset to other datasets. This shows that DN is particularly successful in adapting to out-of-distribution scenarios. This out-of-distribution adaptation ability can be important even for models pre-trained on large datasets, especially when applied to real-world distributions different from the training set.
Cross-modal Retrieval (Top1/Top5/Top10)
| | MSCOCO I2T| MSCOCO T2I | Flickr30k I2T| Flickr30k T2I | avg |
|-----------|---------------|------|--------|---------------|---|
| CLIP-B16 + TTA | 53.6/77.5/85.1 | 33.8/58.7/69.1 | 85.4/97.9/99.1 | 66.6/89.0/93.7 | 59.9/80.8/86.8 |
| CLIP-B16 + TTA + DN* | 54.6/78.5/86.1 |35.7/60.7/70.8 | 87.3/98.0/99.6 | 69.3/90.2/94.6 | 61.7/81.9/87.8 |
| CLIP-L14 + TTA | 57.7/80.1/87.8 | 36.8/61.3/71.2 | 88.4/98.9/99.9 | 69.9/90.6/94.8 | 63.2/82.7/88.4 |
| CLIP-L14 + TTA + DN* | 58.8/81.3/88.4 | 38.6/63.1/72.9 | 89.3/98.8/99.8 | 72.1/91.7/95.5 | 64.7/83.7/89.2 |
| CLIP-B32-Laion + TTA | 58.5/80.9/88.1 | 40.0/65.8/76.0 | 85.8/96.7/98.9 | 71.1/91.4/94.8 | 63.9/83.7/89.5 |
| CLIP-B32-Laion + TTA + DN* | 60.7/82.3/88.9 | 40.8/66.5/76.4 | 86.6/97.1/98.9 | 71.7/91.6/94.7 | 65.0/84.4/89.7 |
Classification (Top1/Top5)
| |imagenet1k | Cifar100 | SUN397 | Stanford Cars | Caltech101 | Flowers102 | avg |
|-----------|---------------|------|--------|---------------|------|--------|---|
| CLIP-B16 + TTA | 67.1/91.5 | 67.7/90.1 | 60.0/91.4 | 63.3/93.1 | 84.5/96.7 | 69.8/84.8 | 68.7/91.3 |
| CLIP-B16 + TTA + DN* | 67.8/92.0 | 71.1/92.2 | 61.9/92.4 | 64.6/93.9 | 84.9/96.6 | 70.0/85.0 | 70.1/92.0 |
| CLIP-L14 + TTA | 73.1/93.4 | 77.6/94.0 | 62.1/92.5 | 76.3/97.7 | 86.0/97.3 | 72.8/89.1 | 74.6/94.0 |
| CLIP-L14 + TTA + DN* | 74.2/94.1 | 80.4/95.4 | 63.8/93.3 | 77.2/97.8 | 86.2/97.1 | 74.0/89.1 | 76.0/94.5 |
| CLIP-B32-Laion + TTA | 66.9/89.4 | 75.7/94.0 | 63.9/93.5 | 87.1/99.2 | 87.3/97.7 | 70.3/86.4 | 75.2/93.4 |
| CLIP-B32-Laion + TTA + DN* | 67.2/90.3 | 76.2/94.3 | 64.3/93.7 | 87.1/99.1 | 86.8/97.2 | 71.1/85.8 | 75.5/93.4 |
### 4, 61.0 is different from 63.2.
The original CLIP paper maintains a list of potential prompts and validates the best prompt from the list using a validation dataset. In our paper, to avoid dependence on a validation dataset, we use a fixed prompt template “a photo of a {}” for all datasets, and this prompt template may not be the optimal one from the template list from the original CLIP paper.
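The single-template protocol described here, versus the prompt-ensembling protocol of the original CLIP paper, can be sketched as follows (a hypothetical illustration; `encode_text` stands in for a text encoder and is not a real CLIP API call):

```python
import numpy as np

def class_embedding(class_name, templates, encode_text):
    """Build a zero-shot class embedding from one or more prompt templates.

    Each template is filled with the class name, encoded, and
    L2-normalized; the normalized embeddings are then averaged and
    renormalized. Prompt ensembling reduces to the single-template
    protocol when `templates` has one entry.
    """
    embs = np.stack([encode_text(t.format(class_name)) for t in templates])
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

With the fixed template used in the paper, one would call `class_embedding("dog", ["a photo of a {}"], encode_text)`; an 80-template ensemble simply passes a longer template list.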
### 5, Finetuning experiments on Imagenet.
Finetuning CLIP on the image classification dataset will effectively turn the pre-trained CLIP into an image-classification model. This is different from the pre-training contrastive objective of CLIP. Since the goal of DN is to better align the contrastive objective with downstream uses, this extension is outside the scope of DN, and we believe it is more appropriate to be left for a future direction. We believe that our current extensive results over the tasks of cross-modal retrieval, zero-shot classification, image-caption metrics, and contrastive fine-tuning on image-caption datasets are sufficient in validating the role of DN to better align downstream use of vision-language models with their pre-training objectives.
---
Rebuttal Comment 1.1:
Title: Re: Response to rebuttal
Comment: Thanks to the authors for their response.
### Not experimenting with ensemble of prompt templates.
Even the proposed approach uses some samples from the validation set, so I don't really understand the argument for not using the validation set for prompt ensembling.
It's pretty common knowledge now that using an ensemble of multiple templates does improve accuracy. While the exact 80 prompts in the CLIP paper might have been handpicked a bit, the authors should definitely try out their approach when using an ensemble of prompts. This is especially because all their gains are simply superseded by prompt-ensembled evaluation. If their approach doesn't give any gains over prompt ensembling, I do not see any practical significance.
Even for retrieval tasks (which the authors are pushing as the main experiment of the paper), they should definitely try ensembling.
### Lack of various evaluation settings
The settings where the approach has to be used (for significant gains) seem to be too constrained. The authors argue that finetuning on image classification datasets is outside the scope of the paper. Even in zeroshot evaluation, the authors seem to argue that major improvements will come only in retrieval tasks. Can the authors share results when CLIP is finetuned for retrieval tasks, like on MSCOCO? Also not an important question, but can the approach be applied to multimodal captioning models as well to improve captioning performance, like on CoCa and BLIP?
### Comparison to fewshot finetuning
The authors mention in the general response above that DN needs few test samples, and is therefore a fewshot and not zeroshot evaluation. Doesn't this, in turn, require the authors to add comparisons with a whole new host of baselines from fewshot finetuning of CLIP literature (CoOp: https://arxiv.org/pdf/2109.01134.pdf, TipAdapter: https://arxiv.org/pdf/2109.01134.pdf, CoCoOp: https://arxiv.org/abs/2203.05557)?
---
Reply to Comment 1.1.1:
Comment: ### 1, Use prompt ensembling
Thanks for pointing this out. We have tried using DN with prompt ensembling using the 80 prompt templates used by the original CLIP paper as mentioned by the reviewer. We have tested the effects of ensembling prompt templates for cross-modal retrieval, classification, and image caption metrics. We found that DN still achieves consistent improvement for all tasks on top of other techniques like TTA and prompt ensembling, especially significant on cross-modal retrieval and image caption metrics.
Cross-modal Retrieval with prompt ensembling(Top1/Top5/Top10)
| | Flickr30k I2T| Flickr30k T2I | MSCOCO I2T| MSCOCO T2I | avg |
|-----------|---------------|------|--------|---------------|---|
| CLIP-B32 + TTA | 82.5/96.4/98.4 | 63.6/87.2/92.2 | 54.0/77.7/85.4 | 31.9/57.1/68.1 | 58.0/79.6/86.1 |
| CLIP-B32 + TTA + DN* | 84.3/97.0/98.5 | 66.1/88.2/93.1 | 54.1/77.8/85.6 | 33.6/59.0/69.4 | 59.5/80.5/86.7 |
Classification with prompt ensembling(Top1/Top5)
| |imagenet1k | Cifar100 | SUN397 | Stanford Cars | Caltech101 | Flowers102 | avg |
|-----------|---------------|------|--------|---------------|------|--------|---|
| CLIP-B32 + TTA | 64.3/89.6 |67.2/90.7 | 59.7/91.7 | 60.8/91.8 | 83.7/96.2 | 63.1/82.6 | 66.5/90.4 |
| CLIP-B32 + TTA + DN* | 64.3/89.6 | 68.2/90.7 | 60.1/91.8 | 61.0/91.8 | 83.7/96.3 | 63.3/82.9 | 66.8/90.5 |
Image Caption Metrics with prompt ensembling(Kendall Tau)
| |Flickr8k-expert | Flickr8k-cf | THumb | avg |
|-----------|---------------|------|--------|---------------|
| CLIP-B32 + TTA | 51.7 | 33.9 | 18.6 | 34.7 |
| CLIP-B32 + TTA + DN* | 53.5 | 34.9 | 20.3 | 36.2 |
### 2, Lack of various evaluation settings.
We respectfully disagree that the evaluation setting of DN is limited. Since DN is proposed to better align the downstream use with the pre-training objective, it is in principle applicable to all cross-modal alignment tasks and is most effective in cases where the downstream task is most similar to the pre-training image-caption contrastive objective, including **both cross-modal retrieval and image caption metrics**. However, even in the classification task, DN also achieves a significant improvement, 0.9% to 1.4%, on the most popular CLIP architectures pointed out by the reviewers. Furthermore, a larger gain (as large as 7% on the classification task) is observed for other popular vision-language models like TCL and ALBEF pre-trained on smaller datasets, showing that DN is particularly effective when there is a large distribution shift. In comparison, CoOp, TipAdapter, and CoCoOp only work in the few-shot setting and only apply to the classification task. Results of fine-tuning experiments on retrieval datasets have been provided in Figure 2 in the main paper. Since DN is similar to test-time adaptation but operates in a more relaxed setting than standard test-time adaptation methods as cited by reviewer 5Jmx, it is expected that the larger the distribution shift, the larger the improvement. We have provided a detailed discussion of the related test-time adaptation literature in our response to reviewer 5Jmx and will include it in a revised version. For captioning models, it is unclear how we can apply DN to improve captioning performance, since DN is proposed for cross-modal alignment tasks where the similarity between text and images is measured. We would like to leave it as an interesting future direction.
### 3, Comparison with few-shot tuning
Although we used the word few-shot, few-shot methods as cited by the reviewer cannot be applied in our setting. In our setting, DN can work with as few as **10 unlabeled samples**, while the CoOp, TipAdapter, and CoCoOp models provided by the reviewer need at least a few samples per class, which adds up to **hundreds or even thousands of labeled samples**. We are happy to include their results as baselines in a revised version, but we have already shown that DN consistently works in a much more relaxed setting compared to the few-shot learning literature.
We are happy to address any further concerns. | Summary: This paper addresses the problem of retrieval and classification accuracy using vision-language representation. The authors claimed that there is a mismatch between training objectives and inference-time operation. Specifically, InfoNCE used negative information but test time only use positive similarity scores. The authors presented findings that rectifying such misalignment can boost performance consistently across a variety of downstream tasks. They argued that it is sufficient to use the first-order approximation of InfoNCE loss, i.e. just subtracting the mean in a test batch before doing dot product at test time, and called it Distribution Normalization (DN). They applied their test-time augmentation to a number of existing approaches such as CLIP, ALBEF, and TCL, and show certain improvements over the baselines on common classification and retrieval benchmarks such as MSCOCO and Flickr30K.
Strengths: The paper is clearly written and the proposed approach and statement are simple and easy to understand. The experiments show consistent improvements over baselines that don't use test-time augmentation.
The authors applied their approach to some of the top performers and tested them on various common benchmarks. They also provided a decent set of ablation studies on how test-time batch size affects performance and how this simple normalization helps in the fine-tuning setting.
Weaknesses: On line 110, the claim that 0.1 is small is not very convincing, since in practice the value of the referred expressions is typically much smaller, especially after L2 normalization. Such oversimplification makes the final outcome, i.e., the DN method, look simplistic.
The approach in this paper requires test-time augmentation, and when compared to other test-time augmentation approaches such as the TTA of [54], the result was even inferior in some cases. Only in combination with TTA were there marginal improvements (over TTA [54]).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1, The claim that 0.1 is small is not convincing
In our paper, we claim that the value of the referred expression is smaller than 0.1 (or even much smaller than 0.1 according to the reviewer) and therefore second- and higher-order terms are negligible. The purpose of this simplification is to make DN as simple as possible so that it is easy and efficient to implement. As provided in Appendix D.2, we have also shown that adding the second- and higher-order terms does not make a big difference compared to our first-order approximation:
Part of Appendix D.2
| |Flickr8k-expert| Flickr8k-cf |THumb|
|-----------|---------------|------|--------|
| CLIP+ DN | 54.3 | 35.4 | 23.5 |
| CLIP+ DN (full) | 54.3 | 35.4| 23.4 |
| TCL+ DN | 42.0 | 26.4 | 14.4 |
| TCL+ DN (full) | 41.8 | 26.4 | 14.3 |
| ALBEF+ DN | 34.8 | 21.8| 5.5 |
| ALBEF+ DN (full) | 34.8 | 21.8| 5.5 |
We would be happy to share more details if the reviewer could explain further why this is a weakness.
### 2, The method is inferior to TTA. TTA + DN only has marginal improvement over TTA
We have provided a general response on this issue and kindly ask the reviewer to refer to the “Performance Gain” general response for this concern. | Rebuttal 1:
Rebuttal: We appreciate the reviewers’ agreement on the efficiency and wide applicability of our proposed DN, and reviewer XtRE for recognizing the effectiveness of DN. There were several issues pointed out by multiple reviewers which we will address here in the general response.
### Performance Gain:
First of all, while reviewer HYdZ is right in saying that DN is comparable to TTA on average, we want to emphasize that DN is not mutually exclusive to TTA methods as they serve different purposes. DN aligns objectives between training and testing while TTA resolves issues caused by the instability of a single view of the image. As such, they can work hand in hand together to produce strong results. For example, as shown in Section 4, DN can be combined with TTA techniques such that CLIP+TTA+DN consistently performs better than CLIP+TTA across all tasks.
More importantly, our absolute performance gain over baseline methods is comparable to the recently published works [1, 2] that reviewers XtRE and 5Jmx cited in their reviews. In terms of top-1 retrieval accuracy on the cross-modal retrieval task, our CLIP+TTA+DN* achieves another 1.6% average performance boost over CLIP+TTA. The equivalent average performance boosts for classification and image captioning metrics are 0.9% and 1.6%. A similar improvement of CLIP+DN compared to CLIP is 2.2% for retrieval, 0.8% for classification, and 1.3% for image caption metrics. In fact, when using the same CLIP architecture, CLIP-B16, as in [1] and [2], the improvements are as large as 1.4% top-1 accuracy for classification (see full results in the response to UGnD). In comparison, the average improvements in top-1 accuracy of [1] and [2] for zero-shot cross-dataset classification are 1.52% and 2.12% respectively compared to their baselines, but their methods do not generalize to tasks beyond classification. In contrast, DN is applicable to a wide variety of important multi-modal alignment tasks, including cross-modal retrieval, classification, and image captioning metrics, and at the same time achieves decent improvements across the board.
In addition, we would like to direct the reviewers’ attention to the more neglected experiments with TCL and ALBEF that achieve a much larger gain compared to CLIP (as large as 7% average top-1 improvement on classification). We surmise that this is because TCL and ALBEF are pre-trained on a smaller dataset compared to CLIP, which makes them more susceptible to larger distribution shifts in downstream applications. The relatively larger improvement from adding DN here should not be overlooked, as it shows that it can be particularly helpful when added to models with significant real-world distribution shifts such as specialized expert models pre-trained on a smaller dataset. This does not mean that DN will be ineffective for CLIP with a larger model architecture and pre-trained on a larger dataset. As presented in response to reviewer UGnD, DN brings a comparable if not larger performance boost to CLIP-B16, CLIP-L14, and CLIP pre-trained on the laion2B dataset.
### Need for Extra Test Samples:
There were also several comments about the need for test samples. We want to make it clear that we only need a very small number of test samples as shown in Table 4, not the entire test set as reviewer jN4m claimed. However, we do admit that these few test samples need to be from the same distribution, as reviewer 5Jmx pointed out. But we do not agree that this is impractical in many real-world applications, as long as we collect these test samples close in time (e.g., trending images on social networks). This requirement is strictly the weakest among the test-time adaptation literature mentioned by reviewer 5Jmx. We also agree that since we need a small number of test samples, we should be clear that DN is few-shot rather than zero-shot (L175), and will refine the related descriptions in the paper.
### Practicality:
Finally, DN is particularly valuable in light of the practical tradeoff between efficiency and accuracy. [1] requires doing test-time adaptation gradient updates for each single test sample and [2] requires iterating over hundreds of prompt templates in the prompt pool. Both methods create a considerable computational overhead. In contrast, DN is particularly simple to implement — we simply subtract the mean.
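To illustrate this simplicity, here is a minimal, hypothetical sketch of the mean-subtraction idea (function and variable names are ours; details such as how the reference means are estimated and any scaling applied to the subtracted means follow the paper, not this sketch):

```python
import numpy as np

def distribution_normalization(image_emb, text_emb, ref_image_embs, ref_text_embs):
    """Sketch of DN: center each modality's embedding with a mean estimated
    from a small set of unlabeled test samples, then take the dot product."""
    mu_img = ref_image_embs.mean(axis=0)   # estimated distribution mean, image side
    mu_txt = ref_text_embs.mean(axis=0)    # estimated distribution mean, text side
    return (image_emb - mu_img) @ (text_emb - mu_txt)
```

Note that with zero-mean reference sets this reduces to the standard dot-product score, so the correction only matters when the test embeddings are shifted away from the origin of the embedding space.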
[1] Shu, Manli, et al. "Test-time prompt tuning for zero-shot generalization in vision-language models.", NeurIPS 2022.
[2] Allingham, James Urquhart, et al. "A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models.", ICML 2023. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper focuses on the misalignment between the training and testing of the CLIP model. Specifically, CLIP is trained with an InfoNCE loss containing both positive and negative samples, while testing lacks negative-sample information. They reveal that the common downstream practice of taking a dot product is only a zeroth-order approximation of the optimization goal, and propose distribution normalization to narrow the gap at test time, considering both effectiveness and efficiency. Experimental results show some improvements on different datasets.
Strengths: - This paper proposes an interesting perspective on the difference in the optimization goal between train and test time.
- The analysis of the InfoNCE loss is easy to understand.
- The paper is well written and easy to follow.
Weaknesses: - There is no relevant introduction to test-time tasks in the first paragraph, and the necessity of performing this task at test time is questionable.
- The proposed distribution normalization requires prior knowledge of the entire test set, which is often not met in practice, making it unable to handle sustained data.
- The performance gain is limited. Is it the limitation of the first-order approximation?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are adequately addressed in the paper and seem reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1, The necessity of performing this task at test time is questionable.
We are not sure what “this task” refers to, and would be happy to give more explanation if the reviewer could clarify this point. However, to clarify as best as we can understand the comment: DN is conducted to improve performance by aligning training- and test-time objectives, which we have demonstrated with strong test-time results. The reviewer's comment on why this is necessary and how our approach is questionable is therefore puzzling to us.
### 2, DN requires knowledge of the entire test set, unable to handle sustained distributions
We have provided a general response on this issue and kindly ask the reviewer to refer to the “Need for Extra Test Samples” general response for this concern.
### 3, The performance gain is limited
We have provided a general response on this issue and kindly ask the reviewer to refer to the “Performance Gain” general response for this concern.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal, which addresses my concerns; I have decided to raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for agreeing to raise the score! We wonder if the reviewer can also update the score in the system? | null | null | null | null | null | null |
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision | Accept (poster) | Summary: The paper proposes the Interactive Multi-Fidelity Learning (IMFL) framework to develop small domain-specific Large Language Models (LLMs) under limited annotation budgets. Specifically, IMFL balances low-fidelity LLM annotations and high-fidelity human annotations to maximize model performance. Experiments on four domain-specific tasks demonstrate that IMFL outperforms single fidelity annotations, offering a cost-effective solution for domain-specific LLM development.
Strengths: 1. The paper addresses the practical challenges of deploying large language models in domain-specific tasks, such as their scale and high annotation costs.
2. This work considers a practical setting that addresses the challenges of deploying LLMs, and hence is somewhat realistic.
3. The IMFL framework offers an effective solution for the cost-effective development of small domain-specific LLMs by leveraging multi-fidelity learning.
4. Experiments on financial and medical tasks provide empirical evidence of the superiority of IMFL over single fidelity annotations.
Weaknesses: 1. While the paper demonstrates superior performance compared to single fidelity annotations, it would be valuable to provide a more comprehensive comparison with alternative approaches or baselines to highlight the specific advantages of IMFL.
2. It would be interesting if the author can provide more rationale for the selection of the dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Have you considered studying the effect of using GPT-3.5 and GPT-4 as annotators in your research? It would be interesting to explore how the performance and annotation quality differ when employing these different versions of the language model (with different prices/label quality) as annotators.
2. Have you tried on other AL baselines such as [14-16] mentioned in your paper? This would make the results with more AL baselines more insightful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer ogFx,
Thank you very much for taking the time to engage with our paper thoroughly, and constructive comments. We’re happy to hear that you thought our work is practical and effective, addressing the ground challenges of deploying LLMs in domain-specific tasks. Please find our response below:
> **It would be interesting if the author can provide more rationale for the selection of the dataset.**
Thanks for the great suggestion. Our focus in this work is to build a novel end-to-end framework for cost-effective fine-tuning of LMs on specialized domain-specific tasks that are beyond the scope of standard pre-training. Thus for evaluation, we mainly select datasets that focus on domain-specific applications with direct real-world implications, such as finance and medicine. The finance datasets are selected following the BloombergGPT [1] paper. The medical datasets are selected following prior work such as MedPaLM [2]. To increase the diversity of datasets and tasks, we select FPB to cover sentiment analysis and Headline to cover news classification, while PubMedQA and MedQA are very commonly used QA datasets for evaluating the capabilities of LMs in the medical domain.
[1] Wu, Shijie, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. "Bloomberggpt: A large language model for finance." arXiv preprint arXiv:2303.17564 (2023).
[2] Singhal, Karan, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales et al. "Large language models encode clinical knowledge." Nature (2023): 1-9.
> **Have you considered studying the effect of using GPT-3.5 and GPT-4 as annotators in your research? It would be interesting to explore how the performance and annotation quality differ when employing these different versions of the language model (with different prices/label quality) as annotators.**
Thank you for the comment. Our framework is flexible with any LMs for annotations. We have investigated the impact of different models by comparing GPT-3.5 with GPT-3 (see Table 6), as GPT-4’s API was not available at the time of writing. Our observation is that GPT-3.5 provides better annotation quality but at the cost of a (slightly) higher price. Per the reviewer’s request, we have added a comparison of the annotation quality of GPT-3.5 and GPT-4 and summarized the results in the following tables. GPT-4 annotator shows better accuracy but the price is much higher than GPT-3.5 and GPT-3. We will add them to the updated version.
| | GPT-3 Annotation | | | GPT-3.5 Annotation | | | GPT-4 Annotation | | |
|:--------:|:----------------:|:------:|:------:|:------------------:|:------:|:------:|:----------------:|:------:|:------:|
| | retrieval | 5 shot | 0 shot | retrieval | 5 shot | 0 shot | retrieval | 5 shot | 0 shot |
| Headline | 75.59 | 72.51 | 70.25 | 79.4 | 76.15 | 73.31 | 80.13 | 78.34 | 77.2 |
| MedQA | 51.42 | 44.89 | 42.03 | 59.45 | 53.57 | 50.82 | 82.67 | 81.38 | 78.87 |
> **Have you tried on other AL baselines such as [14-16] mentioned in your paper? This would make the results with more AL baselines more insightful.**
Our work focuses on addressing the problem of cost-effective adaptation of LMs for domain-specific applications where existing works [14-16] are not directly applicable. Specifically, [14] studies contrastive AL and is computationally intensive, as its effectiveness relies on using a large batch size, making it unsuitable for low-resource settings. [15] and [16] mainly focus on the cold-start problem in AL which is orthogonal to our work. Advances in this line of research might be helpful to improve our framework in the cold-start scenarios, but a nuanced study is beyond our focus.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal!
Comment: Thanks for your response.
For the question of "studying the effect of using GPT-3.5 and GPT-4 as annotators", I am actually curious about the authors' thoughts on the **optimal selection of the labeling sources**. I find the observation that "GPT-4 annotator shows better accuracy but the price is much higher than GPT-3.5 and GPT-3" interesting, and I am also interested in the question: **"if the annotation budget (in $) is given, which labeler should we use? Should we use cheaper annotators with lower quality, or more expensive ones to create a smaller but more precise labeled set?"** I think it would be even better if the authors could provide more insights on this.
---
Reply to Comment 1.1.1:
Title: Response to reviewer ogFx
Comment: We appreciate the reviewer's insightful questions. Our study aimed to shed light on these important considerations, and we are glad to provide additional insights.
It is essential to determine the desired level of annotation quality for the specific task at hand. Our results suggest that although more expensive options like GPT-4 often show higher annotation accuracy, the performance gap between it and other cheaper options, e.g., GPT-3 or GPT-3.5, varies across different tasks. For instance, on the Headline dataset, GPT-3.5 offers a similar annotation accuracy as GPT-4 (79.4% vs 80.13%) while being 20x cheaper ($0.0015 vs $0.03 per 1k tokens), which makes it the better choice on this task. Additionally, the selection of the LM annotator depends on budget constraints. If given a very limited budget, the expensive option may not be valid as it cannot provide a sufficient quantity of annotations. In practice, we recommend first running a small-scale pilot study on the task at hand to compare the trade-offs of different LM annotations by defining some simple proxy metrics before performing large-scale annotation and fine-tuning. In general, it is challenging to choose the optimal annotation LM without defining some quantitative measure of the “value for money”, which, as mentioned in our discussion section, is still an open research problem since modeling and optimizing such cost in real-world settings involves many complex factors, e.g., task complexity, desired label quality, and available resources.
to balance high-fidelity human annotations and low-fidelity model annotations. Moreover, the IMFL method also introduces prompt retrieval and variable batch sizes to make better use of the annotated samples. The authors test the proposed IMFL method on four datasets from the financial and medical domains.
Strengths: - In general, I think the experimental design of this paper is sound and the paper is easy to follow and understand.
- I think the problem studied in this paper is interesting and highly meaningful. The authors offer a straightforward and practical strategy for optimizing the performance of the fine-tuned language model under limited human and computational resources. Notably, the designs of the variable batch size and the coordination of human and model annotations in each round are innovative.
- The authors validate their method on four different datasets, consistently outperforming the baseline models in all cases. Particularly, I appreciate the extensive experiments and analyses performed in Section 4 to verify the effectiveness of each component of their framework, along with practical suggestions for future practitioners.
Weaknesses: One major weakness of this paper is its strong assumption that we already have a good pool of unlabeled data, and all the methods are evaluated on these pools. However, in many real-world settings, such an unlabeled data pool is not immediately available. The quality of the unlabeled data pool might be crucial to the effectiveness of the proposed method. Without relevant experiments, we do not know whether the proposed method will still work in real-world settings.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Why is the budget set only to 1000? While I understand that human annotations may be constrained in real-world applications, I think it would be meaningful to include the performance and analysis of the model under different budget settings in the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have explained the limitations of their study in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer YvhV,
Thank you very much for your detailed and thoughtful comments. We are glad to hear that you found the paper interesting, highly meaningful, and innovative, and our experiments are sound and strong to verify the effectiveness. Below we address the constructive comments and feedback raised in the review.
> **Assumption on the availability and quality of the unlabeled data pool.**
Thanks for the comment. As highlighted in the Discussion and Limitation section, IMFL's performance is limited by the size of the unannotated dataset and the diversity of examples presented in the dataset. This is because IMFL functions by annotating existing samples rather than creating new samples. The authors believe that the assumption of an unlabeled data pool is mild as this is the basis for applying any data-driven / deep learning techniques. Nevertheless, the authors do agree that the quality of the unlabeled data pool is closely related to the final model performance. Therefore, in practice, some basic data wrangling techniques such as data cleaning, down-selection, and filtering, could be employed before applying the proposed IMFL to further improve the final model performance. These efforts are helpful from a practical standpoint but a nuanced study on the data pipeline is beyond the scope of our research.
> **Why is the budget set only to 1000? I think it would be meaningful to include the performance and analysis of the model under different budget settings in the paper.**
Thanks for the valuable suggestion. The authors agree that such analysis is meaningful and thus we have included experiments on the impact of different human annotation budgets and summarized the results in Table 2 in the supplementary material. Our results suggest that reducing the amount of allowed human annotation budget would result in decreased model performance (though still outperforming using all GPT-generated annotations).
In our experiments, we select a default annotation budget of 1000 based on the following considerations: (1) Computational Costs: Using a larger annotation budget would incur increasing annotation and computational costs for both our experiments and practical deployment. An annotation budget of at most 1000 is a typical setting used by many prior works [2-5] that consider the low-resource setting. (2) Dataset Annotation Constraints: The human annotation budget for our experiments is also limited by the availability of annotated domain-specific datasets. For example, PubMedQA only has 1000 expert-annotated QA instances [1]. In general, the authors believe an annotation budget of 1000 is practically meaningful for NLP tasks in specialized domains like medicine and finance, and such experimental design choices should not affect the validity of our findings.
[1] Jin, Qiao, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. "PubMedQA: A Dataset for Biomedical Research Question Answering." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2567-2577. 2019.
[2] Grießhaber, Daniel, Johannes Maucher, and Ngoc Thang Vu. "Fine-tuning BERT for low-resource natural language understanding via active learning." arXiv preprint arXiv:2012.02462 (2020).
[3] Yuan, Michelle, Hsuan-Tien Lin, and Jordan Boyd-Graber. "Cold-start Active Learning through Self-supervised Language Modeling." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 7935-7948. 2020.
[4] Schröder, Christopher, Andreas Niekler, and Martin Potthast. "Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers." In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2194-2203. 2022.
[5] Maekawa, Seiji, Dan Zhang, Hannah Kim, Sajjadur Rahman, and Estevam Hruschka. "Low-resource interactive active labeling for fine-tuning language models." In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 3230-3242. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response! I have read the rebuttal, and it has alleviated my concerns raised in the weaknesses. | Summary: This article is devoted to a new algorithm for the development of small domain-specific LMs under limited annotation budgets. One of the main ideas of the algorithm is to use a combination of a human and an LLM (large language model) as data annotators. The authors also proposed two innovative developments in their algorithm: 1) prompt retrieval to improve LLM annotation, and 2) variable batch size. The paper also presents a new confidence-based query strategy based on applying the k-means algorithm to embeddings of the sub-sampled unannotated data and using least confidence for the selected items. Of the strengths, I can emphasize the following: a new query strategy for selecting elements is presented, which showed an improvement in comparison with other approaches and introduces scientific novelty to the article. The tables given in the work show the qualitative improvement achieved by the described approaches. Of the weaknesses, I can note that it is not described in detail what happens at the stage “Execute prompt retrieval from Ah” described in Algorithm 1. I would like a detailed description of "prompt retrieval" for those who are not familiar with this term. There are no tables for the hyperparameters with which the models were trained. I would also like to see additional statistics on the datasets: average, minimum, and maximum sentence length.
==== After rebuttal
Thank you for the reply; I have decided to raise my score to 6.
Strengths: * The article presents a new algorithm for learning small domain-specific LMs under limited annotation budgets, whose effectiveness is confirmed by comparison with various baselines. A new query strategy for selecting elements is presented, and the effectiveness of the new strategy is confirmed by experiments. Two innovative designs are proposed to improve the learning process (prompt retrieval to improve LLM annotation, and variable batch size); these approaches may be useful in other similar tasks.
Weaknesses: * It is not clear which distance measure was used in clustering (cosine or Euclidean); a comparison is needed. There is no information about which model is used to obtain embeddings in the Exploration-Exploitation Query Strategy. Why was Sentence-BERT used to obtain embeddings in Design 1? It seems that domain-specific models should be used to obtain them in both cases. Since sentence embeddings are used, a comparison with embedding-based strategies would be desirable, for example from the article “Deep Deterministic Uncertainty: A Simple Baseline”. The authors claim that they select cluster centers to reduce intra-iteration redundancy, but at the next iteration the elements selected in the new subset may be close to those already selected. Why is redundancy reduced only inside an iteration?
* There are no tables for hyperparameters of models.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: -
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer EwLY,
Thank you for your detailed review and suggestions to improve the paper. We are glad to hear that you found our idea new and effective, and our designs innovative and useful in broader tasks. Below we address the concerns and questions raised in the review.
> **A detailed description of "prompt retrieval" is desired for those who are not familiar with this term.**
Thank you for your suggestion. The goal of prompt retrieval is to retrieve annotated data instances that are semantically similar to the query as in-context learning examples to improve the LLM annotator's performance without the need to fine-tune it. Intuitively, the selected instance-annotation pairs would serve as an illustration to facilitate generating better and more accurate annotations for domain-specific applications. To operationalize this idea, we follow existing studies to retrieve annotated instances that are close to the queried instance in some embedding space that captures semantic or lexical similarities. The design of prompt retrieval is originally described in Section 2.2. Here we provide a more detailed description of the procedure below.
Given the queried instance $x$, annotated data pool $\mathcal{A}$, sentence encoder model (Sentence-BERT), and the number of in-context examples $k$, we execute the following steps:
1. Compute the embeddings of the queried instance and instances from the annotated pool.
2. Retrieve the nearest $k$ neighbors of the queried instance $x$, denoted as $x_1, x_2, ...,x_k$, from $\mathcal{A}$ according to a pre-defined similarity metric (e.g., cosine similarity) measured in the embedding space.
3. Concatenate $k$ instances along with their annotations to form the in-context examples, which are then used to construct the prompt for the queried instance to generate annotation from the LLM annotator.
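The three steps above can be sketched in a few lines of Python. This is an illustrative sketch only (the function and variable names are hypothetical, and the embeddings are assumed to be precomputed, e.g., by the Sentence-BERT model named in the hyperparameter table), not the authors' actual implementation:

```python
import numpy as np

def retrieve_in_context_examples(query_emb, pool_embs, pool_texts, pool_labels, k=5):
    """Retrieve the k annotated instances closest to the query in embedding
    space (cosine similarity) and concatenate them into a prompt prefix."""
    # Normalize so that a dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ q
    top_k = np.argsort(-sims)[:k]  # indices of the k most similar instances
    # Step 3: instance-annotation pairs become in-context examples.
    return "\n".join(f"Input: {pool_texts[i]}\nLabel: {pool_labels[i]}" for i in top_k)
```

The returned string would be prepended to the queried instance when constructing the LLM annotator's prompt.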
> **Hyperparameters of the model.**
The hyperparameter setting regarding fine-tuning is presented in Table 3 of the supplementary material. We have summarized additional hyperparameters related to prompt retrieval and query strategy in the following table.
| Hyperparameters | Values |
|------------------------------|-------------------------------------------|
| Sentence encoder model | sentence-transformers/all-mpnet-base-v2 |
| Dimension of embeddings | 768 |
| Pooling method | mean pooling |
| Similarity retrieval metric | cosine similarity |
| Number of clusters | annotation budget at a specific iteration |
| Number of retrieved examples | 5 |
| Size of unlabeled data pool | 3000 |
> **Additional statistics on datasets: average, minimum and maximum sentence length.**
Thanks for the suggestion. We have summarized the additional statistics in the table below, with the average, minimum, and maximum sentence length (in tokens) included, and will add it to the revised version.
| Sentence length | FPB | Headline | MedQA | PubMedQA |
|:-----------------:|:-----:|:------------:|:------:|:--------:|
| Average | 28.16 | 12.98 | 167.29 | 20 |
| Minimum | 2 | 2 | 17 | 6 |
| Maximum | 148 | 39 | 551 | 50 |
> **The measure used in clustering and the embedding model used in Exploration-Exploitation Query Strategy.**
For k-means clustering, we used Euclidean distance which is the default measure. In the exploration-exploitation query strategy, we use the same Sentence-BERT model as used in prompt retrieval.
> **Why was Sentence-BERT used to get embedding in Design 1? It seems that you need to use domain-specific models to get them in both cases.**
Our proposed framework is flexible with different embedding models. We choose to use Sentence-BERT in our implementation because it is a general-purpose sentence encoding model which can be employed in different tasks from diverse domains without requiring task-specific fine-tuning. Although encoders fine-tuned on domain-specific tasks could potentially further improve performance by making the prompt retrieval more effective, they would require more in-domain annotated data, which conflicts with our objective of designing a cost-effective adaptation framework.
> **Why reduce redundancy only inside the iteration?**
The proposed EEQ strategy aims to reduce both intra-iteration and inter-iteration redundancy. Typically, uncertainty-based methods acquire similar samples within an iteration, known as intra-iteration redundancy, while diversity-based approaches acquire similar samples across iterations, known as inter-iteration redundancy. Existing hybrid methods try to avoid intra- and inter-iteration redundancies by combining diversity and uncertainty sampling but may suffer from these redundancies due to unifying the uncertainty and diversity objectives into a single query function, which tends to prioritize one objective over the other.
Our EEQ strategy is an independent two-step selection by first executing the diversity sampling and then the uncertainty sampling. In the first step, we select a subset of an unlabeled data pool consisting of diverse data points in the embedding space. In the second step, we acquire high-uncertainty data points (the ones that are predicted with low confidence by the current model) from the subset.
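A minimal sketch of this two-step selection, assuming precomputed sentence embeddings and per-instance uncertainty scores from the current model (the tiny k-means loop and all names are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def eeq_select(embs, uncertainty, n_clusters, budget, n_iter=10, seed=0):
    """Step 1 (diversity): k-means on embeddings (Euclidean distance), keep
    the point nearest each centroid as a diverse candidate subset.
    Step 2 (uncertainty): from that subset, take the `budget` points the
    current model is least confident about."""
    rng = np.random.default_rng(seed)
    centers = embs[rng.choice(len(embs), n_clusters, replace=False)]
    for _ in range(n_iter):  # plain Lloyd iterations
        dists = np.linalg.norm(embs[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(n_clusters):
            members = embs[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    # Step 1: one representative per cluster (closest to its centroid).
    dists = np.linalg.norm(embs[:, None] - centers[None], axis=2)
    reps = np.unique(dists.argmin(axis=0))
    # Step 2: highest-uncertainty points within the diverse subset.
    order = reps[np.argsort(-uncertainty[reps])]
    return order[:budget]
```

In the actual framework the number of clusters is tied to the annotation budget at the given iteration (see the hyperparameter table above).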
---
Rebuttal Comment 1.1:
Title: Follow-up on Rebuttal
Comment: Dear Reviewer EwLY,
We want to thank you for your constructive feedback and thoughtful reviews that helped to improve our paper. As the open discussion deadline is approaching, we would like to take this last opportunity to make sure that all your questions have been properly answered. We would be more than happy to provide more information or clarification should you have any additional questions. If we have addressed your concerns, we would appreciate your consideration in raising the rating to vote toward accepting our paper. Thank you for your continued engagement in advancing our manuscript.
Authors | Summary: This paper introduces an algorithm to fine-tune LMs for domain specific tasks under a certain budget constraint. They use a mix of human and a high-fidelity LM for annotations. They fix the annotation budget for each and introduce an algorithm to sample from the unannotated pool and distribute it between human labelers and LM annotator. They perform extensive comparisons and ablation studies and show the effectiveness of their approach.
Strengths: - The results section clearly demonstrate the effectiveness of their method under the conditions they considered.
- The paper is well written and easy to follow. Claims are clearly stated, supported by empirical data.
- The results are well explored and discussed. Good intuitions provided.
Weaknesses: - The authors considered very limited tasks. They do not consider the impact of their approach on more generative tasks.
- Their approach and conclusions strongly depends on the LM they used for annotations. Although this is specified in the limitation of their work, more powerful LMs will make the effect of their approach less profound.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The authors state "Unfortunately, such an approach is susceptible to the misinformation of LLMs through hallucination, which risks generating unreliable or falsified labels and will, in turn, demolish the model’s utility for high-stakes applications like healthcare and finance, where the truth is of utmost importance." Do they have evidence (from their work or others) that this is the case?
- Are the samples for the GPT-3.5 in Fig 5 drawn randomly or in accordance to the uncertainty score?
- The authors do not discuss how the uncertainty score is calculated. A brief description would be useful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Ltph,
Thank you very much for the constructive comments and feedback. We are happy to hear that the reviewer found our method effective, and our results are well explored and discussed. Below, we have tried to address all of your feedback and questions. Please take a look and let us know in case you would like additional clarification on any of these points.
> **Impact of approach on more generative tasks.**
Our focus in this work is to build a novel end-to-end framework for cost-effective fine-tuning of LMs on specialized domain-specific tasks that are beyond the scope of standard pre-training. Despite the rarity of such domain-specific benchmarks compared to general NLG tasks, we conducted a relatively comprehensive evaluation of our method, within our allowable computational budget, on four diverse tasks covering two of the most important application domains, i.e., finance and healthcare. We believe our proposed framework can be easily extended to more generative tasks such as open/closed-book question answering, provided with proper data annotation and evaluation criteria.
> **More powerful LMs will make the effect of the approach less profound.**
We agree with the reviewer that the performance of our framework is dependent on the LM used for annotation, which is verified in our experiments in Section 4.4. Generally speaking, our method benefits from more powerful LMs that provide higher annotation quality and thus could help to further reduce the cost of human annotations. This does not conflict with our objective. In fact, fine-tuning on domain-specific data is an essential step to obtaining a powerful and reliable LM for highly specialized domains. However, one common challenge faced in practice is the expensive cost of recruiting experienced human annotators in these domains, which our study aims to address. We envision that in real-world development environments, our framework can be applied iteratively on streaming data flows to continuously improve an LM product.
> **Evidence for misinformation of LLMs through hallucination.**
Thanks for the comment. Yes, there are numerous pieces of evidence and experiments in the current literature that support this. For instance,
Dash et al. [1] conducted an evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery. Their experiments (see Table 2 in [1]) showed that while general-purpose LLMs are able to provide safe and credible responses, they often do not fully meet the specific information need of a given question, e.g., responses containing hallucinated references.
Singhal et al. [2] presented MultiMedQA, a benchmark medical dataset, and evaluated PaLM and Med-PaLM, which perform encouragingly but still reveal key gaps in human evaluation and remain inferior to clinicians. It is observed that models like PaLM and GPT may hallucinate convincing medical misinformation or incorporate biases that could exacerbate health disparities [3].
Bang et al. [4] provided a comprehensive evaluation of ChatGPT using 23 datasets, showing that ChatGPT suffers from hallucination problems, i.e., generated hallucinated information beyond the given knowledge, which is supported by the experiments on misinformation detection related to COVID-19, and factuality on TruthfulQA.
We will add corresponding references in our final version.
[1] Dash, Debadutta, Rahul Thapa, Juan M. Banda, Akshay Swaminathan, Morgan Cheatham, Mehr Kashyap, Nikesh Kotecha et al. "Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery." arXiv preprint arXiv:2304.13714 (2023).
[2] Singhal, Karan, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales et al. "Large language models encode clinical knowledge." Nature (2023): 1-9.
[3] Nori, Harsha, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. "Capabilities of GPT-4 on medical challenge problems." arXiv preprint arXiv:2303.13375 (2023).
[4] Bang, Yejin, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia et al. "A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity." arXiv preprint arXiv:2302.04023 (2023).
> **Are the samples for the GPT-3.5 in Fig 5 drawn randomly or in accordance with the uncertainty score?**
Thanks for the comment. The samples for the GPT-3.5 in Fig.5 are drawn based on the uncertainty score. We will revise the caption to clarify this.
> **The authors do not discuss how the uncertainty score is calculated. A brief description would be useful.**
Thanks for the suggestion. We've included a brief description of the calculation of the uncertainty score below and will add it to the final version.
For the tasks of natural language generation, the probability of the entire sequence, $\mathbf s$, is the product of the conditional probabilities of new (next) tokens given past tokens, whose resulting log-probability is $\log p( \mathbf s | x) = \sum_{i=1}^{n} \log p(s_i | \mathbf s_{<i})$, where $s_i$ is the $i$-th output token and $\mathbf s_{<i}$ denotes the set of previous tokens. In this work, we define the uncertainty score $C$ as the arithmetic mean of the token-level log-probabilities (equivalently, the log of the geometric mean of the token probabilities): $C = \frac{1}{n} \sum_{i=1}^n \log p(s_i | \mathbf s_{<i})$. Since the target LM is an offline model with complete white-box access to its token-level log-probabilities, the uncertainty score can be easily computed during model inference.
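A minimal sketch of this score, assuming the token-level log-probabilities have already been collected from the white-box LM (the function name is ours):

```python
import math

def sequence_uncertainty_score(token_logprobs):
    """Length-normalized sequence log-probability: the arithmetic mean of
    token-level log-probs, i.e. the log of the geometric mean of the token
    probabilities. Lower values indicate higher model uncertainty."""
    return sum(token_logprobs) / len(token_logprobs)
```

For example, a two-token output with token probabilities 0.9 and 0.8 scores $\log\sqrt{0.9 \cdot 0.8}$.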
---
Rebuttal Comment 1.1:
Title: Follow-up on Rebuttal
Comment: Dear Reviewer Ltph,
We want to thank you for your constructive feedback and thoughtful reviews that helped to improve our paper. As the open discussion deadline is approaching, we would like to take this last opportunity to make sure that all your questions have been properly answered. We would be more than happy to provide more information or clarification should you have any additional questions. If we have addressed your concerns, we would appreciate your consideration in raising the rating to vote toward accepting our paper. Thank you for your continued engagement in advancing our manuscript.
Authors | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you all for taking the time to review our paper and we sincerely appreciate all the feedback. In particular, we feel encouraged to see that the reviewers find that
- **The topic of our paper is important, interesting, and practical**: “This paper is interesting and highly meaningful. The authors offer a straightforward and practical strategy ... ” (reviewer YvhV); "The paper addresses the practical challenges of deploying LLMs in domain-specific tasks" (reviewer ogFx); "This work considers the practical setting of addressing the challenges of deploying LLMs, hence is somehow realistic." (reviewer ogFx).
- **Our idea is innovative**: “A new query strategy for selecting elements is presented, which introduces scientific novelty to the article” (reviewer EwLY); "Two innovative designs are proposed to improve the learning process (prompt retrieval to improve LLM annotation, variable batch size)" (reviewer EwLY); "Notably, the designs of the variable batch size and the coordination of human and model annotations in each round are innovative" (reviewer YvhV);
- **Our proposed solution is effective**: "The results section clearly demonstrates the effectiveness of their method." (reviewer Ltph); "Particularly, I appreciate the extensive experiments and analyses performed in Section 4 to verify the effectiveness of each component of their framework, along with practical suggestions for future practitioners." (reviewer YvhV); "The IMFL framework offers an effective solution for the cost-effective development of small domain-specific LLMs by leveraging multi-fidelity learning." (reviewer ogFx).
- **Our experiments are thorough and the results are promising**: "The results are well explored and discussed. Good intuitions provided." (reviewer Ltph); "The authors validate their method on four different datasets, consistently outperforming the baseline models in all cases. " (reviewer YvhV); "Experiments on financial and medical tasks provide empirical evidence of the superiority of IMFL over single fidelity annotations." (reviewer ogFx)
- **And our paper is easy to follow and well-written**: “The experimental design of this paper is sound and the paper is easy to follow and understand.”(reviewer YvhV); “The paper is well written and easy to follow. Claims are clearly stated, supported by empirical data.” (reviewer Ltph).
Based on the reviewer's feedback, we have made the following changes to further improve the clarity of the manuscript:
- Added a brief description of uncertainty score calculation as suggested by reviewer Ltph.
- Added a detailed description of the prompt retrieval and the hyperparameters as suggested by reviewer EwLY.
- Added a statistical analysis of datasets, including the average, minimum and maximum sentence length as suggested by reviewer EwLY.
- Added a discussion and clarification about the exploration-exploitation query strategy, including the details about clustering, embedding model, redundancy, and hyperparameters, as suggested by reviewer EwLY
- Added additional experiments to compare the effect of GPT-3.5 and GPT-4 in terms of annotation quality and performance as suggested by reviewer ogFx
We are committed to addressing all comments and we welcome any further questions or discussions.
Authors of paper 2221 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Delegated Classification | Accept (spotlight) | Summary: This work provides a framework for incentive-aware delegation of machine learning tasks. It considers a principal-agent game, where the principal can spend a limited budget on outsourcing the training of a machine learning model, with the hope of getting the most accurate model, and the agent provides a machine learning model to the principal, but aims to invest minimal effort in training. This work considers the problem of optimal contract design, where the principal commits to a contract that determines how much the agent will be paid for every possible accuracy level of the model, and the agent chooses a profit-maximizing number of samples to use to train the model in response to the contract. This contract design problem requires that the "learning curve" (a function that maps the number of samples to the accuracy of the model) be known to both the principal and agent. When the learning curve has a nice structure, it is possible to leverage a connection to the Neyman-Pearson lemma to see that the optimal contracts have a simple threshold form.
Strengths: 1. The paper proposes an interesting new problem. The framing of the problem is well-motivated, clear, and sensible.
Figure 2 is a great figure! It nicely captures the motivation for this problem and demonstrates possible contract types.
2. The main contribution of this work is demonstrating that under easy to understand assumptions (Monotone Likelihood Ratio Property, concavity), the optimal contract of the proposed principal-agent game takes on a simple threshold form. The technical contribution of this paper is closely related to the connection between optimal contracts and the Neyman-Pearson lemma established by Bates et al., 2022. It is great to see this work build on the direction of Bates et al, 2022 and demonstrate a connection between optimal contract design and statistical hypothesis testing. The taxonomy given in Table 1 is helpful.
3. The discussion in Lines 352-363 of how overestimation and under-estimation can have quite different implications is useful beyond the scope of this work. It suggests that when estimating learning curves, not all errors should be treated equal and we may want to penalize overestimation more strongly. This may be of interest to communities that aim to estimate/learn scaling laws.
Weaknesses: A potential weakness of the model that the authors propose is the assumption that the principal has access to a learning curve, which captures the stochastic performance of a machine learning model as a function of sample size. Nevertheless, the authors address this weakness quite thoroughly: they cite many related works suggesting that the learning curve can be predicted from scaling laws, and also analyze the construction of contracts in the partial information setting, where the learning curve is not known and must be estimated.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In Section 2 Lines 110-113, the authors discuss that $h_{n}$ is a stochastic quantity. It would be helpful to add a sentence here that $h_{n}$ is distributed according to a distribution that depends on $n$ (and more details on how to evaluate the expectation will be provided later in the work). At first glance, this equation is somewhat unclear because we have not yet defined the distribution over $h_{n}$ for a given $n$?
2. The second plot of Figure 3 is somewhat hard to parse. Could the authors add some clarifications?
3. It may be helpful to emphasize that the contract $t$ is a function that takes on a different value depending on each possible value of the validation accuracy. (The connection between the domain of $t$ and the sample size of the validation set can easily be missed).
4. What do the authors mean by “robustness” in Line 372?
5. Nit writing: Line 120 “the principal cannot now” -> “the principal cannot know.”
6. Nit: Should the subscript of $f_{n}$ in Equation 2 be $f_{a}$?
7. Nit writing: Line 308 has an incomplete sentence.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the encouraging review and positive feedback! We address your questions below:
> In Section 2 Lines 110-113, the authors discuss that $h_n$ is a stochastic quantity. It would be helpful to add a sentence here that $h_n$ is distributed according to a distribution that depends on $n$ (and more details on how to evaluate the expectation will be provided later in the work).
Thank you for this suggestion! We will add a clarification.
> The second plot of Figure 3 is somewhat hard to parse. Could the authors add some clarifications?
Figure 3 (Center) shows the budget required for incentivization of a classifier with accuracy $\ge 85\%$, as a function of validation set size $m$. With increasing $m$, the required budget decreases, as the principal is able to gather more information about the agent's chosen action. Asymptotically, the required budget approaches the action cost under full information ("first-best"), indicated by the horizontal dotted lines.
Solid/dashed curves compare different implementations of the numerical LP solver (local/full solver), as described in Appendix C.2.1.
We believe what may be confusing is that the plot blends economic aspects with optimization aspects. In the final version of the paper we will simplify the plot to focus on the economic aspects, and include a separate discussion on optimization.
> It may be helpful to emphasize that the contract $t$ is a function that takes on a different value depending on each possible value of the validation accuracy. (The connection between the domain of $t$ and the sample size of the validation set can easily be missed).
Thank you for this suggestion! We will clarify this in the paper.
> What do the authors mean by “robustness” in Line 372?
In this context, robustness refers to the ability to design optimal contracts even when the contract design setting has parameters which are uncertain to some extent (e.g when the outcome distribution of each action is estimated from data).
> Nit writing:
Line 120 “the principal cannot now” -> “the principal cannot know.”;
Should the subscript of f_n in Equation 2 be f_a?;
Line 308 has an incomplete sentence.
Thank you for pointing out these typos! They will be corrected in the forthcoming revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for these clarifications! I recommend accepting this paper. | Summary: The paper introduces an interesting problem and provides interesting theoretical results based on a connection to a classical result in statistics. The problem setup follows the standard principal-agent problem with moral hazard, where the principal commits to a contract to incentivize the agent to behave in favor of the principal. They mainly show that a thresholded structure is optimal in the case of a contract for a classification task, and provide some fruitful implications upon it.
Strengths: Overall, the paper is well-written and easy to follow.
The problem setting they considered is novel and of interest to NeurIPS community, especially given the increasing attention to contract theory/delegation mechanism.
The analysis looks sound and the results are thorough.
Weaknesses: I'm not entirely sold on the main results and their implications/takeaways, though I agree with their technical soundness.
Details are presented below on questions/limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Model
* Section 2.1 describes contract design without budget constraint, but the problem setup actually involves budget constraint. I wonder why the authors consider this problem setup, and what happens if the principal does not have a budget constraint (though I understand that both may have plausible applications)
* It seems the agent should be aware of the distribution $f_a$ to compute the expected payment; distribution over possible outcomes from action a. How can this be made practically? Also, $f_a$ corresponds to $f_n(j)$?
Results
* Given the vast literature on contract theory, why the existing techniques cannot be applied to the presented problem setup? I couldn't find the authors discussing on it.
Minor comments
* Reference unresolved in L93
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * Thresholded rule is optimal - Yes, it makes sense that thresholded contract would be efficient against expected utility maximizer, however, it would carry over a huge amount of variance for the agent's reward. How would the optimal structure change if the agent exhibits risk-averse nature?
* Again, such an extreme thresholded contract is not widely applied in practice - at least some amount of minimal wage usually exists, or rather a linear contract is typical (e.g., Carroll15). Besides, finding an agent who accepts such a thresholded contract would be more difficult than via posting a more conservative/safer contract, thereby possibly inducing the quality of the agent to be lower than expected. I'd like to see some discussion on it.
* In the above context, if the budget constraint is given in an ex-post manner, given the use of constant/linear/threshold contracts with the same maximal payment (as in Fig 2), I suspect that the quality of the agent would be endogenous, depending on the pricing rule to be exploited (e.g., giving a constant B would attract the highest-quality agent), rather than exogenous. I'd appreciate some (empirical or theoretical) discussion on it.
* As the paper contributes to an applied modeling of real-world scenario with claiming the efficiency of thresholded rule, I expected to see some comparisons between various contract structures in the experiment.
* Also, the title of delegated classification looks a bit overly abstract to me. I would (though weakly) suggest making it more explicit, e.g., including contract sort of notions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and insightful questions!
> What happens if the principal does not have a budget constraint?
In the non-budgeted setting, and when MLRP holds, min-pay optimal contracts take on a rather extreme form: a single non-zero payment that can be arbitrarily large and arbitrarily rare [DRT18]. In particular, Appendix B.3 shows a concrete setting where the maximal payment of an optimal non-budgeted contract is exponential in the validation set size $m$. Every entity has bounds on the payment it can make, and exponential growth makes such contracts unscalable. Given that MLRP is likely to hold for reasonable learning tasks, we believe this justifies the exploration of budgeted contracts.
Without a budget constraint - and if willing to forgo global optimality - a linear contract may be a viable alternative, as such contracts are known to have appealing robustness properties (e.g. [C15]). However, they also have significant shortcomings, such as arbitrarily-inferior performance compared to optimal contracts [DRT18]. They also require the principal to attach a precise monetary value to each possible outcome, which for some settings may be unrealistic (e.g., when the principal is a government or non-profit organization).
A contract design that requires the principal only to set a budget, and guarantees the best-in-class classifier given this budget, is thus arguably more natural in many situations. We will add to the paper a discussion comparing the different approaches.
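As a toy numerical illustration of the budgeted threshold contracts discussed here (a deliberately simplified sketch, not the paper's LP formulation): suppose each action yields a known true accuracy, the principal scores the returned classifier on $m$ i.i.d. validation samples, and a threshold contract pays $B$ iff at least $k$ samples are classified correctly. The smallest $B$ that makes a target action both individually rational and incentive-compatible can then be computed directly; all numbers and names below are our assumptions:

```python
from math import comb

def tail_prob(p, m, k):
    """P(at least k of m validation samples correct) when true accuracy is p."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k, m + 1))

def min_incentivizing_budget(accs, costs, target, m, k):
    """Smallest payment B such that the target action maximizes the agent's
    expected utility B * P(pass) - cost, under a contract paying B iff at
    least k of m validation samples are correct."""
    probs = [tail_prob(p, m, k) for p in accs]
    if probs[target] == 0:
        return float("inf")
    B = costs[target] / probs[target]  # individual rationality
    for a, (q, c) in enumerate(zip(probs, costs)):
        if a == target:
            continue
        # Incentive compatibility vs action a:
        # B*q_t - c_t >= B*q_a - c_a  =>  B >= (c_t - c_a) / (q_t - q_a)
        dq, dc = probs[target] - q, costs[target] - c
        if dq > 0:
            B = max(B, dc / dq)
        elif dc > 0:
            return float("inf")  # a cheaper action passes at least as often
    return B
```

For instance, with two actions of true accuracy 0.7 and 0.9, costs 0 and 10, and a pass rule of at least 16 of 20 validation samples, the function returns the minimal budget incentivizing the costlier, more accurate action.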
> It seems the agent should be aware of the distribution $f_a$ to compute the expected payment; distribution over possible outcomes from action a. How can this be made practically?
We envision two avenues through which the distribution $f_a$ could be estimated by the agent. First, prior experience - as a service provider, the agent is likely to have past experience training classifiers in different scenarios, and thus may have the ability to generalize across learning curves. This approach has been applied in the empirical learning curves literature [LB05]. Second, extrapolation - similar to the partial information model introduced in Sec. 4.2, the agent may also train exploratory models and make a decision based on the extrapolated learning curve. Due to the agent's ability to collect data and train models in-house, we expect their extrapolation quality to be higher than that of the principal.
> Also, $f_a$ corresponds to $f_n(j)$?
Thank you, there was a typo in eq. (2), and $f_a$ should come in place of $f_n$.
$f_n(j)$ is the $j$-th component in the outcome probability distribution corresponding to action $n$.
> Given the vast literature on contract theory, why the existing techniques cannot be applied to the presented problem setup?
Existing results from the contract design literature cannot be directly applied because min-budget contract design problems have a different structure. We discuss similarities and differences in Appendix B.3, and will expand the discussion in the forthcoming revision. Our LP duality proof is distinct but inspired by [DRT18], and understanding whether additional techniques can be transferred from the theory of budget-unconstrained contracts to min-budget ones is a very interesting direction for future work.
> Reference unresolved in L93
Fixed. Thanks for pointing this out!
> How would the optimal structure change if the agent exhibits risk-averse nature?
This is an interesting question, for which the answer is not immediate. For comparison, note that in auction theory, answering questions regarding the incorporation of behavioral aspects has warranted substantial empirical and theoretical work, e.g. [VW21]. Our work focuses on a standard rational agent model, which we believe is reasonable as a first step. Hopefully our work can help establish the necessary foundations for extending to more elaborate agent models.
> Such an extreme thresholded contract is not widely applied in practice - at least some amounts of minimal wage exist, or rather a linear contract is typical (e.g., Carroll15).
Thanks for this question! While working on it, we found a proof showing that the rationality assumption is made without loss of generality with respect to minimum wage: We show that any min-budget contract with minimum wage can be represented as a sum of a min-budget contract (eq. (4)), and a constant representing the minimum wage. In other words, an optimal contract with minimum wage can be obtained by adding a constant to a solution of eq. (3). The proof follows by constructing a modified version of the MIN-BUDGET linear program (eq. (5)), and introducing a change of variables. The full statement and proof will be added to the paper.
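Schematically, under notation introduced here for illustration (not the paper's): if $t^*$ denotes an optimal min-budget contract solving eq. (3) and $w \ge 0$ is the minimum wage, the claimed decomposition reads

```latex
t^{w}_{j} \;=\; w \;+\; t^{*}_{j} \qquad \text{for every outcome } j,
```

i.e., the all-or-nothing structure is preserved and merely shifted by the constant wage.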
We also add that threshold contracts are common in many practical scenarios. For example, a salesperson may get a bonus upon selling a certain amount of units, and a student may earn a certificate upon exceeding a certain grade average.
> Besides, finding an agent who admits such a thresholded contract would be difficult than that via posting more conservative/safer contract, thereby possibly inducing the quality of agent to be lower than expected.
Your question seems to imply a more elaborate setup in which there exists a population of agents, and where agents have types (i.e., “quality”). Again, this is a very intriguing question, but remains outside our scope.
> I expected to see some comparisons between various contract structures in the experiment.
In Appendix B.3, we compare the performance of min-budget to min-pay contracts, illustrating the impracticality of the latter as discussed above.
—
References:
* [LB05] Leite & Brazdil, 2005 - Predicting relative performance of classifiers from samples
* [C15] Carroll, 2015 - Robustness and Linear Contracts
* [DRT18] Dütting et al., 2018 - Simple versus Optimal Contracts
* [VW21] Vasserman & Watt, 2021 - Risk Aversion and Auction Design: Theoretical and Empirical Evidence
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
I have no further questions at the moment. | Summary: This paper presents a novel theoretical principal-agent framework for examining the incentive-aware delegation of machine learning tasks. In this context, a principal can design a monetary contract to stimulate an agent to exert private efforts towards training a classifier. In the proposed framework, the agent's private action is the number of samples he/she can process, while the principal operates within a monetary budget $B$. Importantly, the contract design is entirely based on observed outcomes, modeled as the prediction accuracy of the agent's trained classifier against the validation samples.
The paper first shows that the budget-optimal contract is essentially an all-or-nothing contract when the agent has only binary actions. Then the paper also provides several (structural) characterizations on the optimal contract if the action-to-outcome distributions satisfy certain regularity assumptions (e.g., monotone likelihood ratio property, concavity). Finally, the paper also provides empirical results to study how the model parameters affect the design of the contracts and the resulting agent’s trained classifier.
Strengths: The paper is rigorous and very well-written. I largely view this work as a modeling paper. The proposed principal-agent framework is novel and interesting, and it provides an economic view to understand the dynamics when a training task is delegated to another entity, potentially possessing conflicting interests. The paper also provides several characterizations of the optimality of the contract when the considered problem has certain structure (e.g., binary-action, binary-outcome), and also some characterizations for the general action/outcome space under some additional assumptions. The empirical section also adequately evaluates the proposed framework.
Weaknesses: My main concern regarding the paper lies with the technical results. It is noted that many of the characterizations about the optimal contract align closely with, or can be derived from, recent research on the algorithmic principal-agent problem.
For instance, as noted by the author (line 262), the optimal contract for binary-action can be derived using the Linear Programming (LP) duality. The paper instead uses a proof approach related to hypothesis testing. It could enhance the paper's value if the author elucidated the advantages of this particular approach. Can it illuminate more complex instances, and if so, how?
In addition, in my humble opinion, it seems that the current characterizations could also be derived by merely considering a pure principal-agent problem, thereby bypassing any classification or machine learning elements. Since the primary aim of this paper is to frame the delegated training problem as a principal-agent problem, I believe the exploration could be enriched if it delves deeper into how typical tradeoffs (e.g., the number of samples used by the agent has some implications on the prediction accuracy in a quantitative way via some generalization error) in standard machine learning tasks affect the considered game.
Other questions:
1. missing refs in Line 93
2. It is a bit confusing in Proposition 2: is $B$ the budget? If so, then by the definition of an all-or-nothing contract, should there be only one outcome with positive payment?
3. In Program (5) in Appendix B.2, the objective should be $t$?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and insightful feedback!
> My main concern regarding the paper lies with the technical results. It is noted that many of the characterizations about the optimal contract align closely with, or can be derived from, recent research on the algorithmic principal-agent problem.
This is imprecise, and we apologize if this was the impression conveyed. Let us clarify:
The contract design literature deals predominantly with minimizing expected payment. This turns out to be inappropriate for modeling delegation of learning (see response to reviewer ybdo), and motivates the introduction of *budget-optimal contracts*. To the best of our knowledge, there is no prior work on budget-optimal contracts, and therefore existing results do not necessarily apply.
Budgets make the contract design problem challenging because they introduce an additional hard constraint. At the same time, properties of the underlying learning task, which motivate monotonicity and concavity, can make the problem feasible (see below). The fact that budget-optimal contracts admit a simple form (“all-or-nothing”) is a novel result, which we prove using duality theory (see below). Such techniques are also used in min-pay contracts, but the connection is not immediate.
> For instance, as noted by the author (line 262), the optimal contract for binary-action can be derived using the Linear Programming (LP) duality. The paper instead uses a proof approach related to hypothesis testing.
This is also imprecise, though in re-reading this section, we see how this could have been implied from our formulation of line 262 (which we will correct). We apologize, and again wish to clarify:
The connection to hypothesis testing is not a proof technique, but rather a result:
* We prove Prop. 2 (optimal contract for binary-action) by a stand-alone proof that relies entirely on LP duality (Appx. B.4.1).
* Prop. 2 is not directly connected to hypothesis testing, but the functional form of the optimal contract suggests a possible connection to Neyman-Pearson (NP).
* Based on this observation, we formalize the equivalence between optimal contracts and optimal tests in Thm. 1, where one direction of the proof makes use of Prop. 2 (Appx. B.4.2).
“Loose” connections between moral hazard and hypothesis testing have been known for some time, but formal evidence has been limited: (i) the maximal likelihood ratio appearing in optimal min-pay contracts [S15], and (ii) a connection established by Bates et al. [BJSS22] in a setting distinct from ours (adverse selection, rather than moral hazard), and to a different variant of the NP lemma (asymmetric rather than symmetric). Thm. 1 provides a direct correspondence between optimal contracts and optimal hypothesis tests, and as such, establishes a clean formal connection. Our hope is that this will enable tighter future connections, and transfer between the two domains.
We will clarify both of the above points in the forthcoming revision of the paper.
> It seems that the current characterizations could also be derived by merely considering a pure principal-agent problem, thereby bypassing any classification or machine learning elements.
The structure of a delegated contract derives from the structure of the underlying machine learning problem, coupled with the incentives of the different parties involved. For example:
* Expected outcomes in the delegated task are determined by the *learning curve* of the underlying learning task. The study of learning curves has drawn recent interest in the learning community (e.g., see [VL22]). Our results leverage recent results in this field, such as scaling laws [K20] and monotonization [BDKMMS22].
* The variance in outcomes in delegated learning is induced (in part) by the randomness of the training set $S$. This is also a key element in PAC analysis, manifested differently in our setup (i.e., as variance rather than as a probabilistic guarantee).
* The principal evaluates outcomes using a *validation set*. This affects the design of the contract by imposing structure that we leverage. Validation is a fundamental concept in learning, and to the best of our knowledge, is unique to delegation of ML.
All of the above play important roles in our modeling choices and analysis.
> I believe the exploration could be enriched if it delves deeper into how typical tradeoffs … in standard machine learning tasks affect the considered game.
We fully agree. In Sec. 3, our analytic results rely on several basic properties of learning tasks, as discussed above. In Sec. 4, we move beyond this, and empirically study how additional elements of the underlying classification problem affect delegation and its outcomes, such as:
* Using "empirical" contracts, computed on the basis of an outcome-probability matrix estimated from finite data
* Extrapolating to large-sample actions from small-sample data, and for different extrapolation approaches
* Differences in contracts and outcomes for the same data but using different model classes (e.g., MLP vs. GBDT)
* Sensitivity and implications of different error types (i.e., under- vs. over-estimation) for the principal
> Missing refs in Line 93
Fixed. Thank you!
> In Proposition 2, is $B$ the budget?
Yes. In all-or-nothing contracts, payment takes one of two values ($t_j \in \{0, B\}$), and multiple $t_j$’s can be positive.
> In Program (5), the objective should be $t$?
The objective is $B$ because we minimize the $l^\infty$ norm of $t$. This is similar to a technique presented in [BV14; Sec. 4.3.1].
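For readers unfamiliar with the technique, the generic epigraph reformulation from [BV14] is as follows (a standard sketch, not the paper's exact program (5)):

```latex
\min_{t}\ \|t\|_{\infty}
\quad\Longleftrightarrow\quad
\min_{t,\,B}\ B
\quad \text{s.t.} \quad 0 \le t_j \le B \ \ \forall j,
```

with the problem's original constraints carried over. Since payments are nonnegative, $\|t\|_\infty = \max_j t_j \le B$, so minimizing $B$ minimizes the required budget.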
—
References:
* [VL22] Viering & Loog, 2022 - The shape of learning curves: a review
* [K20] Kaplan et al., 2020 - Scaling Laws for Neural Language Models
* [BDKMMS22] Bousquet et al., 2022 - Monotone Learning
* [S15] Salanié, 2015 - The Economics of Contracts: A Primer (Chapter 5)
* [BJSS22] Bates et al., 2022 - Principal-Agent Hypothesis Testing
* [BV14] Boyd & Vandenberghe, 2004 - Convex Optimization
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for responding to my questions. I do not have further questions.
Strengths: The problem is natural and exhibits quite some theoretical depth. The paper is well written, and in particular, the introduction nicely motivates the problem. The technical results are nice and clean, and they also appear to be practically meaningful, as supported by experimental results.
Weaknesses: While I like the paper overall, one concern is regarding the model: the model is quite specific, and I wonder if / to what extent the results can be generalized and remain (approximately) valid. Another practical concern is regarding scalability: in the experimental section the authors say that their full LP solver works for m no larger than 20, which doesn't sound like a very practical number; there is the local solver but it's not totally clear what it does, or how reliable it is. Also see detailed comments for some minor points.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (also putting minor comments here)
Figure 1: at this point it's not totally clear to me what role m plays in the model. In particular, how should m be chosen (either by nature or by the principal), and how does the choice affect the performance of the contract? I'm sure this will become clear later, but it might make sense to briefly comment earlier, perhaps in a footnote.
Line 56, "MLRP": what does this mean?
Line 93: broken citation
Line 120: "... principal cannot now how many examples ..."
Line 130: "a-priory"
Line 266, "under MLRP n_2 is always implementable": I'm not sure I get this --- what if c_2 - c_1 > B?
Line 272, "important special case of binary-outcome": this (corresponding to m = 1) sounds less practical to me. Any justification for the importance of this case?
Theorem 2: I feel the way this result is presented in Table 1 is somewhat misleading. The impression I initially got from the table is that the problem of computing the unconditional, unrestricted optimal contract is NP-hard. Or is this actually implied by Theorem 2?
Line 307, "as m increases, required budgets": unfinished sentence?
Line 310, "the local solver is easy to run": what does the local solver do?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and the great suggestions! We address your questions below:
> Figure 1: at this point it's not totally clear to me what role $m$ plays in the model. In particular, how should $m$ be chosen (either by nature or by the principal), and how does the choice affect the performance of the contract? I'm sure this will become clear later, but it might make sense to briefly comment earlier, perhaps in a footnote.
Thanks for the suggestion. We will add clarification for the role of $m$ in the caption of figure 1, and in the problem setup section.
> Line 56, "MLRP": what does this mean?
MLRP stands for “monotone likelihood ratio property”. It is a common assumption in statistics [LR05] and contract design literature [DRT18]. In our setting, the MLRP assumption can be taken to state (informally) that the better the evaluation result of the classifier, the more likely that it was trained using more data. A formal definition of MLRP is given in Line 264. We will clarify the definition and the intuition behind it, and better connect the two.
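For concreteness, the standard textbook form of the property, written here in the paper's notation $f_a(j)$ for the probability of outcome $j$ under action $a$ (the paper's formal statement in Line 264 may differ slightly):

```latex
\text{for all actions } a' > a:\qquad
j \;\mapsto\; \frac{f_{a'}(j)}{f_{a}(j)} \ \text{ is nondecreasing.}
```

Intuitively, the better the observed outcome, the relatively more likely it came from the higher-effort action, which matches the informal statement above.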
> Line 93: broken citation;
Line 120: "... principal cannot now how many examples ...";
Line 130: "a-priory"
Typos fixed. Thank you for pointing them out!
> Line 266, "under MLRP n_2 is always implementable": I'm not sure I get this --- what if c_2 - c_1 > B?
The meaning of “implementable” is that the min-budget optimization problem (eq. (4)) is guaranteed to have a solution. The min-budget optimization problem finds the minimal budget $B^*$ required for incentivizing training with a certain dataset size (in the case described above, $n_2$). We also note that $B^* \ge c_2 - c_1$ for every implementable contract incentivizing $n_2$, as the expected payout must cover the agent’s additional cost.
$B$ represents the total budget available to the principal, and we treat it as an external constraint in order to maintain parity with the contract design literature (e.g., [S15]). When $B^*>B$, the solution is considered economically infeasible.
> Line 272, "important special case of binary-outcome": this (corresponding to m = 1) sounds less practical to me. Any justification for the importance of this case?
This case captures a coarse measurement of the agent’s performance - the agent’s effort can either succeed or fail. It is an important case since it is a primary focus of many works in the literature, such as [BFNW12, HSV16, DEFK21]. It is also practical in situations where the principal can either accept the agent’s work and pay the agreed upon price, or altogether reject the work and not pay.
> Theorem 2: I feel the way this result is presented in Table 1 is somewhat misleading. The impression I initially got from the table is that the problem of computing the unconditional, unrestricted optimal contract is NP-hard. Or is this actually implied by Theorem 2?
Table 1 focuses on characterization of contracts that take a simple form. In our context, simple-form contracts have the “all-or-nothing” property ($t_j \in \{0, B\}$), as opposed to paying arbitrary amounts ($t_j \in [0,B]$). Theorem 2 implies that optimal simple contracts are NP-hard to find in the general case, by reduction from 3-SAT. In contrast, the problem of computing the unconditional, unrestricted optimal contract is not NP-hard. We will make sure to clarify this better in the next version.
> Line 307, "as m increases, required budgets": unfinished sentence?
Thanks for pointing this out! This was a typo (now fixed), and the complete sentence is “As $m$ increases, required budgets and the difference between them both decrease.”
> Line 310, "the local solver is easy to run": what does the local solver do?
The local solver computes the optimal contract using a relaxed version of the MIN-BUDGET linear program (eq. (5)).
In more detail, the MIN-BUDGET linear program (eq. (5)) can in principle be solved with general-purpose numerical solvers (e.g. GLPK), but is prone to numerical instabilities; we observed these starting at around $m\approx 20$. To overcome this, the local solver relaxes some of the incentive compatibility constraints (Appendix C.2.1). This leads to a closed-form solution (given by Proposition 2) which can be computed efficiently and at scale. Under the conditions of Theorem 3, namely MLRP + concave survival, the local solver is provably optimal. We will clarify this and add a discussion of the different solvers in the revision.
—
References:
* [LR05] Lehman & Romano, 2005 - Testing Statistical Hypotheses (Section 3.4)
* [DRT18] Dütting et al., 2018 - Simple versus Optimal Contracts
* [S15] Salanié, 2015 - The economics of contracts: a primer (Chapter 5)
* [DEFK21] Dütting et al., 2021 - Combinatorial Contracts
* [HSV16] Ho et al., 2016 - Adaptive Contract Design for Crowdsourcing Markets
* [BFNW12] Babaioff et al., 2012 - Combinatorial Agency
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I have no further questions. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features | Accept (poster) | Summary: This work proposes a defense method against backdoor attacks. The proposed method involves inserting a trainable transformation layer inside a backdoor model while keeping other model parameters fixed that is supposed to purify the poisonous features while allowing benign features to pass without modification.
Strengths: 1) The proposed transformation block constitutes a single 1x1 convolution and batch normalization layer which makes it easier to adopt in any model architecture.
2) Faster training time since only the added layer is trained while keeping other model parameters fixed.
3) The proposed method provides good defense performance (low ASR) against various attacks.
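For illustration, the block described in point 1 (a 1x1 convolution followed by batch normalization) can be sketched in NumPy as below. This is a schematic re-creation under assumptions (near-identity initialization, per-batch statistics), not the authors' implementation:

```python
import numpy as np

class NeuralPolarizer:
    """Sketch of a 1x1 conv + batch norm block on (N, C, H, W) features.

    In the defense, only these parameters would be trained while every
    other model weight stays frozen. Near-identity initialization is an
    assumption for illustration.
    """

    def __init__(self, channels, rng=None):
        rng = rng or np.random.default_rng(0)
        # A 1x1 convolution is a learned linear map over the channel axis.
        self.W = np.eye(channels) + 0.01 * rng.standard_normal((channels, channels))
        self.gamma = np.ones(channels)   # batch-norm scale
        self.beta = np.zeros(channels)   # batch-norm shift

    def __call__(self, x, eps=1e-5):
        # Apply the 1x1 conv: mix channels at every spatial location.
        z = np.einsum("oc,nchw->nohw", self.W, x)
        # Batch normalization over batch and spatial axes, per channel.
        mu = z.mean(axis=(0, 2, 3), keepdims=True)
        var = z.var(axis=(0, 2, 3), keepdims=True)
        z_hat = (z - mu) / np.sqrt(var + eps)
        return self.gamma[None, :, None, None] * z_hat + self.beta[None, :, None, None]
```

With $C$ channels the block has only $C^2 + 2C$ parameters, which is why training it alone (point 2) is far cheaper than fine-tuning the full network.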
Weaknesses: 1) To filter out poisonous features from a poisonous sample, the method requires the creation of poisonous samples. While the authors claim to approximate the trigger, they utilize adversarial samples instead of poisoned ones, which are fundamentally different and not equivalent approximations.
2) Previous research [41] has utilized adversarial perturbation to address removing backdoor attacks. It is crucial to emphasize the key differentiation between [41] and the proposed NPD-TU method, aside from the difference in training the entire model versus solely training the added transformation layer.
3) The proposed method exhibits a decrease in benign accuracy, as indicated by the results presented in Table 2. This decline in accuracy is a common occurrence observed in adversarial training techniques.
4) The authors insert the transformation block before the third convolution layer of the fourth layer for PreAct-ResNet18. How is this design decision made, i.e., for a new model which might be potentially backdoored? How does the defender decide where to insert this polarizer block?
5) How does the proposed method perform under an adaptive attack, where the attacker knows the defense strategy? One approach to testing this could be training the backdoor model using adversarial samples. It would be interesting to see this result.
6) How does the proposed method affect the performance of a benign model?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) How compatible is the neural polarizer with other model architectures like Inception and DenseNet?
2) How is the defense performance against all2all attacks for attacks other than BadNets?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations:
I have raised some concerns in the weaknesses section. Those are some possible limitations of the work. The authors should revise their limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your dedicated time reading our paper and providing us with your meticulous review. Your insightful questions and concerns are greatly appreciated. Please inform us if these responses effectively address all your inquiries or if there are additional questions you'd like to raise.
**Q1. Utilize adversarial samples instead of poisoned ones, which are fundamentally different and not equivalent approximations.**
**R1:** Thanks for this insightful concern. It is essentially the same as Q2 proposed by Reviewer qQYz. Due to the space limit, we would like to refer you to **R2 to Reviewer qQYz**.
**Q2. Key differentiation between i-BAU [41] and NPD-TU.**
**R2:** Beyond the difference you note, there are three further significant differences between NPD-TU and i-BAU:
* **Formulation:** NPD-TU has a min-min formulation, while i-BAU has a min-max formulation. The reason is that NPD-TU computes targeted adversarial perturbation (T-AP), while i-BAU computes untargeted AP (U-AP). The benefit of T-AP is explained in the **first response**.
* **Algorithm:** i-BAU adopts an implicit hypergradient algorithm involving the inverse of a Hessian matrix, which is very costly and difficult to scale to large image sizes. In contrast, NPD alternately solves the inner and outer minimization using efficient PGD and SGD updates, respectively. The running time of NPD and i-BAU is presented in **Table 1**, which shows NPD is more efficient than i-BAU, especially on Tiny ImageNet.
* **Performance**: As shown in Tables 1 and 2 in manuscript, i-BAU performs poorly on defending against several attacks (especially on Tiny ImageNet dataset). In contrast, our NPD-TU and NPD show better defense performance across all attacks.
**Table 1: Running time (sec.) in comparison with state-of-the-art defenses with 2500 images on PreAct-ResNet18.**
|Defense|FP|NAD|NC|ANP|i-BAU|NPD|
:-:|:-:|:-:|:-:|:-:|:-:|:-:
CIFAR-10|1169.01|74.39|896.45|58.75|57.23|55.16
Tiny ImageNet|3357|351|42512|1692|887|332
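To make the algorithmic contrast concrete, the inner step of a min-min formulation can be sketched as targeted projected gradient descent. This is an editor's toy sketch with a generic gradient oracle, not the paper's actual objective:

```python
import numpy as np

def targeted_pgd(x, grad_fn, eps=0.3, alpha=0.05, steps=10):
    """Find an l-infinity-bounded perturbation that *decreases* a targeted
    loss (min-min), unlike untargeted PGD, which *increases* a loss
    (min-max, as in adversarial training / i-BAU)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)                                  # d(loss)/d(input)
        delta = np.clip(delta - alpha * np.sign(g), -eps, eps)  # descend, then project
    return x + delta

# Toy targeted loss: 0.5 * ||z - target||^2, so its gradient is (z - target).
target = np.ones(4)
x_adv = targeted_pgd(np.zeros(4), lambda z: z - target)
```

In the outer step, the polarizer parameters would then be updated by ordinary SGD on such perturbed inputs, so neither step needs the Hessian inverse required by i-BAU's implicit hypergradient.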
**Q3. Where to insert this polarizer block for a new backdoored model?**
**R3:** Thanks. Due to the space limit, we refer you to **R2 of Reviewer BztP** for more details, which contains an in-depth analysis and the performance of NP insertion at various layers across the VGG19, Inception-V3, and DenseNet161 networks, respectively. Briefly, NPD is robust across different network structures, and it remains effective and reliable when the NP is inserted into deep layers of the network.
**Q4. Defend against adaptive attacks.**
**R4:** Thanks for this constructive suggestion. We train the backdoored models with adversarial training (AT) to serve as the adaptive attack. The defense performance against the AT backdoored models is shown in **Table 2**.
* **NPD still performs well on AT models**, while i-BAU performs poorly. The possible reason is that i-BAU is essentially adversarial training, while NPD adopts dynamic targeted adversarial perturbation, which is different from AT.
* **Compared to the defense against backdoored models with standard training** (see Table 1 in the main manuscript), there is a slight ASR increase from 1.62% to 5.22%. However, AT models also sacrifice ACC.
**Table 2: Defense performance of NPD against adaptive attacks on CIFAR-10.**
||Backdoored||i-BAU||NPD||
-|:-:|:-:|:-:|:-:|:-:|:-:
||ACC|ASR|ACC|ASR|ACC|ASR
Blended|86.00|99.63|83.98|30.43|83.64|3.92
Input-Aware|84.98|94.99|83.17|71.12|83.13|4.47
LF|84.15|94.30|84.15|94.30|83.39|4.89
SSBA|84.34|93.32|82.74|24.54|83.38|5.22
**Q5. Influence on benign models.**
**R5:** The performance of our NPD defense on benign models trained on different datasets is presented in Table 3 ('PreAct' indicates PreAct-ResNet18 and 'VGG' indicates VGG19-BN). It shows that there is only a slight influence on benign models.
**Table 3: Defense performance on benign models.**
||CIFAR-10||GTSRB||Tiny ImageNet||
:-:|:-:|:-:|:-:|:-:|:-:|:-:
||PreAct|VGG|PreAct|VGG|PreAct|VGG
Benign Model|93.70|92.07|98.15|98.11|57.35|47.18
After Defense|92.58|91.00|98.43|98.06|52.15|47.01
**Q6. How compatible is neural polarizer with other model architectures like Inception and DenseNet?**
**R6:** The evaluations of NPD and other SOTA defenses against different attacks across Inception-V3 and DenseNet161 are shown in **Table 4**. Our NPD shows superior performance across the two networks, demonstrating the robustness of NPD under different model architectures.
**Table 4: Defense results in comparison with NC and i-BAU on Inception-V3 and DenseNet161.**
|||Backdoored||NC||i-BAU||NPD||
-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
|||ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
Inception-V3|BadNets|90.68|94.50|89.87|1.21|82.41|0.52|91.69|4.70
||Blended|93.73|99.82|93.73|99.82|83.63|8.84|93.71|0.01
||Input-Aware|91.60|98.80|91.60|98.80|91.03|22.93|91.58|0.69
||SSBA|93.33|98.30|91.56|50.73|84.74|1.92|92.23|1.13
DenseNet161|BadNets|84.33|89.68|82.69|2.82|79.49|39.63|82.06|1.91
||Blended|86.37|98.79|86.38|98.79|77.86|48.44|83.18|4.22
||Input-Aware|84.46|94.41|84.45|94.41|81.96|17.14|83.44|1.56
||SSBA|84.18|84.13|83.26|14.50|80.51|11.16|81.41|8.48
**Q7. Defense performance against all2all attacks.**
**R7:** Thanks for this constructive suggestion. The dynamic and sample-specific prediction strategy for the target label enables NPD to defend against all2all attacks. We conduct a supplementary experiment on the CIFAR-10 dataset. The low ASR in **Table 5** shows that NPD can successfully defend against all2all backdoor attacks, demonstrating the adaptability and robustness of our method.
**Table 5: Defense performance against all2all attacks on CIFAR-10 dataset with 5% benign data and 10% poisoning ratio on PreAct-ResNet18 (%).**
||Backdoored||NPD||
:-:|:-:|:-:|:-:|:-:
||ACC|ASR|ACC|ASR
Blended|91.59|83.50|91.07|6.24
LF|86.60|78.55|90.10|2.10
Input-Aware|91.91|84.80|90.53|4.88
SSBA|91.30|85.04|91.14|1.29
---
Rebuttal 2:
Title: Seeking Your Valuable Feedback
Comment: Dear Reviewer **DJTs**,
We would like to extend our appreciation for your time and valuable comments. We look forward to receiving your feedback on the points we addressed in the rebuttal. Ensuring that the rebuttal aligns with your suggestions is of utmost importance.
Thank you for your dedication to the review process.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for such a detailed response. The authors have addressed most of my concerns with detailed experimental results. Thus I have increased my score from borderline accept to weak accept.
---
Reply to Comment 2.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer DJTs,
We sincerely appreciate your thoughtful response and the time you've dedicated to reviewing our paper. We will incorporate your suggestions and insights into the revised manuscript. Thank you once again for your thorough review and your positive evaluation. Your support and input are greatly appreciated.
Sincerely,
Authors | Summary: This paper proposes a backdoor defense method that inserts a learnable neural polarizer into the backdoored model as an intermediate layer, in order to purify poisoned samples by filtering out trigger information while maintaining benign information. To remove the backdoor more effectively, the paper leverages the reversed trigger and target label.
Strengths: 1. This paper is well written and technically sound.
2. The experiments are sufficient, and its performance achieves SOTA
Weaknesses: 1. Neural Polarizer needs to reverse the backdoor trigger and the target label, which is similar to Neural Cleanse. Furthermore, prior work demonstrates that trigger reversal mainly works well on static triggers such as BadNets; the results on sample-specific triggers are poor, and the reversed trigger and target label are mostly wrong. Thus, the scope in which Neural Polarizer can be used is limited.
2. The novelty is limited. Because trigger reversal has been proposed before, the contribution of this paper is to fine-tune the model with the reversed trigger on the Neural Polarizer layer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you explain why the method works with the reversed trigger which may not be correctly reversed?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time in reading our work and positive review on our techniques and experimental results. We appreciate your insightful questions and comments.
**Q1. Neural Polarizer (NP) needs to reverse the backdoor trigger and the target label, which is similar to Neural Cleanse. The scope where NP can be used is limited. The novelty is limited.**
**R1:** Although both our NPD and NC employ adversarial techniques and target labels to fine-tune the backdoored model, NPD is novel and differs from NC in the following three aspects:
**1. Dynamic and sample-specific prediction strategy of target label.** We predict the target label for each training sample by $y_i^{'} = \arg\max_{k_i \neq y_i} \hat{f}_{k_i}(\mathbf{x}_i, \boldsymbol{\theta})$, while NC estimates a common target label for all samples. This strategy has the following strengths:
* **The predicted target label enables us to generate targeted adversarial perturbations (T-AP), which show better performance than untargeted adversarial perturbations (U-AP).** The reason is that T-APs are more effective as surrogate perturbations for the triggers. We have provided an ablation study in Table 4 of the manuscript. We also provide a comparison of the generated T-AP, U-AP, and the corresponding poisoned sample in the latent space of a backdoored model in the **supplementary PDF**, which shows that the T-AP is more visually similar to the poisoned sample than the U-AP.
* **The dynamic and sample-specific target-label prediction is applicable to both all2one and all2all attack settings** (see **Table 1** for defense against all2all attacks). Since the defender does not know whether the attack is all2one, all2all, or multi-trigger multi-target, our strategy can flexibly cope with these situations and provides better fault tolerance.
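As a sketch, the dynamic target-label rule above amounts to taking the highest-scoring class other than the true label; the function and variable names below are illustrative only, not taken from the authors' code.

```python
# Hypothetical sketch of the dynamic, sample-specific target-label rule
# y'_i = argmax_{k != y_i} f_k(x_i): for each sample, the predicted target
# label is the highest-scoring class other than the true label.

def predict_target_label(logits, true_label):
    """Return the most likely class excluding the true label."""
    candidates = [(score, k) for k, score in enumerate(logits) if k != true_label]
    return max(candidates)[1]

# Example: true class is 2; class 0 has the highest remaining score.
logits = [2.5, 0.1, 3.9, 1.0]
print(predict_target_label(logits, true_label=2))  # -> 0
```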
**2. Targeted and sample-specific adversarial perturbation.** Compared to targeted universal adversarial perturbation and the local mask strategy used in NC, our NPD adopts the targeted and sample-specific adversarial perturbation (AP) and doesn't restrict the format of AP. Therefore, NPD can cope with various types of backdoor triggers, e.g., sample-specific, or global. This is also reflected in our experimental performance.
**3. Fine-tuning Paradigm.** In contrast to NC, which needs to fine-tune the whole network, our NPD only fine-tunes the inserted transformation layer. It is **data-efficient**, performing well with very few clean samples, and **computationally efficient**, requiring less running time for defense. Please refer to Tables 6 and 7 in the manuscript for experimental details.
A systematic and detailed analysis of NPD's mechanism and novelty is presented in the first **Common Response**.
**Table 1: Defense performance under various all2all attacks on CIFAR-10(%).**
|Attack|Backdoored ACC|Backdoored ASR|NPD ACC|NPD ASR|
|:-:|:-:|:-:|:-:|:-:|
|Blended|91.59|83.50|91.07|6.24|
|LF|86.60|78.55|90.10|2.10|
|Input-Aware|91.91|84.80|90.53|4.88|
|SSBA|91.30|85.04|91.14|1.29|
**Q2. Why does the method work with the reversed trigger which may not be correctly reversed?**
**R2:** Thanks for this insightful concern. Since it is difficult to access or exactly recover the unknown trigger, one feasible solution is to find surrogates or approximations, such as the adversarial perturbation (AP) used in our NPD, i-BAU, and NC. Moreover, we argue that the targeted AP (T-AP) adopted in NPD is a good surrogate for the trigger and better than the untargeted AP (U-AP), in three ways:
* **Intuitively, a T-AP is more likely to be close to the trigger than a U-AP,** because the set of T-APs is a subset of the set of U-APs, and the trigger is a particular T-AP.
* **We visualize and quantitatively measure the distance between the adversarial example with T-AP (i.e., the targeted AE) and the poisoned sample in Fig. 1 of the supplementary PDF.** It shows that the T-AP is more visually similar to the trigger than the U-AP. The distance between the targeted AE and the poisoned sample in the latent space of the backdoored model is much smaller than that between the untargeted AE and the poisoned sample. This verifies that the T-AP serves as a better surrogate for the poisoned sample.
* **T-AP makes a significant contribution to backdoor defense.** To investigate the effects of T-AP and U-AP on backdoor defense, we compare NPD with two variants, NPD-UP and NPD-UU (see Table 4 in the manuscript). There is an obvious gap between NPD and NPD-UP, i.e., an average ASR of 1.99% vs. 8.32%, demonstrating that T-AP makes a significant contribution to the success of NPD.
We hope the above points address your concern. Thanks again for your constructive comment.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed rebuttal; most of my concerns are resolved. Thus, I raise my scores.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Dear Reviewer qQYz,
Thank you very much for your time and for your thoughtful feedback. We sincerely appreciate your diligence in evaluating our work. We are pleased to learn that our detailed rebuttal has addressed most of your concerns and has positively influenced your assessment of the manuscript. All suggested experiments and analysis will be added into the revised manuscript.
Thank you again for your time, consideration, and invaluable feedback.
Sincerely,
Authors
---
Rebuttal 2:
Title: Seeking Your Valuable Feedback
Comment: Dear Reviewer **qQYz**,
We wish to express our gratitude for your dedicated time and insightful comments. We are eagerly awaiting your feedback on the points we addressed in the rebuttal. Ensuring your satisfaction with our rebuttal is of utmost importance to us.
We sincerely appreciate your commitment to the review process and value the time you have dedicated.
Sincerely,
Authors | Summary: The paper proposes a lightweight and effective backdoor defense by inserting a trainable neural layer block. Without modifying original backdoor model, the proposed method can remove backdoor behaviors by filtering poisoned features via the trainable neural block. The authors conduct sufficient experiments to demonstrate the effectiveness of proposed method.
Strengths: 1. The proposed method is simple but effective. The inserted neural block only includes one convolution layer followed by a BN layer. Training this layer block is not time-consuming.
2. The experiments are very sufficient. The comparison experiments are conducted with six backdoor defenses against seven backdoor attacks on three datasets. Analysis experiments are also conducted including the effectiveness of losses, poisoning ratios and clean ratios.
Weaknesses: 1. The paper does not explain how to choose the layer at which to insert the neural polarizer, and the proposed method may depend heavily on the network architecture. Please also provide more results using additional architectures, e.g., VGG.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Does this proposed method work for a network without batch normalization layer?
2. Please explain more about how to choose layers to insert neural polarizer.
3. Could the authors visualize features before and after neural polarizer to qualitatively analyze the effectiveness (e.g. using GradCam)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for carefully reading our paper. We are encouraged by the positive comments of simple but effective method and sufficient experiments.
**Q1. More results using more network architectures, e.g., vgg. Does this proposed method work for a network without batch normalization layer?**
**R1:** We appreciate your valuable question. We report the defense results of NPD in comparison with NC and i-BAU in **Table 1**. They show that NPD works for VGG19, which does not have batch normalization layers. Please refer to **Table 2 in R2** for more experimental results, which cover NPD's performance on additional architectures, i.e., Inception-V3 and DenseNet161. In brief, our NPD approach demonstrates robustness and effectiveness across a wide range of network architectures.
**Table 1: Performance of NPD in comparison with NC and i-BAU on VGG19 network.**
|Attack|Backdoored ACC|Backdoored ASR|NC ACC|NC ASR|i-BAU ACC|i-BAU ASR|NPD ACC|NPD ASR|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|BadNets|89.36|95.93|87.90|49.66|14.61|0.43|87.73|4.06|
|Blended|90.17|99.12|89.09|96.72|88.61|59.86|87.95|5.78|
|Input-Aware|77.67|94.58|74.45|4.02|72.15|16.22|77.11|1.47|
|SSBA|89.48|91.86|89.48|91.86|88.08|3.74|88.70|4.31|
**Q2. How to choose layers to insert neural polarizer.**
**R2:** Thanks; we acknowledge that the choice of layer is important for NP. We have in fact investigated the performance of NP insertion into different layers of PreAct-ResNet18 in the manuscript. To further investigate the impact of layer selection, we conduct experiments on three more backbones in **Table 2**, i.e., VGG19, Inception-V3, and DenseNet161, besides the PreAct-ResNet18 and VGG19-BN used in our paper. From Table 2 below and Figure 4 in our paper, we find that inserting NP into one of the last three layers achieves remarkable performance (high ACC and low ASR), while accuracy drops when shallow layers are selected.
Therefore, we recommend the last feature layer, which yields better defense performance and maintains robustness against backdoor attacks.
**Table 2: Performance of neural polarizer insertion in different layers across various network architectures(%).**
|Network|Attack|Shallow ACC|Shallow ASR|Middle ACC|Middle ASR|Third-to-last ACC|Third-to-last ASR|Penultimate ACC|Penultimate ASR|Last ACC|Last ASR|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|VGG19|BadNets|77.58|2.83|87.41|0.40|88.68|5.17|87.28|3.81|87.73|4.06|
|VGG19|Blended|77.59|17.19|88.89|25.53|89.27|9.78|89.31|7.40|87.95|5.78|
|VGG19|Input-Aware|70.31|1.47|70.99|20.37|74.30|7.13|72.59|7.77|77.11|1.47|
|VGG19|SSBA|72.76|0.67|88.45|20.20|89.05|10.53|88.63|6.12|88.70|4.31|
|Inception-V3|BadNets|11.37|92.84|90.76|10.82|91.49|8.54|90.62|2.51|91.69|4.70|
|Inception-V3|Blended|10.00|100.00|92.70|69.77|93.71|8.91|93.05|4.51|93.71|0.01|
|Inception-V3|Input-Aware|19.22|0.38|90.52|10.50|91.65|2.46|91.04|5.30|91.58|0.69|
|Inception-V3|SSBA|10.00|100.00|92.21|89.81|93.35|7.52|92.88|6.13|92.23|1.13|
|DenseNet161|BadNets|72.78|44.22|81.24|80.09|81.32|9.19|82.46|8.48|82.06|1.91|
|DenseNet161|Blended|75.09|3.79|82.87|85.98|82.71|13.83|83.75|9.16|83.18|4.22|
|DenseNet161|Input-Aware|77.20|17.11|83.65|43.57|82.81|3.56|82.02|1.76|83.44|1.56|
|DenseNet161|SSBA|70.99|9.57|81.57|30.27|82.72|7.09|82.06|8.41|81.41|8.48|
**Q3. Visualize features before and after neural polarizer by Grad-CAM.**
**R3:** To better understand the effect of the neural polarizer (NP) in purifying poisoned features, we visualize benign samples, poisoned samples, and their Grad-CAM visualizations before and after NP in **Fig. 2-4 in the supplementary PDF**. We visualize the BadNets, Blended, and Trojan attacks since they use visible triggers. As the figures show, NP redirects the network's attention from the triggers to the subject of the image, showing that NP successfully removes backdoors from backdoored models.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, and most of my concerns are solved. Thanks!
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer BztP,
Thanks for your feedback. We are strongly encouraged by your recognition of our efforts. All suggested experiments and analysis will be added into the revised manuscript. Thanks again for your valuable time and constructive comments.
Sincerely,
Authors | Summary: This paper proposes a novel defense method to filter trigger information from poisoned samples. It inserts a learnable intermediate layer (called neural polarizer) into the backdoored model, and proposes bi-level optimization solution to approximate perturbation and target label. Experimental results demonstrate the effectiveness of Neural Polarizer over other defense baselines.
Strengths:
1. The paper is well written and easy to follow.
2. The proposed Neural Polarizer is interesting.
3. The experiments are comprehensive.
Weaknesses: 1. The paper proposes NPD (Neural Polarizer based backdoor Defense) as the normal setting. It also proposes NPD-TP (assuming the target label is known) and NPD-TU (assuming full access to benign samples). NPD-TP and NPD-TU relax the limitation, which is somewhat unfair compared to other defense baselines. However, based on Table 1 and Table 2, most of the best performance comes from NPD-TP/NPD-TU, while NPD does not perform very well in some situations.
More specifically, in Table 1 (defense on CIFAR-10), NPD does not always perform best. For example, the ASR under WaNet is 0.80 (larger than WaNet with FP and NAD), and the ASR under Trojan is 6.51 (larger than Trojan with i-BAU and EP). A similar phenomenon appears in Table 2 (defense on Tiny ImageNet).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors:
1. Regarding approximating the perturbation $\Delta$ and target label T (Eq. 5): in the inner minimization, the algorithm first estimates the target label, then applies PGD to generate perturbations. I assume this step is very important, because it determines the outer minimization. What do the estimated perturbations look like? Are they similar to the real triggers, or are they just adversarial perturbations? Are there any experiments/visualizations to elaborate on that?
2. How do you choose which layer to insert the neural polarizer into? Is your method robust to the choice of layer?
3. What if there are more labels, for example CIFAR-100 or even more? Does the proposed method still work?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to show our sincere appreciation to the reviewer for the valuable time and constructive comments on our submission.
**Q1. The variants NPD-TP (assume the target label is known) and NPD-TU (assume having full access to benign samples) relax the limitation, and is kind of unfair compared to other defense baselines. Most of best performance comes from NPD-TP/NPD-TU, while NPD does not perform very well under some situations.**
**R1:** Thanks. We would like to explain from the following three points.
* **Assumption of NPD-TP and NPD-TU.** We would like to clarify that NPD-TP and NPD-TU share the same assumption that the target label is known. We think the sentence at Line 186, "*the targeted universal adversarial perturbation (TUAP) for all benign samples, dubbed NPD-TU*", may have led the reviewer to think that NPD-TU assumes full access to benign samples. In fact, it refers to access to all benign samples used for fine-tuning, rather than all benign samples in the original training dataset. This is the same for NPD, NPD-TP, and all other compared defense methods.
* **The role of NPD-TP and NPD-TU.** The intrinsic difference between NPD and NPD-TP/NPD-TU is the target label: the former obtains the target label via a dynamic strategy of target label prediction, while the latter assumes to have the true target label. Thus, NPD-TP/NPD-TU serves as the reference to study the effect of the dynamic prediction strategy in NPD. We would like to refer you to the analysis of its effect and mechanism in the **first common response** posted at the top position.
* **Performance of NPD.** If we remove NPD-TP and NPD-TU from Tables 1 and 2 in the main manuscript, NPD still has a significant performance advantage over all other compared defense methods. Specifically, in Table 1 the most competitive method is i-BAU, whose average ACC/ASR/DER are 88.95/6.11/92.42%, while those of NPD are 90.75/1.62/95.56%; in Table 2 the most competitive method is ANP, whose average ACC/ASR/DER are 42.64/15.87/80.03%, while those of NPD are 50.55/2.17/90.84%. Although NPD does not perform best in every case, we believe its substantial overall advantage is enough to demonstrate its effectiveness.
We hope the above points address your concern; we will clearly clarify the assumption and role of NPD-TP and NPD-TU in the revised manuscript. Thanks again for your constructive comment.
**Q2. What do the estimated perturbations look like? Are they just adversarial perturbations?**
**R2:** Thank you for your insightful question. We would like to refer you to the **common response** for a comprehensive analysis of the mechanism of NPD, where **visualization of the estimated perturbations** is provided in **Fig. 1** in the **supplementary PDF**.
More importantly, we would like to briefly summarize some important information for your reference. The visualization includes both the targeted adversarial perturbation (T-AP) and the untargeted adversarial perturbation (U-AP), as well as the corresponding adversarial examples (T-AE, U-AE). To explain the effect of T-AP, we also present the $L_2$ distance between T-AE, U-AE, and poisoned samples in the latent space. From the visualization, we find that
1. **Visualization:** T-AP exhibits more visual similarity to the trigger both in the input space and latent space compared to U-AP, showing targeted perturbations are more effective as surrogate perturbations to the triggers.
2. **Quantitative analysis:** In latent space, the $L_2$ distance between T-AE and the poisoned sample is 36.4, which is significantly smaller than the distance between U-AE and the poisoned sample, i.e., 93.84. This quantitative analysis further supports the efficacy of our targeted adversarial perturbations as a better surrogate for unknown triggers.
Overall, both the visualization and the quantitative analysis provide substantial evidence that NPD produces targeted adversarial perturbations, which differ from ordinary adversarial perturbations. We hope the above points address your questions.
**Q3. How to choose the layer to insert the neural polarizer? Does your method robust to the choice of layer?**
**R3:** We appreciate your question. Due to space limit, we refer you to **R2 of Reviewer BztP** for more details, which contains an in-depth analysis and performance of NP insertion in various layers across VGG19, Inception-V3, and Densenet161 networks, respectively. Briefly, NPD is robust across different network structures. And, it remains effective and reliable when the NP is inserted into deep layers of the network.
**Q4. What if there are more labels, e.g., CIFAR-100?**
**R4:** Thanks for this suggestion. Actually, we have evaluated NPD on Tiny ImageNet, which has 200 classes. The results on CIFAR-100 in comparison with two SOTA defenses are presented in **Table 1**. The average (ACC, ASR) of NPD is (65.49%, 1.96%), while those of NC and i-BAU are (67.4%, 34.41%) and (63.35%, 12.46%), respectively. Our NPD shows significant advantages.
**Table 1: Comparison with the state-of-the-art defenses on CIFAR-100 dataset with 5% benign data and 10% poisoning ratio on PreAct-ResNet18 (%).**
|Attack|Backdoored ACC|Backdoored ASR|NC ACC|NC ASR|i-BAU ACC|i-BAU ASR|NPD ACC|NPD ASR|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|BadNets|67.23|87.43|66.05|0.14|60.37|0.04|64.27|0.06|
|Blended|69.28|99.59|67.55|98.94|63.75|0.79|66.30|4.43|
|LF|68.82|94.53|67.67|21.48|63.85|0.18|65.37|0.22|
|SSBA|68.97|96.42|67.37|78.64|63.09|28.91|65.57|0.83|
|Trojan|68.93|100.00|66.00|0.10|63.73|0.86|65.13|1.84|
|WaNet|69.83|98.46|69.76|7.14|65.31|43.96|66.27|4.36|
|Avg|68.84|96.07|67.40|34.41|63.35|12.46|65.49|1.96|
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Thanks for the authors' response. You have resolved most of my concerns. I have increased the score.
---
Reply to Comment 1.1.1:
Title: Thank you for your response.
Comment: Thank you very much for your response and kind words. We are truly appreciative to hear that most of your concerns have been addressed. We will add above analysis into the revised manuscript. Thanks again for your valuable time and comments. | Rebuttal 1:
Rebuttal: # Common Response
We sincerely thank all reviewers for their time and constructive comments.
**Q1. Analysis of the mechanism and novelty of our neural polarizer defense (NPD) method.**
**R1:** We aim to present **a systematic analysis** about NPD's mechanism and novelty covering **three critical components**:
**1. Dynamic and sample-specific target label prediction strategy**
* **Definition and contrast with existing methods.** We estimate the target label for each training sample in inner-minimization step (See Line 177-178 in manuscript). **In contrast**, Neural Cleanse (NC) estimates a common target label for all samples.
* ***Target label prediction accuracy.*** In **Table 1**, we present accuracies of dynamic target label prediction on backdoor attacks at the first mini-batch. Most prediction accuracies are much higher than random guess (10\%). Although the predicted target label isn't 100% correct, it remains effective for backdoor removal, as analyzed later.
* **Effect & analysis.** There are two main advantages to the dynamic strategy:
* **The predicted target label enables us to generate targeted adversarial perturbation (T-AP).** As analyzed later, T-AP is a better surrogate for the unknown trigger than the untargeted adversarial perturbation (U-AP), enhancing backdoor removal efficacy. Despite the partial accuracy of predicted target labels, they enable robust backdoor defense. Incorrectly predicted labels guide us to generate U-AP (aimed at other classes). Thus, the generated adversarial perturbations of all fine-tuning samples in NPD are a **mixture of T-AP and U-AP**.
* **The sample-specific target label prediction is applicable to all2one and all2all attack settings.** In practice, defenders lack certainty about attack types (all2one or all2all), risking suboptimal defenses due to erroneous guesses or detection. Our sample-specific target label prediction avoids this risk. Thus, our method performs well against both all2one and all2all attacks (see **Table 2**).
**Table 1: Accuracy of dynamic target label prediction on CIFAR-10(%).**
|ATTACK|BadNets|Blended|Input-Aware|LF|SSBA|Trojan|WaNet|Avg|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ACC|86.33|33.05|72.66|43.75|37.11|28.91|84.77|55.22|
**Table 2: Defense performance under different all2all attacks on CIFAR-10(%).**
|Attack|Backdoored ACC|Backdoored ASR|NPD ACC|NPD ASR|
|:-:|:-:|:-:|:-:|:-:|
|BadNets|91.89|74.42|91.41|0.89|
|Blended|91.59|83.50|91.07|6.24|
|Input-Aware|91.91|84.80|90.53|4.88|
|SSBA|91.30|85.04|91.14|1.29|
**2. Targeted and sample-specific adversarial perturbation**
* **Definition and contrast with existing methods.** We generate a T-AP for each training sample in the inner-minimization step (Lines 178-179 and Eq. (5) in the manuscript). **In contrast**, i-BAU generates U-APs, and NPD's min-min formulation contrasts with i-BAU's min-max one. Besides, NC adopted a targeted universal adversarial perturbation within a local patch to approximate the unknown trigger.
* **Effect & analysis.**
* **T-AP is a better surrogate to the unknown trigger than U-AP:**
* **Intuitively**, T-AP set is a sub-region of U-AP set, and the unknown trigger is a particular T-AP. Thus, using the projected gradient descent (PGD) adversarial attack method, the generated T-AP is more likely to be close to the trigger than the generated U-AP.
* **Visualization.** We visualize the trigger as well as the generated T-AP and U-AP. As shown in **Fig. 1 in the supplementary PDF**, the T-AP is more visually similar to the trigger.
* **Quantitative distance.** We calculate the $L_2$ distance in the latent space of the backdoored model among the targeted adversarial example (AE), the untargeted AE, and the poisoned sample with the trigger. The distance between the targeted AE and the poisoned sample (36.4) is much smaller than that between the untargeted AE and the poisoned sample (93.84).
* **T-AP & U-AP Mixture for Effective Defense.** As shown above, NPD generates a mixture of T-APs and U-APs. In contrast, in the variants NPD-TP (see Lines 182-185 in the manuscript) and NPD-UP (see Line 259), the generated perturbations are purely T-AP and U-AP, respectively. In Tables 1 and 2 of the main manuscript, there is a slight gap between NPD and NPD-TP. In Table 4 of the main manuscript, there is an obvious gap between NPD and NPD-UP, i.e., an average ASR of 1.99% vs. 8.32%. These two comparisons demonstrate that a mixture of T-AP and U-AP is enough to achieve defense performance close to pure T-AP, and that the T-AP part makes a significant contribution.
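For intuition, here is a minimal targeted-PGD sketch (our own illustration, not the authors' implementation) on a toy linear classifier: each inner step ascends the target-class logit, and the perturbation is projected back into an $L_\infty$ ball of radius `eps`, mirroring the role of the inner minimization in Eq. (5). All names are illustrative.

```python
import numpy as np

# Targeted PGD on a linear model (logits = W @ x), standing in for the
# backdoored network. For a linear model, the gradient of the target-class
# logit w.r.t. x is simply W[target], so each step moves x toward the
# target class; the clip keeps the perturbation inside the L_inf ball.

def targeted_pgd_linear(W, x, target, eps=0.5, step=0.1, iters=20):
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(iters):
        grad = W[target]                         # d(logit_target)/dx
        x_adv = x_adv + step * np.sign(grad)     # ascend the target logit
        x_adv = x0 + np.clip(x_adv - x0, -eps, eps)  # project into the ball
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
x_adv = targeted_pgd_linear(W, x, target=1)
print((W @ x_adv)[1] > (W @ x)[1])  # -> True: the target logit increased
```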
**3. Neural polarizer (NP)**
* **Definition and contrast with existing methods.** NP is a lightweight linear transformation layer (see Section 3.2 and Fig. 3 in manuscript). It's inserted into the backdoored model. Only its parameters are fine-tuned, while the parameters of the original backdoored model are fixed. **In contrast**, most existing fine-tuning based defense methods (e.g., i-BAU, NAD) don't modify the structure and fine-tune the whole model.
* **Effect & analysis.** Only fine-tuning NP's parameter yields two key benefits:
* **Data efficient.** Table 6 and Lines 281-285 in the manuscript showcase the performance of NPD, i-BAU, and ANP with little clean data (500, 250, and 50 samples). NPD still performs well with few clean samples, while i-BAU performs much worse; thus, NPD is much more data efficient.
* **Computationally efficient.** Table 7 in manuscript and the following **Table 3** show the running time of different defense methods on two datasets. It shows the computational efficiency of NPD.
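As a toy illustration of this third component, the sketch below inserts an identity-initialized linear "polarizer" between two frozen stages of a model; only this inserted layer would be trained. The class and variable names are our own, and a plain matrix stands in for the conv + BN block used in the actual method.

```python
import numpy as np

# Conceptual sketch of the neural polarizer (NP): a lightweight linear
# transformation inserted between two frozen stages of a backdoored model.
# Only the NP's parameters are trainable, which is why the defense is
# data- and compute-efficient. The NP starts as the identity, so inserting
# it leaves the model's behavior unchanged before fine-tuning.

class ToyModelWithPolarizer:
    def __init__(self, W1, W2, dim):
        self.W1, self.W2 = W1, W2        # frozen backbone weights
        self.polarizer = np.eye(dim)     # trainable NP, initialized to identity

    def forward(self, x):
        h = self.W1 @ x                  # frozen feature extractor
        h = self.polarizer @ h           # NP purifies the features
        return self.W2 @ h               # frozen classifier head

rng = np.random.default_rng(1)
model = ToyModelWithPolarizer(rng.normal(size=(4, 4)), rng.normal(size=(3, 4)), dim=4)
x = rng.normal(size=4)
baseline = model.forward(x)
# At initialization the NP is the identity, so outputs match the original model:
print(np.allclose(baseline, model.W2 @ (model.W1 @ x)))  # -> True
```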
**In summary, the above three innovative components clearly distinguish our NPD from existing related methods and make critical contributions to its superior defense performance in both effectiveness and efficiency.**
**Table 3: Running time (sec.) in comparison with state-of-the-art defenses with 2500 images on PreAct-ResNet18.**
|Defense|FP|NAD|NC|ANP|i-BAU|NPD|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR-10|1169.01|74.39|896.45|58.75|57.23|55.16|
|Tiny ImageNet|3357|351|42512|1692|887|332|
Pdf: /pdf/06b8cac6fc9e243b2822bfb37d32d7f975cdb79f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Auditing for Human Expertise | Accept (spotlight) | Summary: While humans and machines often make differing decisions, it is unclear whether humans base these decisions on extra factors or information unavailable to machines. To understand this situation, the paper proposes a statistical test to determine whether expert predictions are independent of the labels after accounting for the input features. This indicates whether humans rely on different information and, in a sense, add value unseen by a model. To evaluate this, the paper analyzes doctor predictions in a hospital admittance system and finds that doctors tend to use additional information.
Strengths: 1. Statistical approach is well motivated and clearly demonstrates how to test for humans relying on extra information
2. Test allows for flexibility due to choice of L, allowing for different statistical properties
3. Method is evaluated on a real-world hospital dataset, and the connection between the evaluation and the methodology is clear
Weaknesses: 1. Method relies on dataset containing pairs that are close in input space, yet distinct in feature space; such a situation might not be generalizable
2. Evaluation is only done on one real-world dataset; a controlled evaluation of the test would give better insights into how the test performs and the impact of different parameters
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What are other choices for the function F, and would they be equally valid?
2. For the experiment with hospital admission data, would the results change if instead of using the GBS score, the individual factors that were used as inputs to the score were used?
3. In table 2, what does mismatched pairs mean?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Authors discuss limitations fairly thoroughly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Our responses are provided below.
**In response to**
> "What are other choices for the function F, and would they be equally valid?"
As noted in Section 3, our test is valid for any choice of $F(\cdot)$. However, the *power* of the test will certainly depend on the choice of $F(\cdot)$. As is typical for hypothesis tests, the degree of this dependence is hard to characterize analytically since it depends on the specific distribution of $(X, Y, \hat{Y})$. Our particular choice is a natural one however, as it is well-powered to detect the specific form of dependence we care about -- whether or not the unobservables $U$ actually improve human accuracy with respect to a known measure of accuracy. Please also see our response to 7cTX for additional discussion regarding the *choice* of loss function.
**In response to**
> "In table 2, what does mismatched pairs mean?"
"Mismatched pairs" refers to the number of pairs of observations chosen by ExpertTest which are not exactly identical. In the context of our case study, it means that we chose a pair of patients who did not have identical Glasgow-Blatchford scores, which risks incurring additional type I error as described in Section 4. The definition of "mismatched pairs" is given in Section 5, but we will clarify this in our final draft by including a description of each column in the table caption.
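For intuition about the pairing idea, here is a hedged sketch (our own construction, not the paper's exact ExpertTest procedure): under the null that forecasts are independent of outcomes given the features, exchanging the two forecasts inside a pair of observations with identical features should not systematically change the loss, so we can compare the observed loss against a permutation distribution of within-pair swaps.

```python
import random

# Illustrative within-pair swap test. Each element of `pairs` holds two
# (forecast, outcome) tuples whose feature vectors are assumed identical.
# Under the null, swapping forecasts within a pair is distribution-
# preserving, so a low p-value suggests the forecasts carry information
# about the outcomes beyond the matched features.

def squared_loss(forecast, outcome):
    return (forecast - outcome) ** 2

def paired_swap_pvalue(pairs, n_perm=2000, seed=0):
    """pairs: list of ((f1, y1), (f2, y2)) with matched features."""
    rng = random.Random(seed)

    def total_loss(swaps):
        loss = 0.0
        for ((f1, y1), (f2, y2)), swap in zip(pairs, swaps):
            if swap:
                f1, f2 = f2, f1  # exchange forecasts within the pair
            loss += squared_loss(f1, y1) + squared_loss(f2, y2)
        return loss

    observed = total_loss([False] * len(pairs))
    hits = sum(
        total_loss([rng.random() < 0.5 for _ in pairs]) <= observed
        for _ in range(n_perm)
    )
    return (1 + hits) / (1 + n_perm)  # small p-value: forecasts add signal

# Forecasts that perfectly track outcomes within pairs give a small p-value:
pairs = [((1.0, 1.0), (0.0, 0.0)) for _ in range(20)]
print(paired_swap_pvalue(pairs))
```

Note that the choice of `squared_loss` plays the role of the accuracy measure discussed in the response above; any loss could be substituted, which affects power but not validity.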
**In response to**
> "For the experiment with hospital admission data, would the results change if instead of using the GBS score, the individual factors that were used as inputs to the score were used?"
Excellent question -- please see our response to 7cTX for a detailed discussion of this point. The results of this experiment are included in the attached figures.
**In response to**
> "Evaluation is only done on one real-world dataset; a controlled evaluation of the test would give better insights into how the test performs and the impact of different parameters"
This concern is addressed in our response to all reviewers (see above).
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: Thank you for your response. I have re-read the Appendix, and the synthetic experiments there are indeed what I was looking for, so thank you. I wanted a little bit more clarification on weakness #1 (whether data points that are close in feature space but far in label space exist). You answered this a bit in the general rebuttal, but I was wondering if you clarify why this generalizes?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. First, we should note that our test recovers (effectively) exact type I error control whenever we succeed in finding pairs of identical observations (by Theorem 1 and the definition of $\varepsilon_{n,L}^*$); this is true in both low and high dimensional feature spaces and does not rely on assumptions about the smoothness of the distribution. Of course, such pairs are less likely to exist if we are only given finite samples in high dimensional feature spaces (and occur with probability 0 in continuous feature spaces), at which point we assume that nearby observations in the feature space induce similar distributions over the forecast $\hat{Y}$ (“generative model (12)” in Section 4). This assumption is motivated by the natural intuition that humans are unlikely to finely distinguish between very similar observations, particularly in high dimensional spaces. We do not however have a way of verifying this assumption, and you are correct that we cannot make any guarantees about type I error if (1) we fail to find identical pairs and (2) the distribution does not satisfy this smoothness assumption. The fact that we need such an assumption is not surprising, as conditional independence testing is intractable in full generality (see [1] below).
That said, there is no reason to believe that our assumption is *less* likely to hold in high dimensions; indeed, to take 7Ctt’s example, images which are very similar at a pixel level may look *identical* to a human expert. Furthermore, images which are quite different in pixel space may *also* induce similar distributions over forecasts, if these images are similar in some lower dimensional latent space which is relevant for human decision making. However, we may not necessarily know what the correct latent space is, or whether one exists, and thus we can only guarantee type I error control (assuming we fail to find identical pairs of observations) if a given forecasting task satisfies our modeling assumptions.
[1] Shah and Peters, 2018. “The Hardness of Conditional Independence Testing and the Generalised Covariance Measure” | Summary: This paper proposes a statistical framework for measuring whether, in the context of algorithmic predictions, human experts incorporate valuable information in their decision making that is unknowable to the algorithm. The authors formalize this question as a simple hypothesis test: “are human expert predictions independent from the outcome variable, when conditioned on the feature vector”. The proposed statistical test for “presence of human expertise” is straightforward, drawing on prior literature on testing conditional independence, and provides interpretable p-values. The paper then uses this framework to analyze real-world medical data from an emergency department of a large hospital, showing that physicians do in fact incorporate information above and beyond that captured by a standard algorithmic screening tool.
Strengths: The main strengths of this paper are in it’s simplicity, clear exposition, and scoping of a well-defined problem that it tries to solve. The authors’ give an elegant framework for formally defining the problem of measuring valuable human expertise and their proposed test is quite intuitive. The paper presents all its ideas in a concise manner and also discusses its limitations quite candidly.
I also like the thoroughness of the experiment conducted by the authors—it seems well executed and the results are compelling evidence for the validity of their statistical test.
Weaknesses: Some weaknesses of this work:
- From an algorithmic/technical standpoint, this paper uses a straightforward notion of conditional independence to define the problem, and a simple binned conditional independence test to solve it. This simplicity is not a bad thing, but it is worth noting that the main contribution of this work doesn’t present a novel technique or technical insight.
- The results and experiments of this paper would likely not extend to a high-dimensional setting. Their current experiment uses discrete, scalar features. The authors discuss this in their limitations. A concrete example: how would a test like this work for, say, radiology images, where the human predictive distribution is likely not smooth w.r.t the $\ell_2$ metric.
- I understand it’s not an easy task, but this paper would be much stronger if there were additional experiments where this technique was employed to understand the interplay between human and algorithmic decision making.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In Section 2, first paragraph: why can you assume X is independent of U, without a loss of generality?
- I would be interested in seeing this used in settings where algorithmic predictions are close to replacing human predictions. Radiology images are the immediate example that comes to mind.
- Could your test (or a small modification) detect if human experts are using additional information in a negative way? For example, could you use it to detect biasedness in college admissions or medical treatment choices because the expert is relying on information outside of the set of features they should be looking at?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, good discussion of limitations and prior work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Our responses are provided below.
**In response to**
> "could your test (or a small modification) detect if human experts are using additional information in a negative way?''
1. Yes, this algorithm could absolutely be used in such a setting, though note that we require that outcomes $Y$ are not causally influenced by the predictions $\hat{Y}$. This precludes the possibility that even observing the outcome is contingent on the human's decision, as is likely the case in e.g. college admissions. This is discussed in Section 6 and further in Appendix A.
2. With that caveat, we could certainly test whether human forecasts are independent of sensitive attributes (e.g., race) after conditioning on some set of `allowable' features. This would be very similar to the definition of conditional statistical parity given in "Algorithmic decision making and the cost of fairness'' (Corbett-Davies et. al, 2017). Alternatively, one could test whether the predictions $\hat{Y}$ and sensitive attributes are independent conditional on the outcome $Y$, as in "Equality of Opportunity in Supervised Learning'' (Hardt et. al, 2016).
3. Depending on the quantity of interest -- for example, whether forecasts are *unrelated* to race or whether they are systematically *lower* for a particular race -- one might consider modifying our test to be a two-tailed rather than single-tailed hypothesis test.
**In response to**
> "In Section 2, first paragraph: why can you assume X is independent of U, without a loss of generality?"
This concern is addressed in our response to all reviewers (see above).
**In response to**
> "The results and experiments of this paper would likely not extend to a high-dimensional setting."
This concern is addressed in our response to all reviewers (see above).
**In response to**
> "I understand it’s not an easy task, but this paper would be much stronger if there were additional experiments where this technique was employed to understand the interplay between human and algorithmic decision making."
This concern is addressed in our response to all reviewers (see above).
---
Rebuttal Comment 1.1:
Comment: Thank you for your helpful responses! I have increased my review rating to a 7 due to the clarifications and the new results you posted above. | Summary: This paper proposes a method for determining whether a human expert is usefully using outside information that a model does not incorporate in order to make decisions. The goal is to test whether complementarity, humans working with models, is possible for a given task. The paper sets forth an algorithm, ExpertTest, provides some theoretical guarantees, and uses emergency room admissions as a case study for the technique. In the case study, the method indicates that doctors are making use of external information that is not captured in a commonly used risk score.
Strengths: The main strength of this paper is the novel problem that it seeks to solve. Understanding whether and how humans can add their expertise on top of automated decision systems is an important goal, and it is often understudied. It is a creative approach to a crucial problem.
Weaknesses: The primary weakness of the paper, in my opinion, is that the problem setup focuses on variables, or features, rather than also considering the functional form for the prediction itself. Fundamentally, if we are comparing the performance of something like the Glasgow-Blandford score (GBS) to humans, how do we know that the human is not using the same exact input features $X$ as the GBS, but just has a better way of mapping those $X$ to the prediction we are interested in? Why does $U$ have to exist at all for the human to be lending their “expertise” to the problem? That is, let’s say the GBS output is $\tilde{Y} = \tilde{f}(X)$, and the human output is $\hat{Y} = \hat{f}(X)$. If $\hat{f}$ is closer to the actual generating function in the ground truth than $\tilde{f}$, then the human could be “adding expertise” without actually using additional information as defined in this paper. Perhaps the human’s training would help them map these variables better than the GBS algorithm does. If I am missing something here, please correct me, but it’s very unclear to me why the method proposed leads to the conclusion that the expert is using additional features to make a decision.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Early in the paper, the authors equate $X$ with features, such as input features for a model. However, in the case study, the authors define $X$ as a GBS score, which is the output of a risk model. I think the notation needs to more carefully distinguish between what is an input to a model and what is an output of a model, and this may be part of the confusion that I express above in the weaknesses section. Could you clarify this point?
2) In the case study, $\hat{Y}$ is defined as whether the patient is admitted, and the ground truth $Y$ is defined as whether they suffered one of three adverse outcomes. Are these variables correlated? If a patient is admitted, are they less likely to suffer an adverse outcome? It seemed like this might actually be a setup where the “prediction” and “ground truth” are not independent of each other.
3) The third adverse outcome mentioned is initial discharge and readmission within 30 days. Does this mean that anyone who suffered that outcome had a $\hat{Y}$ of 0? I am just wondering if “initial discharge” means that the ER physician made the choice to not admit them.
4) How sensitive is ExpertTest to the choice of loss function?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the paper should more carefully address the assumptions in the initial problem setup, specifically what I lay out in the weaknesses section. What model of human expertise is being considered when the assumption is that any additional information is encapsulated in the $U$ variables?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Our responses are provided below.
**In response to**
> "The primary weakness of the paper…is that the problem setup focuses on variables, or features, rather than also considering the functional form for the prediction itself."
**We reproduced our original case study using the clinical features which make up the Glasgow-Blatchford Score. The results are attached with our response and discussed below.** We will include these results alongside our old ones in the final draft of Section 5. We respectfully disagree that the case study in our original draft is invalid however, and first provide a clarification:
1. Recall that we seek to test whether the predictions $\hat{Y}$ incorporate information other than the observable features $X$ to forecast an outcome $Y$.
2. In Section 5, we instantiate this framework where $X$ is *just* the Glasgow-Blatchford score (GBS). That the GBS is itself a summary of other clinical markers **is merely provided as context. Our goal is to test whether physicians incorporate information other than the GBS itself**.
3. You are absolutely correct that physicians may be relying on the *same* clinical markers but perhaps in a different way (i.e. by implicitly employing a different functional form). This is not a contradiction however -- the interpretation of ExpertTest simply depends on the set of features we choose to condition on.
4. We agree however that it is helpful to see the results both conditional on the GBS itself and conditional on its constituent parts. Indeed, these results strengthen our work significantly.
**Description of ‘input’ features**
We briefly contextualize the attached figures here, focusing on technical validity due to space constraints. Our final draft will include additional context regarding the clinical relevance of these features. The Glasgow-Blatchford Score is composed of the following nine features: blood urea nitrogen (BUN), hemoglobin (HGB), systolic blood pressure (SBP), pulse, cardiac failure, hepatic disease, melena, sycope and biological sex. The first four are continuous and the latter five are binary. The GBS is calculated by first converting the continuous features to ordinal values (BUN and HGB to 6 point scales, SBP to a 3 point scale and pulse to a binary value) and then summing the values of the first 8 features. Biological sex is used to inform the conversion of HGB to an ordinal value. We refer to this mix of binary and ordinal values as the *discretized* feature space. It is worth emphasizing that this score -- including the conventions used to convert the features to discrete space -- is part of the standard of care for patients with acute gastrointestinal bleeding, and has been extensively validated empirically (see Section 5 and Appendix E for additional details).
**Summary of new results**
We attach the results of running ExpertTest in both the original and discretized feature space. In both cases, we further normalize each feature to lie in $[0, 1]$ (to ensure that no feature is given outsize weight in calculating pairwise similarity). We find strong evidence that physicians incorporate information other than the 9 features described above when making hospitalization decisions ($\tau \approx 0$). In the original feature space, which includes continuous features, we naturally fail to find identical pairs of patients. Nonetheless, the pairs we find are very close under the $\ell_2$ norm, suggesting that ExpertTest remains approximately valid. In the discretized feature space we succeed in finding up to $L=1000$ pairs of identical patients, meaning that ExpertTest recovers (effectively) exact type I error control.
**In response to**
> In the case study, $\hat{Y}$ is defined as whether the patient is admitted, and the ground truth $Y$ is defined as whether they suffered one of three adverse outcomes…It seemed like this might actually be a setup where the “prediction” and “ground truth” are not independent of each other."
1. $Y$ and $\hat{Y}$ will typically be correlated (as they are here, and in any setting where the human makes reasonable forecasts). The condition we require (see Section 6 and Appendix A) is that $\hat{Y}$ does not *causally* influence $Y$.
2. We make this assumption here because (1) the ER physician making hospitalization decisions and GI specialist making treatment decisions are different physicians working in a large hospital system -- indeed, there are many cases where the GI specialist immediately discharges a patient admitted by the ER physician -- and (2) we assume we can still observe $Y=1$ outcomes for discharged patients ($\hat{Y} = 0$) who are readmitted or die within 30 days.
3. This latter assumption is necessarily imperfect, but is consistent with standard 'adverse outcome' definitions in the literature and Center for Medicare Services (CMS) guidelines. Relevant citations and further discussion of this outcome definition are included in Section 5 and Appendix E.
**In response to**
> "How sensitive is ExpertTest to the choice of loss function?"
In the special case of binary outcomes and predictions (as in our case study), ExpertTest will provide identical results for any loss function which is strictly increasing in the number of prediction mistakes. This will be true of nearly any natural loss function. The results of ExpertTest also do not depend on the relative cost of false negatives and false positives, which may be arbitrarily different (e.g., false negatives in a clinical setting may be far more costly than false positives; we need not specify exactly *how much* more costly as our results are insensitive to this choice). **We discuss this phenomenon further in Appendix E.** In more general settings we also find that ExpertTest is also not very sensitive to the choice of loss function, and will include additional experiments to demonstrate this in our final submission (unfortunately we cannot do so here due to space constraints).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response and additional experiments. My concerns about the features have been addressed and I have a better understanding of the method now. Given this response and the other comments, I am raising my score from 3 to 7. | Summary: This paper proposes a hypothesis-testing approach to identify whether a set of predictions made by a human expert uses additional information that is conditionally independent from the input covariates. The paper provides theoretical guarantees for the test in a general setting and asymptotically. The proposed test is then applied to real-world data with emergency room physicians.
Strengths: - The paper itself is written in a way that is easy to follow along, building the motivation and proposed work in a step-by-step manner.
- The paper proposes a test that is a novel application of conditional independence work in the statistics literature for a novel use case.
- The paper evaluates the theoretical test on a real-world use case.
Weaknesses: - Can the test help improve decision outcomes? Typically, the primary goal in human-AI decision-making is to achieve complementarity (e.g., as discussed in [1]), particularly by leveraging the complementary skills of human and AI. Because the test does not account for human and AI prediction accuracies, it is difficult to say whether performing such a test has any implications on complementarity. Another relevant cite is [2].
[1] Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. Bansal et al. CHI 2021.
[2] A Unifying Framework for Combining Complementary Strengths of Humans and ML toward Better Predictive Decision-Making. Rastogi et al. EAAMO 2022.
- Experimental validation: While it is great that the authors perform experiments on real-world data, it would be ideal to also verify the behavior of the test using synthetic where the differences between human and AI can be more carefully controlled to establish how sensitive the test is to these similarities and potentially the effect on the choice of L.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major:
- Could you address the questions raised in the weaknesses mentioned above?
Minor:
- Notation confusion: U is used to both denote private information in Section 2 and random variables and observations in Section 3
- What would the test return if the human is using information U that is *correlated* with X? Here, this may be a case where U is easier for a human to notice compared to X.
- The table captions are not descriptive, which hinders readability.
- Tone of contribution: the authors state that this test is a “necessary condition for achieving human-AI complementarity” without demonstrating results in terms of improving decision outcomes.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Our responses are provided below.
**In response to**
> "Can the test help improve decision outcomes? Typically, the primary goal in human-AI decision-making is to achieve complementarity (e.g., as discussed in [1]), particularly by leveraging the complementary skills of human and AI. Because the test does not account for human and AI prediction accuracies, it is difficult to say whether performing such a test has any implications on complementarity. Another relevant cite is [2].".
1. This is an excellent question, and is addressed in Section 6 -- while the algorithm we give here is focused on the auditing problem, we hope that our framework yields insight into the structure of algorithms for Human/AI complementarity.
2. While providing such algorithms is beyond the scope of our work, we respectfully disagree that "it is difficult to say whether performing such a test has any implications on complementarity". A rejection of our test indicates that the human forecaster *is* usefully incorporating information which is unavailable to any predictive algorithm, even if such information is incorporated in a suboptimal way.
3. This suggests that, for example, *some* meta-algorithm which incorporates both human and baseline algorithmic forecasts should be able to achieve complementarity (given sufficient data, expressive power and other considerations which are standard in supervised learning tasks). On the other hand, a failure to reject our test suggests that *no* algorithm could hope to achieve complementarity, as the human is not incorporating useful information beyond that which is contained in the available features.
4. Thus, we view our test as a precursor to assessing whether complementarity is likely to be achievable in a given forecasting task.
**In response to**
> "Experimental validation: While it is great that the authors perform experiments on real-world data, it would be ideal to also verify the behavior of the test using synthetic where the differences between human and AI can be more carefully controlled to establish how sensitive the test is to these similarities and potentially the effect on the choice of L."
This concern is addressed in our response to all reviewers (see above).
**In response to**
> "What would the test return if the human is using information U that is correlated with X? Here, this may be a case where U is easier for a human to notice compared to X."
This concern is addressed in our response to all reviewers (see above).
**In response to other feedback**
We agree with the remaining suggestions, and thank you for your feedback. We will incorporate the two suggested references and change the notation in Section 3 to avoid clashing with Section 2.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for clarifying my concern about how this work should be situated in the broader space of human-AI decision-making. As the authors suggested in their rebuttal, I agree that a few minor proposed changes (highlighting the synthetic experiments and cleaning up notation / presentation) would certainly improve the work. As such, I will modify my score from 6 to 7. | Rebuttal 1:
Rebuttal: We thank all four of the reviewers for their thoughtful and constructive feedback. We address feedback provided by more than one reviewer in this comment, with additional responses to individual reviewer concerns provided in-line below each review.
# High dimensional feature spaces
> **7Ctt:** "The results and experiments of this paper would likely not extend to a high-dimensional setting. Their current experiment uses discrete, scalar features. The authors discuss this in their limitations. A concrete example: how would a test like this work for, say, radiology images, where the human predictive distribution is likely not smooth w.r.t the $\ell_2$ metric."
> **NoBC:** "Method relies on dataset containing pairs that are close in input space, yet distinct in feature space; such a situation might not be generalizable"
1. Per the proof of Theorem 2 (Appendix C), the excess type I error scales like $O(n^{-\frac{1}{d}})$ for a $d$ dimensional feature space. This behavior is typical in a generic, non-parametric setting (see e.g. the model-powered test of conditional independence of Sen et. al, 2017, which recovers the same bound).
2. If there were a *latent* lower dimensional representation, as is often the case with e.g., image data, our results could potentially be adapted to be with respect to the latent lower dimension. Indeed, as **7Ctt** points out, the smoothness condition required for Theorem 2 may not hold in a high dimensional setting, but may hold with respect to the latent representation.
3. It is also worth noting that these are *worst case* results over the set of possible distributions on $X, Y, \hat{Y}$. We find in both our real-world case study (Section 5 and figures attached to this rebuttal) and synthetic experiments (Appendix F) that typical distributions are better behaved; in particular, they exhibit more 'clustering' of similar $(x_i, x_j)$ pairs and thus require smaller datasets to achieve acceptable type I error.
4. We discuss alternative tests (e.g., kernel based tests of conditional independence) which may be better suited for high-dimensional feature spaces in Section 6.
# Additional experiments, synthetic data
> **DMEm**: "Experimental validation: While it is great that the authors perform experiments on real-world data, it would be ideal to also verify the behavior of the test using synthetic where the differences between human and AI can be more carefully controlled to establish how sensitive the test is to these similarities and potentially the effect on the choice of L."
> **NoBC**: "Evaluation is only done on one real-world dataset; a controlled evaluation of the test would give better insights into how the test performs and the impact of different parameters"
1. Our manuscript includes extensive experiments on synthetic data in Appendix F, which is included with our original set of supplementary material. It is likely that the reviewers missed this section.
2. We reference these experiments in Section 1, and will feature them more prominently in our edited draft.
# On the potential correlation of $X$ and $U$
> **DMEm:** "What would the test return if the human is using information U that is correlated with X? Here, this may be a case where $U$ is easier for a human to notice compared to $X$."
> **7Ctt:** "In Section 2, first paragraph: why can you assume X is independent of U, without a loss of generality?"
1. First, **there is indeed a typo in our work**; we intended to state that $X$ and $U$ can be assumed to be *uncorrelated* without loss of generality, not independent.
2. Neither assumption (independence or zero correlation) is used anywhere in our analysis however; this sentence was merely intended to clarify our framework.
3. Zero correlation can be assumed without loss of generality because we can take $U' = (U - E[U \mid X])$ and replace $U$ by $U'$. Thus, even if the human forecaster incorporates unobserved information which is correlated with $X$, we can define $U$ to be the *additional* information which is not correlated with the observed features.
4. **Given the confusion around this point, our proposal is to simply delete this sentence from the final draft.** If the reviewers prefer, we are also happy to change 'independent' to 'uncorrelated' and clarify this point in the final draft.
# Formatting of tables
> **DMEm:** "The table captions are not descriptive, which hinders readability."
> **NoBC:** "In table 2, what does mismatched pairs mean?"
This is excellent feedback, and we will include more descriptive captions in the final draft. We answer NoBC's specific question in our individual response below.
Pdf: /pdf/d2b6ecc3e59dae0cff2d4d8ddf21c3f9028786d9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Mixed-Initiative Multiagent Apprenticeship Learning for Human Training of Robot Teams | Accept (poster) | Summary: The paper introduces a novel learning approach designed for robot teams to acquire a preferred policy for collaborative task completion using human expert-generated data. Additionally, the method enables the robots to gain an understanding of the theory of mind of each individual agent within the team.
Strengths: The paper presents a learning framework that enables robots to acquire teaming strategies through human expert demonstrations, while simultaneously learning the theory of mind between agents. The effectiveness of the proposed approach is demonstrated on synthetic and real data collected from humans. The method outperforms other baselines in accomplishing the defined task.
Weaknesses:
1.I agree that incorporating learning communication policies is crucial for both multi-agent learning from demonstrations (MA-LfD) and multi-agent reinforcement learning (MARL). However, the tasks presented in the paper may be considered relatively simple, which might not fully showcase the author's capabilities. It would be beneficial if the author could demonstrate the proposed methods in more complex 3D environments such as ITHOR or Habitat, or even leverage visual-based theory of mind datasets like "Learning triadic belief dynamics in nonverbal communication from videos" or "BOSS: A Benchmark for Human Belief Prediction in Object-context Scenarios."
2. I'm curious if the author has conducted a performance comparison between a single human expert and multiple experts (N) for each agent. Benchmarking the differences in performance could provide valuable insights into the impact of expert diversity on the proposed approach.
3. Could the author elaborate on the rationale behind selecting GRU (Gated Recurrent Unit) over a transformer for the proposed MixTURE architecture? Understanding the specific reasons behind this choice would enhance the clarity and comprehensiveness of the paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: My questions are listed in the weaknesses; I hope the authors can address them.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the authors have addressed all the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and positive feedback. Please find below a point-by-point response to your comments and questions:
### **Weaknesses**:
● **Complexity of Tasks**: First, we believe the complexity of our tested domains, particularly in the hard scenarios of PCP and FC (i.e., a 20x20 domain, 10 agents including 2 heterogeneous classes of robots, and several propagating fires), may not have been fully conveyed to the reviewer. Not only are these domains very challenging even for humans to master (based on the statistical analysis of our large subject study and expert observations of 55 humans), but they are also challenging to learn for SOTA MARL methods. Please note that several prior works on MARL, including [46], have attempted to learn the hard scenario of FC and failed or learned a very sub-optimal policy. For instance, the authors of [46] stated in later work that the SOTA performance they achieved in the medium (i.e., 10x10) scenario of FC was about 60 steps (MixTURE achieves 34.8 steps), while all popular MARL baselines completely failed to learn in the hard (i.e., 20x20) scenario (MixTURE learns and converges to 56.5 steps). Second, we believe the complexity of the domain has little effect on our proof-of-concept here. While fully showcasing MixTURE's capability to learn from expert demonstrations in other complex domains would be interesting, as the reviewer suggests, we note that even in the domains tested here, none of the baselines can directly learn from human-generated data, while MixTURE can learn high-quality multi-agent policies. In this work, for the first time, we build a foundation for learning multi-robot policies directly from human-generated trajectories, and we provide an extensive set of results demonstrating feasibility on both synthetic and real human data.
● **Human Data Diversity (Multiple Humans) for Each Robot**: We note that including multiple human demonstrators for each data-collection task is a challenging setup, and perhaps a separate research question to be studied extensively in future work. Using multiple humans requires further communication and coordination among the humans themselves, which can be challenging to translate to robot domains. Additionally, deciding the optimal number of human experts needed for a task raises further complications, and accessing multiple human experts can be expensive and time-consuming.
● **Choosing GRU vs. Transformer**: There are several reasons for choosing GRUs over Transformers. Please note that our policies are recurrent in time, i.e., non-Markovian, to handle partial observability in the environment. We reiterate that the GRU integrates information across time rather than across agents. A vanilla transformer-based architecture handling this would need to either store data about all states seen throughout an episode or impose a hard limit on how far back an agent can remember. The computational complexity would also become quadratic in the number of timesteps. Finally, GRUs are simpler and more lightweight in terms of memory and computation. We note that the eventual goal is to deploy and execute such learned policies in the real world on physical robots with limited computation and memory resources. Therefore, it is important to design lightweight models that are practical on real robotic systems.
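As an illustrative sketch (hypothetical code, not the actual MixTURE implementation; all names and sizes are made up for illustration), the memory trade-off can be seen in a minimal GRU policy that carries only a fixed-size hidden state between timesteps:

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Illustrative non-Markovian policy: a GRU cell integrates
    observations over time into a fixed-size hidden state."""
    def __init__(self, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs, h):
        h = self.gru(obs, h)      # O(1) memory per step: only h is carried
        logits = self.head(h)     # discrete-action logits for this timestep
        return logits, h

policy = RecurrentPolicy(obs_dim=8, hidden_dim=32, n_actions=5)
h = torch.zeros(1, 32)
for t in range(10):               # a vanilla transformer would instead need
    obs = torch.randn(1, 8)       # all 10 observations at every step
    logits, h = policy(obs, h)
```

A transformer replacement would need the full observation history (or a hard truncation of it) at each step, with attention cost quadratic in the number of timesteps, which motivates the lighter recurrent design for on-robot deployment.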
Thanks again for your time and constructive comments and questions. Your suggestions will be duly integrated into the paper's camera-ready version, reflecting our commitment to addressing your concerns comprehensively. We value your time, and we hope that we have addressed all your questions satisfactorily. If so, we would greatly appreciate it if you could please consider increasing your scores. Thank you.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I would like to thank the author for the clarification and response.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive response! If the reviewer is content with our response, might the reviewer kindly please consider raising the rating score of the paper? If there is anything else we can do to improve the reviewer’s assessment, please let us know. We would be grateful for the reviewer’s consideration.
---
Rebuttal Comment 1.2:
Comment: Dear reviewer, as the NeurIPS rebuttal period approaches its conclusion, we are reflecting on your invaluable insights that have guided our research. We thank you again for your comments and questions and your contribution to our work. If our responses were to your satisfaction, we would be immensely grateful if you could please kindly consider raising your score, which could profoundly influence our work's recognition. Thank you. | Summary: The paper introduces a model, called MixTURE, for learning multi-agent collaborative policies from human demonstrations, addressing the challenges of coordinating heterogeneous agents in complex tasks. MixTURE leverages a mutual information maximization-based differentiable communication module to reason about and adapt to diverse human demonstrations. The authors evaluate MixTURE through synthetic and human-subject experiments in the FireCommander (FC) domain. In the synthetic evaluations, MixTURE outperforms baselines, achieving significant improvements across difficulty levels and setting a new state-of-the-art in learning collaborative policies for complex tasks.
In the human-subject experiments, the proposed approach demonstrates its ability to learn high-quality collaboration policies from diverse human-generated demonstrations, surpassing frameworks that solely rely on demonstrated communication. The model's ability to reason about human demonstrations and adapt to trajectory distributions is highlighted as a key factor in its success. The statistical analysis supports the claims made by the authors. It shows that relaxing the need for demonstrating communication reduces the workload for human experts and improves system usability. Furthermore, the results indicate that MixTURE achieves higher performance scores, better scalability to complex scenarios, and lower demonstration time per step compared to frameworks with demonstrated communication.
Overall, the results support the effectiveness of MixTURE in learning collaborative policies, its ability to leverage human demonstrations, and its potential to enhance human-robot collaboration in complex multi-agent systems.
Strengths: - Originality:
The paper introduces a novel approach by explicitly addressing the challenge of inter-agent communication in dynamic and partially observable domains.
The use of a differentiable communication module based on Mutual Information (MI) is a creative and original contribution.
The paper's focus on learning communication strategies from human demonstrations and addressing heterogeneity and diversity in human data sets it apart from previous methods.
The elimination of the need for an expert to demonstrate a communication strategy and allowing robots to automatically learn communication protocols is a useful idea.
- Quality:
The technical details and algorithms are thoroughly explained, ensuring reproducibility and soundness of the proposed approach.
The paper demonstrates a fair experimental methodology, conducting evaluations in both synthetic and human-subject environments.
The inclusion of ablation studies and statistical analysis enhances the evaluation and supports the provided claims.
- Clarity:
The paper is well-structured and organized, with clear sections that guide the reader.
The authors provide comprehensive explanations of the methodology, algorithms, and experimental setup.
The use of figures, tables, and visualizations aids in understanding the concepts and results.
The paper uses appropriate terminology and notation, making it accessible to readers familiar with the field.
- Significance:
The paper addresses a significant research gap by explicitly considering the challenge of inter-agent communication in collaborative multi-agent problems.
The proposed approach has practical implications for teaching multi-agent coordination, as it eliminates the need for explicit communication demonstrations from experts.
The use of real human-generated data demonstrates the model's ability to cope with variations in demonstration styles and strategies, increasing its applicability in real-world scenarios.
The experimental results support the claims and show the effectiveness of the proposed MixTURE model, contributing to the advancement of Multi-Agent Learning from Demonstrations (MA-LfD).
Weaknesses: A discussion on how MixTURE could be used for continuous (state/action) problems would be helpful.
It’s also unclear whether the proposed approach would be generalizable to other domains.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide more insights into the generalization capabilities of the MixTURE model? How would it perform in unseen environments or more complex tasks? Or when it’s trained on a lower-difficulty level but tested on a harder one?
Could you discuss how / whether sub-optimality of human demonstrations might be detected?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Are there specific scenarios or conditions where the MixTURE model might struggle or fail to learn effective communication strategies?
As stated previously, a discussion on whether using MixTURE for continuous (state/action) problems is not trivial would be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and positive feedback. Please find below a point-by-point response to your comments and questions:
### **Weaknesses**:
● **Discussion on MixTURE’s Applicability to Continuous Domains**: Generalization to continuous state and action spaces will require a very slight modification to our PPO optimization to be extended to the continuous settings (i.e., outputting a vector for the mean and standard deviation for a multi-dimensional action distribution, instead of discrete actions). This feature is already supported in our codebase for our future experiments. We chose the discrete domains for our experiments here to be consistent with prior work on this topic (referenced in paper). The MIM reverse model can remain unchanged in continuous spaces for reconstructing an agent’s output message.
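The continuous-action modification described here could look roughly as follows (a minimal PyTorch sketch with illustrative names, not the paper's actual codebase): the policy head outputs a per-dimension Gaussian mean and standard deviation instead of discrete-action logits, and the sampled action's log-probability feeds the PPO ratio.

```python
import torch
import torch.nn as nn

class ContinuousActionHead(nn.Module):
    """Hypothetical sketch of the continuous-action modification:
    the policy outputs a Gaussian's mean and (log) standard deviation
    for a multi-dimensional action, rather than discrete logits."""
    def __init__(self, hidden_dim, action_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, h):
        dist = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        action = dist.sample()
        # summed log-prob over action dimensions, as used in a PPO ratio
        return action, dist.log_prob(action).sum(-1)

head = ContinuousActionHead(hidden_dim=32, action_dim=4)
action, logp = head(torch.randn(1, 32))
```

Nothing upstream of the action head needs to change, which is consistent with the claim that the MIM reverse model can remain unchanged in continuous spaces.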
● **Generalization to Other Domains**: We investigated MixTURE’s generalizability in our problem formulation by considering a generic form of Markov Games to include both partial observability and agent heterogeneity. This will ensure MixTURE’s applicability to a wide range of domains and real-world scenarios. In future extensions of this work, we intend to include more environments (such as StarCraft II via Meta's STARDATA public dataset) as well as investigating applicability to collaborative team of robotic arms using existing datasets, such as the ROBOTURK (https://roboturk.stanford.edu).
### **Questions**:
● **Insights into Generalizability of MixTURE**: These are interesting questions that require further ablation studies and investigations. Theoretically, the learned multi-agent policies are not expected to directly perform well in unseen environments or harder domains when trained on lower-difficulty levels. Instead, the learned policies from a lower-level difficulty domain can perhaps provide a warm-start for the robot team to further interact with the environment and improve their policies in a new, unseen domain or a harder difficulty level (e.g., via RL, assuming there are no human data available for the unseen domain or domain with harder level of difficulty).
● **Detecting Human Sub-optimality**: Typically, it is challenging to determine human sub-optimality without a ground-truth optimal solution. However, to evaluate the quality of demonstrations, one can collect user feedback, perform trajectory analysis, or obtain expert review. To handle human sub-optimality, however, these steps may not always be required. For instance, one can integrate existing methods for learning from sub-optimal demonstrations, such as [1].
[1] Zhu, Zhuangdi, et al. "Learning sparse rewarded tasks from sub-optimal demonstrations." arXiv preprint arXiv:2004.00530 (2020).
### **Limitations**:
● **Scenarios where MixTURE Might Fail**: Generally, MixTURE will not be applicable if there isn’t a good way to collect human expert or heuristic demonstrations for a multi-agent scenario (also applies to other MA-LfD prior work). We believe there are not many such cases as in most applications, either direct human demonstrations can be collected, or an expert heuristic can be designed to generate demonstrations. Additionally, we have not tested MixTURE in mixed collaborative-competitive domains. Nevertheless, we are optimistic that MixTURE can still learn a good policy in a mixed domain, as long as high-quality demonstrations are available for the cooperative team against the competition.
● **MixTURE in Continuous Domains**: Thanks. As discussed above, generalization to continuous state and action spaces will require a very slight modification to our PPO optimization to be extended to the continuous settings while the MIM reverse model can remain unchanged. This feature is already supported in our codebase for our future experiments.
Thanks again for your time and constructive comments and questions. Your suggestions will be duly integrated into the paper's camera-ready version, reflecting our commitment to addressing your concerns comprehensively. We value your time, and we hope that we have addressed all your questions satisfactorily. If so, we would greatly appreciate it if you could please consider increasing your score. Thank you.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanations, I have no further questions to ask.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank the reviewer for the positive response! If the reviewer is content with our response, might the reviewer kindly please consider raising the rating score of the paper? If there is anything else we can do to improve the reviewer’s assessment, please let us know. We would be grateful for the reviewer’s consideration. | Summary: This paper proposes a Mixed-Initiative Multi-Agent Apprenticeship Learning (MixTURE) framework to teach a team of agents using demonstrations provided by individual human experts. It learns both a cooperative policy for each agent and the inter-agent communication policy for each agent. The proposed MixTURE, including the framework and the communication learning model, is the main contribution of the paper. Human subject studies on the proposed framework can also be viewed as a contribution.
Strengths: + Evaluating the framework with statistical analysis/comparison and human subject studies is a strength.
+ Enabling multi-robot learning from demonstrations by an individual expert has the potential of improving multi-robot learning efficiency and human knowledge transfer to robot teams.
Weaknesses: - The problem of learning communications is not well justified or formulated. For example, what are the assumptions on the protocol and constraints for the multi-robot communication (broadcasting or message passing)? As an example, connected autonomous vehicles use a broadcasting mechanism and follow a standard data format for communication. Why use a differentiable communication channel? Are there examples of multi-robot systems using a differentiable channel for communication?
- If communication is learned based on a mutual information metric in the proposed framework, how does learning communication benefit from human demonstrations? Learning communication seems to be a separate problem from LfD in this work.
- The communication learning module needs the global joint action-observation to compute each agent’s outcome message. How is the joint action-observation information obtained during decentralized execution?
- How the approach can be generalized to realistic multirobot settings and environments? In the experiments, agents have discrete states and actions, and the environment is a grid world. These experiment settings do not well support the argument of human training of “robot” teams.
- The proposed framework follows a centralized training decentralized execution paradigm, but does not review related work on this paradigm or discuss its pros and cons. Especially, how important is multi-robot communication in centralized training decentralized execution methods?
- Existing works on learning communications (e.g., when, what, who and how to communicate between multiple agents) are not reviewed.
- As an opinion, the advantages of using an individual human expert to provide demonstrations for a team of agents are not convincing, especially in decentralized multi-robot systems. The general goal of LfD is to transfer human knowledge to robots/agents. Humans behave differently when working alone or with other teammates. In addition, when a single human expert controls a team of agents to perform a task, the expert still provides a sequence of decisions for different agents. However, decentralized teaming involves concurrent decisions, may have conflicts, and has tightly-coupled teaming activities (e.g., two robots carrying an object). All the above properties of teaming cannot be demonstrated by a single expert.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Both terms robots and agents are used in the paper. What are their differences?
- Just curious, where does the name MixTURE (especially the TURE) come from?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No negative societal impact of the work is perceived.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and positive feedback. Please find below a point-by-point response to your comments and questions:
### **Weaknesses**:
● **Justifying the Learned Communication**: Please note that MixTURE operates under a CTDE paradigm: the differentiable communication channels are only used during training to learn a communication policy via gradient updates. Post-training, there is no more need for the “differentiable” channels in real systems. For instance, robots in a real system can use common wireless channels to send/receive messages generated by the learned communication policy. Also, please see the following quotes from the paper regarding the justification of the problem. The necessity of communication is described in paragraph 3, lines 42-44. The need to hand-engineer a communication protocol for a team of robots for each and every domain leaves much to be desired, leading researchers to make efforts to learn such communication protocols end-to-end. In MA-LfD, humans can't feasibly provide both environment and communication action demos (paragraph 4, lines 48-61). Additionally, we motivated and described the communication problem in Section 4, lines 153 – 183 as well as on page 5, starting from line 214, where we also mentioned that **each agent will broadcast a message to the rest of the team at each time step** (line 215). Further details regarding the domains are also provided in the appendix, sections 2 and 3.
● **How Does Learning Communication Benefit from Human Demonstrations**: We believe there might be a misunderstanding. Respectfully, we disagree that learning communication vs. learning from demonstration (LfD) are separate problems in our formulation. The communication policy is learned automatically and simultaneously while the robot team is being trained via LfD on human demonstrations for environment actions. The learned communication is automatically adjusted to fit the provided human demonstrations through gradient updates in LfD optimization. The structure of our discriminator (global observations with local actions) and the MIM objective are specifically designed to address the problem space of both LfD and the learned communications.
● **Joint Action-Observation Information During Execution**: Please note that, as stated on page 6, lines 240-244, access to joint action-observation information in the MIM reverse model is required only during training, not execution. The learned policies are fully decentralized during execution, with the MIM model removed. During training, access to global information is possible due to the CTDE paradigm.
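To make the training/execution split concrete, a minimal hypothetical sketch (names, sizes, and the stand-in objective are purely illustrative, not the paper's actual MIM model): the reverse model consumes the joint action-observation only during centralized training, where its gradients can shape the communication policy, and is dropped entirely at execution.

```python
import torch
import torch.nn as nn

class MIMReverseModel(nn.Module):
    """Training-only module (hypothetical sketch): reconstructs an
    agent's outgoing message from the joint action-observation, so
    gradients through it can shape the learned communication.
    It is removed entirely at decentralized execution time."""
    def __init__(self, joint_dim, msg_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 64), nn.ReLU(), nn.Linear(64, msg_dim))

    def forward(self, joint_obs_act):
        return self.net(joint_obs_act)

# Centralized training phase: global info is available.
reverse = MIMReverseModel(joint_dim=40, msg_dim=8)
recon = reverse(torch.randn(1, 40))               # reconstructed message
loss = nn.functional.mse_loss(recon, torch.randn(1, 8))  # stand-in objective

# Decentralized execution phase: `reverse` is simply not instantiated;
# each agent acts on its local observation and received messages only.
```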
● **Generalization to Real Robots**: Generalization to continuous spaces requires a very slight modification to our PPO optimization (i.e., outputting a vector for the mean and standard deviation of a multi-dimensional action distribution, instead of discrete actions). This feature is already supported in our codebase for our future experiments. Discrete domains were chosen for our experiments here for consistency with prior work on this topic (referenced in the paper). The MIM reverse model can remain unchanged in continuous spaces. While it is an interesting next step to extend MixTURE's case studies, doing so is mostly a matter of engineering rather than a proof-of-concept. In this work, for the first time, we build a foundation for learning multi-robot policies directly from human-generated trajectories, and we provide an extensive set of results demonstrating feasibility on synthetic and real human data.
● **Importance of Communication in CTDE**: Communication is vital in CTDE methods during centralized training for optimizing collective behavior. During this phase, robots share experiences and data with the central controller, enabling the learning of complex team-level strategies, supporting coordination, and enabling adaptation to dynamic environments. Once the training is complete, the robots execute the learned strategy in a fully decentralized manner during the execution phase while benefiting from the knowledge acquired during centralized training. We will add this discussion in the paper as well as an overview of recent literature on CTDE methods.
● **Review on Prior Work for Learned Communication**: Thanks. Due to space limitations, we focused on providing the literature on prior MA-LfD work as they relate to the core of our algorithm. We will include a review of prior work on MARL and learned communication in the appendix for the final version.
● **Single vs Multiple Experts**: Using multiple humans requires further communication and coordination among humans which can be challenging to translate to robot domains. Additionally, deciding the optimal number of human experts needed for a task raises further complications, while accessing multiple human experts can be expensive and time consuming. A single human expert can generate action demonstrations for each robot one at a time (as in our setup) which resolves the mentioned tightly coupled scenarios.
### **Questions**:
● **Terms Robots and Agents**: Thanks. We use the terms robots and agents interchangeably to generalize MixTURE’s applicability. By ‘agent’ we are mainly referring to a ‘robot agent’. We will add this clarification to the paper.
● **The Name MixTURE**: We created the name MixTURE by taking letters from the title as follows: **Mixed**-ini**T**iative m**U**lti-agent app**R**enticeship l**E**arning. Rather than putting together the first letters, we tried to create an abbreviation that fits the core ideology of our algorithm.
Thanks again for your time and constructive comments and questions. Your suggestions will be duly integrated into the paper's camera-ready version, reflecting our commitment to addressing your concerns comprehensively. We value your time, and we hope that we have addressed all your questions satisfactorily. If so, we would greatly appreciate it if you could please consider increasing your score. Thank you.
---
Rebuttal 2:
Comment: Dear reviewer, as the NeurIPS rebuttal period approaches its conclusion, we are reflecting on your invaluable insights that have guided our research. We kindly request your attention to our responses to your comments and questions, and we earnestly seek to understand if we have addressed them to your satisfaction. Your feedback could profoundly influence our work's recognition, and we would be immensely grateful for your considered response. Thank you. | Summary: This paper proposes a learning framework for multi-agent coordination using communication from human expert demonstrations: Mixed-Initiative Multi-Agent Apprenticeship Learning (MixTURE). A human expert teaches a team of robots to collaborate on a task by demonstrating actions for every robot, while the robot team learns a emergent communication protocol that encourages sending information about the joint observation. An attentional communication module and a mutual information maximization reverse model are used. The framework is evaluated on three domains with different levels of difficulty and complexity. They also conduct a human-subject user study to collect real human data and assess the usability of their framework.
Strengths: - The paper introduces an interesting framework that leverages both human demonstrations and emergent communication to learn effective coordination policies among several robots.
- A comprehensive evaluation of their method on synthetic and real human data is conducted.
Weaknesses: - The proposed framework assumes that the human expert has access to the aggregated / joint observation of all the robots at each time step and the human expert can command every robot's action simultaneously at each time step. However, this assumption is rather unrealistic for real-world robot teams or robot teams in simulation.
- The synthetic environments that the framework is evaluated on are not large-scale enough compared to common testing environments for multi-agent reinforcement learning, like SMACv2 (_SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning. Ellis, Benjamin and Moalla, Skander and Samvelyan, Mikayel and Sun, Mingfei and Mahajan, Anuj and Foerster, Jakob N. and Whiteson, Shimon._)
- In the experimental results section, no qualitative analysis or visualization of the learned policies and communication protocols is provided. It would be useful to observe how robots coordinate and communicate in different settings.
- An expert heuristic is used to collect the expert demonstration dataset. However, heuristics can struggle to produce optimal demonstrations for more difficult environments. Then multi-agent reinforcement learning algorithms would still need to be used in the first place to collect the expert demonstration dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does the MixTURE framework scale to larger environments, more complex tasks or more agents?
- What is the computational cost and communication cost of the framework?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not adequately.
- need to mention the strong assumptions on the expert's joint observation and action space
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and positive feedback. Please find below a point-by-point response to your comments and questions:
### **Weaknesses**:
● **Assumption Regarding Human Expert’s Access to the Joint Observation of All Robots**: We concur that a human's decision-making for one agent is influenced by knowledge of other agents' observations. However, we respectfully disagree that this assumption is too strong. Please note that classic MA-LfD methods such as MA-GAIL necessitate awareness of all robots' states and observations AND require providing both environment and communication actions at the same time. In MixTURE, we in fact relax a portion of this assumption by tasking the human to provide only environment actions, for one robot at a time, to improve feasibility. Our real human subject study demonstrates this relaxation's feasibility, enabling successful demonstrations provided by humans for teams of up to 10 simulated robots from 2 diverse classes. In practice, one can use existing multi-agent datasets (e.g., collaborative assembly in ROBOTURK (https://roboturk.stanford.edu) or Meta's STARDATA for StarCraft II) or readily create a demonstration dataset using existing tools (e.g., ROS, Gazebo). MixTURE’s contribution comes in at this stage, where we can learn collaborative multi-agent policies from this dataset.
● **Environment Scale**: Respectfully, we disagree. The domain complexity only minimally impacts our proof-of-concept here. In fact, increasing complexity motivates MixTURE's application even more, to reduce human involvement as scale grows. Even in domains like SMACv2, which humans play expertly, we can amass an expert dataset (as done before for AlphaStar) for MixTURE's training. Additionally, we believe the complexity of our tested domains may not have been fully conveyed to the reviewer. Not only is the FC domain very challenging even for humans to master (based on the statistical analysis of our large subject study and expert observations of 55 humans), but it is also challenging to learn for SOTA MARL methods [46]. Notably, as reported by the authors of [46], the SOTA MARL performance achieved in the medium scenario of FC was about 60 steps (MixTURE achieves 34.8 steps), while all prominent MARL baselines tested completely failed to learn in the hard scenario (MixTURE learns and converges to 56.5 steps).
● **Qualitative Analysis and Visualization of the Policies**: We appreciate this suggestion. We will include additional policy visualizations in our supplementary results. The interpretation of communication protocols will require further exploration through specialized interpretability-oriented experiments, an avenue we're considering for advancing our work.
● **Using MARL for Collecting an Expert Demonstration Dataset**: We believe there is a misunderstanding that can be addressed. The fact that heuristics can struggle in more complex domains is in fact one of the reasons we propose the MixTURE framework: to relax the need for such heuristics and instead rely on a human expert’s domain knowledge. Please note that our expert demonstrations were not produced only by heuristics. **We collected two datasets**: (1) a heuristic dataset, and (2) a human dataset. **All methods, including the proposed method and the baselines, have been trained on both datasets**, meaning that, regardless of how the dataset was collected (heuristic or real human), our method outperforms the baselines. Finally, MARL can also suffer significantly with increased domain complexity and can be very challenging and time-consuming to train. Collecting expert-generated data is more efficient than creating and tuning a MARL system to reach similar expertise.
### **Questions**:
● **How does MixTURE Scale**: We conducted a rigorous scalability analysis, covering various domain sizes (dimensions), team sizes (number of robots), team compositions (robots per class), and tasks (number of initial firespots, which propagate and grow over time). Overall, our results underscore MixTURE's robust scalability and efficiency in adapting to all cases. Moreover, our human subject study substantiates that MixTURE empowers human experts to scale proficiently across larger domains, teams, and tasks, showcasing enhanced policy performance and demonstration efficiency. These results can be found in Section 5 (Table 1, Figures 2-5), as well as in Section 5, Sub-section 5-1: Scalability (Figure 6), in the appendix.
● **Computational and Communication Cost of the Framework**: Our computational resources for training MixTURE are listed in Appendix, Section 6. Moreover, the attention-based communication technically makes the time complexity O(N^2) in the number of agents, but the practical implementation and optimization involve only minimal additional inference layers. Therefore, we believe the overall additional computational cost is small, especially since, in high-dimensional environments, the policy and value networks bear the majority of the computational load. We will add this discussion to Section 6 in the appendix.
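As an aside for intuition on where the O(N^2) term comes from, below is a minimal, hypothetical sketch of naive all-to-all attention over agent messages; it is not MixTURE's actual implementation, and all names are ours:

```python
import numpy as np

def pairwise_attention(msgs):
    """Naive all-to-all attention over the rows of `msgs` (one message per
    agent). Building the N x N score matrix is what makes the cost
    quadratic in the number of agents N."""
    scores = msgs @ msgs.T                                # (N, N) pairwise scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # row-wise softmax
    return w @ msgs                                       # aggregated messages, (N, d)

rng = np.random.default_rng(0)
out = pairwise_attention(rng.standard_normal((5, 8)))     # N=5 agents, d=8 dims
```

Doubling the team size quadruples the score matrix, which matches the stated O(N^2) scaling.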
### **Limitations**:
● While it is technically unavoidable for humans providing multi-agent demonstrations to inform their decisions based on the robots' observations, per the reviewer’s point, we will add this discussion to the paper as suggested. Please note that this also applies to existing MA-LfD work. Additionally, we respectfully disagree that knowing the robot team's action space is a limitation. This is a conventional and reasonable aspect of the LfD domain.
Thanks again for your time and constructive comments and questions. Your suggestions will be duly integrated into the paper's camera-ready version, reflecting our commitment to addressing your concerns comprehensively. We value your time, and we hope that we have addressed all your questions satisfactorily. If so, we would greatly appreciate it if you could please consider increasing your score. Thank you.
---
Rebuttal Comment 1.1:
Title: thanks for the rebuttal
Comment: Thanks to the authors for the rebuttal and additional experiments. I have updated the score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for taking the time to read our response. We are pleased that our response has effectively and positively addressed your concerns. Thank you very much once again for your valuable contribution to our submission. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a Multi-Agent Learning from Demonstration approach for learning multi-agent policies for collaborative tasks. The approach learns from human demonstrations of the joint policy, but does not require demonstrations of inter-agent communication, which is learned during training via online interaction. Results are presented on learning policies from both synthetic and real human data. A human study is performed demonstrating the increase in performance and decrease in workload for the proposed method, compared to methods that require demonstration of inter-agent communication.
Strengths: - This paper relaxes a limiting assumption made by some prior works that a human expert provides demonstrations for both environment actions and communication actions, which can be taxing for a human to actually provide. In doing so, this work strikes a balance between prior RL and IL approaches for learning the action and communication policies.
- The paper trains policies on both synthetic and human data. MixTURE is seemingly quite effective on learning from real human data.
- The paper empirically verifies the motivating claim about the effect of removing communication actions from a human expert's workload, via a human study on 55 users.
- A new "mixed discriminator" is proposed, where the discriminator is given global observations but only local actions. Per the discussion in Appendix 5.2, this choice exhibits improvements on the decentralized and centralized discriminators from MA-GAIL.
- The work is well-written and clear. Source code is provided for reproducibility.
Weaknesses: - The training objective has multiple moving parts (GAIL, PPO, BC, and MIM) which could potentially make training difficult; some additional discussion on this and how sensitive the method is to different weights on the losses would be helpful.
- Contribution 2 is the mutual information maximization-based communication learning model. On the 5x5 PP and PCP environments, the MIM does not have a significant effect. In the FC environment, it moderately increases sample efficiency. Have ablations been run on the larger/more complex environments? This may provide more robust support for the claim that MI maximization is "helpful for domains with increased task complexity," and in general may more strongly justify the inclusion of the MI objective, which otherwise complicates the training procedure.
- Related to the above, in Section 5.1 the authors conjecture that the MIM is a key point in dealing with diversity in the human data, i.e., that it "provides the model with the ability to reason about the underlying human demonstrations and cope with the trajectory distribution through automatically finding a suitable communication protocol." It would be valuable to discuss what communication protocol is learned in these settings (if it is interpretable) and why it so strongly outperforms MA-GAIL trained on the dataset with demonstrated communication actions.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In addition to the points described in the Weaknesses section, I would appreciate clarification on the following:
- Could the authors clarify / provide more details on how the communication graph is generated? Do edges in the graph only exist when robots are spatially within close proximity (and if so, why is this a useful property) or do edges exist between every pair of agents?
- How many trials are shown in Figures 2 and 3?
- What is the process for hyperparameter tuning for the proposed method and the baselines?
- What is the intuition for maximizing the MI between the agent's message and the joint observation (including observations that the agent does not have access to) as opposed to the agent's individual observation history?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The authors note that the method does not currently account for suboptimal demonstrations. If the method is particularly sensitive to how the various losses are weighed, this may be an additional limitation in terms of the amount of tuning required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and positive feedback. Please find below a point-by-point response to your comments and questions:
### **Weaknesses**:
● **Sensitivity to Different Moving Objective Parts**: The initial two components of the objective (the GAIL loss and the PPO loss) are fundamental to our GAIL-based MA-LfD architecture. The remaining two parts (the offline BC loss and the MIM loss) can be regarded as dynamic components, both of which are studied in our supplementary results in Sections 5.3 and 5.4. Our experiments showed that: (1) adding the offline BC loss always helps improve model performance, and (2) the MIM loss seems more effective in intricate or larger domains with greater agent counts, yet adjusting its weight for substantial gains can be slightly challenging. For the other loss weights, we mostly stayed consistent with prior work and did not observe any significant sensitivity to the weight balance. We will provide this discussion in the appendix.
● **MIM Ablation Results**: Yes, we ran MIM ablations in larger and more complex domains. As stated above, while we never observed any performance decay, tuning the MIM loss to achieve consistent and significant performance improvements seems to be challenging. As such, more comprehensive experiments are required to draw a conclusion; we are designing these as our next step. Nevertheless, we believe there are further benefits that could be achieved via training the MIM reverse model. In particular, we believe our MIM reverse model has the potential to roughly cluster message embeddings based on observation-action pairs, which can provide further useful information and insight into the learned communication protocol for a task. We will amend the claim as suggested and add the further discussion in the camera-ready version.
● **Reason for Significantly Outperforming MA-GAIL**: Thanks. That is a great point. The goal of the MIM in the reverse model is to improve the communication policy distribution for an agent by reducing the entropy in the message distribution post-observation (i.e., after observing a state and receiving a message), thereby enhancing agent certainty in choosing actions. We believe that, ultimately, the MIM reverse model, when tuned well enough, should be able to roughly cluster message embeddings (i.e., reduced entropy) based on observation-action pairs, which can provide further insight into interpreting the learned communication protocols. Investigating the interpretability of such learned protocols is an interesting avenue of future research that we are considering. We hypothesize that MA-GAIL trained on demonstrated communications fails for the following reasons: (1) humans are not fully aware of all agents’ full local state spaces, and therefore the demonstrated communication could be severely inefficient and sub-optimal; MixTURE instead learns an efficient communication policy automatically through gradient updates; (2) the laborious task of providing both environment and communication actions significantly deteriorates the quality of the human demonstrators' policies, which is confirmed by our subject-study results on page 9. We will provide a summary of this discussion in Section 5.2 for the camera-ready version.
### **Questions**:
● **Communication Graph**: A spatial proximity-based communication graph is useful for large-scale applications and domains where robots have limited communication ranges and can only communicate within a certain radius. While we built the MixTURE architecture to be localized to accommodate such proximity-based limited communication, our initial proof-of-concept experiments utilized extended communication ranges to cover the domain. Future extensions will encompass ablation results for shorter communication distances.
● **Number of Trials in Figures 2 and 3**: Figures 2 and 3 show results across ten trials. This is stated in Section 5.2, under RQ1.
● **Hyperparameter Tuning Process**: We conducted extensive empirical hyperparameter tuning. As stated in Section 6 of the appendix, Table 4, the values in brackets represent variables which we swept over via grid search. Values were swept simultaneously at a resolution of 2 steps per decade (i.e., on a logarithmic scale, where, for instance, [10^-1.5, 10^0] means that the values tested were {10^-1.5, 10^-1.0, 10^-0.5, 10^0.0}). Moreover, each baseline has been separately tuned for all experiments.
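For concreteness, the sweep resolution described above can be reproduced with a small helper; this is our illustrative sketch, not the authors' tuning code, and the function name is ours:

```python
import numpy as np

def log_grid(lo_exp, hi_exp, steps_per_decade=2):
    """Log-spaced grid from 10**lo_exp to 10**hi_exp (inclusive),
    with `steps_per_decade` points per decade."""
    num = int(round((hi_exp - lo_exp) * steps_per_decade)) + 1
    return np.logspace(lo_exp, hi_exp, num=num)

# The bracket [10^-1.5, 10^0] from the rebuttal:
grid = log_grid(-1.5, 0.0)  # -> 10^-1.5, 10^-1.0, 10^-0.5, 10^0.0
```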
● **Intuition Behind Including Joint Observations in MIM**: By providing the joint observations during centralized training, intuitively we intend to learn the communication policy such that the optimization of the MIM model results in a message distribution entropy-reduction at the team level. In this way, as discussed above, the model can potentially relate observation-action pairs to the globally sent/received messages to learn unified policies at the team level.
### **Limitations**:
● Thank you. That is correct. We will add a further statement regarding the challenges of tuning the MIM loss to achieve improved performance as we observed in our experiments.
Thanks again for your time and constructive comments and questions. Your suggestions will be duly integrated into the paper's camera-ready version, reflecting our commitment to addressing your concerns comprehensively. We value your time, and we hope that we have addressed all your questions satisfactorily. If so, we would greatly appreciate it if you could please consider increasing your score. Thank you.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the response and clarifying remarks. I do think it would be important to show either some significant benefit from MIM to the overall performance, or at least some qualitative/quantitative analysis of its effect on the communication protocol, if the MIM is to be considered a contribution. I've read through the other discussions as well and continue to lean positive overall.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for taking the time to read our response. We are pleased that our responses have effectively addressed your concerns. Thank you once again for your valuable contribution to our submission.
Training Neural Networks is NP-Hard in Fixed Dimension | Accept (poster) | Summary: The authors analyze the fixed-parameter tractability (FPT) of training neural networks. Their main result is that finding a global minimum is hard even for fixed dimension.
Strengths: I like the direction of studying the FPT of training neural networks. The reductions are new to the best of my knowledge and could find further applications.
Weaknesses: The authors mention generalization a few times. However, the setting considered is completely detached from generalization.
For fixed dimension the sample complexity of the learning problem is fixed and hence can be solved trivially. A more general concern is that low dimension is a rarity in ML applications. In this regard, the number of neurons k seems to be much more interesting.
"We assume basic knowledge on computational complexity theory." I think this sentence can safely be removed.
There is a large literature on what can and cannot be done efficiently with respect to learning thresholds that the authors ignore.
One example is:
"Adam Klivans and Pravesh Kothari. Embedding hard learning problems into gaussian space" and various other papers (regarding learning ReLUs) by Kane and Diakonikolas. While the hardness in these is not NP-hardness, they should be mentioned nevertheless.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the best running-time dependency in terms of $k$ one can expect for fixed dimension? What is the running time of the fastest exponential algorithm for learning shallow ReLUs? Thresholds? Are there improvements to the running time of the algorithm of the Arora et al. paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Proving hardness results for low dimension.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
Indeed our paper is not concerned about generalization.
The goal is to understand the complexity of training in fixed dimension (which was an open question).
Note that hardness for few dimensions implies hardness for arbitrary dimensions (as we explained) and is thus a stronger result.
As regards the number $k$ of hidden neurons, hardness is already known for a single neuron.
We additionally show W[1]-hardness w.r.t. $d$ for 4 hidden neurons with zero training error.
Thank you for pointing out the papers from the literature.
We consider adding them to our references for the final version.
As regards your questions, note that our hardness results prove that the algorithm by Arora et al. is essentially optimal, that is, $n^{dk}$ is best possible in general (also for linear threshold activation). However, for the convex case, we give an improved algorithm running in $2^{O(k^2d)}$ time.
The best running time regarding $k$ for fixed $d$ is still open (Question 2). A running time of $f(k)\textrm{poly}(n)$ might be possible. This is our major open question.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Wasn't the hardness for non-constant $d$ already known?
I encourage the authors to add a table about known running times for the various problems they consider as well as mention the open problem stated in the response to this review.
I keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up questions.
Yes, hardness for non-constant $d$ was already known before. We cited this in related work.
We described the known running times in detail in the Introduction section. We will consider to add a table.
Note that the open problem is explicitly stated in the conclusion already. | Summary: The authors show that learning a 2-layered neural network with ReLU activation function, in an underparameterized regime, is NP-hard when the training error is 0. Specifically, they highlight that such learning cannot be accomplished in time complexity dependent on $k^{f(d)}$, where $k$ is the number of neurons in the layer and $d$ is the input dimension. Their main technique is a reduction from the NP-complete problem of POSITIVE ONE-IN-THREE SAT. In their reduction, they strategically position selection gadgets (corresponding to clauses, variables in POITS, and additional data points) along the y-axis, ensuring sufficient spacing between them. This construction can be represented by a piecewise linear function or a combination of ReLUs if and only if it implies a true instance of POITS.
They also show W[1]-hardness with respect to $d$ for $k = 4$ by reducing the problem to 2 HYPERPLANE-SEPARABILITY. This problem determines whether two given point sets can be strictly separated by two hyperplanes. Additionally, they prove that linear thresholding for 2-layered neural networks is also NP-hard, with a similar reduction as proposed for ReLU activations. Finally, they propose a branching algorithm for learning a 2-layered neural network, which has exponential time complexity with respect to $k$ and $d$.
Strengths: The authors have effectively motivated the problem and provided a good background for the problem. The part where they describe the geometry of $\phi$ and define the concept of a levee, accompanied by illustrative figures, was particularly informative and interesting.
Weaknesses: The variable $\ell$ seems to be overloaded, as it is used for both the loss function and the number of possible levees. This can be confusing when it is introduced again in the selection gadget.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * When you mention "full dimensional cells," are you referring to cells of dimension $d$?
* In order to prevent the other hyperplanes from becoming convex or concave, does the breakpoint $\mathbf{x}$ need to lie exclusively on one hyperplane?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss various approaches to extend their construction to obtain similar results in higher input dimensions beyond 2, for a fixed training error $\ge 0$ and for more than 4 ReLUs for the W[1] hardness result. Additionally, they address the possibilities of extending to piecewise linear ReLU and piecewise constant linear thresholding activations and also acknowledge that their techniques might not be applicable to smooth activations.
Finally they address two interesting open questions: 1. Results for the task of minimising generalization error instead of training error and 2. Training neural networks for approximate optimality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
We will fix the issue with the variable $\ell$.
Yes, "full-dimensional" means $d$-dimensional. We will make this precise.
Yes, a breakpoint lies on only one hyperplane. We will fix this. | Summary: This paper mainly studies the complexity of training two-layer ReLU networks when they are not over-parameterized. Specifically, the authors prove that training a two-layer ReLU network to zero or arbitrary loss value is NP-hard when the input dimension $d\geq 2$. This result answers an open question (Question 1) posed by Arora et al. (2018) negatively by proving the NP-hardness. On the other hand, by assuming the exponential time hypothesis, the paper also partially answers a more general question of whether a running time that is polynomial in the data size or the number of hidden nodes (Question 2). The paper also provides a positive answer to Question 2 when the ReLU network is assumed to compute a convex function.
Strengths: The novelty of the paper is clear. It provides answers to open questions on the complexity of training two-layer ReLU networks. Therefore, the results seem to be significant. The presentation is concise and easy to follow. I enjoy reading the paper.
Weaknesses: - Usually, there are symmetries and redundancies in the dataset. It would be more convincing and realistic if the authors could give some insights on how to leverage these available properties to develop efficient training algorithms.
- In practice, training a two-layer ReLU network is very efficient. Hence, I feel the settings in this paper seem to be a bit unrealistic. My concern is that it seems hard to apply these results to make a positive impact on our machine learning community. One possible way to address this is to discuss assumptions on different aspects (e.g., architectures) that make training tractable or even efficient.
- This work focuses on two-layer ReLU networks, but it would be interesting to give insights into extending the results to networks with multiple layers.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Line 35: In the case when $\gamma$ is strictly positive, are there any extra assumptions imposed on the loss function? The only assumption on the loss function $\ell(x,y)=0$ iff $x=y$ is fairly general. It would be more convincing if the authors could clarify more about the loss function.
2. Would it yield the same conclusion as Theorem 1 if a regularizer (e.g., weight decay or L1) is added to the loss function ($k<n$)?
3. Would it be possible to consider some assumptions on architectures that give a polynomial-time algorithm for training a two-layer ReLU network?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed some of the limitations of this work in the text. Other limitations in my view are the gaps between the settings of the problem and the training of ReLU networks in practice. For example, one may be able to make some assumptions on architectures, regularizations, and properties of data to derive a polynomial-time training algorithm. However, given that the main purpose of this paper is to answer the open questions, discussions on these aspects may be skipped or minimized.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
Our goal was to settle the complexity of the training problem in the general case (without further assumptions on the input), which turned out to be hard. We agree with you that this is a worst-case result, which does not necessarily reflect practical conditions, but we are convinced that such results belong to an important fundamental understanding of the training problem.
Indeed our hardness results hold for a very simple restricted setting which makes them even stronger (e.g. for more than two layers, the problem can be expected to be at least as hard).
As regards impact, our results yield a better theoretical understanding of the power and expressiveness of ReLU networks and settle the complexity status almost completely.
Moreover, our results can be seen as a justification for making certain assumptions on the input data in order to achieve polynomial time (since without any assumptions, polynomial time is provably unlikely).
Addressing your questions:
1. There are no further assumptions on the loss function (also not for $\gamma > 0$).
Our results hold for any loss that satisfies the described condition.
Hence, we only make a weak assumption on the loss, which makes our results in fact stronger.
2. It is not clear whether Theorem 1 also holds for regularized loss.
It might still hold as long as the regularization does not prevent building levees as we do in our construction.
3. For unbounded dimension, the problem is already known to be NP-hard even for a single hidden neuron.
In fixed dimension, the problem becomes polynomial-time solvable if the number of hidden neurons is also constant (but this is not very interesting).
Since a network with two layers is already a very simple architecture, it might be tough to find a meaningful restriction that allows polynomial-time training.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will keep my rating unchanged. | Summary: This paper focuses on two layer neural networks with ReLU or linear threshold activations. The authors show that considering the input dimension (dimension of the data as a constant) equal with 2, there is no polynomial algorithm with respect to the number of data (n) and hidden nodes (k, k<n) that decides whether there exist weights that achieve zero training error. The second result indicates that there is no algorithm with dependence $n^{o(d)}$ on $n$ and arbitrary dependence on $k,d$. Furthermore, if $k=4$ the authors prove $W[1]-$hardness with respect to $d$. Similar results are obtained for the case of threshold activations. Finally, they provide an algorithm, independent of $n$ if the target function is convex.
Strengths: This paper is in general well-written. It closes some gaps regarding hardness results for deciding whether there exist weights on a neural network that achieve a small or zero training loss. The authors show that even when the target is zero error, the problem is NP-hard for $d=2$, and thus for any $d\geq 2$, since we can simply zero-pad the inputs and ignore the remaining $d-2$ coordinates (Theorem 1). They also prove that for $k=4$ and zero training error, the problem is $W[1]$-hard with respect to the parameter $d$. Their results also hold for the case of linear threshold activations. They finally show that when the target function is convex, meaning the second layer has weights all equal to one, the problem is fixed-parameter tractable (FPT), when the targeted error is exactly zero. It was already known that for the case of non-zero, positive error, the decision problem is $W[1]$-hard.
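The zero-padding step mentioned above (lifting hardness from $d=2$ to any $d\geq 2$) amounts to embedding the 2-D training inputs into $\mathbb{R}^d$; a small illustrative sketch follows (the helper name is ours, not from the paper):

```python
import numpy as np

def zero_pad(X2, d):
    """Embed n two-dimensional training inputs into R^d by appending
    d - 2 zero coordinates; a network can ignore the padded coordinates
    (zero incoming weights), so any solution for d=2 carries over."""
    n = X2.shape[0]
    return np.hstack([X2, np.zeros((n, d - 2))])

X2 = np.array([[1.0, 2.0], [3.0, 4.0]])
X5 = zero_pad(X2, 5)  # shape (2, 5); the last 3 columns are all zero
```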
Weaknesses: Considering the last result (Theorem 10), I am not sure what it adds to our understanding. It will be rare to encounter a convex problem, and providing an algorithm to decide whether it is solvable is not very useful, since we know that a convex function will have a global minimum and that it can be found through classical methods like gradient descent. The authors have already acknowledged that this algorithm will not have any practical relevance, but I also think its contribution as a theoretical result is marginal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Considering Theorem 1, is there some tighter characterization when $d> 2$ and the inputs are not sparse?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations and this work has no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
We agree that the algorithm for the convex case is unlikely to be useful in practice.
From a theoretical side, this result is interesting since it gives a partial positive answer to the open Question 2.
It is thus a step towards resolving this question and shows that the inherent hardness stems from the case when both positive and negative weights occur in the last layer.
Besides this, the algorithmic ideas might be inspiration for solving other similar tasks.
Answering your question, for $d>2$, the problem is known to be NP-hard even for a single ReLU with non-sparse inputs (see e.g. Goel et al.).
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the reviewers for their response. I would like to keep my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Video-Mined Task Graphs for Keystep Recognition in Instructional Videos | Accept (poster) | Summary: This paper aims to recognize the keysteps in instructional videos using automatically mined task graphs. This task graph is automatically discovered from a set of narrated instructional videos and contains all keysteps in a given vocabulary, i.e. it is not limited to a single task. This allows dependencies between keysteps to be represented and helps move away from the requirement of prior work to have a strict ordering of keysteps in a task. The mined task graph is used for keystep recognition, task recognition and keystep forecasting on COIN and CrossTask.
Strengths: - The approach of task graph mining eliminates need for prior keystep recognition work of having set linear order of keysteps.
***
- Benefits are not constrained to the task of keystep prediction; the paper also shows the video representation is useful for the related but distinct tasks of keystep forecasting and task classification
***
- The authors clearly understand the task and nature of instructional videos well; I particularly liked the insights given on lines 42-45
***
- The method is well motivated
Weaknesses: - The main weakness of the work is the lack of clarity in supervision used in relation to prior works, making it difficult to assess the suitability of the baselines used.
- Particularly as the proposed work uses a vocabulary of keysteps as supervision in addition to the video's narrations. The predefined keystep vocabulary isn't clear until the beginning of the method section.
- The difference in supervision to prior works, e.g. [70], [82], [84], could be better explained in the related work. It is currently not clear why these aren't explained and compared to without reading these papers in depth. Particularly [84] appears to only be supervised by the narration of the video, which is also used by this paper in addition to the keystep vocabulary.
- I only looked at these three works as a sample, so it is very possible that there are other papers mentioned in related work with similar supervision levels to [82] and [84].
- From Table 1 of [84], [84] appears to outperform the proposed approach when comparing to the downstream step forecasting result of COIN in Table 4. This seems to also be the case for the results provided in Table 2 of [84].
- It also isn't clear if DistantSupervision [44] uses the same keystep vocabulary as the proposed work. From figure 4, many of the mistakes made by [44] appear to be due to the name of the keystep; sometimes [44] even gives more information, e.g. in press chest, as the keystep names appear to be less limited.
- Adding the supervision used to the results tables would greatly help assess this better.
***
- The importance of the keystep vocabulary isn't evaluated
- My main question is whether the proposed method is robust to noise in the keystep vocabulary? I.e. how does the performance degrade with a larger keystep vocabulary containing keysteps which aren't used.
- It seems to work well for COIN and CrossTask with a curated keystep vocabulary. For HowTo100M a presumably noisier vocabulary is used from WikiHow; however, it would be much stronger if this effect was tested.
***
- Limited visualization of the task graph. The graph itself might be interesting and contain some insights for learning from instructional video as hinted on in lines 165-174. It would be useful to be able to see (a portion of) the mined task graph. A very small part is shown in Figure 3. However it should have been possible to visualize the full crosstask graph in supplementary.
- I recommend the authors include example task graph(s) in supplementary in future versions.
***
- From related work Paprika [85] sounds the most similar and it isn't clear why this work isn't compared to.
- It is a contemporaneous work, appeared online 31 March 2023, so this could be a valid reason. However, the footnote explains that [85] is using a different setting to [44].
- Comparing on the setting used by [85] would make the work stronger.
***
- Not factored into my reviewing score as it is a contemporaneous work, but [A] has similarities to this paper so it might help to include a citation to [A] and explain the differences in a future version.
***
[A] StepFormer: Self-supervised Step Discovery and Localization in Instructional Videos. CVPR 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How does the supervision used in this work compare with prior works? Particularly [44], [82] and [84].
- Why is [84] not compared to?
- How important is a clean and well-defined keystep vocabulary to the method performance?
The rebuttal responded to the majority of my concerns. In particular, it improved clarity in the supervision used by the proposed and prior works and in the visualization of the task graph, with the experiments on the expanded keystep vocabulary (showing relative gains over distant supervision) particularly convincing. I hope these are included in the final version of the paper.
Since the rebuttal has addressed all my major concerns and I found no major concerns in the other reviews I have raised my rating to weak accept. While comparison to paprika isn't a reason to reject this work since it is concurrent, better explaining the differences or having some kind of numerical comparison would make this work stronger and help future readers.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: A small section on limitations is present in the supplementary material. It can be improved by also considering negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging comments and insightful feedback.
**Q1. Lack of clarity in supervision used vs. prior work**
This seems to be a misunderstanding. All the baselines use the same level of supervision. Specifically, task graph construction does not use ground-truth keysteps in zero-shot keystep recognition (Sec 4.1). Similarly, in video representation learning (Sec 4.2), there is no annotation for keysteps hence the pseudo-labels are generated from similarity scores. In table 1, in addition to the keystep set, the supervision is narration (text-only), video feature (video-only) and both (video-text) for all baselines. Hence, all the comparisons are fair and baselines are a suitable comparison.
**Q2. Use of a vocabulary of keystep in addition to narrations**
The keystep vocabulary is the set of annotated keysteps in COIN/CrossTask. It is used by all the methods and not specific to our method. All the baselines and SOTA use the keystep vocabulary when assigning keysteps (L252-261). We have described keysteps (L20-27) in detail in the introduction. We appreciate the feedback, and we will accentuate this further in the text.
**Q3. Difference in supervision to [70], [82], [84]**
Note that [70, 84] are contemporaneous work per the conference policy (also see the global comment at the top). We will add a detailed comparison between the tasks and supervision used in these works. Here are the differences:
[70, 82] - The task the authors solve is “procedure planning” which is distinct from keystep recognition and representation learning. The goal in procedure planning is given a start and end image, the model needs to generate a “plan” to achieve the final goal. This task is unrelated to ours and comparison with it is not possible.
[84] - The paper proposes procedure-aware video representation. For supervision, they use videos and their narrations from HowTo100M. However, they use a text encoder trained with CLIP that uses 400 million (image, text) pairs. Hence, the level of supervision used is incomparable. Further, it is a contemporaneous work that appeared in CVPR 2023 *after* the NeurIPS deadline.
**Q4. Empirical comparison to [84]**
See above discussion about [84] – it is contemporaneous work published after we submitted our paper, and according to NeurIPS policy, a paper should not be rejected for not comparing to concurrent work. We will explore ways to relate the two methods empirically, though note that regardless of how they may each fare on video representation learning, our idea to enhance keystep recognition with the task graph remains novel with respect to [84].
**Q5. Supervision used by DistantSupervision [44]**
DistantSupervision [44] uses the same keystep vocabulary (L151). Since the keystep set is the same, the mistake cannot be due to the name of the keystep. Regarding figure 4, the keystep set between [44] and our work is the same and the yellow boxes (row A) represent ASR text of the narrations and not keysteps. Overall, our method uses exactly the same supervision as [44].
**Q6. Add supervision used to results tables**
All results in every table (1, 2, 3) use the same level of supervision, so we don’t add supervision explicitly in the tables. We will make sure to accentuate this in the paper.
**Q7. Importance of keystep vocabulary and its robustness to noise**
The proposed method uses a vocabulary set much larger than the set of keysteps for one task. E.g., an average task in COIN consists of 3.9 keysteps but the vocabulary set is 749 keysteps. Similarly, in HowTo100M representation learning (Sec 4.2), the vocabulary set is 10588 keysteps, which is much larger than the keystep set of any one task (and hence quite noisy). Therefore, the method is robust to noise in keysteps and we indeed use keysteps that are not part of the task for a given video.
**Q8. Visualization of the task graph**
Thank you for the suggestion! We add a portion of the mined task graph for CrossTask in the attached rebuttal PDF (Figure 1). It contains 61 out of 105 keysteps with top 5 transitions labeled (to avoid clutter). We do see some interesting properties (also see Fig 3 of main paper). For example, “lower jack” happens only after “brake on”, “close lid” after “open lid” whereas “cut cucumber” and “cut onion” can both happen after each other.
**Q9. Comparison on the setting used by [85], a contemporaneous work, appeared online 31 March 2023**
Thank you for noting that this work is contemporaneous. Regarding comparison with [85], apart from the fact that it is contemporaneous, we found that [85] and [44] use different experimental settings. We use the original setting of [44] for better reproducibility, given the detailed setup descriptions in that paper. In fact, DistantSup [44] performs much better in its original implementation compared to the adjusted settings reported in [85] (DistantSup [44] gets only 32.74 as reported in [85] vs the DistantSup authors’ originally reported 54.1 in [44] for COIN step classification, similarly 82.66 vs 90.0 for COIN task classification; see DS* in Table 1 of [85] vs results in [44]). Hence, there is some disadvantage to DistantSup [44] in the revised setting deployed in [85], though the reason is not elaborated in [85]’s paper. Therefore, we stick to comparison with [44] (where the existing method DistantSup achieves its better numbers) for fairness and better reproducibility. We observe clear gains over those numbers (Table 4 in the main paper).
**Q10. [A] is a similar work**
Thanks for pointing out concurrent [A]. It uses non-annotated data for discovering steps in instructional videos and shows downstream keystep localization (different than our task). We will add it in the final version of the paper and discuss the differences.
**Q11. Societal impact**
Please see the global comment above. Though we gave it careful thought, we do not see any negative societal impact specific to our contributions in this paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for highlighting the improved clarity in the supervision used by the proposed and prior works. I hope this clarity can also be added to a future version of the paper. I also appreciated the visualization of the task graph in the rebuttal.
It is a shame the authors could not experiment with the keystep vocabulary. I understand the vocabulary used within a task is much smaller than the vocabulary available to the model, however the available vocabulary, in COIN for instance, is still much smaller than the one used in HowTo100M and much smaller than the vocabulary available in the English language. It is also a shame the authors refuse to consider any potential negative societal impact.
Nonetheless, I will enter the discussion phase more positive than my initial review thanks to the clarity on supervision.
---
Reply to Comment 1.1.1:
Title: Thanks and follow-ups
Comment: We thank the reviewer for acknowledging our explanation of the supervision and visualization of the task graph. We are happy that the reviewer is **more positive** about the paper now. We answer the two questions raised by the reviewer below:
**Q1. Experiment with the keystep vocabulary**
We thank the reviewer for observing that we use a much larger vocabulary than that used within a task in COIN/CrossTask. This choice of larger keystep set shows the robustness of our method in the presence of irrelevant keysteps as we discussed above.
To further expand on the specific point of the reviewer *“how does the performance degrade with a larger keystep vocabulary containing keysteps which aren't used.”* – we extend our setup to further demonstrate the robustness of our method in the presence of irrelevant keysteps. In this setup, we add keysteps from HowTo100M into the clean annotated keystep set of COIN and CrossTask to simulate an increasingly larger vocabulary. For each test dataset, we randomly select $\alpha N$ keysteps from the 10588 keysteps used in HowTo100M, where $\alpha$ is a scaling factor and $N$ is the number of keysteps in the test dataset, and inject these keysteps into test dataset vocabulary. This results in a much larger vocabulary size of $(1 + \alpha) \times N$. We progressively increase $\alpha$ and evaluate the performance at each scale level. We show here the performance on text-only modality and we see the same trend in other modalities. The vocabulary size is shown in the first column (e.g., 1.5 x N implies $\alpha = 0.5$). The frame-wise accuracy of the DistantSupervision baseline and our method with the bigger vocabulary are shown in columns 2 and 3, respectively. Our relative improvement in accuracy over DistantSupervision [44] is shown in column 4.
**Zero-shot keystep recognition on COIN dataset (N = 749)**
| Vocabulary size | DistantSupervision [44] | Ours | Relative Gain |
| ----------- | :-----------: | :------------: | :-----------: |
| 1 x N (original set) | 9.8 | 16.3 | 66% |
| 1.5 x N | 8.4 | 14.1 | 68% |
| 2 x N | 7.7 | 13.3 | 73% |
| 4 x N | 6.2 | 10.9 | 76% |
| 5 x N | 5.6 | 10.6 | 89% |
| 10 x N | 4.3 | 8.4 | 95% |
_______________________________________
**Zero-shot keystep recognition on CrossTask dataset (N = 105)**
| Vocabulary size | DistantSupervision [44] | Ours | Relative Gain |
| ----------- | :-----------: | :------------: | :-----------: |
| 1 x N (original set) | 16.1 | 20.1 | 25% |
| 1.5 x N | 13.3 | 16.8 | 26% |
| 2 x N | 12.6 | 15.7 | 25% |
| 4 x N | 10.1 | 12.9 | 27% |
| 5 x N | 8.9 | 11.7 | 31% |
| 10 x N | 7.5 | 10.1 | 35% |
_______________________
As expected, having a bigger keystep vocabulary containing irrelevant keysteps deteriorates the performance of both methods (see columns 2 and 3). The more irrelevant keysteps, the larger the reduction in performance. Nevertheless, our method is noticeably more robust to large keystep sets when compared to the baseline, and its advantage over the baseline steadily increases with the increasing number of keysteps (see column 4).
This demonstrates the regularization power of our task graph, which corrects the noisy similarity-based predictions with task graph priors (L196-200 in the main paper), thus resulting in a robust keystep recognition. These results empirically demonstrate the advantage of our task graphs under a much larger vocabulary setting. We will elaborate this result in the final version of the paper.
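The vocabulary-scaling setup above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: for a clean vocabulary of size $N$, we inject $\alpha N$ distractor keysteps sampled from a larger pool (e.g., the HowTo100M keystep set), yielding a vocabulary of size $(1 + \alpha) \times N$. All names here are hypothetical.

```python
import random

def inject_distractors(vocab, distractor_pool, alpha, seed=0):
    """Simulate a larger, noisier keystep vocabulary: extend a clean
    vocabulary of size N with alpha * N irrelevant keysteps sampled
    without replacement from a larger pool."""
    rng = random.Random(seed)
    n_extra = int(alpha * len(vocab))
    # Only sample distractors that are not already in the clean vocabulary.
    pool = [k for k in distractor_pool if k not in set(vocab)]
    extras = rng.sample(pool, n_extra)
    return list(vocab) + extras

# E.g., alpha = 0.5 grows a 4-keystep vocabulary to 6 entries.
```

Evaluation would then run the same zero-shot keystep recognition over the enlarged vocabulary at each scale level $\alpha$.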
**Q2. Societal Impact**
We appreciate the reviewer’s suggestion to delve deeper here. Video representation learning and keystep recognition in general could risk negative impacts if any bias from the dataset influences the representations. For example, COIN/CrossTask/HowTo100M are collected from YouTube and they may only contain videos having certain kinds of home and those with access to recording devices and microphones. Such biases could result in failures when these systems are deployed in a diverse set of environments. For example, keystep recognition in a cluttered and low-end kitchen might not work if the model is trained in clean and tidy kitchens. In addition, using these video representations for AR/VR applications may raise user privacy concerns, depending on how the dataset creators went about collecting the video samples. We will emphasize these in our final draft. | Summary: In this work, the authors propose to address keystep recognition in instructional videos. To achieve this goal, they attempt to build a task graph from videos, which show how keysteps are related to each other. Based on this graph, one can further update the preliminary keystep assignment, when the initial prediction is with low confidence.
Strengths: 1 The key step recognition is an important topic for procedural activity understanding.
2 The idea of building a task graph seems to be technically sound as a key step relation prior.
3 The experiments show the effectiveness of the method.
Weaknesses: (1) The task graph is pre-computed offline or built online? I assume, it should be built offline, according to the unannotated dataset of narrated instructional videos. Moreover, when working on another dataset, the task graph should be re-computed again?
(2) What is the computation time of Path Search, when updating the low-confident key step prediction?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging comments and insightful feedback.
**Q1. The task graph is pre-computed offline or built online? I assume, it should be built offline, according to the unannotated dataset of narrated instructional videos. Moreover, when working on another dataset, the task graph should be re-computed again?**
Yes, the task graph is pre-computed offline. When working on a dataset with a different vocabulary, the task graph needs to be re-computed to accommodate the difference in the ground-truth keystep set. We generalize with HowTo100M where we use a large-scale keystep set (>10k keysteps). More precisely, if the keystep set were the same in all datasets, we would not have to compute the task graph again.
**Q2. What is the computation time of Path Search, when updating the low-confident key step prediction?**
PathSearch uses Dijkstra's algorithm (L208), which has a worst-case complexity of $O(V^2)$ for $V$ nodes (equal to the number of keysteps; $V = 105/749/10588$ for CrossTask/COIN/HowTo100M). This computation is minimal. When performing keystep assignment, PathSearch is less compute-intensive than video feature extraction, since a forward pass typically costs more than $O(V^2)$; hence no delay is observed compared to the baselines. Our idea thus adds minimal overhead while achieving notable advantages in keystep recognition and video representation pretraining. | Summary: This paper addresses the problem of key-step recognition and localization in instructional videos by learning and leveraging a probabilistic task graph. The proposed method first localizes key-steps mined from text sources (such as wikiHow) in videos by measuring the similarity between visual-narration features in videos (obtained from pretrained models) and key-step features. It then constructs a graph whose nodes are key-steps and whose edges are transitions between key-steps (obtained from the localization results). Finally, the initial key-steps whose confidence is lower than a threshold will be replaced by the key-steps from the optimal key-step path in the task graph. The experimental results show some improvement compared to key-step recognition baselines.
* The reviewer read the author rebuttal and other reviews.
Strengths: - The paper is easy to read and overall framework is sufficiently well presented.
- Leveraging a task graph to improve recognition is interesting (although the paper is not the first work addressing it).
Weaknesses: - The paper claims that it is the first to use task graphs to enhance keystep predictions in instructional video (see line 50). The reviewer disagrees with this claim. Paprika [85] has addressed learning and leveraging task graphs to learn video representations for better key-step recognition. The final goal that the submission and Paprika pursue is almost the same. In fact the submission has a narrower scope compared to [85], as it does not address representation learning while [85] addresses it along with better key-step recognition.
- The graph learned in the paper is not exactly a "task graph" (a task graph encodes all possible ways of doing a task). It is rather a transition probability model between key-steps. In particular, a limitation of the transition model in the paper is that it does not properly encode task executions, e.g., a transition model allows multiple transitions between two key-steps (e.g., a-->b-->a-->...-->b-->a), which may be invalid for task execution. This is because of the short-sightedness of the transition model that does not model long-range action dependencies.
- Related to the comment above, there is a need for an experiment that measures the edit distance between the predicted key-step sequences and the ground-truth key-step sequences, compared with baselines.
- Compared to [85] which also builds a task graph, the advantage of the studied method is not clear. Given the high similarity between the two works, there is a need to include [85] with its experimental setting in the paper for a fair comparison.
- The key-step to narration assignment can lead to violation of the transition model (see the formulation after line 205). Specifically, when the similarity between video and key-step is above a threshold, the key-step will be assigned to the video frame, and when it is lower than the threshold, it will be assigned by following the transition model. Additionally, there is a need for an experiment that shows the effect of the hyper-parameter $\gamma$ on the step recognition results (e.g., horizontal axis $\gamma$ and vertical axis $acc$).
- The evaluation metric in the paper does not consider background. Given that at least 40% of instructional videos in CrossTask and COIN consist of background frames, there is a need to measure how well the proposed method avoids assigning key-step labels to the background frames. By measuring ACC and IOU for only key-step regions, one cannot evaluate whether the paper wrt SOTA assigns large or small portions of background frames as key-steps.
- In Table 1, for Video-only and Video-Text, both ACC and IOU of the method are very close to SOTA, which makes the effectiveness of the learned transition model questionable.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In addition to the questions raised in the weakness section:
- From Table 1, one can see that the improvement of the performance on COIN is often more significant than on CrossTask. Why?
- Is the task graph built based on the videos and narrations in the training set and uses the same task graph during test? Or is the task graph only built based on the test set?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no discussion of limitations and possible negative societal aspects of the work and need to be included in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments. Our responses show where multiple requests from the reviewer were addressed in the submitted paper. Also please note the concurrent work policy of NeurIPS re: [85].
**Q1. Comparison with [85]**
Paprika [85] is a contemporaneous work per the conference guidelines (see global rebuttal), meaning a comparison is not required.
Regarding relative contributions: As we were completing our submission, we saw their paper appear on arXiv; we explain the important differences with [85] in L96-105. Paprika learns a task graph ONLY for video representation learning, whereas we use the task graph for both keystep recognition (Sec 4.1) and video representation learning (Sec 4.2). Paprika does not perform zero-shot keystep recognition. Our work is the first to use task graphs to enhance keystep predictions. Hence our scope is broader than [85].
Regarding empirical comparisons: In [85], the resulting task graph has equi-probable edges, whereas ours is based on empirically observed transitions in video. A non-probabilistic baseline in Table 1(a) in the attached rebuttal PDF shows that a probabilistic graph performs better. Important: please see our response to Q9 for Reviewer wv2a discussing issues with direct comparison with [85] on representation learning.
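To make the contrast concrete, a minimal sketch of the empirical transition graph described here (a hypothetical reimplementation in our own naming, not the authors' code): edge weights are observed transition frequencies normalized per source keystep, rather than equi-probable edges.

```python
from collections import defaultdict

def build_task_graph(keystep_sequences):
    """Build a probabilistic task graph from per-video keystep sequences.
    Nodes are keysteps; edge a -> b carries the empirical probability of
    transitioning from keystep a to keystep b."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in keystep_sequences:
        for a, b in zip(seq, seq[1:]):
            if a != b:                      # count transitions, not self-loops
                counts[a][b] += 1
    graph = {}
    for a, nbrs in counts.items():
        total = sum(nbrs.values())
        graph[a] = {b: c / total for b, c in nbrs.items()}
    return graph

# Toy example: two observed videos of the same task.
g = build_task_graph([["open lid", "add salt", "stir", "close lid"],
                      ["open lid", "stir", "add salt", "stir", "close lid"]])
```

An equi-probable variant would simply set every outgoing edge of a node to the same weight, discarding the frequency information that the non-probabilistic baseline in Table 1(a) shows to be useful.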
**Q2. Transition probabilities vs “task graph” e.g. transition model allows multiple transitions between two key-steps, which may be invalid**
As a counterexample to the reviewer’s claim, keystep a: “add salt to boiling soup” and keystep b: “stir the soup to dissolve salt” can have a→b→a→b…. transitions when a user is not sure about how much salt to add and hence this transition cannot be disallowed. Specifically, given the stochastic nature of human activity, it is impossible to encode ALL possible ways of doing a task. Thus, a transition probability model is a reasonable way to capture all typical transitions, also helping accommodate transitions outside a fixed recipe. Note that the PathSearch algorithm (L205-211) finds a keystep path between high confidence keysteps, regardless of the duration in between, i.e., our method can also consider long-range action dependencies.
Please see the “Linear Script” baseline (L289-290) where we take the explicit order of steps. Also see the “Auto-Regressive” (L280-282) baseline that has long-range action dependencies and still performs worse. Both these baselines show that our explicit task graph is helpful in assigning keysteps.
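The PathSearch idea can be sketched as a standard shortest-path computation (an illustrative reimplementation under our own assumptions, not the authors' code): the most probable keystep path between two high-confidence anchors maximizes the product of transition probabilities, which equals minimizing the sum of their negative logs, so Dijkstra's algorithm applies directly.

```python
import heapq
import math

def path_search(graph, start, goal):
    """Most probable keystep path between two high-confidence anchors.
    graph[a][b] is the transition probability from keystep a to b;
    edge cost -log(p) is non-negative, so Dijkstra's algorithm is valid."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue                        # stale queue entry
        for nbr, p in graph.get(node, {}).items():
            nd = d - math.log(p)            # -log turns products into sums
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [goal], goal               # reconstruct anchor-to-anchor path
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Since the anchors can be arbitrarily far apart in time, the path found this way is not limited to adjacent-frame transitions, which is the long-range dependency point made above.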
**Q3. Add edit distance as a metric**
Thanks for the suggestion. We think edit distance is a useful additional metric, since it captures how similar the sequence of keysteps are w.r.t. the ground truth. Please see Table 1(b) in the attached PDF. On this metric, too, our method outperforms baselines for all modalities for both datasets. We will add this metric in the paper.
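For reference, the metric in question is the standard Levenshtein edit distance between the predicted and ground-truth keystep sequences; a minimal dynamic-programming sketch (our own illustration, not the evaluation code):

```python
def edit_distance(pred, gt):
    """Minimum number of insertions, deletions, and substitutions needed
    to turn the predicted keystep sequence into the ground-truth one."""
    m, n = len(pred), len(gt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                        # delete all remaining predictions
    for j in range(n + 1):
        dp[0][j] = j                        # insert all remaining ground truth
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]
```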
**Q4. Violation of transition model (L205) and effect of $\gamma$**
Please note that Supp. Table 1 already includes the suggested experiment with $\gamma$. See also L245.
Regarding transition violations – the algorithm first chooses instances where the similarity between video and key-step is above a threshold and between any two such time instants, we use the transition model (L203-205). We also have another version of the transition model - Bayesian Recursive Filter (BRF), (please see footnote 2 of the main paper and section 6 of the supp). In this version, we make predictions based only on the transition model and we observe that the performance in that case is weaker. Applying the transition model only between two high confidence segments is empirically better. Establishing this was part of our research exploration, since we originally pursued both models in parallel.
**Q5. Metrics do not consider background**
We also have metrics that do consider background. The chosen setting is consistent with prior work [19]. Accuracy metric is not ideal using background due to class imbalance (simply predicting everything as background results in high Acc). Note that IoU (reported in our submission) does consider background and penalizes false positives. Moreover, we add F1 metrics to further address the background frames question. See Table 1(b) in the attached rebuttal PDF. Our method outperforms all the baselines on all metrics consistently, including IoU and F1, which consider background.
**Q6. Performance close to SOTA in Video-only and Video-Text**
The increase in Acc for COIN dataset is 2.2% and 3.6% for video-only and video-text wrt SOTA. The difference is indeed lower for CrossTask while still being statistically significant. We attribute this to the smaller keystep set in CrossTask (105 vs 749 keysteps in COIN) and thus the transitions having lower predictability on average. The video features may themselves have low perceptual errors as we still see a gain of 3.9% in CrossTask text-only. As suggested by reviewer G41M, if we split the performance into low and high predictability segments, we see that the performance gain in highly predictable keystep transitions is more significant (3.4% for video-only and 3.5% for video-text). Therefore, our method performs better with a larger keystep set — important for the natural setting.
**Q7. Gain on COIN more than CrossTask**
Please see response to Q6. In short, COIN offers a much larger keystep taxonomy, and hence a greater need for our regularization.
**Q8. Construction of task graph on training or test set?**
For zero-shot keystep recognition (Sec 4.1), the task graph is built only based on the test set since we do not want supervision transfer between different splits. For representation learning (Sec 4.2), the task graph is built on the train set to pseudo label itself.
**Q9. Limitations**
We have discussed limitations in Supp (L59-65). Also see the global rebuttal. Though we've given it careful thought, we do not see negative societal impact for our contributions.
---
Rebuttal 2:
Comment: Thanks for your responses above. I believe the rebuttal addressed several important comments I raised in my review. Additionally, the point about concurrent work [85] is noted/valid. I will enter the discussion phase being more positive about this submission and willing to increase my score. | Summary: The paper considers the task of "keystep" recognition. Keystep is one of the N sub-tasks that are performed sequentially to achieve a goal / task. Keysteps can have causal dependencies. Prior work has the following limitations:
(1) only considers each keystep in isolation, without considering the overall task (sequence of keysteps) [this is suboptimal, especially when powerful models like Transformers can efficiently model temporal context]
(2) Keysteps are expected to conform exactly to a predetermined script of actions [sequence is not always precisely defined — there can be order inversions, skips, alternate keysteps, etc.]
(3) Classification of fixed-size, pre-segmented chunks [this is inaccurate].
The paper presents an approach to probabilistically represent the keystep transitions for tasks in the form of a "task graph". The approach leverages the prior probabilities for keystep transitions in the task graph in cases where there is low confidence in the video evidence. This results in improvements in performance on the zero-shot keystep localization task. It further improves video representations for other downstream tasks.
Strengths: (1) The proposed approach explicitly leverages the probabilities of the task graph. This has the advantage that it is more interpretable than, e.g., encoding the relevant nodes and keysteps as an embedding, and then making predictions based on this.
(2) There are consistent performance improvements in all tasks, especially zero-shot keystep prediction. Also, a big plus is the improvement in downstream tasks that leverage the better video representation trained on predicted pseudo-labels.
(3) It is nice that the task graph is constructed across the entire dataset, and contains common keysteps across tasks. This makes the task graph extensible across (theoretically) infinite data. With a much larger task graph, one could encode the keysteps for a majority of human activities present in videos.
Weaknesses: (1) The approach involves constructing a task graph that probabilistically models the transitions between keysteps. Inference on this task graph is also done by a principled path-search algorithm. However, to incorporate the probabilities from the task graph, a simple confidence-based thresholding operation is proposed. I.e., if the evidence (confident predictions) is below a certain threshold, then fall back to the prior. This seems simplistic, and might not capture the complexity of individual data samples. Do we expect that a single threshold value across an entire dataset is a reasonable choice? Alternately, is there a way to have a continuous Bayesian approach incorporating the evidence and the prior for all samples?
(2) Solely relying on priors for low-confidence perceptual predictions can work for tasks with very high predictability of keysteps. For almost all others, this might cause large errors. Is there a breakdown of errors from the perceptual model and the task-graph-prior model? Please include this in future versions of the paper.
(3) Keystep classification on a temporal span of exactly 1-second-long clips is arbitrary. What’s the distribution of keystep temporal spans? Prior work indeed does this, but it would be nice to progress beyond this. However, I recognize that this may become very challenging in the case of novel or rare keysteps.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Frame-wise accuracy metric is defined (in L264) as “the fraction of frames with ground truth k_i that has the correct assignment.” Accuracy metrics usually penalize both false negatives and false positives. The definition of frame-wise accuracy suggests that this is a recall-like metric (only penalizes false negatives). Am I interpreting the metric definition correctly?
(2) A question that arises wrt Strengths (1) is — is using the task graph probabilities explicitly, better than providing the relevant nodes in the task graph as input condition / context to a “revise” model (similar to baseline “Auto-Regressive [65, 69]”)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not explicitly discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for encouraging comments and insightful feedback.
**Q1. Do we expect that a single threshold value across an entire dataset is a reasonable choice? Alternately, is there a way to have continuous bayesian approach incorporating the evidence and the prior for all samples?**
Thanks for the question. Our submission did consider this. We provided experiments with a Bayesian Recursive Filter (BRF) variant of our approach that does consider the evidence and the prior for all samples; see Section 6 and Table 1 in the supplementary, and Footnote 2. However, the performance is lower than that of the proposed method because BRF is causal and hence observes less context, and errors made in earlier prediction rounds are propagated to all subsequent time instances (Supp L49-52). The proposed simpler method is empirically better than the more complicated BRF. Establishing this was part of our research exploration, as we originally pursued both models in parallel.
As discussed in L245, performance is not highly sensitive to choices of the threshold $\gamma$ in the range [0.3, 0.5] (also see ablation in Supp. Table 1). We experimented with adaptive thresholding and found the performance to be inferior to using a constant threshold. The result can be seen in Table 1(a) in the attached rebuttal PDF. We adaptively identify one threshold per video, such that 50% of the video clips are considered to have “high-confidence” similarity scores. This works even for videos dominated by low similarity scores. The performance drops in this setting. We attribute this decrease to the fact that the model is assigning noisy samples as the high-confidence keystep and hence the overall keystep prediction is affected. We will add this ablation in the final paper.
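The per-video adaptive thresholding described above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and `quantile` parameter are ours, not from the paper): mark the top half of each video's similarity scores as high-confidence, so the remaining clips fall back to the task-graph prior.

```python
import numpy as np

def adaptive_high_confidence(similarity, quantile=0.5):
    """Per-video adaptive threshold (illustrative sketch): clips whose
    keystep-video similarity is at or above the video's `quantile` value
    are treated as high-confidence; the rest use the task-graph prior."""
    thresh = np.quantile(similarity, quantile)
    return similarity >= thresh

# Hypothetical similarity scores for one video's 1-second clips.
scores = np.array([0.1, 0.9, 0.4, 0.2, 0.8, 0.6])
print(adaptive_high_confidence(scores))  # half the clips are high-confidence
```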
**Q2. Solely relying on priors for low-confidence perceptual predictions can work for tasks with very high predictability of keysteps. For almost all others, this might cause large errors. Is there a breakdown of errors from the perceptual model and the task-graph-prior model?**
Thanks for suggesting this breakdown of errors. First, given the set of low-confidence perceptual predictions, we split it into two sets – the first where the keystep predictability using the task graph prior is high and the second where it is low. To measure keystep predictability, we use the Shannon entropy of the starting keystep. We choose a threshold such that the predictions are split in half. High Shannon entropy means low keystep predictability and vice versa. Table 1(c) in the attached rebuttal PDF shows the performance. We see that our task graph outperforms the baselines even in the case of low predictability of keysteps. Of course, the gain is lower than it is for cases with high keystep predictability. Thus, our task graph prior can be seen as a way to correct noisy perceptual predictions, even in cases of low predictability. We will add this observation in our final paper.
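The entropy-based predictability measure above can be sketched numerically. This is an illustrative reconstruction (names and the toy transition matrix are ours): the Shannon entropy of a keystep's outgoing transition distribution in a task graph, where low entropy means high predictability.

```python
import numpy as np

def transition_entropy(graph, keystep):
    """Shannon entropy (nats) of the outgoing transition distribution
    of `keystep` in a row-stochastic task-graph matrix (illustrative)."""
    p = graph[keystep]
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

graph = np.array([[0.0, 1.0, 0.0],   # keystep 0 always goes to keystep 1: predictable
                  [0.5, 0.0, 0.5]])  # keystep 1 is a coin flip: less predictable
print(transition_entropy(graph, 0), transition_entropy(graph, 1))  # 0.0 and ln 2
```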
**Q3. Keystep classification on a temporal span of exactly 1-second-long clips is arbitrary. What’s the distribution of keystep temporal spans? Prior work indeed does this, but it would be nice to progress beyond this. However, I recognize that this may become very challenging in the case of novel or rare keysteps**
The average temporal span of a keystep is 14.91s and 9.61s for COIN and CrossTask, respectively. Our setting is consistent with prior work [78,47]. Choosing non-overlapping contiguous segments of 1 second makes the keystep classification uniform and independent of keystep duration (some are long and some are short). For longer keysteps, the model can classify the same keystep for contiguous segments, denoting a longer temporal span of a particular keystep. So, in short, processing 1-second clips does not preclude identifying longer keystep instances when they occur. Work progressing beyond 1s video clips is orthogonal to our contributions.
**Q4. Accuracy metrics usually penalize both false negatives and false positives. The definition of frame-wise accuracy suggests that this is a recall-like metric (only penalizes false negatives). Am I interpreting the metric definition correctly?**
The definition in L264 only means that the ground truth set does not contain non-labelled (i.e. background) segments. The accuracy is the standard (Correct predictions)/(All non-background segments). Concretely, if $\phi$ is the background label, then we use the standard multi-class accuracy as
$Acc = \frac{\sum \limits_{V \in D} \sum \limits_{t=1}^{|V|} 1(\hat{k}_t = k_t)}{\sum \limits_{V \in D} \sum \limits_{t=1}^{|V|} 1(k_t \neq \phi)}$
Here, D is the dataset and V is video. This setting is consistent with prior work [19]. Using background in the accuracy metric is not ideal due to class imbalance (>50% background in both COIN and CrossTask). Note that IoU (another metric we report in Table 1) still penalizes false positives that are in the background. In the denominator of IoU we do add the false positives. We also add the F1 metric (including the background) for completeness in the rebuttal PDF Table 1(b). Our method outperforms the baselines and SOTA in all metrics.
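As a minimal sketch of the metric just described (the function name and the `-1` background label are our illustrative choices, not from the paper), the accuracy counts correct predictions over non-background segments only:

```python
import numpy as np

def framewise_accuracy(pred, gt, background=-1):
    """Multi-class accuracy over non-background segments only.

    pred, gt: per-segment keystep labels over the whole dataset;
    `background` marks unlabelled segments, which are excluded from
    the denominator (a background ground truth is never 'correct')."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    mask = gt != background
    return np.sum(pred[mask] == gt[mask]) / np.sum(mask)

# Toy stream: 6 segments, two of them background.
gt   = np.array([3, 3, -1, 5, 5, -1])
pred = np.array([3, 7,  3, 5, 5,  5])
print(framewise_accuracy(pred, gt))  # 3 correct out of 4 labelled -> 0.75
```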
**Q5. A question that arises wrt Strengths (1) is — is using the task graph probabilities explicitly, better than providing the relevant nodes in the task graph as input condition / context to a “revise” model (similar to baseline “Auto-Regressive [65, 69]”)?**
Please note that we provided a baseline that addresses this question. Pruning keysteps (L283-288) is the baseline that first clusters the keysteps and tries to find the “relevant nodes” and provides only those to the “revise” model. The performance is stronger than SOTA and auto-regressive, but still weaker than our method. Thus, explicit task graph probabilities are better than both auto-regressive baseline and providing only “relevant nodes”.
**Q6. Limitation not explicitly discussed in the main paper.**
We have discussed limitations in Supp (L59-65). Please also see the global comment above. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments.
**Three reviewers lean towards acceptance**. We believe our responses below clarify the items raised in the reviews, which include questions about metrics and a concurrent paper [85] (*vrnW*) and a possible misunderstanding about supervision (*wv2a*). There are several instances where requested experiments/baselines are indeed already in our original submission, detailed below.
All the reviewers are positive about the method – *iD7r* states the method is intuitive, *G41M* calls it more interpretable, *vrnW* suggests the method is technically sound and finally, *wv2a* agrees that it is well-motivated. Further, reviewer *iD7r* states that “proposed method offers substantial gains over state-of-the-art methods” and *qtTp* says the experiments “show effectiveness”. *G41M* agrees that “there is consistent performance improvements in all tasks, especially zero-shot keystep prediction” and that improvement in downstream tasks is a “big plus” by using a “better video representation”. *G41M* correctly notes that the task graph is constructed across the entire dataset and “with a much larger task graph, one could encode the key steps for a majority of human activities present in videos.” *wv2a* notes that the “benefits are not constrained to task of keystep prediction but distinct tasks of key step forecasting and task classification”. Finally, we thank *vrnW* for noting that the paper is easy to read and *wv2a* for noting that we “clearly understand task and nature of instructional videos well”.
We address the weakness individually to each reviewer alongside their review.
**We would also like to draw the attention of the AC and the reviewers to several contemporaneous works [70, 84, 85, A] (thanks to *wv2a* for pointing out [A])**. All of them were published at CVPR 2023 in June 2023 (**after** this paper's submission deadline) and first appeared online (on arXiv) within two months of the paper submission (i.e., after March 17th, 2023). This means they are concurrent work per the NeurIPS policy given in the call for papers: **“...for the purpose of the reviewing process, papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work.”**
That said, all of them use different ideas for video representation learning. [85] is most relevant to our work, and (upon seeing its arXiv post as we were finishing our submission) we differentiated our work from it in detail in the submitted paper (see L96-L105). These contemporaneous works further suggest the importance of video representation learning and of using task graphs for keystep localization.
Finally, we have included a **Limitations section in the supplementary (Supp L59-65)**. As for dependencies, our method relies on the strength of the perceptual prediction model, and our task graph construction and keystep prediction can be noisy if the perceptual model is weak. We do not foresee any negative societal impact.
Pdf: /pdf/03f5627461b8908ec19b6122977101475d655736.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper focuses on the procedural activity understanding task. Considering that the feature-keystep matching in current methods is independent and fails to encapsulate the rich variety, the authors propose a video-mined task graph as a prior to update the preliminary keystep assignments.
Strengths: 1. Using the graph structure to generate broader context and assign the correct keystep labels is an intuitive idea.
2. The proposed method offers substantial gains over state-of-the-art methods.
Weaknesses: 1. What is the cost of building the video-mined graph? Will this graph lead to great overhead?
2. The authors use the graph structure to amplify the context receptive field. How about using the attention mechanism (e.g. self-attention), since it is also a form of relation extraction?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the cost of building the video-mined graph? Will this graph lead to great overhead?
2. The authors use the graph structure to amplify the context receptive field. How about using the attention mechanism (e.g. self-attention), since it is also a form of relation extraction?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. What is the cost of building the video-mined graph? Will this graph lead to great overhead?
2. The authors use the graph structure to amplify the context receptive field. How about using the attention mechanism (e.g. self-attention), since it is also a form of relation extraction?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for encouraging comments and feedback.
**Q1. What is the cost of building the video-mined graph? Will this graph lead to great overhead?**
The overhead is minimal since the only extra computation is counting transitions given similarity scores, followed by averaging. There are only $Nd$ additional operations for $N$ videos, each $d$ seconds long, followed by averaging across $\mathcal{K}$ keystep classes. In fact, the cost of building the graph is comparable to a forward pass on only one test video, which involves significantly more FLOPs. Hence our idea adds minimal overhead while achieving notable advantages in both keystep recognition and video representation pretraining.
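The transition-counting procedure described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (names, and a uniform fallback for keysteps with no observed outgoing transitions), not the authors' exact implementation:

```python
import numpy as np

def mine_task_graph(videos, num_keysteps):
    """Estimate keystep-transition probabilities from per-second
    pseudo-labels. `videos` is a list of int label sequences (one
    keystep per 1-second clip); one pass over N videos of length d
    costs only about N*d counter increments."""
    counts = np.zeros((num_keysteps, num_keysteps))
    for labels in videos:
        for prev, nxt in zip(labels[:-1], labels[1:]):
            counts[prev, nxt] += 1
    # Row-normalize into transition probabilities (uniform if unseen).
    row = counts.sum(axis=1, keepdims=True)
    return np.where(row > 0, counts / np.maximum(row, 1), 1.0 / num_keysteps)

graph = mine_task_graph([[0, 0, 1, 2], [0, 1, 1, 2]], num_keysteps=3)
print(graph[0])  # keystep 0 is followed by 0 once and by 1 twice
```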
**Q2. The authors use the graph structure to amplify the context receptive field. How about using the attention mechanism (e.g. self-attention), since it is also a form of relation extraction?**
The auto-regressive baseline (L280 and Table 1) does exactly this self-attention. In this baseline, an auto-regressive model first learns the transitions of the predicted keysteps. In the refining step, the model takes previous predictions and the current keystep-video score and generates the next prediction, typical of a self-attention mechanism. Our explicit representation significantly outperforms this baseline. It can be concluded that the proposed explicit task graph prior is more effective than the implicit context modeling of self-attention.
Temporally Disentangled Representation Learning under Unknown Nonstationarity | Accept (poster) | Summary: This paper addresses the problem of identifying latent representations from sequential data which is stationary within contexts, and non-stationary between contexts. Existing work has either conditioned on observed auxiliary variables that indicate which context one is in (e.g. time contrastive learning and the followups), or it relied on mutually (conditionally) independent latents and did not allow time-delayed latent causal influences. The paper presents an approach called Nonstationary Temporal Disentangled Causal Representation Learning (NTDC) and shows conditions under which they can identify time-delayed latent causal variables and their relations without the need for observed auxiliary variables. They formulate the problem as a discrete Markov process and establish identifiability of the latent independent components. They give impressive experimental results on both synthetic and real-world datasets to demonstrate the proposed method's success in recovering latent variables.
Strengths: * Really nice experimental results section. The results on simulated data are strong, and I appreciated the section on the MoSeq dataset analyzing mouse behaviour.
* I think the graphical model that the paper analyses is very practically useful for abstracting time series data. By breaking a time series into conditionally stationary domains, $c_t$, that change more slowly, you gain a more interpretable model of the underlying behaviour of the system (the mouse experiments are nice illustration of this). Empirically, we've seen results similar to this before in the original MoSeq paper, but it is nice to have identifiability results to go with these approaches.
Weaknesses: * Assumptions (6) and (7) in Theorem 2 are unintuitive to the point that I'm not sure the theorem is useful at all. The main value of identifiability theorems are to give a set of preconditions on the data under which a practitioner can expect a disentanglement routine would work. But I'm not sure that there are any practitioners who would be able to judge whether those assumptions hold in their data, and there was no attempt to give any intuition for when they might hold and when they would fail.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Can you give any more intuitive assumptions that are sufficient for Theorem 2 to hold? Or give examples of distributions for which they hold and examples for which they fail? If done well, this would make the theory section stronger.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper already does a reasonable job of outlining the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful to the reviewer for the informative feedback. Please see our point-to-point response below.
> Weaknesses: ... But I'm not sure that there are any practitioners who would be able to judge whether those assumptions hold in their data ...
Firstly we would like to mention that our assumed conditions are generally testable. One can run experiments with multiple seeds and compute the similarity (MCCs) between the learned representations; if the representation is not unique, then our assumptions must be violated.
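Such a multi-seed test can be sketched as follows. This is an illustrative MCC computation (our own minimal implementation, matching components with the Hungarian algorithm), not the paper's exact evaluation code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(z1, z2):
    """Mean Correlation Coefficient between two learned representations
    (samples x components). High MCC across seeds is consistent with
    identifiability; low MCC means the representation is not unique,
    so the assumed conditions must be violated."""
    d = z1.shape[1]
    corr = np.abs(np.corrcoef(z1.T, z2.T)[:d, d:])  # d x d cross-correlations
    rows, cols = linear_sum_assignment(-corr)        # maximize total correlation
    return corr[rows, cols].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))
# A permuted, rescaled copy of z matches perfectly up to trivial indeterminacies.
print(round(mcc(z, 2.0 * z[:, [2, 0, 3, 1]]), 4))  # -> 1.0
```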
> Q1: Can you give any more intuitive assumptions that are sufficient for Theorem 2 to hold? Or give examples of distributions for which they hold and examples for which they fail? If done well, this would make the theory section stronger.
This assumption was first introduced in GCL [1], namely, "sufficient variability", to extend the modulated exponential families [2] to general modulated distributions. Essentially, the condition says that the nonstationary regimes $c$ must have a sufficiently complex and diverse effect on the transition distributions. In other words, if the underlying distributions comprise relatively many domains of data, the condition generally holds true. For instance, in the linear Auto-Regressive (AR) model with Gaussian innovations where only the noise variance changes, the condition reduces to the statement in [3] that the variance of each noise term fluctuates somewhat independently of the others in different nonstationary regimes. The condition is then easily attained if the variance vector of the noise terms in any regime is not a linear combination of the variance vectors of the noise terms in the other regimes.
We further illustrate the condition using the example of modulated conditional exponential families in [1]. Let the log-pdf $q(z_{it} \vert \mathbf{z}\_{\text{Hx}}, c)$ of each component be that of a conditional exponential family distribution of order $k$ given nonstationary regime $c$ and history $\mathbf{z}\_{\text{Hx}}$:
$$q(z_{it} \vert \mathbf{z}\_{\text{Hx}}, c) = q_i(z_{it}) + \sum_{j=1}^k q_{ij}(z_{it}) \lambda_{ij}(\mathbf{z}\_{\text{Hx}}, c) - \log Z(\mathbf{z}\_{\text{Hx}}, c),$$
where $q_i$ is the base measure, $q_{ij}$ is the function of the sufficient statistic, $\lambda_{ij}$ is the natural parameter, and $\log Z$ is the log-partition. Loosely speaking, sufficient variability holds if the modulation by $c$ of the conditional distribution $q(z_{it} \vert \mathbf{z}_{\text{Hx}}, c)$ is not too simple in the following sense:
1. Higher order of $k$ ($k>1$) is required. If $k=1$, the sufficient variability cannot hold;
2. The modulation of $\lambda_{ij}$ by $c$ must be linearly independent across regimes $c$, and the sufficient statistics functions $q_{ij}$ cannot all be linear, i.e., we require higher-order statistics.
Further details of this example can be found in Appendix B of [1]. In summary, we need the modulation by $c$ to have diverse (i.e., distinct influences) and complex impacts on the underlying data generation process.
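In the linear-Gaussian AR special case mentioned earlier, the condition can be checked numerically as a rank condition on the regimes' noise-variance vectors. The following sketch uses hypothetical numbers of our own choosing:

```python
import numpy as np

# Noise-variance vectors: one row per nonstationary regime, one column
# per latent component (hypothetical values for illustration).
regimes_ok  = np.array([[1.0, 2.0, 1.0],
                        [2.0, 1.0, 3.0],
                        [1.0, 1.0, 2.0]])
regimes_bad = np.array([[1.0, 2.0, 1.0],
                        [2.0, 4.0, 2.0],   # = 2 x regime 0: linearly dependent
                        [1.0, 1.0, 2.0]])

def variability_holds(v):
    # In this special case the condition asks that no regime's variance
    # vector be a linear combination of the others, i.e. full row rank.
    return np.linalg.matrix_rank(v) == v.shape[0]

print(variability_holds(regimes_ok), variability_holds(regimes_bad))  # True False
```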
# References:
[1] Hyvarinen, Aapo, Hiroaki Sasaki, and Richard Turner. "Nonlinear ICA using auxiliary variables and generalized contrastive learning." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
[2] Hyvarinen, Aapo, and Hiroshi Morioka. "Unsupervised feature extraction by time-contrastive learning and nonlinear ICA." Advances in Neural Information Processing Systems 29 (2016).
[3] Matsuoka, Kiyotoshi, Masahiro Ohoya, and Mitsuru Kawamoto. "A neural net for blind separation of nonstationary signals." Neural networks 8.3 (1995): 411-419.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your insightful comments on our paper. If you think our response needs further clarification, or if you have additional questions, please don't hesitate to let us know. Your feedback is highly valued, and we're ready to engage in further discussion if needed.
---
Rebuttal Comment 1.2:
Title: Sufficient variability
Comment: Thanks for the clarification. This discussion is useful and should be in the paper: I'm someone who knows this literature fairly well, and while it looked like a sufficient variability-style assumption, I missed that it was exactly sufficient variability. Despite the fact that sufficient variability is now a commonly used assumption, I think a description of what it requires and when it fails should still be in every paper that uses it so that the paper has a self-contained explanation of when we can expect the method to work / fail.
On the assumption that you'll add this discussion to the paper, I've increased my score to a 7.
---
Rebuttal 2:
Title: Thanks for your feedback and updating the score to 7
Comment: Dear Reviewer PKrt,
Thanks for your recognition of our rebuttal efforts and kindly updating the score. We are very delighted to hear that all of your significant concerns have been resolved.
With best regards,
Authors of submission 9100 | Summary: This paper gives identifiability results in a new setting, along with an estimation method and experiments validating the estimation method.
There are existing identifiability results in the nonstationary setting if you assume that the auxiliary variables are observed. There are other existing identifiability results in the nonstationary setting if you assume that the auxiliary variables evolve as a Markov chain but the latents only affect the current timestep. This paper gives an identifiability result in the nonstationary setting where the auxiliary variables are *not* observed and the auxiliary variables evolve as a Markov chain but the latent variables can affect the observed variables across time steps.
The estimation method is an extension of Sequential VAEs, where there is an autoregressive hidden Markov model that models the nonstationarity, a prior network that estimates the prior using a conditional normalizing flow, and a VAE where the encoder fits the demixing function and the decoder fits the mixing function.
They perform experiments on two synthetic datasets that satisfy their identifiability conditions, a video dataset for a modified Cartpole environment, and a video dataset of mouse behavior. For the synthetic datasets, they compare MCC with seven baselines (BetaVAE, iVAE, TVL, SlowVAE, PCL, TDRL, and HMNLICA). For the video cartpole dataset, they compare MCC against one baseline (TDRL). For the mouse video dataset they don’t have ground-truth independent components so instead of numerical results they provide a visualization of the recovered components.
Strengths: Strengths:
- I am not especially up-to-date on identifiability literature, but as far as I am aware this is a novel problem setting and novel result.
- I agree that the “observed auxiliary variable” condition in many nonlinear ICA identifiability results is not realistic, and it is a useful direction to prove results that do not rely on this assumption.
- The choice of baselines is very thorough for the synthetic datasets.
- They use two video datasets to demonstrate their estimation method in a more realistic setting than synthetic data, which I appreciate since this is mostly a theory paper.
Weaknesses: Weaknesses:
- The writing is okay but not great. It gets the point across but it could use proofreading and better word choices in some places. Ex: “It is obvious that the causal relations among those latent variables are more meaningful and the identifiability is urgently needed.” <- this sentence could be improved.
- See questions below.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In the Cartpole experiments, why do you only provide Accuracy and MSE A metrics for your method and not for baseline methods? And can you provide evaluations for the other baselines for this experiment?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - As they mention in the conclusion section, they assume that the nonstationary variables follow a Markov chain, which is a somewhat restrictive assumption.
- No concerns about negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that you read our paper very carefully and your informative feedback, which has helped improve our paper. Below please see our response to your concerns.
>Q1: In the Cartpole experiments, why do you only provide Accuracy and MSE A metrics for your method and not for baseline methods? And can you provide evaluations for the other baselines for this experiment?
A1: Baselines either ignore the nonstationarity in the system (completely ignoring $c_t$) or require the domain indices ($c_t$) to be observed. In both scenarios, the baselines don't predict $c_t$ as an output, hence it doesn't make sense to evaluate the accuracy for $c_t$ and the transition matrix $\mathbf{A}$. It is worth mentioning that for HMNLICA, even though it estimates $c_t$, it doesn't consider the time-delayed relations among the latent variables $\mathbf{z}_t$; for real datasets such as CartPole, the algorithm doesn't converge during training, hence we dropped it in our experiments.
>The writing is okay but not great. It gets the point across but it could use proofreading and better word choices in some places. Ex: “It is obvious that the causal relations among those latent variables are more meaningful and the identifiability is urgently needed.” <- this sentence could be improved.
We thank the reviewer for pointing this out; we update the sentence as follows: ''Learning causal relations has practical use cases, which benefit many downstream tasks.'' We will also proofread the paper and improve the writing in the final version. If you have any further suggestions, please let us know in the discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your valuable comments on our paper. Should you require more information or have further concerns, please do not hesitate to let us know. We are more than willing to engage in further discussions, provide additional information, or clarify our methodologies. Your insights are instrumental in improving our work, and we sincerely value your expertise and time. We look forward to any further comments you may have.
---
Rebuttal Comment 1.2:
Title: Thanks
Comment: Thank you for this feedback authors. This will be taken into account. | Summary: This paper presents identifiability results that handle nostationary time-delayed causally-related processes without auxiliary variables. In addition, the authors propose NTDC, which is a neural network that is based on their identifiability results. The method is evaluated on a few tasks in comparison to a few baselines, showing significance of toy tasks generated by the authors.
Strengths: This paper introduces identifiability results on sequential information which is causally-related without additional variables. Thus, this paper extends prior work from a theoretic viewpoint. Another strength of the paper is the new deep framework that incorporates the theoretical results.
Weaknesses: One of the main weaknesses of the paper is its presentation. In particular, a lot of jargon is used without properly introducing the terms. For instance, domain indices are used throughout the paper without ever defining what it means. Similarly, Sec. 4.1 is missing a lot of details (what is ARHMM? what is the prior network?). Sec. 4.2 seems to be unfinished in terms of writing. Missing details and descriptions make it hard to properly evaluate the proposed NTDC and position it with respect to prior work.
Another main shortcoming of the work is the lack of discussion on related work for sequential disentanglement. Only paper [20] is mentioned (extremely) briefly. However, there are already at least ten papers or more since [20] which significantly improved the state-of-the-art results, the neural network models, the evaluation and theory.
Another shortcoming of the paper is the discussion in the introduction. In particular, the authors argue their mouse example might be challenging for existing work. In addition, they argue to be the first to consider the scenario of Fig. 1c. However, in a recent ICLR'23 paper with the title "Multifactor Sequential Disentanglement via Structured Koopman Autoencoders" (SKD) by Berman et al., the authors show a framework that considers a similar model to Fig. 1c, and it is able to handle data such as the mouse example. The authors should compare their work with SKD both qualitatively and quantitatively. Similarly, the work "Contrastively Disentangled Sequential Variational Autoencoder" (C-DSVAE) by Bai et al. can also deal with a similar scenario and should be compared with.
Another shortcoming of the paper is its evaluation. In particular, the proposed method is compared with several baselines only on toy datasets that the authors create. No sequential disentanglement baselines are considered. On real world data, the authors only compare with a single method.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and valuable feedback. We respond to your concerns point-by-point below.
> Q1.1: One of the main weaknesses of the paper is its presentation. In particular, a lot of jargon is used without properly introducing the terms. For instance, domain indices are used throughout the paper without ever defining what it means.
A domain index is a discrete variable that denotes a distinct domain. In our model it is $c_t$, which is introduced in Sec 2.1, Eq. (2).
We will add more explanation of the technical terms, and if any further issues remain unclear, please kindly let us know. We are more than happy to provide more details during the discussion.
> Q1.2: Similarly, Sec. 4.1 is missing a lot of details (what is ARHMM? what is the prior network?).
We thank the reviewer for raising this question. ARHMM refers to the Autoregressive Hidden Markov Module (mentioned in lines 193-194), a standard abbreviation. This module models the nonstationarity: it takes the observed data $\mathbf{x}_t$ as input and outputs the estimated domain indices $c_t$. The Prior Network (introduced in lines 202-204) is the module that learns the transition priors; these priors are obtained by first learning inverse transition functions $f_z^{-1}$ that take the estimated latent variables and output random noise terms, and then applying the change-of-variables formula to the transformation
$p(\hat{z}\_{t}\vert \hat{\mathbf{z}}\_{\text{Hx}}, c_t) = p_{\epsilon_c}\left(\hat{f}\_{z}^{-1}(\hat{z}\_{t}, \hat{\mathbf{z}}\_{\text{Hx}}, \hat{\boldsymbol{\theta}}\_{c_t})\right)\Big|\frac{\partial \hat{f}_z^{-1}}{\partial \hat{z}\_{t}}\Big|$.
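To make the change-of-variables evaluation above concrete, here is a minimal, self-contained sketch (this is not the authors' implementation; the 1-D affine inverse transition $f_z^{-1}(z) = (z - \mu)/\sigma$ with standard-normal noise is a hypothetical example used only to illustrate the formula):

```python
import math

def log_standard_normal(eps):
    # Log-density of a standard-normal noise term epsilon.
    return -0.5 * (eps ** 2) - 0.5 * math.log(2 * math.pi)

def log_prior_change_of_vars(z, mu, sigma):
    # Hypothetical 1-D inverse transition f_inv(z) = (z - mu) / sigma mapping the
    # latent back to standard-normal noise; the log-prior is the noise log-density
    # plus the log absolute Jacobian of f_inv (here d f_inv / d z = 1 / sigma).
    eps = (z - mu) / sigma
    log_det_jacobian = -math.log(sigma)
    return log_standard_normal(eps) + log_det_jacobian

# Sanity check against the closed-form N(mu, sigma^2) log-density.
z, mu, sigma = 0.3, 0.1, 2.0
closed_form = (-0.5 * ((z - mu) / sigma) ** 2
               - math.log(sigma) - 0.5 * math.log(2 * math.pi))
assert abs(log_prior_change_of_vars(z, mu, sigma) - closed_form) < 1e-12
```

In the actual model the inverse transition is a learned neural network conditioned on the history and $c_t$, but the log-prior decomposes into the same two terms.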
The Encoder-Decoder is a standard module in Variational Auto-Encoder-based methods; it projects $\mathbf{x}_t$ to the latent space $\mathbf{z}_t$ while preserving the ability to reconstruct $\mathbf{x}_t$.
Please let us know if the information above makes things easier to understand; we are also happy to add more detailed explanations if you have further questions.
> Q1.3: Sec. 4.2 seems to be unfinished in terms of writing. Missing details and descriptions make it hard to properly evaluate the proposed NTDC and position it with respect to prior work.
We apologize for accidentally commenting out part of the last sentence in Sec 4.2 in the submitted version. The complete sentence reads as follows: ''Specifically, we obtain the log-likelihood of the posterior, evaluate the prior
$\log p\left(\hat{\mathbf{z}}_t \vert \hat{\mathbf{z}}\_{\text{Hx}}, c_t\right)$ in Eq. (9), and compute their mean difference in the dataset as the KL loss:
$\mathcal{L}\_{\text{KLD}} = \mathbb{E}_{\mathbf{\hat z}_t \sim q\left(\mathbf{\hat z}_t \vert \mathbf{x}_t\right)} \left[ \log q(\mathbf{\hat z}_t|\mathbf{x}_t) - \log p\left(\hat{\mathbf{z}}_t \vert \hat{\mathbf{z}}\_{\text{Hx}}, c_t\right) \right]$.''
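The KL loss above is an expectation of a log-density difference under the posterior, which in practice is estimated by averaging over samples. A minimal Monte Carlo sketch, with 1-D Gaussians standing in for the posterior and the learned prior (illustrative only, not the authors' code):

```python
import math
import random

def log_normal(x, mu, sigma):
    # Log-density of a univariate Gaussian N(mu, sigma^2).
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def mc_kl(mu_q, sigma_q, mu_p, sigma_p, n=50_000, seed=0):
    # Monte Carlo estimate of KL(q || p): sample z ~ q and average
    # log q(z) - log p(z), mirroring how the KL loss is averaged over a dataset.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu_q, sigma_q)
        total += log_normal(z, mu_q, sigma_q) - log_normal(z, mu_p, sigma_p)
    return total / n

# KL(N(0,1) || N(1,1)) is 0.5 analytically; the estimate should be close.
estimate = mc_kl(0.0, 1.0, 1.0, 1.0)
assert abs(estimate - 0.5) < 0.05
```

In the model itself, $q$ is the encoder's posterior and $p$ is the change-of-variables prior from Eq. (9), but the estimator has the same shape.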
> Q2, Q3: ... lack of discussion on related work for sequential disentanglement ... However, in a recent ICLR'23 paper with the title "Multifactor Sequential Disentanglement via Structured Koopman Autoencoders" (SKD) by Berman et al. the authors show a framework that considers a similar model to Fig. 1c, and it is able to handle data such as the mouse example. The authors should compare their work with SKD both qualitatively and quantitatively. Similarly, the work "Contrastively Disentangled Sequential Variational Audoencoder" (C-DSVAE) by Bai et al. can also deal with a similar scenario and should be compared with.
We thank the reviewer for pointing this out and for providing references to related work. We will definitely add these topics to the related work section. As for the relation between ''Sequential Disentanglement'' and our setting, we kindly note that we investigate disentanglement from a causal lens, which means we care about recovering the ground truth of the data-generating process. In other words, there are multiple ways to disentangle the variables and obtain well-disentangled representations, but only a very small subset of them actually follow the ground truth, i.e., achieve identifiability. The main contribution of this work is therefore slightly different from just finding a disentangled representation under a nonstationary setting. Instead, we care more about whether we can recover the ground truth, for which we provide identifiability results; hence the Encoder-Decoder module can adopt any current state-of-the-art method such as SKD, and this choice is orthogonal to our contribution.
Despite the differences mentioned above, we additionally compared with SKD on the CartPole dataset and report the MCC in the table below. Note that, as also mentioned in the SKD paper, ''C-DSVAE'' uses slightly more supervision through data augmentation, which requires more knowledge than purely unsupervised learning; for a fair comparison we only conducted our additional experiment with SKD. As shown in the table below, SKD's MCC is better than that of a variety of baselines. However, we can see the distinction between well-disentangled models and identifiable models: only models with identifiability can find the ground-truth latent variables with a theoretical guarantee.
| Method | MCC |
| :-: | :-: |
| BetaVAE | 57.54 |
| i-VAE | 60.14 |
| TCL | 65.07 |
| SlowVAE | 63.16 |
| SKD | 73.24 |
| NTDC | 96.06 |
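For context on the table: MCC (mean correlation coefficient) measures how well the learned latents match the ground truth up to permutation and component-wise sign/scale. A minimal sketch of such a score, using brute-force permutation matching on hypothetical toy data (not the evaluation code used for these experiments):

```python
import itertools
import math

def pearson(a, b):
    # Pearson correlation between two equal-length sample lists.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def mcc(true_latents, learned_latents):
    # Each argument is a list of latent dimensions, each a list of samples.
    # Score = best mean absolute correlation over permutations of dimensions
    # (brute force is fine for the small latent sizes in these experiments).
    n = len(true_latents)
    best = 0.0
    for perm in itertools.permutations(range(n)):
        score = sum(abs(pearson(true_latents[i], learned_latents[perm[i]]))
                    for i in range(n)) / n
        best = max(best, score)
    return best

# Toy check: a permuted, sign-flipped copy of the truth scores MCC = 1.
z1 = [0.1, 0.5, -0.3, 0.9, -1.2]
z2 = [1.0, -0.4, 0.2, 0.7, 0.0]
learned = [[-x for x in z2], z1]  # permutation + component-wise flip
assert abs(mcc([z1, z2], learned) - 1.0) < 1e-9
```

Practical implementations typically use a correlation matrix plus an assignment solver rather than brute force, but the score being computed is the same.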
> Q4: Another shortcoming of the paper is its evaluation. In particular, the proposed method is compared with several baselines only on toy datasets that the authors create. No sequential disentanglement baselines are considered. On real world data, the authors only compare with a single method.
We additionally compared the baseline methods together with SKD on the CartPole dataset and show the result in the previous question.
Please let us know if you have further concerns and we are happy to provide more detailed information in the discussion phase.
---
Rebuttal Comment 1.1:
Comment: Thank you.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your insightful comments and invaluable advice, which helped us improve the paper's quality and clarity. We are committed to demonstrating the merits of our paper and would be pleased to engage further with you. Should there be a need for any additional discussion or clarification that may enhance the paper's value, please don't hesitate to let us know. We appreciate your comments and advice.
---
Reply to Comment 1.1.2:
Title: Would you like to consider updating your recommendation?
Comment: Dear Reviewer 5DXD
Thank you for your prompt feedback. Would you like to consider updating your recommendation, if your concerns are properly addressed?
With best regards,
Authors of submission 9100 | Summary: The study focuses on unsupervised representation learning for sequential data with time-delayed causal influences. Identifiability results for disentangling causally-related latent variables have been established in stationary settings using temporal structure. However, existing work only partially addresses this in nonstationary settings by using observed auxiliary variables or introducing the Markov assumption, but not both simultaneously. The authors introduce NTDC, to reconstruct time-delayed latent causal variables and identify their relations without the need for auxiliary variables, by utilizing deep generative models. The experimental results demonstrate that the proposed methodology outperforms existing baselines by effectively exploiting nonstationarity and distinguishing distribution shifts.
Strengths: - The paper is clearly written.
- The authors tackle the setting of previous works and propose a previously non-existing one, which is described in Figure 1.
- The theoretical grounds are sufficiently provided with proper theorems.
Weaknesses: - There are some grammatical errors in the main paper.
- There is no explanation of the "disentangled" which is included both in the proposed model name and the title of the manuscript.
- The authors provide their code anonymously, but there are limited explanations of which parts of the code are directly linked to the proposed NTDC and of how one can run the code.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
- Could you provide more motivational examples of the model structure in Figure 1(d) other than the mouse movement example?
- Are there any derivations with probabilistic graphical model settings of Figure 1(d)? I wonder whether the problem could be solved in a statistical manner rather than just utilizing DGMs.
- I guess the model structure can be viewed as a doubly-hidden Markov model, or it could be more generalized as a multi-layer hidden Markov model. Could proposed theorems, as well as the proposed NTDC, be generalized in such settings?
- What's the specific meaning of "disentangled" representation in the title as well as the model name? And how the disentangled representation is achieved?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - There are no potential negative societal impacts of their work.
- Even though the authors provided a new work, my concern is that it lacks significance considering the NeurIPS standard.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and valuable feedback. We respond to your concerns point-by-point below.
> Q1: Could you provide more motivational examples of the model structure in Figure 1(d) other than the mouse movement example?
Here are some examples with explanation:
1. For general time series data such as stock market data, the underlying dynamics (the map $\mathbf{z}\_{t-1} \rightarrow \mathbf{z}\_{t}$) change dramatically across time. Such a nonstationary process can be modeled by Figure 1(d) by choosing different $c_t$.
2. Videos have similar properties. $c_t$ represents the events in the video. Within the same event, the changing dynamics stay the same, and across different events, the changing dynamics are different.
> Q2: Are there any derivation with probabilistic graphical model settings of Figure 1(d)? I wonder whether the problem could be solved in a statistical manner rather than just utilizing DGMs.
Thanks for the question; we are a little confused by the terminology here. Do you mean PGMs instead of DGMs? From our understanding, using a PGM does not conflict with solving the problem in a statistical manner. The PGM simply shows the underlying dependence relations among the variables, and the problem is ultimately solved in a statistical manner with the help of the PGM. Please let us know if we have misinterpreted this question; we are very happy to provide more explanation during the discussion.
> Q3: I guess the model structure can be viewed as a doubly-hidden Markov model, or it could be more generalized as a multi-layer hidden Markov model. Could proposed theorems, as well as the proposed NTDC, be generalized in such settings?
Thanks for your good observation and question. Rather than a doubly-hidden Markov model, our model is more similar to a nonstationary state-space model with an HMM component on top of the hidden variables $\mathbf{z}\_t$. As for the generalization, we believe the answer is yes: intuitively, by modeling the nonstationarity in the process, we can gradually identify the variables layer by layer. However, this would clearly involve many nontrivial technical details of which we are not completely sure, and we will continue working on this in future work.
> Q4: What's the specific meaning of "disentangled" representation in the title as well as the model name?
While there is no single formalized notion of disentanglement which is widely accepted, the key intuition is that a disentangled representation should separate the distinct, informative factors of variations in the data. We approach the disentanglement from the nonlinear Independent Component Analysis (ICA) point of view. In our definition, "disentangled representation" for time series data means the following things:
1. (Temporally conditional independence) Each dimension of the learned $\mathbf{z}_t$ (i.e. $z\_{t,1}, z\_{t,2} \dots z\_{t,n}$) is conditionally independent given the history $\mathbf{z}\_{\text{Hx}}$.
2. (Identifiability) The learned representation $\mathbf{z}_t$ is consistent with the ground-truth $\mathbf{z}_t$ in the data-generating process (up to component-wise transformation and permutation).
The field of ''disentangled representation learning'' has been extensively studied in the past few years, but as pointed out by [1], existing works rely heavily on inductive bias and ''well-disentangled models seemingly cannot be identified without supervision''. That is to say, even though existing models can disentangle different factors in an intuitive way, the factors/variables that a model discovers can be far from the ground truth, which is not favorable. Finding such ground truth (identifiability) is a further important yet challenging task. In the nonlinear ICA literature, researchers established identifiability results for finding the independent components of the data-generating process, yielding well-disentangled models with theoretical guarantees.
> Q5: And how the disentangled representation is achieved?
NTDC achieves a disentangled representation by performing nonlinear ICA on nonstationary time series data. Specifically, NTDC first learns the nonstationarity of the process and estimates the domain indices $c_t$, then leverages the learned $c_t$ to find the independent components $\mathbf{z}_t$. The identifiability results guarantee that (1) each dimension of the learned $\mathbf{z}_t$ is conditionally independent, which enforces the disentanglement, and (2) the learned $\mathbf{z}_t$ is equivalent to the ground truth up to permutation and component-wise transformation, meaning that NTDC can truly recover the ground truth.
> There are some grammatical errors in the main paper.
We thank the reviewer for pointing this out; we have proofread the manuscript and will update it in the final version.
> The authors provide their code with anonymity, but there is limited explanations of the codes that which is directly linked to the proposed NTDC and how one can run the code.
We thank the reviewer for raising this issue. The training Python scripts `train_{exp name}.py` are in the root directory of the repository, and our proposed NTDC method is implemented under the name `hmmxtdrl`. A sample run command for the simulation data is: `python train_simulation.py -c configs/simulation/simulation_hmmxtdrl.yaml`, and sample config files are provided in the `configs` folder.
Please let us know if you have further concerns and we are happy to provide more detailed information in the discussion phase.
# References
[1] Locatello, Francesco, et al. ''Challenging common assumptions in the unsupervised learning of disentangled representations.'' International Conference on Machine Learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you once again for your thoughtful comments on our paper. Should there be any further questions or concerns, please let us know and we stand ready and eager to address them. We highly value your insights and would be more than pleased to provide any additional information or clarification you may require.
---
Rebuttal Comment 1.2:
Title: Thanks
Comment: Thank you for this feedback authors. This will be taken into account.
---
Rebuttal Comment 1.3:
Title: increase score from 4 to 4.5
Comment: Thank you for your clarification, especially for Q3, Q4, and Q5. I've read the authors' feedback, as well as the other reviewers' reviews and the corresponding rebuttals. I've increased my score to 4.5, which is a definite borderline. In the meantime, I've left the rating at 4 since there is no 4.5 option.
---
Reply to Comment 1.3.1:
Title: Thanks and please let us know if you have further concern
Comment: Dear Reviewer t3u7,
Thanks for acknowledging our clarification during the rebuttal; we are delighted that your concerns have been resolved. Do you have any further questions or concerns that prevent you from recommending acceptance of our paper? Please let us know, and we are eager to clarify any of your concerns.
With best regards,
Authors of submission 9100
---
Rebuttal 2:
Title: Possible to provide your feedback soon so we can reply?
Comment: Dear Reviewer t3u7,
Thanks for your time and comments! Hope we are not bothering you, but we are looking forward to seeing whether our response and revision properly address your concerns and whether you have any further concerns, to which we hope for the opportunity to respond.
We hope you will consider this work as an essential step towards unsupervised deep learning and temporal disentanglement, especially in the scenario in which the nonstationarity or domain information is unknown.
With best regards,
Authors of submission 9100 | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Nonparametric Identifiability of Causal Representations from Unknown Interventions | Accept (poster) | Summary: The paper discusses the task of identifying causal variables from high-dimensional observations under non-parametric mixing functions and causal mechanisms. This is done under the assumption of single-node, perfect interventions being available for all causal variables, as well as distinct paired perfect interventions in the case of having more than two causal variables. The paper proves that causal variables are identifiable under this setup, under additional assumptions that the interventions are sufficiently different from the observational distribution. Thereby, weaker assumptions are possible for 2 variables than for more variables. Finally, the paper sketches possible implementations of learning algorithms for this setting.
Strengths: The paper is overall well written, even if it is aimed at researchers in identifiability and/or causal representation learning (CRL) specifically. A consistent notation is used throughout the paper, and all assumptions are clearly stated before the theorems. It is appreciated that proof sketches have been included in the main paper to support the claimed theorems and make the main paper a bit more standalone. The paper discusses all necessary related work and puts itself into context of the current field of research.
The main contribution of the paper is its theoretical result. It extends the domain of identifiable causal representation by considering yet another setup, where environment pairs with single-node, perfect interventions are given. The benefit of this setup is that it does not require counterfactual observations, while supporting a large function class despite taking needed assumptions on the interventions. The proofs for supporting the claimed theorems are given in the appendix, following common proof strategies in CRL. The proofs appear sound and intuitive, although a very careful check of the proofs was not possible during the review period. Overall, it is a good contribution to the theoretical identifiability in CRL.
Weaknesses: While the derived theory in the paper puts weaker constraints on the mechanisms of the causal variables and the mixing function, its assumption of having access to single-node, perfect interventions on all causal variables is restrictive. Being able to perform an intervention on a variable is already commonly considered expensive or often not easily feasible, especially if it is a perfect, single-node intervention. However, doing this twice, and even differently between the two setups, is challenging. Further, obtaining such a dataset requires non-trivial prior knowledge of the causal system, since it necessitates the ability to perform such single-node, perfect interventions on causal variables that are yet to be identified. The paper fails to give real-world examples motivating the setup and its assumptions, which puts it in a more limited spot.
Besides the theoretical results, it is also important to validate the setup and the practicality of the theory in empirical studies. The paper only sketches some potential ideas, where all unknown parts are learned. However, optimizing the latent encoder, the causal graph, and the intervention targets all at the same time is not trivial as shown in previous works. Further, the appendix shows some limited results on a generative model, where one would need to iterate over all possible causal graphs and intervention targets. Still, this is not practical for systems larger than very few causal variables or high-dimensional observations.
The paper states that the intervention targets are not known. However, under the identifiability up to permutation, the intervention targets in this setup appear to be known. Specifically, assumption (A2') states that there exist $n$ environment pairs, where each pair intervenes on a different causal variable. Thus, the intervention targets for these pairs, as stated in the assumption, are known as $\pi(i)$. Since the variables cannot be identified up to permutation $\pi$ anyway, permuting the causal variables and thus the targets are still considered to be the same targets in the same identifiability class, e.g. as in the works cited for known intervention targets [69, 70]. Thus, the claim of unknown intervention targets appears not valid given the assumptions, or the assumptions should be clarified to e.g. have at least $n$/$n+1$ environments.
### Typos:
- Table 1: 'Causal Representation Learning'
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Review summary
The theoretical results of the paper provide a new setting under which causal variables are identifiable in CRL. However, the paper is limited by its strong reliance on single-node, perfect interventions and very limited empirical study. I consider the theoretical results outweighing the drawbacks a bit, although the paper would strongly benefit from empirical validation of the setup. Thus, my recommendation is 'Weak Accept'.
### Questions
- What is a real-world scenario in which the setup of pairs of single-node, perfect interventions is practical and common?
- Do you require the knowledge of the intervention targets up to permutation, or do you allow for more environments/environment pairs as long as each variable has been intervened upon once?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been discussed in different parts of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our work. We address your questions and comments below.
___
> Restrictiveness of the assumption of access to all single-node perfect interventions.
While we agree that this is a strong assumption, it has previously been shown to be necessary even for more restricted settings. For example, Squires et al. [111] showed the necessity of access to all single node perfect interventions even in the fully linear interventional case, and Brehmer et al. [18] did so for a counterfactual, multi-view nonparametric setting. In this sense, it is unsurprising that at least as strong an assumption is needed in the nonparametric interventional case (and interesting that the same can be sufficient).
___
> Limited empirical study and the need to iterate over all possible causal graphs
As described in more detail in our general response to all reviewers, we now provided additional empirical results for a nonparametric setting with three variables. In particular, enumeration of causal graphs is avoided in this setting. Our main focus was on new theoretical results and validations thereof, and we believe that our new and existing experimental results are helpful to this end. That being said, we completely agree that “optimizing the latent encoder, the causal graph, and the intervention targets all at the same time is not trivial” and that further work is needed to make CRL methods more practical.
___
> Clarification regarding assumption (A2’) and (un)known intervention targets.
This is a subtle point that may not have been sufficiently clear in the initial manuscript. For the setting of Thm 4.2, assumption (A2’) states that (i) datasets come in pairs; (ii) we know that, within each pair, the same variable was intervened upon; (iii) we do not know which variable was intervened upon in each pair, only that each variable was intervened upon (at least) in one pair. Importantly, note that $\pi$ in (A2’) is defined as any permutation of $[n]$ and is not restricted to the subset of permutations that are isomorphisms of the unknown true graph (which is the partial permutation ambiguity of the CRL equivalence class). The conclusion that this is the same as knowing the intervention targets is therefore not supported. We will make the above more clear in the revised manuscript.
We are not quite sure what exactly is meant by “the assumptions should be clarified to e.g. have at least n/n+1 environments” and would welcome further clarification if this remains an open issue after our response.
___
> Real world examples
We are not experts in this domain (so the following is to be taken with a grain of salt), but we try to sketch a possible scenario and how it relates to our assumptions.
Consider genetic experiments, where tools like CRISPR-Cas9 may permit one to perform targeted and (near) perfect interventions on individual genes. The observations in this case could correspond to some downstream effect that is influenced by gene activity, such as counts of different proteins. One might target different genes (for multiple cells) and thus collect multiple environments or datasets. Moreover, different types or concentrations of CRISPR-Cas9 might give rise to (two or more) interventional environments affecting the same target. It may be natural to think of this setting as one with known intervention targets. However, if we consider a less well-studied genome of another species, the exact places where certain DNA or RNA sequences are found may not be known a priori, corresponding to unknown intervention targets.
We will happily add the above example, or other suggestions the reviewer may have, to the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your response and clarifications.
> Clarification regarding assumption (A2’) and (un)known intervention targets.
I generally agree with you that your case indeed does not require knowing the intervention targets. My comment was mostly regarding the assumption A2, which currently suggests that this would be the case. By replacing the phrase of "*there exist $n+1$ environments...*" with "*there exist at least $n+1$ environments...*" (line 240), this could be clarified. This is because if you have exactly $n+1$ environments, the intervention targets can be inferred (under some definitions of previous works), and if you have more than $n+1$, the intervention targets cannot be inferred anymore. If we take the example of three variables given in the general response, having four datasets, with the first being the observational one, would implicitly give you the intervention targets by numbering the dataset pairs in any arbitrary order. No restriction to isomorphisms of the unknown graph is needed, if one learns an arbitrary graph. If you have more than $n+1$ datasets, this is not possible anymore.
> Empirical results
It is good to see additional results. Still, the setting and usability of the method is significantly limited given the iteration over intervention targets. This requires training up to $n!$ models, a number that can quickly get out of hand for $n>3$ and slightly more expensive datasets to train on. Further, given the small differences and high standard deviations (e.g. the difference between 132 and 231 despite inverting the graph), it is unlikely that the empirical method is applicable to challenging datasets at the moment.
Thus, the empirical part remains a considerable weakness of the paper. Nonetheless, as mentioned in my original review, the strengths of the theory outweigh this, in particular when the paper is improved by the suggestions of the reviewers.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: Thank you for the clarification, continued engagement, and insightful remarks.
_____
**Known vs Unknown Targets.** What exactly constitutes known vs unknown intervention targets appears to be a less clearly defined concept in CRL than in the fully observed case.
In principle, we agree that *if all possible causal graphs are considered* and exactly $n$ interventional environments with distinct targets are provided, then one can indeed simply call variable $V_k$ the one intervened upon in environment $k$ for $k=1, …, n$, and consider the intervention targets *known in this sense*.
In our work, we instead consider a setup in which the causal ordering is fixed to $V_1 \preceq V_2 \preceq … \preceq V_n$ and only graphs consistent with this ordering are considered (see l.167 ff.). This formulation is motivated by starting from the generative process and its fundamental ambiguities (e.g., ordering of the nodes), comes w.l.o.g., and was also adopted in some recent works, see, e.g., Squires et al. [111, Remark 1]. The intervention targets are then considered *unknown w.r.t. this pre-imposed causal ordering*. Our current proofs show that the intervention targets can then be identified from exactly $n$ environments (possibly up to irresolvable partial re-ordering), even if they are not known (w.r.t. the pre-imposed causal order) a priori.
______
**More than $n$ environments.** Regarding the “at least” formulation of allowing $m>n$ interventional environments, this would indeed strengthen the results and break the suggested strategy of assigning target $k$ to interventional environment $k$. In this case, we do *not* know a suitable subset of $n$ environments which contains exactly one intervention for each node. It therefore first needs to be shown that such a subset of environments can be identified.
Suppose for a contradiction that we select a subset of $n$ interventional environments which are assumed to correspond to distinct targets in the model $Q$ whereas this is not the case for the ground truth $P$ (i.e., there are actually duplicate and missing interventions).
- For the setting of Thm. 4.3 with paired interventions, we can show that this is not possible: Suppose that there are two pairs of environments $(e_a, e_a’)$ and $(e_b, e_b’)$ corresponding to interventions on $V_i$ in $P$, but which are modelled as interventions on distinct nodes $Z_j$ and $Z_k$ in $Q$. Similar to the proof sketch in l.301-303, it can be shown that $V_i$ must then simultaneously be a deterministic function of only $Z_j$ and only $Z_k$. This implies that $\partial \psi_i / \partial z_l =0$ for all $l$ which contradicts invertibility of $\psi$. Hence, only valid subsets will not lead to a contradiction. We can thus allow for any number $m\geq n$ of paired environments (“completely unknown targets”) for Thm. 4.3, as long as there is at least one paired intervention for each node. We will adjust assumption (A2’) to reflect this generalization, and will add a more detailed version of the above argument to the proof.
- For the setting of Thm. 4.1 with single interventions, finding a contradiction unfortunately seems more difficult. In short, we can rule out duplicate interventions on root nodes, but for $V_1 \to V_2$ we were not yet able to find a contradiction to selecting a subset of environments corresponding to two interventions on $V_2$. We will continue to investigate this matter, but remark that prior results in simpler parametric settings (e.g., Squires et al. [111], Varici et al. [116]) also require access to a set of exactly $n$ interventional environments, one for each node.
We will add a summary paragraph about the subtleties of known vs. unknown intervention targets in CRL to the discussion. | Summary: The paper studies the problem of inferring causal relationships between $n$ latent variables through observations under a mixing function. Given data $X$ from multiple environments (each of which corresponds to an unknown perfect atomic intervention), where $X$ is the observation of the latents under a fixed mixing function $f$, the goal is to recover $f^{-1}$ and the causal graph $G$ on the latent variables (up to $\sim_{CRL}$ equivalence).
Strengths: The problem is well-motivated and interesting. The authors did a good job explaining how this paper differs from prior work while providing a pretty good literature review. Some experiments are also given in Appendix D.
Weaknesses: While I am not an expert in the area and did not check all the proofs in detail, I do not see any glaring weaknesses. The theorem statements and proof sketches seem believable, especially since there are a lot of assumptions that were made to "make things go through". My biggest gripe is the lack of discussion about the assumptions (see Questions section).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Line 175:
Maybe write what CRL stands for somewhere (possibly in the footnotes)?
Assumptions:
There are a lot of assumptions (which is okay, if they are well-justified and discussed). Can you explain or discuss why each of them is necessary or reasonable to have (without trivializing the problem)? I understand that you believe "pairs of environments" is not necessary in general, but what about the other assumptions? What happens if all but one is satisfied? What goes wrong? I am happy to further increase my score if this is sufficiently addressed and if the other reviewers did not raise any damning issues that I missed.
Table 1 caption:
Typo: "Reresentation Learning"
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Nil.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our work. We will fix the typo and add an explanation for the CRL abbreviation. We answer your main question below.
> There are a lot of assumptions (which is okay, if they are well-justified and discussed). Can you explain or discuss why each of them is necessary or reasonable to have (without trivializing the problem)?
We summarize below the rationale behind each assumption, most of which are also highlighted in our proof sketches.
- Asm. 3.2 is required to rule out degenerate cases (cancellation along different paths) in which variables are (conditionally) independent despite being causally related. It is a standard assumption in classical causal discovery, and therefore also needed in CRL to recover the causal graph.
- Asm. 3.3 is required to know how many latent variables we are looking for. It is a standard assumption in identifiable representation learning (that is often made implicitly). However, as discussed in footnote 8 and our response to reviewer `U76e`, it may be dropped when suitable techniques for estimating the intrinsic dimensionality of $\mathcal{X}$ can be employed.
- Asm. 3.5 is needed for the mapping between latents and observations to be invertible in the first place. Without it, full recovery of the causal variables (up to CRL equivalence) is infeasible. This assumption is also standard for the simpler problem of nonlinear ICA.
- Asm. 3.9 is a characterisation of our generative setup. Sharing of some mechanisms and the mixing function is needed for the multi-environment setting to provide useful additional information: if everything may change across environments, the datasets can only be analysed in isolation, running into the non-identifiability of CRL from iid data.
- Asm. 3.10 and (A2) / (A2’) are needed since with imperfect interventions or interventions not on all nodes, identifiability is not achievable even in the linear setting as shown by Squires et al. [104].
- Asm. (A1) is a technical assumption needed for our analysis. It is not strictly necessary (see, e.g., footnote 9 on p. 21 in Appx. C.3) but substantially eases the readability and accessibility of the proof, without a major impact on the main causal aspects of the problem setup.
- Asm. (A3) / (A3’) is needed to avoid spurious solutions based on applying a measure preserving transformation on a part of the domain unaffected by the intervention, see also l. 318-320.
- Asm. (A4) is needed to rule out fine-tunings of the ground-truth generating process that are possible due to the fully non-parametric nature of the setup, see also Remark 4.2 and the following paragraph.
We hope that this sufficiently addresses your question. If so, we kindly ask whether you would consider increasing your score, also considering the lack of “any damning issues raised by other reviewers”.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. Please add a version of the above discussions about the assumptions in the revision.
I have increased my score :) | Summary: This paper proposed to identify the latent causal representations and their underlying causal structure, which is a very challenging and interesting problem. The \sim_{CRL} is introduced to describe the equivalent class up to elementwise operations and permutation, which is sufficiently meaningful for practical use. The CRL-identifiability theory is given under the data from paired interventional data and other assumptions, such as the pre-given number of nodes and others. In appendix, the authors presented a simple version of learning method and validates it on a synthetic dataset.
Strengths: In general, I found this paper to be highly enjoyable and insightful. It successfully addresses a challenging and captivating problem of extracting causal representations and their relationships. Given the increasing prevalence of unstructured data, such an endeavor holds significant importance. The authors have provided a comprehensive overview and engaging discussions that effectively highlight the unique contributions of their work in relation to existing literature. Moreover, the use of paired interventional data, which is more readily obtainable in practical scenarios, adds to the paper's practical relevance. Besides, the organization and writing of this paper are commendable.
Weaknesses: 1. I recommend that the authors provide practical demonstrations of the proposed method in real-world scenarios. While acquiring paired interventional data can be challenging in real-world settings, the authors could consider utilizing datasets generated from virtual environments, such as the causal world (https://sites.google.com/view/causal-world/home), to showcase the utility of their approach.
2. In practical applications, determining the number of latent nodes, $n$, is often difficult. Consequently, verifying whether the paired intervention data cover all latent variables becomes challenging. This limitation may restrict the scope of application of the proposed theory and learning methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Intuitively, it appears that with paired interventional data, we can identify the latent representation with a unique permutation. For example, if we intervene to place a ball at position A on the table in e_i and at position B on the table in e'_i, we can infer that the position is the intervened variable through a simple comparison. Could you please provide a more in-depth explanation of the challenges encountered in theoretical analysis?
2. The requirement of the genericity condition is specified in Theorem 4.1, whereas it is not explicitly mentioned in Theorem 4.2, which is presented as a more general version of Theorem 4.1. Additionally, the assumption A_2' does not degenerate to A_2 when n=2; instead, it is stronger than extending A_2 to a general n. I would appreciate a more detailed clarification regarding this matter.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our work. We address your two main questions and other comments below.
### Main questions
> “Intuitively, it appears that with paired interventional data, we can identify the latent representation with a unique permutation. For example, if we intervene to place a ball at position A on the table in e_i and at position B on the table in e'_i, we can infer that the position is the intervened variable through a simple comparison. Could you please provide a more in-depth explanation of the challenges encountered in theoretical analysis?”
If we correctly interpret the question, there may be a misunderstanding as to what we mean by “pairs of environments” (“paired interventional data” in your comment). Unlike other works considering counterfactual, multi-view data (e.g., [17,110]), we do NOT observe pairs of observations $(x,x’)$ as in your example with the moving ball. Instead, we only have access to two datasets drawn from $P^1_X$ and $P^2_X$ which we know both arise from two distinct stochastic interventions on the same *unknown* latent variable. However, none of the images in the two datasets need to show the same ball (with only position differing): all other latent variables are resampled for each observation and will never agree exactly across datasets (except on a set of measure zero). At a high level, the main challenge of the analysis (compared to a counterfactual or multi-view setting) thus stems from the lack of correspondence across observations from different datasets: we may never see the same object more than once, and it may be non-trivial to infer what was intervened upon if multiple generative factors differ.
Moreover, in your example, we only observe the pixels but not directly the position variable; the mapping between latents and observations, as well as the latent distributions are unknown and may be arbitrarily complex due to the nonparametric nature of the problem. Another key challenge thus arises from the flexible nonparametric problem setting.
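To make this data regime concrete, consider the following hypothetical toy SCM (the variable names, mechanisms, and parameters are purely illustrative and not taken from the paper): two datasets arise from two distinct stochastic interventions on the same latent $V_1$, while $V_2$ is resampled for every observation, so there is no per-sample correspondence across environments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000

def sample_env(intervened_mean):
    # stochastic intervention on V1; V2 follows its (shared) mechanism V1 -> V2
    v1 = rng.normal(intervened_mean, 1.0, n_samples)
    v2 = rng.normal(0.5 * v1, 1.0)  # resampled for every observation
    return np.stack([v1, v2], axis=1)

e1 = sample_env(-2.0)  # environment e_i
e2 = sample_env(+2.0)  # environment e_i'
# rows of e1 and e2 are independent draws: no counterfactual pairing exists
assert e1.shape == e2.shape == (n_samples, 2)
```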
Please let us know if we misunderstood your question or require further clarification.
___
> The requirement of the genericity condition is specified in Theorem 4.1, whereas it is not explicitly mentioned in Theorem 4.2, which is presented as a more general version of Theorem 4.1. Additionally, the assumption A_2' does not degenerate to A_2 when n=2; instead, it is stronger than extending A_2 to a general n. I would appreciate a more detailed clarification regarding this matter.
The distinction between the two settings may not have been emphasised enough in the initial manuscript. Allow us to clarify: Thm. 4.2 is not simply a direct extension of Thm. 4.1 to $n>2$, but a different statement relying on a related but distinct set of assumptions. As you correctly note, assumption (A2’) in Thm. 4.2 is indeed stronger than and different from assumption (A2) in Thm. 4.1. We chose similar naming as both are assumptions about the available environments (but we are open to changing this to prevent confusion). Due to the stronger assumption (A2’), the genericity condition is indeed not needed for Thm. 4.2. We hope that this clarifies the matter and will highlight these subtleties in the revised manuscript.
___
### Other comments
> “practical demonstrations of the proposed method in real-world scenarios”
While we appreciate the suggestion and agree that this would be interesting, this was unfortunately out of the scope for the limited rebuttal period.
However, we refer to Appendix D and our general response to all reviewers for more details on our experiments (on synthetic data), parts of which we will move to the main text.
> “determining the number of latent nodes $n$”
We agree that this can be challenging, which is also why we explicitly included Asm. 3.3. We briefly touch upon practical methods for estimating the dimensionality $n$ of the observational manifold in footnote 8 in the last paragraph, which we will happily move to the main text.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I keep my original score. | Summary: This paper gives an identifiability result in a setting that is relevant to causal representation learning, where we wish to infer latent causal variables and their causal graph from high-dimensional observations. They work in a setting that is more general than prior work that relies on, for example, weak supervision, temporal structure, or known intervention targets. Their setting assumes that both the causal model and the mixing function are nonparametric, and the targets of the interventions are unknown.
Their identifiability results are up to trivial indeterminacies (permutations and element-wise diffeomorphisms) and identify both the causal graph and the mixing function. Their first theoretical result shows identifiability for two causal variables given one perfect stochastic intervention per node. Their second theoretical result shows identifiability for an arbitrary number of variables when there are two paired perfect stochastic interventions per node.
The main text of the paper does not have an experiment section.
Strengths: - This paper frames a problem setting for identifiability that is interesting to causal representation learning, which begins to bridge the gap from existing identifiability results to modern machine learning that occurs on high-dimensional observed data.
- They work in a highly general setting where both the causal model and the mixing function are nonparametric, and the targets of the interventions are unknown. In my opinion, the problem framing and the choice of this general setting are the primary contributions of this work even if the theoretical results have limitations.
- The paper is generally well-written and well-structured.
Weaknesses: - Their first theoretical result is in a setting with only two causal variables, where you have an observation distribution and one perfect intervention per node. This result would be much stronger in a setting with n>2, as the authors note in the conclusion.
- Their second theoretical result is in a setting with arbitrary number of variables, but requires two distinct perfect paired interventions. Requiring these pairs of interventions is not a terribly realistic assumption, even though they don't require the intervention targets to be known.
- There is no estimation method or experiment results. Other identifiability papers often contribute an estimation method (e.g. a VAE using a regularizer that encourages sparsity of a mixing function), perform disentanglement experiments in settings that match their theoretical assumptions, or perform ablations on synthetic data where they can control which of their theoretical assumptions are met in order to empirically study the necessity / sufficiency of their assumptions.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Does your theory suggest any empirical validation that you could add to this paper? See the last bullet point in "weaknesses" section for empirical approaches that could be relevant to this kind of theoretical contribution.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The conclusion section includes a thorough treatment of limitations of this work, which helps future work to extend these results. No concerns about negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our work. We address your questions and comments below.
> “The main text of the paper does not have an experiment section.” “There is no estimation method or experiment results. Other identifiability papers often contribute an estimation method [...], perform disentanglement experiments in settings that match their theoretical assumptions, or perform ablations on synthetic data where they can control which of their theoretical assumptions are met” “Does your theory suggest any empirical validation that you could add to this paper?”
As our main focus is on identifiability theory, we did not have space for including an experiment section in the main paper for the initial submission. However, we did, in fact, carry out an empirical validation similar to what you suggest. This was initially included in Appendix D (see Fig. 3 and its caption for the main points) and we will use the additional page to move some of this material to the main paper.
In short, we consider an estimation method that involves fitting multiple generative models based on normalizing flows with built-in causal structure for different choices of graphs and intervention targets. Consistent with our theoretical claims, our empirical results show that (i) the correct choice of graph and intervention targets are indeed identified as the ones that yield the best model fit in terms of held-out likelihood; and (ii) the true causal variables are recovered (up to rescaling) as supported by the high MCC values—the standard disentanglement/ICA metric to assess this.
In addition to what is already described in Appendix D, we performed some additional experiments during the rebuttal period, which are summarised in more detail in our general response to all reviewers.
We hope that adding a summary of these (new and old) experiments to the main paper satisfactorily addresses what appeared to be your main concern.
____
> Extension of the identifiability result from single interventions to $n>2$
We completely agree that this is desirable. As summarised in l.374-379, we do not see any fundamental reason why this should not be possible, but there are technical obstacles. Thus far, we were not able to find a simple characterisation of the set of genericity conditions required for n>2 variables, but we will continue to investigate this matter. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback and time in reviewing our work. All reviewers recommend acceptance, rating the soundness and contribution of our work as good and its presentation as good or excellent.
We are pleased to read that our paper is “well-motivated and interesting” (`E44d`), ”relevant to causal representation learning”, “more general than prior work” (`FsUF`), “highly enjoyable and insightful”, that it “successfully addresses a challenging and captivating problem”, and “holds significant importance” and “practical relevance” (`U76e`). Reviewers also highlighted that the paper is “well-written” (`FsUF`, `ZLAN`), with `U76e` stating “the organization and writing of this paper are commendable”; and well-situated in the relevant literature (`ZLAN`,`E44d`,`U76e`), “highlight[ing] the unique contributions of [our] work” (`U76e`).
A shared issue raised by reviewers concerns the lack of experiments (in the main paper). To address this, we will use the additional page to add an experiments section to the main paper in the revised version. There we will include a summary of our empirical validation using normalizing flows, which was presented in Appendix D during the initial submission. Moreover, we also conducted some additional experiments during the rebuttal phase, see below and the attached PDF for details and a results figure. We will also add this to the new experiments section.
____
### Summary of New Experiments
**Setting.** Our new experiments extend our initial experiments along several axes:
- more variables: instead of focusing on the setting of Thm. 4.1 with $n=2$, we consider a setting with 3 causal variables and graph $V_2 \leftarrow V_1 \to V_3$
- nonlinear, non-additive noise data-generating process: we use location-scale models instead of linear Gaussian ones for the ground-truth SCMs
- nonparametric causal model: instead of fitting a set of linear parametric conditionals, we use a second normalizing flow (from exogenous variables U to causal variables V) as a nonparametric function approximator of the causal relations
- no enumeration of causal graphs: instead of training a separate generative model for each graph, we only specify the causal ordering (the natural ordering on [n]) and enforce the flow from U to V to have triangular Jacobian, consistent with the causal ordering and acyclic structure; this only leaves an enumeration of the intervention targets.
- violation of assumption (A2’): we consider only one interventional environment per node, thus investigating our conjecture that two paired interventions may not be required for Thm 4.2
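For intuition, the triangular-Jacobian constraint described above can be mimicked in a toy masked linear map (an illustrative sketch with hypothetical names, not our actual normalizing-flow implementation):

```python
import numpy as np

# Toy illustration: enforce a triangular Jacobian by masking a linear map
# according to the fixed causal ordering V_1 <= V_2 <= V_3.
def triangular_flow_step(u, W):
    mask = np.tril(np.ones_like(W))  # zero out dV_i/dU_j for j > i
    return (mask * W) @ u

n = 3
rng = np.random.default_rng(0)
W = rng.normal(size=(n, n))
u = rng.normal(size=n)

# Finite-difference Jacobian: lower-triangular by construction.
eps = 1e-6
J = np.zeros((n, n))
for j in range(n):
    du = np.zeros(n)
    du[j] = eps
    J[:, j] = (triangular_flow_step(u + du, W) - triangular_flow_step(u, W)) / eps
assert np.allclose(np.triu(J, 1), 0.0)
```

A real flow would stack such masked layers with nonlinearities (as in autoregressive flows), which preserves the triangular structure under composition.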
**Results.** The results are summarised as a figure in the attached PDF.
(Due to the limited time for hyper-parameter tuning, etc., as well as the more challenging training task in this more general setting, the results are noisier than in the initial experiments. We therefore focus on describing trends.)
We find that the model with correctly specified intervention targets achieves the best fit in terms of likelihood, as well as high MCC. The second-best fit is attained by the $132^{(*)}$ model (yellow violin), for which the targets are misaligned in a way that should, *in principle*, be consistent with the partial permutation ambiguity of the true underlying causal graph. (Note, however, that none of the flow weights are a priori forced to be zero even in the absence of an edge $V_2 \not \to V_3$---this may induce *practical* differences between these two theoretically equivalent models.) This model achieves equally high MCC. All other models (for which the misalignment of intervention targets is incompatible with the partial permutation ambiguity of the true graph) achieve poorer model fits and lower MCC.
**Interpretation.** Overall, these results suggest that enumeration of graphs may not be needed, since 1, …, n is always a valid causal order w.l.o.g., and only choices of intervention targets compatible with the true causal graph achieve the best fits and MCC. Moreover, these results lend empirical support to our conjecture that 1 intervention per node may indeed be sufficient even for n>2 variables.
We will include a more detailed description of the experimental settings and results of these new experiments in the revised manuscript.
____
We address further questions and comments in our responses to the individual reviews.
Pdf: /pdf/1d9c5e4fb222eb151a2e976f16b1657725bfc1f3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models | Accept (poster) | Summary: There are powerful ODE solvers to speed up the sampling process of diffusion models. Despite being quick, ODE solvers do not usually reach the optimal quality achieved by SDE solvers, which are however slow. To tackle this problem, the paper proposes SEEDS, which is based on Exponential Integrators in the stochastic case. SEEDS solvers are derivative-free and training-free, with proven convergence orders.
Strengths: 1. The authors provide exact solutions for diffusion SDEs and analytical computation of their variance.
2. The paper contains rigorous theoretical derivation and convergence order guarantee for the proposed methods. It also points out its connection with gDDIM.
3. The paper is overall well-written and self-contained.
4. The method achieves outstanding sampling performance.
Weaknesses: The proposed SEEDS method still needs around a hundred NFEs to achieve good sampling quality, and the quality degrades quickly as the number of NFEs decreases, which limits its applications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would you please include the connection between the proposed method SEEDS-1/2/3 and the stochastic Runge-Kutta method properly combined with Exponential Integrator?
Also, the convergence orders provided are order 1 for SEEDS-1 in the mean-square sense and order 1 for SEEDS-2/3 in the weak sense. Since stochastic Runge-Kutta methods can achieve higher convergence orders even in the strong sense, the reviewer is concerned that the convergence orders provided are not optimal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our work. Below is a discussion of the raised questions, which will be included in the paper.
> Would you please include the connection between the proposed method SEEDS-1/2/3 and the stochastic Runge-Kutta method properly combined with Exponential Integrator?
Contrary to the ODE case, there are many stochastic Runge-Kutta approaches, usually tailored to SDEs of a specific form. Nonetheless, a common way of distinguishing solvers with the same strong order is to assign them couples $(p_d,p_s)$, where $p_s$ is the stochastic order and $p_d$ is the deterministic order obtained by setting $g=0$ in the considered SDE. For example, [R, Tab. 6.2] presents solvers with orders (1,1.0) and (2,1.0) respectively, and [R, Tab. 6.3] presents the solvers SRA1 and SRA3 with orders (2,1.5) and (3,1.5) respectively. The speed of many of these solvers was already tested in [GGF, Table 3] on CIFAR-10 (VP), which we reproduce below with our SEEDS method added to it.
| Method | Strong-Order | Speed |
| :--- | :----: | :---: |
| Euler-Maruyama (EM) | 0.5 | Baseline speed |
|SEEDS | 1 | 6.83x faster |
|Lamba EM (atol=1e-3) |0.5 | 2x faster |
|Lamba Euler-Heun |0.5 | 1.75x faster |
|Lamba EM (atol=1e-3, rtol=1e-3) |0.5 | 1.27x faster |
|Euler-Heun |0.5 | 1.86x slower |
|SOSRA |1.5 | 5.92x slower |
|SRA3 |1.5 | 6.93x slower |
|SOSRI |1.5 | 8.57x slower |
|Lamba EM (default) |0.5 | Diverged |
|RKMil |1.0 | Diverged |
|ImplicitRKMil |1.0 | Diverged |
|ISSEM | 0.5 | Diverged |
To our knowledge, the only available strong-order SERK methods for SDEs with inhomogeneous diffusion coefficients are the exponential Euler-Maruyama (EEM) method [K] and the stochastic RK Lawson (SRKL) schemes [D]. In short, the SRKL schemes only compute the linear coefficient analytically and use the Integrating Factor (IF) method to approximate the integrals in the representation of the exact solution given by the variation-of-parameters formula. This way, by a special change of variables (see [D, Alg. 1]), one can create exponential-integrator versions of many SDE methods. We implemented our own version of the SRKL schemes, taking into account that $\sigma,\alpha$ are not constant, and used the IF method to approximate the integrals after properly changing variables. Interestingly, the SRKL schemes seem to stabilize at increasing NFEs but at much higher FID values than their SETD counterparts.
**Comparison of SEEDS with current Stochastic RKL methods on CIFAR-10-vp-uncond (discrete)**
| Method\NFEs | 10 | 20 | 50 | 100 | Best known
| :--- | :----: | :---: | :---: | :---: | :---: |
| SRKL1($\lambda$) | 332.52 | 282.96 | 33.42 | 8.62 | \ |
| SEEDS-1 | 303.48 | 153.21 | 22.70 | 7.97 | 3.13(500NFE) |
| SRKL2($\lambda$) | 475.20 | 469.64 | 134.82 | 7.74 | \ |
| SEEDS-2 | 476.9 | 226.7 | 7.17 | 3.23 | 3.21(90NFE) |
| SRKL3($\lambda$) | 462.24 | 376.15 | 8.36 | 7.46 | \ |
| SEEDS-3 | 483.0 | 428.6 | 43.3 | 3.41 | 3.08(201NFE) |
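To illustrate the basic difference between a Maruyama-type and an exponential-integrator treatment of the linear drift, here is a generic sketch on the scalar linear test SDE $dX = -\lambda X\,dt + g\,dW$ (not the SEEDS algorithm itself; in particular, the noise term is kept in its simplest form, whereas SEEDS computes the variance of the stochastic integral analytically):

```python
import numpy as np

def em_step(x, lam, g, h, dw):
    # Euler-Maruyama: the linear drift -lam*x is only approximated to first order
    return x - lam * x * h + g * dw

def exp_euler_step(x, lam, g, h, dw):
    # exponential (SETD-style) step: the linear part is solved in closed form,
    # so with g = 0 a single step is exact for any step size h
    return np.exp(-lam * h) * x + g * dw

lam, h, x0 = 2.0, 0.5, 1.0
exact = x0 * np.exp(-lam * h)  # exact solution of the deterministic part
assert abs(exp_euler_step(x0, lam, 0.0, h, 0.0) - exact) < 1e-12
assert abs(em_step(x0, lam, 0.0, h, 0.0) - exact) > 1e-2  # EM incurs a visible error
```

This is why exponential solvers tolerate much larger step sizes (hence fewer NFEs) on the stiff linear part of diffusion SDEs.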
> Also, the convergence order included is order 1 for SEEDS-1 in the mean-square sense and order 1 for SEEDS-2/3 in the weak sense, since the stochastic Runge-Kutta method can achieve higher convergence order even in the strong sense, the reviewer is concerned the convergence order provided is not optimal.
The proposed convergence order for each SEEDS-1/2/3 is optimal: this is a consequence of the general result from [CC] about maximum convergence rates for SDE schemes with uncorrelated Gaussian increments. The underlying idea is that any solver with strong order $\geq 1.5$ has to account for double stochastic integrals in the non-truncated Itô-Taylor expansion, ultimately forcing any SRK-like solver to use correlated random variables (see [R, Tab. 6.3] and [KP] more generally). SEEDS avoids this additional complexity but an interesting future avenue would be to extend SEEDS to the higher strong order case (and not in the IF approach but the SETD approach). Another interesting path would be to craft weak second order SERK methods for DPMs (the work [KCB] addressed this only for homogeneous semi-linear SDEs).
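For context (a standard fact from [KP], included here for the reader's convenience): the simplest mixed stochastic integral entering the Itô-Taylor expansion is a Gaussian correlated with the Brownian increment,

$$ I_{(1,0)} = \int_t^{t+h} (W_s - W_t)\,\mathrm{d}s \sim \mathcal{N}\Big(0, \tfrac{h^3}{3}\Big), \qquad \mathrm{Cov}\big(\Delta W, I_{(1,0)}\big) = \tfrac{h^2}{2}, $$

so any scheme of strong order $\geq 1.5$ must jointly sample the correlated pair $(\Delta W, I_{(1,0)})$ rather than uncorrelated Gaussian increments.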
We hope to have addressed all necessary concerns from the reviewer to promote the acceptance of our paper.
References:
[R] Rossler. Runge-Kutta Methods for the Strong Approximation of Solutions of Stochastic Differential Equations
[K] Komori. Exponential Runge-Kutta Methods for Stiff Stochastic Differential Equations
[D] Debrabant et al. Runge-Kutta Lawson Schemes for Stochastic Differential Equations
[GGF] Jolicoeur-Martineau et al. Gotta Go Fast when Generating Data with Score-based Models
[CC] Clark & Cameron. The Maximum Rate of Convergence of Discrete Approximations for Stochastic Differential Equations
[KP] Kloeden & Platen. Numerical Solution of Stochastic Differential Equations
[KCB] Komori et al. Weak Second Order Explicit Exponential Runge-Kutta methods for Stochastic Differential Equations | Summary: This paper proposes an off-the-shelf (i.e., no further training) few-step sampler for diffusion probabilistic models (DPMs). By isolating linear terms in the exact solution of diffusion SDEs and using a change-of-variable method, the proposed sampler, SEEDS, simplifies the integrals and can approximate the solution of stochastic equations. SEEDS can accelerate the sampling of DPMs without compromising the quality of the samples.
Strengths: - (Quality & Clarity) The authors have successfully provided a theoretical derivation and error analysis of their proposed method, as well as details of the experiments. The paper is well-written and the reviewer had no difficulty comprehending its contents.
- (Significance) Numerical experiments show that SEEDS produces competitive results compared to previous sampling methods in terms of quality, with fewer function evaluations, in several benchmarks, namely CIFAR-10, CelebA-64, and ImageNet-64.
Weaknesses: - (Originality) The proposed method has shown promising performance. However, its theoretical background is based on the classical idea of the *variation-of-parameters* formula from the literature on differential equations. Although the authors claim that equation (5) is a novel representation of diffusion SDEs, this kind of solution form is very well-known in the SDE community, even for general Hilbert spaces (e.g. see Curtain & Falb and Da Prato & Zabczyk). Therefore, justification is needed to claim that the solution representation suggested by the authors is a novel contribution of this paper. Furthermore, the derivation of the exponentially weighted integral is quite similar to the DPM-solver (Lu et al.), except for utilizing the Itō-Taylor expansion due to the diffusion part in the solution representation. The convergence result of the proposed algorithm may not be an incremental result. However, the paper's claims about its contribution are somewhat overstated, and it is quite difficult for the reviewer to agree with the authors' claim.
[Curtain & Falb] Stochastic Differential Equations in Hilbert Space, *Journal of Differential Equations*, (1971).
[Da Prato & Zabczyk] Stochastic Equations in Infinite Dimensions, (2014).
[Lu et al.] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, *NeurIPS*, (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: To discuss the generation speed of the SEEDS-3 algorithm in Table 1, it is necessary to compare it quantitatively with the actual runtime, as well as with NFE. For example, in the case of the DPM-Solver, a comparison of DDIM with respect to runtime was made in Table 7 of Appendix E.7.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer daVP and agree on our poor choice of words when presenting the well-known variation-of-parameters formula; we will remove all mentions of novelty on this point, which gave the impression of overselling our work. Instead, we highlight in the general rebuttal response what we find to be novel in our work.
As requested, we report below the average runtime of a batch on a single NVIDIA V100 for EDM, DPM-Solver and SEEDS when sampling (at increasing NFEs) with all diffusion models mentioned in Table 1, plus additional higher-dimensional models: Latent Diffusion Model (LDM) and Stable Diffusion (SD). For reference, we added in Fig. 2 of the appended PDF file a 512x512 image generated by Stable Diffusion using SEEDS-1 at only 90 NFEs, to show that the reported runtimes come with good generation quality. We set batch sizes of 16 for SD, 32 for LSUN-Bedroom, 64 for CelebaHQ256 and 128 for all other datasets. One can see that the runtime is linear with respect to the NFE for SEEDS as well, since the main advantage of the SETD method is to compute the stochastic components of our solver analytically, making their computational cost negligible. Some mild optimizations of repeated terms made our implementation of SEEDS slightly faster than EDM and even DPM-Solver at the same NFE.
**Runtime comparison (seconds/batch ± std) on a single NVIDIA V100 of EDM, DPM-Solver and SEEDS**
(*Discretely-trained models, implementation based on the [DDIM] code.†computed using the [EDM] unmodified code. ‡our implementation in the [EDM] code.)
| Sampling method\NFE | 9 | 21 | 51 | 90 | 99 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **CIFAR-10 32x32** |
| EDM† | 2.096(±0.003)|4.891(±0.004)|11.952(±0.019)|21.213(±0.009)|23.086(±0.010)|
| DPM-Solver‡ | 2.099(±0.002) | 4.871(±0.004) | 11.841(±0.014) | 20.888(±0.020) | 23.006(±0.025)|
| SEEDS‡ | 2.086(±0.002) | 4.867(±0.003) | 11.817(±0.006) | 20.896(±0.028) | 22.957(±0.009)|
| **FFHQ 64x64** |
| EDM† | 4.361(±0.005)| 10.179(±0.005)| 24.738(±0.018)| 44.145(±0.016)| 48.007(±0.020)|
| DPM-Solver‡ | 4.344(±0.005)| 10.148(±0.009)| 24.637(±0.007)| 43.526(±0.029)| 47.857(±0.025)|
| SEEDS‡ | 4.353(±0.003)| 10.157(±0.004)| 24.661(±0.011)| 43.537(±0.013)| 47.885(±0.018)|
| **ImageNet 64x64** |
| EDM† | 7.525(±0.007)| 17.579(±0.004)| 42.696(±0.010)| 76.175(±0.024)| 82.870(±0.032)|
| DPM-Solver‡ | 7.535(±0.048)| 17.556(±0.009)| 42.629(±0.018)| 75.222(±0.026)| 82.725(±0.016)|
| SEEDS‡ | 7.429(±0.006)| 17.572(±0.013)| 42.654(±0.014)| 75.245(±0.034)| 82.793(±0.036)|
| **CIFAR-10 32x32*** |
| DPM-Solver | 0.272(±0.004)| 0.529(±0.007)| 1.324(±0.007)| 2.632(±0.004)| 2.895(±0.004)|
| SEEDS | 0.261(±0.002)| 0.523(±0.002)| 1.299(±0.003)| 2.598(±0.003)| 2.863(±0.008)|
| **Celeba 64x64***|
| DPM-Solver | 0.936(±0.004)| 1.821(±0.003)| 4.558(±0.008)| 9.108(±0.008)| 10.030(±0.008)|
| SEEDS | 0.912(±0.009)| 1.808(±0.004)| 4.526(±0.002)| 9.033(±0.004)| 9.922(±0.012)|
| **LSUN-Bedroom 256x256*** |
| DPM-Solver | 5.566(±0.019)| 11.124(±0.018)| 27.815(±0.031)| 55.648(±0.021)| 61.168(±0.026)|
| SEEDS | 5.498(±0.008)| 11.001(±0.022)| 27.503(±0.029)| 54.842(±0.020)| 60.262(±0.027)|
| **LDM-CelebAHQ 256x256**|
| DPM-Solver | 8.648(±0.013)| 17.492(±0.013)| 39.569(±0.015)| 68.205(±0.017)| - |
| SEEDS | 8.652(±0.005)| 17.469(±0.010)| 39.524(±0.010)| 68.240(±0.026)| - |
| **Stable Diffusion 512x512** |
| DPM-Solver | 19.598(±0.070)|40.679(±0.107)|92.914(±0.079)|163.902(±0.116)|-|
| SEEDS | 19.656(±0.106)| 40.781(±0.136)| 93.409(±0.123)| 161.451(±0.518)| - |
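A minimal harness of the kind that produces such mean(±std) seconds-per-batch figures might look as follows (a hypothetical sketch, not our actual benchmarking code; `sample_batch` stands in for one sampler call at a fixed NFE and batch size):

```python
import time
import statistics

# Hypothetical timing harness: run the sampler for several batches and
# report (mean, std) of the wall-clock seconds per batch.
def time_sampler(sample_batch, n_batches=5):
    times = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        sample_batch()  # one full batch of sampling at fixed NFE
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)
```

In practice one would also run a few warm-up batches before timing, so that one-time costs (e.g. CUDA kernel initialization) do not inflate the first measurement.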
We hope to have addressed all concerns from the reviewer necessary to promote the acceptance of our work.
References:
[DDIM] Song et al. Denoising Diffusion Implicit Models
[EDM] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models
[DPM-Solver] Lu et al. DPM-solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 steps
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and clarifications. The response sufficiently addresses my questions. After carefully considering the responses, I will be keeping my current rating. I think the authors need to address the remaining concerns of other reviewers sufficiently.
---
Reply to Comment 1.1.1:
Comment: We warmly thank the reviewer for their response. Following your advice, we will address the remaining concerns of the other reviewers to the best of our ability, hoping our effort will eventually be considered sufficient.
Strengths: The experiments show SEEDS-3 achieves optimal results with minimal NFE on many data sets.
Weaknesses: 1. The authors claim they have found a novel representation of the exact solution of the SDE (page 3, line 106). However, it is just a simple technique that is commonly used when solving SDEs: for example, when solving an OU process, one multiplies by an exponential factor in exactly this way. I don't find anything novel about this.
2. Essentially, in the solution, the first and last terms are treated using a multiplier and a change of variables, and the second is rewritten using a Taylor expansion. The convergence theorem is somewhat straightforward, as the Taylor expansion of course approximates the original function.
3. The experiments only report the NFE taken when achieving optimal performance, which I am not sure is a commonly accepted metric of evaluation in this community. Please comment on this. I am especially open to modifying the score if this question is addressed sufficiently.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer MHs8 for their comments. We have clarified in a general rebuttal response what building principles in SEEDS are our own contributions. In particular:
- The building principle of eq.5 will not be presented as novel in the revised version
- The truncated Itô-Taylor expansion, the SETD method and especially the specific combination of stochastic terms that we used are novel and not incremental over [DPM-Solver,DEIS].
- We point out that this already suggests that our convergence theorem is not straightforward. We refer the reviewer to Appendix B.2 in our manuscript for its proof. In particular, Itô's Lemma, the Itô isometry, Doob's martingale inequality, and Hölder's, Lyapunov's and Grönwall's inequalities are all needed to prove our theorem.
> The experiments only report the NFE taken when achieving optimal performance, which I am not sure is a commonly accepted metric of evaluation in this community.
Actually, besides optimal FID, we also report FID/NFE curves (in Fig. 1) as evaluation metrics (with supporting FID/NFE Tables 3--5 in App. E.4). For the sake of completeness, and hoping to fully address the reviewer's question, we report below the FID/NFE tables for the experiments leading to the results in Table 1, and we will add them to our manuscript.
In what follows: †=discrete models. ‡=continuous models. (+) resp. (-) means value at n+1 resp. n-1 NFE.
### CIFAR-10-uncond-vp‡
(*value obtained with the non-deep vp pretrained model. The reported 2.59 FID at 51 NFE in Table 1 of our manuscript is obtained with the "VP deep" architecture)
| Method\NFE|11|15|24|30|48|63|126|165|180|511
|:---|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|Euler-Maruyama|304.73(-)|248.13|\ |\ |66.32(NFE=50)|\ |\ |\ |12.27(NFE=200)|2.44(NFE=1000)
|GottaGoFast|\ |\ |\ |\ |82.42|\ |\ |2.73(NFE=151)|2.44|\
|EDM $(\text{S}_{\text{churn}}=0)$|18.41|6.52|3.52(-)|3.10(+)|2.99(-)|2.94|2.98(+)|\ |\ |2.93
|EDM $(\text{S}\_{\text{tmin,tmax}}+\text{S}\_{\text{noise}}=1)$|34.31|11.26|4.59(-)|3.44(+)|2.89(-)|2.77|2.72|\ |\ |2.69
|EDM $(\text{S}_{\text{noise}}=1)$|35.21|12.44|5.38(-)|3.99(+)|3.13(-)|2.90|2.60|\ |\ |2.55
|EDM $(\text{S}_{\text{tmin,tmax}}=[0,\infty])$|29.63|10.01|4.51(-)|3.43(+)|2.87(-)|2.71|2.52|\ |\ |2.54
|EDM $(\text{optimal})$|29.89|10.21|4.85(-)|3.77(+)|3.08(-)|2.84|2.47|\ |\ |**2.27**
|DPM-Solver-3|9.04|3.76|3.00|2.95|2.90|2.89*|\ |\ |\ |\
|SEEDS-3 $(\text{S}_{\text{churn}}=0)$|\ |\ |\ |288.22(+)|90.25(-)|33.91|2.45|**2.39**|2.45|\
### CelebA 64x64†
|Method\NFE|9|12|15|21|51|60|90|102|201
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:| :---:|:---:
|Euler-Maruyama|310.22(+)|227.16|207.97|120.44(-)|29.25(-)|\ |\ |\ |3.90(-)
|Analytic-DDPM|28.99(+)|25.27|21.80|18.14(-)|11.23(-)|\ | \ | \ |6.51(-)
|Analytic-DDIM|15.62(+)|13.90|12.29|10.45(-)|6.13(-)|\ |\ |\ |3.46(-)
|DDIM|10.85(+)|9.99|7.78|6.64(-)|5.23(-)|\ |\ |\ |4.78(-)
|DPM-Solver-3|6.92|4.20|3.05|2.82|**2.71**(NFE=36)|\ |\ |\ |\
|SEEDS-3|460.87|374.48|301.66|261.87|3.84|6.58|**1.88**|1.97|\
In particular, the reported values for DPM-Solver-3 on CIFAR-10-vp-uncond (on the non-deep model) were computed by us, with the sole purpose of providing a FID/NFE baseline. For the baseline CIFAR-10-vp-conditional and optimized ImageNet64-EDM-preconditioned pretrained DPMs, we did not find FID/NFE curves that we could use as baselines, but we still report various FID values at increasing NFEs so that future works can use them as baselines.
### SEEDS-3 on other datasets
|Dataset\NFE|15|21|30|60|90|120|129|150|165|180
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
|CIFAR-10-cond-vp‡|239.23|167.58|131.09|25.06|3.19|2.17|2.08|2.15|2.16|2.19

|Dataset\NFE|12|15|21|51|102|201|270
|:---|:---:|:---:|:---:|:---:|:---:|:---:
|ImageNet 64x64‡|209.12|197.79|153.72|63.75|16.35|1.56|1.38
### Discussion on evaluation metrics for DPM sampling methods
All benchmarks on image generation tasks in paperswithcode.com classify papers by FID score and, when applicable, the corresponding NFE.
While comparing solvers' performance should ideally be done by comparing their FID/NFE curves on fixed datasets & pretrained DPMs, computing such curves is computationally expensive. As such, we have many FID/NFE curves on CIFAR-10, which become strong comparison baselines (one baseline per pretrained DPM on CIFAR). Usually, authors establish FID/NFE curves on small datasets to illustrate the solver's convergence behaviour, and then proceed to report FID scores at specific NFE values. Some limitations:
- [EDM] does not report FID/NFE curves for FFHQ, AFHQ and EDM-preconditioned ImageNet64; it only shows the best obtained FID score for those. As such, using FID curves to compare SEEDS to EDM on those datasets is not possible unless we produce the EDM curves ourselves. Additionally, [EDM] computes FID curves for CIFAR-10-VP uncond. cont., but not for the "deep" version (which has 8 instead of 4 residual blocks), and its FID points are given at different NFEs than [DPM-Solver] (which uses the "deep" architecture). Evaluating solvers needs to be done on fixed pretrained DPMs, and building FID/NFE tables requires FID values at the same NFEs.
We hope to have convinced the reviewer that we did our best to use the ideal FID/NFE curve+table evaluation metric whenever it was feasible, and that we reported the best found FID (with its corresponding NFE) for all experiments. We also hope this discussion on evaluation metrics satisfies the reviewer and helps promote the acceptance of our work.
References:
[DEIS] Zhang & Chen. Fast Sampling of Diffusion Models with Exponential Integrator
[EDM] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models
[DPM-Solver] Lu et al. DPM-solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 steps | Summary: This paper proposes a sampling method that achieves 3 to 5 times faster computation by exploiting the semi-linearity in the time-reversal stochastic differential equation (SDE). It analytically computes the linear part and reaches optimal quality sampling. This sampling method demonstrates comparable results to existing SDE solvers on datasets like CIFAR10 or Celeba without requiring any additional training methods.
Strengths: The paper is well-written in a clear and readable manner. It is easy to understand the paper without much difficulty, and the derivation of formulas is well-guided, focusing on the essential parts. One of the strengths of the paper is that it proposes a method that achieves approximately 3 to 5 times faster computation while still reaching optimal performance, which sets it apart from other ODE-based methods. Additionally, the paper provides a thorough and clear analysis of convergence, which is commendable.
Weaknesses: In (5), the claim of a "Novel representation of exact solutions of diffusion SDEs" seems to be overclaimed as the equation is well-known in the form of the weak solution of the SDE. Therefore, it is difficult to consider this as a newly discovered novel representation. Moreover, while this integral representation leads to a practical sampling method, much of it is based on formulas used in the DPM-solver [1]. This suggests that the contribution of this work may be relatively weak. The main difference from the DPM-solver appears to be the inclusion of a stochastic term, but beyond that, significant distinctions are not apparent.
The claimed advantage of reducing the number of function evaluations (NFE) by 3 to 5 times seems less compelling when compared to ODE-based methods that can achieve sampling speeds more than 50 times faster and still attain optimal performance. For more persuasive results, it would be better if this method demonstrated similar NFE to ODE-based methods while reaching optimal performance.
Additionally, it is unclear whether this method addresses any meaningful issues beyond the problem of ODE-based methods not achieving optimal performance.
[1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, *NeurIPS*, (2022).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Is it possible to maintain fast sampling speeds for datasets with resolutions higher than 256, such as CelebA-HQ or ImageNet?
- Apart from the ability to reach optimal performance, are there any other distinguishing features compared to the DPM-solver? For instance, does it generate a more diverse set of samples or exhibit stronger capabilities in tasks like imputation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: The paper has effectively summarized its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer CfgK for their effort reading our work. We have clarified in a general rebuttal response what building principles in SEEDS are our own contributions. In particular,
- eq.5 will not be presented as novel in the revised version
- The change-of-variables used in the EDM case (Prop. 3.2) is new and leads to the version of SEEDS achieving the same performance as the optimized EDM solver, but twice as fast.
- The truncated Itô-Taylor expansion, the SETD method and especially the specific combination of stochastic terms that we used are novel and not incremental over prior works.
- Although ODE methods achieve optimal performance on small datasets, already for ImageNet64 it has been shown in [EDM (arXiv version v2), Fig. 5(c)] that stochasticity is necessary to obtain optimal performance. It is precisely in that scenario that SEEDS shows competitive results with half as many NFEs as the EDM solver.
> Is it possible to maintain fast sampling speeds for datasets with resolutions higher than 256, such as CelebA-HQ or ImageNet?
Indeed, SEEDS maintains fast sampling speeds on higher-resolution images. For unconditional generation using a Latent Diffusion Model, SEEDS is able to generate good quality images already at 100 NFEs. In the PDF file attached to this rebuttal, we added a 512x512 image generated with SEEDS-1 at 90 NFEs on Stable Diffusion. This is consistent with the suggestion in [EDM, end of Section 5] that more diverse datasets continue to benefit from stochastic sampling rather than deterministic sampling. We hope this convinces the reviewer of the scalability of SEEDS.
> Apart from the ability to reach optimal performance, are there any other distinguishing features compared to the DPM-solver? For instance, does it generate a more diverse set of samples or exhibit stronger capabilities in tasks like imputation?
We computed the Inception Score for SEEDS and saw only marginal improvements compared to DPM-Solver, so both exhibit comparable sample diversity. Nevertheless, SEEDS has at least two distinctive capabilities in the realm of adversarial robustness, showing substantial advantages compared to DPM-Solver and EDM, which we hope will be relevant enough to the reviewer to promote the acceptance of our work.
- For more than 2 years, the leaderboard of RobustBench has been dominated by diffusion-based data augmentation techniques + adversarial training. The current SOTA is [DM-AT, Table 4], which uses EDM-preconditioned DPMs and needs to generate as many as 50M images to achieve SOTA robustness results. For ImageNet-64, [DM-AT] uses the EDM pretrained model + stochastic sampler for data augmentation, leading to a 5% robust accuracy improvement compared to doing so with the baseline ADM pretrained model. Since SEEDS reaches the same FID quality as EDM, but twice as fast, we believe it will have a positive impact in this domain, making diffusion-based data augmentation schemes more affordable under limited computational capacity.
- The work [DiffPure] uses DPM-based adversarial purification as a test-time adversarial defense. The idea is to use off-the-shelf DPMs to remove the adversarial content from inputs at test time, before feeding them to a pretrained classifier. Below we reproduce [DiffPure, Table 6], showing the standard and robust accuracy of a pretrained WideResNet-28-10 classifier under AutoAttack $l_\infty$ ($\epsilon$ = 8/255) on CIFAR-10, whose inputs are purified by a pretrained CIFAR-10-VP-unconditional DPM using ODE versus SDE sampling schemes. One can see that SDE solvers show substantial robustness gains compared to ODE solvers.
| Sampling Method | Standard Acc | Robust Acc |
| :--- | :----: | :----: |
| VP-ODE | 90.79 | 39.86 |
| VP-SDE | 89.02 | 70.64 |
References:
[EDM] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models
[DM-AT] Wang et al. Better Diffusion Models further Improve Adversarial Training
[DiffPure] Nie et al. Diffusion Models for Adversarial Purification
---
Rebuttal Comment 1.1:
Comment: Looking at the new change of variable, in the end, it seems the model calculates the score model for one NFE twice. It's uncertain whether the NFE has decreased, but the total running time might not differ significantly. I'm not sure if this can be considered a significant contribution...
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response, to which we happily bring answers and clarifications.
Both in [DPM-Solver] and in our method, the score model is evaluated exactly once per NFE (this is just the definition of "Number of Function Evaluations"). Since both DPM-Solver and SEEDS compute all other components of the sampling method analytically, the total runtime is expected to grow linearly with increasing NFEs in both cases, as was experimentally checked. You may also verify that there is no significant difference, in terms of runtime vs NFEs, between DDIM and DPM-Solver in Table 7 of Appendix E.7 in [DPM-Solver]. Yet DPM-Solver achieves a good FID score significantly faster than DDIM, a fact that cannot be seen just by looking at runtime vs NFEs. As such, one cannot evaluate the significance of a sampling method solely by analysing the runtime vs NFE behaviour. In our case, since we compute every term (both with the VP and with our new EDM-optimized change of variables) analytically, the computational cost of anything other than the score model's pass is negligible.
That being said, the number of NFEs necessary for SEEDS to obtain a state-of-the-art FID score shows a significant improvement compared to other SDE sampling methods: SEEDS is twice as fast as the optimized EDM sampling method on ImageNet-64, as reported in our experiments.
We hope that this, together with the involved theoretical proofs in this work, which have no equivalent in the current literature on the subject, convinces you of our contribution.
Rebuttal: ## General Rebuttal Response
As reviewers 1-4 pointed out, and as we fully agree, the widely known variation-of-parameters formula is not a contribution of ours, and we will modify this in our manuscript. In our effort to clarify the building principles of SEEDS, we unintentionally and incorrectly stressed which of those principles are novelties of our work.
Let's clarify the 4 contributions:
- Using the stochastic exponential time differencing (SETD) method (l.130) to express the noise variance in terms of the $\varphi$ functions for the exponentially weighted stochastic integrals is new and not incremental over [DPM-solver]
- The truncated Itô-Taylor expansion is a new tool. It is different from - and not incremental over - Taylor expansions (as highlighted by reviewer daVP and fully detailed in App. D.2.2)
- Our principled use of the Chasles rule (l.182) to enforce dependence on overlapping paths for SEEDS-2/3 is new and is central to the success of SEEDS. It also ensures that the resulting iterations of our solvers satisfy the Markov property. Furthermore, we claim that the specially chosen combination of stochastic terms in SEEDS is not incremental over any prior work. To highlight this, we added an ablation study on how choosing different noise decompositions (e.g. the values of A and B in Algorithm 4) impacts the performance of SEEDS. We consider 4 different combinations of noise components in SEEDS-2/3. For simplicity, let us explain it at the SEEDS-2 (Alg. 3) level.
- Let $(z^1,z^2)$ be two independent standard Gaussian random variables. Denote by $A=\sigma_{s_1} \sqrt{e^h - 1} z^1$ the noise contribution in the $\mathbf{u}$ term. We have the following choices for the noise contribution to $\tilde{\mathbf{x}}_t$:
- SEEDS-2: our noise combination $B=\sigma_t \left( \sqrt{e^{2 h} - e^h} z^1 + \sqrt{e^h - 1} z^2 \right)$
- Naive v1: one noise per stage $B=\sigma_t \left( \sqrt{e^{2 h} - e^h} + \sqrt{e^h - 1} \right) z^2$
- Naive v2: one noise per step $B=\sigma_t \left( \sqrt{e^{2 h} - e^h} + \sqrt{e^h - 1} \right) z^1$
- Naive v3: one noise per integral evaluation $B=\sigma_t \left( \sqrt{e^{2 h} - e^h} z^3 + \sqrt{e^h - 1} z^2 \right)$ (where $z^3$ is also standard Gaussian)
- Naive v4: noises in inverse position $B=\sigma_t \left( \sqrt{e^{2 h} - e^h} z^2 + \sqrt{e^h - 1} z^1 \right)$
For each of these combinations, we generated FID/NFE curves (see Fig. 1 in the attached PDF file), showing that all naive combinations of noises in SEEDS-2/3 lead to a sharp drop in performance, both in quality and in speed. Finally, our choice is supported by a theoretical justification (using the independence of stochastic integrals over disjoint intervals) that has no counterpart in the deterministic setting. All convergence proofs in Appendix B make intensive use of tools (like the Itô isometry or Doob's martingale inequality) that also have no deterministic equivalent.
- The change-of-variables optimized for the EDM framework is new (our Prop. 3.2) and is the one we use to achieve the same performance as the EDM solver, but with half as many NFEs, on ImageNet-64 (Fig. 1(c) in our manuscript).
To give an example of the difference in the obtained formulas for SEEDS in this case, we write below the SEEDS-2 iteration rule, and we can add it to the manuscript at the request of the reviewers:
- $\lambda (t) := - \log \left( \frac{t}{\sigma_d \sqrt{t^2 + \sigma^2_d}} \right), \qquad t_{\lambda} (\lambda) := \frac{\sigma_d}{\sqrt{\frac{1}{\sigma^2_d e^{- 2 \lambda}} - 1}}, \quad \sigma_d = \sigma_{data}$
- $h = \lambda_t - \lambda_s, \quad \lambda_{s_1} = \lambda_s + rh, \quad s_1 = t_{\lambda} (\lambda_{s_1})$
- $\epsilon_{\theta} \left( \widetilde{\mathbf{x}}_s, s \right) = \left[ D\_{\theta} \left( \frac{\widetilde{\mathbf{x}}_s}{\alpha_s} ; \sigma_s \right) - \frac{\sigma_d^2}{s^2 + \sigma_d^2} \frac{\widetilde{\mathbf{x}}_s}{\alpha_s} \right] \cdot \frac{\sqrt{s^2 + \sigma_d^2}}{s \sigma_d}$
- $\widetilde{\mathbf{x}}_{s_1} = \frac{s_1^2 + \sigma_d^2}{s^2 + \sigma_d^2} \widetilde{\mathbf{x}}_s + 2 \frac{s_1 \sqrt{s_1^2 + \sigma_d^2}}{\sigma_d} (e^{r h} - 1){\epsilon\_{\theta}}\left( \widetilde{\mathbf{x}}_s, s \right) + \sqrt{\frac{(s_1^2 + \sigma_d^2) (s^2 - s_1^2)}{(s^2 + \sigma_d^2)}} z^1$
- $\widetilde{\mathbf{x}}_t = \frac{t^2 + \sigma_d^2}{s^2 + \sigma_d^2} \widetilde{\mathbf{x}}_s + 2 \frac{t \sqrt{t^2 + \sigma_d^2}}{\sigma_d} (e^{ h} - 1) \epsilon\_{\theta}\left({\widetilde{\mathbf{x}}\_{s_1}}, s_1 \right) + \sqrt{\frac{(t^2 + \sigma_d^2) (s_1^2 - t^2)}{(s_1^2 + \sigma_d^2)}} (e^{r h} z^1 + z^2) $
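As a quick sanity check of these quantities (an illustrative sketch with an arbitrary $\sigma_d$, not part of our codebase): the maps $\lambda(t)$ and $t_\lambda(\lambda)$ are mutually inverse, and the SEEDS-2 noise weights satisfy $(e^{2h}-e^h)+(e^h-1)=e^{2h}-1$, so splitting the noise over the independent variables $z^1, z^2$ preserves the exact marginal variance.

```python
import math

SIGMA_D = 0.5  # illustrative sigma_data; any positive value works

def lam(t, sd=SIGMA_D):
    # lambda(t) = -log( t / (sd * sqrt(t^2 + sd^2)) )
    return -math.log(t / (sd * math.sqrt(t * t + sd * sd)))

def t_lam(lmb, sd=SIGMA_D):
    # inverse map: t_lambda(lambda) = sd / sqrt( 1 / (sd^2 * e^{-2 lambda}) - 1 )
    return sd / math.sqrt(1.0 / (sd * sd * math.exp(-2.0 * lmb)) - 1.0)

# round trip t_lambda(lambda(t)) == t over several scales
for t in (0.01, 0.1, 1.0, 10.0):
    assert abs(t_lam(lam(t)) - t) < 1e-9 * max(1.0, t)

# SEEDS-2 noise split: the squared weights of z^1 and z^2 add up to the
# exact marginal variance e^{2h} - 1 of the exponentially weighted noise
h = 0.7
assert abs((math.exp(2 * h) - math.exp(h)) + (math.exp(h) - 1.0)
           - (math.exp(2 * h) - 1.0)) < 1e-12
```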
We hope that this clarification both highlights: on the one hand, the building principles of SEEDS, and on the other hand, the original contributions of our work. Finally, we would like to emphasise that many available optimisation tools for sampling (for instance multi-stepping, dynamic thresholding,...) can be readily built on top of SEEDS to better address specific tasks like guided sampling.
References:
[DPM-Solver] Lu et al. DPM-solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 steps
Pdf: /pdf/ec7b88209872634ce9b4961146d2a0e4cf06aafc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper aims to accelerate the SDE solvers through exponential integrators. The authors compute the linear part of the SDE analytically, employ a more accurate interaction of the stochastic part. Experimentally, the proposed method improves over the previous ODE solvers.
Strengths: - The paper proposes to utilize the exponential integrator to solve the diffusion SDEs (Eq. (5)). In addition, it approximates the integral components through Taylor expansion.
- The paper utilizes the stochastic exponential time differencing method to compute the variance of the stochastic term analytically.
- Experimentally, the third-order solver yields better FID than previous ODE solvers while requiring significantly smaller NFEs than SDE solvers.
Weaknesses: - It seems that the same formula of the exponential integrated version of the SDE (Eq.5) has been proposed in prior work [1] (see their eq.17, arXiv version 1). Since the DPM-solver [2] is essentially the same as DEIS, and they also use a Taylor expansion of the score function, I wonder how this work differs from these prior works.
- The authors show in Prop 4.4 that, setting $g=0$ in Eq.15, the resulting SEEDS solvers are not equal to the DPM-solver. I wonder why this is the case, based on the point above. In addition, setting $g=0$ in Eq.15 would not lead to meaningful backward ODE/SDEs.
- The SEEDS-3 solver indeed showcases great improvement over ODE solvers. However, when using an increased NFE, the EDM solvers perform better than SEEDS-3 (SEEDS-3 even gets worse when increasing NFE as shown in Fig.1). In addition, I notice the SEEDS solver does not even apply to a low NFE regime (NFE<100) on CIFAR-10. It might be helpful to utilize the techniques in a concurrent work [3] to further improve the sample quality when increasing the NFE, or to achieve better results in the low NFE regime. It also is helpful to discuss [3] as they also focus on improving the existing SDEs.
### Minors
- Line 74 : $\mu_t \to \alpha_t$
[1] Zhang, Qinsheng, and Yongxin Chen. "Fast sampling of diffusion models with exponential integrator." arXiv preprint arXiv:2204.13902 (2022).
[2] Lu, Cheng, et al. "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps." Advances in Neural Information Processing Systems 35 (2022): 5775-5787
[3] Xu, Yilun, et al. "Restart Sampling for Improving Generative Processes." arXiv preprint arXiv:2306.14878 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is this work a straightforward application of prior works ([1], [2]) to SDEs?
- In Figure 1, why does the SEEDS solver have such a high FID score (worse sample quality) in the low NFE regime? Is it due to the error when simulating the noise term?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussed in Sec 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer nfDR for their effort reading our work.
> Is this work a straightforward application of prior works ([1], [2]) to SDEs?
We have clarified in a general rebuttal response what building principles in SEEDS are our own contributions. In particular:
- The building principle of Eq. 5 will not be presented as novel in the revised version.
- The change-of-variables used in the EDM case (Prop. 3.2) is new and leads to a version of SEEDS achieving the same performance as the optimized EDM solver while being twice as fast.
- We point out that Fig.1(c) is not a FID/NFE curve.
- We refer to Fig.1 of the attached PDF file to this rebuttal to illustrate that SEEDS stabilizes at increasing NFEs.
- The truncated Itô-Taylor expansion, the SETD method, and especially the specific combination of stochastic terms that we used are novel and not incremental over [DPM-Solver, DEIS].
- Although ODE methods have been demonstrated to achieve optimal performance for small datasets, already for ImageNet64 it has been shown in [Karras, Fig. 5(c)] that stochasticity is necessary to obtain optimal performance. It is precisely in that scenario that SEEDS shows competitive results with half the NFEs of the EDM solver.
- Setting $g=0$ in Eq. 15 yields both a valid ODE and valid ODE solvers. However, such an ODE is the probability flow ODE of a forward SDE process which is not equivalent to the one we used (the induced marginal probability trajectories via Fokker-Planck do not follow the same PDE).
We hope this will convince you that our work is not a straightforward application of the mentioned prior works.
> In Figure 1, why does the SEEDS solver have such a high FID score (worse sample quality) in the low NFE regime? Is it due to the error when simulating the noise term?
We thank the reviewer for pointing out the work [XY], which appeared after our submission (and so cannot be considered a baseline for this submission). Interestingly, [XY, Th.1] gives a formal explanation of why ODE samplers outperform SDE samplers in the small NFE regime and fall short in the large NFE regime. In particular, it is theoretically shown that, at large step sizes, the discretization error dominates the sampling error, while at small step sizes, the approximation error dominates it. We will include this discussion in our manuscript, hoping that this will encourage the reviewer to promote the acceptance of our work, and we definitely look forward to combining SEEDS with the new ideas in [XY].
References:
[DPM-Solver] Lu et al. DPM-solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 steps
[DEIS] Zhang & Chen. Fast Sampling of Diffusion Models with Exponential Integrator
[EDM] Karras et al. Elucidating the Design Space of Diffusion-based Generative Models
[XY] Xu et al. Restart Sampling for Improving Generative Processes | null | null | null | null | null | null |
Localized Symbolic Knowledge Distillation for Visual Commonsense Models | Accept (poster) | Summary: This paper proposes a framework that can generate global, local, and dynamic descriptions. ChatGPT can generate question-and-answer pairs containing specific regions or descriptive phrases with the proposed tricks. A critic model is trained to filter the generated data. Finally, the localized corpus is used to fine-tune the vision language model.
Strengths: 1. This paper proposes a way to acquire localized commonsense knowledge data and enable the vision and language model to answer localized visual commonsense questions. The workload of this work is impressive.
2. The fine-tuned vision language model achieves promising zero-shot performance for three localized visual reasoning tasks.
Weaknesses: 1. The proposed framework is more like engineering work. The proposed method to generate localized data pairs is straightforward, and the way to enable the local-level question-answering ability is not novel, limiting its scientific contribution.
2. There are too many tricks, including filtering by the critic model and prompts for acquiring question-answering pairs, which decreases the reproducibility and robustness of the whole pipeline. Furthermore, the selection of the vision-language model and large language model, and the hyperparameters for each model in each step, increase the complexity of the proposed pipeline, and the final vision-language model may be sensitive to the generated data, which makes it hard to iteratively improve the final model's performance by improving any step in this pipeline.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: This paper proposes an impressive engineering framework. It would be better to analyze the impact of the quality and amount of the generated data on the final performance of the visual language model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: I did not find a potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that the **workload is impressive** and **achieves promising zero-shot performance for three localized visual reasoning tasks.** We now address the reviewer's comments in the following section.
**1. The proposed framework is more like engineering work. The proposed method to generate localized data pairs is straightforward, and the way to enable the local-level question-answering ability is not novel, limiting its scientific contribution.**
Previous models enabled local-level commonsense question-answering ability via training on human-annotated datasets such as VCR, which cannot be scaled automatically. On the other hand, we explore novel ways of automatically extracting localized visual commonsense knowledge at scale, which requires a heavy engineering workload due to the nature of the automated pipeline. In fact, reviewer UzVU states that we investigate a “novel research question to distill visual commonsense knowledge”, which is “an interesting question that previous works under-explore”. We are also “the first work about automatically acquiring visual commonsense”.
Reviewer Cm2R has pointed out that our framework has limited innovation compared to the original symbolic knowledge distillation. Following our response, we argue that we investigate a different problem when the teacher and student are given different input modalities, where the teacher has no access to the raw image content. This leads to a new framework of utilizing automated image descriptors to capture the raw image content interpreted by the teacher model. Different from prior multimodal generation work such as LLaVA, we also generate localized data pairs to help grounding of vision language models. Given these differences, we believe our framework presents a novel approach to localized, visual commonsense understanding.
**2. There are too many tricks, including filtering by critic model … which decreases the reproducibility and robustness of the whole pipeline. …It would be better to analyze the impact of the quality and amount of the generated data on the final performance of the visual language model**
Based on our empirical studies, we find that the components of verbalizers and filtering by critic model are crucial to enable the knowledge distillation.
We first argue that **training the critic model is necessary to eliminate hallucinations and irrelevant generations to guarantee the dataset quality**. This is illustrated qualitatively in Figure 1 of the rebuttal where instances with incorrect visual details are given lower scores. Next, the human evaluation in Section 3 of the rebuttal clearly shows the benefits of filtered data based on their visual correctness (2.41 vs 1.97) and justification (2.34 vs 1.87).
Based on these findings, we further analyze the impact of the quality of the generated data using the score from the critic model. Keeping the number of training data at 300K, we examine whether letting the critic model be more critical leads to an increase in dataset quality. This can be controlled by the threshold value of the critic model, where we only keep the instances that score higher than the threshold. Figure 3 shows the performance trend based on the threshold value, where we see a near-linear increase in the final performance with more aggressive filtering. This **implies that the *quality* of the generated data can be controlled by the critic model**.
In Table 2 of the rebuttal, we evaluate the performance of the pipeline using varying amounts of generated data, specifically comparing the results from datasets of 150K and 1M in size. We see that **increasing the scale leads to better performance in all of localized visual reasoning tasks**.
We are also open to running any other analysis that reviewers would like us to investigate.
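In code, the threshold-based quality control described above amounts to a one-line filter. The names here are illustrative rather than the paper's actual API; `critic_score` stands in for the trained critic model.

```python
def filter_by_critic(instances, critic_score, threshold):
    """Keep only generated QAR instances whose critic score clears the
    threshold; raising `threshold` makes the critic more selective,
    trading dataset size for quality."""
    return [inst for inst in instances if critic_score(inst) >= threshold]
```

Sweeping `threshold` while holding the kept-set size fixed is the knob behind the near-linear quality/performance trend reported above.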
**3. Are prompts for acquiring question-answering pairs necessary?**
In response to Reviewer MRhi, we found that adding all three verbalizer descriptors yields the highest scores from the critic. For instance, **adding the QA descriptor provides the biggest gain, from 49.0 to 58.4**. This means that calling all the descriptors, including QA, is necessary to obtain more high-quality data as well.
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarification provided regarding the engineering workload and the quality of the data produced. Hence, I decide to raise my score to 5. Moreover, I recommend that the authors consider making the generated data publicly accessible to facilitate future research endeavors.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading our rebuttal and considering re-evaluation of the score. We will provide a public link to access the generated data and reproducible code in the final version. | Summary: This paper aims to build an instruction-following model for **localized** visual commonsense reasoning. The work differs from existing works in that it is able to reason about localized image regions with boxes, without using complicated referring expressions. The paper first collects data by distilling LLMs, i.e., prompting GPT using verbalized image descriptions, which is a concatenation of 3 descriptors: global, local, and QA pairs. Then, the data collected in the first stage is filtered by a critic filter trained using human annotations. Finally, the filtered data is used to train a BLIP-2 model. Experiments are done on 3 datasets for the localized visual reasoning task, showing the performance of the proposed method.
Strengths: - The paper is clearly motivated and the problem studied is well described.
- The dataset collection method is well designed. Using (global, local, QA) descriptors to prompt LLM in order to generate question, answer and rationales.
- The effectiveness of the critic filtering is well-analyzed and clearly shown in Fig 3.
- The paper is well written and easy to follow.
Weaknesses: My major concern is about experiments. The current results may not be very extensive to show the effectiveness of the proposed method.
- Box size is an important factor, since the task is localized reasoning. How does the model perform on large and small objects? How does this compare to other methods? First, it would be helpful to describe the distribution of the bbox sizes in the generated data. Second, in the final evaluation, it is also helpful to study the model performance change wrt different box sizes.
- three descriptors (global, local, QA) are used to prompt LLMs. Is there a study to show the contribution of each of the three? Since QA is not as widely used as the other two, additional discussions/results showing its effectiveness would be good.
- Very limited baselines are compared with. In table 2, only CLIP is compared. In table 3, only ChatGPT. More baselines should be considered. For example, for tab 2, while zero-shot baselines are hard to find, it is still beneficial to show some non zero-shot methods (methods on the leaderboard of these datasets). Although the model performance is not directly comparable, it can give readers some sense of how well the model performs.
- (potential extension) While the contribution on the data collection/distillation part is strong, the contribution on the model part is weak. Basically, BLIP-2 is directly used and trained on the collected data. Could there be some model design specific for handling “localized” information? Moreover, the “localized” reasoning in the current paper is constrained to color-coded bbox, while more general forms like free-form masks can be a further extension.
- The work is in principle similar to LLaVA, extending it to be “localized”. Could the authors discuss the difference with LLaVA in more detail?
- The title suggests “symbolic”, how is this work related to “symbolic”? I feel this is mis-leading.
- some typos: L168 QA-MSE, L146 QAR is not defined.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for acknowledging that “the dataset method collection is well designed” and “effectiveness of critic filtering is well-analyzed”. We address the concerns raised by the reviewers.
**Bounding Box Size Distributions and Performance Analysis**
Figure 2 in the rebuttal shows the distribution of normalized bounding box sizes in the filtered corpus, highlighting the width, height, and area. We notice that almost 50% of the bounding boxes have a normalized area $\lt$ 0.05, suggesting that small objects are well covered in our corpus. The height shows a more uniform distribution than the width, indicating that there are more bounding boxes with smaller widths, with the width mainly clustering in the range of 0.1-0.2. This reveals that the corpus contains not just large and prominent objects, but also small or narrow objects that often require attentive vision models to recognize.
We use the Sherlock comparison task to study the model performance change w.r.t different bounding boxes as their dataset consists of single bounding boxes with diverse sizes. We measure the **Pearson’s correlation between the input bounding box size and the comparison accuracy**: $\mathbf{\rho}$ = -0.12 with p-value $\lt$ 0.05.
Based on the correlation, we see that the performance is actually higher for smaller objects. One might initially think that larger bounding boxes would result in better performance, as they could potentially encompass more features or more of the object of interest. We hypothesize that the negative correlation is due to the following:
- **Specificity**: Smaller bounding boxes quite often are more specific in identifying the target objects, thereby reducing the complexity of the region and making it easier for the model to focus and reason.
- **Clutter**: Larger bounding boxes might include more "noise" or irrelevant objects/background - which could mislead the model during the reasoning process as it gets distracted by extraneous details.
This is reflected in the dataset distribution, as smaller objects are better covered in the dataset generation. The analysis of the bounding box sizes will be included in the final paper.
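The two quantities used in this analysis, normalized bounding box area and Pearson's correlation coefficient, are standard and can be sketched as follows (illustrative code, not the authors' evaluation pipeline):

```python
import numpy as np

def normalized_area(box, img_w, img_h):
    """Area of an (x1, y1, x2, y2) box as a fraction of the image area."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1) / (img_w * img_h)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))
```

Computing `pearson(areas, accuracies)` over the Sherlock comparison instances is the calculation behind the reported $\rho = -0.12$.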
**Contributions of Three Descriptors to prompt LLMs**
We run ablation studies of the descriptor components in the ChatGPT prompt by using the critic model to score the ChatGPT generations. We collect QAR instances for 500 images and calculate the average score, where a higher score is aligned with human preference. **We see that using all descriptors provides the best results**; in fact, the QA descriptor provides the biggest jump (from 49.0 to 58.4). We will add these results in the final version.
- Full verbalizations: 58.4
- No Localized Narratives: 56.1
- No CLIP: 52.1
- No Global Descriptions: 54.3
- No Region Descriptions: 49.8
- No QA: 49.0
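The ablation above corresponds to dropping individual descriptors from the concatenated verbalization passed to ChatGPT. A minimal sketch of that concatenation follows, with illustrative labels rather than the paper's exact prompt format:

```python
def verbalize(descriptors):
    """Concatenate available descriptor texts into one prompt for the
    teacher LLM; ablating a descriptor corresponds to omitting its key
    or leaving its text empty.

    `descriptors` maps a label (e.g. "Global description",
    "Region descriptions", "QA pairs") to its verbalized text.
    """
    return "\n".join(f"{label}: {text}"
                     for label, text in descriptors.items() if text)
```

Each ablation row in the list above is then just a call to `verbalize` with one descriptor removed.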
**Limited Baselines**
We agree that limited baselines are compared, due to the difficulty of finding zero-shot baselines for the localized reasoning tasks. In Table 1 of the rebuttal, we additionally include the BLIP baseline results and the finetuned results that achieve the top of the leaderboard on these datasets. The gap between finetuned and zero-shot models suggests that localized visual commonsense is not present in pre-trained vision-language models. This underscores the importance of our work in filling these gaps by adding localized visual commonsense to existing models.
**Extension of Localization Approach**
We constrain our study to color-coded bounding boxes following prior works [Zellers et al., Hessel et al.] known to effectively encode localized information. One way to improve the handling of “localized” information is to directly encode the information as floating points in the text input and output. Another extension is using segmentation masks over the bounding boxes for more refined localization of the desired objects and regions. We leave improving the localization aspect as future work.
**Difference with LLAVA**
- The main difference between LLAVA and our pipeline:
- LLAVA uses GPT-4 as the teacher model, while we use ChatGPT.
- LLAVA relies on human annotations of captions and object bounding boxes in COCO images, while we utilize diverse, learned descriptors to extend to images without human annotations.
- We train additional supervised critic to filter irrelevant generations made by the teacher model to ensure higher dataset quality, while LLAVA does not have filtering mechanism.
- As discussed in the main rebuttal and response to Reviewer Cm2R, we compare ours with LLaVA in the localized reasoning task on the BLIP-2 ViT-G model. We see that our corpus with critic filtering gives clear improvement in various localized reasoning tasks in Figure 3 of the main paper. Table 2 in the rebuttal shows more comparisons while fixing the dataset size to be the same. We see a clear improvement of using our corpus over LLAVA in localized visual commonsense reasoning tasks with the same dataset size (row 2 and 4 in Table 2), and the gap is increased with more training data (row 3 and 4 in Table 2).
**How is this work related to symbolic?**
The “symbolic” terminology is introduced in [West et. al.] to denote that knowledge is distilled from the teacher to student model “symbolically as text”. Our framework is different in that we include verbalizers to prompt the teacher model to account for the modality gap as per response to Reviewer Cm2R. We will clarify this terminology in the introduction.
**Typos**
Thank you for pointing out the typos. We will define QAR as (Question, Answer, Rationale) and fix any remaining typos accordingly in the final version.
[1]: MERLOT: Multimodal Neural Script Knowledge Models [Zellers et al.]
[2]: The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning [Hessel et al.]
[3]: Symbolic Knowledge Distillation: from General Language Models to Commonsense Models [West et al.] | Summary: In this paper, the authors argue that for multimodal LLMs, the previous input formulation is too rigid: one either needs to specify the region the model should focus on, or provide a verbose object description to refer to the region. A more natural referring-expression strategy can help models better understand where they should pay attention and the intent of the input questions. The proposed approach, called Localized Symbolic Knowledge Distillation (LSKD), aims to provide more natural referring expressions. It involves providing literal descriptions of images to a large language model and allowing the model to connect these descriptions with a holistic understanding of the scene. In addition, localized references to specific regions within the image are provided to encourage the model to generate commonsense inferences about those regions. This method effectively distills the large language model's capacity for global and local scene understanding. They demonstrate the effectiveness of this approach by achieving state-of-the-art zero-shot performance on localized visual reasoning benchmarks and conducting human evaluations.
Strengths: Novel research question to distill visual commonsense: Distilling visual commonsense is an interesting question that previous works under-explore. The distilled visual commonsense could provide very useful information for understanding complex situations covered in datasets like VCR, which will make the reasoning more grounded and reliable. Also, this is the first work about automatically acquiring visual commonsense to my best knowledge.
Comprehensive experimental results on downstream tasks: I'm very glad to see the distilled knowledge could further help base models on downstream tasks like VCR, Sherlock and VisualCOMET. Also, the small models can even outperform ChatGPT with the help of LSKD when describing the useful information in localized reasoning process. Besides, authors design experiments to examine the effect of steps in LSKD generation methods, which further make the argument stronger.
Weaknesses: Lack of comparison with Sherlock and VisualCOMET: Although the current experiments support that LSKD is helpful for downstream tasks, I'm still a bit confused about whether LSKD is better than the Sherlock and VisualCOMET knowledge bases. These two datasets are both large-scale and of great quality. It's better to compare with competitive resources, instead of just showing that considering external knowledge will help.
Comparing VL models and ChatGPT is unfair: Although ChatGPT is prompted with verbalizers, it can still only accept text information, which may miss much of the visual information in the images. Especially for datasets like VCR, answering questions may require very careful observation of detailed places, which is what VL models like BLIP-2 are better at.
Presentation in the introduction section is confusing: I don't quite understand the relation between the first and second half of the introduction section. In the first half, I think the authors are trying to say a better referring expression is important. But in the second half (which is the core part of the paper), they just focus on how to distill visual commonsense knowledge. They are a bit connected because both parts mention localized reference, but it has a very loose connection with the discussion about the quality of referring expressions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there any way to apply this method to the datasets with no bounding box annotations? That will further improve the scalability of this method.
2. Is there any study to explore the correlation between the ratio of erroneous training instances and the downstream task performance?
3. Could you also apply LSKD to models like CLIP? It seems that in Table 2, they just use BLIP-2 ViT-G to show the effectiveness of LSKD.
4. More qualitative error analysis is welcome in the modified version.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations about introducing errors in the intermediate steps are reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for positive feedback that we explore an interesting problem of localized commonsense reasoning that **previous works under-explore**, and acknowledge our novelty (**the first work about automatically acquiring visual commonsense**). The reviewer is **glad to see the distilled visual commonsense knowledge could further help base models** to make them **more grounded and reliable.** We address the concerns raised in the weaknesses and questions.
**Comparison with Sherlock and VisualCOMET**
For Sherlock and VisualCOMET, we fully acknowledge the large-scale and high-quality nature of these knowledge bases, and we agree with the reviewer that comparison with them is an imperative step. As discussed in the main rebuttal, Table 2 compares models trained on our corpus (Row 3) with models trained on these human-annotated corpora (Rows 5 and 6). We make the following observations:
- Ours vs Sherlock (Row 3 and 5): Sherlock yields better results in Sherlock and VCR, and falls short in others.
- Ours vs VisualCOMET (Row 3 and 6): VisualCOMET yields better results in VisualCOMET, and falls short in others.
Not surprisingly, evaluating on the same task as the existing training corpus leads to higher performance than LSKD (e.g. training on the VisualCOMET corpus yields better results on VisualCOMET data than LSKD). We observe that LSKD provides benefits in terms of better generalization across diverse visual reasoning tasks. In fact, training with a human-authored corpus leads to a considerable drop relative to the zero-shot models on Visual7W tasks. Such a drop from the zero-shot model, however, is not observed when LSKD is applied, due to the diverse nature of its knowledge corpus.
We would also like to point out that another benefit of the LSKD pipeline over the human-authored corpora is annotation cost and scalability. The cost of annotating a human-written knowledge corpus is higher than that of making API calls, and it takes significantly more time and effort to scale the dataset collection.
**Comparing VL models and ChatGPT is Unfair**
We compare VL models and ChatGPT as a way to make a comparison between student and teacher models in the knowledge distillation pipeline. Despite the modality gap, we still consider ChatGPT with verbalizers a strong baseline, since the descriptors provide place information, specific descriptions of regions, and dense QAs that convey the visual content needed to answer the question. Prior works show that language models can display strong zero-shot visual reasoning [Zeng et al., Zhu et al., Wu et al., Lu et al.] by utilizing text descriptions from vision tools. In our response to Reviewer MRhi, we show that small and narrow objects are covered by the region proposal pipeline, suggesting that the descriptors can cover specific details of the image. However, we do agree that there remains an information bottleneck in the ChatGPT evaluation when understanding the visual content, and we will address these concerns in the limitation section.
**Introduction doesn't flow as well**: Thank you for the comment! We will fix the introduction to fit the flow between the need for localized visual commonsense models and distilling visual commonsense knowledge.
**Apply to datasets with no bounding box annotations**: Yes! Please refer to the main rebuttal and updated results in Table 1. We see consistent improvements on datasets with no bounding box annotations (AOKVQA, SNLI-VE, and Visual7W).
**Any correlation between the ratio of erroneous training instances and downstream performance?**: In Figure 3 of the main paper, we find that there is a positive correlation between the filtering threshold value, as data quality control, and the downstream task performance. In other words, making the critic model more critical helps filter quality knowledge statements from large quantities (L233-235). Thus, even when extracting generations with the same teacher model, we observe that decreasing the error rate of training instances via critic filtering is crucial to see gains in downstream task performance.
**LSKD on other models**: We thank the reviewer for the suggestion and agree that LSKD can be applied to models like CLIP. We ran experiments mainly on BLIP-2 ViT-G due to its superior zero-shot performance compared to CLIP. Due to limited time and computation resources during the rebuttal period, we have not included these results but will do so in later discussions.
**More qualitative error analysis**: In addition to Figure 4 of the main paper, we will add more qualitative analysis in the main paper.
- [1]: Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language [Zeng et al.]
- [2]: ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched Visual Descriptions [Zhu et al.]
- [3]: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Wu et al.]
- [4]: Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models [Lu et al.]
---
Rebuttal Comment 1.1:
Comment: Thanks for answering the questions! I'm glad to see most of them are addressed! And I hope that they can be discussed in the following version. I will raise my score a bit. | Summary: The paper introduces Localized Visual Commonsense models that enhance vision-language (VL) models by allowing users to specify specific regions within images. The authors train their model by collecting commonsense knowledge from a large language model (LLM) using global literal image descriptions and automatically generated local literal region descriptions from VL models. They also use a critic model to select high-quality examples. The results show that training on this localized commonsense corpus improves the performance of VL models over passing generated referring expressions to an LLM.
Strengths: + The paper is well-motivated. The introduction of Localized Visual Commonsense models represents a novel solution to build a more general multi-modal model.
+ The constructed data is of large scale, which contains 1M samples over 250K images.
+ The extensive experiments show that the proposed dataset can help models perform better zero-shot inference.
Weaknesses: - How does this Dataset reflect knowledge? Since ChatGPT's input is based on factual descriptions of images, how can we ensure the questions generated by ChatGPT are related to external knowledge? On the other hand, how do the authors define what are knowledge-related questions?
- For some knowledge-related questions, it seems that ChatGPT needs to guess the state of the subject in the image. Could this process result in hallucination issues? I notice that there's a critic filtering in the method, which rates the results by training on annotated data. What types of patterns is this model expected to learn, and under what circumstances does it consider the generated data to be inadequate? From Section 2.3, it seems to be judging visual correctness, but how can the correctness of the generated knowledge be determined?
For example, in Fig1, the model assumes that [0] and [2] are teaching [1]. This is a guess made by ChatGPT, as no similar descriptions are generated by the visual models, indicating that they cannot capture this information. So how would the critic model determine if this kind of guess is based on reality or is just a hallucination?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My major concern with this work is how to guarantee the dataset quality since there are a lot of automatic generation methods in dataset construction. So I would like to hear more illustrations about that.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and valid concerns regarding how to guarantee the dataset quality, which we address below.
**1. How does this Dataset reflect knowledge? How do the authors define what are knowledge-related questions?**
In our work, we investigate extracting commonsense knowledge from LLMs that involves reasoning about everyday objects and concepts to make judgments and predictions about new situations. Our contribution is to extend this paradigm to handle and extract rich knowledge for the multimodal data. We do this by providing factual descriptions of objects in the image, and utilizing the LLM’s commonsense reasoning capabilities to extract complex relations and interactions among objects in the image. We hypothesize that ChatGPT can perform such tasks with localized information based on the findings from previous work:
- Wu et al. demonstrate that ChatGPT can solve challenging zero-shot multimodal tasks via sophisticated use of prompting and diverse vision experts, including image captioning and object detection models. These tasks include spatial/coordinate understanding (can provide valid localized information) and open-world concept understanding in the multimodal domain.
- Bang et al. evaluate ChatGPT on commonsense reasoning and find that “ChatGPT does quite well not only in terms of answer accuracy but also in generating reasonable reasoning procedures to support its answer”.
These findings suggest that **one can indeed prompt LLMs to create commonsense, inference statements with reasonable reasoning procedures in the multimodal domain, if the appropriate descriptions of the images are provided**. Similarly, our dataset collection procedure follows this intuition: 1) The global, local descriptors and the QAs by themselves give a relatively surface-level understanding of the objects and concepts. 2) The LLMs can utilize the information to probe complex relations and interactions among the objects in the form of Question, Answer, and Rationale explaining the reasoning procedures (QAR).
This form of knowledge is in fact illustrated in Figure 7 of the supplemental, where we observe:
- Localized descriptions of a man and woman standing next to each other and holding a surfboard.
- A description of a girl running in the sand towards the man, who is holding a surfboard.
- Then, ChatGPT uses commonsense reasoning to infer that the three have close, family-like relationships, and predicts that the father might be interested in teaching the girl how to surf.
To empirically validate whether the dataset contains relevant knowledge-related questions, we trained the state-of-the-art zero-shot model on the localized corpus and evaluated it on downstream tasks known to require localized visual commonsense knowledge to achieve good performance (VCR, Sherlock, VisualCOMET). The results in Table 2 of the main paper and Table 1 of the rebuttal show that the model with LSKD provides improvements on various localized and reasoning-based downstream tasks, suggesting that **ChatGPT is indeed capable of generating relevant knowledge for visual commonsense understanding**.
**2. How to guarantee the dataset quality since there are a lot of automatic generation methods in dataset construction? I would like to hear more illustrations about that.**
As pointed out by the reviewer, the **dataset quality is controlled by the supervised critic model trained to filter out irrelevant instances, including hallucination**. The main rebuttal reports empirical benefits of filtering and includes human evaluation of filtered data, which we summarize here:
We first argue that increases in dataset quality should transfer to downstream performance. In Figure 3, we show the empirical benefits of the critic-filtering mechanism on the downstream tasks. We see that performance strictly increases when the critic model is applied compared to when it is not (a threshold of 0 on the left side of the graph). Applying a stricter threshold on the critic filter leads to a near-linear increase in performance on downstream tasks. This suggests that the dataset quality can be controlled with the filtering mechanism using the supervised critic model.
In Section 3 of the main rebuttal, we run human evaluations of instances with and without filtering, based on the visual correctness of the QA pairs and the justification of the rationale. We see that the data filtered by the critic model achieves higher ratings on both criteria by a large margin.
**3. What types of patterns is the critic model expected to learn, and under what circumstances does it consider the generated data to be inadequate?**
The critic follows the patterns of human annotations in evaluating the generated QARs from ChatGPT. The criteria are based on the visual correctness of the QAs and appropriate justification of the rationales, as described in $\S$2.3. Unlike ChatGPT, the critic model is a vision-language model with access to the image input, allowing it to eliminate cases of hallucinated objects and actions. By training on human labels, we hypothesize that the critic model is equipped with adequate visual discriminability to perform the appropriate filtering.
We show an illustration of the critic model's predictions in Figure 1 of the main rebuttal. The critic model uses its visual understanding skills to give high scores to instances with correct visual information (first example: the gold hook in region [4] is localized correctly with an appropriate description of its significance; second example: the tag in region [5] correctly indicates the brand of the stuffed animal). We see that instances with visually incorrect and hallucinated details (a bear holding another toy, the scowling expression of brown teddy bears) are given low scores.
[1]: A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity [Bang et al.]
[2]: MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action [Wu et al.] | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive comments acknowledging that our work supporting region-level reasoning addresses "important and interesting tasks for many applications" (reviewer Cm2R) and "represents a novel solution to a more general multi-modal model" (reviewer Ta54). Reviewer UzVU notes that "this is the first work about automatically acquiring visual commonsense", reviewer MRhi finds the work "well-designed", and reviewer uG1j notes the "workload is impressive".
We now address some common concerns shared by the reviewers.
**1. Generalization of localized reasoning corpus to other visual reasoning tasks**
In response to Reviewers Cm2R and UzVU, we measure the effectiveness of the localized knowledge corpus on other vision-language tasks, including datasets without bounding box annotations. We specifically focus on tasks that require high-level reasoning and would benefit from a visual commonsense corpus:
- AOKVQA [A]: requires outside world knowledge to answer questions; evaluated with multiple choice.
- SNLI-VE [B]: inference-based visual entailment to test fine-grained image understanding.
- Visual7W [C]: visual QA with a focus on visual grounding; evaluations carried out on TellingQA.
Table 1 in the rebuttal displays the results of zero-shot and fine-tuned models on the considered visual reasoning tasks. We see that applying LSKD to BLIP-2, the best zero-shot model, provides empirical improvements on every downstream task. This suggests that our corpus can be extended to improve various reasoning tasks beyond localized reasoning, which further strengthens our contribution of corpus creation.
[A]: A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge [Marino et al.]
[B]: Visual Entailment: A Novel Task for Fine-Grained Image Understanding [Xie et al.]
[C]: Visual7W: Grounded Question Answering in Images [Zhu et al.]
**2. Ablation studies of localized corpus and comparison with existing, related corpus**
Several reviewers have requested to quantify the effect of filtering and dataset size, and make comparison of our localized commonsense knowledge (LCK) corpus with related resources.
***Effect of Filtering on Downstream Performance***
In Figure 3 of the main paper, we presented the effect of data quality as controlled by the filtering threshold set by the critic model. We keep the training dataset at 300K and sample knowledge statements scored higher than the threshold by the critic. We found that models trained without filtering (indicated by a threshold value of 0) always performed worse on downstream tasks than those trained on filtered datasets at any threshold. We also see a near-linear trend with respect to the threshold value, in which **increasing the dataset quality based on the supervised critic score leads to an increase in downstream performance**.
***Comparison with existing corpus***
The LLaVA dataset consists of 150K instructions on question-answers, summaries, and conversations generated by GPT-4 with no supervised critic. In **Table 2 of the rebuttal**, we report downstream results of BLIP-2 ViT-G trained with our localization corpus and with LLaVA. To keep the comparison with LLaVA fair, we include results with the same number of training samples (150K). We notice that training on the LLaVA corpus yields far worse results on the localized reasoning benchmarks (VCR, Sherlock, VisualCOMET) than our corpus subset. This is expected due to the lack of localized reasoning support in the LLaVA data. On the other hand, LLaVA provides benefits on other visual reasoning tasks, suggesting that the choice of corpus should be tailored to the specific task to optimize performance. We also notice that critic filtering is again beneficial across all tasks in the smaller training subset.
Next, we compare with existing human-annotated visual commonsense knowledge corpora (reviewer MRhi). In Table 2 of the rebuttal, we observe that training on a human-annotated corpus vastly improves performance on its related task (e.g., training on VisualCOMET boosts VisualCOMET performance from 39.0 to 50.2). However, we notice that this can lead to worse results on other visual reasoning tasks than the zero-shot counterpart (dropping from 77.1 to 70.1/70.8 on Visual7W). This suggests that human-designed datasets may generalize less well across tasks due to their lack of diversity, while such a trend is not observed with LSKD.
**3. Ensuring Dataset Quality**
Figure 1 in the rebuttal shows qualitative examples to understand the patterns of the critic model in distinguishing good and bad examples. We see that the model mostly scores instances with incorrect visual details (highlighted in red) lower than the correct instances. The third instance does not have glaring visual errors but is scored lower due to the statement "largest and most prominent regions", which is ambiguous and close to false. The critic model displays good calibration in ordering the instances, such as giving the lowest score to the instance with the most visual errors.
**Human Evaluation of Filtered Dataset**
We additionally run a human evaluation to measure the acceptability of data with and without filtering. We collect 500 instances in the same way the critic model labels are collected in Sec 2.3: 1) is the QA visually correct? and 2) does the rationale justify the answer? Likert scores in [1-3] are calculated for each criterion (the higher, the better).
| | With Filtering (threshold=0.8) | Without filtering |
|----------------|----------------------|-------------------|
| QA Correctness | 2.41 $\pm$ 0.74 | 1.97 $\pm$ 0.84 |
| Rationale Justification | 2.34 $\pm$ 0.82 | 1.87 $\pm$ 0.87 |
The results further suggest that applying filtering ensures higher dataset quality not only measured by downstream task performances, but also confirmed by human judgment.
Pdf: /pdf/46f4aff867e2b5bf7e83dca3315a88a2c3b1b8a1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work presents a localized visual common sense model that can support region-level inputs for knowledge reasoning. Specifically, the proposed method prompts large language models to collect commonsense knowledge from global image descriptions and local descriptions. A critic classifier trained on a small amount of annotated data is used to select high-quality examples. The distilled localized common-sense corpus can be further used for fine-tuning vision-language models. The experimental results demonstrate improvement over baselines in the zero-shot setting.
Strengths: a). Strong motivation; supporting region-level references and reasoning is important for many applications.
b). Technically sound; provides a detailed and clear explanation of each component; experiments and ablation studies are comprehensive.
c). Thorough evaluation; includes human evaluations comparing the zero-shot performance of localized reasoning with generative models.
Weaknesses: a). A new way of distilling vision-language knowledge, but no framework-level innovation compared with text-only symbolic distillation.
b). The effectiveness of the supervised critic: the validity of instance annotations is only 45%, and the trained classifier only achieves an F1 score of 0.66. It is unclear whether the filtering process can be effective in selecting high-quality examples.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a). Can the collected 1M instances of localized commonsense knowledge be used for other relevant vision-language tasks? Will the knowledge further benefit those tasks?
b). In the section on the “effect of filtering”, the comparison with the LLaVA dataset seems unfair since the number of LLaVA samples is fixed. What is the number of training data for LLaVA?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for agreeing that support for **region-level references and reasoning is important for many applications**, and our experiments and ablations are comprehensive that include human evaluations of generative models. We address the comments raised by the reviewer below.
**1. A new way of distilling vision-language knowledge but no framework-level innovation compared with text only symbolic distillation.**
Text-only symbolic knowledge distillation and its variants assume that the teacher LLM (e.g., GPT-3, ChatGPT) and the student model have access to the same text input modality during the knowledge distillation process. In comparison, our framework explores the setting where the teacher and student have access to different input modalities: the teacher LLM does not support raw images as input, while the student multimodal model is not subject to the same constraint. To deliver the raw image content to the teacher model, we use a range of automatic image descriptors to verbalize images as text. Specifically, we introduce three types of descriptors (Global, Local, QAs), all of which have proven significant, as per the response to Reviewer MRhi. We assert that this unique aspect of our framework is an innovative contribution that will enable the multimodal community to comprehensively understand effective strategies for distilling visual-language knowledge from LLMs using automated, partial image descriptions.
**2. The effectiveness of supervised critic: as the validity of instance annotation is only 45% and the trained classifier can only achieve 0.66 on F1 score**.
In Lines 170-178, we explore tuning thresholds for the critic model and keeping only the instances whose predictions are higher than the threshold value. Figure 2 shows that the model precision increases as the threshold value increases. Specifically, the acceptability jumps from 45% to 70% when the threshold is set to 0.8. This shows that applying the critic model alone can increase the dataset quality as measured by humans.
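To make the thresholding step above concrete, here is a minimal sketch of critic-based filtering. The function name `filter_by_critic` and the toy scorer are illustrative assumptions; the actual critic in the paper is a trained vision-language model, not a lambda.

```python
# Hypothetical sketch of critic-threshold filtering (not the paper's code).
# `critic_score` stands in for the supervised critic's acceptability score
# for a generated QAR instance.

def filter_by_critic(instances, critic_score, threshold=0.8):
    """Keep only instances the critic scores above `threshold`.

    Raising `threshold` trades dataset size for precision, mirroring the
    reported acceptability jump from 45% to 70% at threshold 0.8.
    """
    return [x for x in instances if critic_score(x) > threshold]

# Toy usage with a stand-in scorer: higher score means "more acceptable".
toy_instances = [{"qar": "a", "score": 0.95}, {"qar": "b", "score": 0.4}]
kept = filter_by_critic(toy_instances, critic_score=lambda x: x["score"])
# With threshold=0.8, only the first instance survives.
```

The design choice here is that filtering strictness is a single tunable knob, which is what allows the near-linear quality/performance trade-off described in Figure 3.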
In Figure 3, we show the empirical benefits of the critic-filtering mechanism on the downstream tasks. We see that performance strictly increases when the critic model is applied compared to when it is not (a threshold of 0 on the left side of the graph).
Lastly, we report a human evaluation of the filtered and unfiltered datasets in the main rebuttal. We show that humans prefer the dataset with critic filtering, based on visual correctness and justification, by a large margin.
**3. Can the collected 1M instances of localized commonsense knowledge be used for other relevant vision-language tasks? Will the knowledge further benefit those tasks?**
Table 1 in the rebuttal shows the benefits of localized commonsense knowledge in other visual-reasoning tasks such as AOKVQA, SNLI-VE, and Visual7W. We see clear improvement from applying BLIP-2 with LSKD over BLIP-2 in their zero-shot performance. We will include these downstream task results in the final version.
**4. In the Section of “effect of filtering”, the comparison with LLaVa datasets seems unfair since the number of LLaVa is fixed. What is the number of training data for LLaVa?**
The LLaVA dataset consists of 150K instructions on question-answers generated by GPT-4, while ours is generated by ChatGPT. In Figure 3 of the main paper, we sampled the training data after filtering so that the number of instances is always fixed at 300K (L236) for comparison with LLaVA.
To make a fair comparison with LLaVA, we experiment with 150K training samples of our corpus with and without filtering. In Table 2 of the rebuttal, we observe that **LSKD + filtering wins over training on the LLaVA corpus on localized reasoning benchmarks (VCR, Sherlock, and VisualCOMET), despite using the weaker teacher model**. The results suggest that our creation of a new localization corpus is crucial for supporting the model with grounded reasoning. On the other hand, LLaVA wins on QA-based reasoning tasks, as they are aligned with the nature of its training corpus. We thus observe that the appropriate application of these corpora can be task-dependent, and adopting a selective approach towards choosing the right corpus may result in significantly enhanced performance across various benchmarks.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering the questions, some of which are now clear. But for Q2, I understand that raising the threshold will definitely increase the quality. However, the acceptability is still relatively low (70%), and I am asking whether such an acceptability score is sufficient for high-quality data filtering.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for reading the rebuttal and providing a further clarifying question.
Regarding Q2, we consider that 70% acceptability still represents a substantial enrichment of our dataset with high-quality examples. We believe the best way to verify this is **by showing a clear empirical gain** from training on the dataset with and without critic filtering, as discussed in the rebuttal to Cm2R.
In **Table 2 of the main rebuttal**, we compare model performances trained on our filtered dataset versus those trained on well-established, high-quality, human-annotated visual knowledge corpora, such as Sherlock and VisualCOMET. We demonstrate our model's ability to perform on par, if not better, on localized reasoning tasks, and our dataset also provides better generalization to other visual reasoning tasks (AOKVQA, SNLI-VE, V7W). These empirical outcomes support our assertion that a 70% acceptability score can indeed produce a high-quality dataset.
We acknowledge that there's room for improvement in the data filtering process to increase this acceptability score further. Future work can focus on enhancing the visual recognition capacity of the critic model to achieve this. Your noteworthy feedback will guide our future efforts in this direction. | null | null | null | null | null | null |
Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training | Accept (poster) | Summary: The paper proposes a pruning-based defense method against backdoor attacks on FL. Specifically, the malicious clients are limited to updating model parameters within an isolated subspace, which reduces the attack surface (or "poison-coupling") for malicious clients as well as the computation/communication cost in FL. Empirical results of a wide range of metrics compared with SOTA baselines have shown the effectiveness of the proposed method.
Strengths: ++ It is interesting to detect and defend against backdoor attacks by exploiting model pruning techniques.
++ The observation of "poison-coupling" in FL is inspiring. The paper is well-written, and I enjoy reading the paper.
++ The evaluation involves multiple SOTA baselines of attacking and defense methods.
Weaknesses:
W1: Some design assumptions and intuition could be further clarified, see C2
W2: Evaluation could still be improved, see C3
W3: Missing related references
[1] Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Gong. FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients. SIGKDD 2022
[2] Hanxi Guo, Hao Wang, Tao Song, Yang Hua, Zhangcheng Lv, Xiulang Jin, Zhengui Xue, Ruhui Ma, and Haibing Guan, Siren: Byzantine-robust Federated Learning via Proactive Alarming, SoCC 2021
[3] Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, and Kai Bu. Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. CVPR 2022
W4: minor writing issues
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: C1: The paper might violate the double-blind policy---from the Dropbox link provided on Page 4, the files shared show the owner is Huang Tiansheng, https://huangtiansheng.github.io.
C2: Please further clarify the following questions:
* How to defend against malicious clients faking the masks and launching adaptive attacks? Yes, the evaluation has experiments on adaptive attacks, but it would be helpful if the design part discusses the design intuition for adaptive attacks. Attacks like [3] might still be injected into the limited subspace quietly, even with random and heterogeneous mask initialization.
* The intuition for "consensus fusion (CF)"---"Given that those malicious parameters served to recognize backdoor triggers will be deemed unimportant for benign clients, they should not be contained in the subspace of benign clients, which accounts for the majority" should be further discussed and verified. For non-IID cases, such a consensus might not hold.
* Adding more theoretical analysis would be helpful to explain the intuition, though it might not be easy to derive a thorough theory.
C3: Please consider improving the evaluation from the following perspectives:
* More non-IID scenarios could be explored.
* Involving some NLP datasets will be helpful
Writing issues:
* Line 109, "alloww" --> "allows"
* Figure 2 left could be visualized better with the three metrics
* Figure 3 font size is too small
* "For FL, defenses can be classified into two main genres..." The two categories seem not aligned with the classifications in the evaluation (Section 6). For example, which category is RLR belong to?
* "The malicious/dummy parameters should have less chance to appear in benign clients’ subspaces." --- you mean parameters at specific positions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: See questions.
Flag For Ethics Review: ['Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for very detailed comments.
# C1 (Problematic dropbox link)
We have stopped sharing the problematic Dropbox link. We apologize for this unintentional error.
The original submission used both the anonymous GitHub URL (abstract) and the Dropbox URL (page 4). Our intention in sharing the Dropbox URL was to open-source the checkpoints so that the difference between centralized and federated backdoor models could be displayed.
The owner ID is not displayed when a Dropbox account is logged in, which left us unaware that Dropbox would expose the owner ID to others. We should have asked other co-authors to double-check the Dropbox URL.
We sincerely hope that the PC chairs, the reviewers, and the area chair will consider that including this Dropbox URL was an unintentional error when making their decision. Thank you!
# C2+W1 (Design intuition)
**Q1 (mask faking)** When clients launch an adaptive attack by faking the masks, Lockdown remains effective, because its core idea rests on the security principle that malicious parameters (or coordinates) should not be accepted by other benign clients. More specifically, poisoned coordinates, even if present in the global model, will be pruned out (masked to 0) during FL training once Lockdown's consensus fusion is performed. As a result, poisoned parameters, even when injected by an adaptive adversary into arbitrary locations of the model, may not evade the consensus fusion operation enforced by Lockdown.
The intuition behind the key ideas of Lockdown:
* **(Intuition of subspace searching)** Each client only includes the parameters it deems important in its subspace, following the dynamic subspace searching procedure.
* **(How consensus fusion works)** By introducing consensus fusion, parameters that appear less frequently among the subspaces are pruned out. This also means that parameters considered unimportant by most clients are pruned out. Given that a majority of the clients is assumed to be benign, the poisoned parameters will not be considered important by the benign clients and will therefore be pruned out by consensus fusion.
* **(Role of subspace initialization)** Both heterogeneous and homogeneous subspace (or mask) initializations are empirically observed to lower ASR to different degrees, but we do observe that the choice of initialization has an impact on the model's benign accuracy.
**(When Lockdown falls)** Our adaptive attack Omniscience can succeed in evading Lockdown by obtaining full or substantial knowledge of the consensus subspace and injecting its malicious parameters within that subspace. However, this attack is very hard to conduct without sufficient knowledge of the other clients' subspaces, or without assuming that the FL server is compromised and colludes with the malicious poisoning clients.
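The consensus fusion mechanism described above can be sketched as a majority vote over client masks. This is an illustrative simplification under stated assumptions: the function name `consensus_fusion`, the flat parameter vectors, and the simple-majority quorum are ours, not Lockdown's exact implementation.

```python
# Hypothetical sketch of majority-vote consensus fusion over client masks.
# Each client trains only the parameters in its own subspace (a binary mask);
# fusion prunes any coordinate that too few clients include.
import numpy as np

def consensus_fusion(global_params, client_masks, quorum=None):
    """Zero out parameters included by fewer than `quorum` clients."""
    masks = np.stack(client_masks)        # shape: (num_clients, num_params)
    if quorum is None:
        quorum = masks.shape[0] // 2 + 1  # assumed: simple majority
    votes = masks.sum(axis=0)             # how many clients kept each param
    consensus = (votes >= quorum).astype(global_params.dtype)
    return global_params * consensus      # prune non-consensus coordinates

# Toy example: 3 benign clients agree on params 0-1; one attacker's mask
# covers only its poisoned coordinate (param 3).
params = np.array([1.0, 2.0, 0.5, 9.0])
masks = [
    np.array([1, 1, 0, 0]),  # benign
    np.array([1, 1, 1, 0]),  # benign
    np.array([1, 1, 0, 0]),  # benign
    np.array([0, 0, 0, 1]),  # malicious
]
fused = consensus_fusion(params, masks)
# The poisoned coordinate is voted out: fused == [1.0, 2.0, 0.0, 0.0]
```

This illustrates the security argument in the rebuttal: as long as benign clients form the majority, a coordinate used only by attackers cannot reach the quorum, regardless of where in the model it is injected.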
**Q2 (consensus fusion in Non-IID setting)**
In the non-IID setting, benign clients develop more heterogeneous subspaces. Hence, when we only retain the consensus of the subspaces, the benign accuracy of the model suffers a more drastic decline. *However, this does not cause a problem in filtering out the poisoned parameters in the consensus fusion stage*, because there is no backdoor data in the benign clients' local datasets, so they will not use the poisoned parameters (or include them in their subspaces), regardless of the degree of non-IIDness. We show extra experimental data in our response to C3, justifying our claim that Lockdown works well in reducing ASR even in an extreme non-IID case.
**Q3 (theoretical analysis)**
One direction of our future research is the theoretical analysis of Lockdown's robustness guarantee, including the robustness bounds for the subspace size and the subspace consensus quorum.
# C3 + W2 (Additional Evaluation)
**(More Non-IID experiments)** Following the reviewer's suggestions, and within the limited time, we ran an additional experiment to demonstrate the efficacy of our Lockdown defense under different Dirichlet parameters. Note that $\alpha=0.2$ simulates a highly non-IID setting (the benign accuracy of FedAvg drops by a large margin in this case).
The table below shows **benign accuracy**.
| $\alpha$ | 0.2 | 0.5 | 0.8 | iid (infinite) |
|----------|-------|------|------|-----------------|
| FedAvg | 84.3 | 88.8 | 89.2 | 90.8 |
| RLR | 64.5 | 72.9 | 82.9 | 85.5 |
| Krum | 26.5 | 43.4 | 43.6 | 75.8 |
| Lockdown | 80.6 | 86.1 | 88 | 90.1 |
The table below shows **ASR**.
| $\alpha$ | 0.2 | 0.5 | 0.8 | iid (infinite) |
|----------|-------|------|------|-----------------|
| FedAvg | 74.8 | 86.4 | 74.7 | 66.1 |
| RLR | 82.5 | 29.5 | 24.5 | 4.3 |
| Krum | 2.8 | 11.1 | 6.6 | 4.3 |
| Lockdown | 7.7 | 3.4 | 3.9 | 7.1 |
As shown, Lockdown reduces ASR to <10\% even when $\alpha=0.2$, which demonstrates its defense efficacy in a highly Non-IID case.
**(NLP dataset)**
We are working on the SST-2 task with a pretrained BERT model.
# W3 (Missing related references)
We thank the reviewer for sharing the 3 references. We have reviewed them and cited them properly.
# Writing issues
Thanks for pointing out the writing issues. We have corrected the typo as well as the font size issue. For Figure 2, we now use different marker sizes to represent the pruning ratio. Our claim that "defenses can be classified into two main genres" was indeed an over-claim. We now discuss another genre of defenses known as "robust aggregation", which is the one that Krum and RLR fall into. For the claim "The malicious/dummy parameters should have less chance to appear in benign clients' subspaces.", the malicious parameters indeed mean parameters at specific positions, i.e., those positions that are corrupted by the attacker.
---
Rebuttal Comment 1.1:
Title: Discussion welcomed!
Comment: Dear reviewer t853,
Thanks for your helpful comments, and also for the valuable suggestions on experiment directions! We have run an experiment across different non-IID extents. Do you find that the results support our claims, and does our explanation coincide with the results? We look forward to your feedback!
---
Rebuttal Comment 1.2:
Comment: Thanks for the further clarification and experimental results!
Some further questions after reading your response and other reviewers' comments:
* What is the fundamental cause that leads to poison-coupling effects in FL instead of centralized training?
* What are the backdoor attack method, neural network model, and dataset in Fig. 2? How could we justify the "poison-coupling effects" generally exist?
* The methodology applied in Fig. 2 to monitor "poison-coupling effects" is still a "black box" approach. Could we treat the FL process as a white box and identify parameters that are important for both benign patterns and backdoor triggers?
* I am still not convinced by the intuition "Given that those malicious parameters served to recognize backdoor triggers will be deemed unimportant for benign clients, they should not be contained in the subspace of benign clients, which accounts for the majority"
* If there is coupling, malicious parameters for triggers and benign parameters for normal patterns may share a subset of positions in the neural network. Then, malicious parameters could be in the subspace of benign clients.
---
Reply to Comment 1.2.1:
Title: Further clarification by authors
Comment: We thank the reviewer for the insightful questions. Below we try to address them.
# Clarification of poison-coupling effect
Our initial paper does not articulate the poison-coupling effect well, which caused confusion among the reviewers. What we actually want to convey is that the poisoned parameters are statistically coupled with the benign ones in federated learning, making it hard to identify and remove them. We are not trying to claim that the poisoned parameters for triggers and the benign parameters for normal patterns share the same subset of positions. Existing pruning defenses, e.g., CLP, rely on the statistical difference between benign and poisoned parameters to filter out the poisoned ones, but they cannot work in a federated learning context due to the statistical coupling effect we describe. We will make this point more explicit in the revised version.
# Fundamental cause of poison coupling effect
The cause of the poison-coupling effect is still unclear at the current stage. We report this phenomenon hoping that it raises awareness in the FL community (that is why we want to open-source the checkpoints and the pruning-evaluation code in the Dropbox link). What we can say is that, in centralized training, the statistical difference between poisoned and benign parameters arises because the poisoned parameters in a poisoned model are usually larger in magnitude than the benign ones, which explains why the Lipschitzness constants of poisoned parameters (or channels) are larger. This is understandable because the poisoned parameters need to enlarge their output when activated by the trigger pattern, in order to overwhelm the outputs of the benign parameters, which are activated by the benign pattern. However, why a model poisoned through federated learning does not exhibit this statistical feature remains open; unfortunately, we do not have an explicit answer yet.
# Experimental setting
The experiment is done on CIFAR10 with a ResNet9 model under the BadNet attack, which is the default attack setting in our experiment section. We omitted this information in our initial submission due to space limitations, but we will add it back in the final version. Thanks for the reminder!
# Justification of the existence of poison-coupling
Our claim is that the benign and poisoned parameters are coupled in a statistical way. The L2-norm distribution and the channel-Lipschitzness distribution shown in the middle/right of Figure 2 justify this claim. We also open-source the checkpoints (though they are temporarily unavailable due to the Dropbox issue), where everyone can check that the federated model indeed exhibits poisoned behaviors, but does not show substantial statistical differences between parameters.
# Limitation of "black box" approach
Yes, a more desirable way to convey the message would be to obtain the ground-truth poisoned parameters and compare their L2-norm/channel-Lipschitzness statistics against those of the benign parameters. However, identifying the ground truth is not easy due to the sheer size of modern neural networks. We hope the reviewer can understand.
# Intuition of the design
To address the reviewer's concern, we must answer the question: will malicious parameters for triggers and benign parameters for normal patterns share a subset of positions? Our answer is no. Under our Lockdown method, the malicious client's final subspace actually has a large overlap with those of the benign clients, and the overlapping area is preserved in the consensus subspace yielded by consensus fusion. However, the final model, after projection onto the consensus subspace, no longer exhibits backdoor behavior. This indicates that the shared parameters for benign patterns in the consensus are all free from poisoning, and only a unique subset of parameters outside the consensus is dedicated to trigger recognition.
The connection between our Lockdown design and the poison-coupling effect is that, with isolated subspace training, the poisoned parameters are no longer coupled with the benign parameters when viewed across the subspaces of all clients (i.e., they become easier to identify). We will make this clear in our design motivation.
We hope this addresses the reviewer's concern, and we are happy to discuss further questions. Thank you for pointing out the places where we failed to articulate clearly, which easily led to confusion. | Summary: The paper proposes a backdoor defense. The method is based on robustly estimating a sparse parameter subspace which is used to restrict updates. Each client will vote for a set of parameters it considers "important" to update, and non-important neurons will be frozen for that iteration. The paper shows that this defense works well empirically and provides an ablation study showing that all of their algorithm's ingredients are necessary.
Strengths: The idea of using sparse training in the context of backdoor defense is interesting. The experiments demonstrate that the method is very effective compared with baseline methods.
Its ablation study also suggests the necessity of individual design choices.
It also evaluates adaptive attacks.
Weaknesses: The paper does not discuss the defense from a theoretical perspective. Thus, it is unclear what level of security guarantee can be achieved. Namely, the defense may only be effective given the current state of research and our knowledge, and could be vulnerable to future attacks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What security can be achieved by the method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper should discuss its limitation on the theoretical side.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very encouraging review and helpful comments. We have provided some security analysis in the supplementary material (see Appendix A.4). A formal verification of the security guarantee is an important research result in itself and is on our future research agenda. In particular, it is interesting and important to develop theoretical robustness bounds for both the subspace size and the consensus quorum.
---
Rebuttal Comment 1.1:
Title: Discussion welcomed!
Comment: Dear reviewer ftvs,
Thank you for your strong support of this paper. Just let us know if you need any clarification on the paper after seeing the feedback from other reviewers. | Summary: Federated learning (FL) is a promising approach for privacy-preserving ML applications. However, it is also vulnerable to backdoor attacks. Although some pruning-based methods have been proposed to defend against backdoor attacks, the authors note that it is difficult to prune malicious channels (via Lipschitzness) or weights (via l2 norm) in the federated backdoor setting. The authors named this phenomenon the poison-coupling effect. To address this challenge, the authors propose to isolate the training subspace of individual clients to prevent benign clients from applying malicious clients' parameter updates. The experiment demonstrates that the proposed method can effectively defend against backdoor attacks.
Strengths: 1. The poison-coupling effect is an interesting phenomenon and a valuable problem to resolve.
2. The idea of defending against backdoors in FL with isolated subspace training is interesting and makes sense for the problem, and this method can also reduce communication cost while defending against backdoor attacks.
3. The experiments are comprehensive and seem convincing.
Weaknesses: 1. This approach relies entirely on the malicious client strictly adhering to the training protocol. That is a strict assumption and hard to satisfy in real applications.
2. This approach also relies on some global settings, such as count(1{m_{i,t}}), the size of the subspace for client i, and the fixed pruning ratio $\alpha_t$ for all clients. Is this suitable for all clients under non-IID or unbalanced datasets? These problems are not discussed in the paper.
3. Using gradient sparsification to defend against federated backdoors is a common idea, and the authors do not discuss related work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. My main concern is that Lockdown's performance is entirely dependent on the malicious client strictly following the training protocol. An attacker can easily bypass this defense using two datasets. Specifically, it could first use a clean dataset to generate the mask and then perform sparse training on the poisoned training set. I suggest that the defense algorithm should occur mainly on the server, not on the client side.
2. Subspace searching should be the key contribution to solving the poison-coupling effect. However, is the size of the subspace the same for all clients? And should the pruning ratio $\alpha_t$ in every round $t$ be the same for all clients? I think the authors should discuss some complicated cases such as non-IID datasets, unbalanced datasets, or multi-task learning.
3. I'd like to start by pointing out that using sparsification methods to defend against attacks and reduce communication overhead is a common paradigm, with similar methods, including Hermes[1] FedMask[2] and SparseFed[3]. However, the authors do not discuss Lockdown's relationship with them. I also note that the proposed method tends to perform worse in terms of benign accuracy. It is not a significant drawback since Lockdown uses only a quarter of the communication overhead compared to FedAvg. Still, I suggest the authors compare the proposed method with other gradient sparsification methods to show that Lockdown can exhibit higher accuracy and robustness for the same amount of communication overhead.
[1] Li, A., Sun, J., Li, P., Pu, Y., Li, H., & Chen, Y. (2021). Hermes: An Efficient Federated Learning Framework for Heterogeneous Mobile Clients. Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, 420–437. https://doi.org/10.1145/3447993.3483278
[2] Li, A., Sun, J., Zeng, X., Zhang, M., Li, H., & Chen, Y. (2021). FedMask: Joint Computation and Communication-Efficient Personalized Federated Learning via Heterogeneous Masking. Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, 42–55. https://doi.org/10.1145/3485730.3485929
[3] Panda, A., Mahloujifar, S., Bhagoji, A. N., Chakraborty, S., & Mittal, P. (2022). Sparsefed: Mitigating model poisoning attacks in federated learning with sparsification. International Conference on Artificial Intelligence and Statistics, 7587–7624.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: see the above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for the constructive, encouraging, and helpful comments. We below respond to the three weaknesses:
# W1 (Strict assumption on the Lockdown protocol)
*The answer is NO.* The Lockdown defense does not assume that malicious clients have to adhere to the Lockdown training protocol. Concretely, we have evaluated the effectiveness of Lockdown under two adaptive adversaries (see line 308, page 8): Omniscience and FixMask. Both allow malicious clients to violate the protocol. In the FixMask attack, the malicious clients maintain the same initial masks and refuse to change them in later rounds. Our experiments show that Lockdown remains robust when adopting a proper pruning/recovery ratio. In the Omniscience attack, the malicious clients are assumed to have full knowledge of the consensus subspace, and consequently adapt their subspaces to the consensus. However, to launch a successful Omniscience attack, the attackers need to obtain (or guess) the accurate consensus subspace, which is hard: it requires compromising and colluding with the FL server. Under such a whitebox Omniscience attack, Lockdown is not robust against poisoning. However, most existing representative poisoning defenses assume that the FL server is trusted and not colluding with any clients. For this reason, we maintain that the Omniscience attack does not pose a severe risk to Lockdown.
# W2 (Client-level sparsity/pruning ratio)
Both the size of the subspace for client $i$ and the pruning ratio are global parameters for all clients under both the IID and non-IID settings.
It is important to note that our Lockdown method with these global parameters is easy to implement and yet highly effective, especially under non-IID data distributions, which are representative of federated learning systems with a large population of heterogeneous clients. Hence, in the current Lockdown design, we did not need to consider personalized client-level subspace sparsity. One of our ongoing efforts will examine whether per-client sparsity offers added poisoning-defense utility, even though it may add configuration and computation complexity.
# W3 (Relation with previous work)
We thank this reviewer for the 3 related references. We will add them to the related work.
**([1] [2] Sparse training)** Both aim to design a communication/computation-efficient FL solution. In Hermes, each client employs DNN pruning to learn a personalized sparse DNN and only communicates the updates of the subnetwork with the server. In FedMask, each client learns a personalized sparse binary mask and shares it with the server while keeping the local model unchanged; instead of learning a shared global model, each client obtains a learned (personalized) binary mask.
Unlike [1] [2], which optimize per-client DNN pruning solely for communication/computation efficiency, Lockdown designs subspace pruning/recovery/fusion to cancel the poisoning effect at the FL server while maintaining the quality of benign local model updates and the accuracy of the global model. In this sense, Lockdown differs from the two works in that their design goals are substantially different.
**([3] Gradient sparsification)** [3] utilizes gradient sparsification to defend against data poisoning in FL. Unlike sparse training, gradient sparsification compresses the gradient before sharing it with the server. [3] combines global model compression and device-level gradient clipping to defend against model poisoning, and relies on careful tuning of the server-side compression ratio and client-level clipping. Unlike [3], which centers on label-flipping attacks using gradient compression/clipping, Lockdown defends against trigger-based backdoor poisoning with novel, poisoning-robust sparse-training techniques.
We thank this reviewer for the helpful comments and useful references.
---
Rebuttal Comment 1.1:
Title: Discussion welcomed!
Comment: Dear reviewer dbHt,
Are there any issues remaining after the rebuttal? We are open to discussion if there are still unaddressed concerns about the paper. Thanks a lot for improving the overall quality of the paper!
---
Reply to Comment 1.1.1:
Title: Can our adaptive attacks eliminate your concern?
Comment: Hi reviewer dbHt,
Thanks for the insightful review comment. We believe your main concern lies in the seemingly strict assumption that all clients, including the malicious ones, enforce the Lockdown protocol, which would indeed be inappropriate.
Do you think our results on adaptive attacks, which assume that the attackers follow other specific strategies in training, help address your concern? And would you consider updating the overall rating based on our rebuttal? We are also happy to discuss if you have further questions after checking the other reviewers' feedback. We appreciate all the effort you put into reviewing our paper! | Summary: The paper addresses the vulnerability of federated learning (FL) to backdoor attacks and the limitations of existing defense solutions in resource-constrained scenarios. It introduces "Lockdown", an isolated subspace training method to counter the poison-coupling effect present in traditional pruning-based defenses for FL. Lockdown modifies the training protocol by isolating subspaces for different clients, uses randomness in initializing isolated subspaces, and employs subspace pruning and recovery to segregate subspaces between malicious and benign clients.
Additionally, quorum consensus is introduced to purge malicious/dummy parameters from the global model.
Empirical results demonstrate that Lockdown outperforms existing methods in defending against backdoor attacks, while also providing communication efficiency and reducing model complexity, making it suitable for resource-constrained FL scenarios.
Strengths: 1. Propose a novel defense for FL backdoor attacks. It addresses the limitations of existing defenses in resource-constrained scenarios.
2. Strong and thorough evaluation. The ablation study is also comprehensive.
Weaknesses: I think the paper's writing has much room for improvement. For example, in Section 4, the paper should briefly introduce the idea behind channel Lipschitzness. Also, in Section 5, although the paper follows a top-down style, it is still full of technical details without much intuitive explanation or design motivation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. After reading Section 4, I still do not understand what 'poison-coupling' means. How are the 3 observations in Figure 2 connected with (or how do they lead to) the conclusion that 'backdoor parameters tend to be coupled with the benign parameters'? Also, what does the 'L2 norm of the last convolutional layer' imply?
2. Could you elaborate why 'subspace recovery' is necessary? Why not only use 'subspace pruning' to keep important parameters?
3. How does the proposed method improve communication efficiency? What is the key reason? Can the standard model pruning also work?
4. For Table 8, does the communication improvement have something to do with the number of clients?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review comments. Below we try to address the reviewer's concern.
# About the poison-coupling effect (Q1)
**(Poison-coupling effect)** We define the poison-coupling effect based on the empirical observation that the parameters used for poisoning by a small percentage of compromised clients in federated learning are relatively harder to identify with high confidence, because the poisoned parameters and the benign parameters are statistically coupled.
**(CLP pruning)** To demonstrate the poison-coupling effect, we measure the effectiveness of the CLP pruning method on the federated model and on a model trained in the centralized setting, respectively. CLP's pruning criterion is the estimated Lipschitzness constant of each channel: channels with larger estimated Lipschitzness constants are pruned because they are more likely to be poisoned. We have rewritten the CLP pruning description in Section 3 to improve readability.
**(Why the 3 observations in Fig. 2 support poison-coupling)** The left of Figure 2 shows that if we use CLP to prune the federated model, we need a larger pruning ratio to safely remove the poisoned parameters and purify the model. This is the first observation, showing that the federated poisoned model is relatively harder to purify. The middle/right of Figure 2 respectively show the estimated Lipschitzness constant and the L2 norm of the two poisoned models. We see that the federated model has a more uniform statistical distribution at the channel level in both cases. These two cases (the 2nd/3rd observations) further indicate that the poisoned and benign parameters (or channels) are statistically coupled, and therefore harder to identify, i.e., they demonstrate the poison-coupling effect.
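To make the pruning criterion concrete: CLP-style defenses flag channels whose score (e.g., estimated channel Lipschitzness, or the per-channel L2 norm) is a statistical outlier, typically above mean + u·std. A stdlib-only sketch (function name hypothetical, not CLP's actual implementation):

```python
import math

def clp_style_prune(channel_scores, u):
    """CLP-style outlier pruning: flag channels whose score (e.g. estimated
    channel Lipschitzness constant) exceeds mean + u * std across channels."""
    n = len(channel_scores)
    mean = sum(channel_scores) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in channel_scores) / n)
    return [i for i, s in enumerate(channel_scores) if s > mean + u * std]
```

On a flat ("coupled") per-channel distribution, like the federated model's in Figure 2, no channel crosses the threshold, which illustrates why such statistical pruning can struggle in the federated setting.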
**(Implication of the L2 norm of the last convolutional layer)** The L2 norm of the last convolutional layer means the L2 norm of the model parameters within each channel of the last convolutional layer of the model. We only report this statistic for the last convolutional layer because the L2 norm exhibits a large discrepancy across layers.
# Necessity of subspace recovery (Q2)
We introduce subspace recovery as a necessary step after subspace pruning because pure pruning would continuously shrink the federated model round by round, which may lead to numerous errors and/or implementation issues. For example, as the FL model is pruned round by round over the course of training, the pruning ratio becomes much harder to configure and tune, because the federated model may collapse once its sparsity exceeds a certain threshold. To ensure the robustness of the Lockdown defense, we introduce subspace recovery, which coordinates the pruning/recovery procedure so that the FL model maintains the same sparsity across rounds of training.
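The prune-then-recover coordination can be sketched on a binary mask as follows (a toy illustration with hypothetical names; Lockdown's actual criterion for choosing which coordinates to recover may differ):

```python
import random

def prune_and_recover(weights, mask, prune_ratio, rng=random.Random(0)):
    """One round of subspace update: drop the smallest-magnitude active
    coordinates, then re-activate the same number of inactive ones, so the
    subspace size stays constant round over round."""
    active = [i for i, m in enumerate(mask) if m]
    n_prune = int(len(active) * prune_ratio)
    # Prune: deactivate the n_prune active coordinates with smallest |w|.
    for i in sorted(active, key=lambda i: abs(weights[i]))[:n_prune]:
        mask[i] = 0
    # Recover: re-activate an equal number of currently inactive coordinates.
    inactive = [i for i, m in enumerate(mask) if not m]
    for i in rng.sample(inactive, min(n_prune, len(inactive))):
        mask[i] = 1
    return mask
```

Pruning alone would shrink the active set every round; pairing each prune with an equal-sized recovery is what keeps the sparsity fixed, which matches the motivation above.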
# About communication efficiency (Q3)
The Lockdown-powered FL training procedure asks every client to train the local model using a subspace structure; consequently, in each FL round, each participating client only transmits a subset of the full model parameters, i.e., those within the client's own subspace. This can substantially reduce the size of the model parameters shared with the FL server and hence enhance the communication efficiency of federated learning in every round. In comparison, standard pruning techniques usually prune the model after it has been trained (like CLP pruning), when deploying the trained model on edge devices; they therefore fail to reduce communication overhead during the iterative federated training process.
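Schematically, the per-round saving comes from transmitting only the coordinates inside a client's subspace (hypothetical helper, not the paper's serialization format):

```python
def sparse_payload(weights, mask):
    """Client update under subspace training: transmit only (index, value)
    pairs inside the client's subspace instead of the dense vector."""
    return [(i, w) for i, (w, m) in enumerate(zip(weights, mask)) if m]
```

The payload size is proportional to the subspace size rather than the full model dimension, which is the source of the communication-efficiency gain discussed above.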
# Table 8 Communication improvement (Q4)
In a Lockdown-powered FL system, the communication improvement is independent of the number of clients: in each round of federated learning, every participating client gains communication efficiency under Lockdown protection.
In Table 8, we measure the sum of the communication overhead over all clients in the FL system; this aggregate statistic is therefore related to the number of clients.
# Section 5 Intuitive explanation and design motivation (see Weakness section)
We have updated Section 5 by adding an additional illustration of the high-level idea of the Lockdown solution. The core ideas are:
* **(Subspace searching)** Each client only includes the parameters it deems important in its subspace, following the dynamic subspace-searching procedure.
* **(Consensus fusion)** With consensus fusion, parameters that appear in fewer clients' subspaces are pruned out, i.e., parameters considered unimportant by most clients are removed. Given that a majority of the clients are assumed to be benign, the poisoned parameters will not be considered important by the benign clients, and will therefore be pruned out by consensus fusion.
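The consensus-fusion step above can be sketched as a simple quorum vote over the clients' binary masks (illustrative names only; the quorum threshold is an assumed parameter):

```python
def consensus_fusion(client_masks, quorum):
    """Keep a parameter position only if at least `quorum` clients include
    it in their subspace; everything else is pruned from the global model."""
    n_params = len(client_masks[0])
    votes = [sum(mask[j] for mask in client_masks) for j in range(n_params)]
    return [1 if v >= quorum else 0 for v in votes]
```

A position backed only by the (minority) malicious clients fails the quorum and is dropped, which is the mechanism by which malicious/dummy parameters are purged.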
We are grateful to the reviewer for the constructive and helpful comments. We sincerely hope that we have addressed your comments and would appreciate it if your satisfaction were reflected in the overall rating. Thank you.
---
Rebuttal Comment 1.1:
Title: Discussion welcomed!
Comment: Dear reviewer ozxV,
We are happy to discuss any issues remaining after the rebuttal. Please leave a message if you think something needs further clarification. Thanks for your time!
---
Reply to Comment 1.1.1:
Title: Is our writing easier to understand with the ideas in the rebuttal?
Comment: Hi reviewer ozxV,
As you mainly commented on writing issues, we have tried to improve the writing by summarizing our high-level ideas in the rebuttal (which will also be incorporated into the revised paper). Do you find that our explanation of the poison-coupling effect, as well as our design motivation, makes sense to you, and would you consider updating the overall rating based on our rebuttal? We are also happy to discuss anything that still causes confusion. Thank you for enhancing the overall quality of the paper! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null
Causal Effect Regularization: Automated Detection and Removal of Spurious Correlations | Accept (poster) | Summary: In this paper, the authors propose a causal effect regularization technique, CausalReg, which can effectively identify and remove spurious but unknown attributes. CausalReg is robust to non-identifiability issues, finite-sample error, and noise. The experimental results demonstrate the superior performance of CausalReg in identifying and excluding spurious attributes.
Strengths: The paper is well-written, with strong theoretical and experimental backing to justify the ability of CausalReg to automatically remove spuriously correlated features. Specifically, in theory, the authors prove that under certain data-generation structures, the causal effects are guaranteed to be identified. In practice, the algorithm demonstrates significantly better control of spuriously correlated features compared with other approaches (ERM, Mouli, CAD).
Weaknesses: Major:
1. A precision-and-recall experiment is missing: some experiments should be done to reflect the precision and recall of the discovery of spurious and causal attributes. Specifically, the paper claims an advantage over other methods that require binning with thresholds to identify spurious and causal effects; the authors should then compare with these algorithms and demonstrate comparable results even without binning techniques.
2. The mathematical definition of spurious attributes is not completely clear.
Minor:
1. The figures seem to be twisted
2. The legend in Fig 3 is not clear
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The effectiveness of CausalReg seems to rely on the ML model in Stage 1 learning a fairly good causal relationship. Is that true? If so, how much would the authors anticipate the performance to change when adopting other deep-learning models in Stage 1?
2. How was the counterfactual distribution learned (mentioned in line 176, equation (2))?
3. What would happen if some causal features were missing?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As the authors have mentioned, one major limitation is that the theoretical guarantee of causal discovery works only under certain data generation procedures.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1: Precision and Recall experiment is missing: Some experiments should be done to reflect the precision and recall of the discovery of spurious and causal attributes.** (part of the remark left out for brevity)
A: We thank the reviewer for suggesting this experiment. However, classifying an attribute as spurious or causal based on its causal effect requires some thresholding criterion. To the best of our knowledge, no previous work uses the causal effect of an attribute to classify it as causal or spurious. One exception is the Mouli+CAD baseline, which defines a score for every attribute to classify it as spurious or not. We observe that for two datasets, Syn-Text-Unobs-Conf and MNIST34, Mouli+CAD is not able to detect the spurious attribute correctly. Comparatively, even though the causal effect our method estimates for the spurious feature is not exactly zero, our method performs better than the Mouli+CAD baseline since we do not perform any hard thresholding on the spurious feature (see Section 4 for details).
That said, we have plotted the AUROC curve (see Fig 2 in the document uploaded in the global rebuttal comment) for the identification of causal and spurious attributes, obtained by thresholding the causal effect estimated for each attribute. In the plot, the orange curve shows the AUROC curve for the Riesz estimator and the blue curve for the Direct estimator (see Section 3.2, Stage 1 and Appendix E.4 for details on the estimators). The Riesz estimator performs better than the Direct estimator, as shown by its larger area under the curve. The "star" marked in the plot shows the TPR (true positive rate) and FPR (false positive rate) of the identification of causal and spurious attributes by the Mouli+CAD method.
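For completeness, the AUROC of such attribute-level thresholding reduces to the rank (Mann-Whitney) statistic over effect scores; a stdlib-only sketch with hypothetical scores and causal/spurious labels:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank statistic: the probability that
    a randomly chosen positive (causal, label 1) attribute scores above a
    negative (spurious, label 0) one, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Sweeping the threshold over the estimated causal effects and computing this statistic reproduces the kind of curve described above, without committing to any single cutoff.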
> **W2: The mathematical definition of spurious attributes is not completely clear.**
A: We thank the reviewer for pointing this out. We have defined the spurious attribute in Lines 94-96 stating “We use the fact that changing spurious attributes will not lead to a change in the task label i.e. they have zero causal effect on the task label y.” Thus a spurious attribute has zero true causal effect on the task label $y$. We will make this more precise in the camera-ready submission if accepted.
> **W3: The figures seem to be twisted**
> **W4: Legends in Fig 3 is not clear**
A: We thank the reviewer for pointing this out. We will make the necessary changes in the camera-ready submission if accepted.
> **Q1: The effect of CausalReg seems to rely on the ML model in stage 1 can learn a fairly good causal relationship. Is it true?** (part of the remark left out for brevity)
A: It is true that a good estimate of the causal effect helps the subsequent steps of our method. But since we use the continuous value of the attribute's causal effect for regularization instead of hard thresholding, errors in the first step do not severely affect the later steps. Thus our method is robust to errors in the causal effect estimation of Stage 1.
In stage 1 of our experiments, we used two different deep-learning-based causal effect estimators, Direct and Riesz (see Section E.4 in the appendix for details). In practice, one can replace these estimators with any reasonable causal effect estimator of one's choice based on the dataset considered. We observe that both estimators give similar estimates on all the datasets, with Riesz performing relatively better when the predictive correlation $\kappa$ is high. At higher $\kappa$, the causal effect of the spurious attribute is noisy, i.e., has a non-zero value. In spite of this noise and the slight difference between the two estimators' causal effect estimates in the first step, our method performs equally well in the second stage with either estimator (see Fig 13, 14, and 15 in the appendix for the individual performance of our method when using the Direct and Riesz estimator in stage 1). Given these observations, we conjecture that our method should be robust to the choice of causal effect estimator used in stage 1. Theorem 3.1 also states that we only need the ranking of the causal effects of the causal and spurious attributes to be correct in order to learn an optimal classifier, further demonstrating the robustness of our method to noise in the causal effect estimation that could arise from the choice of estimator. See “**Robustness to noise in the estimated causal effect**” in the global comment of the rebuttal for further discussion.
> **Q2: How was the counterfactual distribution learned (mentioned in line 176, equation (2))**
A: We have provided a detailed answer titled “**Learning counterfactual distribution**” in the global comment of the rebuttal.
> **Q3: What would happen if some causal features were missing.**
A: There are two potential problems that could happen:
1. **Drop in the overall accuracy**: Since some causal features are missing, spurious attributes may be providing the missing information to the learned classifier, compensating for what would otherwise be a drop in accuracy. If we then remove those spurious features, the overall accuracy might drop.
2. **Identifiability of causal effect**: Given an attribute, we find the causal effect of the attribute on the task label $y$. For estimating the causal effect, we use a causal effect estimator that may need access to other covariates/attributes for correctly estimating the causal effect. In case some of the other attributes (causal or spurious) are missing, it might affect the performance of the causal effect estimator in Stage 1. That said, we have shown that our method is robust to the noise in the estimation of the causal effect in Stage 1 (see “**Robustness to Noise in the estimated causal effect**” discussion in the global comment of the rebuttal for further detail).
We will add a brief discussion on this in the camera-ready version if accepted. | Summary: The authors propose to reduce the effect of spurious attributes on the classification of, say, images, by first estimating the true causal effects and then regularizing the effect of spurious attributes to be closer to the estimated effects. They show, in a theoretical scenario, that this approach is sound and robust to noisy estimates in the first stage.
Strengths: - The paper is clearly written, and tackles an important problem of reducing the effects of spuriously correlated variables on prediction tasks.
- The theoretical results are interesting.
Weaknesses: - The proof sketches do not currently give any intuition about the shape of the proofs; they simply restate what has to be proved.
- It is unclear what the benefit of using the authors' approach is over doing the same first stage and discarding the spurious attributes.
- If the causal effect estimation relies on knowledge about the causal graph (DGP-1, DGP-2), then why do we need to do all this? Should it not already be known which attributes are causal and which are spurious through graphical criteria?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Given that the entire point of using regularization instead of feature selection is to ensure that causal attributes are not accidentally discarded due to our uncertainty as to which attributes are spurious, is it reasonable to assume in Section 3.3 that the representation disentangles between the two categories of attributes?
- In particular, is there a "smart" encoder architecture which would ensure that the representation correctly disentangles with "high probability"?
- Similarly, since the penalty weights $\lambda$ are specifically chosen to satisfy a certain inequality, then if our causal estimates are wrong, what would be the benefit of using regularisation?
- Overall, what are the upsides and downsides of using regularization over feature selection?
- Can the authors explain what prevents the desired classifier from being optimal in Theorem 3.1 when the representation is of higher dimension?
- Perhaps relatedly, could a consistent estimator of causal feature effect for step 1 be combined with a consistent step 2 (e.g., in the setting of Theorem 3.1) to provide a consistent estimate in which the desired classifier is indeed the global optimum (also in the higher dimensional case)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not discuss the limitations of their approach in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1: The proof sketches do not currently give any intuition about the shape of the proofs, they simply restate what has to be proved.**
A: We thank the reviewer for pointing this out. We will update the paper to include a more technical proof sketch.
> **W2: It is unclear what the benefit of using the authors' approach is over doing the same first stage and discarding the spurious attributes.**
> **Q4: Overall, what are the upsides and downsides of using regularization over feature selection?**
A: Compared to doing the first stage and discarding spurious attributes, there are two key benefits of using regularization over feature selection:
1. **Noise in distinguishing causal vs. spurious attributes (Lines 38-39)**: The causal effects of attributes obtained in the first step could be noisy for various reasons, such as non-identifiability of the causal effect. Thus the estimated causal effect of a spurious attribute could be non-zero, and simple thresholding might produce false-positive or false-negative spurious attributes. As a result, the approach of estimating the causal effect and then discarding the spurious attributes may not work. See “**Robustness to noise in the estimated causal effect**” in the global comment of the rebuttal for discussion on the robustness of our method to noise.
2. **Attributes could have a continuous range of causal effects**: An example to justify this is the MNLI dataset [1], a popular dataset used in the spurious correlation literature. Given a premise and a hypothesis, the task is to predict whether the hypothesis entails, contradicts, or is neutral with respect to the premise. It has been observed that the presence of negation words (“nobody”, “no”, “never”, and “nothing”) is correlated with the contradiction label. At the same time, these features are sometimes important for prediction since they change the meaning of the sentence and thus have an aggregate non-zero causal effect. Thus regularizing with the correct causal effect is important to learn a correct classifier.
[1] Adina Williams, Nikita Nangia, and Samuel R. Bowman, “A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference,” NAACL 2018
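The contrast above between discarding attributes and regularizing the model toward their (possibly non-zero) estimated causal effects can be sketched as follows. This is a minimal pure-Python stand-in for a regularizer of the form in Eq. 2 under simplifying assumptions; `predict`, the toy weights, and the counterfactual pairs are hypothetical, not taken from the paper:

```python
def model_effect(predict, factuals, counterfactuals):
    """Average change in the model's output when one attribute is switched
    from its factual to its counterfactual value."""
    diffs = [predict(cf) - predict(x) for x, cf in zip(factuals, counterfactuals)]
    return sum(diffs) / len(diffs)

def regularized_loss(task_loss, predict, attribute_data):
    """Task loss plus, for every attribute, the squared gap between the
    model's learned effect and the estimated causal effect of that attribute.
    Unlike hard feature selection, an attribute with a small but non-zero
    estimated effect (e.g. negation words in MNLI) is softly constrained
    toward that effect rather than discarded."""
    penalty = sum(
        (model_effect(predict, xs, cfs) - estimated_effect) ** 2
        for xs, cfs, estimated_effect in attribute_data
    )
    return task_loss + penalty

# Toy linear model on inputs x = (causal_attr, spurious_attr); it wrongly
# relies on the spurious attribute with weight 0.5.
predict = lambda x: 0.8 * x[0] + 0.5 * x[1]
factuals        = [(1, 1), (0, 1)]
counterfactuals = [(1, 0), (0, 0)]   # spurious attribute flipped to 0
# The estimated causal effect of the spurious attribute is 0, so the model's
# learned effect of -0.5 incurs a penalty of (-0.5 - 0)**2 = 0.25.
print(regularized_loss(0.0, predict, [(factuals, counterfactuals, 0.0)]))
```

If the estimated effect were a continuous non-zero value instead of 0, the same penalty would pull the model toward that value, which is exactly what hard thresholding cannot express.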
> **W3: If the causal effect estimation relies on knowledge about the causal graph (DGP-1, DGP-2), then why do we need to do all this?** (part of the remark left out for brevity)
A: We politely disagree with this comment. In summary, our method does not assume knowledge of the underlying DGP. We consider two commonly occurring DGPs (DGP1 and DGP2) only to illustrate that causal effects are identifiable only in certain causal graphs.
Also, the causal graph we consider is general: certain edges are unknown (shown in red in the graph), so one cannot use graphical criteria to find the causal and spurious attributes. Please see the “**Our method does not assume knowledge of the underlying DGP**” discussion in the global comment of the rebuttal.
> **Q1 and Q2: Is it reasonable to assume in Section 3.3 that the representation disentangles between the two categories of attributes?** (part of the remark left out for brevity)
A: We agree with the reviewer that it is not reasonable to assume the latent space will always disentangle. We make this assumption only to create a simple setup in which we can theoretically study the effectiveness of our method in the presence of noisy causal effect estimates. Empirically, we do not assume that the encoder disentangles the latent space into such attributes, and we show that our method is still effective and robust to noisy estimates of the causal effects.
> **Q3: Similarly, since the penalty weights $\lambda$ are specifically chosen to satisfy a certain inequality, then if our causal estimates are wrong, what would be the benefit of using regularisation?**
A: We believe there is a slight misunderstanding in the interpretation of Theorem 3.1. The theorem states that we do not need a precise estimate of the attributes' causal effects to learn the correct classifier. Rather, as long as the penalty weights $\lambda$ (the inverse of the estimated causal effect of each attribute, Line 199) satisfy the specified inequality, our method selects the correct classifier; in particular, it suffices that the estimated ranking of causal effects is correct (Remark on Line 223). This shows that our method is robust to errors in the estimates of the attributes' causal effects.
> **Q5 and Q6: Can the authors explain what prevents the desired classifier from being optimal in Theorem 3.1 when the representation is of higher dimension?** (part of the remark left out for brevity)
A: We believe there is a slight misunderstanding in the interpretation of Part 2 of Theorem 3.1. We assume that there is **one high-dimensional causal** and **one high-dimensional spurious** feature in the representation only for ease of exposition. If there are multiple high-dimensional causal and spurious attributes, all the causal attributes can be concatenated into a single larger high-dimensional vector (and similarly for the spurious attributes), and the same proof holds. We leave the extension of our current proof to the case of multiple high-dimensional causal and spurious attributes, each with a different causal effect, for future work.
> **Limitations: The authors do not discuss the limitations of their approach in detail.**
A: We thank the reviewer for pointing this out and we will elaborate on them further in the camera-ready version. Here is the summary:
1. Our method only works when the underlying dataset-generating process is causal.
2. We can guarantee the identification of causal effects only in certain kinds of DGPs. For a general DGP, while we explore this limitation both theoretically (Theorem 3.1) and empirically (Syn-Text-Unobs-Conf), a more extensive study of robustness is needed.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Responses to W3 and Q1+2: While the empirical results here show good performance, is there any way of ascertaining how well the provided estimates perform on future datasets? Since in general neither the DGP nor the disentanglement assumptions will hold, can we tell how well our correction for spurious features performs when we don't have access to the ground truth data? Relatedly, if the identifiability assumptions *do* hold, can we verify this after having run the algorithm?
---
Reply to Comment 1.1.1:
Comment: > C1: While the empirical results here show good performance, is there any way of ascertaining how well the provided estimates perform on future datasets? Since in general neither the DGP nor the disentanglement assumptions will hold, can we tell how well our correction for spurious features performs when we don't have access to the ground truth data?
We thank the reviewer for raising these thoughtful questions. We agree with the reviewer that in general both the identifiability and disentanglement assumptions may not hold. Below we give pointers that can help ascertain the kind of datasets where the theoretical assumptions are expected to hold.
**Identification assumption**:
1. **Datasets built using human annotation where input X is sufficient for predicting label (DGP-1)**: For certain datasets like CivilComments and Twitter-AAE (considered in this work), the **data-collection process** tells us that the task labels were created by showing the input sentence to human annotators or through a deterministic function of the input; this resembles the DGP-1 considered in this work. In fact, DGP-1 is a common data-generating process covering datasets with human-annotated labels, where the task label Y is generated using the observed input X and this input is sufficient for the prediction of the label. Thus, **for many real-world NLP datasets that use human annotation or deterministic functions to create ground-truth labels from the input**, it is indeed possible to **estimate the correct causal effect** using the estimator given in the proof of Proposition 3.1 (part 1).
2. **Datasets with multiple correlated attributes where the label depends only on a subset of attributes (DGP-2)**: Such datasets are common in computer vision, e.g., the MNIST dataset considered in our work. Here there may be multiple attributes that are correlated with each other. They can be divided into two subsets: the causal attributes (e.g., shape) that affect the task label Y and the spurious/“style” attributes (e.g., color, rotation) that do not affect Y. This corresponds to DGP-2 from Figure 2 and is common in image datasets (see, e.g., Von Kugelgen et al. (2021)). Here too, identification of the attributes' effects is possible whenever the correlated causal and spurious attributes are observed. Note that other independent causal attributes can be unobserved.
More generally, given a dataset with a correlation between spurious attributes and the task label, we conjecture based on Prop. 3.1 that as long as all the causal attributes correlated with the spurious attribute are observed, the effect of the attributes is identified (causal attributes refer to the attributes that cause Y; causal attributes independent of the spurious attribute can be unobserved). So given a future dataset, if we can determine such a property based on domain knowledge, then the theoretical applicability of our method can be ascertained.
**Disentanglement assumption**:
In addition, we would like to emphasize that the **identification of the correct causal effect is a more fundamental assumption** for our method to work as desired whereas the disentanglement assumption is merely made for the convenience of the proof. Neither the general-purpose causal effect estimation algorithm used in the first step (e.g. RieszNet or Direct Estimator) nor our regularization term in the second step depend on the disentanglement assumption.
Von Kügelgen, Julius, et al. "Self-supervised learning with data augmentations provably isolate content from style." Advances in neural information processing systems 34 (2021): 16451-16467.
> C2: Relatedly, if the identifiability assumptions do hold, can we verify this after having run the algorithm?
We again thank the reviewer for this thoughtful question. In general, verifying identification from observational data is difficult and this challenge exists for any causal effect inference task. However, if we assume access to data from another domain that varies the correlation of the spurious attribute and the task label, it may be possible to check whether our method recovers the correct non-spurious representation for predicting Y. **The test is as follows**: If the identified effect is correct, then our model’s prediction accuracy should stay invariant across the original and new domains (assuming that the noise distribution stays constant). That said, we want to emphasize that the power of such tests depends on the quality of the new domain sampled: **in general, it may be possible to reject some bad models but verifying identifiability from data alone is a rich, open question**. | Summary: The paper proposes a new method to detect and remove spurious attributes. First, the paper gives the sufficient conditions that are needed for estimating the causal effects, with theoretical proof. To detect the spurious attribute, the proposed method estimates the causal effect based on a deep learning-based estimator [5]. To mitigate the identified spurious attributes, the method adds a regularization term that aims at matching the classifier’s output with the estimated causal effect of an attribute on the label (Eq. 2). The paper further proves the proposed regularization term’s robustness to noise in causal effect estimation, given that the ranking of estimated treatment effects is correct. The experiments are conducted on three datasets, demonstrating the effectiveness of the proposed method.
Strengths: 1. The paper is well-written and easy to follow.
2. The paper presents insightful theoretical proof.
3. The proposed method achieves comparable or better empirical results than other methods.
Weaknesses: **[Discussion or Comparison with Other Methods]** The paper fails to discuss or compare with other methods of identifying spurious attributes. EIIL [31] maximizes the IRM objective to infer environments. DebiAN [32] uses the violation of the fairness criterion to partition data into spurious subgroups. Domino [33] proposes error-aware clustering to detect spurious correlations. JTT [34] regards incorrectly predicted samples to detect spurious correlation.
**References**
[31] Elliot Creager, Joern-Henrik Jacobsen, and Richard Zemel, “Environment Inference for Invariant Learning,” in ICML, 2021.
[32] Zhiheng Li, Anthony Hoogs, and Chenliang Xu, “Discover and Mitigate Unknown Biases with Debiasing Alternate Networks,” in ECCV, 2022.
[33] Sabri Eyuboglu, Maya Varma, Khaled Kamal Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, and Christopher Re, “Domino: Discovering Systematic Errors with Cross-Modal Embeddings,” in ICLR, 2022.
[34] Evan Z. Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn, “Just Train Twice: Improving Group Robustness without Training Group Information,” in ICML, 2021.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In the rebuttal, I expect the authors to address my concerns (listed in the “weaknesses” section) by answering the following questions:
1. Add discussion or comparison with other related works [31-34]
2. Will the code be released for better reproducibility?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper has adequately addressed the limitations (Section 6). From my perspective, the paper does not have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: Add discussion or comparison with other related works**
A: We thank the reviewer for pointing this out. Given limited time and computation constraints, we have added results on three new baselines (JTT[34], EIIL[31], and IRM [35]) on two datasets – Syn-Text-Unobs-Conf and MNIST34 dataset considered in this work. In summary, our method outperforms all the baselines considered on both the metrics of interest – Average Group Accuracy and $\Delta$Prob. See the “**Results on Additional Baselines**” discussion in the global comment and Fig 1a and 1b in the uploaded rebuttal document (in the global comment) for detailed discussion.
DebiAN [32] proposes an alternating framework – a Discoverer that finds the biased subgroups and a Classifier trained so that the bias discovered by the Discoverer is fixed. This method bears similarity to the EIIL [31] framework (added as a baseline in our work), which uses the IRMv1 penalty to automatically find the biased subgroup instead of the group fairness violation metrics used by DebiAN, though EIIL does not use the alternating strategy to further refine the model. In future work, it would be interesting to study the connection between these invariance or fairness constraints and the corresponding causal effects of attributes. Further, DebiAN uses reweighting instead of regularization to mitigate the bias in the model; in future work, we can also explore reweighting-based techniques, such as inverse propensity weighting, for estimating the causal effect.
Domino [33], mentioned by the reviewer, only focuses on discovering the biases and provides no method to fix them or learn an unbiased model, whereas our work aims to do both – discover biases and learn an unbiased model.
> **Q2: Will the code be released for better reproducibility?**
A: Yes, we will release the code with the camera-ready version of our paper.
**References**
[31] Environment Inference for Invariant Learning, ICML 2021
[32] Discover and Mitigate Unknown Biases with Debiasing Alternate Networks, ECCV 2022
[33] Domino: Discovering Systematic Errors with Cross-Modal Embeddings, ICLR 2022
[34] Just Train Twice: Improving Group Robustness without Training Group Information, ICML 2021
[35] Invariant Risk Minimization, 2020
---
Rebuttal Comment 1.1:
Comment: I have read the authors' responses and other reviewers' comments. The response addresses my concern. I raise my rating to "Accept." I encourage the authors to add the response to the final version. | Summary: The authors study the problem of learning under the presence of spurious correlations, given multiple attributes, some of which may be spurious. They first propose three causal graphs to represent the data generating process, and show that two them allow for identification of the causal effect of the attributes on the label. They propose a method to identify spurious attributes by computing the causal effect of each attribute on the label, under two possible causal graphs, and then regularizing the model based on the magnitude of this treatment effect. They show that their method beats the baselines on three datasets.
Strengths: - The paper tackles an important and well-studied problem in machine learning.
- The paper is generally well-written and easy to follow.
- The paper is grounded in solid causality theory.
Weaknesses: 1. The authors demonstrate theoretical guarantees for two causal graphs shown (DGP-1 and DGP-2). In practice, assuming the causal structure seems like a fairly strong assumption, and the authors do not really examine what happens when the graph is mis-specified. For example, does the method still work if the true data generating process is anti-causal?
2. In order to compute the regularization term, the authors require a sample from the counterfactual distribution. Learning a model that can generate such counterfactual samples seems difficult. In their real-world dataset example, the authors use GPT-3 to generate such counterfactuals. Having access to an LLM seems like it should be outside the scope of the problem, as the authors could have also just asked GPT-3 to predict the sentiment directly (and likely get decent performance). The authors should address (empirically) how to generate these counterfactuals for other text and image datasets, and how the quality of this model impacts their method.
3. The empirical evaluations feel lacking to me. For example, the authors do not evaluate on typical spurious correlation datasets such as Waterbirds, CelebA, or MNLI. In particular, CelebA contains many attribute fields which would be ideal to demonstrate their method.
4. The authors should empirically examine the case where there are multiple spurious attributes, which may be inter-correlated. Does computing the treatment effect of each attribute and then regularizing each one separately make some assumption about the independence of the spurious attributes?
5. The authors should show the results of a few more baseline methods that are popular in the spurious correlation literature, such as JTT [1] (which also does not require knowledge of the attributes), or GroupDRO [2] as an upper bound.
[1] https://arxiv.org/abs/2107.09044
[2] https://arxiv.org/abs/1911.08731
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1: The authors demonstrate theoretical guarantees for two causal graphs shown (DGP-1 and DGP-2).** (part of the remark left out for brevity)
A1: We politely disagree with the reviewer's comment. In summary, our method does not assume knowledge of the underlying DGP and thus is invariant to any misspecification in the graph. We consider certain illustrative DGPs (DGP1 and DGP2) to show that causal effects are only identifiable in certain causal graphs. In particular, DGP1 is fairly general and covers a wide range of real-world datasets where the labels are generated by manual annotation and the input contains all the attributes sufficient to allow for such labeling without any external knowledge. But in our empirical experiments, we use general-purpose causal effect estimators (Direct and Riesz) that don’t assume knowledge of the underlying DGP of the dataset. Next, we go on to show that even when the causal effect is not identifiable (e.g., in DGP3, experimentally simulated in the Syn-Text-Unobs-Conf dataset; see Section 4.1 and Appendix E.1), our method is able to perform significantly better than all the considered baselines (results in Section 4.3 and Appendix F). Please see the “Our method does not assume knowledge of the underlying DGP” discussion in the global comment (response to all reviewers) of the rebuttal for a detailed discussion.
Will our method work in the anti-causal setting? In an anti-causal setting, the causal effect of all attributes is zero by definition. Since our method depends on computing the causal effect of different attributes, it is not guaranteed to work for an anti-causal setting.
> **W2: In order to compute the regularization term, the authors require a sample from the counterfactual distribution.** (part of the remark left for brevity)
A2: We agree with the reviewer’s critique that our method requires access to the counterfactual distribution and the quality of these counterfactuals could impact our method. However, we have addressed this concern in the paper with experiments on datasets with and without access to Oracle counterfactual distributions like LLMs. For a summary, see the “Learning counterfactual distribution” discussion in the global comment of the rebuttal.
In addition, a common deployment goal is to build a model with efficient inference at test time. In such a case, it is reasonable to use an expensive model like GPT3 for generating counterfactuals during training and build a smaller, efficient prediction model for inference. Using GPT directly for real-world use cases will have scalability issues.
> **W3: The empirical evaluations feel lacking to me. For example, the authors do not evaluate on typical spurious correlation datasets such as Waterbirds, CelebA, or MNLI.** (part of the remark left out for brevity)
A3: In the appendix (E.1) we evaluated our method on three different attributes of the Civil-comments dataset, another real-world dataset considered in the spurious correlation literature [1], and part of the challenging WILDS benchmark [4] for out-of-distribution generalization. This dataset also has multiple spurious attributes out of which we subsample three – gender, race, and religion – and evaluate our method on all three attributes (see Section F.1, F.2, Table 10, and Fig 9 in the appendix for detailed discussion). These attributes are present in the sentence in a subtle way and thus cannot be easily removed by simply discarding a few words from the sentence. To summarize the result,
Causal effect Estimation (Stage 1): Table 10 in the appendix summarizes the estimated causal effect for each of the attributes of this dataset. Since this is a real-world dataset, the true causal effect for each of the attributes is unknown but we expect them to be $0$ since these sensitive attributes should not affect the task label $y$ (toxicity of a sentence). We observe that both the causal effect estimators (Riesz and Direct) perform similarly and have a causal effect close to $0$.
Overall Performance (Stage 2): Fig 9a, 9b, and 9c in the appendix show the overall performance on all the different attributes of this dataset. Overall, we observe results similar to the TwitterAAE dataset: our method performs comparably to the baselines on Average Group Accuracy while achieving a $\Delta$Prob close to the optimum of 0.
> **W4: The authors should empirically examine the case where there are multiple spurious attributes, which may be inter-correlated.** (part of the remark left out for brevity)
A4: We thank the reviewer for this suggestion. If multiple features are dependent, the regularization step might become more difficult, since the regularizer might need to satisfy the causal effect constraints among the features (which are unavailable). Given the limited time and computing resources, we plan to include this experiment in the camera-ready version of the paper if accepted.
> **W5: The authors should show the results of a few more baseline methods that are popular in the spurious correlation literature, such as JTT [1], or GroupDRO [2] as an upper bound.**
A5: We thank the reviewer for pointing this out. We have added results on three new baselines (JTT[1], EIIL[2], and IRM [3]) on two datasets – Syn-Text-Unobs-Conf and MNIST34 dataset considered in this work. We find that our method outperforms all the baselines considered on both the metrics of interest – Average Group Accuracy and $\Delta$Prob. See the “Results on Additional Baselines” discussion in the global comment and Fig 1a and 1b in the uploaded rebuttal document for detailed discussion.
**References**
[1] Just Train Twice: Improving Group Robustness without Training Group Information, ICML 2021
[2] Environment Inference for Invariant Learning, ICML 2021
[3] Invariant Risk Minimization, 2020
[4] WILDS: A Benchmark of in-the-Wild Distribution Shifts
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Most of my concerns and questions have been addressed, and I think the paper has been improved with the new experiments that the authors have run. I have increased my score as a result. I would encourage the authors to include experiments on additional datasets in their revision, especially MNLI, which the authors reference in their response to Reviewer quUc as an example of a dataset where spurious features may have non-zero causal effect (this concept has also been discussed in [1]).
[1] https://arxiv.org/pdf/2210.14011.pdf | Rebuttal 1:
Rebuttal:
> **Our method does not assume knowledge of the underlying DGP. We describe three commonly occurring DGPs only to illustrate identification properties** (Reviewer nk8u, quUc):
**Identifiability of causal effect**: We have listed different DGPs (data-generating processes) common in the real world and shown that causal effects are identifiable for some of them. However, our method doesn't require knowledge of the underlying DGP (see below). We agree that for different DGPs, there could be different correct causal effect estimators, but they don't require knowledge of whether the attribute for which we are estimating the causal effect is causal or spurious. For example, in DGP1, we don’t know beforehand whether A is causal or spurious (in Fig 2, the red arrow in DGP1 depicts that the edge is unknown), and we can use the causal effect estimator designed in the proof of Claim 1 of Proposition 3.1 to estimate the causal effect of A on X directly (Equation 11 in Appendix B). If the estimated causal effect in the infinite-sample limit is 0, then A is spurious; otherwise, it is not.
**Our method doesn’t need knowledge of underlying DGP**: In our empirical study, we consider different datasets with different underlying DGPs. For example, MNIST34 follows DGP2, real world-datasets TwitterAAE and CivilComments follow DGP1, and Syn-Text-Unobs-Conf follows DGP3 (see Section 4.1). However, our method does not use this knowledge. To demonstrate this, we have used the same causal effect estimator (Riesz and Direct) for all our experiments (see section 4.2).
> **Results on Additional Baselines** [Reviewer nk8u,pmkx]:
We have added additional baselines to compare our method. We have limited our evaluation to two datasets, Syn-Text-Unobs-Conf and MNIST34 due to time and computing resources constraints, and will add the comparison on other datasets in the camera-ready submission if accepted. See Fig 1(a) and Fig 1(b) in the rebuttal document in the global comment for the plots. In summary,
1. **Syn-Text-Unobs-Conf dataset (1a in rebuttal doc)**: We have added three new baselines, JTT[1], EIIL[2], and IRM[3]. Unlike Mouli+CAD, JTT, EIIL, and IRM are able to considerably decrease the $\Delta$Prob (lower is better) while having better Average group accuracy compared to ERM. However, our method (CausalReg) is able to perform better than all the considered baselines on both metrics.
2. **MNIST34 dataset (1b in rebuttal doc)**: Due to time and compute constraints we have only been able to add a comparison to the JTT and IRM baseline for this dataset but plan to add other baselines in the camera-ready submission. Though JTT is able to lower the $\Delta$Prob (lower is better) compared to Mouli+CAD and ERM, our method is able to outperform all the baselines on both metrics.
[1] Just Train Twice: Improving Group Robustness without Training Group Information, ICML 2021
[2] Environment Inference for Invariant Learning, ICML 2021
[3] Invariant Risk Minimization, 2020
> **Robustness to Noise in the estimated causal effect** (Reviewer nk8u, quUc, zxmG):
1. **Theoretical Analysis**: Theorem 3.1 states that we don’t need access to the precise causal effect estimate of an attribute to learn a model robust to spurious correlation. Our method only requires the ranking of the causal and spurious attributes to be correct to learn the desired classifier.
2. **Empirical Analysis**:
1. **Robustness to varying predictive correlation $\kappa$**: Across datasets, as we increase the predictive correlation ($\kappa$), the causal effect estimate becomes worse (see Table 6-10). In spite of this error in stage 1, our method performs better than the other baselines (see Sections 4.1 and 4.2 in the main paper and Sections F.1 and F.2 in the appendix).
2. **Sensitivity to noise in estimated causal effect**: To evaluate the sensitivity to the noise, we regularize the model with our method using a spectrum of different causal effects for the spurious attribute to simulate the noise in the estimation step. We observe that for all the datasets, our method is able to perform better than the baselines even with a large noise. See Fig. 10, 11, and 12 in the appendix for a detailed discussion.
> **Learning counterfactual distribution** (Reviewer nk8u, zxmG):
Our method assumes access to examples sampled from a counterfactual distribution (Eq. 2 and Line 176) to train an unbiased model. Empirically, we demonstrate the performance of our method on different types of datasets (synthetic, vision, and text) with access to different qualities of counterfactual distributions. More specifically,
1. **Counterfactuals using GPT3 (Text)**: For the TwitterAAE dataset, we use GPT3 as the oracle counterfactual distribution (see Table 4 and Section E.1 in the appendix). Given an input sentence, GPT3 is prompted to generate a new sentence where the race-specific attribute is changed from African-American (AAE) to Non-Hispanic White or vice-versa.
2. **Counterfactual distribution using Topic Model (Text)**: In Appendix E.1, we evaluate our method on three different attributes of the real-world CivilComments dataset. For this dataset, instead of using generative models like GPT3, we use a handcrafted list of words to generate the counterfactual w.r.t. three different attributes — religion, race, and gender (Table 5). We remove the words corresponding to each attribute from the sentence to generate the noisy estimate of counterfactual (see Section E.1).
3. **Counterfactuals using deterministic transformation (Image)**: For certain simple attributes in image datasets, like rotation, background color, etc., a deterministic transformation could be used to generate the counterfactual images (as done in our experiment on the MNIST dataset; see Sections 4 and E.1). For more complicated transformations, like changing the subject in the image, it would be interesting to explore existing generative models in future work.
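The word-removal counterfactual construction described above (item 2) can be sketched as follows — a minimal, hypothetical illustration in which the word lists are arbitrary stand-ins for the handcrafted lists referenced in Table 5, not the authors' actual code:

```python
ATTRIBUTE_WORDS = {  # illustrative stand-ins, NOT the paper's handcrafted lists
    "religion": {"christian", "muslim", "jewish"},
    "race": {"black", "white", "asian"},
    "gender": {"he", "she", "his", "her", "man", "woman"},
}

def remove_attribute_words(sentence, attribute):
    """Noisy counterfactual: drop the attribute-specific words from the sentence."""
    words = sentence.split()
    kept = [w for w in words if w.lower().strip(".,!?") not in ATTRIBUTE_WORDS[attribute]]
    return " ".join(kept)

print(remove_attribute_words("She said the man was rude.", "gender"))
# -> "said the was rude."
```

As the rebuttal notes, this removal is only a rough approximation of a counterfactual when the attribute is expressed in a subtle, distributed way rather than through individual words.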
Pdf: /pdf/7b022e1650e2dab8c7a967c7d5b0ede8633a27b8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Strategic Behavior in Two-sided Matching Markets with Prediction-enhanced Preference-formation | Accept (poster) | Summary: This paper studies matching markets with returning agents and identifies an important strategic behavior: returning agents can attack future predictions by distorting short-term interactions. The authors formulate the system as (repeated) three phases and study a simplified setting to derive informative conclusions about the impact of this attack behavior.
Strengths:
- Novel and meaningful topic of strategic behavior in matching markets with returning agents.
- The description of the setting and the analysis of the phenomenon are clear and easy for readers to follow.
Weaknesses: A natural question is whether these conclusions can be extended to larger matching markets. The setting used to derive the main conclusions of this paper seems too simplified.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In lines 275-276, "the school always chooses outcome 1-l", I am confused about how the school can choose the outcome of the interaction here.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer UhQP for their feedback and question. We are grateful both for their appreciation of the novelty and importance of the topic and for their concern about the simplicity of the model.
Regarding the question, we apologize for the lack of clarity surrounding the link between adversarial interaction attacks and outcomes. In the interaction phase, the school interacts with the allocated student. During this interaction, the school has some choice of action (e.g., provide integration services or after-school preparation for the student transferring from a low-performing school). In practice, the student also has a choice to react to these opportunities (e.g., attend and pay attention to these services). However, as mentioned in lines 161-165, since the student is not returning, we assume they will make the most out of the provided opportunities. Thus each action of the school will have a direct connection with an (expected) outcome of the interaction. For example, the school could know, based on past experience, that by providing a fraction of (1-l) of the total hours of after-school training, the student (is expected to) get a final SAT score equal to a fraction (1-l) of the maximum one. As such, each action of the school during the interaction with the student corresponds to an outcome of their interaction. We tried to capture this intuition in the example included in lines 171-176. We will clarify this in the final version.
The level of control a returning agent has on the interaction outcome varies with the application domain. For example, in a school choice setting, there is still some variability in the performance of the student. That is, based on the performance of the student in mock exams and/or the performance of similar past students, the school can anticipate the final result of the current student, but there will still be some variability. However, in a refugee assignment problem, if a location does not provide (good) interview opportunities for one refugee in the first 90 days, the respective refugee does not have any chance of getting a (good) job. In short, in a general scenario, the returning agent chooses their action (i.e., how much support they provide to a given non-returning agent), which maps to an expected outcome (i.e., the success of the interaction, which is estimated based on past experiences).
We would also like to add a few words regarding the simplicity of the model. As mentioned in the reply to reviewer Zyns, throughout the modeling phase, we tried to follow the model-simplicity principle introduced by Robert Axelrod (1997) and used in the decades since. More precisely, our goal was to keep the model minimal in order to highlight the source of vulnerability. This also allowed us to create a general framework and, thus, show the feedback loop could be problematic in multiple settings.
That being said, we are in agreement with Reviewer UhQP that besides its advantages, the simplicity of the model is also a limitation. In particular, we underline in lines 394-395 that a central direction for future work would be to consider more realistic (and in particular larger) models. As a first step, we used the appendix to extend the model (e.g., by considering higher numbers of agents, alternative matching mechanisms, and k-NN as a more realistic recommender system) and perform simulations. This additional analysis largely confirmed our theoretical results. Most importantly, when there are few non-returning agents compared to available places and the utility of returning agents is always negative, adversarial interaction attacks were effective for all considered matching mechanisms. While this analysis does not exclude the need for larger and more realistic settings in future work, we hope it provides some evidence that such attacks are worth investigating within various settings and application domains.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have read the rebuttal and maintain my score. | Summary: This paper looks at problems that arise in two-sided matching markets, where the preferences of the agents are informed by various prediction mechanisms. In particular, the paper argues that the existence of a predictive model used by agents to inform their preferences has interesting strategic repercussions. They argue that when thinking about market design, predictive models cannot be ignored or analysed independently.
Strengths: I liked this paper. It was well written and the theoretical analysis appears to be solid. That said, I think its significance lies in clearly highlighting the emergence of potentially unforeseen strategic behavior in markets due to the use of predictive models that shape users’ preferences. The argument made in the paper that there are interesting interplays between ML models and market mechanisms is an important warning to market designers AND opens up interesting theoretical questions in the mechanism design/market design space.
Weaknesses: - The theoretical analysis was focussed on the random serial dictatorship mechanism. I understand why the authors might have done so as it simplifies the analysis significantly, but theoretical results on other matching mechanisms (e.g. DA) would have also been interesting.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Are there qualitatively similar findings if DA was used instead? What would be the complications with analyzing DA over RSD? I looked at the results in the simulation and it seemed like DA did not always follow identical trends as RSD and was curious as to its behaviour.
- A key driver of the results seems to be the feedback loop between the prediction model and the market. Would simply cutting this feedback loop be enough (I will admit I am not sure how this would be done without ignoring data) or are you suggesting that an entirely new way of modelling such mechanisms is needed?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors did a very good job at discussing the implications of their work in terms of inequity. In particular, this is a key message of the paper -- that care needs to be taken in designing and deploying market mechanisms since data generated by them and fed into predictive models might have strategic ramifications that can exaspertate inequality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer EhdA for their supportive feedback and insightful questions! We are especially grateful for their engagement with both the paper and supplementary material.
To address the first question, as mentioned by the reviewer, DA poses additional challenges as we would need to consider more rounds (where students propose to schools) and potentially distinguish between different versions of DA. Since using DA changes the matching phase, the main difference comes in the probability of being allocated an undesirable student given a unilateral past strategic interaction (when histories are equal, we have symmetry and thus equal probabilities of outcomes). To take one example, under student-proposing DA with lotteries for the preferences of schools, there are two alternatives: (a) the two students prefer different schools (thus, each being allocated to the school they prefer), or (b) the two students prefer the same school (thus, using the lottery-based preference of the preferred school for the allocation). By interacting strategically, a school maximizes the chance of the undesirable student to prefer the other school (i.e., the beneficial subcase of a). This will complicate the proof of Lemma2 (within the appendix) but otherwise lead to the same conclusions. Since this analysis requires explaining DA and discussing the intuition behind a larger number of particular cases while providing a limited amount of additional insight, we reserved the discussion of DA for the simulation.
As EhdA accurately noted, by departing from the assumptions of the minimal model, we notice differences between DA and RSD. Looking at individual rounds during matching revealed this is due to the increased importance of the lottery-based preferences of schools under DA. For instance, assume there are more students than places at schools (which produced the largest gaps DA - RSD) and two schools with one place each. Then, under RSD, two students will be selected: the first will be allocated to the school they prefer the most, and the second to the other school. Being strategic under RSD is thus effective as, when the first student is undesirable, it minimizes the chance this student will be allocated to the strategic school. On the other hand, under DA, in the first round, each student proposes to the most preferred school. In the next round, all students who applied to school B and were not tentatively accepted by B would apply to school A and vice-versa. Thus, by the third round, each school would have tentatively accepted the student they (lottery-based) prefer the most out of all but one student. Therefore, the effectiveness of a strategy used by, e.g., school A is limited to maximizing the pool of undesirable students school B receives in the first round of the DA. However, since there are still many undesirable students remaining, school A will still receive proposals from many undesirable students in subsequent rounds. Thus, besides edge cases, under DA with many undesirable students, schools receive the students they (lottery-)prefer the most.
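To make the round structure described above easier to follow, here is a minimal, hypothetical sketch of student-proposing deferred acceptance with single-seat schools and lottery-based school rankings. All names and data are illustrative stand-ins, not the paper's model or code:

```python
def deferred_acceptance(student_prefs, school_rank):
    """Student-proposing DA with one seat per school.

    student_prefs: dict student -> ordered list of schools (most preferred first).
    school_rank: dict school -> dict student -> lottery rank (lower is better).
    """
    next_choice = {s: 0 for s in student_prefs}  # index of next school to propose to
    tentative = {}                               # school -> tentatively held student
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue  # this student has exhausted their preference list
        school = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held = tentative.get(school)
        if held is None:
            tentative[school] = s
        elif school_rank[school][s] < school_rank[school][held]:
            tentative[school] = s  # the school trades up to a better-ranked student
            free.append(held)      # the displaced student proposes again
        else:
            free.append(s)         # rejected; will propose to the next school
    return tentative

# Both students prefer school A; the lottery at A breaks the tie.
prefs = {"u": ["A", "B"], "v": ["A", "B"]}
ranks = {"A": {"u": 0, "v": 1}, "B": {"u": 1, "v": 0}}
print(deferred_acceptance(prefs, ranks))  # {'A': 'u', 'B': 'v'}
```

In the rebuttal's terms, a strategic school can only influence the `student_prefs` input (which school an undesirable student ranks first), not the lottery-based `school_rank`, which is why the attack's effectiveness is limited under DA with many undesirable students.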
On the second question, as EhdA mentioned, cutting the feedback loop by ignoring data would be effective across scenarios. From the results of Section 5.3, one alternative is to maintain some noise in predictions (i.e., lower the prediction accuracy, alpha). More precisely, as discussed in lines 371-376, we want the maximum value of alpha that will introduce sufficient noise to make adversarial interaction attacks inefficient (under the current market parameters, e.g., beta, theta). This is, however, to some degree, a theoretical solution. While it makes sense mathematically, it is unclear how to find this optimal value of alpha in practice, as many parameters (e.g., precise utility functions, how forward-looking returning agents are) are private information. Moreover, since these attacks are detrimental to social welfare, returning agents might be reluctant to share truthfully whether they would implement adversarial interaction attacks.
Based on the simulation results, another solution could be giving more decision-power to returning agents. For example, by allowing them to fully express their preferences (instead of using lotteries), we might eliminate the incentive to attack when utilities are positive or all places will be filled. However, the intervention is ineffective when utilities are negative and there are fewer non-returning agents than places. We discuss this intervention in lines 401-404 but not what to do in the latter scenario (which corresponds to realistic settings, e.g., refugee assignment). As mentioned by EhdA, this might require a different mechanism design. For instance, we could use top trading cycles; start with a random allocation of, e.g., refugees to locations and redistribute refugees based on the preferences of locations. It is important to investigate, though, the resulting loss in welfare.
Finally, one alternative solution could involve legislation. While detecting and punishing adversarial interactions is likely difficult to impossible, there could be a relocation of funds to counteract the undesirability of matches. For example, all locations could contribute equally to a collective account that covers the estimated costs of integrating all refugees. Then, when a refugee is matched to a location, that location also receives the paired funds for integrating the refugee. Doing so will lead to a positive utility scenario and potentially to equal desirability of all non-returning agents.
Altogether, our initial analysis seems to suggest efficient interventions will likely depend on the application domain and might require input from domain experts to assess their feasibility. Thus, in order to avoid giving a hasty suggestion, we concentrated on underlining the main causes for problems and kept the discussion on potential solutions at a high level. We left this interesting and important analysis for future work.
---
Rebuttal 2:
Comment: I would like to thank the authors for their response, as I found it interesting. I like the direction of this work and continue to support this paper. | Summary: This paper introduces a fresh perspective on attacks called *adversarial interaction attacks*, which can occur in markets that involve both a returning and a non-returning side. In such markets, agents' preferences are shaped by prediction mechanisms. While previous research has examined strategic behavior separately within the realms of matching and prediction mechanisms, the authors uncover the existence of these adversarial interaction attacks by explicitly considering the feedback loop between these mechanisms.
To investigate this phenomenon, the authors create a simplified setting and make several observations. They demonstrate that agents are driven by incentives to employ these attacks for personal gain. Moreover, the authors provide evidence that when returning agents adopt these attacks, it not only reduces the overall benefit but also amplifies inequality among non-returning agents.
Strengths: **Significance:** This paper provides important insights into adversarial interaction attacks in markets with returning and non-returning sides. By shedding light on the underlying incentives and repercussions associated with these attacks, the authors deepen the understanding of strategic behavior within matching and prediction mechanisms. It also emphasizes the importance of developing strategies to mitigate these attacks and promote fairness in market outcomes.
**Novelty:** The attacks presented in this paper are novel. Previous studies have focused on strategic behaviors in either matching markets or prediction mechanisms individually, but this paper introduces a new perspective by examining the interplay between the two.
**Clarity and Quality:** The paper is well-written and comprehensible.
Weaknesses: The setting analyzed is a bit too simplistic; perhaps this can be improved in the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Zyns for their feedback on and engagement with our manuscript! We appreciate the positive feedback on the significance and novelty of our work and the expressed concern regarding the simplicity of our model.
Regarding the latter, we tried to follow the model-simplicity principle introduced by Axelrod (1997) and widely used since. More precisely, our goal was to keep the model minimal in order to highlight the source of vulnerability. This also allowed us to create a general framework and, thus, show the feedback loop could be problematic in multiple settings.
That being said, we agree that besides the advantages it provides, the simplicity of the model is also a limitation. As such, we reserved subsection 5.4 to discuss this choice and underline that a central direction for future work would be to consider more realistic models. As a first step, we used the appendix to extend the model (e.g., by considering more agents, alternative matching mechanisms, and k-NN as a more realistic recommender system) and perform simulations. This additional analysis largely confirmed our theoretical results. Most importantly, when there are few non-returning agents compared to available places and the utility of returning agents is always negative, adversarial interaction attacks continue to be effective for all analyzed market characteristics. While this analysis does not exclude the need for more realistic settings, we hope it provides some evidence that such attacks are worth investigating within various application domains.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I've read the response, and I've decided to keep my scores. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the organizers and reviewers for the opportunity to further discuss our work in this response phase and author-reviewer discussion! We are grateful for the extra work involved and will try to address the questions through direct responses. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Perceptual Kalman Filters: Online State Estimation under a Perfect Perceptual-Quality Constraint | Accept (poster) | Summary: This work studies the problem of temporal signal reconstruction for corrupted data. Under a perfect perceptual-quality constraint, the previously regarded optimal model, i.e., the Kalman filter, is shown to face a fundamental dilemma. A recursive formula for perceptual filters is proposed and empirically validated on a video reconstruction problem.
Strengths: + This work is put in a proper context in the literature, where the perception-distortion trade-off is an important theory that has attracted only limited research interest.
+ Reasonable problem formulation, and detailed theoretical derivations.
Weaknesses: - Is the proposed tool generalizable to other fidelity measures in addition to MSE?
- The effectiveness of the proposed tool is only validated on a single sample; more samples are expected.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: It's better to split Eq. 17 into multiple lines for improved formatting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
*Can the method be generalized beyond MSE?*
Please note that in Thm. 4.1 we have a general form for linear perfect-perceptual quality filters (Eq.14). While the form (14) is optimal for MSE, it can be used as a representation for linear filters in general (but see restriction below).
The constraints on the coefficients ($Q_k-\pi_k S_k \pi_k^\top - \phi_k \Sigma \phi_k^\top \succeq 0$) are necessary and sufficient for perfect perception, again, regardless of the cost objective. It indeed might be possible to optimize coefficients for objectives other than MSE under these constraints to gain optimal perceptual *linear* filters. Note, however, that under general distortion measures, optimal filters might be non-linear. We will add this discussion to the paper. Thank you for the interesting remark.
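As a concrete illustration of this point (with arbitrary stand-in matrices, not values or code from the paper), the perfect-perception condition $Q_k-\pi_k S_k \pi_k^\top - \phi_k \Sigma \phi_k^\top \succeq 0$ can be checked numerically for a candidate coefficient pair via the eigenvalues of the symmetric residual matrix:

```python
import numpy as np

def satisfies_perfect_perception(Q, pi, S, phi, Sigma, tol=1e-9):
    """Check the constraint Q - pi S pi^T - phi Sigma phi^T >= 0 (PSD)."""
    M = Q - pi @ S @ pi.T - phi @ Sigma @ phi.T
    # Symmetrize against floating-point round-off before the eigendecomposition.
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

# Arbitrary illustrative matrices (NOT from the paper).
Q = 4.0 * np.eye(2)      # stand-in for Q_k
S = np.eye(2)            # stand-in for S_k
Sigma = np.eye(2)        # stand-in for the covariance Sigma
pi_ok, phi_ok = 0.5 * np.eye(2), 0.5 * np.eye(2)
pi_bad = 3.0 * np.eye(2)

print(satisfies_perfect_perception(Q, pi_ok, S, phi_ok, Sigma))   # True
print(satisfies_perfect_perception(Q, pi_bad, S, phi_ok, Sigma))  # False
```

Such a check could, in principle, serve as the feasibility test inside an optimization over the coefficients for objectives other than MSE, as discussed above.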
*More validation is expected:*
We validate our method on a 2d oscillator example, and two video reconstruction scenarios, which were included in the main text within the space limitations. Additional experimental results can be found in Appendix H, including inverted pendulums and further results for the video reconstruction.
Please refer to our "global" response, where we add an experiment demonstrating many additional dynamics. We will add this demonstration to the final version.
*Formatting Eq. 17:*
Thank you for your suggestions; we will format Eq. 17 as you suggested.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: The reviewer appreciates the authors' responses to address the raised concerns. | Summary: This article addresses the problem of optimal causal filtering under a perfect perceptual-quality constraint. The authors introduce the concept of an unutilized information process and present a recursive formula for perceptual filters. The study demonstrates the effects of perfect perceptual-quality estimation on video reconstruction. Overall, this article contributes to understanding optimal causal filtering under a perceptual-quality constraint and offers new insights for addressing this issue.
Strengths: Unfortunately, for me the paper was not easily accessible, as I am not familiar with studies of signal reconstruction.
·The overall problem is interesting and has practical significance.
·I find this work quite original to the best of my limited knowledge.
·The paper to me seems technically sound and theoretical assumptions are formally proved.
·The appendix materials have been carefully prepared, which can be of great help in understanding the paper. Meanwhile, the open-source code provides convenience for community reference.
Weaknesses: ·Experimental analysis is limited; is it necessary to showcase more diverse video scenarios?
·Is it necessary to add a conclusion section to summarize the conclusions of this paper, clarify its possible application scenarios, and point out directions that could be followed in future research?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: As mentioned above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper clearly describes societal impact and potential limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive response.
Our focus in this work was on analytic closed form results. To achieve this we applied our algorithms and analysis to the Gauss-Markov setting, where we also focus our empirical efforts. Future work, extending our results to more general domains, will be able to compare empirical results in more diverse settings. In this respect, please also see our response to reviewer 3Cww.
We did not include a conclusion section due to the limited space. We will add it to the final version.
Please note that additional experimental results can be found in the Appendix, as well as in our "global" response. | Summary: This paper aims to study the problem of optimal causal filtering under a perceptual-quality constraint. The main contribution of this paper is to provide a mathematical framework and a closed-form solution to the aforementioned problem under some mild conditions. The experimental results on a video reconstruction problem demonstrate the effectiveness of perceptual-quality estimation.
Strengths: 1. The cost of temporal consistency investigated in this study holds significant importance in the field of online restoration.
2. The experimental results on a video reconstruction problem provide evidence for the qualitative effects of the proposed perceptual filtering.
Weaknesses: 1. The readability of the paper is restricted due to the oversimplification of the formula derivation, such as in the case of Formula 16.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The authors should concisely explain why they did not compare their method with other perceptual-quality constrained SOTA methods in online restoration. Adding a comparison with SOTA methods, rather than solely comparing to the Kalman filter, would enhance the overall persuasiveness.
2. When both constraints 6 and 7 are applied simultaneously, it may result in an additional MSE loss. Under what conditions can this loss be alleviated while still satisfying both constraints?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations are addressed in this paper. It is recommended that the authors make an effort to improve the readability of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
We did our best to make our work readable. We will make an effort to clarify equation derivations and improve readability in the final version.
As for Eq.16, it is a direct consequence of the MMSE orthogonality property, see e.g. [8]. We will clarify this in the text.
To the best of our knowledge, there is no other work that addresses perfect-perception reconstruction (in the mathematical sense) in causal filtering problems. While temporal consistency is an emerging topic in the design of deep models and architectures, none of the existing methods are designed to achieve perfect perception as in our work. We will add a discussion on this issue to the paper.
Under the causality constraint (6), the deterministic Kalman filter is known to be MSE optimal. Once (7) is also imposed, the MSE is expected to grow, as shown analytically for the static case in [8]. This cost is alleviated only in extremely degenerate cases where the KF perfectly estimates the trajectory (e.g. full observations, or a deterministic process).
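For readers less familiar with the baseline discussed here, a minimal sketch of the causal (deterministic) Kalman recursion may help; the scalar dynamics and the `kalman_filter` function below are illustrative assumptions, not the paper's actual model or setting.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Causal Kalman filter for a scalar Gauss-Markov process
    x_{t+1} = A x_t + w_t (Var Q), observed as y_t = C x_t + v_t (Var R).
    Each estimate uses only past and current observations (causality)."""
    x, P = x0, P0
    estimates = []
    for yt in y:
        # predict one step ahead
        x, P = A * x, A * P * A + Q
        # update with the current observation only
        K = P * C / (C * P * C + R)
        x = x + K * (yt - C * x)
        P = (1 - K * C) * P
        estimates.append(x)
    return np.array(estimates)
```

With near-noiseless observations (tiny R) the filter tracks the observations almost exactly, which corresponds to the degenerate regime mentioned above where the perception constraint adds no cost.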
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. After reading the response as well as the other reviewers' comments, I would like to increase my rating. | Summary: This submission introduces the Perceptual Kalman filter as a tractable solution to the online state estimation problem under perfect perpetual-quality constraints. This means the authors review the problem of state estimation under perceptual constrains (i.e. the joint distribution of any sub-sequence of states should match the original data). Then they derive a tractable approximation of the perceptual filters which have an analytic solution. Experiments were conducted on a harmonic oscillator and a dynamic texture domain. The harmonic oscillator experiments capture the MSE due to perceptual consistency and also due to the approximation used for tractability. The dynamic prediction domain demonstrates the approach scales to high-dimensions.
Strengths: 1. Clear and thorough presentation of perceptual estimation and its relationship to prior work (in particular the Kalman Filter).
2. Easy-to-understand derivation of the Perceptual Kalman filter and in particular the trade-offs made in order to obtain tractability.
3. Experiments that clearly compare how the technique compares to prior work, the theoretical maximum performance and the cost of tractability. Scalability experiments also demonstrate that this works for large linear systems.
Weaknesses: 1. Although the linear systems and observations have wide applicability, the paper would be improved by mentioning this limitation and why it has or doesn't have a significant effect in the applications where perceptual quality of state-estimates is desired. This is mainly to explain how widely the linear results presented by the paper apply to the problem landscape of the general case.
2. The experiments only show results for a harmonic oscillator. Although this is extremely useful as a way to visualize the consistency gap and the gap due to $\Phi_k = 0$, experiments showing how different dynamics can change these gaps would be very useful.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How bad can the $\Phi_k = 0$ approximation hurt your performance? Is there some provable bound for the worst-case?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I think the authors present the limitations of the method clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
Nonlinear filtering is in general analytically intractable even in the standard setting of classic state estimation without constraints. Given the absence of current theoretical analysis of perceptual constraints in filtering, it is natural to focus on the linear setting at the current state of knowledge and gain analytical insight. Even this has proved to be challenging, and a clear goal is to extend this work to more general settings. While temporal consistency is an emerging topic in the pracical design of deep models and architectures, none of the existing methods are designed to achieve perfect perception as in our work, hence empirical comparisons are hard. We will add a discussion on this issue to the paper.
An interesting direction for future work is the study of domains with hidden linear-Gaussian dynamics in some latent space (e.g. using a GAN, VAE, etc.), while the transformation back to signal space is nonlinear. We will mention this as future work.
Regarding a bound on the degradation due to $\varPhi_k=0$ reduction: In fact, there is no such general bound. There are possible cases where this reduction may lead to worse error, while in other scenarios it might be optimal. Let us examine two extreme cases. In one extreme, when an observation is missing, the corresponding innovation is uncorrelated with the estimand, hence discarding past unutilized information means that the state update step in Algorithm 1 is completely random, which results in high MSE. In the other extreme, for processes where states at different timesteps are weakly correlated ($\rho(A)\ll 1$ or $\rho(Q) \gg \rho(P_0))$, unutilized information rapidly becomes irrelevant and can be safely discarded.
We empirically demonstrate the efficiency of the reduced filters in our numerical Section (Fig. 3) and in Appendix H.2. We’ll add a discussion about the intuition behind the influence of $\varPhi_k$. Thank you for this interesting remark.
Regarding your request to show gaps for different dynamics: we conducted an experiment that demonstrates the MSE gaps for different settings. The experimental details and numerical results appear in our "global" response and the related PDF. We will add this demonstration to the final version. | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments.
Following some of your remarks, we conducted an additional experiment to demonstrate different MSE gaps between temporally consistent and inconsistent filters (based on harmonic oscillators with different dynamics). Detailed description and results appear in the attached PDF.
Specifically, this experiment demonstrates how different dynamics yield different gaps (due to temporal consistency, and due to $\varPhi_k = 0$). We used different harmonic oscillator dynamics (Section 5.1), where the marginally stable matrix $A$ is multiplied by a varying factor of $\rho$, and observations are given only up to a certain time.
We plot the terminal MSE at time $T$ for each filter, normalized by the variance of $x_T$. The estimator $\hat{X}_{direct}$ is obtained using the direct optimization approach (Appendix C).
We observe that when timesteps are more correlated, both consistency and unutilized information play a major role, hence the MSE gaps between filters grow. This experiment also demonstrates that past unutilized information is helpful in the absence of current observations.
We will add this experiment to the paper.
Pdf: /pdf/a6c8d95fc44e8623c71e38a62f05e2c5762d6d25.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Temporal Conditioning Spiking Latent Variable Models of the Neural Response to Natural Visual Scenes | Accept (poster) | Summary: The authors propose a new model called temporal conditioning spiking latent variable models (TeCoS-LVM) that uses spiking neurons to simulate neural responses to visual stimuli. They claim that this approach helps to preserve information in spike trains and adapt to temporal dependencies. In experiments with retinal data, they demonstrate that the TeCoS-LVM models produce more realistic spike activities, fit spike statistics accurately, and generalize well to longer time scales.
Strengths: Models that accurately predict spiking activity are still an open problem. The authors make an interesting contribution to this by defining a latent variable model that is trained with an information bottleneck objective. The work is, as far as I can tell, original. A few links to existing work are missing, which I mention below. The paper is mostly well and clearly written, and has an overall good quality. I have a few questions for the authors, but if they are answered satisfactorily, the paper makes a significant contribution to the development of dynamic spiking models.
Weaknesses: I have a few questions regarding the paper and a few hints for relevant literature.
---
Decoder:
* I couldn't find information on the actual form of the decoder in the main paper. In particular, how is $\psi_{dec}$ computed?
* How are the latent states integrated into the LIF neurons?
* What is also not entirely clear to me is whether the model has access to real spike trains from previous time steps or not.
* From the figure it doesn't look like it, but since the decoder is not clear to me I cannot be sure. It would be nice if the authors could clarify. This is particularly relevant to make sure the comparison to the CNN is fair.
Experiments:
* How are the spiking rates computed?
* What loss function is used to train the CNN? Poisson loss? Would it be possible to also train the spiking model with Poisson loss (i.e. do you have the rate)? The reason I am asking that is that Fig 3B shows that your TECOS models perform better in terms of spike train dissimilarity. However they were trained on that. If the CNN was trained on a different objective, the comparison is a bit unfair.
Fig3:
* How do you generate spikes from the CNN in Fig 3A.
---
Regarding dynamic models of neural activity, these two paper might be relevant (in particular regarding the point of predicting variable length sequences):
- Fabian H. Sinz, Alexander S. Ecker, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Emmanouil Froudarakis, Dimitri Yatsenko, Xaq Pitkow, Jacob Reimer, Andreas S. Tolias Stimulus domain transfer in recurrent models for large scale cortical population prediction on video
- Eric Y Wang, Paul G. Fahey, Kayla Ponder, Zhuokun Ding, Andersen Change, Taliah Muhammad, Saumil Patel, Zhiwei Ding, Dat T. Tran, Jiakun Fu, Stelios Papadopoulos, Katrin Franke, Alexander S. Ecker, Jacob Reimer, Xaq Pitkow, Fabian H. Sinz, Andreas S. Tolias Towards a Foundation Model of the Mouse Visual Cortex
Regarding spiking neurons, this paper might be relevant:
- Ramesh, Poornima, Atayi, Mohamad, Macke, Jakob H: Adversarial training of neural encoding models on population spike trains
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I integrated the questions in the weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in one short paragraph at the end. It could be a bit more extensive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer,**
**Thanks very much for your detailed review and positive comments, which have greatly encouraged us! Our response is as follows.**
> Decoder:
>
> 1. I couldn't find information on the actual form of the decoder in the main paper. In particular, how is $\psi_{dec}$ computed
> 2. How are the latent states integrated into the LIF neurons
> 3. What is also not entirely clear to me is whether the model has access to real spike trains from previous time steps or not
> 4. From the figure it doesn't look like it, but since the decoder is not clear to me I cannot be sure. It would be nice if the authors could clarify. This is particularly relevant to make sure the comparison to the CNN is fair
**Re:** Thanks for your comment! We will improve relevant descriptions in our revised manuscript to make them clearer.
1. The temporal conditioning decoder $\psi^{dec}$ is a two-layer spiking MLP that takes the current latent $\mathbf{z}_t$ and the hidden state $\mathbf{h}_{t-1}$ (which acts as a sensory memory) as inputs, and outputs neural response predictions.
2. Currently, in our implementation, we feed the latent states as the electric current input into the spiking neuron.
3. & 4. The model does not access real spike trains (or stimuli) from previous timesteps (as described in Figure 1C in the manuscript). Taking time $t$ as an example, our model only takes the current stimulus $\mathbf{x}_t$ and the sensory memory (hidden state) $\mathbf{h}_{t-1}$. This means that, compared to the CNN model, our model receives fewer inputs (the CNN receives a batch of stimuli [1,2], while we only have a single-timestep input). Therefore, regarding the amount of input information received for producing a single-step prediction, our comparison actually favors the CNN, yet our model still performs better.
[1] Deep learning models of the retinal response to natural scenes. *NeurIPS*, 2016.
[2] Interpreting the retinal neural code for natural scenes ... *Neuron*, 2023.
> Experiments:
>
> 1. How are the spiking rates computed
> 2. * What loss function is used to train the CNN? Poisson loss? Would it be possible to also train the spiking model with Poisson loss (i.e. do you have the rate)
> * The reason I am asking that is that Fig 3B shows that your TECOS models perform better in terms of spike train dissimilarity. However they were trained on that. If the CNN was trained on a different objective, the comparison is a bit unfair.
**Re:**
1. TeCoS models output spikes directly, so in our tests the firing rates are calculated from 20 repeated trials.
2. * Yes, the CNN model is trained using the Poisson loss [1]. Since our models directly output spike trains, it is not possible to use the Poisson loss for them *directly* (additional conversions would be required to achieve this, and there is little need to do so).
   * Regarding the spike train distance comparison, we have conducted additional experiments that use different spike train distances to make the comparison more convincing (***Please refer to "Author Rebuttal by Authors" at the top of this page for TABLE 1 and 2.***). According to the results in Table 1 and Table 2, on these spike train distance metrics, our series of methods (TeCoS/TeCoS-Noisy and our two variants) maintains a stable superiority over the baselines.
[1] Deep learning models of the retinal response to natural scenes. *NeurIPS*, 2016.
> Fig3: How do you generate spikes from the CNN
**Re:** Following previous works [1,2], the spikes are generated by sampling from the predicted Poisson distributions.
[1] Deep learning models of the retinal response to natural scenes. *NeurIPS*, 2016.
[2] Interpreting the retinal neural code for natural scenes... *Neuron*, 2023.
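That sampling step can be sketched as follows; the rates, bin size, and trial count below are illustrative placeholders (not values from [1,2]), as is the `sample_spikes` helper name.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spikes(rates, dt, n_trials):
    """Sample spike counts per time bin from predicted firing rates (Hz).
    Each of the n_trials repeats is an independent Poisson draw."""
    lam = np.asarray(rates) * dt            # expected count per bin
    return rng.poisson(lam, size=(n_trials,) + lam.shape)

spikes = sample_spikes([5.0, 50.0, 5.0], dt=0.1, n_trials=1000)
# trial-averaged counts recover roughly rate * dt per bin
```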
> Regarding dynamic models of neural activity, these two papers might be relevant (in particular regarding the point of predicting variable length sequences):
>
> * Stimulus domain transfer in recurrent models for large scale ... *NeurIPS*, 2018.
>
> * Towards a Foundation Model of the Mouse Visual Cortex. *bioRxiv*, 2023
>
> Regarding spiking neurons, this paper might be relevant:
>
> * Adversarial training of neural encoding models ... *NeurIPS Workshop on NeuroAI*, 2019.
**Re:** Thank you for pointing out the relevant literature! We are very happy to add these relevant discussions to our revised manuscript to address your concern. We find that these excellent works (Sinz 2018, Wang 2023) leverage a hybrid "core" model (feedforward+recurrent) to shift along the temporal dimension. This is closely related to the temporal conditioning structure we used in this work. Regarding the related literature (Poornima 2019) you mentioned, we noticed that the part using adversarial training for optimization to capture the deterministic and stochastic components in neural population activities is very relevant to our motivation for using noisy LIF neurons.
---
Rebuttal Comment 1.1:
Title: Thanks for your clarifications
Comment: Dear authors, thanks for your clarifications. I have read the rebuttal. I will wait for the discussion with the other reviewers before adjusting (or keeping) my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your response!
Comment: Thank you for your recognition of the idea of our work, as well as your efforts to improve our manuscript!
We'll make sure that the points you raised are clear in the revised version of our manuscript.
*Authors.* | Summary: The authors model retinal ganglion cell responses to natural stimuli using a spiking latent variable model. They employ the (variational) Information Bottleneck (IB) method to compress the visual representation, similar to last year’s NeurIPS paper by Rahamni et al. However, this work differs in using binary responses (discretizes spike trains) instead of count responses, an input channel size of 1 instead of T, and temporal conditioning (RNN) instead of GP prior to handle temporal dependencies. They compare their results to Rahamni et al’s IB method and a CNN-based architecture on 4 salamander datasets. They show that those baselines fail to capture the spike autocorrelation, and a noisy version of their LVM model can also reproduce real firing rates more accurately.
Strengths: - good performance
- scales well to long time series
- modeling single trial spike trains, not just trial-averaged firing rates
Weaknesses: - Stimuli are converted to spikes, then to real valued signals (LVM), then back to spikes (ganglion cells). This does not align with visual processing in the retina: photoreceptors exhibit graded responses, as do bipolar cells (typically), whereas ganglion cells spike.
- The authors stress the importance of taking temporal dependencies into account, but instead of comparison to IB-GP they compare to IB-disjoint. Whereas the former has a temporal prior over latents the latter does not.
- A comparison to a method that also predicts spike trains not firing rates is lacking. Such a method could be competitive with regard to the considered spike train dissimilarity and autocorrelation measures.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Eq (1) 3rd eq with $u_t$ on both sides, yielding $u_t=u_{reset}/o_t$, is odd.
- line 103: Isn't $\tau_m$ is the membrane constant, not $\tau$? It seems $\tau=e^{-\Delta t/\tau_m}$ with bin size $\Delta t$.
- Why do you compare to IB-disjoint, not IB-GP? The authors of [19] favored IB-GP (temporal prior over latents) over IB-disjoint (no temporal prior).
- Does IB-GP need fewer latents that are more interpretable latents than yours?
- Line 290: Where is the Fig 5 you refer to?
- There are various measures of spike train dissimilarity in the literature, e.g. Victor-Purpura distance, Van Rossum distance, SPIKE-distance. You use the MMD in results and use an objective function (Eq 10) that specifically optimizes for this particular measure, thus obtaining good results. Do your conclusions hold for the other commonly used measures of spike train synchrony as well?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer,**
**Thank you very much for your thorough review and positive comments, which have greatly encouraged us! Our response is as follows.**
> Stimuli are converted to spikes, then to real valued signals (LVM), then back to spikes (ganglion cells). This does not align with visual processing in the retina: photoreceptors exhibit graded responses, as do bipolar cells (typically), whereas ganglion cells spike.
**Re:** Yes, the computational model reflects some, but not all, architectural, computational, and anatomical motifs of neural circuit formations [1-4]. The proposed dynamic spiking model aims to reproduce the neural activities by simulating the underlying general neural coding processes (as also mentioned in Reviewer qcrE); in doing so, it ignores some known neuronal biophysical motifs. And in this manuscript, we specialize our model to a natural stimuli case (as a paradigmatic example) for evaluations and investigations. We would like to add related discussions in the revised version of the manuscript to address your concern.
[1] Stimulus-and goal-oriented frameworks ... *Nat. Neurosci.*, 2019.
[2] The Tolman-Eichenbaum machine ... *Cell*, 2020.
[3] Fitting summary statistics of neural data ... *NIPS*, 2021.
[4] Mesoscopic modeling of hidden spiking neurons. *NIPS*, 2022.
> Eq-1-3rd eq with $u_t$ on both sides, yielding $u_t=u_{reset}/o_t$ , is odd.
**Re:** Thanks for the correction. We will revise this expression in our revision to make it clearer. The intention here is to indicate the reset step in the iterative form we use, i.e., the membrane potential is reset to $u_{reset}$ after the neuron emits a spike.
> line103: Isn't $\tau_m$ is the membrane constant, not $\tau$? It seems $\tau=\exp(-\Delta t / \tau_m)$ with bin size $\Delta t$.
**Re:** Thanks for pointing out this unclear part. In our experiments, we actually handle data that is sampled using a fixed timestep. Thus, following previous work (e.g. [1]), when implementing spiking models in our case, we replace the genuine time constant with a fixed time constant which "incorporates" $\Delta t$. That is, in our implementation in this case, the model weight is associated with the simulation timestep (although strictly, the model weights should be irrelevant to $\Delta t$). We will improve the presentations in our revision to make it clearer.
[1] Brain-inspired global-local learning incorporated ... *Nat. Comm.*, 2022.
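A minimal sketch of such a discrete LIF update, with the decay $\exp(-\Delta t/\tau_m)$ absorbed into one fixed constant `tau` (all constants below are illustrative, not the values used in the paper):

```python
import numpy as np

def lif_step(u, i_in, tau=0.9, u_th=1.0, u_reset=0.0):
    """One discrete-time LIF update for a fixed simulation timestep.
    `tau` plays the role of exp(-dt/tau_m) folded into a fixed constant."""
    u = tau * u + i_in                    # leaky integration
    o = (u >= u_th).astype(float)         # emit a spike at threshold
    u = np.where(o > 0, u_reset, u)       # hard reset after spiking
    return u, o
```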
> * The authors stress the importance of taking temporal dependencies into account, but instead of comparison to IBGP they compare to IB-disjoint. Whereas the former has a temporal prior over latents the latter does not. Why do you compare to IB-disjoint, not IBGP? The authors of [19] favored IBGP ...
>
> * Does IBGP need fewer latents that are more interpretable latents than yours?
**Re:**
* This is mainly because, in evaluations on our data, we found that IB-Disjoint performed better. E.g., on Mov1Ret1, the Pearson CC *(IB-Disjoint vs IB-GP)* was 0.65 vs 0.61. This phenomenon does occur ("…IB-Disjoint obtains superior performance than IB-GP…", from Appendix 4 in [19]). Another possibility is that our re-implementation (as the code of [19] has not yet been released) lacks some technical details omitted in the paper [19]. Besides, IB-GP has additional parameters (those of the Cauchy kernel) that need to be adjusted, which makes its hyper-parameter tuning quite tricky when the data changes.
* In the main experiments of the IB paper, only nine neurons were actually used, so the latent dimension was small. Our tests show that our methods can actually use a smaller latent dimension to produce more accurate predictions. For example, on Mov2Ret2, the firing rate Pearson CC of TeCoS-LVM-Noisy with *latent-dim=8* was *0.81*, while that of the IB model with *latent-dim=32* was *0.66*. Thus, our method can use fewer latents, which is more advantageous from the interpretability perspective.
> Line 290:Where is Fig5
**Re:** Yes, it is in the Appendix (it was re-uploaded as *Fig1* in the PDF file in the "*Author Rebuttal by Authors*" panel). We would like to revise our manuscript to fix the unclear reference you mentioned.
> A comparison to a method that also predicts spike trains not firing rates is lacking...
**Re:** To address your concern, we have conducted additional experiments using two variants of our model, following your comments (***Please refer to "Author Rebuttal by Authors" panel at the top of that page for TABLE 1, 2***). We noticed that, these variants also have certain advantages compared to the baseline, which also demonstrates the efficiency of our models.
> There are various measures of spike train distances, e.g. Victor Purpura, van Rossum, SPIKE dist...
**Re:** We have conducted additional evaluations on two benchmarks (Mov1Ret1, Mov2Ret2) following your comment. According to these results (***Please refer to "Author Rebuttal by Authors", TABLE 1 and TABLE 2***), on these spike train distance metrics, our series of methods (TeCoS/TeCoS-Noisy and the two variants) maintains a stable advantage over the baselines.
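For context on one of the metrics being discussed (generic background, not the authors' implementation), a van Rossum-style distance filters each binned spike train with a causal exponential kernel and takes the L2 norm of the difference; `tau` below is an illustrative kernel width in bins.

```python
import numpy as np

def van_rossum(s1, s2, tau=4.0):
    """Van Rossum-style distance between two equally long binned spike trains."""
    t = np.arange(len(s1))
    kernel = np.exp(-t / tau)                 # causal exponential kernel
    f1 = np.convolve(s1, kernel)[: len(s1)]   # filtered train 1
    f2 = np.convolve(s2, kernel)[: len(s1)]   # filtered train 2
    return float(np.sqrt(np.sum((f1 - f2) ** 2)))
```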
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I appreciate the supplementary results for a model with real-valued hidden neurons, as well as the inclusion of additional evaluation metrics. These metrics further demonstrate that TeCoSLVM maintains its superiority over the baselines.
However, when considering these metrics, the noisy version of TeCoSLVM no longer holds the same preference. Could you provide clarification on this matter? Do you calculate these alternative metrics on individual trials, with subsequent averaging of the scores obtained, while for CC, trial averages are initially computed to derive rates before calculating the score?
---
Reply to Comment 1.1.1:
Comment: > Do you calculate these alternative metrics on individual trials, with subsequent averaging of the scores obtained, while for CC, trial averages are initially computed to derive rates before calculating the score?
**Re:** Thank you for your reply. Yes, the spike train distance metrics are computed on multiple trials and then averaged; while for CC, trial averages are initially computed to get firing rates.
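The two evaluation pipelines described in this exchange can be sketched as follows; the toy arrays and the stand-in `dist` function in the test are illustrative only (the actual metrics are Victor-Purpura, van Rossum, etc.).

```python
import numpy as np

def rate_cc(pred_trials, true_trials):
    """Pearson CC on firing rates: average across trials first, then score."""
    r_pred = pred_trials.mean(axis=0)   # trial-averaged predicted rate
    r_true = true_trials.mean(axis=0)   # trial-averaged recorded rate
    return np.corrcoef(r_pred, r_true)[0, 1]

def mean_trial_distance(pred_trials, true_trials, dist):
    """Spike-train distance: score each trial pair, then average."""
    return np.mean([dist(p, t) for p, t in zip(pred_trials, true_trials)])
```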
> However, when considering these metrics, the noisy version of TeCoSLVM no longer holds the same preference.
**Re:** It is true that when considering only the spike train distance metrics, the performance of the noisy version of TeCoS-LVM is slightly affected (as in *manuscript Fig. 3B, Mov1Ret2, Mov2Ret2*) due to the noise-perturbed neuronal dynamics. Although there is a slight decrease in spike timing precision, the overall impact of the noisy version is advantageous. Notably, the noisy version consistently and significantly improves the firing rate CC compared to the noise-free TeCoS-LVM in all our evaluations. Another compelling merit is that one can easily reproduce the trial-by-trial variance by incorporating these noisy spiking neurons.
---
Finally, thanks for reviewing and improving our work. We'll ensure all points you raised are clear in our revision!
*Authors* | Summary: This paper proposes a spiking latent variable model of neural response to natural visual stimuli. The model is trained to directly predict spike trains instead of trial-averaged firing rates, and designed to work, at test time, on sequences that are longer than sequences seen during training.
Strengths: 1. The authors identify two critical limitations of biophysically-realistic computational models. They propose a model that effectively address these limitations. Directly predicting spike trains using spiking neural networks, appears to be a natural solution.
2. The empirical experiments suggest that this results in a more accurate prediction of neural responses.
Weaknesses: 1. A main claim of the paper is that the model generalizes to longer time scales unseen during training. It is unclear whether the model is actually leveraging long-range interaction in its prediction. The results do not demonstrate that the memory of the model is holding long-range history from the sequence. It is unclear whether a model with a sliding window (of 1s) would perform the same or worse.
2. While the authors do a great job at analyzing different aspects of the model including the impact of hyperparameters on the learned latent space, it remains unclear how this more bio-realistic model provides novel insights into sensory processing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The model can be applied to sensory data of other modalities, how sensitive is the model to the choice of hyperparameters? For example, the latent dimension or the hidden dimension?
- Why was the warmup period similar for all four datasets (0.5s), one would expect to observe differences due to different dynamics of the stimuli.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As stated, the proposed model is only partially realistic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer, many thanks for your detailed review and comments! Our response is as follows.**
> A main claim of the paper is that the model generalizes to longer time scales [...]
**Re:** We would like to point out that our model only uses single-step stimulus input for prediction and leverages temporal dependencies by holding a memory in its predictions (as mentioned by Reviewers *QFPa* and *qcrE*). Also, the IB-Disjoint model has an LVM structure and uses a 1s sliding window, similar to the model you described. According to the experimental results in Figs. 2 and 3 of the paper, our model outperforms IB-Disjoint in all evaluations.
To further address your concern, we implemented 1s sliding window variants (*SlidWindVariant* and *SlidWindVariant Noisy*) of our model following your description, and ran tests on the Mov1Ret1 data (table below). The results show that these sliding window variants performed worse on all metrics, reflecting the advantages of using a temporal conditioning structure to handle these temporal interactions.
| metric\model | TeCoSLVM | TeCoSLVMNoisy | SlidWindVariant | SlidWindVariant Noisy | CNN | IBDisj. |
| ------------------ | -------- | ------------- | --------------- | --------------------- | ------ | ------- |
| firing rate CC | 0.579 | 0.727 | 0.567 | 0.681 | 0.690 | 0.653 |
| VictorPurpuraDist. | 12.84 | 14.02 | 18.62 | 19.93 | 19.60 | 21.92 |
| vanRossumDist. | 127.35 | 238.61 | 307.78 | 343.02 | 376.82 | 394.02 |
| SPIKE Dist. | 0.124 | 0.155 | 0.197 | 0.210 | 0.207 | 0.224 |
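For reference, the van Rossum distance reported above maps each spike train to a continuous trace via a causal exponential kernel and takes the L2 distance between the traces. A minimal NumPy sketch; the time constant, bin size, and normalization convention below are illustrative (conventions vary across implementations) and are not the settings used in our evaluation:

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=10.0, dt=1.0):
    """Van Rossum distance between two binned binary spike trains.

    Each train is convolved with a causal exponential kernel
    exp(-t / tau); the L2 distance between the filtered traces
    is returned. tau and dt are in the same time units.
    """
    t = np.arange(0, 5 * tau, dt)           # kernel support (~5 time constants)
    kernel = np.exp(-t / tau)
    fa = np.convolve(spikes_a, kernel)[: len(spikes_a)]
    fb = np.convolve(spikes_b, kernel)[: len(spikes_b)]
    return np.sqrt(dt / tau * np.sum((fa - fb) ** 2))

# Identical trains have zero distance; shifting a spike increases it.
a = np.zeros(100); a[[10, 40, 70]] = 1
b = np.zeros(100); b[[12, 40, 70]] = 1
assert van_rossum_distance(a, a) == 0.0
assert van_rossum_distance(a, b) > 0.0
```

Lower values indicate that predicted and recorded spike times agree more closely, which is why this metric (unlike the firing rate CC) is sensitive to the slight timing jitter introduced by the noisy neurons.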
> While the authors do a great job at analyzing different aspects of the model including [...], it remains unclear how this more bio-realistic model provides novel insights into sensory processing.
**Re:** To name some concrete sensory processing (encoding of natural stimuli) studies that could benefit from our model: (i) fitting neural responses to natural stimuli more accurately, where works like [1] could use our model to identify microcircuits that mediate or gate diverse encoding properties in sensory processing; (ii) characterizing intrinsic/latent dynamics and specific circuit mechanisms, where our model provides an efficient approach to decompose the computations in sensory processing into latents (a generative model that enables a low-dimensional representation [2]). This could allow neuroscientists to generate new hypotheses, or test existing ones, about how interneurons with diverse response properties combine to perform sensory processing [2-4], and provide neurally grounded understandings of the underlying sensory computations.
[1] Multiplexed computations... *Nat. Comm.*, 2017
[2] Stimulus-and goal-oriented frameworks... *Nat. Neurosci.*, 2019.
[3] Toward a unified theory of efficient... *PNAS*, 2018.
[4] Interpreting the retinal neural... *Neuron*, 2023.
> The model can be applied to sensory data of other modalities, how sensitive is the model to the choice of hyperparameters? For example, the latent dimension or the hidden dimension?
**Re:** Our results show that the model is quite robust to the hyperparameters you mentioned and can achieve good performance even when the hidden state/latent space dimension is small.
We evaluated the performance of TeCoS-LVM models under different hidden state and latent space dimensionality settings. The results, shown in Appendix Fig. 6A, indicate that increasing the latent space dimension improved performance, but further increases beyond $32$ had little effect. This also suggests that the number of latent factors for the visual stimuli coding task we currently consider is not very large. On the other hand, increasing the hidden state dimension enhances sensory memory capacity and benefits temporal conditioning operations. As shown in Appendix Fig. 6B, performance does not increase significantly once the hidden state dimensionality reaches a certain extent (about $64$). The results in Appendix Fig. 6B also indicate that the proposed temporal conditioning mechanism can effectively utilize sensory memory and achieve good results even when the hidden state dimension (memory capacity) is small.
> Why was the warmup period similar for all four datasets (0.5s), one would expect to observe differences due to different dynamics of the stimuli.
**Re:** Thanks for your meticulous review. In fact, the setting of this warmup period is currently empirical; *we cannot use it as an "indicator" of the different dynamics of different data*. It is difficult to precisely choose the most "suitable" warmup length for each dataset, since we currently lack a reliable quantitative approach to measure which warmup length is "optimal" or most "correct".
---
Rebuttal Comment 1.1:
Title: Thanks for your clarifications
Comment: Thank you for addressing the points I raised! These clarifications and the additional work carried out in response to all reviewers further strengthens this paper. I increased my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Dear reviewer,
We'd like to express our appreciation for your review and insightful suggestions. Your feedback will greatly improve our work. We'll ensure all points you raised are clear in our revised manuscript.
*Authors* | Summary: This paper proposes a model, which is composed of spiking neural networks and a conventional recurrent network, to reproduce neuronal responses, i.e., retinal ganglion cells. The author claims that the novelty of this model is incorporating the spiking network, which makes the model directly outputs spikes and is more biologically plausible. The author did simulations to show the model outperforms alternatives in reproducing retinal ganglion cells' activities.
Strengths: ### Originality
In my understanding, the paper has two novelties: 1) the model incorporates spiking neural networks as encoder and decoder; 2) the model uses an RNN to provide temporal conditioning which is more flexible than previous models whose kernels have fixed temporal durations.
### Quality
The writing has clear explanations of the probabilistic framework, but it is quite unclear how the probabilistic framework is connected with underlying spiking networks (see my comments below).
Weaknesses: ### Major
I understand the variational information bottleneck framework, and the text of this part is clearly written. However, it is unclear the details of the spiking networks and how they are incorporated into the probabilistic framework.
1) Specifically, how the distributions' parameters $\phi$ and $\psi$ in Eqs. 6 and 7 are related to spiking networks? PS: I only see line 151 saying $\phi$ is a linear readout of spiking neurons' responses.
2) What are the structures of spiking networks in the encoder and decoder? Are the pure feedforward spiking network, or recurrent spiking network?
3) During the model training, did the authors only train the readout weight from the spiking network? or also train the synaptic weights inside the spiking network?
Without providing these details, I am not confident in judging the novelty of this paper. I am happy to see the authors' rebuttal to explain this.
### Minor
- Line 177, “Sec. C” and Line 188, “Sec. E”: I don’t know where these sections are. In the supplementary?
- Eq. 3: does this equation miss a term of $\beta I_c$ in the Lagrangian multiplier?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Conceptually, I don't understand why the model with spiking network encoder and decoder could render better results. I am really interested in some deep, theoretical discussions on this issue. I could understand if the output layer is a spiking network, it would fit the spiking neuronal responses better. Nonetheless, I don't understand why using spiking net as intermediate layers could improve the performance. Can the author provide an alternative result to replace the spiking intermediate layers by rate-based neurons and compare the performance of the two versions?
- Another conceptual question. From the neural pathway point of view, the neural circuits from the retina to the visual thalamus, i.e., LGN, are equivalent to dimensionality reduction because the number of neurons decreases along the pathway. Then, from LGN to the visual cortex, the neuron number expands a lot. In this sense, the visual pathway inside the retina, and the one from the retina to LGN, looks like an __encoder__, while the pathway from LGN to the visual cortex is like a __decoder__. If my statement is true, the retina is just an encoder with fewer neurons in the output layer (retinal ganglion cells) than in the input layer (photoreceptors). Hence, to reproduce retinal ganglion cells' responses, what is the advantage of considering the proposed model architecture (Fig. 1B) with an encoder (dimensionality reduction) and a decoder (dimensionality expansion)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See my comments in Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer, thank you very much for your detailed review and positive comments, which have greatly encouraged us. Our response is as follows.**
> I understand the variational information bottleneck framework, and the text of this part is clearly written. However, it is unclear the details of the spiking networks and how they are incorporated into the probabilistic framework.
>
> 1. Specifically, how the distributions' parameters $\phi$ and $\psi$ in Eqs. 6 and 7 are related to spiking networks?
> 2. What are the structures of spiking networks in the encoder and decoder? Are the pure feedforward [...]
> 3. During the model training, did the authors only train the readout weight from the spiking network [...]
**Re:**
1. For the temporal conditioning prior $\phi^{prior}$, we use a spiking network, which takes the hidden state $\mathbf{h}_t$ as input and outputs the distribution means and variances using linear readout synapses. The temporal conditioning encoder $\psi^{enc}$ is a spiking network that receives the current stimulus $\mathbf{x}_t$ and hidden state $\mathbf{h}_{t-1}$ as inputs and produces the mean and variance using linear readout synapses. The temporal conditioning decoder $\psi^{dec}$ takes the latent $\mathbf{z}_t$ and the hidden state $\mathbf{h}_{t-1}$ as inputs and produces spike activities (predictions).
2. The spiking networks (for parameterizing those $\phi$ and $\psi$s) mentioned above are all feed-forward; they are spiking MLPs. Since we introduced temporal conditioning operations using hidden states to handle temporal dependencies, we followed the feed-forward network structure, as in previous works [1,2]. We found that such a simple feed-forward structure can already produce very competitive performance. Adding more recurrent connections in the model may bring some improvement, but it also increases the complexity and the number of parameters of the model, which is not conducive to further analysis. Therefore, in this work, we only consider using a feed-forward structure.
3. During the model training, we optimize all synaptic weights inside the model, including those of the spiking networks, and the RNN, which maintains the hidden $\mathbf{h}$.
[1] Deep learning models of the retinal response to natural scenes. *NIPS*, 2016.
[2] Interpreting the retinal neural code for natural scenes ... *Neuron*, 2023.
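To make the wiring in points 1-3 concrete, here is a schematic NumPy sketch of the forward pass. The `spiking_mlp` below is a stand-in (a linear layer with a hard threshold) rather than the LIF dynamics used in the actual model, and all dimensions and weights are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def spiking_mlp(x, w):
    """Stand-in for a spiking MLP: linear layer + hard threshold.
    (The actual model uses LIF neuron dynamics.)"""
    return (x @ w > 0.5).astype(float)

def linear_readout(s, w_mu, w_logvar):
    """Linear readout synapses producing distribution parameters."""
    return s @ w_mu, s @ w_logvar

# Illustrative dimensions: stimulus 16, hidden 8, latent 4, output 6 cells.
Dx, Dh, Dz, Dy = 16, 8, 4, 6
W = {k: rng.normal(scale=0.3, size=s) for k, s in {
    "prior": (Dh, Dh), "enc": (Dx + Dh, Dh), "dec": (Dz + Dh, Dy),
    "mu_p": (Dh, Dz), "lv_p": (Dh, Dz),
    "mu_e": (Dh, Dz), "lv_e": (Dh, Dz),
    "rnn": (Dh + Dz, Dh)}.items()}

def step(x_t, h_prev):
    # Prior: spiking net on hidden state -> latent distribution params
    # (these enter the KL term during training; unused in this forward sketch).
    mu_p, lv_p = linear_readout(spiking_mlp(h_prev, W["prior"]),
                                W["mu_p"], W["lv_p"])
    # Encoder: spiking net on (stimulus, hidden) -> posterior params.
    s_e = spiking_mlp(np.concatenate([x_t, h_prev]), W["enc"])
    mu_e, lv_e = linear_readout(s_e, W["mu_e"], W["lv_e"])
    # Reparameterized latent sample.
    z_t = mu_e + np.exp(0.5 * lv_e) * rng.normal(size=Dz)
    # Decoder: spiking net on (latent, hidden) -> predicted spikes.
    y_t = spiking_mlp(np.concatenate([z_t, h_prev]), W["dec"])
    # RNN update maintains the hidden state (tanh cell as a stand-in).
    h_t = np.tanh(np.concatenate([h_prev, z_t]) @ W["rnn"])
    return y_t, h_t

h = np.zeros(Dh)
for _ in range(5):                     # single-step stimulus input per step
    y, h = step(rng.normal(size=Dx), h)
assert y.shape == (Dy,) and set(np.unique(y)) <= {0.0, 1.0}
```

As stated in point 3, all weights (spiking networks, readouts, and the RNN maintaining $\mathbf{h}$) are optimized jointly during training.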
> Line 177, “Sec. C” and Line 188, “Sec. E”: I don’t know where these sections are. In the supplementary?
>
> Eq. 3: does this equation miss a term of $\beta I_c$ in the Lagrangian multiplier?
**Re:** Yes, they are in the Appendix. We will revise our presentation to make this clearer. The $I_c$ term is not missing from Eq. 3, because we are maximizing an equivalent objective [1].
[1] Deep Variational IB. *ICLR*, 2016.
> Conceptually, I don't understand why the model with spiking network encoder and decoder could render better results [...]
**Re:** We have conducted additional experiments following your comments (***please refer to the "Author Rebuttal by Authors" panel above for Tables 1 and 2***). We additionally consider the *Victor-Purpura*, *van-Rossum*, and *SPIKE* distances as metrics. The results show that our variants also achieve competitive performance compared to the baselines, but lag behind the standard TeCoS-LVM models.
Since our main network is feed-forward, using spiking neurons can enhance the capture of the spatiotemporal features embedded in the stimulus stream (which is essentially spatiotemporal) through the neuronal temporal dynamics (which ANN neurons lack), thus producing better predictions. Also, a hallmark of natural stimuli is their sparse latent structure [1,2]. By using spiking neurons in the intermediate layers, the model can take advantage of the sparse representation (compared with those of ANN neurons) to better fit the latent structure of natural stimuli, resulting in improved performance.
[1] *Natural image statistics...* Springer, 2009.
[2] Toward a unified theory of efficient, predictive... *PNAS*, 2018
> Another conceptual question. From the neural pathway point of view, the neural circuits from the retina to the visual thalamus, i.e., LGN, is equivalent to dimensionality reduction because the number of neurons decreases with the neural pathway. And then from LGN to the visual cortex the neuron number will expand a lot. In this sense, the visual pathway inside the retina, and the one from the retina to LGN looks like an **encoder**, while the pathway from LGN to the visual cortex is like a **decoder**. [...] Hence, to reproduce retinal ganglion cells' responses, what is the advance of considering the proposed model architecture (Fig. 1B) with an encoder (dimensionality reduction) and a decoder (dimensionality expansion?
**Re:** Considering the proposed architecture allows us to simultaneously obtain latent dynamics for future interpretation/analyses [1-3] and achieve better modeling performance.
The encoder-decoder we use here is not solely for dimensionality expansion/reduction (Fig. 1A is only schematic), but rather simulates a two-stage efficient neural coding process [4] of inferring latent codes from stimuli and then decoding neural responses from latent codes. Moreover, the latent dimension in our model is adjustable: e.g., if we choose a large latent dimension (larger than the output RGC dimension), then the whole process becomes dimensionality reduction. So the entire TeCoS-LVM model performs the encoding process (stimuli to the retina), which is, functionally, an encoder as you mentioned. We will supplement the relevant content in the revised manuscript and improve the graphics in Fig. 1 to address your concern.
[1] Inferring single-trial neural population dynamics... *Nat. Methods*, 2018.
[2] Targeted neural dynamical modeling. *NIPS*, 2021.
[3] Drop, swap, and generate... *NIPS*, 2021.
[4] Toward a unified theory of efficient, predictive... *PNAS*, 2018
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply, which addresses most of my questions.
I am still unclear about which rate-model was used in your additional experiments. Is it the CNN in the table?
The perceptrons in CNN don't have temporal dynamics. A fair comparison would be just replacing the spike generation in spiking neurons with a transfer function to output instantaneous rate. PS: I do agree the temporal dynamics of spiking neurons can capture the spatiotemporal features in the stimuli, but I don't get the point of why the spikes can help the performance of the model.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: > Thanks for the authors' reply, which addresses most of my questions.
>
> I am still unclear about which rate-model was used in your additional experiments. Is it the CNN in the table?
>
> The perceptrons in CNN don't have temporal dynamics. A fair comparison would be just replacing the spike generation in spiking neurons with a transfer function to output instantaneous rate. PS: I do agree the temporal dynamics of spiking neurons can capture the spatiotemporal features in the stimuli, but I don't get the point of why the spikes can help the performance of the model.
**Re:** Yes, in the table of our additional experiments, the CNN and IBDisjoint are rate-based models.
Firstly, incorporating spiking neurons in intermediate layers allows the model to take advantage of the sparse spike representation to better fit the sparse latent structure of natural stimuli [1], which might enhance model performance. Secondly, we employed a spike train-based objective rather than a firing rate-based one; in this scenario, spiking hidden neurons align better with the ultimate objective, which may also contribute to improved performance.
Due to the limited time of the discussion period, we regret that the supplementary experiments using the "Leaky-Integrate and Firing Rate" neurons you mentioned are still in progress. However, we will be delighted to include those experiments and discussions in our subsequent revision to address your concern.
[1] *Natural image statistics...* Springer, 2009.
------
Thank you very much for reviewing and providing constructive suggestions. They have greatly helped us enhance the quality of our manuscript. We'll make improvements in our revised manuscript accordingly.
*Authors* | Rebuttal 1:
Rebuttal: We sincerely thank all *Reviewers* for their thorough reviews and *Chairs* for their efforts. We have responded to each point in your comments separately. *Due to word count limitations, we had to shorten some of your reviews when quoting them.* Please find our replies in your corresponding panels.
-----
Below, we refer to the two variants (from Rev. **j9bD** and **QFPa**) of the proposed TeCoS-LVM as the model ***"variant"*** (which has the same structure as TeCoS-LVM but only uses LIF output neurons; other hidden neurons are real-valued), and the model ***"variant Noisy"*** (same as the ***variant***, but uses noisy LIF output neurons). Results are means computed across repeats. Also, we added the Victor Purpura, van Rossum, and SPIKE distances (the lower the better) as evaluation metrics (from Rev. **QFPa** and **qcrE**).
***TABLE 1:*** Comparison with variants on Mov1Ret1. CC stands for Pearson Correlation Coefficients (the higher the better). The TeCoS-LVM, and TeCoS-LVM Noisy models are the main models introduced in our manuscript.
| metric \ model | TeCoSLVM | TeCoSLVM Noisy | variant | variantNoisy | CNN | IBDisjoint |
| -------------- | -------- | -------------- | ------- | ------------ | ------ | ---------- |
| firing rate CC | 0.579 | 0.727 | 0.468 | 0.720 | 0.690 | 0.653 |
| Victor Purpura | 12.84 | 14.02 | 14.58 | 13.11 | 19.60 | 21.92 |
| van Rossum | 127.35 | 238.61 | 179.12 | 245.45 | 376.82 | 394.02 |
| SPIKE | 0.124 | 0.155 | 0.144 | 0.136 | 0.207 | 0.224 |
***TABLE 2:*** Comparison with variants on Mov2Ret2. CC stands for Pearson Correlation Coefficients (the higher the better). The TeCoS-LVM, and TeCoS-LVM Noisy models are the main models introduced in our manuscript.
| metric \ model | TeCoSLVM | TeCoSLVM Noisy | variant | variantNoisy | CNN | IBDisjoint |
| -------------- | -------- | -------------- | ------- | ------------ | ------- | ---------- |
| firing rate CC | 0.616 | 0.822 | 0.586 | 0.798 | 0.708 | 0.656 |
| Victor Purpura | 22.67 | 28.44 | 23.71 | 28.49 | 39.26 | 38.38 |
| van Rossum | 574.30 | 1135.81 | 682.51 | 1205.68 | 2638.96 | 2244.98 |
| SPIKE | 0.123 | 0.153 | 0.133 | 0.148 | 0.221 | 0.221 |
----
Pdf: /pdf/6311ac88380b2b3d084c00444a9c3ec8d7825ca5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Effective Human-AI Teams via Learned Natural Language Rules and Onboarding | Accept (spotlight) | Summary: The authors propose an HAI system called IntegrAI that should help people make better decisions about when to defer to AI predictions, make predictions themselves, or combine their decisions.
As part of their system, the authors perform semantic clustering using LLMs to find areas of disagreement between AI and humans and distill those clusters into simple rules that can be surfaced as insights to users.
The IntegrAI-describe algorithm can be summarized as:
1. Create clusters in joint embedding space
2. Get candidate descriptions of each cluster and embed them into same space
3. Search for counterexamples
4. Update clusters and repeat
The key claims are:
1. IntegrAI learns optimal integration decisions
2. Rules generated by the oracle (GPT3.5 in this case) are easily understandable
3. Onboarding calibrates human expectations about AI performance
4. Dashboard insights help humans choose which action they should take thereby improving teaming performance.
Strengths: - the IntegrAI framework is novel and very interesting
- the IntegrAI-describe algorithm is a very clever way to leverage LLMs for semantic clustering
Weaknesses: 1. Several of the main claims (see list in summary above) are unsubstantiated by empirical results. Specifically:
- claim 2 (rules are easily understandable) is not extensively tested and the results comparing it against SEAL seem inconclusive.
- experimental results provide only weak evidence for claim 3 and claim 4, and it remains unclear whether a combination of Onboarding + Rec is actually useful (e.g. if looking only at effect size means, then Onboarding alone has both higher performance and lower time than Onboarding+Rec)
2. Additionally, I think there may be some issues in the significance testing for the results in Table 3. Given the large reported standard errors, it seems unlikely that the p-values would be as small as reported.
Minor suggestions:
- small typo in figure 1: no panel "d)"
- small type on line 57: should be "compared" instead of "compare"
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See sections above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 4 excellent
Limitations: The proposed framework is a 2nd-order framework in the sense that:
- 0th order is if human or AI individually make decision
- 1st order is if human sees AI decision and can change mind
- 2nd order is another AI sits on top of this process and tells human whether they should change their mind.
For this type of framework to be useful, there needs to be a complex setting where humans struggle to model their own uncertainty/limitations as well as those of the AI. I think the traffic light setting may not be adequately complex to really let this framework shine and show off its capabilities. I think it would really strengthen the paper and provide evidence for the key claims if the authors could evaluate on a more complex setting (e.g. multi-class, greater ambiguity, etc.)
Also, the proposed approach requires collecting paired data. This is a potential limitation if a lot of expensive human data is required. It would be great to see a discussion on how much paired data is required and whether this is a major limitation. An additional limitation of the paired data collection procedure the authors use in their particular case study is that integration can't be detected (i.e. currently authors only detect whether the human used the AI answer or didn't).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed review, we appreciate your time and all the important comments.
**Error Measurements Typo**: Before we delve into each point in your review, we want to clarify a typo in our paper: we report standard deviations, not standard errors, for all results written as $XX.X \pm X.X$. In detail: given a random variable X (e.g. the performance of humans in the user study) with samples $x_1,\cdots,x_n$, we compute the mean $\mu_n$ (np.mean); the standard deviation $\sigma_n$ (np.std, as used in our code in the supplement) measures the deviation around the mean. The standard error, computed as the standard deviation divided by the square root of the sample size, $e_n=\sigma_n / \sqrt{n}$, instead measures the precision of our mean estimate $\mu_n$ relative to the true mean $\mu$.
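The distinction can be checked numerically; the sample values below are illustrative, not data from our study:

```python
import numpy as np

x = np.array([79.0, 82.5, 77.1, 84.3, 80.2, 81.9])  # e.g. per-participant accuracy

n = len(x)
mean = np.mean(x)            # mu_n
std = np.std(x)              # sigma_n: spread of the samples (what our tables report)
stderr = std / np.sqrt(n)    # e_n: precision of the mean estimate

assert stderr < std          # for n > 1, the standard error is the smaller quantity
```

So a large value after the $\pm$ sign in our tables reflects spread across participants, not uncertainty about the mean, which is why the significance tests remain valid.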
## Weaknesses:
1. We agree that we should do a better job of making clear that claims 1-4 are hypotheses that we as authors formulated before designing the experiments, and which the experiments then either back or refute; the introduction states these hypotheses, and Sections 6-7 test them.
*“claim 2 (rules are easily understandable) is not extensively tested and the results comparing it against SEAL seem inconclusive.”*:
It is true that we did not perform a direct quantitative analysis of the rules' interpretability; what we tested instead in the user study is whether the rules presented through onboarding improve human accuracy. Moreover, we show examples of these rules for BDD and MMLU in Appendix Tables 7 and 8, respectively, so one can qualitatively evaluate whether they are interpretable. Finally, the results in Aim 3 (Sec. 6) evaluating our method against SEAL do show a noticeable effect: with respect to METEOR score, our method achieves 0.32 $\pm$ 0.04 (standard error: $0.04 = 0.30 / \sqrt{50}$, where 50 is the total number of objects tested for Aim 3) while SEAL achieves 0.10 $\pm$ 0.02.
*‘experimental results provide only weak evidence for claim 3 and claim 4“:*
We believe our user study provides strong evidence for claim 3 in the task studied, as we observe a statistically significant effect for onboarding compared to baselines. For claim 4, as the results show, that claim is refuted: recommendations clearly increase the time to make a prediction and do not improve performance beyond onboarding. We will make clear in future iterations of this paper that the claims in the intro are hypotheses that we test, and include the results of our tests from the beginning.
2. We hope the note at the beginning of the rebuttal clarifies the issue with standard errors (we reported standard deviations). There are no issues with the significance testing performed. Table 3 with standard errors instead of standard deviations is as follows (simply divide the standard deviation by the square root of 50, the sample size):
| Metric | AI only | Human | Human-AI | Onboard(ours)+Rec | Onboard(ours) | Onboard(baseline) | Rec |
|------------------|----------------|----------------|----------------|-------------------|----------------|-------------------|------------------|
| Accuracy (\%) | $79.2 \pm 1.3$ | $78.5 \pm 1.8$| $77.2 \pm 1.5$| $81.7 \pm 1.2$ | $82.9 \pm 1.2$ | $79.9 \pm 1.4$ | $81.4 \pm 1.7$ |
## Limitations:
*“The proposed framework is a 2nd-order framework in the sense that:”*
Practically, our interface is a 1st-order framework in your terminology, not a 2nd-order framework, since the human sees the AI decision and then makes the final decision; the AI does not tell the human to change their opinion (interface in Figure 3). However, it can be considered a pseudo-2nd-order framework, since during onboarding we use the data of the human relying on the AI to try to correct the human's perception of the AI.
*“I think it would really strengthen the paper and provide evidence for the key claims if the authors could evaluate on a more complex setting (e.g. multi-class, greater ambiguity, etc.)”*
Our user study task on traffic light detection went through multiple iterations to make sure the task is interesting; we introduced increasing levels of blur to the images to achieve the "greater ambiguity" you mention.
*“approach requires collecting paired data.”*
This is a major limitation of our work: to work optimally, we need data from humans interacting with the system. The approach can function without paired data, but performance would decrease. We work around the limitation of limited paired data by building predictors of the human prior from the limited data we have, which essentially increases the sample size but might introduce additional bias. We will add this as a limitation in the next iteration of the paper. Finally, while it is true that integration cannot be detected, which we fully acknowledge in lines 148-151, human predictions with and without AI can reveal some information about integration.
---
Rebuttal Comment 1.1:
Title: Clarification on rebuttal
Comment: Thank you for the detailed response!
## Claims
I'd like to make sure I understand the key claims you are making. For example, on lines 34-36 you write: "We further propose to surface the AI-integration decisions found by IntegrAI as recommendations to the human within an AI dashboard used after onboarding. This AI dashboard helps the human evaluate which action they should take, thereby leading to effective AI adoption for enhanced decision-making." and on lines 230-231 (in a section titled "Onboarding and Recommendations to Promote Rules") you write: "We accomplish this through an onboarding process followed by test-time recommendations, as described next." and at the end of section 5 in line 262 you also write "These recommendations are shown as an aid to the human to reduce their decision-making time." all of which I summarized as claim 4. In that case, your full proposed method would be Onboarding (ours) + rec.
My understanding based on your response is that you are changing all of these claims about dashboard/rec now to instead be hypotheses, which are then refuted in the last few sentences of your analysis.
The updated claims are then that the dashboard is actually not helpful (i.e. rec increases time but not performance) and that your proposed method is just Onboarding (ours)? Would you say your contributions are then 1) a novel method for learning rules (i.e. the content); and 2) a novel onboarding method (i.e. the delivery method)? Is rec just a baseline then that acts as an ablation of the delivery method since it involves showing rules to people but not onboarding them? Totally fine for rec to be a negative result by the way, I strongly believe we shouldn't punish authors for including negative results since these can also be beneficial for the community.
If that's the case, then:
- comparing "Onboarding (ours)" vs "Onboarding (baseline)" would test whether the rules you come up with are actually helpful.
- comparing "rec" vs "Human-AI" would also test whether the rules you come up with are actually helpful.
- comparing "Onboarding (ours)" vs "rec" would test whether the onboarding procedure you came up with is actually helpful.
- comparing "Onboarding (baselines)" vs "rec" would also test whether the onboarding procedure you came up with is actually helpful.
- etc.
I think these comparisons (though many of them won't be statistically significant) reveal the following insights that disentangle what each component contributes:
- The rules you come up with lead to increased performance
- The onboarding procedure leads to decreased time (worth noting that this is at the cost of increased training time, so this only becomes worthwhile for tasks that need to be repeated many times)
## Statistical testing
Thank you for the correction; the test results make much more sense if those were standard deviations in the table, though I think there may still be typos or other problems. For example, shouldn't the p-value for Onboard (ours) vs Human-AI be 0.004 rather than 0.0004?
Also, weren't 6 tests conducted in total rather than 5 (affecting the Bonferroni correction)? Finally, if you are relying on statistical significance testing to substantiate your claim, shouldn't you also test other claims you make, such as that "Onboarding (ours)" outperforms "Onboarding (baseline)" and "AI only"?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer
Comment: We thank the reviewer for such a detailed reading of our paper and our rebuttal. We are mostly in agreement with your reading of the paper; we highlight some important details here.
In response to this comment, we plan to release all code and raw data of our study, allowing for easy replication of all results and tables.
Thank you for following up on Table 3; indeed, we realized that the p-values for Onboard(ours)+Rec and Onboard(ours) had an issue in Table 3. The processing code for the raw data of the Onboard(ours) condition was run twice, duplicating values. To explicitly point out the error, let X be the dataset of values for Onboard(ours) and Y be the dataset of values for Onboard(ours)+Rec used to report Table 3 in the paper. Let X' and Y' be the true (correct) dataset values for Onboard(ours) and Onboard(ours)+Rec. The incorrect data relate to the correct data by X=(X',X') (duplicated twice) and Y=(Y',X') (duplicated with the Onboard(ours) values).
The following is the corrected form of Table 3 (avg and standard deviations reported), all scores are in [0,1] as fractions.
| Metric | Human | Human-AI | Onboard(ours)+Rec | Onboard(ours) | Onboard(baseline) | Rec | AI only|
|:--------------------------------------|:---------------|:---------------|:---------------------|:-------------------|:--------------------------|:-------------------|:-------------------|
| Accuracy (mean, std dev) | (0.785, 0.127) | (0.772, 0.106) | (0.804, 0.089) | (0.829, 0.086) | (0.799, 0.096) | (0.815, 0.121) | (0.792, 0.092) |
| Test with Human+AI (p-value, t-value) | (0.455, 0.752) | (0.455, 0.752) | (0.1034, 1.64343) | (0.00394, 2.95053) | (0.20123, 1.28688) | (0.06136, 1.89218) | (0.14883, 1.44748) |
| AI reliance (mean, std dev) | N/A | (0.165, 0.227) | (0.672, 0.166) | (0.256, 0.237) | (0.238, 0.287) | (0.211, 0.218) | |
| Time per example (mean, std dev) | (5.408, 2.142) | (7.78, 3.834) | (7.622, 2.679) | (5.936, 2.076) | (6.841, 3.644) | (8.717, 5.0) | |
In the corrected table, the accuracy (and std dev) for Onboard(ours) is the same as in the reported table, since duplicating data has no effect on the mean; the p-value is also only shifted by one digit to the left (0.004). The reported accuracy of Onboard(ours)+Rec was 0.817, which we can verify equals (0.804 (the correct value for Onboard(ours)+Rec) + 0.829)/2; this confirms our error. This error in the analysis was completely unintentional, and it does not change the significance of our results.
Following your rebuttal and suggested tests, we ran them and performed p-value adjustments for multiple testing using the Benjamini-Hochberg procedure:
| | Condition 1 | Condition 2 | p-value | adjusted_p-value |
|---:|:------------------|:------------------|-----------:|-------------------:|
| 0 | Rec | Human-AI | 0.0613596 | 0.224985 |
| 1 | AI only | Human-AI | 0.148829 | 0.265314 |
| 2 | Onboard(baseline) | Human-AI | 0.201229 | 0.27669 |
| 3 | Onboard(ours)+Rec | Human-AI | 0.103404 | 0.262999 |
| 4 | Onboard(ours) | Human-AI | 0.00394331 | 0.0433764 |
| 5 | Human | Human-AI | 0.573388 | 0.573388 |
| 6 | Onboard(ours) | Onboard(baseline) | 0.119545 | 0.262999 |
| 7 | Rec | Onboard(ours) | 0.51309 | 0.564399 |
| 8 | Rec | Onboard(baseline) | 0.495399 | 0.564399 |
| 9 | AI only | Onboard(ours) | 0.0108641 | 0.0597526 |
| 10 | Onboard(ours)+Rec | Onboard(ours) | 0.168836 | 0.265314 |
This performs all pairwise tests between the 3 main conditions (Rec, Baseline Onboarding, and our Onboarding (without rec)), and tests whether the addition of recommendations to onboarding has an effect. We find that the only significant effect is that of Onboard(ours) improving performance over Human-AI without onboarding; all other effects are not significant at level $\alpha=0.05$ (family-wise error rate). Our main significance claim in the paper is that our onboarding procedure (without rec) outperforms the Human-AI team; the comparison to AI-only is close to being significant (0.06) but misses the 0.05 threshold after multiple-testing corrections (11 tests).
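To make the adjustment reproducible, the Benjamini-Hochberg step-up procedure used for the table above can be sketched in a few lines of Python (`bh_adjust` is an illustrative helper name; the inputs are the raw p-values from the table, in row order):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg step-up adjusted p-values for a list of raw p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending by p-value
    adjusted = [0.0] * n
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity of adjusted values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# raw p-values from the table above, in row order 0..10
raw = [0.0613596, 0.148829, 0.201229, 0.103404, 0.00394331, 0.573388,
       0.119545, 0.51309, 0.495399, 0.0108641, 0.168836]
adjusted = bh_adjust(raw)  # e.g. row 4 (Onboard(ours) vs Human-AI) -> ~0.0434
```

Only the Onboard(ours) vs Human-AI comparison stays below $\alpha=0.05$ after adjustment, matching the adjusted column in the table.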
Our goal in the paper was mainly to investigate whether our procedure would improve the performance of the human-AI team; there is only an implicit comparison with the baseline, which does not improve performance (we had never performed tests beyond those reported in the paper before this rebuttal, as we had internal pre-registration). | Summary: This paper discusses research problems in a very interesting scenario where a human and an AI need to collaborate to achieve a certain goal. The authors propose to learn rules grounded in data regions and described in natural language, which illustrate how the human should collaborate with the AI agent.
The paper also proposes a novel region discovery algorithm, which uses an iterative procedure where a large LM describes the region while distinguishing it from the rest of the data. Experiments show positive results when following this protocol.
Strengths: 1. It tries to solve research problems in a very interesting domain.
2. The user study is extensive and its setting is convincing.
Weaknesses: 1. The presentation needs substantial improvement; e.g., some figures are not clear and the text is vague.
2. The study was conducted with 25 human participants, which might not be representative of human users.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Have you considered the "cost" incurred during human-AI collaboration, e.g., time? If a fully automated pipeline can work well at lower cost, that might be the preferred one.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The relatively small scale human evaluation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed review, we appreciate your time and all the important comments. Please read below for our response.
## Weaknesses:
1. We will improve the presentation and clarity of the paper in our next iteration. In particular, we will better clarify the methods in Section 4, Figure 1, and other figures.
2. *"The study was conducted with 25 human participants, which might not be representative human users."*
Please note that the final user study was conducted with *150* human participants, with 50 humans in each condition. The initial 25 human participants were used for data collection to build the approach, then an additional 150 participants were recruited. This number of participants per condition is comparable to prior work in the literature and was sufficient to establish the statistical significance of our approach.
## Questions:
Thank you for an insightful question. We did indeed consider a form of cost of the human-AI interaction, in particular the time to make a prediction, as you suggest. We found in Table 3 (last row) that humans without the AI spend 5.4s per example; without onboarding, humans with the AI spend 7.78s per example; with onboarding, however, this reduces to 5.9s, which is a significant decrease (two-sample t-test, p=0.0014, t=3.2872).
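A two-sample (Welch's) t-statistic of the kind reported here can be sketched as follows (the per-example timings below are made up for illustration, not the actual study data, and `welch_t` is our illustrative helper name):

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t-statistic and degrees of freedom (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    t = (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical per-example times (seconds): onboarded vs. non-onboarded participants
onboarded = [5.1, 5.5, 6.0, 5.8, 6.2, 5.4]
no_onboarding = [7.5, 8.1, 7.9, 7.2, 8.4, 7.8]
t, df = welch_t(onboarded, no_onboarding)  # large negative t: first group is faster
```

The p-value then comes from the t-distribution with `df` degrees of freedom (e.g., via `scipy.stats.t.sf`).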
## Limitations:
Please note again that the user study evaluation was conducted with 150 participants (50*3) as mentioned in lines 360-362 of section 7 “Experimental Conditions”.
If our rebuttal changed your view of the paper given the corrected sample size of participants for the studies, please consider adjusting your score accordingly.
---
Rebuttal Comment 1.1:
Title: Author Rebuttal
Comment: Dear Reviewer, I hope you get the chance to read our rebuttal, which tries to address your concerns; if it does, please consider adjusting your review in response. | Summary: This paper presents an approach to allow effective human-AI teaming. The approach describes a process of semantic discovery of regions in an embedding space, followed by generating natural language descriptions of the regions and then learning refinements of the regions using counterexamples. Once regions and region descriptions are generated, human onboarding is performed, along with proposals (and explanations) for whether or not to rely on the AI for decision-making. Analysis shows that the human-AI team performs better than the AI or humans alone. The user study shows very interesting results (role of onboarding, role of providing recommendations, impact of time per task with/without AI teaming and recommendations).
Strengths: - This paper presents an interesting and practical approach for developing a human AI collaboration system with a focus on both machine learning aspects (learning AI integration functions) and the HCI aspects (onboarding + natural language descriptions) of how human-AI teams can really work in balance (staying away from over-reliance on AI and under-reliance on AI).
- The use of chatGPT as an oracle is also a great workaround for lack of ground truth data
- Evaluation shows that the learnt AI integration function leads to lowest loss at test time and also minimum number of region proposals (which may not overwhelm the user)
- Groundtruth cluster analysis of regions on the BDD dataset also shows high correlation compared to other region discovery approaches.
- the contrastive approach for region description is also very intuitive and gives good results.
Weaknesses: - There's a lot covered in the paper in the limited space and the focus seems to be on region discovery and region description algorithms as well as the higher level human-AI teaming and overall system level evaluation study. It seems like one could do more extensive analysis with additional experiments with more datasets on each of the sections.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - There are other approaches for region discovery such as the deep aligned clustering algorithm, adaptive decision boundary algorithms etc (from the fields of open class learning, semi-supervised representation learning, novel intent discovery, etc) What do the authors think about these other approaches for region discovery.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - This paper covers a lot of ground and the authors have also captured the current limitation well in the Limitations section of the paper. However, given all the limitations, the paper presents some very promising ideas and directions of further research on human AI collaboration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your review, we appreciate your time to review our paper and the important considerations you raise.
Weaknesses:
We agree that we can add more experiments on more datasets. We plan to add more ablation experiments on additional datasets in the appendix (notably ImageNet16H, CIFAR-10H, and a synthetic Gaussian setup).
Questions:
Thank you so much for pointing out two related works that can be applied for region discovery [1,2] that we will add a comparison to and cite in future iterations of our work.
In [2], the authors present a novel method for intent classification that leverages prior knowledge of previously known intents and tries to discover new intents (there is an analogy between intents and regions in our work). In our work, there are no previously known regions, so we cannot leverage prior data to fine-tune the representation in the way the authors do. As in [2], we do not know the number of regions a priori, and their approach to filtering clusters can be useful. We find the deep aligned clustering algorithm very interesting to apply as a baseline to our tasks. One caveat is that it would require changing the cross-modal embeddings (CLIP) we use, and it has been shown by DOMINO [17] that cross-modal embeddings outperform uni-modal representations such as BERT embeddings for region discovery. Similarly, we think that the Adaptive Decision Boundary Learning algorithm in [1] can also be applied to discover regions and might work well in this setting. We look forward to applying both algorithms as baselines in future iterations of our work.
[1] Zhang, Hanlei, Hua Xu, and Ting-En Lin. "Deep open intent classification with adaptive decision boundary." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 16. 2021.
[2] Zhang, Hanlei, et al. "Discovering new intents with deep aligned clustering." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 16. 2021. | Summary: The authors attempt to improve human-AI collaboration through a system that (1,2) identifies similar regions in a dataset (both in terms of the task similarity and in terms of human behavior—usage of AI advice—on the task) and optimizes for how humans should ideally use the AI in the given region, (3) generates contrastive descriptions for the region, and (4) provides recommendations to users based on 2. They synthetically evaluate each component of their system, and empirically test their system with human users across two tasks.
Strengths: 1. Problem setup and optimization solution are straightforward and make sense.
2. Empirical analysis using humans to assess their system (with multiple datasets).
3. Generally well-written. See “Weaknesses” for a few suggestions I have to clarify some parts (section 4).
Weaknesses: 1. Only testing on one task
2. The experiment design has an obvious confounder — the raters are only offered a binary choice, while the AI
3. The setup did not allow distinguishing between ignoring the AI and collaborating with the AI. This should be possible with an adjusted setup — for example, have a 2 stage process for data collection where the person makes an initial decision and subsequently is given the chance to use AI (in section 7, it sounds like you did this, but this was not clear earlier in the paper). Additionally, rather than having people submit a discrete label, you can ask them to submit a continuous value (e.g., something like a confidence). Note that this could be done just for the dataset collection process to approximate $R^*$, and you could still use your original setup for the empirical evaluation. (Side note: if you had recorded a “confidence” for the dataset, optimizing for the regions may be easier as this is a differentiable problem now since the $r$ would be continuous-valued.)
4. It’s actually unclear how well this setup can distinguish between 0 and 1 — using the AI vs not using the AI. Since everyone sees the AI’s output, they may be biased based on that while still making their own decision. There are a number of studies that have established the biasing effect this can have.
5. The overall method in 4 (and 4.1, 4.2) is relatively straightforward, but it is a bit difficult to parse initially (some key points are buried in the details). I would recommend a concise description at the start of each section to describe what will be done in that section. (e.g., at the start of 4.1, saying something like “We generate an embedding for each datapoint by concatenating a task embedding with the human’s decision to use AI. We then define an objective function to jointly discover data regions and optimal usage of AI; it seeks to maximize task instance similarity while enforcing similar (optimal) human behavior (usage of the AI) on that task”). In my opinion, this high level overview will make the sections easier to read.
6. In the appendix, it seems the empirical analysis on the second dataset results in not significant benefits to using the proposed system. This is OK, but I would recommend adding discussion about this in the main paper. It is important to understand when your system may and may not work, and where improvements may be needed. For example, it is possible the disparity in human and AI accuracy is large enough that simply trusting the AI is a better strategy than trying to assess when to trust the AI more.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How important is the embedding choice for the region discovery algorithm?
2. There seem to be a lot of hyperparameters here. How important are the choices here? For example, the upper and lower bound on fraction of points in a region seem to be an important consideration — were there any experiments to assess what is reasonable for (a) human users and (b) for the generated region description? How many regions is reasonable for onboarding?
3. For the region parametrization, are the CLIP embeddings normalized?
4. For region description, (1) the initialization and (2) the embedding could matter a lot. Did you do any analysis of the variation in the final description with respect to these?
5. What do you use to generate the initial text descriptions for each image? In 4.2 you reference a few possible methods, but don’t indicate which you use.
6. In Table 1, I am unclear on what the numbers represent. How is the error calculated?
7. For the empirical studies, is the task order randomized? Also, for the people who did “predicts alone” and “predicts with help of AI”, were the tasks where they predicted alone randomized person to person?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As the authors recognize, it would always be nice to have more empirical studies (e.g., with more users, more datasets, etc.). The authors also recognize the potential negative outcomes of this work (incorrectly identified regions and/or recommendations could incorrectly bias users to under/over-trust the AI).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed review, we appreciate your time and all the important comments.
## Weaknesses:
1. It is true that we test our onboarding method on one user study task in the main body of the paper. However, the purpose of the user study is to test the behavior of various humans (150 overall) when they undergo onboarding. In the setting studied, we showed that onboarding can have a positive effect on performance; we believe the conclusion would remain the same if the task were different (e.g., classifying species of animals) but with the same explanations and the same relative individual human and AI performance. We do agree that additional tasks would be interesting to see how the effect of onboarding changes with different task parameters; this is left to future work. Most importantly, note that we do evaluate the components of our method on four different datasets and different tasks in Section 6.
2. It seems the sentence of this point was cut in the middle; we are more than happy to reply to the full point if it is re-written during the discussion period.
3. With this setup, we cannot (as you correctly point out) always identify which integration decision the human took; we can only do so when they press the "use AI answer" button. This is something we clearly acknowledge in lines 146-150; it means we can only learn a proxy version of the human integrator. Let us clarify that our data collection in the user study consists of two parts: 1) the human predicts without AI on a set of examples, and 2) the human predicts with AI using the same interface as the evaluation interface. While it is true that we could devise other forms of data collection, notably collecting human confidence (something we considered but did not pursue, as we believed it would be too subjective and create additional biases), we were mostly interested in learning from feedback data collected from the same interface participants will predict on at test time.
4. It is correct that it is hard to distinguish which integration decision the human took. The only sound way to do this is to ask the human explicitly after they predict, which would be an interesting study design but is not as relevant to learning accurate regions, since these do not require exact knowledge of the human prior.
5. Thank you so much for pointing out how we can better improve the presentation of section 4, we will improve the presentation in future iterations.
6. The MMLU results were only completed after the full paper deadline due to budget constraints, so in the main paper we only mentioned performing the study but did not allude to the results (whether positive or negative, we had planned to include them in the main paper had they been done in time). Our hypothesis for why the MMLU onboarding results are inconclusive is that the ChatGPT explanations are very informative about the model's uncertainty; in fact, ChatGPT frequently responds with an explanation stating "I don't know" or "there is not enough information". To test a weak version of this hypothesis, we ran a follow-up study, conducted after the Appendix deadline, measuring Human-AI performance without the ChatGPT explanation but with only the ChatGPT answer. We found the Human-AI performance drops to 70.4% (no explanation) compared to 75.1% (with explanation) (from Appendix Table 9); a two-sample t-test finds p=0.045 and t=2.03. Thus our hypothesis is that onboarding is most effective when the AI explanations do not inform the user when and when not to rely on the AI.
## Questions:
1. The choice of the embedding space for region discovery algorithms has been established by prior work DOMINO [17] and has a significant effect on the ability to recover regions of high error. The authors in [17] found that cross-modal embeddings work best compared to random initialization, uni-modal embeddings, and activations of the model itself. We follow the recommendations of prior literature and use the CLIP cross-modal embeddings.
2. There are four main hyperparameters for the region-finding algorithm: $\alpha$ (0.5 used) controls the consistency of the region, $\beta_l$ (0.01 used) and $\beta_u$ (0.1 used) control the minimum and maximum size of the region, and $\delta$ (1 used) controls the minimum gain of the region. The two parameters with the most impact are $\alpha$ and $\beta_u$, which we experimented with quite a bit. Setting $\beta_u$ to 1 (unconstrained) produces a few regions of large size and a large number of tiny regions, so the choice of a 10% maximum size most often leads to a dozen good-sized regions. For onboarding, we chose 9 regions so that they fit within the time of a user study.
3. We do not normalize the CLIP embeddings.
4. We did not find that the initialization matters a lot for region description since the method allows one to find counterexamples and quickly fix a bad initialization by adding more points.
5. We use a different method for each dataset. For BDD, we use the metadata provided with each image to construct a textual description with a template:
caption = scene + " during the " + time_of_day + " with " + weather + " weather, the scene contains " + objects_in_scene
For MS-COCO we use the captions provided with the dataset.
For MMLU we use the question itself as the description. For DynaBench we use the review as the textual description.
6. Let us first clarify that we report results in Table 1 and Table 3 (and in Aim 3) as mean $\pm$ standard deviation. In Table 1, we report the loss (Equation 1 with l(.,.) being the 0-1 loss) of the human-AI team when following the recommendations of the AI-integration function learned with the regions.
7. In all user study conditions, for each participant we select a random set of examples, so that each participant in each condition sees different examples. For conditions (1) and (2) described in lines 362-368, we randomize for each participant which of the two sub-conditions they see first.
Rebuttal: Thank you for the reviews of our paper and the insightful suggestions. Please find under each review our rebuttal.
We wanted to iterate the main claims and contributions of our paper:
- We propose a novel region discovery algorithm (sec 4.1) and evaluate it on four different datasets and show that it can outperform baselines (aim 1 and aim 2 in sec 6)
- We propose a novel region description algorithm (sec 4.2) (automatic cluster labeling), we show quantitatively that it can uncover the ground truth cluster label in an ablation study on MS-COCO (aim 3 in sec 6) and show qualitatively that it produces coherent descriptions (table 7 and table 8 in Appendix)
- We propose an onboarding scheme that presents the regions discovered as “lessons” to the human (step 1,2,3 in section 6), we evaluate this strategy with a full user study with 150 participants on the BDD dataset in section 6 and show that onboarding can significantly improve performance (p=0.002). We additionally propose to show the recommendations of the region as an aid to the human (end of section 6), however, we show that they are not as effective as onboarding.
We are very glad that the reviewers thought that the problem tackled was “in a very interesting domain.” (reviewer Ea7z) and “interesting and practical approach for developing a human AI collaboration system” (reviewer oGDF), our “framework is novel and very interesting” (reviewer YaUF), the approach to describe regions is clever “IntegrAI-describe algorithm is a very clever way to leverage LLMs for semantic clustering“ (reviewer YaUF) and “use of chatGPT as an oracle is also a great workaround“ (reviewer oGDF) and that the “user study is extensive and its setting is convincing” (reviewer Ea7z).
We also want to point out a typo in the paper on line 376 ("$\pm$ standard error"): every result in the paper is reported as mean $\pm$ standard deviation (not standard error). One can obtain the standard error from the standard deviation by dividing by the square root of the sample size. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
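The standard-deviation-to-standard-error conversion mentioned at the end of the rebuttal is a one-liner; as a sketch (`std_error` is an illustrative name, and the example numbers are the Onboard(ours) accuracy standard deviation and the per-condition sample size of 50 from the rebuttal):

```python
import math

def std_error(std_dev, n):
    """Standard error of the mean from the sample standard deviation."""
    return std_dev / math.sqrt(n)

se = std_error(0.086, 50)  # approx. 0.0122 for the Onboard(ours) accuracy
```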
A Path to Simpler Models Starts With Noise | Accept (poster) | Summary: The authors propose a possible explanation for the often large Rashomon ratios in tabular datasets (criminal justice, healthcare, etc.). The explanation involves both the dataset generation process and choices made by the practitioners who train the model. The main thesis is that label noise leads to the adoption of simpler models. The authors provide empirical and theoretical justifications to support this hypothesis. In addition, the authors propose a new Rashomon metric.
Strengths: 1. The paper is well-written and well-organized.
1. The paper focus on an important topic with practical implications for critical domains like criminal justice, healthcare, etc.
1. The authors provide an in-depth theoretical and empirical analysis to support their claims.
1. The authors introduce a new Rashomon metric.
Weaknesses: 1. It would be beneficial to give some definition/example of a “simple model” in the introduction.
1. In the definition of the true Rashomon set, did you actually mean to take $L(f) < L(f^*) + \gamma$ in the definition of $R_{set}$? Otherwise, we should take $\gamma \geq 1$.
1. It would be interesting to see if the results generalize to more complex datasets and models.
1. Could you provide references to support the claim that the assumptions in Prop. 6 generally holds in practice?
1. How does the choice of the Rashomon coefficient affect the analysis? For example, could you provide Figures similar to Fig. 1, but for different choices of the Rashomon coeff?
1. Could you please elaborate on the motivation for defining the pattern diversity and on the connection to larger Rashomon ratios? This is not clear from Sec 5. If I understand correctly, the relation is only established through the analysis which shows that both Rashomon ratios and pattern diversity increase with noise.
1. While the arguments in Sec 4 regarding the path taken by the practitioner appear plausible, I’m not entirely convinced.
Minor:
1. Line 41: there’s a missing period before “Our”.
1. Line 88: “...that often there are often…”
1. Line 139: $p_f$ -> $p^f$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there a way to define (an equivalent to) the Rashomon ratio for a continuous hypothesis class?
1. Does the provided analysis hold for a continuous hypothesis class?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review, we really appreciate it. Please see below our response to your questions.
**Weakness point 1 (example of a “simple model”)**: Yes, we will add an example to the Introduction. Consider a hypothesis space of linear models with real-valued coefficients in m dimensions. Then a simpler model will be a linear model with at most m’ coefficients, where m’ is significantly less than m.
**Weakness point 2 (definition of the true Rashomon set)**: Yes it is correct, thank you for identifying the typo on our side. We corrected it.
**Weakness point 3 (generalization to more complex datasets and models)**: We used 20+ real-world datasets, the most complex of which involved 10K+ data points and 18 features. We expect that further advances in methods for efficiently computing the number of patterns (e.g., along the lines of Appendix K.3) will permit even larger models and datasets to be considered in the future.
**Weakness point 4 (Proposition 6 assumptions)**: An example of when the leaf assumption of Proposition 6 is satisfied is the GOSDT algorithm (see Lin et al, Generalized and Scalable Optimal Sparse Decision Trees, 2020). For GOSDT, each leaf is created only if adding this leaf will provide sufficient improvement to the loss compared to the regularization on the number of leaves. We believe that the assumption on features is not strong, as we expect that with high probability trees that have low accuracy (and thus are not in the Rashomon set) will not contain good features. (That is, the model should be at least as good as its best feature, so models that are poor can’t even have one good feature.)
**Weakness point 5 (does the Rashomon parameter affect the analysis)**: Figure 1 actually does not depend on the Rashomon parameter. It illustrates the practitioners' validation process. Similar trends to Figure 1 hold for other hypothesis spaces as well. Regarding Figures 2 and 3 from the paper, in the global response pdf file we show that for a real-world dataset, the choice of the Rashomon parameter does not influence results (see Figure 3 in the global response file). We illustrate this point for one dataset, but it holds for the other datasets as well.
**Weakness point 6 (Section 4 and 5 connection)**: That is correct. Sections 4 and 5 make two different points, where both of them show that Rashomon set metrics increase in the presence of noise.
In Section 5, we wanted to show that when the practitioner does not follow the path, there are still large Rashomon sets in the presence of label noise. Therefore, we showed empirically that different Rashomon set metrics tend to increase with noise and this is a connection between diversity and Rashomon ratios. We do not have a bound that connects the two yet, and this is a part of future work. In the case of pattern diversity, we were able to bound it and show an increase in the upper bound with noise in a more general case than our empirical results.
**Weakness point 7 (regarding practitioner following the path)**: With respect to Step 3, we assumed that the machine learning practitioner follows the common ML pipeline and we will add clarification to the paper by defining the practitioner as such. Thank you for pointing this out. Please note that we believe that this ML pipeline is extremely common and we haven't seen someone not following it. Reducing complexity (or regularization) is a very common tactic to prevent overfitting for various ML algorithms.
**Question 1 (definition of the Rashomon ratio for a continuous hypothesis space)**: In the paper we use the pattern Rashomon ratio for continuous hypothesis space and classification, as the set of classification patterns is finite for a finite dataset (for example, for linear models see Figure 2b and Figure 4 in Appendix K.5). There is a general definition of the Rashomon ratio for continuous space (Semenova et al, On the Existence of Simple Machine Learning Models, 2022), however, it depends on the prior over the hypothesis space which might not be known.
**Question 2 (does the analysis hold for continuous hypothesis space)**: Yes, let’s walk the path for the continuous hypothesis space. Step 1 (variance of the loss increases with noise) does not depend on the size of the hypothesis space. For Step 2 (higher variance leads to worse generalization), the generalization bound can be easily extended to the covering number bound when the hypothesis space is continuous (in this case, instead of the cardinality of the hypothesis space, the bound will rely on the size of the cover of the hypothesis space). Step 3 (practitioner chooses a simple space) is a regularization technique that is common for continuous spaces as well. Finally, we show empirically that Step 4 (Rashomon ratio goes up) holds for continuous hypothesis space of linear models in Figure 2b.
**Minor**: Thank you for finding typos, we corrected them.
Thank you again for the review. We hope the provided answers, new analysis, and clarifications address the weaknesses and questions highlighted in the review.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the author's reply and the other reviews, and I will keep my score. | Summary: How does noise influence the set of models that have similar performance? This paper presents a study that decomposes the problem into three stages: first showing how noise in labels harms generalisation; second, how lower generalisation capability leads to more restrictive model choices; and lastly, showing that the restrictive model space has a larger Rashomon ratio (i.e. a larger proportion of models in this space with similar performance). The main claim is in presenting a technical argument for the use of simpler (interpretable) models in critical domains, as these tend to be noisy.
Strengths: - The central idea of promoting simpler models on account of noise in real-world datasets is noteworthy. The use of arguments, albeit limited, to demonstrate a plausible sequence of choices leading to simpler model specification is also interesting.
Weaknesses: - The presented results use a patchwork of assumptions, model classes and conditions to hold for each result. I appreciate a comprehensive evaluation is out of scope and this is intended to be a plausible mechanism. This implies that the overall conclusions are only weakly supported.
- How does label noise influence the Rashomon ratio? Empirical evidence of this central point should be directly provided. Figure 2 breaks this down by tree depth, and Figure 3 offers characteristics of the set. The paper would benefit from a more direct noise-to-metric plot.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - While there are some useful analytical results, the paper largely rests on empirical work. The authors have alluded to this in their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the goals of our paper, which makes initial steps towards understanding broad trends that we believe have been under-studied.
**Regarding weakness point 1**: Given that we can’t prove things in all generality (though of course, we would like to), we did our best to provide empirical evidence for those steps in the path we described. We hope that the combination of theory and experiments we provided in the paper are valuable and show that there are multiple angles from which this issue can be productively studied, as is often the case with interesting machine learning phenomena. If there are specific experiments you would like to see, we are happy to do them. We further hope that the gap between theory and experiments can be narrowed in future work, and welcome your suggestions for how to do so.
**Regarding weakness point 2**: Figure 3 in the paper provides the noise-to-Rashomon ratio, the noise-to-pattern Rashomon ratio, and the noise-to-diversity plots for uniform random label noise. For Figures 3a and 3b, on the x-axis, we have the probability with which each label is flipped, and on the y-axis the numerator of the Rashomon set metric. Since the hypothesis space is chosen before seeing data, it does not change as we add more noise to the dataset, so the denominator of the Rashomon ratio and pattern Rashomon ratio stays the same. This means that only the numerator of the Rashomon ratio changes, which is the number of trees in the Rashomon set (Figure 3a) and the number of patterns in the Rashomon set (Figure 3b). In other words, the y-axis in Figure 3a and Figure 3b is a constant multiplied by the Rashomon ratio (3a) and pattern Rashomon ratio (3b).
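As a toy illustration of this "fixed denominator, growing numerator" point (our own sketch, using a hypothetical one-dimensional hypothesis space of threshold classifiers rather than the paper's decision trees), one can count how many hypotheses fall within epsilon of the best empirical 0-1 loss as the label-flip probability grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-1, 1, size=n)
y = (x > 0).astype(int)                       # clean labels

# Finite hypothesis space fixed before seeing the (noisy) data, so the
# denominator of the Rashomon ratio (201 models) never changes:
thresholds = np.linspace(-1, 1, 201)          # f_t(x) = 1[x > t]

def rashomon_count(y_obs, eps=0.05):
    """Numerator of the Rashomon ratio: models within eps of the best loss."""
    losses = np.array([np.mean((x > t).astype(int) != y_obs) for t in thresholds])
    return int(np.sum(losses <= losses.min() + eps))

counts = []
for rho in [0.0, 0.1, 0.2, 0.3]:
    flip = rng.random(n) < rho                # uniform label noise at rate rho
    y_noisy = np.where(flip, 1 - y, y)
    counts.append(rashomon_count(y_noisy))
print(counts)   # the count (numerator) tends to grow as rho increases
```

This mirrors the rebuttal's point: only the numerator (the number of near-optimal models) moves with noise, so the y-axes of Figures 3a and 3b are constant multiples of the corresponding Rashomon ratios.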
Thank you again for the review. | Summary: The Rashomon set is the collection of all models that perform almost equally well in a given dataset. The Rashomon ratio is the fraction of models that are members of a given hypothesis class and simultaneously are in the Rashomon set. The paper studies the relationship between noise in the data generation process and the Rashomon ratio. The authors also argue that noisier datasets lead to a larger Rashomon ratio. Finally, the authors also define a measure that captures the average difference in prediction across different classification patterns in the Rashomon set.
Strengths: * The paper is well-written, and the problem of studying how data noise affects the Rashomon set is interesting.
* Proposition 6 is interesting and confirms (in a specific scenario) that more complex hypothesis classes would lead to smaller Rashomon ratios. The experiments in Figure 2 (a) and (b) help illustrate Proposition 6.
* The idea of defining a pattern Rashomon set is interesting, and using it as a proxy for exploring the Rashomon set can be helpful in other applications.
Weaknesses: * Some of the provided bounds are non-computable. For example, the bounds in Theorem 4 (as the authors discussed in the paper) and Theorem 9 (the empirical risk minimizer is not always attainable) can not be computable.
* The pattern diversity metric (listed in the Abstract as a contribution) is a slight modification of the pairwise disagreement metric -- specifically, it assumes that all patterns are equiprobable. The result of Proposition 16 shows that these metrics may have different values in a particular (and maybe unrealistic) scenario. I encourage the authors to explore the differences further. Moreover, the authors could discuss when pattern diversity is useful but pairwise disagreement is not.
* The paper shows that when label noise is large enough, the loss variance is larger; hence, the Rashomon ratio may also be larger. With this chain of thought, one can conclude that simple models may exist. However, given a dataset, it needs to be clarified how to decide whether the label noise is high. Moreover, even assuming that there exists a black box that gives practitioners the level of label noise, it is necessary to answer the question: How large the noise needs to be for simpler models to exist?
* Most of the results depend on the fact that the 0-1 loss is being used. However, other losses may be preferred in practice, and discussing how the results extend to other losses is necessary.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How would the results change when other losses (beyond the 0-1 loss) are used? Is it expected that the same/similar results will hold? It might be interesting to illustrate the results using other losses experimentally.
* How can we compute the pattern diversity metric when listing all models in the Rashomon set is impossible? For example, how to compute the pattern diversity metric for SVMs?
* How does Theorem 11 relate to Theorem 10? What should a practitioner expect when there is noise in both the label and the features?
* How tight is the upper bound in Theorem 9?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review.
**Weakness 1 (Theorem 4 and 9 bounds):** Theorem 4: Indeed the size of the true Rashomon set is not computable as we pointed out. We also pointed out that it doesn’t really matter - the bound still tells us what’s going on whenever the Rashomon set is large. The paper (Semenova et al, On the Existence of Simple Machine Learning Models, 2022) provides a way to assess whether the Rashomon set is large using several different ML algorithms. So, we often have indirect indicators that the Rashomon set is large, which are sufficient to give confidence that Theorem 4 is meaningful qualitatively. For Theorem 9, we actually might be able to compute the empirical risk minimizer for some hypothesis spaces (for example, it can be computed exactly for sparse decision trees, see the GOSDT algorithm of Lin et al). Even if the algorithm produces an approximation $L^u(f^u)$ of $\hat{L}(\hat{f})$, the right-hand side of equation (2) in Theorem 9 is monotonic with respect to $\hat{L}(\hat{f})$ when the loss is below 0.5. Therefore, the quantity $2L^u(f^u)(1-L^u(f^u))$ is a tight upper bound that we can compute.
**Weakness 2 (pattern diversity vs pairwise disagreement):** The pairwise disagreement metric and pattern diversity work in different domains. While the pairwise disagreement metric compares different functions, pattern diversity looks at all possible unique classification patterns on the data. Pattern space allows us to avoid problems with reparameterization, which can cause issues and even completely change the loss landscape [see Dinh et al, Sharp minima can generalize for deep nets, 2017]. In Figure 4 in the global response file, on a simple example, we show how different parameterizations change the pairwise disagreement metric. The advantage of pattern diversity is more evident when we think about the computation of the two metrics. We can compute the pattern diversity by enumerating all possible patterns on the given finite dataset as described in Appendix K.3. We cannot do the same for the pairwise disagreement metric without additional assumptions on patterns’ support (how many functions achieve each pattern).
**Weakness 3 (the necessity of knowing when noise is high):** Interestingly, the user does not ever need to know the exact level of label noise! If they suspect label noise is present, our results say that when they try to apply an interpretable ML method, they are likely to maintain performance compared to a black box model. So, all the user needs to do is apply an interpretable ML method and see if they get a good performance. They do not need to assess label noise empirically at all.
Our paper simply explains why they would expect good performance from the simpler models. Since there is a long-held belief in an accuracy-interpretability tradeoff, our goal is to explain that such a tradeoff is not always present, and why that is.
**Weakness 4 (dependence on 0-1 loss):** This is true, but hopefully this is not a critical limitation of the paper. 0-1 loss is a useful starting point for understanding many ML concepts and has been actively used in theory. We acknowledge that going beyond 0-1 loss would be worthwhile future work in the Limitations section. There is an *enormous* literature just on 0-1 loss so we are not unique - this reflects that there exist many yes/no prediction problems.
**Question 1 (other losses):** We tried to answer this question in the Limitations section of the paper. For least squares and ridge regression, the noise is equivalent to regularization, and we show in Theorem 11 that the Rashomon volume decreases. However, the hypothesis space will change as well with the regularization, so our results might still hold in this case; we just have no way to assess this theoretically yet.
**Question 2 (diversity for continuous space):** The pattern diversity relies on patterns, which are finite for finite datasets. Therefore, we can enumerate all patterns and compute pattern diversity, even if the hypothesis space is infinite. The algorithm described in Appendix K.3 will work for SVM as well as other hypothesis spaces. It will be computationally expensive for larger datasets, so additional optimizations (similar to the procedure to discard non-relevant points in Appendix K.3) might help to handle the complexity.
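To illustrate the enumeration idea, here is a small sketch (our own; the toy patterns are hypothetical, and we assume, as the reviewer's summary of Weakness 2 suggests, that pattern diversity is the mean pairwise fraction of points on which two unique patterns disagree, with all patterns treated as equiprobable):

```python
import numpy as np
from itertools import combinations

# Hypothetical unique classification patterns (rows) achieved on a 6-point
# dataset by models in the Rashomon set.
patterns = np.array([
    [0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
])

def pattern_diversity(P):
    """Mean pairwise disagreement rate over all unique patterns,
    treating each pattern as equiprobable."""
    pairs = combinations(range(len(P)), 2)
    return float(np.mean([np.mean(P[i] != P[j]) for i, j in pairs]))

print(pattern_diversity(patterns))   # 1/3 for these four patterns
```

Because the computation only touches patterns (never the possibly infinite set of functions realizing them), the same enumeration applies to SVMs or any other continuous hypothesis space, as the rebuttal notes.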
**Question 3 (Theorem 10 and 11):** Those two theorems have different setups and are designed to make different points. Theorem 11 is for ridge regression and least squares loss, while Theorem 10 is designed for classification and 0-1 loss. For classification and 0-1 loss, if the dataset has attribute and label noise, we expect the results identified in Section 4 to hold. To support this point, we empirically show that the variance from Step 1 increases with attribute noise. We consider additive noise for data generated from Gaussians and uniform random noise for binary real-world datasets (Figure 1(b)-(c) in the global response file). Steps 2, 3, and 4 of our path in Section 4 are noise-model independent. For Section 5, under attribute and label noise, we empirically observe that the characteristics tend to increase for the majority of the datasets (Figure 2 in the global response file). Because we use algorithms that regularize the number of leaves, with more attribute noise, regularization can distort the trend as we get shallower trees (Figure 2d).
**Question 4 (Bound in Theorem 9):** The bound holds with equality when the Rashomon parameter and the empirical risk are 0. The bound is tighter when the risk is lower and the Rashomon parameter is smaller. Since the bound is general, it becomes looser as the risk increases since we might encounter many different possible scenarios of point disagreement distributions.
Thank you for the review and pointing out ways to improve our paper. Hopefully, the new analysis based on different noise models, our answers, and clarification can handle the weaknesses that you identified.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers.
I will increase the soundness score to 3.
It is still unclear how a practitioner would use the author's result. In the rebuttal, the authors mention, "If they suspect label noise is present, our results say that when they try to apply an interpretable ML method, they are likely to maintain performance compared to a black box model. So, all the user needs to do is apply an interpretable ML method and see if they get a good performance". This reviewer suggests the authors add extensive experiments using various black-box and interpretable models, showing their claim is correct in practice.
Moreover, I suggest the authors include all discussions in the rebuttal to the main paper; this will significantly improve the clarity of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you, we will add the discussion points to the paper to improve its clarity.
Mentioned experiments were performed in the previous literature; please see the works of Holte, Very simple classification rules perform well on most commonly used datasets, 1993; Rudin et al., Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges, 2021; Semenova et al., On the Existence of Simpler Machine Learning Models, 2022. Semenova et al. compared two interpretable methods with three black box methods on 38 datasets (see Section 6, Appendix E) and concluded that similar performance of ML algorithms of different complexity is an indirect indicator of larger Rashomon ratios. In our paper and reply, we relied on the results of these previous works. Apologies for not citing this related body of work more explicitly in our rebuttal reply.
In Section 5 of our paper, we show empirically that the Rashomon ratio tends to increase with noise. So putting together the two types of experiments, we get that (1) similar performance of different machine learning algorithms correlates with larger Rashomon ratios, and (2) noise can be the reason for larger Rashomon ratios. Therefore, if a practitioner suspects that noise is present in the dataset, they can use different machine learning algorithms to verify if the Rashomon ratio is likely large. However, please note that the main focus of our paper is not to measure the amount of noise in data, but rather to explain what the noisy data tell us about Rashomon ratios and simplicity.
Our paper explains why machine learning practitioners are able to find simple models (including models that are interpretable, fair, or obey monotonicity constraints) as accurate as black boxes on many real-world datasets. Because of the belief in the accuracy/interpretability trade-off, black box models are often used, even for high-stakes decisions. Our paper shows theoretically that the accuracy/interpretability trade-off assumption is wrong, as it does not hold for the many datasets that are noisy. Therefore, our work can be used, for instance, to argue in court why a designer of a black box model who insists that this model is needed is most likely wrong. It’s a useful legal argument from the perspective of those struggling against black box models being used unnecessarily.
Please let us know if you would like to see any other experiments. Thank you for your time, and we look forward to hearing from you. | Summary: The authors explore how the Rashomon Ratio (the fraction of all models that are in the Rashomon set) changes in the presence of label noise. They show that increased label noise causes the expected variance of the ERM’s performance to increase. Then, they hypothesize that this increased variance leads practitioners to use simpler model classes, which are known to have higher Rashomon Ratios. They also introduce pattern diversity as a metric for how diverse the prediction patterns are across members of the Rashomon set. The authors provide empirical support across several datasets that (i) simpler models perform better when a dataset is noisy, (ii) simpler model classes have higher Rashomon ratios for decision trees and linear models, and (iii) Rashomon set size, number of Rashomon patterns, and diversity of these patterns tends to increase with label noise. The authors are also forthright about the limitations of their work, and provide a nice overview of open questions related to Rashomon Ratios in the presence of noise.
Strengths: * Provides one of the first theoretical studies on the Rashomon effect. As the users state, there are important policy implications of the Rashomon effect and understanding the phenomenon is important to being able to craft policy around it.
* The theoretical results are also shown to hold empirically on real datasets.
* The “limitations” section is very welcome, as it helps the reader to contextualize what this paper does and does not do. It also provides a good set of ideas for future research.
* The paper is generally well-written. The "path" framework is not perfect (see weaknesses), but I appreciate that the authors try to account for how both theoretical properties of the data and human choices lead to large Rashomon sets.
Weaknesses: * Certain claims are not proven in a broad sense and this is not clearly delineated (or worse, the authors claim something more general than what they prove)
* Uniform label noise vs real-world patterns of noise: Theorem 2 shows that the variance of the loss increases with uniformly random label noise; however, in lines 167-168, the authors say that it “holds more generally,” which is not true
* The paper is written in a way that assumes that ML practitioners are following the standard ML pipeline (e.g., switching to a simpler model class given poor test set performance) but these claims are not backed up by any data. I would encourage the authors to see if the HCI literature has studied ML practitioner behavior in this situation, and if not, to reframe the paper so that it doesn’t present the human behavior narrative as fact.
* Presentation
* Generalization is a key part of the argument in section 4, but the paper never defines generalization or generalization bounds.
* The discussion of learning with noise in the related works section also seemed very brief given the amount of work that this topic has received. For example, there is work that aims to learn more stable models in the presence of label noise.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A couple of minor points:
* In section 5.1, wouldn’t it make more sense to call $a_i$ the sample agreement? (Or otherwise, to change its definition to sum over the indicator function with $p_k^i \neq y_i$? )
* In the equation under line 166, should $p$ (with no subscript) appear at all?
One bigger question:
* Do the results hold empirically when labels are not randomly corrupted? E.g., label errors are more likely in some subset of the population.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * Soundness limitations — see weaknesses above
* The paper only considers random, uniform label noise. This seems more acceptable as a first step for the theoretical contributions, but real-world label noise is often not random and can be correlated with protected demographic group attributes. Given that the Rashomon effect is of particular interest in social prediction tasks, I wish the authors had explored how their findings generalize to realistic noise settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review. We address the questions point-by-point below.
**Weakness point 1.1 (uniform label noise vs real-world patterns of noise in Theorem 2):** Apologies for the confusion. In the text, we were not trying to assert that Theorem 2 holds more generally, but that the variance of losses generally increases with noise. To strengthen this point we added more experiments with other types of noise (please see Figure 1 in the global response pdf file). We considered two other noise models, including margin noise (when mistakes in labels are made near the decision boundary) and attribute noise (when mistakes are made in the feature matrix), for high dimensional Gaussians and binary real-world datasets, and show that the variance of loss increases with noise.
Margin noise is realistic, common, and occurs when data arise from Gaussians. Because of the central limit theorem, data often follow Gaussian distributions. Also, a combination of Gaussian and uniform random noise is quite realistic due to the central limit theorem and because of clerical errors causing label noise (extreme points on either side do not necessarily have perfect labels).
Additionally, we generalized Theorem 2 to non-uniform noise models (please see author rebuttal for Theorem statement). Now, for a sample $(x, y)$, each label $y$ is flipped independently with probability $\rho_x$ (instead of label $y$ being flipped with uniform probability $\rho$). We show that the variance of losses increases with this non-uniform label noise. In the non-uniform noise model, noise can depend on $x$. This includes cases related to fairness where one subpopulation has much more noise than another.
**Question 3: (the case when label errors are more likely in some subset of the population):** That’s a good point. Yes, our results will hold in this case. We mentioned above that we generalized Theorem 2 to the case when noise over labels is non-uniform, and each label is flipped independently with probability $\rho_x$ which depends on $x$. Such change allows one to design noise models differently for subpopulations. Note that Steps 2, 3, and 4 in the path we described in the paper are independent of the noise model.
Even if we consider only uniform label noise, in practice, typically when a practitioner is facing a subgroup that is different from others, they would create a separate model for that subgroup to increase the chances of performing well on all groups. In that case, our results would also apply to each model separately.
**Limitation point 2 (concern about fairness):** Besides the generalization of Theorem 2 and experiments described above, we would like to add that the Rashomon effect is actually very important for fairness [Coston et al, Characterizing Fairness Over the Set of Good Models Under Selective Labels, 2021], and knowing when the Rashomon effect is large is beneficial for the search of a fair model. Unintuitively, even if the noise generation is not fair (we do not support this happening, we are saying if data happen to be this way), large noise will still lead to larger Rashomon ratios as explained in our paper. If the practitioner knows that the Rashomon effect is large, they can explore the Rashomon set in order to find a more fair model [Coston et al, Characterizing Fairness Over the Set of Good Models Under Selective Labels, 2021].
**Weakness point 1.2 (ML practitioners that follow the standard ML pipeline):** Thank you for pointing out the assumption on our part. We did assume that the machine learning practitioner follows the common ML pipeline and is knowledgeable about overfitting and common steps in preventing overfitting. We will add the assumption that we define a machine learning practitioner as someone who operates in the standard machine learning pipeline to the paper - we hope that this should solve the issue. We point out that the pipeline we assumed is extremely common and we haven’t seen someone not following this. Choosing a simpler hypothesis space (basically regularization) is a common step for essentially all ML algorithms. There also could be many ways to control the complexity of the model. For example, for tree boosting, parameters that can control complexity include a number of estimators, maximum tree depth, minimum leaf weight, and so on.
**Weakness point 2.1 (generalization):** In the paper, we say that there is good generalization when the generalization error (as the difference between training and test performance, see Vladimir Vapnik, Statistical Learning Theory) is small. Correspondingly, we say that the generalization is bad when the generalization error is large. We will add the definition to the paper.
**Weakness point 2.2 (learning with noise related works):** The learning with noise literature is only tangentially related, despite the similarity of the name. These papers focus on specialized techniques for cleaning up or compensating for noisy data, while our paper focuses on studying what noisy data tells us about simple models (models created with standard algorithms, not the ones specialized for specific types of noise). Thank you for pointing it out anyway, we will add more citations to the related work section in case the audience of our paper would like to learn more about algorithms with noise correction.
**Question 1:** Yes, we can call a_i sample disagreement.
**Question 2:** Thank you for finding a typo in line 166. We have corrected it.
Thank you for the review and for identifying ways to improve our paper. We believe the definitions and clarifications handle both weaknesses you’ve pointed out. Hopefully the new analysis based on different noise models and generalized theorem additionally supports the path that noisy datasets essentially lead to larger Rashomon ratios, which enables the search for simpler (and fair) models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal, I feel more confident about the paper's soundness now and will increase the scores accordingly.
To clarify one point from my review, I agree with you that the proposed path outlines the standard ML pipeline that practitioners "should" follow. I guess my concern was more, as more and more organizations use ML, how common is it that the ML practitioners have the appropriate formal training to know that they should follow this pipeline? But I understand if this is out of scope for your paper.
---
Reply to Comment 1.1.1:
Comment: Thank you! Any trained ML scientist should follow this pipeline since how to control overfitting is an essential component of any introductory machine learning class. So our guess would be upwards of 99% but of course, we don’t have the data to verify that (and it might be challenging to conduct such a study and to gather a proper sample). So yes, we should declare this as well beyond the scope of our paper. Good question though.
Thank you for your time and valuable feedback on our paper. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the reviews. Below we provide a generalization of Theorem 2 to non-uniform label noise. In the response file, we also provide additional figures and analysis.
**Theorem** (Generalized Theorem 2 to non-uniform label noise). Consider 0-1 loss $l$, infinite true data distribution $D$, and a hypothesis space $\mathcal{F}$. Assume that there exists at least one function $\bar{f}\in\mathcal{F}$ such that $L(\bar{f}) < \frac{1}{2} - \gamma$. For a fixed $f \in \mathcal{F}$, let $\sigma^2 (f, D)$ be the variance of the loss, $\sigma^2(f,D) = Var_{z\sim D} l(f,z)$ on data distribution $D$. Consider non-uniform label noise, where each label $y$ is flipped independently with probability $\rho_x$, $(x, y) \sim D$. Let $D_{\rho}$ denote the noisy version of $D$. For any $\delta >0$, let $D_{{\rho}^{\delta}}$ be a noisier data distribution than $D_{\rho}$, meaning that for every sample $(x, y)$ the probabilities of labels being flipped are higher by $\delta$: $\rho_x^{\delta}=\rho_x+\delta$. If for a fixed model $f \in R_{set}(\mathcal{F},\gamma)$, $L_{D_{{\rho}^{\delta}}}(f)<0.5$, then
$\sigma^2 (f, D_{\rho}) < \sigma^2 (f, D_{{\rho}^{\delta}}).$
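For intuition, the monotonicity claimed by the theorem can be checked analytically in the simplest case (this is our own sanity check, not part of the submission, and the numeric values for the clean loss and noise levels below are hypothetical): under 0-1 loss, the loss of a fixed model $f$ is a Bernoulli random variable, so its variance is $L(1-L)$, and flipping each label with probability $\rho$ moves the expected loss toward $1/2$.

```python
# Sanity check (not from the paper): for 0-1 loss, the loss of a fixed
# model f is Bernoulli(L), so its variance is L*(1-L). Flipping each
# label independently w.p. rho turns a clean loss L into
# L_rho = (1-rho)*L + rho*(1-L), which moves toward 1/2 as rho grows,
# and L*(1-L) is increasing on [0, 1/2].

def noisy_loss(clean_loss, rho):
    """Expected 0-1 loss after flipping each label with probability rho."""
    return (1 - rho) * clean_loss + rho * (1 - clean_loss)

def loss_variance(loss):
    """Variance of a Bernoulli(loss) random variable."""
    return loss * (1 - loss)

clean = 0.2             # hypothetical clean loss of f, below 1/2
rho, delta = 0.1, 0.05  # hypothetical base noise level and extra noise

v1 = loss_variance(noisy_loss(clean, rho))
v2 = loss_variance(noisy_loss(clean, rho + delta))
assert noisy_loss(clean, rho + delta) < 0.5  # the theorem's precondition
assert v1 < v2  # variance grows with the noise level, as the theorem states
```

The theorem's non-uniform setting allows a different $\rho_x$ per point; the scalar check above corresponds to a single point (or a uniform flip probability).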
Pdf: /pdf/4138d86c366a2596f90ca4fc06fedbf206138741.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
$k$-Means Clustering with Distance-Based Privacy | Accept (poster) | Summary: The present paper focuses on the application of a distance-based privacy notion called rho-dist-DP in the context of clustering. This privacy notion aims to protect an individual data point that moves at most a specified distance (rho) in the underlying metric space. The authors demonstrate that by leveraging rho-dist-DP, it is possible to achieve better utility compared to traditional DP approaches. To this end, they propose a new algorithm and provide both theoretical and empirical evidence supporting its improved performance over standard DP clustering algorithms. At a technical level, the main theoretical result is a generalization of [70] whose proof is inspired by techniques in [22,25].
Strengths: The notion of distance-based DP presents a promising alternative to address the high utility cost associated with traditional DP. Effectively designing and analyzing distance-based DP algorithms is crucial for implementing privacy in applications that demand a high level of utility. The present paper contributes towards advancing research in this direction.
Weaknesses: The primary inquiry addressed in this paper revolves around whether a better performance can be achieved by relaxing DP into a distance-based notion. Regrettably, the answer to this question appears to be rather straightforward: yes. The reason behind this outcome is rather evident—by setting rho to be the diameter of the underlying metric space (Lambda), regular DP can be recovered. Furthermore, the lower bound in Appendix D shows that rho-dist-DP is in fact DP in this specific case. As a result, the driving question of this paper and its subsequent answer feel rather dull.
Additionally, a significant aspect that the present paper overlooks is the interpolating nature of the distance parameter, rho. Specifically, when rho equals 0, no privacy is provided, while setting rho to Lambda recovers DP. Although Theorem 1.1 suggests a monotonic relationship between performance and the distance parameter rho, the experimental results fail to consistently support this claim. In fact, the distance-based DP algorithm (with rho < Lambda) is at times outperformed by the DP algorithm (with rho = Lambda), thus contradicting the expected trend. (In a slightly related note, it may be more insightful to compare rho-dist-DP against d_X-privacy [41] rather than classical DP, as the latter serves only as a limiting case for rho-dist-DP.)
From a writing perspective, Sections 3-5 exhibit shortcomings in terms of explaining the proposed algorithm. The absence of illustrations or concrete examples hampers clarity and understanding. Instead, the authors provide an abstract explanation that does more harm than good. While it is reasonable to assume that a meaningful message can be reconstructed from the current discussion, it is crucial for the authors to provide it directly to the reader in a more accessible manner.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: In its present form, the paper fails to capture the interpolating nature of the distance parameter (rho). Given its importance, it would be helpful to have solid theoretical results and strong empirical evidence on the role of this parameter.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors do not address the limitations of the proposed method carefully. As mentioned in the Weaknesses Section, the interpolation character of the distance parameter (rho) is not carefully addressed, which is fundamental to understand the consistency of the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for the valuable comments.
Q: “The primary inquiry addressed in this paper revolves around whether a better performance can be achieved by relaxing DP into a distance-based notion. Regrettably, the answer to this question appears to be rather straightforward: yes. The reason behind this outcome is rather evident—by setting rho to be the diameter of the underlying metric space (Lambda), regular DP can be recovered. Furthermore, the lower bound in Appendix D shows that rho-dist-DP is in fact DP in this specific case. As a result, the driving question of this paper and its subsequent answer feel rather dull.”
A: Any relaxation of a constraint implies that a no-worse performance is possible; this is not surprising. What is interesting is to show how the improved performance can be obtained (our algorithm) and by how much such a method outperforms the non-relaxed version. This is common throughout the theory of optimization and in DP as well.
Let’s consider a historical example: Approximate DP, a.k.a. (epsilon,delta)-DP. This is obviously a relaxation of Pure DP (which is obtained by setting delta=0) and was introduced in the landmark Dwork et al. 2006 paper “Our data, ourselves: Privacy via distributed noise generation”. Obviously, any problem admits a better solution under (epsilon,delta)-DP than under epsilon-DP. That does not diminish the interest in studying it, because the non-trivial question is how to obtain such improved performance and when it is possible. The study of the (epsilon,delta)-DP relaxation introduced significant innovation to the field of DP (e.g., the Gaussian mechanism, which is not possible under epsilon-DP).
We believe that studying rho-dist-DP can contribute to novel results in metric space data beyond our work.
Q: Additionally, a significant aspect that the present paper overlooks is the interpolating nature of the distance parameter, rho. Specifically, when rho equals 0, no privacy is provided, while setting rho to Lambda recovers DP. Although Theorem 1.1 suggests a monotonic relationship between performance and the distance parameter rho, the experimental results fail to consistently support this claim. In fact, the distance-based DP algorithm (with rho < Lambda) is at times outperformed by the DP algorithm (with rho = Lambda), thus contradicting the expected trend.
A: We do not think we overlooked the interpolating nature of the distance parameter rho. In fact, both our theoretical result (Theorem 1.1) and our empirical results (Figure 2; we apologize for the typo in Figure 2 where "dx clustering" should be "dist-DP k-means") support the interpolating nature of the distance parameter rho. In particular, when rho=0, we exactly recover the result of non-private k-means++ (as shown in Figure 2). In addition, as shown in Figure 2, as rho decreases, we get a better empirical approximation guarantee. In Figure 1, we are in the case that rho < Lambda, and our algorithm (yellow line) has a better empirical approximation than the previous DP algorithm (green line).
When rho = Lambda, our algorithm might be outperformed by the previous DP algorithm. This is because our algorithm needs to utilize the information in the parameter rho, and it splits the privacy budget to handle points “far” from the centers and points “close” to the centers separately, which introduces some additional overhead in the noise. Developing an (eps,delta,rho)-dist-DP clustering algorithm that has exactly the same empirical approximation as the (eps,delta)-DP clustering algorithm when rho=Lambda is an interesting open question, and we leave it as future work. In general, the regime of interest for our algorithm is the case \rho << \Lambda, as otherwise the two definitions converge and it is possible to use an off-the-shelf DP clustering algorithm.
Q: In a slightly related note, it may be more insightful to compare rho-dist-DP against d_X-privacy [41] rather than classical DP, as the latter serves only as a limiting case for rho-dist-DP.
A: As we discussed before, though a DP algorithm implies a rho-dist-DP algorithm, the core technical question of our paper is how to develop an algorithm with better approximation guarantees in the rho-dist-DP setting (a more relaxed setting) than the algorithms in the standard DP setting (a more restrictive setting). We observe that d_X-privacy is an orthogonal relaxation of DP, so it is not clear how to compare the rho-dist-DP setting with the d_X-privacy setting, since d_X-privacy depends on the definition of the distance between two datasets, whereas our definition depends on the distance between the differing data points in two neighboring datasets.
---
Rebuttal Comment 1.1:
Title: Reviewer CEv9 -- Any comment on the rebuttal?
Comment: Dear Reviewer CEv9,
As the deadline for the interactive phase draws to a close, we would like to ask for your feedback on the rebuttal. Have we addressed your concerns, or do you have any additional questions?
Best regards | Summary: This paper proposes a definition called "distance-based privacy" which is relaxation of differential privacy. Their definition differs from standard differential privacy in its notion of neighboring instances; whereas standard DP allows an arbitrary replacement of a single item from the space, their definition considers as neighbors only those instances that can be obtained by changing an item up to a Euclidean distance of $\rho$ from the original. They then study k-means/median clustering under this relaxed definition and propose an algorithm for the problem built upon previous work. They also provide analysis and experimental results for their algorithm.
Strengths: - This paper is generally well-structured.
- They provide experimental results on several real-world datasets
Weaknesses: - The relaxed definition is a strictly local definition, in that it only provides privacy protection up to a pre-specified distance. This in itself is not an issue. However, by the way it's defined, this means for cities B and C, both within a distance $\rho$ from city A, an algorithm which allows indistinguishability between B and A does not provide protection for locations in C. Moreover, if dist(B,A) > dist(C,A), a different $\rho$ would be required if indistinguishability of A from B is desired than if that from C is desired (somewhat reminiscent of local sensitivity..). Either that, or a larger $\rho$ needs to be used, which eventually becomes the radius of the space. Also, essentially the same definition was introduced in [a] for privately releasing locations.
- When comparing with an algorithm (e.g. DP k-means in the paper) which satisfies the stronger notion of standard DP, it should be discussed whether the competitor algorithm can also use the relaxed notion of neighbors to its advantage, or otherwise discuss whether the privacy parameters should be adjusted in the experiments for a fair comparison.
- Since the proposed algorithm is built upon a previous algorithm [25], the experiments should also include performance of [25] to show the proposed modification is indeed an improvement (or otherwise discuss why it's not possible to do so).
- Parts of the writing appear to be re-phrasing of that from [25] (e.g. part of the introduction). Moreover, some sentences appear to be taken verbatim from [25] (e.g. lines 23, 140-142, 146-150, 155-164...). This reduces the credibility of the work.
[a] Lu Zhou et al. "Achieving Differentially Private Location Privacy in Edge-Assistant Connected Vehicles."
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for the valuable comments.
Q: The relaxed definition is a strictly local definition [...] However, by the way it's defined, this means for cities B and C, both within a distance …
A: We would like to recap the definition and the properties of (eps, delta, rho)-dist-DP to address some misunderstandings. Note that in our definition of (eps, delta, rho)-dist-DP, if we move an *arbitrary* data point by a distance at most rho, the output distribution should always be (eps, delta)-close to the original output distribution. This means that if d(A,B) < rho, it is hard to distinguish whether a user is from city A or from city B, and similarly, if d(A,C) < rho, it is hard to distinguish whether a user is from city A or from city C as well (which means that an (eps,delta,rho)-dist-DP algorithm protects all location information up to distance rho at the same time for *all* users). In addition, if d(A,B) < rho and d(A,C) < rho, then the triangle inequality trivially implies that d(B, C) < 2*rho. Therefore, the output distribution when a user is at B is (2*eps, 2*delta)-close to the output distribution when the user is moved to C. In this example, even if B and C are farther apart than \rho, there is still privacy protection, although with slightly weaker guarantees.
In general, the definition automatically implies (L*eps, L*delta)-DP if you move a single point a distance L*rho. Of course, if a point is moved between two locations much farther apart than \rho (e.g., for \rho = 1 mile, a point moving from the East Coast of the US to the West Coast), the privacy protection degrades to the point that it is possible to guess with higher likelihood which side of the country the point belongs to. This is expected, as this is a privacy relaxation, and in some contexts, for an appropriate \rho, this privacy protection is sufficient.
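The linear degradation with L can be illustrated on a toy mechanism (our own example, not the paper's clustering algorithm): releasing a single 1-D point plus Laplace noise with scale b = rho/eps satisfies (eps, 0, rho)-dist-DP, and moving the input by L*rho degrades the worst-case log density ratio to exactly L*eps. The parameter values below are arbitrary.

```python
import math

# Toy mechanism (illustrative only): output = data point + Lap(b),
# with b = rho/eps. Shifting the input by distance d changes the output
# density by a multiplicative factor of at most exp(d / b) = exp(eps * d / rho),
# so a shift of L*rho gives an L*eps guarantee, as stated above.

def laplace_density(x, mu, b):
    """Density of a Laplace(mu, b) distribution at x."""
    return math.exp(-abs(x - mu) / b) / (2 * b)

eps, rho = 0.5, 1.0
b = rho / eps  # Laplace scale calibrated to protect moves of distance rho

for L in (1, 2, 5):
    shift = L * rho
    # worst-case |log density ratio| over a grid, between outputs
    # centered at the original point (0) and the shifted point
    worst = max(
        abs(math.log(laplace_density(x, 0.0, b) / laplace_density(x, shift, b)))
        for x in [i / 100 for i in range(-1000, 1000)]
    )
    assert worst <= L * eps + 1e-9  # guarantee degrades linearly in L
```

For the Laplace mechanism this bound is tight: the worst case is attained at any output on the far side of either center.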
Notice that variants of DP in a similar fashion (e.g., Dx privacy) have been used already in the past on location data (see cited work) and the same phenomena applies to them.
What is an appropriate \rho for a given application is a policy question beyond our work. Similar to the process in place for determining eps and delta in any real DP application (see, for instance, the complex discussion of the epsilon parameter in the 2020 Census, https://www.census.gov/newsroom/press-releases/2021/2020-census-key-parameters.html), this involves an assessment of privacy risk, the utility of the system, as well as the legal, policy, and regulatory environment. Such policy discussions are outside the scope of the paper (see the review from vkqo06, which agrees with this point).
Q: When comparing with an algorithm (e.g. DP k-means in the paper) which satisfies the stronger notion of standard DP, it should be discussed whether the competitor algorithm can also use the relaxed notion of neighbors to its advantage, or otherwise discuss whether the privacy parameters should be adjusted in the experiments for a fair comparison.
A: The main message of our paper is that our algorithm utilizes the relaxed definition of the neighboring dataset to achieve a better approximation from both theoretical and empirical perspectives, and none of the previous algorithms can utilize such relaxation. Keeping eps and delta the same for both our algorithm and the competitor algorithm is fair, since this keeps the same tolerance for the difference in the output distributions for neighboring datasets. We will add this clarification in our empirical study section in the final version of our paper.
Q: Since the proposed algorithm is built upon a previous algorithm [25], the experiments should also include performance of [25] to show the proposed modification is indeed an improvement (or otherwise discuss why it's not possible to do so).
A: Note that though some of our analysis techniques are inspired by [25], we have very different algorithmic structures. In addition, [25] mainly focuses on distributed/parallel settings, which introduces complexity that we do not have and would make the comparison unfair. Note that [25] provides a general framework which uses any non-distributed/non-parallel DP k-means/k-median algorithm as a black-box subroutine, and their approximation guarantee is worse than that of the black-box algorithm they used. This is why we only need to compare our algorithm with non-distributed/non-parallel DP k-means/k-median algorithms.
Furthermore, we want to emphasize that the work of [25] has not been implemented. The paper [25] is purely theoretical and does not have an empirical validation section. Implementing it in all its details is highly non-trivial given its complexity.
Q: Parts of the writing appear to be re-phrasing of that from [25] (e.g. part of the introduction). Moreover, some sentences appear to be taken verbatim from [25] (e.g. lines 23, 140-142, 146-150, 155-164...). This reduces the credibility of the work.
A: We apologize for the sentences that appear too close to the work of [25]. The introduction shares commonality with that work, which inspired ours, as we point out several times. We will thoroughly improve our presentation.
---
Rebuttal Comment 1.1:
Title: Reviewer 1C7X -- Any comment on the rebuttal?
Comment: Dear Reviewer 1C7X,
As the deadline for the interactive phase draws to a close, we would like to ask for your feedback on the rebuttal. Have we addressed your concerns, or do you have any additional questions?
Best regards
---
Rebuttal Comment 1.2:
Comment: Thank you for your response.
About the experiments, I agree that the effectiveness of utilizing $\rho$ should be demonstrated, but that can be shown by varying $\rho$ from 1 to a much smaller value, as shown in Fig. 2. In order to better understand the trade-off in switching from DP to $\rho$-dist DP, I think it helps to advantage the DP competitor algorithm with a larger privacy budget (say $2\varepsilon$, $3\varepsilon$). For example, if the DP competitor algorithm can already match the performance of the $\rho$-dist DP algorithm with $2\varepsilon$ (say), then there is no compelling reason to switch to an algorithm with a weaker privacy guarantee?
---
Reply to Comment 1.2.1:
Comment: We would like to clarify the relationship between dist-DP and DP a bit more to address a misunderstanding. As we mentioned in the rebuttal, suppose the diameter Lambda = L * rho, then an (eps,delta,rho)-dist-DP algorithm implies an (L*eps, L*delta)-DP algorithm. However, the reverse is not true, i.e., if an algorithm is (L*eps, L*delta)-DP, it is not necessarily an (eps,delta,rho)-dist-DP algorithm. Therefore, when comparing (eps,delta,rho)-dist-DP with (L*eps, L*delta)-DP, our dist-DP algorithm provides a stronger privacy guarantee instead of a weaker privacy guarantee.
To give a concrete example, suppose the dataset contains geographic locations of users on the earth, then the diameter (the furthest distance between two points on the earth) of the dataset is ~20k kilometers. The distance between New York and Toronto is ~500 kilometers. It means that if we move one user from New York to Toronto, then the probability density of every possible output of a (500 kilometer)-dist-DP algorithm with eps=0.1, will shift by a multiplicative factor at most e^{0.1} (which is roughly 1.1). However, when we consider an (0.1 * 40)-DP algorithm, even if we only move the location of a user by distance at most 1 meter, it is possible that the probability density of possible outputs is shifted by a multiplicative factor > 50 and thus the attacker may more easily obtain information about the user location up to 1 meter of distance.
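The numbers in this example can be checked with a quick back-of-the-envelope computation (our sketch, using the rebuttal's hypothetical values: Earth diameter ~20,000 km, rho = 500 km, so L = 20,000 / 500 = 40):

```python
import math

# Back-of-the-envelope check of the rebuttal's example (hypothetical
# values): a (500 km)-dist-DP algorithm with eps = 0.1 shifts the output
# density by at most e^0.1 for a New York -> Toronto move, whereas an
# (0.1 * 40)-DP algorithm may shift it by up to e^4 for an arbitrary move.
eps, L = 0.1, 40
dist_dp_shift = math.exp(eps)       # density factor for a ~500 km move
plain_dp_shift = math.exp(eps * L)  # worst-case factor under (0.1*40)-DP

assert round(dist_dp_shift, 2) == 1.11  # "roughly 1.1" in the text
assert plain_dp_shift > 50              # "factor > 50" in the text
```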
Intuitively, the promise of our algorithm with small epsilon is that we strongly protect knowing the precise location of a point up to <=\rho precision. Setting \rho=500km and small epsilon will protect knowledge of which city the user is in with very strong bounds (but will not protect which hemisphere the user is in with meaningful bounds). In comparison, a large epsilon-DP algorithm will not protect any disclosure on the user with meaningful bounds (up to learning the precise GPS location of the user).
Please let us know if you have any additional questions or concerns. | Summary: In this paper, the author proposed efficient (ε, δ, ρ)-dist-DP algorithms for k-means and k-median problems to protect the privacy of exact locations as well as achieving good performance.
Strengths: The author proposes new efficient (ε, δ, ρ)-dist-DP algorithms for the k-means and k-median problems, which successfully protect the privacy of exact locations while achieving good performance. The author explains the proposed method clearly and in detail.
Weaknesses: The structure of this paper needs to be carefully reconsidered. For example, the “Running time and parallel computation” paragraph in the introduction should be moved to the later experiment section. The motivation and contributions of this paper are not clearly written. There is also a lack of an overall conclusion at the end of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is PDF in line 141? The definition of an abbreviation should be added before it is used.
2. Why are the privacy parameters set to ε = 1, δ = 10^{-6}?
3. At the beginning of Section 3, the author should give a brief summary of the section before introducing the content of the following subsections.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As the author argues, the choices of ε, δ, and ρ are difficult and remain open. These privacy parameters should not be determined by our algorithm, but rather by legal teams, policy makers, or other experts for different specific scenarios. This is an expert determination that is outside of the scope of this paper but has been studied by practitioners extensively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for the valuable comments. We will apply the suggestions on the structure of the paper to improve readability.
Q: What is PDF in line 141? The definition of abbreviation should be added before using it.
A: PDF means probability density function - we will update the paper to clarify this.
Q: Why are the privacy parameters set to ε = 1, δ = 10^{-6}?
A: We experiment with these parameters as they are standard settings used in most DP papers. Eps around 1 is a standard benchmark, while delta is often set to around 10^{-6} (or on the order of 1 over the size of the data).
---
Rebuttal Comment 1.1:
Title: Any comment on the rebuttal?
Comment: Dear Reviewer vkqo,
As the deadline for the interactive phase draws to a close, we would like to ask for your feedback on the rebuttal. Are there any additional questions?
Best regards
---
Reply to Comment 1.1.1:
Comment: Dear reviewer vkqo,
We would like to ask, before the deadline, whether we have fully addressed your questions.
Best regards | Summary: The paper studies the problems of solving k-means and k-median under a restricted location
privacy model, in which privacy is protected when a point moves by at most distance \rho.
It gives algorithms for these clustering problems with additive errors which are a function
of \rho, instead of the diameter \Lambda, improving on the usual DP results for these problems.
The authors also give a lower bound that depends on \rho. The algorithm shows improvement
over the standard DP algorithm in experiments.
Strengths: The proposed method gives a significant improvement in performance over the prior algorithms for k-median and k-means with privacy, with the additive error being a function of \rho instead of \Lambda. This can be quite significant in some settings, where this notion of privacy is reasonable. The authors give rigorous privacy and accuracy guarantees, and the technical ideas are quite interesting. The experimental results show some interesting behavior. The presentation is generally quite nice.
Weaknesses: It is not clear how \rho would be chosen for this model to be used. The authors do not consider this point either in the theoretical part or through the experiments. The paper builds heavily on ideas from a prior work, though there are some new aspects.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: lines 35-36: it might be useful to put citations for the results of this type, e.g., [47]
line 56: here it is mentioned that a point moves distance \rho in a metric space, but the whole paper is on Euclidean space. It would be useful to explain this further, or make this restricted.
The related work section is quite limited, and could do with some deeper discussion about the related work, including technical aspects of the papers cited here and in the earlier sections of the paper. It might also be useful to discuss other notions which have been proposed to deal with high sensitivity, e.g., Lipschitz extensions (Raskhodnikova and Smith, 2015), which have been used in the context of graph statistics and other problems.
Is there an extension of the notion of \rho-dist-DP for more general metric spaces (instead of Euclidean), and similar upper and lower bounds? One might need to put constraints on distance with respect to other points as well.
line 148: it might be useful to note as in [43] that TLap has 0 probability outside this interval, and it is a valid probability distribution
line 149: "is known" ("it" is missing)
paragraph in lines 192-198: if only a bounded number of heavy cells from each level are added, why would the error depend on \Gamma?
line 227: "the noisy version" ("is" missing)
line 339: "partitioning straties"
line 341: "investage"
How do the results depend on \epsilon?
In Figure 1, there is a gap between the dist-DP and DP k-means, but it is not that big. Is that to be expected, based on these settings and datasets?
In Figure 2, all the plots seem to have a significant increase at 0.08. Is this kind of behavior expected? This might help in deciding the value of \rho, which is a parameter right now
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To some extent. There are no negative social impacts
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for the valuable comments.
Q: It is not clear how \rho would be chosen for this model to be used. The authors do not consider this point either in the theoretical part, or through the experiments.
A: As we discussed in Section 7, rho is a free privacy parameter like epsilon and delta in DP. As such, it should be determined by those in charge of the privacy policy of the real-world system implementing an algorithm like ours. A similar process is in place for determining eps and delta in any real DP application (see, for instance, the complex discussion of the epsilon parameter in the 2020 Census, https://www.census.gov/newsroom/press-releases/2021/2020-census-key-parameters.html). In the real world, these decisions are based on a complex assessment of privacy risk, the utility of the system, as well as the legal, policy, and regulatory environment. In this work, we focus on the algorithmic aspects of the problem, and such policy discussions are outside the scope of the paper (see the review from vkqo06, which agrees with this point). Our goal is to show that it is possible to get a better approximation guarantee under the notion of distance-based privacy and that our algorithm can impose any \rho guarantee desired by the decision maker. For the improved results we provide both theoretical and empirical justification.
Q: Regarding other related work.
A: In this work, we mainly focus on the definition of a variant of differential privacy. The study of how to deal with high sensitivity instances under standard differential privacy is orthogonal to our work. But we agree that discussion of these lines of work would give readers a more complete picture of the literature and thus we will include the discussion of these related works in our final version of the paper.
Q: Regarding Distance-based privacy for general metric space
A: Thank you for pointing it out. The notion of distance-based privacy can be defined over any metric space, but in our paper we only consider Euclidean space. We will clarify this in the final version of our paper. The lower bound should apply as well (since the lower bound for Euclidean space also applies to general metric spaces).
For the upper bound, since our algorithm itself heavily relies on the properties of Euclidean space, it may not be directly applicable to general metric spaces. We leave the clustering problem in general metric spaces under distance-based privacy as future work.
Q: Doubts on Line 192-198.
A: We assume that you are asking “why does it depend on \Lambda?”. Consider a dataset with a few outlier points, and consider the level where each cell has side length O(\sqrt{d}\cdot\Lambda). Then there should be O(k) non-empty cells, and in the non-DP setting, we are able to find all of them. However, after we add noise to the count of points in each cell, we may miss the cell which contains the outlier due to the noise, which may cause an Omega(\Lambda) additive error.
Q: How does the result depend on eps?
A: The dependence is polynomial in epsilon, specifically proportional to 1/eps^{3.01} with our parameter settings. The constant 3.01 can be replaced by 3+\eta for any arbitrarily small constant \eta. We will update the paper to clarify the dependence.
Q: In Figure 1, there is a gap between the dist-DP and DP k-means, but it is not that big. Is that to be expected, based on these settings and datasets?
A: It is as expected given the chosen rho and the datasets. Note that we only show one fixed rho when comparing with DP k-means, and the point is to show that our clustering cost is already better than the DP clustering cost when rho is small enough. If we choose rho to be much smaller (as presented in Figure 2), our clustering cost can be much smaller (i.e., close to the non-private k-means cost) and thus much better than that of DP k-means.
Q: In Figure 2, all the plots seem to have a significant increase at 0.08. Is this kind of behavior expected? This might help in deciding the value of \rho, which is a parameter right now
A: Our conjecture is that it depends on the structure of the dataset itself. For deciding \rho, please refer to the discussion above. Utility of the system (appropriately measured) can certainly be an input to the decision process, but it must be weighed against the risk factors and other considerations.
We will address other comments of writing and presentation in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed responses.
---
Rebuttal 2:
Title: Reviewer nn8B -- Any comment on the rebuttal?
Comment: Dear Reviewer nn8B,
As the deadline for the interactive phase draws to a close, we would like to ask for your feedback on the rebuttal. Are there any additional questions?
Best regards | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Guiding Large Language Models via Directional Stimulus Prompting | Accept (poster) | Summary: This paper proposes a Directional Stimulus Prompting method to provide fine-grained guidance for the output of large language models (LLMs). This method introduces a small trainable policy model (e.g. FLAN-T5) to generate hints for each query to guide LLMs towards desired outputs. This policy model can be trained via a standard paradigm including supervised fine-tuning on pseudo-labeled data and reinforcement learning (RL) to optimize the expected reward. Experiments on summarization and dialogue generation tasks show the effectiveness of the proposed method.
Strengths: 1. This paper proposes a direct and feasible solution to guide black-box LLMs towards better generation results, which is widely applicable with the rapid development of LLMs.
2. This paper is well-written and easy to follow.
Weaknesses: 1. The technical novelty of the proposed method is limited. The training of the policy model in this paper is a standard paradigm including supervised fine-tuning and reinforcement learning. The techniques used in RL including dynamic adjustment of the coefficient and NLPO are all borrowed from existing works. Thus, I feel that the main difference falls into the usage of LLMs as an evaluation function in the reward model, which is devised for adapting this RL algorithm to the scenario of guiding LLM. Thus, the novelty of this design is mainly on the applicational side.
2. During supervised fine-tuning, the authors heuristically select the “pseudo-stimulus” for each input. The pseudo-stimulus indicates keywords / dialogue acts for summarization / task-oriented dialogue generation, respectively. I wonder whether there is a general principle to select pseudo-stimulus especially in open-ended text generation tasks, since recently proposed LLMs such as ChatGPT are mainly applied to these tasks.
3. The benchmark datasets in this paper only contain CNN/DM for summarization and MultiWOZ for task-oriented dialogue generation, which are not convincing enough to evaluate the performance of the proposed method based on LLMs. The authors should conduct experiments on broader tasks and datasets. Also, I’m curious about the motivation to choose MultiWOZ. In Section 3.2, the authors say that LLM-based chatbots such as ChatGPT face challenges in handling task-oriented dialogue generation because this task requires the chatbot to respond based on reliable information from API calls or database queries. But the proposed method still cannot interact with APIs or databases. The experimental setting of MultiWOZ seems like end-to-end response generation (i.e. generating the system response given the dialogue history), which is similar to open-domain conversations.
4. From [1], automatic evaluation metrics such as ROUGE cannot reliably evaluate the quality of LLM-generated summaries. Thus, human evaluation should be added to strengthen the experimental results.
[1] News Summarization and Evaluation in the Era of GPT-3.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I have included my questions in the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors should discuss more on the risk of guiding LLMs to generate unethical contents and how to avoid it via the policy model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for recognizing the strengths in our manuscript and for the detailed feedback. We value these suggestions and hope our response can address the concerns accordingly.
**Response to W1: The technical novelty of the proposed method is limited. The training of the policy model in this paper is a standard paradigm including supervised fine-tuning and reinforcement learning.**
The goal of our work is to tackle the notable challenge of guiding black-box LLM outputs towards desired outcomes for specific tasks, datasets, and even queries. This is a pressing yet challenging issue, as it is both inefficient and infeasible to directly fine-tune black-box LLMs. Our proposed framework innovatively fine-tunes a small policy model to guide black-box LLMs, bypassing the constraint of being unable to fine-tune them directly. This can be accomplished with standard SFT/RL training approaches, showcasing the effectiveness and adaptability of our approach. Therefore, introducing new SFT/RL training methods beyond the standard paradigm is not the paper's focus. It is worth mentioning that the novelty and value of our work have been recognized by the other reviewers.
**Response to W2: general principle to select pseudo-stimulus, especially in open-ended text generation tasks**
A principle is to select latent control codes that can influence the generated outputs. These control codes are intrinsic to open-ended text generation tasks, where models have more freedom to decide the textual content they generate. In our experiments, we choose keywords as they can guide the key points that should be incorporated in the generated summary. Likewise, dialogue acts are used for task-oriented dialogues, which could indicate the appropriate responses given the existing dialogue context. As for other open-ended text generation tasks like open-domain chit-chat, the control codes can be emotions, topics, styles, etc. Similarly, for story generation tasks, potential control codes could include key events and the narrative style, both of which steer the direction and tone of the story. Taking it a step further, large language models (LLMs) can be utilized to automatically analyze and determine the latent control codes specific to certain tasks.
**Response to W3: The benchmark datasets in this paper only contain CNN/DM for summarization and MultiWOZ for task-oriented dialogue generation. Also, I’m curious about the motivation to choose MultiWOZ.**
We have expanded our experiments by incorporating an additional task on the arithmetic reasoning datasets MultiArith [1] and AQuA [2]. The details can be found in the [global author rebuttal](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=ztu9uesRIz).
As for the reason for the choice of MultiWOZ: Currently, most advances in LLMs and chatbots are predominantly in open-domain conversations where they have demonstrated impressive capabilities. However, LLMs still struggle with task-oriented dialogues where they should follow task-specific dialogue flows and output formats, as observed in [1] [2]. Therefore, we choose to fill the gap and experiment on the widely-used task-oriented dialogue dataset MultiWOZ. In addition, the MultiWOZ dataset provides annotations of dialogue acts, which could be directly used as the stimulus in our DSP framework, avoiding the additional annotation efforts.
**Response to W4: human evaluation should be added to strengthen the experimental results.**
We appreciate and value the suggestion. In response, we incorporate evaluations based on GPT-4, which is found to be able to provide consistently high-quality assessments of text generation, being a good alternative to human evaluations. Specifically, we leveraged GPT-4 to compare the summaries generated with our proposed DSP and the original standard prompting based on the assessment of overlap of key points between generated and reference summaries. GPT-4 was instructed to first generate an explanation, followed by the corresponding answer (who wins). The prompt used for the GPT-4 evaluation can be found [here](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=6sYAdkCX9o).
From the 500 test samples: DSP-generated summaries were favored 255 times (**51.0%**), summaries generated with standard prompting were favored 222 times (**44.4%**), while a tie was observed in 23 cases (**4.6%**). We found that GPT-4 can produce reasonable and detailed explanations of their assessment. We will release the GPT-4 evaluation results, including the explanations.
**References**
[1] Hudeček, Vojtěch, and Ondřej Dušek. "Are LLMs All You Need for Task-Oriented Dialogue?." arXiv preprint arXiv:2304.06556 (2023).
[2] Bang, Yejin, et al. "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity." arXiv preprint arXiv:2302.04023 (2023).
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your rebuttal. My main concern is still about the technical novelty. As I mention in my original review, the novelty of the method is mainly on the applicational side, which means that applying a general method (i.e., SFT+RL) to deal with a new scenario (i.e., guiding black-box LLM outputs towards desired outcomes for specific tasks) is the main focus. I think the applicational papers can also have novel contributions if the authors propose techniques on how to apply the method to the specific task considering the task's characteristics. However, the techniques in this paper are nearly borrowed from existing works, which provide few insights on solving this task.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer's reply
Comment: Thank you for your reply.
Regarding your concern about technical novelty, we would like to emphasize that our main focus is on addressing the significant and urgent challenge of aligning black-box large language models (LLMs) to specific tasks. Importantly, direct fine-tuning of these LLMs is infeasible and inefficient for most users and researchers, making the issue pressing yet challenging. In light of this, our approach innovatively sidesteps this constraint by proposing the fine-tuning of a smaller policy model, which then guides the black-box LLM, instead of directly fine-tuning them. Given that standard paradigms for tuning these policy models already fulfill our intended objective, there seems little incentive and motivation to introduce new training algorithms for our purpose. Moreover, devising novel training techniques for tuning the black-box LLMs themselves would not address the primary constraint: their inaccessibility to most users and researchers.
Regarding the application side of our work, we respectfully believe that we have applied our framework to different tasks considering their unique characteristics:
1. Summarization: By incorporating query-specific keywords, we cue the LLM about essential keypoints that should include, aligning the generated summary more closely with the desired summary.
2. Dialogue Response Generation: Through the generated dialogue acts, we guide the LLM to generate responses that adhere to desired dialogue/business flows and output formats.
3. Reasoning: By employing our policy model to generate query-specific prompts, we trigger the LLM to perform chain-of-thought reasoning, mitigating the inconsistencies from manually selecting prompts for different datasets and enhancing reasoning performance.
We validated the effectiveness of our proposed framework by experimenting on these tasks considering the task characteristics and we believe they are not mirroring previous methods. However, if there are specific works the reviewer feels we are borrowing from, we would greatly appreciate it if the reviewer could point us to them. This would allow us to provide further clarifications or address any oversights.
We are open to further discussions and remain committed to addressing your concerns. | Summary: This paper introduces a novel approach to effectively guide black-box LLMs towards generating desired outputs. The proposed approach involves utilizing a relatively small policy model to generate "directional stimulus," which serves as specific information to assist LLMs in performing tasks such as summarization and task-oriented dialogue. The policy model is trained using rewards obtained from evaluating LLM outputs. Notably, this approach requires only a small amount of training data and eliminates the need to train the entire model.
Strengths: - The writing is clear, making it easy to grasp the motivation and contribution of the paper.
- The proposed approach is simple, which suggests its potential for widespread adoption across various domains.
- The experimental results effectively demonstrate the effectiveness of the approach. Notably, in Figure 3, the model leveraging DSP achieves superior performance in terms of BLEU, METEOR, and BERTScore, despite Rouge being used as the reward metric.
- The fact that the authors have released their code is highly valuable, as it facilitates future research and allows for the replication of their findings.
Weaknesses: - I have a concern regarding the generalizability of the method, as the experiments primarily rely on a single prompt per task.
- It is widely acknowledged that LLMs heavily rely on the structure and format of the prompt, which raises the question of whether this technique would be effective with different prompt formats.
- This approach would hold even greater value if it could consistently yield improvements regardless of the choice of prompt format, demonstrating its robustness.
- Additional analysis about generated clues will be helpful to get lessons from this work. For example,
- How does the result change if the number of clues is increased/reduced?
- What kind of clues are generated considering the semantics/contents of the input (visualization will be helpful)
- The method is not intuitive to me, because all the information that is required to perform the task is already contained in the input. Does the result mean that LLMs are not sufficient to extract information from naive and complex inputs? It will be helpful if the authors provide thoughts about this phenomenon.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - It would be valuable to know the exact API cost associated with training the reward model. The authors seem to have made efforts to minimize API calls, and providing this information would offer additional details for other researchers to reproduce the study accurately.
- In relation to the aforementioned weakness, it would be interesting to investigate how the model's performance is affected by changing the prompt template. Understanding the impact of prompt variations would provide insights into the robustness of the approach.
- Regarding Figure 1, it appears that the part describing what is "highlighted in blue" is missing. Including this information would clarify the intended meaning.
- As a suggestion, exploring the possibility of allowing the model to "edit" the prompt could potentially offer greater flexibility and a wider search space, leading to improved performance. This suggestion aims to enhance the approach by incorporating additional avenues for improvement.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors' discussion of the limitations in the paper is not sufficient. Addressing the suggested weaknesses and questions in reviews would significantly enhance the insights provided by this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your recognition of our work's strengths and the constructive feedback and suggestions. We hope our response can address your concerns accordingly.
**Response to W1: robustness to different prompt formats.**
1. **Robustness of DSP**: Through the experiment detailed in the "Response to Q1" in the [rebuttal to Reviewer JgjJ](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=6sYAdkCX9o), we found that the policy model trained with few(3)-shot prompting can still improve performance when testing with zero-shot prompting. When both training and evaluation are conducted with zero-shot prompting, the performance improvement over standard prompting is also comparable to both using few-shot prompting (the results shown in our current paper). These observations suggest the robustness of our approach to different numbers of examples in the prompt.
2. **Addressing prompt sensitivity of LLMs**: Sensitivity to prompts is a critical and inherent challenge of LLMs. Our proposed DSP could potentially be used to find a suitable prompt for each sample. To demonstrate this, we conducted experiments on chain-of-thought (CoT) reasoning, as detailed in the [global author rebuttal](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=ztu9uesRIz). Current prompting methods typically rely on task-specific prompts. In our experiment, we use DSP to generate query-specific trigger prompts for chain-of-thought reasoning. Our experimental results demonstrated that text-davinci-002 is highly sensitive to the prompts used. DSP improved its zero-shot CoT reasoning performance over 14 tested human-designed prompts and a prompt discovered by the APE approach [1].
**Response to W2: Additional analysis about generated clues will be helpful to get lessons from this work.**
We included the following analysis of the generated clues/keywords.
- Number of generated keywords: We've outlined changes in the number of generated keywords, hit keywords (those matched in the reference summary), and corresponding ROUGE-1 scores throughout the training process in the table below. As training progresses, the policy model generates keywords with increasing precision, which aligns positively with the increasing ROUGE-1 score. However, it's worth noting that even when keywords are generated with high precision, if their quantity is too limited the performance doesn't necessarily improve.
| Training iters | #Generated keywords | #Hit keywords | Keyword Precision | ROUGE-1 |
| ---- | ---- | ---- | ---- | ---- |
| 0 | 7.986 | 2.936 | 0.367 | 39.30 |
| 1 | 6.806 | 2.85 | 0.345 | 39.24 |
| 3 | 7.516 | 3.174 | 0.422 | 39.61 |
| 5 | 7.262 | 3.31 | 0.456 | 39.63 |
| 7 | 6.676 | 3.134 | 0.469 | 39.96 |
| 9 | 6.186 | 2.998 | 0.485 | 39.91 |
- Type of generated keywords: We employed the spacy package for Part-of-Speech (POS) and Named Entity Recognition (NER) tagging on the generated keywords. The results are shown in the tables below. For the POS tagging, we observe that nouns (NOUN) and proper nouns (PROPN) are the most frequently generated keywords, which can serve as informative keywords. As for the NER tagging, the most commonly generated keywords include persons (PERSON), geopolitical entities (GPE), dates (DATE), organizations (ORG), and numerals (CARDINAL).
| POS Tagging | Appearances | Frequency |
| ---- | ---- | ---- |
| NOUN | 2342 | 36.15% |
| PROPN | 2327 | 35.92% |
| ADJ | 516 | 7.97% |
| NUM | 476 | 7.35% |
| DET | 261 | 4.03% |
| VERB | 216 | 3.33% |
| PRON | 81 | 1.25% |
| ADP | 76 | 1.17% |
| ADV | 69 | 1.07% |
| SYM | 42 | 0.65% |
| AUX | 21 | 0.32% |
| PART | 18 | 0.28% |
| CCONJ | 13 | 0.20% |
| SCONJ | 12 | 0.19% |
| X | 5 | 0.08% |
| INTJ | 3 | 0.05% |
| NER Tagging | Appearances | Frequency |
| ---- | ---- | ---- |
| PERSON | 567 | 30.32% |
| GPE | 284 | 15.19% |
| DATE | 272 | 14.55% |
| ORG | 247 | 13.21% |
| CARDINAL | 245 | 13.10% |
| NORP | 95 | 5.08% |
| ORDINAL | 49 | 2.62% |
| MONEY | 37 | 1.98% |
| TIME | 26 | 1.39% |
| QUANTITY | 11 | 0.59% |
| EVENT | 10 | 0.53% |
| LOC | 9 | 0.48% |
| FAC | 6 | 0.32% |
| PRODUCT | 5 | 0.27% |
| PERCENT | 3 | 0.16% |
| LANGUAGE | 2 | 0.11% |
| WORK_OF_ART | 2 | 0.11% |
Visualization (pie chart) of these analyses will be included in the updated version of the paper.
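For reference, the keyword precision reported in the first table above is simply hit keywords divided by generated keywords. A minimal sketch of that computation follows; the exact matching rule is not specified in the rebuttal, so this hypothetical version counts a keyword as a hit if it appears as a token of the reference summary.

```python
def keyword_precision(generated_keywords, reference_summary):
    """Return (hits, generated, precision) for a set of generated keywords.

    A keyword counts as a hit if it matches, case-insensitively, some
    whitespace-delimited token of the reference summary (an assumed rule).
    """
    ref_tokens = set(reference_summary.lower().split())
    hits = [kw for kw in generated_keywords if kw.lower() in ref_tokens]
    n_gen = len(generated_keywords)
    return len(hits), n_gen, len(hits) / max(n_gen, 1)
```

For example, `keyword_precision(["Obama", "Paris", "election"], "Obama arrived in Paris on Monday")` yields 2 hits out of 3 generated keywords, i.e. a precision of about 0.67, matching the per-sample averages shown in the table.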
**Response to W3: Does the result mean that LLMs are not sufficient to extract information from naive and complex inputs?**
LLMs have demonstrated their ability to generate high-quality text. However, in open-ended generation tasks, where multiple outputs are valid, LLMs may not consistently produce text that aligns precisely with the desired output for specific tasks and datasets.
For instance, in the summarization task, LLMs can produce high-quality summaries. Nevertheless, different summarizers might have distinct styles and emphases. Capturing these nuanced variations and specific emphases in different queries becomes challenging when guiding LLMs with a general task-specific prompt.
To address this, our approach trains the policy model on labeled data to learn these subtle signals in the dataset and generate query-specific hints to provide LLMs with fine-grained guidance towards desired outputs for the specific tasks and datasets.
**Response to Q1: the exact API cost**
In our estimation, for experiments on the CNNDM dataset, the training cost per iteration is around \\$1.80, while the evaluation cost on the validation/test set is around \\$2.37. As for the experiments on the MultiWOZ dataset, the training cost is around \\$1.05 per iteration and the evaluation cost is around \\$26.7 for the whole test set.
**Response to Q3: highlighted in blue in Figure 1**
The part highlighted in blue is the generated summary. For DSP, the summary is generated given the provided keywords/hints. For standard prompting, such hints are absent when generating the summary.
**Response to Q4: prompt edit experiments**
Please refer to the global author rebuttal for details on the experiments. | Summary: In this paper, a prompting framework called Directional Stimulus Prompting (DSP) is proposed which provides a more fine-grained guidance and control over LLMs by adding directional stimulus into the prompt. These directional stimulus or hints are generated by a small tunable model which is fine-tuned using supervised learning and reinforcement learning. The performance of the model is tested on summarization and dialogue response generation tasks.
Strengths: S1: It is interesting that they fine-tuned and used a small language model to improve the performance of larger language models such as ChatGPT. This approach cleverly bypasses the constraint of being unable to fine-tune large language models.
S2: In both experiments, they demonstrated the effectiveness of their approach by showing the improvement achieved through their framework.
S3: The paper is well written and has a good flow.
S4: A good number of related works are covered in the Related work section.
Weaknesses: In their experiments, they only show the results for the flan-T5-large model as a model for generating directional stimulus. They could have also included the performance of other fine-tuned models in their experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: One of the good points that they motioned is that their framework could be used to guide LLMs to generate harmful or biased contents.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate your acknowledgment of our work's strengths and your valuable feedback.
Regarding the concern about the exclusive presentation of results for the flan-t5-large model, we would like to underscore that our proposed DSP is a general framework and it is not tailored or confined to a specific task or policy model/LLM. The policy model's role in our evaluated tasks is to generate an output given the query context as input. Hence, we chose the suitable widely-used seq2seq t5 models. Due to the enhanced capabilities of flan-t5 models relative to same-sized models, we prioritized it in our experiments.
Additionally, we did try t5-base and flan-t5-base in our preliminary tests, and the results consistently demonstrated the advantage of using DSP over standard prompting. For instance, on the MultiWOZ dataset, results using t5-base and flan-t5-base are comparable to those of the flan-t5-large model. With flan-t5-base and Codex as the LLM, after training on 80 dialogues from MultiWOZ 2.0, we can also achieve a BLEU score of 10.32, success rate of 78.33, inform rate of 91.67, and combined score of 95.32.
In our new task detailed in the [global author rebuttal](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=ztu9uesRIz), we employed t5-base as the policy model, and the results also demonstrated the effectiveness of our framework.
The uniqueness of this module lies in its ability to be fine-tuned through both supervised learning and reinforcement learning techniques. In reinforcement learning, automatic metrics (reward) are applied to responses generated from black-box LLMs, which have been influenced by the DSP.
The results from experiments confirm that both the supervised fine-tuning and reinforcement learning approaches to DSP can significantly enhance the performance of black-box LLMs. This finding underscores the potential of DSP as a powerful tool for improving LLM response generation.
Strengths: This paper introduces a compact yet powerful module known as 'Directional Stimulus Prompting' (DSP). Its function is to generate hints that enhance queries posed to black-box Large Language Models (LLMs).
Two distinct approaches for fine-tuning this DSP module are presented by the authors: supervised fine-tuning and reinforcement learning.
The effectiveness of both supervised fine-tuning DSP (SFT DSP) and reinforcement learning DSP (RL DSP) is confirmed through experimental results, underscoring the utility of this novel module in enhancing LLM query performance.
Weaknesses: Currently, automatic metrics are employed for evaluation, as well as for the reinforcement learning (RL) tuning of the Directional Stimulus Prompting (DSP) module. However, presenting these metrics exclusively in the final analysis could restrict the comprehensive assessment of performance. It may be more insightful to incorporate other evaluation metrics into the analysis. This could include human evaluation, entity word extraction, and GPT-based evaluations. Such a diversified metrics approach could potentially provide a more holistic view of the performance improvements achieved through the use of the DSP module.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the "few-shot prompting" or "in-context learning" potentially enhance the effectiveness of the Directional Stimulus Prompting (DSP) module?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your feedback and insightful suggestions. Below is our response to address these points.
**Response to W1: exclusive use of automatic metrics in the final analysis**
To address the concern about the exclusive use of automatic metrics in the final analysis, we incorporated GPT-4-based evaluation on the CNNDM summarization dataset. As we employ ROUGE scores as rewards for tuning the policy model to generate keywords that guide the LLM towards generating summaries more aligned with the reference summary, we leveraged GPT-4 to assess the overlap of key points between generated and reference summaries. Specifically, we use GPT-4 to compare the summaries generated with our proposed DSP and the original standard prompting. GPT-4 was instructed to first generate an explanation, followed by the corresponding answer. The prompt for the GPT-4 evaluation is as follows:
```
You are provided with an article and a corresponding reference summary. Additionally, there will be two alternative summaries labeled as 'A' and 'B'.
Your task is to identify which of the two summaries (A or B) is more similar to the reference summary. This similarity should be evaluated based on the presence and accuracy of key points from the reference summary in each alternative summary.
Please detail your reasoning in an explanation. After your explanation, classify the task outcome as: select 'A wins' if Summary A aligns more closely with the reference summary, 'B wins' if Summary B aligns more closely, or 'Tie' if both summaries align equally well with the reference summary.
```
We found that GPT-4 can produce reasonable and detailed explanations of their assessment. From our test set of 500 samples: DSP-generated summaries were favored 255 times (**51.0%**), summaries generated with original standard prompting were favored 222 times (**44.4%**), while a tie was observed in 23 cases (**4.6%**). We will release the GPT-4 evaluation results, including explanations.
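The win-rate percentages quoted above follow directly from the raw counts; a trivial sketch of the tally (the counts are taken from the rebuttal, the function name is ours):

```python
def win_rates(outcomes):
    """Convert {label: count} into {label: percentage of total}, one decimal."""
    total = sum(outcomes.values())
    return {label: round(100 * n / total, 1) for label, n in outcomes.items()}

rates = win_rates({"DSP wins": 255, "Standard prompting wins": 222, "Tie": 23})
# rates == {"DSP wins": 51.0, "Standard prompting wins": 44.4, "Tie": 4.6}
```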
**Response to Q1: Could the "few-shot prompting" or "in-context learning" potentially enhance the effectiveness of the Directional Stimulus Prompting (DSP) module?**
In our current experiments, we employ few-shot prompting with 3 examples in the prompt during training and evaluation. The specific prompt and demonstration examples utilized are detailed in the Appendix. In response to your question, we evaluated two experimental settings on the CNNDM dataset (4,000 training samples):
1. Few(3)-shot during training and zero-shot during testing.
2. Zero-shot during both training and testing.
The zero-shot evaluation results are as follows:
| Method | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | METEOR | BERTScore |
| --- | --- | --- | --- | --- | --- | --- |
| Standard Prompting | 37.34 | 14.13 | 24.43 | 4.86 | 29.85 | 0.8798 |
| DSP w/ SFT+RL(0-shot) | 38.73 | 15.14 | 25.26 | 5.22 | 31.21 | 0.8820 |
| DSP w/ SFT+RL(3-shot) | 38.47 | 15.20 | 24.82 | 5.17 | 31.29 | 0.8815 |
From the results, our approach exhibits robustness when different numbers of examples are used in prompts during training and evaluation. When both training and testing are conducted using zero-shot prompting, the performance improvement over standard prompting is also comparable to the scenario where both are conducted using few-shot prompting (results shown in the current paper).
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: Thank you to the authors for their rebuttal. The explanations provided have addressed some of the weaknesses I highlighted. As a result, I have revised my score to 7 (Accept). | Rebuttal 1:
Rebuttal: ### Additional experiment
We greatly appreciate all the reviewers' insightful suggestions and feedback. We conducted an additional experiment in which we use DSP to provide query-specific trigger prompts for chain-of-thought reasoning using two widely-used datasets MultiArith [1] and AQuA [2]. Our results showed that our method DSP could improve text-davinci-002's zero-shot chain-of-thought reasoning performance with query-specific prompts, compared with task-specific human-designed prompts and the prompt automatically discovered with the APE approach [4].
**Experimental setup**: We adopted the experimental setup from previous work [3][4]. We tested the zero-shot chain-of-thought reasoning abilities of text-davinci-002 with different trigger prompt templates. There are 600 examples in the MultiArith dataset, which we divided into 300/50/250 for the training/validation/test sets. For the AQuA dataset, we use the standard test set with 254 samples, 300 samples from the standard training set for training, and 100 samples from the standard validation set for validation.
**Supervised fine-tuning details**: For supervised fine-tuning (SFT), we first ran inference on the training set with each of the 14 human-designed prompts tested in [3]. We then selected the prompt–query pairs that resulted in a correct chain-of-thought reasoning outcome to form the training set for SFT. These query–prompt pairs were used to train a t5-base policy model for 2 epochs, with the query instance as the model input and a trigger prompt as the target output.
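The SFT data construction described above can be sketched as follows (a minimal sketch; the helper name and the correctness oracle `is_correct` are our illustrative assumptions, not names from the paper — in practice the oracle would check the cached inference results):

```python
def build_sft_pairs(queries, prompts, is_correct):
    """Keep only the (query, prompt) pairs whose zero-shot CoT inference
    produced a correct reasoning outcome, as judged by the is_correct oracle
    over cached inference results."""
    return [(q, p) for q in queries for p in prompts if is_correct(q, p)]

# Toy example: 2 queries x 2 prompts, with one failing combination
queries = ["q1", "q2"]
prompts = ["Let's think step by step.", "First,"]
failing = {("q2", "First,")}
pairs = build_sft_pairs(queries, prompts, lambda q, p: (q, p) not in failing)
print(len(pairs))  # 3
```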
**RL training details**: After SFT, the prompts generated by the policy model were used to trigger text-davinci-002 for zero-shot CoT prompting. Reasoning accuracy was utilized as the reward for reinforcement learning (RL). A reward of 1 was assigned for correct reasoning results and 0 otherwise. We conducted 20 training iterations (106k episodes), with 5 epochs per batch, a batch size of 8, and a learning rate of 2e-6. The parameters for $KL_{target}$ and $\beta_0$ were set to 0.5 and 0.001, respectively.
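The binary reward described above can be sketched as follows (the answer-extraction heuristic is our illustrative assumption; the paper only specifies a reward of 1 for correct reasoning results and 0 otherwise):

```python
import re

def extract_final_answer(completion: str) -> str:
    """Heuristic: take the last number appearing in the CoT completion."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else ""

def reasoning_reward(completion: str, gold_answer: str) -> float:
    """Reward 1.0 for a correct final answer, 0.0 otherwise."""
    return 1.0 if extract_final_answer(completion) == str(gold_answer) else 0.0

print(reasoning_reward("3 + 4 = 7, so the answer is 7.", "7"))  # 1.0
print(reasoning_reward("The total is 8.", "7"))                 # 0.0
```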
**Results**: We compare the performance of our generated query-specific prompts against the 14 human-designed prompts (which we used as the pseudo-stimulus to constitute the SFT training set) and a prompt discovered by the APE approach [4]. Note that all 15 of these prompts are general, task-specific prompts applied to the whole test set. By contrast, our proposed DSP generates query-specific prompts to trigger the LLM's CoT reasoning. The performance comparison is shown in the table below.
| No. | Category | Zero-shot CoT Trigger Prompt | MultiArith | AQuA |
| ---- | ---- | ---- | ---- | ---- |
| 1 | Human-Designed | Let's think step by step. | 79.6 | 31.9 |
| 2 | Human-Designed | We should think about this step by step. | 81.2 | 28.7 |
| 3 | Human-Designed | First, | 78.0 | 38.2 |
| 4 | Human-Designed | Before we dive into the answer, | 54.8 | 27.2 |
| 5 | Human-Designed | Proof followed by the answer. | 58.4 | 37.8 |
| 6 | Human-Designed | Let's think step by step in a realistic way. | 59.6 | 33.9 |
| 7 | Human-Designed | Let's think step by step using common sense and knowledge. | 80.0 | 34.3 |
| 8 | Human-Designed | Let's think like a detective step by step. | 73.6 | 24.0 |
| 9 | Human-Designed | Let's think about this logically. | 75.2 | 34.7 |
| 10 | Human-Designed | Let's think step by step. First, | 78.8 | 32.3 |
| 11 | Human-Designed | Let's think | 56.8 | 38.2 |
| 12 | Human-Designed | Let's solve this problem by splitting it into steps. | 72.4 | 32.3 |
| 13 | Human-Designed | The answer is after the proof. | 42.8 | 34.3 |
| 14 | Human-Designed | Let's be realistic and think step by step. | 69.6 | 29.9 |
| 15 | APE [4] | Let's work this out in a step by step way to be sure we have the right answer. | 81.6 | 34.3 |
| 16 | DSP w/ SFT | *Query-specific* | 75.2 | 35.8 |
| 17 | DSP w/ SFT+RL | *Query-specific* | **82.4** | **38.6** |
As can be seen, text-davinci-002's performance varies significantly across different task-specific prompts. Compared to the 14 task-specific human-designed prompts, DSP enhances the performance of text-davinci-002 with query-specific prompts, and it also outperforms the prompt discovered by the APE approach [4]. Relying solely on supervised fine-tuning of the policy model with the dataset built from the 14 human-designed prompts does not reach peak performance; after fine-tuning with RL, the policy model is encouraged to explore better query-specific trigger prompts, further improving performance. Some of the newly generated trigger prompts include:
- "*Let's think like a detective step by step. First,*"
- "*Let's solve this problem by splitting it into steps. First,*"
- "*First step:*"
- "*Let’s think step by step. First*",
- "*Let's think step by step using our creative brains.*"
- "*Let's think step by step using both the above information and the testing.*"
- "*Let's think step by step using proven methods.*"
Overall, the results provide further evidence that DSP could provide fine-grained query-specific guidance to LLMs to align them better with the desired output (chain-of-thought reasoning with correct answers in this case).
Further details will be provided in the updated version of our paper.
**References**:
[1] Roy, Subhro, and Dan Roth. "Solving general arithmetic word problems." arXiv preprint arXiv:1608.01413 (2016).
[2] Ling, Wang, et al. "Program induction by rationale generation: Learning to solve and explain algebraic word problems." arXiv preprint arXiv:1705.04146 (2017).
[3] Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[4] Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., and Ba, J. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces Directional Stimulus Prompting (DSP), a new prompting framework that introduces directional stimulus into the prompt, which could provide black-box LLMs with fine-grained and query-specific guidance toward the desired outputs.
The experiments on summarization and dialogue response generation tasks demonstrate the effectiveness of this approach. Notably, on the MultiWOZ dataset, the framework enables ChatGPT to achieve a remarkable 41.4% improvement in its combined score with only 80 dialogues.
Strengths: 1. It is quite novel that the paper proposes DSP, a prompting framework for guiding black-box LLMs toward desired outputs, which combines supervised fine-tuning and reinforcement learning to further optimize the model.
2. The experimental setting is quite detailed, since the paper uses varying numbers of training samples from the datasets and several evaluation metrics for ease of display and comparison.
Weaknesses: 1. There are too few datasets in the experiment section. Both tasks were conducted on a single dataset, which cannot fully demonstrate the effectiveness of the framework.
2. On Page 5, Line 151, the experiment section of the paper is not enough to prove this conclusion. It is not rigorous to claim that the framework can be flexibly applied to various types of LMs and generation tasks after conducting experiments on only two tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for acknowledging the strengths of our paper and offering valuable feedback. We aim to address your concerns in the following response.
Regarding the concerns about the scope of our experiments, we expanded our experiments with an additional reasoning task on two reasoning datasets, MultiArith [1] and AQuA [2] (details are provided in the [global author rebuttal](https://openreview.net/forum?id=UvIN8oQ4uI&noteId=ztu9uesRIz)).
While current prompting methods typically employ general task-specific prompts, LLMs exhibit sensitivity to these prompts; consequently, a general prompt might not always be the optimal choice for every scenario. In our extended experiment, we leverage DSP to generate query-specific trigger prompts for chain-of-thought reasoning. Our results show that text-davinci-002's zero-shot chain-of-thought reasoning performance is indeed highly sensitive to the trigger prompt used. With the query-specific prompts generated by the policy model, DSP improved text-davinci-002's performance compared with 14 different human-designed task-specific prompts [3] and a prompt automatically discovered by the APE approach [4]. This study offers further evidence that our framework can provide LLMs with fine-grained query-specific guidance to align them with desired outputs (i.e., conducting chain-of-thought reasoning to derive the correct answer in this case).
We hope that our response and the expanded experiments can address your concerns.
**References**
[1] Roy, S. and Roth, D. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016.
[2] Ling, Wang, et al. "Program induction by rationale generation: Learning to solve and explain algebraic word problems." arXiv preprint arXiv:1705.04146 (2017).
[3] Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[4] Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., and Ba, J. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022. | null | null | null | null | null | null |
UP-NeRF: Unconstrained Pose Prior-Free Neural Radiance Field | Accept (poster) | Summary: This paper solves an interesting problem — joint pose and NeRF optimization on in-the-wild image collections. Unlike prior works such as BARF, this work aims at handling unconstrained images with varying illumination and transient occluders. To tackle this problem, this work incorporates learnable camera parameters, depth prior, and semantic features from DINO as supervision into the NeRF-W framework. Experiments demonstrate that the proposed method outperforms BARF and its variants on unstructured Internet images.
Strengths: 1. This work identifies an interesting and important research problem — joint pose and NeRF optimization on in-the-wild images. Prior works on the joint pose and NeRF optimization are limited to controlled settings.
2. The proposed method is intuitive and combines the NeRF-W framework and several components (e.g., depth prior, semantic features, etc.) from other works (such as NoPe-NeRF).
Weaknesses: 1. The main contribution of this paper is the problem setting — pose and NeRF estimation in the wild. The proposed method combines NeRF-W and BARF, which has limited technical novelty.
2. The presentation may be improved. Figure 1 and Figure 2 have not been referred to and explained in the main text.
3. Missing references: Meng et al. GNeRF: GAN-based Neural Radiance Field without Posed Camera. ICCV 2021
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the effect of using semantic features as supervision, i.e. L_feat?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I did not find discussions on the limitations of this paper. It would be great to discuss the limitations and analyze the possible reasons and future works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We appreciate your constructive feedback on our work and we will reflect them. Below are our responses to the reviewer's questions.**
---
**Comment 1**: The proposed method has limited technical novelty.\
**Answer**: As the reviewer acknowledged, unposed NeRF in outdoor scenes is a challenging problem, and only a handful of recent works (e.g., NoPe-NeRF) have partially addressed it (with clean images, NOT an in-the-wild setting). As reviewer qVYM and others mentioned, we believe that our work is novel. Furthermore, as shown by BARF-W in Tables 1 and 2, a simple combination of NeRF-W and BARF fails to optimize unposed NeRF with unconstrained images.
To solve the problem, we proposed 4 novel techniques: the candidate head (Sec. 3.1), feature-surrogate bundle adjustment (Sec. 3.2), an isolated transient network (Sec. 3.3), and a transient-aware depth prior (Sec. 3.4).
---
**Comment 2**: Figure 1 and Figure 2 have not been referred to and explained in the main text.\
**Answer**: We will add more explanations in the main text with explicit figure numbers.
---
**Comment 3**: Missing references.\
**Answer**: Thank you for letting us know about the missing reference. We will include it in the final version.
---
**Comment 4**: What is the effect of using semantic features as supervision, i.e. L_feat? \
**Answer**: As mentioned in lines 149-152 of the main paper, because of the diverse appearances of the images, it is difficult to optimize the model with the photometric loss. In contrast, DINO features (semantic features) are less sensitive to appearance variation, as shown in Figure 1. Therefore, by using semantic features as supervision, UP-NeRF succeeds in jointly optimizing NeRF and camera poses, unlike prior methods based on photometric loss (see Tables 1 and 2 and Figure 3).
---
**Comment 5**: I did not find discussions on the limitations of this paper.\
**Answer**: We discussed the limitations in the supplement, and additionally shared some failure cases in the Fig. 3 in pdf.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment:
1. For the response to "What is the effect of using semantic features as supervision, i.e. L_feat?"
I understand the intuition behind this loss function. The proposed method involves many other objective functions besides the photometric loss commonly used in prior works. I would suggest adding a quantitative evaluation of the L_feat to Table 3.
2. For the response to "I did not find discussions on the limitations of this paper."
I'm not sure if I understand "... additionally shared some failure cases in Fig. 3". The visualizations in Fig. 3 seem to be the cases where the proposed method "achieves comparable quality to reference NeRF with perfect camera poses".
Nevertheless, this work explores a very interesting problem and provides a reasonable solution. I'm happy to raise my rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate your support for our work.
We provide the ablation study of $L_{\text{feat}}$ at Brandenburg Gate below, and will add other scene results and reflect them in the final version.
| Method | w/o feature-surrogate | Ours |
| --- | --- | --- |
| Rotation (°) ↓ | 175.3 |0.797 |
| Translation ↓ | 3.846 | 0.148 |
| PSNR ↑ | 12.29 | 23.60 |
| SSIM ↑ | 0.645 | 0.801 |
| LPIPS ↓ | 0.666 | 0.180 |
---
We apologize for the inaccurate reference "… in the Fig. 3 in pdf". We added an **additional PDF in the Author Rebuttal**, and some failure cases are shown in **Fig. 3 of that PDF, not Fig. 3 of the main paper**. A discussion of the limitations can be found in **Section C of the supplementary PDF**.
Experiments show that this joint optimization pipeline successfully recovers camera poses in several in-the-wild scenes and leads to good NVS results comparable to those with COLMAP preprocessing, whereas the previous work of BARF and its variants all fail.
Strengths: ### S1 - Technically sound approach to a challenging problem
- The task of estimating camera poses on raw in-the-wild Internet photos using a gradient-descent-based optimization is challenging, due to noisy correspondences and local minima.
- The proposed method incorporates many good ideas from existing work and works well on a number of challenging scenes.
-- For instance, the use of self-supervised image features for registering semantic correspondences via rendering;
-- The mechanism to model transient objects using learned opacity. This paper further adapts the original volumetric opacity in NeRF-W to per-image opacity, which is claimed to be more effective.
-- The use of depth supervision from pre-trained models.
### S2 - Promising results
- The paper demonstrates good results on a number of challenging scenes using Internet photos, where existing methods clearly fail.
Weaknesses: ### W1 - Complicated pipeline
- The resulting pipeline is complicated, involving many components, eg, 6 MLPs in total, and easily becomes confusing. It took me several passes back and forth to understand the exact implementation.
- It also requires a heavily crafted training schedule, gradually activating and deactivating some of the components.
- A critical concern with such a pipeline is its robustness across various scenes. Would one need to fight against all the hyperparameters and the training schedule when training on other scenes? I strongly suggest the authors also present results on other standard datasets (W3).
### W2 - Unclear motivation and potential redundancy of some technical designs
- It is unclear to me why the per-image "candidate embeddings" can help with the pose optimization. It seems the motivation is to allow some of the per-image variations (multi-view inconsistencies) to be factored into these per-image embeddings. The paper presents ablation results which shows numerical benefits of this component, but why is it useful for pose estimation?
- Are they still useful in general in the case of a multi-view static scene without transient entities?
- Isn't this the job of the separate mechanism for handling transient regions?
- Also, it is confusing to me why the transient confidence weights $\mathcal{W}_i^{\text{depth}}$ are calculated from the candidate densities $\sigma^{(c)}$, rather than from the transient opacities $\alpha^{(\tau)}$. Are there some redundancy between the two?
### W3 - Only on one dataset
- The paper only presents results on one dataset, consisting of Internet photo collections of 4 scenes, which the method is specifically tailored to.
- However, the paper claims to solve general "unposed NeRF". I strongly recommend the authors to also test it on standard multi-view datasets, eg DTU, CO3D etc, to assess the robustness of this complicated pipeline.
### (minor) W4 - Do intrinsics matter?
- How are the camera intrinsics obtained? From COLMAP? If so, this would undermine the value of avoiding cumbersome COLMAP preprocessing.
- Or are they estimated or assumed to be some value for all images?
### Other comments
- There are a few existing works that leverage DINO for pose estimation of objects, which the authors should consider referencing, eg: Zero-Shot Category-Level Object Pose Estimation [1], LASSIE [2], MagicPony [3].
- Line 125: when the "candidate embeddings" are first introduced, it was not immediately obvious to me they are per-image embedding vectors. Also, are they jointly optimized? The term "candidate head" is also quite obscure to me.
- Fig 2: the blue arrow in the middle connecting $\hat{\mathbf{F}}_i^{(c)}$ to $\theta_4$ is slightly inaccurate, as the $\hat{\mathbf{F}}_i^{(c)}$ is the result of ray integration, whereas the input to $\theta_4$ should be the raw per-point feature $\hat{\mathbf{f}}_i$, if I understand correctly.
- What is the computational overheads compared to vanilla NeRF?
### References:
- [1] Zero-Shot Category-Level Object Pose Estimation. ECCV 2022.
- [2] LASSIE: Learning Articulated Shape from Sparse Image Ensemble via 3D Part Discovery. NeurIPS 2022.
- [3] MagicPony: Learning Articulated 3D Animals in the Wild. CVPR 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Clarify the motivation for the "candidate embeddings";
2. If possible, present more evaluation results on other datasets **with the same hyperparameters and training schedule**, to validate the robustness;
3. It might be difficult as the pipeline is indeed complicated, but if possible, it would be helpful to further simplify the notations, in particular, the super- and sub-scriptions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors included a brief paragraph on the limitation of the image features. I would expect some discussion on the robustness of the proposed pipeline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We appreciate the detailed comments and suggestions. We address all the questions below. We hope that our answers resolve the reviewer's concerns and lead to support.**
---
**Answer of "W1 - Complicated pipeline with hyperparameters"**
Although our method has several hyperparameters, it is relatively robust to their choice. Here we analyze the effect of the candidate embedding size $\ell^{(c)}_i$. Our experiments on the Phototourism dataset show that when the candidate embedding size is reasonably large $(\ge 16)$, the method stably achieves good performance. All other hyperparameters are set in the same way as in previous works.
|Size of $\ell^{(c)}_i$ |0|16|32|64
| --- | --- | --- | --- | --- |
|Rotation (°) ↓|4.358|1.225|1.057|**1.047**
|Translation ↓|0.843|0.164|**0.147**|0.152
|PSNR ↑|21.45|23.26|**23.47**|23.40
|SSIM ↑|0.745|**0.800**|0.799|**0.800**
|LPIPS ↓|0.294|**0.181**|0.183|0.183
---
**Answer of "W2-1 Motivation and more explanations of candidate head."**
+ **Candidate head (motivation and intuition)**: \
Thank you for asking for more explanations of our contributions. We provided more explanations of the candidate head in the General Response (Author Rebuttal) above. Please refer to it.
+ **The usefulness of candidate embedding in the case of a multi-view static scene without transient entities**:\
Our candidate head prevents images with hard poses from falling into local minima; handling transient regions is only an additional role. To validate the usefulness of the candidate embedding, we conducted additional experiments on the BLEFF dataset presented by the authors of NeRF-- [3], a synthetic dataset without transient entities whose poses are perturbed by random rotations and translations. As the table below shows, the candidate embedding yields more robust results as the image poses become more perturbed.
| Scene | Perturb | PSNR ↑ | | SSIM ↑ | | LPIPS ↓ | | Rot (°) ↓ | | Trans↓ | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ----- | --- |
| | |BARF|BARF+$\ell^{(c)}_i$|BARF |BARF+$\ell^{(c)}_i$|BARF |BARF+$\ell^{(c)}_i$|BARF |BARF+$\ell^{(c)}_i$|BARF |BARF+$\ell^{(c)}_i$
| Classroom | 20 | 25.43|26.13 (**+0.70**) | 0.877|0.893 (**+0.016**) | 0.133|0.105 (**-0.028**) | 0.265|0.293 (+0.028) | 0.009|0.022 (+0.013) |
| Classroom |30 | 12.36|21.36 (**+9.00**) | 0.480|0.806 (**+0.326**) | 0.749|0.183 (**-0.566**) | 29.17|6.182 (**-22.99**) | 2.176|0.354 (**-1.822**) |
||||||||||||
| Bed |20 | 37.42|37.06 (-0.36) | 0.969|0.967 (-0.002) | 0.050|0.048 (**-0.002**) | 0.839|0.296 (**-0.543**) | 0.243|0.230 (**-0.013**) |
| Bed |30 | 13.61|32.47 (**+18.86**) | 0.449|0.890 (**+0.441**) | 0.897|0.142 (**-0.755**) | 141.8|7.077 (**-134.7**) | 16.57|10.43 (**-6.14**) |
**Answer to "W2-2 Redundancy of some technical designs."**
+ **Redundancy between the transient opacities and the candidate densities**: \
If we used the transient network in the early stage of training, it would absorb static regions as well as transient regions because the camera poses have not yet been learned. Therefore, we use the candidate densities for the transient confidence weights.
---
**Answer of "W3 - Only on one dataset"**
As mentioned above, we conducted additional experiments on the BLEFF dataset, a standard multi-view dataset, using the same hyperparameters and training schedule. We will add more scene results in the final version.
---
**Answer of "W4 - (minor) W4 - Do intrinsics matter?"**:
As in previous works such as BARF, SPARF, and NoPe-NeRF, we assume that the intrinsic parameters are given; our work focuses only on estimating the extrinsic parameters (camera pose). In real-world applications, intrinsic parameters can be obtained from image metadata, or estimated by leveraging previous works [1, 2].
---
**Other comment 1**: Missing references.
**Answer**: Thank you for letting us know about the missing references. We will include them in the final version.
---
**Other comment 2**: Fig 2 is slightly inaccurate.
**Answer**: We appreciate your pointing out an inaccurate connection in our figure. The updated version is available in Fig. 1 of the PDF.
---
**Other comment 3**: What is the computational overheads compared to vanilla NeRF?\
**Answer**: Vanilla NeRF is not suitable for in-the-wild datasets, so we conducted comparisons with NeRF-W; see the table below. NeRF-W needs to obtain camera poses with COLMAP, which takes over 25 hours. In contrast, 10 minutes is enough for UP-NeRF to prepare for training, including obtaining DINO features and DPT monocular depth predictions.
| | NeRF-W | | UP-NeRF | |
| --- | --- | --- | :---: | --- |
| | Preprocessing (COLMAP) | Training | Preprocessing (DINO&DPT) | Training |
| Brandenburg Gate | 29h 29m | 50h 30m | 10m | 46h 30m|
| Sacre Coeur | 25h 27m | 50h 30m | 10m | 46h 30m|
---
**Other comment 4**: Complicated notations.\
**Answer**: We agree with your opinion; we will try to simplify the notation.
---
### References:
[1] DeepCalib: A Deep Learning Approach for Automatic Intrinsic Calibration of Wide Field-of-View Cameras
[2] DeepPTZ: Deep Self-Calibration for PTZ Cameras
[3] NeRF−−: Neural radiance fields without known camera parameters.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: The rebuttal addressed my two main concerns pretty well. The additional explanation of the 'candidate head' makes the motivation much clearer. Also, the additional evaluation on BLEFF and the ablation with the 'candidate head' on those static scenes are quite useful in demonstrating both the robustness and efficacy of the proposed approach. I would highly recommend including more results in the final version. I'm happy to raise my rating to accept.
---
Reply to Comment 1.1.1:
Comment: We appreciate your generous appraisal. We will include more comprehensive experiments and results in the final version.
However, we observed that the rating remains unchanged. If this is not a mistake, we would be very grateful if you could let us know the reason for this decision so we can further improve our work.
Edit: I have read the rebuttal, and given the scores from the other reviewers, I would like to keep my score.
Strengths: * The idea is novel, interesting and timely.
* The results show good improvements over the current s-o-t-a.
* The experiments section is thorough enough to demonstrate the qualities of the proposed approach.
Overall, while my negatives might sound long and would seen to outweigh the positives, I think it brings a potentially valuable contribution to the community, so I very much support the acceptance of the paper. My main worry is about the somewhat confusing and short explanation of the core contribution of the “candidate” heads, which should be expanded and improved.
Weaknesses: * The paper is somewhat confusing to read at times. For example, “unconstrained images” are introduced and used without introducing the term. The meaning does become clear later in the paper, but it would be good to introduce things earlier on. Similarly, I do not see much point for Figure 1, as features have been used for this type of matching for years.
* Even though the literature review section of the paper is quite good, I think the paper does anchor things a bit too much on BARF, and ignores other works such as NoPe-NERF / GARF / etc in the comparisons. I do understand that some of these works were arxiv at the time of the submission (but are publications now), so this should not penalise the paper too much. That being said, it’d be nice if some of these comparisons could be added.
* I also found some of the notation difficult to follow initially, as I could not find any outline of its meaning. An example is the (c) in line 126, which is not explained.
* Looking at the core contribution of the paper (i) the intuition (i.e. the “roughly speaking … “ part at line 125+) could be expanded and (ii) the size of the embedding should have been ablated in the results section.
* The results section could have been expanded e.g. with (i) the ablation noted above, (ii) extra ablations where, e.g., the feature matching part is turned off.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Figure 5 seems to show a case where the transiency is wrong … should the horse really be transient?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: * To some extent, but the paper could benefit from a more clear failure case section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the detailed review and comments. We appreciate your support for our work. The questions will be addressed below.**
---
**Comment 1-1**: The definition of “unconstrained images” should be introduced earlier on. \
**Answer**: We appreciate the good point. We will add the definition of “unconstrained images” in line 23, where the concept is first introduced. \
\
**Comment 1-2**: Similarly, I do not see much point for Figure 1, as features have been used for this type of matching for years. \
**Answer**: There have been many works that use deep features to find matching points between images. However, there have been no attempts to use feature fields to optimize NeRF and pose jointly. This approach, optimizing the feature field instead of the radiance field, succeeded in tackling unconstrained images, unlike prior works.
---
**Comment 2**: The review section has too much focus only on BARF. \
**Answer**: As discussed in lines 242-246 of the main paper, although NoPe-NeRF is the most relevant baseline that succeeded in training unposed NeRF on outdoor scenes, it requires the training data to be successive frames of a video sequence, which the Phototourism dataset is not. So we cannot obtain results for NoPe-NeRF. However, we agree with you that we need more comparisons, so we share the quantitative results of GARF below. Like BARF, it fails to estimate the poses because it uses a photometric loss for unconstrained image optimization.
| Scenes | PSNR ↑ | SSIM ↑ | LPIPS ↓ | Rotation (°) ↓ | Translation ↓ |
| --- | --- | --- | --- | --- | --- |
| | GARF / Ours | GARF / Ours | GARF / Ours | GARF / Ours | GARF / Ours |
| Brandenburg Gate | 07.75 **25.79** | 0.211 **0.881** | 1.074 **0.122** | 133.5 **0.426** | 4.659 **0.058** |
| Trevi Fountain | 10.82 **22.44** | 0.218 **0.693** | 1.112 **0.233** | 154.4 **1.498** | 7.232 **0.127** |
| Taj Mahal | 11.71 **25.05** | 0.268 **0.839** | 1.033 **0.203** | 91.13 **0.619** | 5.442 **0.183** |
| Sacre Coeur | 10.53 **21.10** | 0.204 **0.791** | 1.084 **0.160** | 82.54 **1.516** | 11.64 **0.224** |
| Mean | 10.20 **23.60** | 0.225 **0.801** | 1.076 **0.180** | 115.4 **0.797** | 7.243 **0.148** |
---
**Comment 3**: Difficulty of notation in the paper. \
**Answer**: We will simplify the notation and reflect the changes in the final version.
---
**Comment 4**: Intuition of candidate embedding and ablation study of its size. \
**Answer**: We have added a more detailed explanation of the candidate head to the Author Rebuttal, and we share the ablation study on candidate embedding size below. The performance is not very sensitive to the candidate embedding size; however, there is a large difference when the candidate embedding is not used.
| Size of $\ell^{(c)}_i$ | 0 | 16 | 32 | 64 |
| --- | --- | --- | --- | --- |
| Rotation (°) ↓ | 4.358 | 1.225 | 1.057 | **1.047** |
| Translation ↓ | 0.843 | 0.164 | **0.147** | 0.152 |
| PSNR ↑ | 21.45 | 23.26 | **23.47** | 23.40 |
| SSIM ↑ | 0.745 | **0.800** | 0.799 | **0.800** |
| LPIPS ↓ | 0.294 | **0.181** | 0.183 | 0.183 |
---
**Comment 5**: More ablation studies \
**Answer**: We agree with you, and we have run several additional ablation experiments, including one with the feature matching part turned off. The results on Brandenburg Gate are below. As we mentioned in lines 149-152 of the main paper, it is difficult to estimate pose using a photometric loss because of the diverse appearances of images. As expected, UP-NeRF without the feature-surrogate fails to estimate the poses.
| Method | w/o feature-surrogate | w/o scheduling | Ours |
| --- | --- | --- | --- |
| Rotation (°) ↓ | 175.3 | 172.7 | **0.797** |
| Translation ↓ | 3.846 | 3.796 | **0.148** |
| PSNR ↑ | 12.29 | 11.33 | **23.60** |
| SSIM ↑ | 0.645 | 0.607 | **0.801** |
| LPIPS ↓ | 0.666 | 0.615 | **0.180** |
---
**Comment 6**: Figure 5 seems to show a case where the transiency is wrong … should the horse really be transient? \
**Answer**: Static (non-transient) objects are objects that are always present, but the horse statue in Figure 5 does not appear in every image; it is a temporary installation. | Summary: The paper proposes a novel method for optimizing NeRF without a pose prior and on in-the-wild image collections containing transient occluders and varying lighting. The main contributions are four-fold. Firstly, the authors propose a candidate head for NeRFs that uses image-level representations via a learned embedding to compensate for inaccurate poses during early optimization. Secondly, learning a view-independent feature field based on DINO features as an intermediate representation increases the robustness of the joint optimization w.r.t. varying lighting, weather and time. To reduce the impact of transient occluders, the authors suggest using a separate network that predicts occluders at the 2D image level based on the feature maps. Additionally, to achieve higher accuracy in the geometry, the authors apply monocular depth supervision on regions without occluders. Experiments are conducted on four scenes of the Phototourism dataset with initial camera poses set to the identity.
Strengths: 1) The paper is very nicely presented and easy to read.
2) The paper tackles the challenging and relevant problem of in-the-wild NeRF reconstruction without given poses. The contribution is clear and solves different subproblems that emerge in this domain, e.g. transient occluders.
3) Most parts are well motivated, the methodology is technically sound and experimentally justified, e.g.:
- Candidate head sounds plausible and improves pose optimization and image quality significantly, see Figure 4 and Table 3.
- Feature field optimization seems to help for in-the-wild images, see Table 3.
- Depth supervision improves performance, see Table 3 and Figure 5.
4) The experimental section contains a comparison to BARF, adapted variants with additional supervision cues and numbers for NeRF-in-the-wild. It appears to be a plausible choice and supports the contribution of the paper.
5) The authors provide code in the supplementary materials.
Weaknesses: 1) The candidate head is introduced to output color and density (Equation 4); however, it is later used to predict features (see Figure 2). This can be misleading for readers at first.
2) One of the main limitations of NeRF based methods is the optimization time and an analysis is missing on that. It would be good to discuss the optimization time for the method and the baselines. To get a sense if the time and computational effort is comparable to COLMAP + NeRF-W and others.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) The explanation of the loss schedule applied to the candidate head training makes sense, however there is no ablation study. What would happen if the weight is not reduced, and why should there be a negative impact from the loss?
2) Can you explain the z(t) in equation 1?
3) Would it make sense to introduce a candidate head with feature output in equation 4?
4) Please provide evidence about the computation time, see Weakness 2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are discussed in the supplementary.
I'd suggest that the authors discuss the overall optimization time as a general limitation of the method, if applicable. There are state-of-the-art NeRF architectures, such as Instant-NGP, that facilitate optimization in a few minutes instead of hours or days, which might be a good follow-up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1**: Mismatch between Equation 4 and Figure 2. \
**Answer**: In lines 158-159, we mentioned the loss $\mathcal{L}_{\text{rgb}}$ is replaced with the loss $\mathcal{L}_{\text{feat}}$, but readers can be confused because feature-surrogate bundle adjustment (Sec 3.2) was described after candidate head (Sec 3.1). We will revise the text (e.g. reorder the sections) so that it can be delivered more clearly.
---
**Comment 2-1**: Analysis of computation time between baseline and UP-NeRF.\
**Answer**: Here, we share a comparison of training time, including preprocessing time. Our method requires relatively less training time. Also, NeRF-W needs to obtain camera poses with COLMAP, which takes over 25 hours as a preprocessing step. In contrast, 10 minutes is enough for UP-NeRF (ours) to prepare the DINO features and DPT mono-depth predictions for training.
| | NeRF-W | | UP-NeRF | |
| --- | --- | --- | --- | --- |
| | Preprocessing (COLMAP) | Training | Preprocessing (DINO & DPT) | Training |
| Brandenburg Gate | 29h 29m | 50h 30m | 10m | 46h 30m |
| Sacre Coeur | 25h 27m | 50h 30m | 10m | 46h 30m |
**Comment 2-2**: Long optimization time as a general limitation of NeRF. Incorporate the proposed method into faster models such as InstantNGP as a future direction.\
**Answer**: That is a great suggestion; it will be good future work. UP-NeRF only requires about 30 minutes of preprocessing time, so using faster models such as Instant-NGP will greatly reduce the training time. According to our quick research, Instant-NGP has not yet been proven effective on in-the-wild datasets. We believe tackling that challenge and applying the proposed method will lead to interesting technical contributions.
---
**Comment 3**: Ablation study of loss scheduling \
**Answer**: In our final loss, Eq. (16), the schedule is crucial for the initial pose optimization because of color inconsistency. We provide the ablation study of loss scheduling on Brandenburg Gate in the table below. Without loss scheduling, significant degradation was observed.
|Method |w/o scheduling|Ours|
| --- | --- | --- |
|Rotation (°) ↓|172.7|**0.797**|
|Translation ↓|3.796|**0.148**|
|PSNR ↑|11.33|**23.60**|
|SSIM ↑|0.607|**0.801**|
|LPIPS ↓|0.615|**0.180**|
For more explanation of the candidate head, please refer to the Author Rebuttal.
---
**Comment 4**: Explanation of $\mathbf{z}(t)$ in equation 1. \
**Answer**: We apologize for not explaining the $\mathbf{z}(t)$ notation properly. $\mathbf{z}(t)$ is the intermediate feature of the MLP, i.e. the output of MLP $\theta_1$. It then passes through both MLPs $\theta_2$ and $\theta_3$.
---
**Comment 5**: Would it make sense to introduce a candidate head with feature output in equation 4?\
**Answer**: The MLP parameterized by $\theta_1$ is a base network shared by two heads: the shared head and the candidate head. The feature $\mathbf{z}(t)$ is the output of MLP $\theta_1$ and is a transformation of the positional information. This feature (or transformed positional embedding) is fed into MLP $\theta_3$ as input.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing a detailed explanation. After reading all reviews and the rebuttal, I keep my score and recommend accepting the paper. | Rebuttal 1:
Rebuttal: Multiple reviewers asked about our Candidate Head. We address the common comment below.
Q. **Additional explanation of Candidate Head**
During the initial phases of joint training of NeRF and camera pose estimation, NeRF struggles to accurately capture intricate scene details. This limitation is particularly pronounced for images whose poses are hard to infer based solely on coarse information. We refer to these images as "hard-pose images." These inaccurately aligned images subsequently hinder the NeRF training, introducing erroneous supervision.
To mitigate this problem, we proposed a novel architecture with two distinct NeRF heads: the shared head and the candidate head. The intuition is to consider easy/distinct pieces first to complete a large jigsaw puzzle and handle hard pieces later. The shared head is responsible for generating essential parameters, namely density $\sigma$ and color $\hat{\mathbf{c}}$, which are shared across all images. On the other hand, the candidate head generates unique parameters, including density $\sigma^{(c)}_i$ and color $\hat{\mathbf{c}}^{(c)}_i$, tailored to each individual image. In the initial stages of training, the shared head is predominantly learned by easy-pose images. Meanwhile, the candidate head is primarily trained by hard-pose images. This remedy ensures that the candidate head effectively prevents the shared head from being influenced by incorrect guidance. As the shared head becomes accurate enough to facilitate the pose estimation of hard-pose images, these challenging instances are gradually incorporated into the shared head by our loss scheduling that adjusts the weights between the two heads, i.e. $w_{se}$ in Equation 7 of the main paper.
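The two-head mechanism and its loss scheduling can be sketched as follows. This is an illustrative toy in Python, not the authors' code; the linear ramp below merely stands in for the actual schedule of $w_{se}$ defined in Equation 7 of the paper.

```python
def blended_loss(shared_loss, candidate_loss, step, total_steps):
    """Blend the shared-head and candidate-head losses for one image.

    w_se ramps from 0 to 1 (a linear schedule over the first half of
    training is assumed here purely for illustration), so hard-pose
    images rely on their per-image candidate head early on and are
    folded into the shared head later in training.
    """
    w_se = min(1.0, step / (0.5 * total_steps))
    return w_se * shared_loss + (1.0 - w_se) * candidate_loss
```

Early on (`w_se` near 0) the gradient flows mostly through the per-image candidate head, shielding the shared head from incorrectly posed images; once poses improve, `w_se` approaches 1 and the shared head takes over.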
Pdf: /pdf/6225a82cd6bb3b8a5c550305085eacd0c2e02018.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a joint camera pose and NeRF optimisation method that can handle transient scenes, including moving objects and various light conditions, by integrating NeRF-W, BARF, NoPe-NeRF, and DINO-based feature-metric loss in a sophisticated way.
The method is evaluated on four scenes in the Phototourism dataset, showing good performance compared with (modified) BARF baselines.
---
**After rebuttal**: I have read authors' rebuttal and it addresses my concerns.
Strengths: It’s a challenging task to estimate camera poses in scenes with moving objects and different light conditions, especially in a NeRF setup. This method proposes to make the pipeline more robust by considering
* Moving objects and lighting conditions similar to NeRF-W;
* Pose optimisation similar to BARF;
* Monocular depth un-distortion similar to NoPe-NeRF; and
* DINO-based Feature-metric.
In short, the proposed method is novel and leads to promising results. It’s also a plus that code is also provided in supplementary.
Weaknesses: I think a primary concern is how robust the pose estimation is in more scenes. The method is only evaluated in 4 scenes. Is it possible to have an experiment, especially for pose estimation successful rate on more scenes?
I understand that it takes a long time for NeRF to reach the best rendering quality, but we should be able to tell if the pose optimisation is successful far before NeRF converges. For example, we can consider pose estimation successful if rotation error is lower than $\theta$ degrees in $x$ epochs. In this way, we can see the pose estimation success rate in many more scenes.
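The suggested success-rate protocol is simple to compute; below is a minimal sketch (our own illustration with arbitrary example numbers, not from the paper) of the metric the reviewer describes:

```python
def pose_success_rate(rotation_errors_deg, threshold_deg):
    """Fraction of scenes whose rotation error, measured at the chosen
    early checkpoint (x epochs), falls below `threshold_deg` degrees."""
    hits = sum(1 for e in rotation_errors_deg if e < threshold_deg)
    return hits / len(rotation_errors_deg)

# e.g. hypothetical per-scene errors measured early in training:
rate = pose_success_rate([5.0, 30.0, 10.0, 50.0], threshold_deg=20.0)  # 0.5
```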
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the constructive comments, including new experimental settings. We address the reviewer's questions below.**
---
**Comment 1**: Experiments in more scenes.
**Answer**:
We provide more experimental results. As requested, we set an error criterion of 20 degrees, which was used by NeRF-- [3], to determine whether pose estimation was successful. At 10% of the total iterations, we measured whether the rotation error was below this criterion. The result is provided in Fig. 2 of the PDF. The experiments demonstrate that our method successfully estimates poses (< 20 degrees) in all five scenes below before 10% of the total iterations.
- **3 more scenes in phototourism dataset.** Since NeRF-W did not release pre-processed images of all scenes, we evaluated the model only in four scenes (Brandenburg Gate, Taj Mahal, Trevi Fountain, and Sacre Coeur) on the phototourism dataset in the main paper. We implemented the preprocessing step as similarly as possible and evaluated three more scenes (Buckingham Palace, Pantheon Exterior, Nara Temple).
- **2 more scenes in an outdoor image dataset.** We additionally present results on an outdoor image dataset with color variation and transient occluders, as in NeRF-OSR [2].
### References
[1] NIMA: Neural image assessment. IEEE TIP, 2018.
[2] NeRF-OSR: Neural Radiance Fields for Outdoor Scene Relighting. ECCV 2022.
[3] NeRF−−: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064, 2021.
---
Rebuttal Comment 1.1:
Title: thanks for additional results
Comment: Thanks for providing more results and addressing my concerns. I'll raise my final rating. | null | null | null | null | null | null |
Bayesian Optimisation of Functions on Graphs | Accept (poster) | Summary: This article proposes a Bayesian optimization approach to solve node-level tasks with graph-structured data. The presented framework, combined with three kernels that capture the covariance functions on graph-structured data, models each node by its ego-graph of a learnable size. The authors argue that this approach not only makes the optimization process tractable but also addresses the challenge that the graph information might not be entirely available. Experiments with synthetic and real-world tasks demonstrate that the presented framework achieves superior or competitive performance.
Strengths: 1. Novelty: good observation on the lack of Bayesian optimization framework on node-level tasks.
2. Soundness: the three kernels, especially the latter two, are well presented. Their functions in the BO process is well explained and persuasive.
3. Clarity: Figure 2 and Algorithm 1 help the paper with information presentation. Most entities are well-defined.
Weaknesses: 1. Significance: BO is usually utilized to approximate black-box functions that are expensive to evaluate. However, the description in the introduction section and the tasks in the experiments fail to demonstrate the necessity of using BO.
2. Arguing the tractability of the framework, this paper did not discuss the time complexity. And there are no experiment results on running time either. (Solved)
3. Typo in Algorithm 1. 2: Output: the node that v^*_T that minimises the objective function. Extra "that", and "minimizes". (Solved)
4. Figure 2 wrong reference. "to the central node (Eq. (??) in the figure)". (Solved)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: If the graph information is not entirely available, how does breaking the entire graph into multiple ego graphs help? When the ego graph of a node selected in an early iteration is wrong due to missing links, the following iterations will make wrong judgments too.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We appreciate the reviewer’s comments and are glad they have positively remarked on our paper’s novelty, soundness and clarity. We believe that we have addressed the reviewer’s concerns below, and in light of this, we hope that the reviewer will consider increasing their rating.
> Significance: BO is usually utilized to approximate black-box functions that are expensive to evaluate. However, the description in the introduction section and the tasks in the experiments fail to demonstrate the necessity of using BO.
We will improve the presentation in the Introduction and Experiments as suggested and refer the reviewer to our [overall response](https://openreview.net/forum?id=UuNd9A6noD&noteId=FsjM9qNZ9q) on this point.
> Arguing the tractability of the framework, this paper did not discuss the time complexity. And there are no experiment results on running time either.
We include the time complexity analysis below, which we will add to the final version of the paper.
Firstly, as with all GP-based BO algorithms, BayesOptG scales cubically with the number of training points $N$: $\mathcal{O}(N^3)$, assuming no other efficient approximations are used. A unique challenge in the graph case is that the algorithm can also be bottlenecked by the *search space size*, as computing the kernels requires eigendecomposition of the Laplacian matrix of the graph, a basic algorithm for which scales as $\mathcal{O}(n^3)$, where $n$ is the number of nodes. Thus, the overall complexity is $\mathcal{O}(N^3 + n^3)$.
The restart and local modelling algorithm introduced in §3.2 ensures both terms are tractable. By placing the GP over the local subgraph only, $n$ is now upper-bounded by the maximum subgraph size instead of the size of the entire graph, which can be very large. By periodically restarting the GP when the optimisation gets stuck, we implicitly prevent $N$ from growing very large. This is why local modelling ensures the tractability of the framework.
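To make the $\mathcal{O}(n^3)$ term concrete, here is a generic numpy sketch (our own illustration, not BayesOptG's implementation) of the Laplacian eigendecomposition that diffusion-type graph kernels rely on; restricting it to the local subgraph keeps $n$ small:

```python
import numpy as np

def laplacian_eigs(adj):
    """Eigendecomposition of the combinatorial Laplacian L = D - A.

    np.linalg.eigh takes O(n^3) time, so confining the GP to a local
    subgraph bounds n by the maximum subgraph size rather than the
    size of the full graph.
    """
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigh(lap)  # eigenvalues in ascending order

# A 4-node path graph standing in for a small local subgraph:
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
eigvals, eigvecs = laplacian_eigs(A)  # eigvals[0] ~ 0 for a connected graph
```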
We also add the running time comparison the reviewer requested in Fig S6 of the rebuttal PDF to demonstrate this more concretely. It is, however, worth noting that, exactly as the reviewer mentioned previously, BO is commonly used to optimise expensive functions, so in real life the computational cost is likely dominated by the cost of querying the objective function; the number of objective function evaluations that we used in the paper is thus often a better proxy for the overall cost.
> Typo in Algorithm 1. 2: Output: the node that v^*_T that minimises the objective function. Extra "that", and "minimizes".
> Figure 2 wrong reference. "to the central node (Eq. (??) in the figure)".
We thank the reviewer and will correct these typos. We were trying to refer to the mathematical definition of the central node (Line 209).
> If the graph information is not entirely available, how does breaking the entire graph into multiple ego graphs help? When the ego graph of a node selected in an early iteration is wrong due to missing links, the following iterations will make wrong judgments too.
We believe this is a misunderstanding in our setup, and we will clarify this crucial aspect when we revise the paper.
When we mentioned that “graph information is not entirely available”, we refer to the setup in which we cannot access the graph in full – this is very common either due to size (e.g., we cannot access the entire Facebook graph as it is too large) or cost (e.g., in the contact tracing example, it is impractical to have the full information beforehand, as that would require us to interview and contact-trace everyone involved). What we can do is query a node and reveal the graph structure around it (the ego-network part), *noiselessly* – this is also a reasonable assumption: using the examples above, this is akin to viewing *a specific person’s* Facebook friends or conducting an exhaustive contact-tracing interview on *a specific person*. Crucially, at this step, we do not consider the case where the network structure revealed locally is noisy or otherwise erroneous (e.g. with missing or fake links), nor are we “breaking the entire graph into multiple ego graphs” as the reviewer suggests – we are simply revealing the structure on the fly.
In other words, when we say “graph information is not entirely available”, we mean that the global graph is not available to the BO agent initially, but it may query nodes and reveal local subgraphs around them when the optimisation proceeds. This does not mean we have a partial, noisy or otherwise inaccurate graph structure to begin with, which seems to be what the reviewer suggests. While we agree that studying such a more challenging setup is interesting and will include it in future works, this is out-of-scope of the current submission (especially in view that the current submission is one of the first attempts of using BO in this setup, as the reviewer also agreed on).
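This query-and-reveal interface can be made concrete with a minimal pure-Python sketch (hypothetical names, our own illustration; `query_fn` stands in for the oracle, e.g. one contact-tracing interview, that noiselessly returns a queried node's neighbours):

```python
from collections import deque

def reveal_ego_network(query_fn, centre, hops):
    """Reveal the `hops`-hop ego-network around `centre` on the fly.

    The global graph is never touched; only nodes the agent queries
    contribute edges to the revealed adjacency structure.
    """
    revealed = {centre: set()}
    frontier = deque([(centre, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # do not query beyond the ego-network radius
        for nb in query_fn(node):
            if nb not in revealed:
                revealed[nb] = set()
                frontier.append((nb, depth + 1))
            revealed[node].add(nb)
            revealed[nb].add(node)
    return revealed

# Toy "global" graph that the BO agent never sees in full:
GLOBAL = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
ego = reveal_ego_network(lambda v: GLOBAL[v], centre=0, hops=1)  # nodes {0, 1, 2}
```

As the optimisation proceeds, each new query enlarges the revealed portion of the graph, which is all the local GP model ever needs.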
---
Rebuttal Comment 1.1:
Comment: Thanks for your effort. I appreciate your presentation of the time complexity. I hope the necessity of using BO can be made clearer in the later version. Rating increased to 5.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their feedback and will make sure to incorporate suggested changes in the final version of the manuscript. | Summary: The authors consider the Bayesian optimization for functions defined on a graph (e.g., finding a node in a graph to min/max some function on that graph). The authors propose a local modeling approach for such problem on generic, large-scale and potentially unknown graphs. The authors demonstrate the advantages of the proposed approach on several experiments.
Strengths: + It is interesting to consider Bayesian optimization approach for a function on a graph (i.e., finding a node in a graph which minimizes/maximizes some function defined on that graph).
+ The local modeling makes the approach to scale up for large-scale graph, potentially unknown graph.
+ The authors also propose two kernels for Bayesian optimization for functions on graph (which are variant of the diffusion kernel on graph)
+ The authors provide extensive experiments for the proposed approach.
Weaknesses: + The experiments seem weak. It is unclear whether the authors have considered any large-scale graphs, or potentially unknown graphs, in the experiments in the main manuscript. It is also unclear, from the objective functions of the experiments, whether one actually needs Bayesian optimization for these tasks.
+ Some important parts need to be elaborated in more detail, especially the selection of the local subgraph and why the proposed approach can handle potentially unknown graphs.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The flow of the submission is clear. However, it is better in case the authors elaborate more details for some important parts.
+ Although the authors discuss in line 229-241, the proposed approach (e.g., local modeling) is closely related to the trust-region-BO. In my understanding, the local modeling is at the heart to make the approach handle possibly large-scale graph. In some sense, they share the same role.
+ The authors should elaborate on how to select the next “subgraph” (even if its size is given). Is the selected subgraph required to be connected? (It seems that the authors do not assume the graph is connected.) Is there any overlap between subgraphs at different iterations?
+ The authors should give more details for their experiments in the main manuscript, especially the size of the graphs and the objective functions being optimized. It is unclear whether the authors have yet illustrated the proposed approach on large-scale graphs (or potentially unknown graphs) for optimizing functions on graphs that are expensive to evaluate.
+ It is unclear how the proposed algorithm can handle unknown graphs as claimed. Could the authors elaborate with more details?
+ It is better to elaborate more details about the Algorithm 1 and 2 in the main text, especially the Algorithm 2.
+ For the strategy in lines 205-215, could the authors explain what happens when the algorithm finds a subgraph with a local minimum: does it get stuck in the region around that subgraph, or does it have some strategy to improve the objective function further (e.g., by exploring more distant regions)?
Some minor points:
+ What is $\tilde{A}$ in line 140? There is no explanation for it.
+ Reference for Equation in Figure 2
---
I thank the authors for the rebuttal.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have not discussed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback. It seems that the primary concern is 1) how our method deal with unknown graphs and 2) some details of BayesOpt. We address both below, and we hope the reviewer will consider increasing their rating in light of our response.
> The experiments seem weak.
First, as the reviewer acknowledged, our paper features an extensive experimental section, perhaps more so than many published works in BO, even though we are considering a novel setup with scarce prior works. Second, BayesOptG consistently outperforms the baselines despite the differences in underlying function properties. We believe both strongly support the strength of our experiments.
> Large-scale & unknown graphs
“Size”: We considered varying graph sizes and detailed them in the figure captions, ranging from thousands of nodes (synthetic tasks) to tens of thousands in Fig 9 (34,000 for Enron, 14,000 for the Facebook graph). In some experiments, the use of smaller graphs (of $\mathcal{O}(10^3)$) is attributable to the cost of computing the objective function (e.g. betweenness or eigenvector centrality) and not because our method cannot be used for large graphs. To illustrate this, we scale BayesOptG to 1,000,000-node graphs for some tasks in Fig S3 of the rebuttal PDF, where it retains strong performance.
“Potentially unknown graphs”: for all experiments, the full information about the graphs is *never* revealed a-priori (except for regression experiments in §5.1 to test the predictive powers), and thus from this perspective, all experiments feature “unknown” graphs to the BO agent at the start of the algorithm. The graph structure is only incrementally revealed to the BO as the optimisation proceeds: when BO queries a point, its ego-network, a local subgraph, is revealed.
> Why one needs BO for the tasks
Please see the overall response.
> details on local subgraph selection and why this handles unknown graphs
The subgraph selection is elaborated in Algorithm 2 and §3.2 – the approach can handle unknown graphs because it does not require the full graph information to be known a priori. Only when the BO queries a node is the graph structure of the ego-network centred around that node revealed to the BO agent. As such, the algorithm works as long as we can request to reveal the local information around the nodes we query.
> Although the authors discuss in line 229-241, the proposed approach (e.g., local modeling) is closely related to the trust-region-BO. In my understanding, the local modeling is at the heart to make the approach handle possibly large-scale graph. In some sense, they share the same role.
We’d be grateful if the reviewer could clarify the question in this remark; in the meantime, we will explain what we meant in the paragraph *Remarks on the relation to trust-region BO methods* in §3.2.
The reviewer is right to say that the local modelling makes scaling the method to large graphs possible, as we only impose GPs on the subgraph. However, as discussed above, local modelling also makes it possible to apply the algorithm to unknown graphs as, at any point in time, the algorithm *only requires information about this local graph* and works as long as we are capable of collecting and querying local graph information around the queried node.
It’s also worth noting that the primary objective of trust regions in previous BO algorithms is *not* to ensure scalability but to alleviate the curse of dimensionality of GPs by focusing on a promising subspace to reduce over-exploration [1]. In the typical BO setup, scalability is bottlenecked purely by the number of training points; unlike in our graph setup, the size of the search space *plays no role in scalability*. While trust-region BO, by periodically restarting the GP, resets the GP's training samples and is indeed more scalable than an algorithm that does not restart, we argue this is a side-effect of restarting rather than of trust regions. This is another difference in motivation between our method and previous trust-region BO methods.
[1] Eriksson et al. (2019). Scalable global optimization via local Bayesian optimization. NeurIPS.
> how to select the next “subgraph”
The procedure to select the next subgraph is in Algorithm 2. As mentioned in Line 209, the subgraph is constructed around the node with the best objective function seen so far.
> Does the selected subgraph require to be connected?
The subgraph is the ego-network centred around the best node seen so far, so it is, by definition, connected. Our algorithm still works when there are disconnected components, due to the restart mechanism (line 227): within one restart, the BO explores one connected component, but it may jump to another component after a restart, since the restart location is determined randomly.
> Overlap between subgraphs at different iterations
Yes – if the best objective value did not increase, the subgraph does not change from the previous iteration. Otherwise, it is re-centred around the new best node, but there will still be overlap (as the new best node must lie within the preceding subgraph) – as illustrated in Fig 1. An additional point to clarify is that the GP is always imposed over the *current* subgraph: when the subgraph changes, some evaluated nodes will no longer lie in it. We do not impose the GP over those points, so the training set does not accumulate all previous evaluations.
> is the algorithm stuck on that region around the subgraph, or does it have some strategy to improve more?
Please see our description of the restart mechanism to deal with this issue in Lines 227-228.
> tilde{A}
It is the adjacency matrix of the subgraph. We will clarify this in the revised manuscript.
> Equation in Figure 2
We will address this in the revised manuscript. We meant to refer to the mathematical definition of the central node, i.e. the best node seen so far (Line 209).
---
Rebuttal Comment 1.1:
Title: Clarification on the definition of "scalability" in the rebuttal
Comment: We'd like to provide a quick clarification regarding *scalability* when we discuss previous works in BO:
When we argued that "the main objective of trust regions in prior BO works is not for scalability" in the rebuttal, we were referring to the scalability in terms of *the number of training points* -- this is also the definition used in seminal papers like [1]. However, if we use a broader definition to include scalability to *the number of dimensions*, we agree that trust regions also help scalability in prior BO literature as they improve the performance of GPs in high dimensions.
Despite any definitional differences, our core argument still stands: in prior BO methods, trust regions are mainly used to improve GP performance in high dimensions rather than to accommodate more data. In our case, local subgraphs achieve both: they restrict the GP's attention to a promising subregion, as trust regions do, and also make BO scalable to more training points, since GPs defined on graphs also scale cubically w.r.t. the number of nodes $n$, in addition to the number of training points $N$, i.e. $\mathcal{O}(n^3+N^3)$. Using subgraphs ensures $n$ remains tractable -- empirical evidence of this is provided in Fig. S6 of the rebuttal PDF.
We thank the reviewer once again for their comments.
### References
[1] Liu, H., Ong, Y. S., Shen, X., & Cai, J. (2020). When Gaussian process meets big data: A review of scalable GPs. IEEE transactions on neural networks and learning systems, 31(11), 4405-4423.
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal. I have no other raised points.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer again for engaging in the rebuttal process. We hope that our response helped clarify the points raised by the reviewer and ease the understanding of our paper, and we are glad that the reviewer has no further outstanding concerns.
If there is improvement that we can still make at this stage that would make the reviewer evaluate our work more positively and raise their score, we would be very happy to do so. Thank you! | Summary: This paper solves an optimization problem defined on a graph. Since the problem defined on a graph requires the need to search for a solution in a combinatorial manner, it is a challenging problem. To tackle such a problem, the authors investigate diverse kernels on a graph and a local search strategy on a graph. Finally, they provide the experimental results that show the performance of the investigated methods.
Strengths: * The problem defined on a graph has not been studied much. This topic in particular is interesting in the Bayesian optimization community.
* Experiment section contains many things to discuss.
Weaknesses: * Analysis on which method is strong at some circumstances and which factor affects to the performance is lacked.
* Analysis on the investigated methods is lacked.
* Discussion on local search strategies is weak.
* Experimental results seem not consistent.
* Writing can be improved.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. As described above, the authors can provide more thorough discussion on which method is strong at some circumstances and which factor affects to the performance.
2. Moreover, the authors can discuss analysis on the investigated method in terms of kernel types and local search strategies. For example, a diffusion kernel is generally able to capture meaningful relationship between two nodes compared to polynomial and sum of inverse polynomials kernels. On the contrary, the other kernels show their effectiveness in some conditions. The discussion like this should be included in this paper in order to understand the algorithms appropriately.
3. Basically, the experimental results are hard to interpret. Figures are too small and lines are too thin.
4. The number of evaluations is too small.
5. Regardless of the visualization of the experimental results, there is no consistency I think. I do not much care about the inconsistency, but the authors need to discuss why it happens.
6. I cannot understand the experiment of team optimization. Each node represents a team, right? How many team members belong to a team? The size of team does not affect to the evaluation of team? I think a skill vector of each member is sum to one because it is drawn from the Dirichlet distribution. But if the sum of skill sets for team members exceeds 1, what happens? Is it just treated as 1 or the sum of skill values?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I do not think that this work has specific limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. It seems to us that the biggest concern comes from the discussions of the experimental results and the relative strengths of different methods in different situations. We agree with the reviewer that such a discussion would be helpful for users, especially in a practical setup, and we thank the reviewer for their suggestions. We include a detailed discussion on this point in the [overall response](https://openreview.net/forum?id=UuNd9A6noD&noteId=FsjM9qNZ9q), which will be incorporated into the manuscript, and we are confident that the issues pointed out in the review can be fixed when we update the manuscript. We hope the reviewer can consider increasing their rating in light of this.
> Experimental results seem not consistent. The authors need to discuss why it happens.
While we refer the reviewer to our overall response, we would like to emphasise that *our proposed algorithm, BayesOptG* (marked as “BO\_*” in the figures), *always outperforms or at least performs on par compared to the best baseline* (except for Fig 4 if we opt to use obviously inappropriate kernels such as BO_Diff (without ARD) against the local search, the strongest baseline – see our explanations below). **From this perspective, there is little inconsistency in experimental results**. While there are indeed differences between different kernel choices, as we discussed in the overall response, the fact that different kernels are suited for different problems and that the kernel choice often significantly impacts the modelling performance of GPs is highly expected for *any* GP-based technique. We argue that this is not a weakness of our method, but the fact that BayesOptG performs strongly *regardless of the kernel choice* demonstrates its robustness.
> Analysis on the investigated methods is lacked.
> As described above, the authors can provide more thorough discussion on which method is strong at some circumstances and which factor affects to the performance.
> Analysis on which method is strong at some circumstances and which factor affects to the performance is lacked.
> Discussion on local search strategies is weak.
> Moreover, the authors can discuss analysis on the investigated method in terms of kernel types and local search strategies. For example, a diffusion kernel is generally able to capture meaningful relationship between two nodes compared to polynomial and sum of inverse polynomials kernels. On the contrary, the other kernels show their effectiveness in some conditions. The discussion like this should be included in this paper in order to understand the algorithms appropriately.
As we mentioned at the beginning of the response, we thank the reviewer for the suggestion and agree this can be important for potential users. We direct the reviewer to our “Overall Response” for the requested discussions in response to all the comments above that the reviewer raised.
> Basically, the experimental results are hard to interpret. Figures are too small and lines are too thin.
We thank the reviewer for this feedback. We will address this in the revised manuscript.
> The number of evaluations is too small.
We are unsure whether the reviewer meant i) the query budget or ii) the number of random trials.
If the reviewer meant i): while we agree it would be advantageous to run longer experiments, it is not uncommon in BO work to set the query budget around 100, given the assumption that evaluation is expensive. If the reviewer meant ii): we again agree that increasing the number of random trials is always beneficial; however, the BO algorithm's outperformance of the baselines is already strong and consistent with the 10 repetitions we used.
In any case, we thank the reviewer for the suggestion and provide additional experiments in which we increase both the query budget and the number of random trials; the reviewer is referred to Figs S3 and S4 in the rebuttal PDF, where the paper's main findings still stand.
> I cannot understand the experiment of team optimization. Each node represents a team, right? How many team members belong to a team? The size of team does not affect to the evaluation of team? I think a skill vector of each member is sum to one because it is drawn from the Dirichlet distribution. But if the sum of skill sets for team members exceeds 1, what happens? Is it just treated as 1 or the sum of skill values?
We thank the reviewer for the opportunity to clarify our work. We initially set a maximum number of team members $N$ and a number of “skills” $K$. The graph is then built such that each node represents a team with fewer than $N$ members (there are $2^N$ of these, hence the graph is potentially very large), and an edge is defined between teams that share a minimum number of common members (above a certain quantile of the distribution of common members across all possible pairs of teams). The objective function aims at maximising team competence by promoting members who are experts and who complement each other. Each member is indeed a discrete distribution over skills, and so is each team, where each skill component is the average of that component over the team's members, hence also defining a distribution. By defining the objective as the difference between the entropy of the team's skill distribution and the average entropy of the members' skill distributions, the objective promotes the completeness of the team's skills and the sparsity of skills among individual members. Regarding the size of the team, the reviewer is right in stating that it does not influence the objective; we will address this issue by adding a regularising term in a revised version of the manuscript.
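To make the described objective concrete, here is a hedged reconstruction in Python (function names are ours; this illustrates the stated entropy-difference objective, not the authors' exact code):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def team_objective(members):
    """members: list of per-member skill distributions (each sums to 1).
    The team's distribution is the member average; the objective rewards
    teams whose combined skills are broad (high team entropy) while each
    member is specialised (low average member entropy)."""
    k = len(members[0])
    team = [sum(m[i] for m in members) / len(members) for i in range(k)]
    return entropy(team) - sum(entropy(m) for m in members) / len(members)

specialists = [[1.0, 0.0], [0.0, 1.0]]   # complementary experts
generalists = [[0.5, 0.5], [0.5, 0.5]]   # redundant generalists
assert team_objective(specialists) > team_objective(generalists)
```

As intended, a team of complementary specialists scores higher than one of redundant generalists, and (matching the point conceded above) team size itself does not enter the objective.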
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I acknowledge that I have read your rebuttal.
After reading the rebuttal, I decided to keep my rating.
---
Reply to Comment 1.1.1:
Title: Could you please elaborate?
Comment: We appreciate the reviewer’s acknowledgment, but we’d be grateful if the reviewer could explain why they insist on a rejection recommendation.
As mentioned, we believe the reviewer’s primary concerns were 1) analysis of which method is strong in which circumstances and of the factors affecting performance, and 2) consistency of the results. We firmly believe that we addressed both points. Other concerns included the number of evaluations and clarification of the team optimization task; again, we provided thorough responses, in some cases through additional experiments.
If the reviewer feels that we have not addressed their concerns satisfactorily, we’d be grateful if they could comment on what, in their opinion, is lacking, and we will try our best to respond.
Again, we appreciate the reviewer’s help in improving the quality of our work. | Summary: The paper presents a Bayesian optimization algorithm for functions defined on the nodes of (potentially unknown) graphs. The algorithm combines local modelling via trust regions to account for the potentially unknown nature of graphs with random restart to avoid becoming stuck in local minima. Novel kernels on graphs are defined (locally) to avoid problems related to overfitting.
Strengths: This paper addresses a novel problem, the discussion is compelling and the results appear to be sound.
Weaknesses: (see questions below).
Minor point: figure 2 - equation (??).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Step 4 in algorithm 1: it is unclear how this step is to be interpreted on the first iteration of the algorithm with $v_t^*$ is not defined yet.
- Why set $Q_{\rm min} = 1$ (line 226)? Wouldn't this mean that you *only* test $v_t^*$ for that iteration (region size of $1$), which is redundant as you already (presumably) know the value of $f$ at this node?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their positive and insightful feedback and are glad the reviewer found our discussions novel, compelling and sound. Please see below for our response to the concerns the reviewer raised.
> Minor point: figure 2 - equation (??).
We thank the reviewer for pointing this out. We were referring to the mathematical definition of the central node (Line 209).
> Step 4 in algorithm 1: it is unclear how this step is to be interpreted on the first iteration of the algorithm with $v^*_t$ is not defined yet.
The reviewer is correct in pointing out this fact. This step should instead be placed after Line 8 (i.e. after “end if”). $v^*_t$ is initially set to the best point among the random initialisation points.
> Why set $Q_{\rm min} = 1$ (line 226)? Wouldn't this mean that you *only* test $v_t^*$ for that iteration (region size of $1$), which is redundant as you already (presumably) know the value of $f$ at this node?
The reviewer is correct in pointing out that when $Q=1$, we have a trivial subgraph consisting only of the node we have queried. We set $Q_{\rm min}$ to 1 because, when this happens, we know *with certainty* that the current subgraph contains no more unknown information and that a better solution definitely does not exist within it. In practice, we restart when $Q_{\rm min} = 1$ or when all nodes in the subgraph have been queried before. It is indeed possible to set $Q_{\rm min}$ to a value larger than 1, but that creates an additional hyperparameter; we set it to 1 because, as discussed, this is one of the most definite signals requiring a restart.
We will clarify this when we update the manuscript.
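A minimal sketch of this restart rule (our own naming, assuming exactly the two conditions described above):

```python
def should_restart(subgraph_nodes, queried_nodes, q_min=1):
    """Restart when the trust subgraph has shrunk to q_min (here 1: only
    the already-evaluated centre remains, so no unknown node is left), or
    when every node in the current subgraph has been queried before."""
    return (len(subgraph_nodes) <= q_min
            or all(v in queried_nodes for v in subgraph_nodes))

assert should_restart({7}, {7})              # trivial subgraph -> restart
assert should_restart({1, 2, 3}, {1, 2, 3})  # exhausted subgraph -> restart
assert not should_restart({1, 2, 3}, {1})    # unexplored nodes remain
```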
---
Rebuttal Comment 1.1:
Comment: I would like to thank the reviewer for their response. I have no further questions and will be keeping my evaluation unchanged.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for engaging in the rebuttal and for their time and effort in helping to improve our manuscript. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback. We are glad that they acknowledged the novelty (all reviewers), clarity of writing (4pXS, nGA9, GnE3), soundness (nGA9, sVur, GnE3, 4pXS), and extensiveness (GnE3, tMg5) and strength (sVur, nGA9) of experiments. We address common concerns below.
## Necessity of BO (4pXS, GnE3)
We emphasise that the problems studied in the Experiments section indeed represent or imitate setups where BO is suitable: while they may not be expensive to evaluate *per se*, they each imitate an expensive black-box function in real life – note that **experimenting on cheap benchmarks as a proxy for expensive, real-life tasks is done in virtually every BO paper** (e.g. the synthetic benchmarks and the Contamination/Pest control problems in COMBO [1], the most closely related work, are all cheap if not instant to evaluate), except that here we also design these tasks and provide solutions *ourselves* because of the novel problem setup. For example:
- Identifying patient zero imitates real-life contact tracing. If executed in real life, each function evaluation requires procedures like interviews about the individuals’ travel history and people they were in contact with. This is expensive and potentially disruptive, and given limited resources, the use of a query-efficient method like BO is justified.
- Centrality maximisation & identifying influential social-network nodes mirror a common task in online advertising: identifying influential users without access to the full social-network information (which would be near-impossible to obtain given the number of users). Real-life social media platforms often limit how much one may interact with them through pay-per-use APIs or hard limits (e.g. an upper limit on views). In either case, there is a strong reason to identify the influential users in the most query-efficient manner.
## Discussions of relative strengths of different methods and different kernels of BayesOptG (nGA9, tMg5)
Many factors may affect the algorithm’s relative performance, like smoothness and noisiness of the objective functions, graph sizes, number of local minima and whether isotropy holds true (i.e., function variation in all directions is similar). We discuss each algorithm considered below (the descriptions of the baseline are provided in Appendix A.2). For ease of comparison, we also include Fig S5 in the PDF to show the methods’ rank vs the number of evaluations aggregated across all experiments.
- Random Search is simple but typically weak for larger graphs, except for very rough/noisy functions (like Ackley) or when the variation in function values is generally small.
- DFS and BFS are relatively weak as they consider graph topology information only but not the node information (on which the objective function is defined) and can be sensitive to initialisation.
- Local Search is, on average, the strongest baseline, and it does particularly well on smoother functions with fewer local minima (local search gets stuck at local minima and requires random restarting; having few local minima reduces such occurrences). Its strength is well-documented: for example, local search on smaller-scale benchmarks in neural architecture search is competitive against state-of-the-art search algorithms [2].
- From Fig S5, BayesOptG proposed by us with *any kernel choice* outperforms baselines, but some performance differences due to the kernel choices are bound to occur. We included many possible kernel choices to inform the readers of the performance impact and encourage them to select the most suitable kernel depending on knowledge about the objective function, if any. The importance of kernel choices is well-known for *any* GP-based technique rather than being unique to our method: given that mean functions are typically set to a constant, the kernel completely determines the GP’s behaviour, its modelling effectiveness and, by extension, the effectiveness of any derived technique (e.g. BO).
As is the case for all GP-based methods, the performance is stronger when the underlying assumptions of the kernel match the actual objective function. For example, diffusion kernels work well for patient zero and team optimisation (Figs 7 & 8), as the underlying generative functions for both problems are indeed smooth (in fact, the SIR model in disease propagation is heavily connected to diffusion processes).
Diffusion without ARD further enforces isotropy, assuming the diffusion coefficient in all directions is the same, and thus typically underperforms except for team optimisation, where the generated graph is well structured (see Fig S7 in the PDF), and for Ackley (Fig 6 ab), which is known to be isotropic and symmetric. We recommend it only if we know that the underlying function satisfies its rather stringent assumptions.
The SumInverse and DiffARD kernels are generally better, as they offer more flexibility in learning from the data; *we recommend using one of these as the default if we have no prior knowledge*. The difference is that DiffARD has more learnable parameters and thus is even more flexible, but it may also overfit initially when the number of observations is small – such a phenomenon can be seen in Fig S5 of the rebuttal PDF, where SumInverse outperforms initially. The final kernel, Polynomial, performs worse than SumInverse; we hypothesise a possible reason: from Table 1, a single epsilon is added to the polynomial before inversion. Even if the epsilon is chosen to be a small constant, it may still dominate the polynomial term (i.e. the learnt signal) when the latter happens to be small, which may increase optimization difficulty (whereas for SumInverse, an epsilon is added to each polynomial term). We will thoroughly investigate this in the revised manuscript.
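To illustrate the hypothesised epsilon effect numerically: the exact kernel forms are in the paper's Table 1, which is not reproduced here, so the spectral-weight expressions below are assumed stand-ins for demonstration only.

```python
def polynomial_weight(lam, eps=1e-2, p=2):
    """Spectral weight with a single epsilon added before inversion
    (assumed form of the Polynomial kernel)."""
    return 1.0 / (lam ** p + eps)

def sum_inverse_weight(lam, eps=1e-2, p=2):
    """Spectral weight with an epsilon added inside each inverted term
    (assumed form of the SumInverse kernel)."""
    return sum(1.0 / (lam ** k + eps) for k in range(1, p + 1))

# For small eigenvalues, lam**p (the learnt signal) is tiny, so the single
# epsilon dominates the Polynomial weight, which saturates near 1/eps and
# flattens the distinction between different eigenvalues.
w1, w2 = polynomial_weight(1e-3), polynomial_weight(1e-4)
s1, s2 = sum_inverse_weight(1e-3), sum_inverse_weight(1e-4)
assert abs(w1 - w2) / w2 < abs(s1 - s2) / s2  # Polynomial discriminates less
```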
[1] Oh et al. (2019). Combinatorial Bayesian optimization using the graph cartesian product. NeurIPS.
[2] White et al. Local search is state of the art for NAS benchmarks. 7th ICML Workshop on Automated Machine Learning.
Pdf: /pdf/0f3ec42bc1ffec313b4796b1e4d43c4b63f0e20f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper extends the use of Bayesian optimization based methods for optimization of the functions over the nodes of graph this algorithm is dubbed as BayesOptG. Paper mainly focuses on following three aspects:
1. Kernel design - Authors introduce suitable kernels, diffusion kernel, polynomial kernel and sum of inverse polynomial kernel, which can be used with the Gaussian process (GP) surrogate.
2. BayesOptG - Algorithm uses GP surrogate with suitable kernels to model the function on graph followed by the use of standard acquisition functions to determine the next query, the algorithm adapts the idea of trust regions not only to scale to larger graphs but also to handle imperfect knowledge about the graphs.
3. Empirical Results - The algorithm is empirically evaluated both on synthetic and real world datasets and is compared to the other baselines such as random search, local search, depth first search, breadth first search.
Strengths: 1. This work extends the BO based methods to the functions defined over the nodes of graph.
2. The algorithm introduced is scalable to larger and general graphs, further it can also be used in the cases with imperfect knowledge of the graph.
3. Clear introduction of the problem setup and where exactly the BO is being applied.
4. Experimental evaluation of the algorithm both on synthetic and Real world datasets.
Weaknesses: Though the paper provides good experimental evaluations for the algorithm it lacks on the theoretical results. Further, little to no intuition is provided why the given choices of kernel functions are the right ones and the guarantees on the semi positive definiteness of the covariance matrix is also missing. The experiments section can further be improved by comparing the results with other algorithms such as spectral bandits[1], and GRUB [2].
[1] Valko, Michal, et al. "Spectral bandits for smooth graph functions." International Conference on Machine Learning. PMLR, 2014.
[2] Thaker, Parth, et al. "Maximizing and Satisficing in Multi-armed Bandits with Graph Information." Advances in Neural Information Processing Systems 35 (2022): 2019-2032.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions -
1. Can you please provide a bit of background on why the kernels mentioned in the paper would result in semi positive definite covariance matrices?
2. Why are the kernels chosen the right choice and what is the intuition behind the choice? Can the Matern kernel introduced in [3] be used?
Suggestions -
1. Check the captions of Figure 2 and caption of figure 3 is hard to follow.
2. Few acronyms were introduced after they were used in prior sections. ex. BA and WS in section 5.1.
3. Section A2 introduction BFS and DFS is probably jumbled.
[3] Borovitskiy, Viacheslav, et al. "Matérn Gaussian processes on graphs." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and positive feedback. We are glad the reviewer commented positively on our method's strength and experimental results. Please see below for our response to the reviewer’s concerns.
> little to no intuition is provided why the given choices of kernel functions are the right ones
> Why are the kernels chosen the right choice and what is the intuition behind the choice?
We thank the reviewer for the suggestions on adding these discussions and insights, which we agree would be useful for prospective users of our algorithm. Please see the [Overall Response: Discussions of relative strengths of different methods and different kernels of BayesOptG](https://openreview.net/forum?id=UuNd9A6noD&noteId=FsjM9qNZ9q) for discussions on this point.
> Can the Matern kernel introduced in [3] be used?
We thank the reviewer for the suggestion. We can indeed use the Matérn kernel in our algorithm. We provide experimental results for the centrality tasks with the Matérn kernel with smoothness parameter $2.5$ in Figs S1 and S2 in the rebuttal PDF.
> the guarantees on the semi positive definiteness of the covariance matrix is also missing.
> Can you please provide a bit of background on why the kernels mentioned in the paper would result in semi positive definite covariance matrices?
Please see the positive semidefiniteness proof below, which will be included in the updated manuscript. In our formulation in Eq. 1, any map $r : \mathbb{R} \rightarrow (0, +\infty)$ defines a valid covariance kernel.
Indeed,
$\forall \mathbf{X} \subset \mathcal{V},\; k(\mathbf{X}, \mathbf{X}) = \sum_{i=1}^{\tilde{n}} r^{-1}(\lambda_i)\,\mathbf{u}_i[\mathbf{X}]\,\mathbf{u}_i[\mathbf{X}]^{\top}$,
where
$\mathbf{u}_i[\mathbf{X}] = [u_i[x_1], u_i[x_2], \ldots, u_i[x_l]]^{\top}$ with $l = |\mathbf{X}|$.
Each matrix $\mathbf{u}_i[\mathbf{X}]\,\mathbf{u}_i[\mathbf{X}]^{\top}$ is symmetric positive semidefinite, being the outer product of a vector with itself:
$\forall \mathbf{x} \in \mathbb{R}^{l},\; \mathbf{x}^{\top} \mathbf{u}_i[\mathbf{X}]\,\mathbf{u}_i[\mathbf{X}]^{\top}\mathbf{x} = \|\mathbf{u}_i[\mathbf{X}]^{\top}\mathbf{x}\|_{2}^{2} \geq 0$. As a result, our covariance matrix is symmetric positive semidefinite, being a weighted sum of symmetric positive semidefinite matrices with non-negative coefficients.
The kernels we presented in this paper all correspond to a positive $r$; hence they are positive semidefinite.
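As a quick numeric illustration of this argument (a toy check of our own, not the paper's code; the eigenvectors and spectral weights below are random stand-ins):

```python
import random

# Sanity check: a kernel matrix of the form K = sum_i w_i * u_i u_i^T,
# with weights w_i = 1/r(lambda_i) >= 0, is positive semidefinite because
# x^T K x = sum_i w_i * (u_i^T x)^2 >= 0 for every x.
random.seed(0)
n = 5
vecs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
weights = [0.5, 1.2, 3.0]  # stand-ins for 1/r(lambda_i), all non-negative

K = [[sum(w * u[a] * u[b] for w, u in zip(weights, vecs))
      for b in range(n)] for a in range(n)]

for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(n)]
    quad = sum(x[a] * K[a][b] * x[b] for a in range(n) for b in range(n))
    assert quad >= -1e-9  # PSD up to floating-point error
```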
> The experiments section can further be improved by comparing the results with other algorithms such as spectral bandits [1], and GRUB [2].
We thank the reviewer for this feedback. We agree that the settings in the spectral bandits and GRUB papers share similarities with ours; however, there are significant differences. First, both methods assume that the graph information is fully known and therefore do not involve graph (topology) exploration, whereas our approach allows for exploring a potentially unknown graph. Second, both methods are global, in the sense that spectral bandits require the spectral decomposition of the graph Laplacian, while GRUB requires computing its inverse; both operations are prohibitive for large graphs. In comparison, our method can handle large graphs (as shown in the additional experiments in Fig S3 in the PDF) because the graph topology can be explored on the fly.
Despite these differences, we can compare the performance of these methods to our algorithm for small and known graphs, which we will endeavour to include in the updated manuscript.
> Check the captions of Figure 2 and caption of figure 3 is hard to follow.
> Few acronyms were introduced after they were used in prior sections. ex. BA and WS in section 5.1.
> Section A2 introduction BFS and DFS is probably jumbled.
We thank the reviewer for spotting these issues, which will be rectified in an updated manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. The reviewer has no further questions.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their positive evaluation and for helping to improve our manuscript! | null | null | null | null | null | null |
Polynomially Over-Parameterized Convolutional Neural Networks Contain Structured Strong Winning Lottery Tickets | Accept (poster) | Summary: The central theme of this paper is the Strong Lottery Ticket Hypothesis (SLTH), which conjectures that « randomly initialised networks contain sparse subnetworks […] that perform well without any training » (lines 26-30). Pruning large networks in order to obtain those « strong lottery tickets » would be a way to limit the computational costs associated with those over-parameterised networks and facilitate their deployment in real-world applications. The SLTH has already been theoretically verified for various classes of networks, but never while restraining the way the pruning must be done in order to obtain those « strong lottery tickets ». The authors aim to theoretically verify that « strong lottery tickets » can be obtained via structural pruning of a certain class of convolutional neural networks. The way they do it is by proving that « networks in a wide class of CNNs are likely to contain structured subnetworks that approximate any sufficiently smaller CNN in the class » (lines 77-79), based on the assumption that there exists a « sufficiently smaller CNN in the class » that performs well.
Strengths: 1. The manipulated concepts, such as the LTH, the SLTH, or the RSSP are well-explained.
2. A clear understanding of the many proofs can be obtained by reading Section 4.
Weaknesses: _Major_
1. The proof ideas are not that novel. Many cited works from the Related Work section use the same technique in order to obtain similar results, and Theorems 3 and 5 are the only contributions of the paper (lines 81-85; see Question 3).
2. The goal is to demonstrate that winning tickets from the SLTH can be obtained by removing channels from a deep and wide CNN. The underlying assumption is that there exists a CNN subject to the same restrictions as discussed in the article AND significantly smaller than an over-parameterised one that is in fact capable of obtaining good results on given tasks. Considering those limitations (no classification head, that is, no fully-connected layer; no pooling layer; no batch norm layer; ReLU activations only; etc.), such an assumption is not trivial and calls the whole process into question.
3. The justification for the approach is that, for over-parameterised networks, « [their] associated computational cost limits both the progress of [the neural networks in general] and their deployment in real-world applications. » (lines 19-20). Therefore, the justification is on a practical level. But considering the size of the kernels needed for the network to be pruned (with $n_i \geq Cd^{12}c^6_i ...$, line 139), from a practical point of view it is virtually impossible to encode such a network even before considering pruning it (which would then be a tedious process and would imperatively need to be done efficiently, which is not discussed in the article).
4. The end-goal is to obtain a relatively small network, but taking $n_i \geq Cd^{12}c^6_i ...$ (line 139) and plugging it into $m_i = \sqrt{n_i / (C_1 \log \frac{1}{\epsilon})}$ (line 150) leads to $m_i \geq d^6c_i^3...$, which does not seem to produce a network that would be smaller.
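The arithmetic in point 4 can be replayed with a short script (the constants $C$, $C_1$, $\epsilon$ and the values of $d$, $c$ below are arbitrary placeholders for illustration):

```python
import math

# Hypothetical values; C, C1, eps are arbitrary constants for illustration
d, c = 3, 4
C, C1, eps = 1.0, 1.0, 0.5

n = C * d**12 * c**6                         # width lower bound from line 139
m = math.sqrt(n / (C1 * math.log(1 / eps)))  # pruned width from line 150

# Up to constants, m grows like d^6 * c^3 -- still polynomial in d and c
assert abs(m / (d**6 * c**3) - math.sqrt(C / (C1 * math.log(1 / eps)))) < 1e-9
```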
_Typos / Minor_
1. The way the citations are made is inconsistent throughout the paper; there are both appropriate (« The constructions proposed by Burkholz [2022a,b] », line 290) and inappropriate (« both in the past Reed [1993] and in the present [...]. », line 21) inline citation formats.
2. Line 58 : or block sparsity Siswanto [2021] (Figure 1$\underline{c}$).
3. Some statements are not supported; for example : « Much of the success of deep learning techniques relies on extreme over-parameterisation. » (line 17).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Exactly which lemmas are original and which have been taken from the literature?
2. Is there a reason why you specifically chose having filters (or channels) being removed as structured pruning technique?
3. The third contribution states that « Additionally, our pruning scheme focuses on filter pruning, which, like neuron pruning, allows for a direct reduction of the size of the original CNN » (lines 86-87) (which is not a contribution, but an argument assessing the relevance of the work). It is also stated that « through unstructured pruning [...] one can usually reach 95% sparsity without accuracy loss » (lines 54-55). One could argue that by reaching 95% sparsity, the size of the original network is likely to diminish too. From that perspective, why would channel-pruning still be more desirable?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been discussed properly, apart from what has been raised in the Weaknesses part of the review.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their time and valuable comments. We hope to clarify some issues brought up by the reviewer.
Weaknesses (major)
1. While our proof is based on a well-known probabilistic technique (the second moment method), our analysis is entirely novel, except for the Young’s Convolution Inequality (underlying Lemma 20) and the technical tools in SM A.1 (Lemmas 14 to 17). All other lemmas and corollaries are new, including those in the main paper. Most notably, our work explores the MRSSP in the presence of dependencies, which is of interest on its own, as appreciated by other reviewers. Despite being similar in spirit to previous works [Borst et al. 2022; Becchetti et al. 2022], the need to overcome stochastic dependencies required new proof strategies in all lemmas related to the SSP. Moreover, we remark that the average behaviour of the multi-dimensional SSP is an active field of research and that attempts to tackle it with more sophisticated probabilistic tools have, so far, not been fruitful (see, e.g., section 3.1 of Becchetti et al. [2022]). As for the pruning section, despite apparent similarities with previous works (e.g., Pensia et al. [2020], da Cunha et al. [2022], and Burkholz [2022]) in applying subset sum results to convolutions, the change from unstructured to structured pruning requires a different formalisation of the problem. Those arguments are closely tied to the structures obtained when pruning the network, and we had to devise a new one to obtain a setup that allowed for selecting and summing subsets of entire filters.
2. We highlight that convolutional layers are quite expressive and generalise many other transformations, such as dense layers, batch normalisation and average pooling. Our restriction to ReLU activations is in line with most works in the field and, as discussed with Reviewer ryBt, there are reasons to believe it can be relaxed.
We also believe that an alternative (but equivalent) framing of our results may make the relevance of our contribution clearer to the reviewer. For **any** given random CNN, we wish to know how well it can perform after pruning (without any training). Our results imply that, under suitable hypotheses, it can perform almost as well as (if not better than) **the best** CNN among all those that are sufficiently smaller than the random CNN and whose weights can be tuned. In particular, such a class includes traditionally trained networks.
3. (and 4) Please refer to points 1 and 2 of the general answer.
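As a minimal illustration of the first claim in point 2 above (a toy sketch of ours, not the paper's construction): a convolution whose kernel spans the whole input reproduces a dense layer exactly, so convolutional layers do generalise dense transformations.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense layer y = W x, with x viewed as a 1-channel "image" of length d
d, d_out = 6, 4
x = rng.normal(size=d)
W = rng.normal(size=(d_out, d))

dense_out = W @ x

# Full-size "valid" convolution: filter i is W[i] reversed (convolution flips
# its kernel, so reversing recovers the correlation/dot product), and each
# filter yields exactly one output value
conv_out = np.array(
    [np.convolve(x, W[i][::-1], mode="valid")[0] for i in range(d_out)]
)

assert np.allclose(dense_out, conv_out)
```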
Weaknesses (minor)
We thank the reviewer for the notes about minor issues. We have addressed all of them.
Questions
1. Please refer to the answer to weakness number 1.
2. (and 3) We attempted to convey the reasoning behind it in our introduction (mainly in the paragraph starting at line 51), but we acknowledge that this aspect can indeed be counter-intuitive. Concentrating on dense layers for the sake of simplicity, the key point is that making an $n \times n$ matrix 95% sparse does not translate directly into a speed-up of the associated matrix multiplication. For example, if one naively performs the multiplication directly, there are no computational gains whatsoever (the BLAS pipeline is oblivious to the sparsity of the matrix). Of course there are nuances to the matter, as we discuss in the introduction. On the other hand, if the sparsity in the matrix is conveniently structured, even if the degree of sparsity is much lower, the sparsity directly translates into computational performance, potentially offsetting the difference in sparsity. E.g., if 50% of the rows/columns are filled with zeros, rather than storing those rows/columns, we can simply drop them to obtain a (still dense) smaller matrix. Doing so requires no memory overhead, directly translating into a 50% reduction in the memory footprint, while also speeding up the associated matrix multiplication regardless of software/hardware intricacies.
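The row/column-dropping argument above can be made concrete with a small sketch (shapes and sparsity patterns chosen arbitrarily): structured zeros let us materialize a genuinely smaller dense matrix that computes the same surviving outputs, while unstructured zeros leave the stored shape, and hence the dense matmul cost, untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
x = rng.normal(size=n)

# Unstructured: zero out ~50% of individual entries -- the stored shape
# (and the cost of a dense matmul) is unchanged
W_unstructured = W * (rng.random((n, n)) > 0.5)
assert W_unstructured.shape == (n, n)

# Structured: zero out 50% of the rows, then simply drop them
keep = np.arange(n) % 2 == 0
W_structured = np.where(keep[:, None], W, 0.0)
W_small = W_structured[keep]   # dense (n/2) x n matrix: half memory, half FLOPs

y_full = W_structured @ x
y_small = W_small @ x
assert np.allclose(y_small, y_full[keep])  # identical surviving outputs
assert np.allclose(y_full[~keep], 0.0)     # dropped rows contributed nothing
```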
We hope that we have responded satisfactorily to the reviewer's comments and that our response will lead them to appreciate our work more. We look forward to answering any questions the reviewer may still have.
---
Rebuttal Comment 1.1:
Comment: Thank you for the insightful comments. I might originally have viewed the submission from the wrong angle, being too concerned by the practical issues of the presented bounds, maybe because of the justifications raised by the authors for their work. The "real-world application" therefore is not an argument to support the obtained results, but to support research in this particular avenue. It is true that convolutional layers indeed can represent other types of layers (e.g. dense layer, average pooling). Also, it is now clearer to me what results are original contributions.
I'm still having a concern with the following:
Weaknesses (major)
2 - As for the use of ReLU activation function, the authors mentioned in their rebuttal that "[..] as discussed with Reviewer ryBt, there are reasons to believe it can be relaxed." The discussion with reviewer ryBt highlights that the authors $\underline{\textup{think}}$ there are reasons to believe it can be relaxed, not what those reasons are.
Question
When it comes to a dense layer represented by a convolutional layer, removing a filter in the latter is similar to removing a neuron in the former. Is there any work concerning this type of structured pruning for fully-connected layers/networks? If so, how do these results align with the ones from the manuscript?
Nevertheless, I'm satisfied with the answers given by the authors to my concerns and the discussion they had/are having with the other reviewers. I'm modifying my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for their time and valuable questions.
Regarding the use of other activation functions, we apologize for our loose language. We meant that even though the arguments present in [Burkholz 2022] do not translate to our setting, it seemed to us that a deeper rework of those ideas could suffice. More precisely, the proof by Burkholz revolves around the identity $x = \phi(x) - \phi(-x)$ and the fact that the SLTH arguments used for the dense case are robust to small perturbations of this identity. In our case, one can see a loose analogy between the identity above and Lemma 11, which we also believe to be robust to small perturbations of the activation function. However, this is just a first impression and we have not formally explored this possibility.
Regarding the reviewer's question, to the best of our knowledge, only the work by Malach et al. [2020] addresses neuron pruning of dense networks in the context of SLTH. We have discussed this with reviewer TQCH and report the main points here for your convenience.
Under reasonable assumptions, Malach et al. proved that pruning neurons in a fully connected feed-forward neural network of two layers (one hidden layer) is equivalent to training the random feature model. Important constraints on the expressiveness of the latter model are known: Yehudai and Shamir [2019] proved that approximating even a single ReLU neuron under the random features model requires either exponentially large weights or an exponentially wide hidden layer.
Our work investigates alternative forms of structured sparsity instead of pure neuron pruning.
Namely, we use a type of block-sparsity to induce the structure described in our definition of *channel-blocked mask* (definition 2) before performing the filter removals (the CNN generalization of neuron removal).
For neuron/filter pruning in general, several papers have investigated the relationship between compression rate and accuracy loss before, during, or after training. A paper that provides state-of-the-art results as well as many references to recent related literature is [Tukan et al. 2022]. To the best of our knowledge, there is no generalization of training-by-pruning methods such as edge pop-up [Ramanujan et al. 2020] to the structured case.
We hope we have provided a satisfactory answer to the reviewer's remaining points, and we are happy to answer any further questions the reviewer may have.
Burkholz [2022]: Most activation functions can win the lottery without excessive depth, Rebekka Burkholz, NeurIPS 2022.
Malach et al. [2020]: Proving the Lottery Ticket Hypothesis: Pruning is All You Need, Eran Malach, Gilad Yehudai, Shai Shalev-shwartz, and Ohad Shamir, ICML 2020.
Yehudai and Shamir [2019]: On the Power and Limitations of Random Features for Understanding Neural Networks, Gilad Yehudai and Ohad Shamir, NeurIPS 2019.
Tukan et al. [2022]: Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions, Murad Tukan, Loay Mualem, and Alaa Maalouf, NeurIPS 2022.
Ramanujan et al. [2020]: What’s Hidden in a Randomly Weighted Neural Network?, Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari, CVPR 2020. | Summary: This paper tries to discuss the Strong Lottery Ticket Hypothesis (SLTH) in the context of structured pruning in convolutional neural networks (CNNs). The SLTH suggests that randomly-initialized neural networks possess subnetworks that can perform well without any training. While unstructured pruning has been extensively studied, the paper explores the less explored area of structured pruning, which can offer significant computational and memory efficiency gains. The authors overcome the limitations of existing mathematical tools by leveraging recent advancements in the multidimensional generalization of the Random Subset-Sum Problem. By introducing a variant that accounts for stochastic dependencies in structured pruning, the paper proves the existence of structured subnetworks in random CNNs that can approximate smaller networks. This theoretical contribution extends the understanding of the role of overparameterization in deep learning and opens new avenues for further research on the SLTH for structured pruning.
Strengths: - The paper addresses an important research gap by focusing on structured pruning in the context of the Strong Lottery Ticket Hypothesis. This expands the understanding of lottery ticket mechanisms beyond unstructured pruning.
- The authors make use of recent advancements in the multidimensional generalization of the Random Subset-Sum Problem, demonstrating their ability to leverage and adapt existing mathematical tools to overcome limitations in formal analyses of the SLTH.
Weaknesses: The SLTH theory is important in both the pruning and neural network architecture search domains. However, I'm not familiar with the specific analysis methods proposed in this paper, so I can't provide professional advice.
- Line 118, "Preliminaries and contribution" -> "Preliminaries"?
- In this paper, if the convolution kernel $K$ relies on the normality assumption, as mentioned in Line 131, I suggest the authors cite the Convolution Weight Distribution Assumption discussed in [1].
- If possible, I recommend that the author use more intuitive language when writing Section 4 and the introduction. This would help researchers in the pruning and neural network architecture search fields to better understand the contributions of this paper.
[1] Huang Z, Shao W, Wang X, et al. Rethinking the pruning criteria for convolutional neural network[J]. Advances in Neural Information Processing Systems (NeurIPS), 2021, 34: 16305-16318.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the proposed analysis have been discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the comments.
Weaknesses
1. We can change the section’s title but we remark that in Section 3 we state our contribution more formally and, to do so, we need to introduce some preliminaries.
2. We thank the reviewer for the reference. We will add it to the text.
3. We will try to improve it.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for your reply and I have also read the comments of other reviewers. I feel like maintaining my score. | Summary: The paper investigates a structured version of strong lottery ticket hypothesis (SLTH) for CNNs, which is important for its computational efficiency against the standard unstructured counterpart. For this purpose, they prove a variant of subset-sum lemma for their situation. Using their subset-sum lemma and splitting lemma for ReLU activations, SLTH for CNNs is proven by combining block-pattern pruning and neuron pruning under the assumption that the network size is polynomially larger than a target network.
Strengths: 1. SLTH by structured pruning is an important topic in terms of applicability and computational efficiency of SLTH. The authors are the first to prove the structured version of SLTH for CNNs.
2. They extend the subset-sum lemma, which is a key lemma for SLTH with logarithmic over-parameterization as shown in Pensia et al. 2020, for their purpose.
Weaknesses: Presentation:
1. Although the overview of their proof is straightforward, the presented proof does not provide any intuition for the evaluation of the required network size.
2. No experiments are provided. Since the structured version of SLTH itself has not yet been observed empirically even for MLPs or CNNs, some experiments would be helpful for supporting their claim.
3. They claimed that `To the best of our knowledge, this is the first result around the SLTH for structured pruning of neural networks of any kind` in the Introduction, but the previous work by Malach et al. 2020 already provided results on the equivalence between SLTH by neuron pruning and random feature NNs in the 2-layer case.
Results:
4. Even though they use a variant of the subset-sum lemma following the approach by Pensia et al. 2020 for logarithmic over-parameterization, the required width in this paper is still polynomially larger than the target one. On this point, I wonder whether, if one allows such a polynomially large width, the structured version of SLTH might be easily proven by simply extending the previous discussion in Malach et al. or Pensia et al., at least for MLPs. I also wonder whether their claim is still valid in practice given such polynomial over-parameterization. Nevertheless, the first attempt to prove the structured version of SLTH is worthwhile in itself. I hope the authors would emphasise which technique in their proof is important for either structured pruning or handling CNNs.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Can the proof also work for the easier case of MLPs instead of CNNs? Is the variant of subset-sum lemma in this paper also needed even for MLPs? If so, providing some intuition for MLP case may be helpful to understand the structured SLTH. If not, discussing why your version of subset-sum lemma is needed for CNNs would be helpful for readers.
2. Is the structured version of SLTH with polynomial over-parameterization really practical? I wonder if we can find such a structured subnetwork in randomly initialized CNNs (or MLPs) in a similar way to Ramanujan et al. 2020, where the unstructured version of SLTH is actually observed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful to the reviewer for their time and comments. We address all main concerns.
Weaknesses
1. We apologise, but we are not sure we understood the issue raised by the reviewer. The polynomial overparameterization comes from recent results on the multidimensional random subset sum problem for independent input random vectors ([Borst et al. 2022; Becchetti et al. 2022]), which we extended to the case of dependent inputs (Corollary 19 in SM A) and used in Lemma 12. We remark that Corollary 19 deals with roughly $d^{6}$ input vectors of dimension $d$, while in Lemma 12 we have vectors of dimension $\approx d^2$: that is why we get a bound of the form $n \ge d^{12} \dots$
2. Please refer to point 3 of the general answer.
3. We thank the reviewer for spotting this mistake. We rephrased the sentence in the abstract to specify that we tackle networks of arbitrary depth, and in the main body to credit Malach et al.
4. Please refer to the first point of our global reply.
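A compact restatement of the bookkeeping behind point 1 above (our paraphrase; constants and the channel factor $c^6$ suppressed): if approximating targets in dimension $k$ requires on the order of $k^6$ random input vectors, then applying this with $k \approx d^2$ gives

```latex
n \;\gtrsim\; \left(d^{2}\right)^{6} \;=\; d^{12}.
```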
Questions
1. We directly approached the CNN case to obtain a more general result. Nonetheless, the structure used in our argument does not produce particularly meaningful results for the MLP case, where one would ideally be able to prove the SLTH for pure neuron pruning. We are currently working on this. Regarding the subset-sum lemma, formulating any type of structured pruning as an instance of the subset-sum problem would require a multidimensional version of the problem. Yet, the specific variant will depend on the target structure.
2. Empirical studies with a structured version of edge pop-up [Ramanujan et al., 2020] are a natural and very interesting direction for future work. We also highlight that we provide worst-case bounds and that we do not believe they are optimal. So, much like what we see with unstructured SLTH algorithms, we expect structured versions of them to perform better than the theoretical worst-case guarantees.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: W1: Sorry for the confusion, but my concern was about the proof sketch in the main paper, not the one in the Appendix. I read the proof sketch in the main paper but cannot find any intuition on why the degree of the polynomial order is obtained as $d^{12} c^6 \log(\cdots)$, i.e., the intuition behind the degrees $12$ or $6$. I also cannot find any attempt to provide intuition for the existence proof rather than technical details.
W4: Unfortunately, both the paper and the rebuttals contain no experimental support for their claim. Since the theoretical result only covers the case of polynomial over-parameterization, this strengthens my doubt about the practical validity of the structured version of SLTH. For example, when we want to approximate a CNN with $c=64$ channels by structured pruning, the proof requires pruning a CNN more than $c^6=68719476736$ times larger.
So my understanding of the paper is: while the theoretical result is totally meaningless in practice for now, no experimental support is provided either. Taking these into consideration, unfortunately, I leaned towards the reject side because accepting this work may lead the community to a wrong understanding of the structured SLTH.
---
Reply to Comment 1.1.1:
Comment: We stress that the SLTH was not known to hold for structured pruning and that, since previous techniques lead to exponential bounds, it was possible that it did not. Our work, thus, proves that something that could have been impossible (exponential) is, in fact, possible (polynomial).
Since the influential contribution of Malach et al. [2020] also provides unfeasible bounds¹, we assume that the reviewer considers our result to be "meaningless in practice" due to the current lack of a structured version of *edge pop-up*. That is, when Malach et al. [2020] was published, methods for finding (unstructured) strong tickets in practice already existed. On the other hand, we offer a theoretical analysis of a problem for which no algorithm has been developed yet. However, we do not see why this type of theoretical work would not be worthwhile. In fact, we are excited by the possibility that our results motivate research on algorithms for finding structured lottery tickets.
Regarding the possibility that our work could mislead the community, we politely disagree. We claim to prove a version of the SLTH for structured pruning and proceed to present the proof. We do not claim to provide any algorithm for finding strong tickets, let alone an efficient one. We fail to see how this could be misleading. It is also unclear to us how results with a specific type of algorithm would help support our claims. In particular, our theorem applies to essentially all possible parameterizations of the target architecture, including those that cannot be learned by gradient-based methods.
We sincerely believe that our result, a polynomial bound on a structured version of the SLTH, is of interest to the theoretical community at NeurIPS.
[¹]: For example, applying the bounds from Malach et al. [2020] to a single layer of Lenet-5 (a very small architecture) within error of at most 1% would require increasing the width of that layer by a factor much larger than $\frac{84^4}{0.01^2} = 497871360000$. (To be clear, we are highly appreciative of the work by Malach et al. and we are not trying to diminish its value. We are merely highlighting that the reviewer's argument is not specific to our work.) | Summary: The authors show that structured Strong Lottery tickets are contained within convolutional neural networks.
In order to construct this proof, the authors propose the Multidimensional Random Subset Sum lemma, which approximates a target vector using a set of suitably scaled, normally distributed random vectors.
This result builds on the Subset Sum approximation by Lueker, which is commonly used in proving the Strong Lottery Ticket hypothesis for standard architectures.
The authors further leverage this lemma to approximate a target function with a structurally pruned CNN that is twice as deep and wider by a polynomial factor.
Overall this paper provides an important existence proof for strong lottery tickets in structurally pruned CNNs as a first step in trying to understand structured pruning theoretically.
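The classical one-dimensional guarantee by Lueker that this line of work builds on can be illustrated with a brute-force toy experiment (arbitrary $n$ and $\epsilon$ chosen by us; the paper's lemma is a multidimensional analogue that additionally handles dependencies): with $n = O(\log(1/\epsilon))$ uniform samples, some subset sum lands within $\epsilon$ of any target in $[-1, 1]$ with high probability.

```python
import itertools
import random

random.seed(0)

# Toy 1-D random subset sum: n i.i.d. uniform(-1, 1) samples
n, eps = 14, 0.05
samples = [random.uniform(-1, 1) for _ in range(n)]

# Brute-force all 2^n subset sums (fine for small n)
subset_sums = [sum(c) for r in range(n + 1)
               for c in itertools.combinations(samples, r)]

def best_error(z):
    """Distance from target z to the closest achievable subset sum."""
    return min(abs(s - z) for s in subset_sums)

# Every target in [-1, 1] is approximated to within eps
for z in (-0.7, 0.0, 0.33, 0.9):
    assert best_error(z) < eps
```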
Strengths: 1. The authors have, for the first time, proven the existence of SLTs in structurally pruned networks, giving hope that a computationally efficient and trainable SLT might exist that can be leveraged for efficiency on existing hardware (contrary to current empirical results in the literature, which are worse than unstructured pruning).
2. The authors also extend the result of Lueker for random vectors.
3. The paper is clearly written.
Weaknesses: 1. The bounds provided by the authors still require twice the depth in order to approximate a target function. Burkholz et al have proposed a way to use only L + 1 depth to approximate a target function. Is this still possible with structural pruning in CNNs?
2. As mentioned by the authors, the proposed construction looks at structured pruning where entire channels are pruned away. Would block sparsity (e.g., N:M sparsity) allow for a better Subset Sum bound, i.e., are there specific sparsity patterns which allow sparser constructions? This might guide us towards identifying layerwise sparsity ratios for training sparse networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The construction requires a layer of width $n c_0$ such that after pruning the width is $m c_0$. These factors would effectively determine the overall sparsity a network can achieve. Would this translate to structured pruning results we observe in practice?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide their construction for ReLU networks. However, Burkholz et al. [1] show existence results for different activation functions.
Do the proposed existence results also transfer to other activations?
[1] Burkholz, Rebekka. "Most activation functions can win the lottery without excessive depth." NeurIPS 2022
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for all the observations pointed out in the review. We try to address the weaknesses, the question and the limitation.
Weaknesses
1. We believe it is possible, but so far we could not check all the necessary adaptations. Such improvement requires further investigation.
2. In principle, yes. The size of the “structure” to be pruned (be it a filter, a “block” or something else) determines the dimensionality of the Random Subset-Sum Problem to be solved, so, in general, one can expect better bounds from finer-grained pruning. Of course, in the limit, one would obtain the best bounds by considering “blocks” with a single weight, recovering the unstructured pruning case. Thus, one must keep in mind the trade-off we discuss in the paragraph from lines 51-59. Overall, we have not considered $N:M$-sparsity, but we think this is a promising direction for future work.
Questions
1. We do not think so. To recapitulate, our results involve transforming a dense convolutional layer with $nc$ filters to obtain a (still dense) one with $mc$ filters, where $m = O(\sqrt{n/\log 1/\epsilon})$, so that the final network is potentially much smaller than the original one (see, e.g., lines 147-150). However, we expect improved bounds on $n$ to bring $m$ closer to $n$.
Limitations
1. We do believe the result holds for a more general class of activation functions: however, we do not see a trivial way to prove this and further investigation is required. We remark that, as discussed in the limitations section, we do not think that the technique used by Burkholz [2022] can be immediately adapted to our case because we use the fact that the ReLU function gets rid of negative terms.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: I thank the authors for the clarifications. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. We are pleased that reviewers appreciated our contribution. Only one of the evaluations leaned towards the negative, and it seemed to stem from misunderstandings that we hope to have resolved.
Several reviews raised similar questions, to which we reply below. We have also modified the text to further clarify those points.
1. **Our bounds are not optimal**: Directly applying techniques from previous works on the unstructured SLTH to the structured setting would yield exponential bounds (even for the MLP case). Thus, our contribution extends the field qualitatively by showing that polynomial overparameterization suffices. Much like the original polynomial bounds for the unstructured case by Malach et al. [2020] were improved by subsequent works [Orseau et al., 2020; Pensia et al., 2020], we expect our contribution to pave the way for further research and consequent sharper bounds.
2. **We offer a theoretical contribution**: While our work is motivated by the practical concerns underlying structured sparsity, we do not propose a practical algorithm for finding strong lottery tickets. In line with many works related to ours (e.g., Pensia et al. [2020], da Cunha et al. [2022], and Burkholz [2022a,b]), we instead offer theoretical insights about the problem. For example, we establish a baseline for the potential performance of SLTH algorithms. Such a contribution opens the door for valuable future directions, including the extension of SLTH algorithms such as edge pop-up to the structured setting. Finally, our analysis and that of previous works investigate the worst possible case for those algorithms. Thus, they do not set an upper-bound on their practical performance on common architectures.
3. **Experiments are unfeasible for this work**: We agree that experiments would be interesting to see. However, performing meaningful experiments by directly solving the subset-sum problem with solvers such as Gurobi (as in, e.g., [da Cunha et al. 2022] or [Pensia et al. 2020]) would be prohibitively expensive, as the multidimensional version of the SSP is much harder to solve directly. An alternative would be to extend an algorithm such as edge pop-up [Ramanujan et al., 2020] to our setting (structured pruning). This is a natural and very interesting direction for future work, but it is worth a whole paper on its own. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper extends analytical work on the strong lottery ticket hypothesis from unstructured to structured pruning, and shows its applicability to Convolutional Neural Networks.
Strengths: I'm not an expert in this domain, but the contribution looks strong to me.
Clarity:
The introduction is extensive and appropriately places the current paper into the existing research landscape. I could follow well, even though I'm not directly in this domain. In the analysis section, proofs and other statements are appropriately accompanied by explanatory statements, which makes following them easier.
Quality:
The paper is of high quality. It advances a subfield that is actively being researched.
Originality:
To my knowledge, the work is original. It extends various previous work in a non-trivial manner. The authors place the contributions appropriately among previous results and reference related work.
Significance:
I really cannot accurately estimate the significance of this work. The overall question around whether structured sparsity in CNNs can induce strong lottery tickets is certainly significant. I'm not sure how much the used structures in this work (for example, every other layer being a 1x1 layer) limit the significance of the results.
Weaknesses: - I would have liked to see at least a small experimental section, as the discussed topic lends itself well to empirical evaluation.
- As I said above, just reading the paper gives me very little insight into the meaning and significance of these results, which could be discussed more extensively.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In Lemma 8, you deal with variables X_1 through X_{k+j}, but then only condition on Z_1 through Z_k in the text, and Z_1 through Z_n in the statements. It's entirely possible that I misunderstand, but is all of this correct? For example, n is never even introduced as a number.
- I don't quite understand why the n-channel-blocked-masks are needed. In definition 2 you mention that this defines the structured sparsity of the networks, but doesn't the removal of entire filters by itself already constitute structured sparsity?
- Related to above: In Lemma 12 you describe the network including the channel-blocked-masks as the "pruned" network, but from this we still need to remove filters to obtain the smaller network that appropriately approximates K. So which action exactly constitutes the pruning?
- In Lemma 11, you state that V * S-tilde "contains only non-negative edges going from each input channel t [...]". How can that be? V has arbitrary real numbers, and S-tilde is a binary mask. Maybe I've missed something here, I could see that statement be made when including the ReLU in some way, but I'm a bit lost the way it is now.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have appropriately addressed limitations in Section 5. Questions on societal impact do not apply here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work and the valuable comments. We try to address all concerns.
Weaknesses
1. Please refer to point 3 of the general answer.
2. We will expand the text in the main body of our document to further discuss the significance of our results by recovering some of the contextualisation from previous works.
Questions
1. We apologise for the typo. There, $n = i + k$ and all the shown probabilities are conditional on the outcomes of $X_1, \dots, X_n$. In the supplementary material, the indices in the conditional event are correct (but we forgot to explicitly state $n = i + k$). We updated the main text and the SM accordingly.
2. The removal of entire filters does constitute structured sparsity. However, besides choosing which of the (random) filters to keep, we also need to ensure that those are effectively combined in a way that allows us to leverage our MRSSP results to approximate target filters. We achieve this through the $n$-channel-blocked structure. Put loosely, while filter pruning allows us to select subsets (of filters), we need other structures to ensure the subset is “summed” in a suitable way. Of course, the desired operation has many more nuances than a simple sum.
3. (and 4) We agree that the text around Lemma 11 is unclear. There are two ways of looking at the pruning in Lemma 11. The mask can be seen as a channel-blocked mask that is applied only to some filters, i.e., we keep filter $k$ if its $t = \lceil \frac{k}{2n} \rceil $-th channel is non-negative and $2t-1 = \lceil \frac{k}{n} \rceil $ or it is non-positive and $2t = \lceil \frac{k}{n} \rceil $. Formally, one can again use a combination of two masks: the first one is a $2n$-channel blocked mask, the second one removes filters. We clarified this explicitly in the statement and proof.
The pruning in Lemma 12 consists of the actions described in Lemma 11 – the application of the channel blocked mask and the filter removal based on entries’ signs – plus another step of filter removal given by the MRSSP result. We realise that this might cause some confusion, and we added some sentences that explain that all these three parts define the final pruning. When looking at Lemma 11 and 12 together, one can see a three-stage pruning process: application of $2n$-channel blocked masks (1st stage), filter removal according to entries' signs (2nd stage), filter removal given by the MRSS result (3rd stage). Since masking is both associative and commutative, one can look at the whole process as follows: First, we apply channel blocked masks (1st stage). Second, we remove filters (combination of the 2nd and the 3rd stage).
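Since this argument relies on elementwise masking being associative and commutative, a small illustrative sketch (toy shapes and stage masks of our own, not the paper's actual construction) can confirm that staged masks compose into a single combined mask:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4, 3, 3))  # toy conv layer: 8 filters, 4 channels, 3x3 kernels

# Hypothetical binary stage masks (broadcast over the kernel dimensions):
channel_block = np.zeros((8, 4, 1, 1))
channel_block[:, ::2] = 1.0        # stage 1: a channel-blocked mask
keep_filters = np.zeros((8, 1, 1, 1))
keep_filters[:5] = 1.0             # stages 2-3: filter removal

# Applying the masks in stages equals pruning once with the combined mask:
staged = keep_filters * (channel_block * W)
combined = (keep_filters * channel_block) * W
```

Here `staged` and `combined` are identical arrays, which is why the three-stage process can equivalently be read as one channel-blocked masking followed by one round of filter removal.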
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the considered responses, and for the adjustments.
As I said, I have limited knowledge, but in my opinion, the paper should be accepted. | null | null | null | null | null | null |
Language Is Not All You Need: Aligning Perception with Language Models | Accept (poster) | Summary: This paper proposes a pretrained multimodal large language model, which can achieve impressive zero-shot performance on many downstream tasks.
The experimental results verify knowledge transfer from language to multimodal settings and vice versa.
However, compared to the very similar models FROMAGe and BLIP-2, this model's structure differs little. Its main advantage is that it uses more multimodal data.
Its performance on visual question answering also seems lower than that of BLIP-2.
Strengths: The motivation is clear and the writing is easy to follow.
The multimodal COT reasoning ability is analyzed which is interesting.
The pretrained multimodal large language model achieve impressive zero-shot performance on many downstream tasks.
Weaknesses: This paper presents many experiments to demonstrate how good the model is, but it does not compare against GPT-4 or BLIP-2 to show how much room for improvement remains. Directly adopting a good image-captioning model to convert images into diverse text and feeding it into a large language model (GPT-3.5 or LLaMA) would also achieve good results (see the multimodal instruction-following data construction in the paper ``Visual Instruction Tuning''). The technical novelty of this model over that common approach should be elaborated.
This paper claims to propose a multi-modal large language model that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot), but the experimental baselines always lack some key models that are relatively strong in the field, such as in the experiment results (Table 6) shown in Section 3.8: language tasks. They only compared against a language model trained on their own corpus. In fact, the multimodal version of KOSMOS-1 sees more text data and image descriptions, making the comparison a little unfair. On the one hand, the comparison is not sufficient because there is no comparison with other large language models. What are the overall advantages and core contributions of KOSMOS-1 compared to LLMs (such as LLaMA, BLOOM, OPT) and previous multimodal models (BLIP-2, Flamingo)?
In addition, Sec 3.6 Multimodal Chain-of-Thought Prompting analyzes the multimodal CoT reasoning ability, but it is only evaluated on the classification task SST-2. There is no demonstration of corresponding examples and no evaluation on complex reasoning tasks (e.g., ScienceQA), so it is difficult to judge whether KOSMOS-1 really has multimodal CoT reasoning ability, especially because prior work has shown that chain-of-thought ability may only emerge when a large language model exceeds 10B parameters. The conclusion drawn by this paper is therefore hard to find convincing.
Finally, Sec 2.4:Training Objective does not specify in detail how the visual representation model and language model are trained. If you train separately, how do you perform multimodal alignment and fusion? From the description, what is the difference between this whole training process and Oscar's training method? If following your description to train KOSMOS-1, will the model have the multi-modal COT reasoning ability?
In sec.2.1, should the input textual token embedding and visual token embedding be pre-aligned and then fed to the decoder? Or does it depend entirely on the learning ability of the following decoder?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and suggestions on our paper.
**About compared to LLM**
> They only compared the language model trained on their own corpus. In fact, the multimodal version of KOSMOS-1 sees more text data and image description, leading to the comparison a little unfair. On the one hand, the comparison is not sufficient because there is no comparison with other large language models.
For fair comparisons, we train the language model baseline with the same text corpora and training setup. They have the same number of training tokens.
Other LLMs use different architectures, data, hyperparameters, steps, etc., resulting in too many incomparable factors. Therefore, we specifically align the settings for a fair comparison, allowing for more scientific comparisons.
**About core contribution compared to LLMs and previous multimodal models**
> What are the overall advantages and core contribution of KOSMOS-1 compared to LLMs (such as LLaMA, BLOOM, OPT) and previous multimodal models (BLIP-2, Flamingo)?
Compared to LLMs:
1) We are a Multimodal Large Language Model (MLLM), supporting multimodal input and cross-modality transfer.
2) Our model brings more possibilities and application scenarios, such as embodied AI.
3) Moreover, our training data is different from LLMs, which use pure text corpora. We employ datasets like text corpora, image-caption pairs, and interleaved image-text data.
Compared to previous multimodal models:
1) We train our model from scratch, enabling it to learn visual commonsense through cross-modality transfer.
2) We can perform in-context learning, while BLIP-2 lacks the ability to perform in-context learning (as stated in their Limitation section)[1].
3) Although our model size is relatively small, the results are impressive, indicating great potential.
4) In terms of the methodology, our model implementation is remarkably simple, following a minimalist approach.
5) In terms of evaluation, we explore OCR-free language understanding, text instruction tuning, Raven IQ tests, customized image classifiers, and cross-modality transfer, which have not been analyzed in previous works.
[1] Li, Junnan, et al. "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models." arXiv preprint arXiv:2301.12597 (2023).
**No evaluation on complex reasoning tasks (e.g., ScienceQA)**
> There is no demonstration of corresponding examples and no evaluation on complex reasoning tasks (e.g., ScienceQA), so it is difficult to judge whether this KOSMOS-1 really has the multimodal COT reasoning ability. Because prior work has shown that the ability of the chain of thought may emerge when the number of parameters of large language model is greater than 10B.
We evaluate KOSMOS-1 on the ScienceQA benchmark under zero-shot and few-shot settings with and without explanations. The table below presents the results on the ScienceQA test image set. We find that chain-of-thought prompting via introducing explanations performs slightly better than standard prompting (Few-shot (k=1) in the table), which demonstrates the chain-of-thought ability of KOSMOS-1 on complex reasoning tasks. We believe that chain-of-thought prompting can bring greater improvements as the model size increases and the underlying language capability improves.
| Setting | ScienceQA |
|-----------|----------------|
| Zero-shot | 56.1 |
| Few-shot (k=1) | 56.8 |
| Few-shot (k=1) w/ explanations | 57.2 |
**About training objective**
> Finally, Sec 2.4:Training Objective does not specify in detail how the visual representation model and language model are trained. If you train separately, how do you perform multimodal alignment and fusion? From the description, what is the difference between this whole training process and Oscar’s training method?
We train the language decoder and visual encoder together via the next-token prediction task on text corpora, image-text pairs and interleaved image-text data.
Given the input containing images and texts, we obtain image embeddings and text embeddings via the visual encoder and lookup table of word embeddings. Then we feed these image and text embeddings into the Transformer-based decoder. The self-attention module in the decoder can fuse the images and texts.
For the training on image-text pairs and interleaved data, the model is trained to generate captions/texts based on images, which helps the model to learn its alignment.
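As an illustrative sketch of this input pipeline (toy dimensions and helper names of our own, not the actual KOSMOS-1 code), image features and text token embeddings are simply interleaved into one sequence before the causal decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                # shared embedding width (toy value)
word_emb = rng.normal(size=(100, d))  # toy lookup table for a 100-token vocabulary

def embed_sequence(segments):
    """Interleave image and text segments into one decoder input.
    Image segments arrive as pre-computed encoder features; text
    segments are looked up in the word-embedding table."""
    rows = []
    for kind, payload in segments:
        if kind == "image":
            rows.append(payload)            # (num_patches, d) encoder features
        else:
            rows.append(word_emb[payload])  # (num_tokens, d) embedding lookup
    return np.concatenate(rows, axis=0)

image_feats = rng.normal(size=(4, d))       # e.g. 4 patch embeddings from the ViT
seq = embed_sequence([("image", image_feats), ("text", np.array([7, 42, 3]))])
# The decoder then predicts the next token at every position of `seq`
# under a causal (left-to-right) self-attention mask.
```

The point of the sketch is that, once embedded, image and text positions are treated uniformly by the decoder, so alignment emerges from the single next-token objective rather than from a separate alignment loss.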
Compared to Oscar:
1) The main component of KOSMOS-1 is a Transformer-based decoder, which is unidirectional, processing the input sequence only from left to right. While Oscar is a bidirectional Transformer-based encoder.
2) KOSMOS-1 shows zero-shot and in-context learning capabilities on vision-language tasks, Oscar is mainly used for fine-tuning on specific vision-language tasks.
3) KOSMOS-1 is trained using the next-token prediction task, while Oscar is trained using masked token prediction since it is a bidirectional model.
**About CoT reasoning ability**
> If following your description to train KOSMOS-1, will the model have the multi-modal COT reasoning ability?
CoT (or step-by-step generation) ability can be inherited from the capabilities of text language models, and we will also conduct more experiments on a larger scale.
**About pre-align textual token embedding and visual token embedding**
> In sec.2.1, should the input textual token embedding and visual token embedding be pre-aligned and then feed to the decoder? Or is it entirely dependent on the learning ability of the following decoder.
It is not necessary to pre-align the input textual token embeddings and visual token embeddings before feeding them to the decoder.
We feed the visual and textual embeddings into the decoder, and train the model to generate the next token depending on the previous context (images and texts). The training helps the model to align image and text representations.
---
Rebuttal Comment 1.1:
Title: Official comments from BRko
Comment: I have read the authors' rebuttal as well as the other reviewers' comments. First, the authors have addressed my comments as well as the other reviewers' comments. Second, the proposed method does have merits: it can use different data (text corpora, image-text pairs, and interleaved image-text data) and achieves good model performance.
Therefore, I am increasing my rating from Borderline Accept to Weak Accept. | Summary: This work introduces KOSMOS-1, a multimodal large language model trained on large-scale text corpora, image-caption pairs, and interleaved image-text data. KOSMOS-1 can perform classic captioning/VQA tasks in a zero-shot or few-shot in-context prompting fashion. It can also perform OCR from visual document, and answer questions based on a webpage. It consists of a pre-trained CLIP-L/14 model for image representation and MAGNETO for language decoding. KOSMOS achieves competitive results on a wide suite of vision-language tasks with 1.6B parameters compared against Flamingo-9B. Furthermore, it introduces a Raven’s IQ benchmark to evaluate nonverbal intelligence.
Strengths: The paper presents comprehensive technical details about the proposed system, including pre-trained models, loss design, and datasets. It also shows impressive qualitative sample usage of the proposed system on a wide range of tasks in supplemental, including multi-turn multimodal dialog, image classification with description, few-shot multimodal in-context learning, multimodal chain-of-thought prompting, and reasoning with webpage screenshots.
Weaknesses: I do not find the proposed system particularly novel, because its architecture design, training losses, and pre-training datasets are widely adopted in prior art such as Flamingo [1] and BLIP-2 [2]. Furthermore, BLIP-2 [2] achieves stronger zero-shot performance on VQAv2 (Table 2b) 65% compared to KOSMOS-1's 51.0%, even though it is trained on much fewer data (LAION114M) with much fewer tunable parameters.
The small-scale Raven IQ test with 50 samples is undoubtedly challenging and interesting, however, it is hard to say KOSMOS-1 is better than random chance because it is only marginally better by 5.3%. Could authors discuss what are some promising directions to improve KOSMOS-1 on this benchmark?
Language-only instruction-following paradigm shows worse performance on COCO but leads to better results on Flickr30K, VQAv2, and VizWiz (Table 7). Could the author explain why language-only instruction-tuning might lead to better or worse results on different vision-language tasks?
To demonstrate that the KOSMOS-1 has better commonsense reasoning capabilities, the authors show language-only zero-shot evaluation results on RelativeSize, MemoryColor, and ColorTerms even though KOSMOS-1 is a multimodal model. Is there a reason not to report the multimodal zero-shot performance?
The authors do not promise a model release.
[1] Flamingo: a Visual Language Model for Few-Shot Learning. 2022.
[2] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How can one improve KOSMOS-1 for better performance on Raven-IQ test?
- Why is the zero-shot VQAv2 performance of KOSMOS-1 worse than BLIP-2?
- Could authors report KOSMOS-1 performance on commonsense reasoning tasks while using multimodal inputs?
- What is the intuition behind language-only instruction-tuning? Why could this benefit vision-language tasks?
- Will the code and model be released to the public?
The writing can be further improved. For example:
L24: “it is still struggling to natively use LLMs for multimodal data, such as image, and audio.” -> “it still struggles to natively use LLMs for multimodal data, such as image and audio.”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors discuss limitation in appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and suggestions on our paper.
**Improving Raven IQ test**
> The small-scale Raven IQ test with 50 samples is undoubtedly challenging and interesting, however, it is hard to say KOSMOS-1 is better than random chance because it is only marginally better by 5.3%. Could authors discuss what are some promising directions to improve KOSMOS-1 on this benchmark?
The performance gain looks marginal because the baseline is relatively low at 16.7%. The relative gain of KOSMOS-1 over the random chance is 31.8%, which looks more substantial.
These are a few promising directions to explore for improving KOSMOS-1 on Raven IQ test:
1) **Increasing model capacity**: Scaling up the model size can potentially improve performance. This allows the model to learn more complex representations and better capture the relationships between different elements in the Raven IQ test task.
2) **Training data**: Enhancing the training data by incorporating more diverse and complex Raven-style problems can lead to better generalization.
3) **More fine-grained image representations**: Raven IQ test often requires a more fine-grained representation of images, which allows the model to understand the global relationships between multiple input images.
**About Language-only Instruction-tuning**
> Language-only instruction-following paradigm shows worse performance on COCO but leads to better results on Flickr30K, VQAv2, and VizWiz (Table 7). Could the author explain why language-only instruction-tuning might lead to better or worse results on different vision-language tasks?
For image captioning tasks, we conduct some statistics on the COCO and Flickr30k datasets, and we find that the Flickr30k dataset has a higher average number of caption tokens. The model tends to generate longer outputs after instruction tuning, which might lead to some improvement in the Flickr30k dataset, but some decline in the COCO dataset.
VQAv2 and VizWiz are question-answering tasks, where the model needs to understand both the question and the image content to provide an accurate answer. Language-only instruction-tuning tends to obtain better results on these tasks as the text-based instructions can help the model better follow instructions and improve its ability to answer questions.
**Question 1**
> Why is the zero-shot VQAv2 performance of KOSMOS-1 worse than BLIP-2?
The zero-shot VQAv2 performance of KOSMOS-1 is not worse than BLIP-2.
As shown in Table 2 of the BLIP-2 paper, the VQAv2 (test-dev) performance of the KOSMOS-1 model (1.3B text decoder + 300M image encoder) is 51.0 in the zero-shot setting, while the performance of BLIP-2 (2.7B text decoder + 300M image encoder) is 49.7. Given that the text decoder of KOSMOS-1 is smaller than that of BLIP-2, this demonstrates the superior performance of KOSMOS-1 in this context.
BLIP-2 achieves a VQAv2 score of 65.0 in the zero-shot setting when using the CLIP ViT-g (1.2B) image encoder and FlanT5XXL (11B) text decoder. We think that the significant improvement in performance can be attributed to the substantially larger model size.
Additionally, the training dataset may have played a role in the performance disparity. BLIP-2 included the COCO dataset in their training, which is the source of images for VQAv2. Consequently, the model can learn in-domain knowledge from COCO dataset, further enhancing its performance on VQAv2 tasks.
**Question 2**
> Could authors report KOSMOS-1 performance on commonsense reasoning tasks while using multimodal inputs?
> To demonstrate that the KOSMOS-1 has better commonsense reasoning capabilities, the authors show language-only zero-shot evaluation results on RelativeSize, MemoryColor, and ColorTerms even though KOSMOS-1 is a multimodal model. Is there a reason not to report the multimodal zero-shot performance?
We have conducted experiments on commonsense reasoning tasks while using multimodal inputs. As the original dataset does not provide images of objects, we obtained relevant images using Google Image Search. These images were then prepended to the text descriptions and fed into the model. The results of these experiments are presented in the following table:
| Model | RelativeSize | MemoryColor | ColorTerms |
|------------------------------------------|--------------|-------------|------------|
| KOSMOS-1 | 94.2 | 76.1 | 73.1 |
| KOSMOS-1 + Google searched image | 80.7 | 82.6 | 86.5 |
The table reveals that KOSMOS-1's performance on object color tasks (MemoryColor and ColorTerms) is improved when using multimodal inputs.
We found that Google-searched images are relatively noisy, which can affect the predictions. As a result, the performance on the RelativeSize task may decline. In this case, it is better to use text directly for prediction, which also demonstrates the importance of our model's cross-modality transfer.
**Question 3**
> What is the intuition behind language-only instruction-tuning? Why could this benefit vision-language tasks?
The motivation of language-only instruction-tuning is to explore cross-modal transfer in KOSMOS-1, i.e., whether the model can use information learned in one modality (e.g., language) to enhance its performance in another modality (e.g., vision). As shown in Table 7, language-only instruction-tuning helps the model follow instructions. It helps the model to better understand the questions and generate answers in an appropriate format.
**Question 4**
> Will the code and model be released to the public?
Yes, we will indeed release our model and code to the public.
---
Rebuttal Comment 1.1:
Title: The authors have addressed my concerns
Comment: I will retain my positive rating as the authors promised to release the model and dataset and have addressed all of my concerns. | Summary: This paper presents a vision/language model trained on text and interleaved image/text data. It uses a ViT to encode the image into tokens and a transformer predict output tokens from the previous ones. It is trained on web-scale data and then fine-tuned on NLP instruction tuning data.
Strengths: - Despite a lot of interest in instruction following for V/L models, training them from scratch is a somewhat less well explored area so I think contributions to this direction are useful.
- Many evaluations that help understand how effective the model is and what it has learned in different settings
- Showing NLP instruction tuning benefits multi-modal tasks is very interesting, although I would have liked to see the ablation consider tasks with more complex instructions than just captioning/VQA-style tasks. I also found the chain-of-thought and visual commonsense results interesting.
Weaknesses: - The model appears to be limited in its ability to take advantage of few-shot data, which I think should be one of the main theoretical advantages of supporting interleaved V/L data and pre-training on such data. Captioning seems to be the only task where we really see a clear benefit.
- The instruction following shown in the paper is a bit simple, just for tasks like captioning, QA, or classification, which are pretty close to the pre-training tasks. While it is interesting that this data helps the model, it has not really been shown that the model can follow instructions for very different tasks in the same way an LLM can.
- Related work is poorly discussed. There is no related work section, and as far as I can see there is not much discussion of similar works, such as other V/L foundation models or models like BLIP that adapt a pre-trained LLM, in other parts of the paper or appendix either.
Overall I feel like the model and many experiments have scientific value despite the relatively smaller scale compared to other recent models, but I also feel that the lack of related-work discussion is a non-trivial issue, and I am not really sure how to balance those two points. I recommend accepting the paper for now since I still think the paper would be a benefit to the conference.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Will the model or the interleaved data corpus be released?
Is there any evidence the model "forgets" any of its multi-modal knowledge during instruction fine-tuning, which only has NLP data? Is this a concern?
Did the authors consider initializing the model somehow with a pre-trained LLM? With so much work following that approach recently it would be interesting to hear why everything was trained from scratch.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and suggestions on our paper.
**About related work**
> Related work is poorly discussed. There is no related work section, and as far I can see there is not much discussion about similar works such as other V/L foundation models or models like BLIP that adapt a pre-trained LLM in other parts of the paper or appendix either.
We will incorporate a discussion of related work in the final version.
**Question 1**
> Will the model or the interleaved data corpus be released?
We will indeed release our model for the benefit of the research community.
We will also release the interleaved data corpus, subject to copyright restrictions and compliance requirements.
**Question 2**
> Is there any evidence the model “forgets” any of its multi-modal knowledge during instruction fine-tuning, which only has NLP data? Is this a concern?
An ablation study was conducted to assess the impact of language-only instruction tuning on the model's performance (Table 7). The results indicate that language-only instruction tuning leads to improved performance on VL tasks.
**Question 3**
>Did the authors consider initializing the model somehow with a pre-trained LLM? With so much work following that approach recently it would be interesting to hear why everything was trained from scratch.
Here are some key factors to this decision:
1) **Cross-modality transfer**: One of our goals is to achieve cross-modality transfer and enable language models to learn multimodal commonsense. By training from scratch, the model can be designed specifically to handle such transfers.
2) **Generalization through large-scale training**: Large-scale training tends to result in better generalization, as the model is exposed to more diverse data. Starting from scratch allows for the incorporation of more diverse data during the training process.
3) **Native multimodal LLM**: The model is a native multimodal LLM, meaning it has been designed from the ground up with multimodal learning in mind.
4) **Results drop in small-size models**: We conducted a comprehensive comparison between training from scratch and initializing with a pre-trained LLM (the text decoder contains 110 million parameters, the image encoder is initialized from CLIP ViT-B/16, and training runs for 300k steps). For a fair comparison, we trained these two settings with the same number of training steps. The results, as shown in the table below, indicate a decline in performance when initializing with a small-sized pre-trained LLM.
| Model | COCO (CIDEr) | Flickr30k (CIDEr) | VQAv2 (VQA-acc) |
|---------- |------- |--------------|-------|
| From scratch | 75.2 | 59.1 | 37.0 |
| Cont. training | 67.6 | 51.3 | 35.1 |
Further research by PaLM-E [1] corroborated these findings (Figure 6), revealing that concatenating two pre-trained models could lead to a substantial performance degradation on NLP tasks, when the model size is small. To avoid this issue, we chose to train the model from scratch.
[1] Driess, Danny, et al. "PaLM-E: An embodied multimodal language model." arXiv preprint arXiv:2303.03378 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. | Summary: This paper proposes KOSMOS-1, a Multimodal Large Language Model (MLLM) that can take image embeddings as additional input to an auto-regressive LLM. Trained on both text-only and image-text web-crawled data, KOSMOS-1 can get strong performance across a variety of tasks including language understanding, perception-language tasks, and vision tasks, demonstrating the utility of cross-modal knowledge transfer.
Strengths: 1. Although there are no specific novelties in terms of loss and module design, the introduced model is simple, unified, versatile, and effective.
2. The experiments cover both language-focused, vision-language, and vision-focused evaluations and demonstrate the strong performance of the proposed models.
3. The study about whether one modality can benefit the other (cross-modal transfer) is interesting and showcases some insightful observations.
Weaknesses: 1. Comparison with recent MLLM works either in related work or experiments was missing. For example, MiniGPT4, LLaVA, etc.
2. Although in the ablation about cross-modal transfer, the authors show that Language-Only Instruction Tuning can help several visual-language tasks. However, that experiment is only about language instruction tuning data. I wonder if the general Text Corpora (not just language instruction tuning data) in pre-training help general vision-language downstream tasks?
3. Can Visual Instruction Tuning data (LLAVA's or MiniGPT4's) help NLP tasks?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussed in Supp.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments.
**Question 1**
> Comparison with recent MLLM works either in related work or experiments was missing. For example, MiniGPT4, LLaVA, etc.
We will add the comparison with recent MLLM works. The table below presents the comparison with MiniGPT4 and LLaVA on zero-shot image captioning (COCO and Flickr30k) and visual question answering (VQAv2). Since MiniGPT4 and LLaVA do not report results on these benchmarks, we test their released model on them ourselves. We use the MiniGPT4 version 'Vicuna-7B' and the LLaVA version 'LLaVA-Lightning-MPT-7B-preview'.
| Model | Model Size | COCO (CIDEr) | Flickr30k (CIDEr) | VQAv2 (VQA-acc) |
|---------- |------- |------- |--------------|-------|
| KOSMOS-1 | 1.6B | 84.7 | 67.1 | 51.0 |
| MiniGPT4 | 7.3B | 86.3 | 54.4 | 28.9 |
| LLaVA | 7.3B | 74.3 | 45.9 | 34.9 |
Experimental results show that KOSMOS-1 outperforms both MiniGPT4 and LLaVA on the Flickr30k and VQAv2 tasks. Our model also achieves competitive performance on COCO captioning.
**Question 2**
> Although in ablation about cross-modal transfer, authors show that Language-Only Instruction Tuning can help several visual-language tasks. However, that experiment is only about language instruction tuning data. I wonder if the general Text Corpora (not just language instruction tuning data) in pre-training help general vision-language downstream tasks?
We conducted an ablation study on the general text corpora in a smaller setting, where the text decoder contains 300 million parameters, and the image encoder is initialized from CLIP ViT-B/16. The training step is 100k.
The results are presented in the accompanying table. The general text corpora improve performance on visual question answering (VQAv2), while causing a decline in the model's performance on image captioning (COCO, Flickr30k). The downstream COCO and Flickr30k image captioning data are similar to the image-caption pairs used in training (LAION-2B and COYO-700M). Adding general text corpora to training prevents the model from converging towards captioning tasks and results in a slight drop. However, training on general text corpora improves the model's ability to follow instructions, and learning from text question answering data helps the model achieve better performance on VQAv2.
| Dataset | COCO (CIDEr) | Flickr30k (CIDEr) | VQAv2 (VQA-acc) |
|------------------------------------------------------------------|------|-----------|-------|
| Text Corpora + Image-Caption Pairs + Interleaved Image-Text Data | 74.9 | 52.5 | 33.4 |
| Image-Caption Pairs + Interleaved Image-Text Data | 77.8 | 54.1 | 28.1 |
**Question 3**
> Can Visual Instruction Tuning data (LLAVA's or MiniGPT4's) help NLP tasks?
We perform instruction tuning with LLaVA’s visual instruction tuning data and evaluate the model on NLP tasks. As shown in the table, performing visual instruction tuning improves the zero-shot performance on HellaSwag, Winograd, Winogrande, PIQA and COPA. But we also observe a decline on StoryCloze and a large drop on BoolQ. Overall, introducing visual instruction tuning data does not significantly help NLP tasks, and we will explore more in the future.
In addition, we evaluate the model on vision-language tasks and find that adding visual instruction data significantly improves the zero-shot performance on Flickr30k and VQAv2.
| Model | StoryCloze | HellaSwag | Winograd | Winogrande | PIQA | BoolQ | COPA |
|-------------------|------------|-----------|----------|------------|------|-------|------|
| KOSMOS-1 | 72.1 | 50.0 | 69.8 | 54.8 | 72.9 | 56.4 | 63.0 |
| + Visual instruction tuning | 71.6 | 50.6 | 70.5 | 55.7 | 73.0 | 50.8 | 69.0 |
---
Rebuttal 2:
Comment: Thank the authors for adding the comparison and explanation. I am inclined to keep the rating. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman | Accept (poster) | Summary: The paper builds an extension to the $k$-FWL isomorphism test algorithms called $(k,t)$-FWL+. The proposed extended family subsumes the existing $k$-FWL and adds finer variants to the expressivity landscape of isomorphism testing algorithms. The $(k,t)$-FWL+ algorithm is a combination of two modified schemes to the original $k$-FWL algorithm.
The first one, called $(k,t)$-FWL, also colors $k$-tuples and draws inspiration from the tuple-based aggregation of the $k$-FWL algorithm vs. $k$-WL. The proposed algorithm changes the construction of the neighborhood multi-sets used in the encoding step of the $k$-FWL algorithm. Briefly, instead of consisting of $n$ tuples of size $k$, $(k,t)$-FWL aggregates a multi-set consisting of $n^t$ tuples of size $\sum_{i=0}^{\min (k,t)}{ k \choose i} {t \choose i}$, where each tuple in the new algorithm holds more information than in the original $k$-FWL algorithm. The authors prove that for a fixed $k$, increasing $t$ induces a strict hierarchy in which there exists a large enough $t$ for which the algorithm solves the isomorphism problem.
The second modification, called $k$-FWL+, unifies many previously introduced approaches and suggests modifying the neighborhoods of tuples to be equivariant sets of nodes instead of the entire node set of the graph, which can reduce the time complexity of the algorithm.
The main contributions of the paper are (i) introducing a novel finer hierarchy of isomorphism testing algorithms; (ii) showing that it is possible to construct a strict hierarchy at fixed memory complexity that can achieve solving the isomorphism problem; (iii) coming up with a practical neural instantiation; and (iv) achieving state of the art results on common benchmarks.
Strengths: - The paper is well written and successfully introduces the limitations of the $k$-WL hierarchy for the evaluation and design of more expressive GNNs. It is part of a body of works that refine the WL hierarchy aiming for more efficient algorithms with higher expressive power.
- The paper introduces a general framework that is also shown to unify several recent approaches and can provide a canonical design space for previous and future GNNs.
- The construction of the $(k,t)$-FWL hierarchy naturally arises from the original $k$-FWL algorithms and elegantly shifts memory complexity to runtime complexity in the fixed $k$ scenario.
- The experimental evaluation is extensive and the proposed method seems to achieve SOTA performance.
Weaknesses: - The paper does not properly discuss the "no free lunch" that hides in the proposed method. Indeed, the authors came up with a fixed space complexity hierarchy which, for large enough parameter $t$, can solve the isomorphism problem. But, this comes at the cost of time complexity - having to aggregate an exponentially growing number of tuples ($O(n^t)$). The authors deferred most of this discussion to the appendix, while I believe it should be placed in the main body of the paper.
- A valuable ablation study lacking in the paper is an experiment showing the performance of several variants of the new proposed hierarchy on the substructure counting benchmark. Since it is unclear what each variant gains in expressiveness, I believe such an evaluation will contribute to the paper's intuition and wholeness.
- Questionable results in the QM9 dataset. See the questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - A neural counterpart is suggested only for a single instance in the hierarchy. An interesting realm to explore would be to find a neural analog to the proposed hierarchy (similar to [1])
- There seems to be a mismatch in the units for the QM9 experiment between your code and the code in the $I^2$-GNN repository [2]. They seem to convert back to previously used units to be consistent with previous works while your code uses the new conversion. Can the authors please clarify this issue? To my understanding, for example, the value that should be reported for $\tilde{U}_0=U_0/0.04336414=0.5744$, where $U_0$ is the value reported in this paper.
- Can the authors report runtimes of their GNN training and how it compares to other approaches? Since their approach trades space complexity for runtime complexity, I think it is necessary to show that the method remains scalable where the authors hinted that runtimes might increase when graphs are dense.
- In the appendix, the authors say (l1094): "...we observe it some time hard to optimize the model to achieve its theoretical power by using fewer embedding". Can the authors clarify this statement?
[1] Provably Powerful Graph Neural Networks. Maron et al. (NeurIPS 2019)
[2] [$I^2$-GNN GitHub](https://github.com/GraphPKU/I2GNN/blob/master/run_qm9.py#L37)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The authors describe the limitations of the method mainly in the appendix, I think some of the discussion should be moved to the main body (see questions section)
**General comment:** I lean towards raising my score upon clarification of the experimental results. I think the proposed framework is a natural extension of the $k$-FWL family that elegantly formulates the trade-off between time and space complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgment of our contribution and constructive comments! We reply to all your concerns below.
### Weaknesses:
>1. Further discussion on $(k, t)$-FWL in the main paper.
**Answer:**
Thanks for the helpful comment! We mentioned the exponential growth in time complexity when increasing $t$ in $(k, t)$-FWL in lines 159-160. However, due to the space limitation, we could only give a more detailed discussion in the Appendix. We will provide an in-depth discussion of the "no free lunch" in the main paper in the revision.
>2. Experiment on different variant of $(k, t)$-FWL+ hierarchy.
**Answer:**
Thanks for the insightful comment! In general response point 3, we provide a further ablation study testing different $ES$ and $t$ on the substructure counting dataset. In the experiment, we observe clear differences in expressive power among the variants.
### Questions:
>1. Neural analog to the proposed hierarchy.
**Answer:**
Although we only presented the neural analog of $N^2$-FWL rather than of the full $(k, t)$-FWL+ framework, it is straightforward to obtain a neural analog for any variant of $(k, t)$-FWL+ by replacing $HASH$ with an MLP and the multiset pooling with an injective multiset function like DeepSet [1], or with summation. We will add a detailed discussion in the revision.
>2. Issue on the QM9 experiment.
**Answer:**
We checked our code and the code provided in the $I^2$-GNN GitHub again. We believe we are following the same evaluation standard as previous works, including $I^2$-GNN [2], KP-GNN [3], and NGNN [4]. For all methods, including ours, we train the model using the unconverted but normalized target value. After training, we first un-normalize it by multiplying by the standard deviation and then convert the value and report the converted one. The only difference between ours and $I^2$-GNN is that they use a slightly different constant for **HAR2EV** in the conversion: we use 27.211386246 while $I^2$-GNN uses 27.2113825435. However, the value we use is consistent with the original code in PyG [5], KP-GNN [3], and NGNN [4]. Additionally, the conversion factor for $U_0$ is HAR2EV=27.211386246, not KCALMOL2EV=0.04336414. Therefore, we believe the comparison on the QM9 dataset is fair.
>3. Practical usage of $N^2$-GNN.
**Answer:**
In general response point 2, we provide further analysis and discussion on both time and memory complexity of $(k, t)$-FWL and $N^2$-GNN.
>4. Optimization issue on the $(k, t)$-FWL+.
**Answer:**
Thanks for mentioning that! The key point here is the injectiveness of the multiset function. For example, if we use a large $t$ and a small $k$ for $(k, t)$-FWL+ to conduct a graph isomorphism test, for each $k$-tuple the hierarchical multiset $\{\{\}\}_t$ will contain an exponentially increasing number of elements. So far, we only use summation as the function to encode this hierarchical multiset. Although it is an injective function from a theoretical view, we find it works badly, with information loss, when the number of elements is too large. A similar issue can be found in over-squashing [6]. However, we believe this issue can be alleviated by more powerful injective multiset functions like DeepSet [1] or by increasing the width of the embedding.
**References**
[1] Zaheer et al., Deep Sets, NeurIPS17.
[2] Huang et al., Boosting the cycle counting power of graph neural networks with i$^2$-GNNs, ICLR23.
[3] Feng et al., How powerful are k-hop message passing graph neural network, NeurIPS22.
[4] Zhang and Li, Nested graph neural networks, NeurIPS21.
[5] Fey and Lenssen, Fast Graph Representation Learning with PyTorch Geometric, ICLR19 workshops.
[6] Topping et al., Understanding over-squashing and bottlenecks on graphs via curvature, ICLR22.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers and clarification of the experimental results.
I agree with the other reviewers on the overselling (or the 'no free lunch') of the memory complexity O(n^2).
However, I think this paper presents a general framework, whether one decides to frame it as a WL-hierarchy of a subgraph enhancement approach, that introduces flexibility in the design of algorithms and unifies existing ones.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your efforts in reviewing our work and providing insightful suggestions to improve our work. We will try our best to incorporate all of your comments in the revision. | Summary: The Weisfeiler-Leman algorithm has been established as the formal tool of choice for measuring the expressivity of GNNs. The paper tackles the problem of designing "higher-order" GNNs, which has been previously considered by several papers.
The authors propose and combine two new ideas to supplement k-WL: (1) add in further subgraph information in a space-efficient manner via a hyperparameter $t$, and (2) allow neighbor selection in folklore-k-WL via any equivariant set, instead of the standard sets such as the set of all vertices or the set of all neighbors.
Combining these two ideas yields the (k,t)-FWL+ algorithm. The version for k=2 and t=2, i.e. (2,2)-FWL+, is thoroughly evaluated on several datasets, and it yields very good experimental results, esp. on the ZINC dataset.
Strengths: 1) The (2,2)-FWL+ performs very well on several datasets. On the ZINC datasets, they report 10.6% and 40.9% improvement over the second-best methods.
2) The authors identify the problem of space limitations for higher-order GNNs and they seek to address them in their model.
Weaknesses: 1) The proposed algorithm lacks novelty: the claimed theoretical contributions are superficial. Graph Isomorphism can be obviously solved in quadratic space, if we were to allow iteration over all possible mappings between the two vertex sets. Such an algorithm might be space-efficient on paper but does not indicate any tractability since it brute-force tries all possible mappings. The proofs outlining the power and limitations of (k,t)-FWL are mild generalizations of known results, and lack any novel insights.
2) Allowing equivariant sets for vertex choice in k-FWL is a reasonable idea: however, I would expect the authors to compare several "equivariant set strategies" and demonstrate the relative gains over non-classic equivariant sets such as V(G) or N(v,G), keeping other things equal (an ablation study).
Overall, I feel that the paper is not much different from a multitude of papers on designing higher order GNNs using subgraph enhancement. Given the work that had already been done on this problem, the proposed model is incremental.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments! We reply all concerns below.
### Weaknesses:
>1. Further clarification on the novelties.
**Answer:**
We agree that the Graph Isomorphism problem can be solved by brute-force enumerating all possible mappings in quadratic space. However, our $(k, t)$-FWL does not aim to do this, but rather to provide a flexible and theoretically guaranteed way to balance expressive power and space complexity. $(k, t)$-FWL also brings a new view on how to trade off the memory and time complexity of WL-based algorithms, which was not mentioned in any previous literature (previous $k$-FWL-based algorithms' time and space complexity **need to increase simultaneously**). In our general response point 1, contribution (1), we further clarify the contribution of $(k, t)$-FWL.
>2. Further study on the $ES$.
**Answer:**
In our general response point 3, we provide further ablation study on testing different $ES$ and $t$ using the expressiveness and counting datasets.
We kindly refer the reviewer to take the time to go through the general response and let us know if there are any further concerns. We are more than glad to discuss them all!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions.
My main problem with the paper is: I am not convinced that the "quadratic space" perspective is a significant contribution which deserves to be the centerpiece of this paper. My reasons stand as follows.
The presented work (the “t”-extension and the “+”-extension) is more in the nature of feature extraction, rather than something which yields actual insights into how and why WL learns on graphs. The claimed premise that “quadratic-space WL is the right inductive bias that unifies several existing architectures” is too strong a statement which requires much better evidence than what is presented.
Theoretically, the main results (Proposition 3.1, Theorem 3.2 and Proposition 3.3) and their proofs are either trivial or comprise trivial modifications to existing literature. Yes, indeed, the paper points out one could re-parameterize Weisfeiler-Leman from $k$ to $(k,t)$ to obtain a finer hierarchy. Yet, in my opinion, the main contribution of this re-parameterization is the following: the (k,t)-FWL algorithm performs "subgraph enhancement using subgraphs of size $t$" in "quadratic space" instead of "potentially $n^t$ space". Hence, the proposed re-parameterization is more of a \emph{subgraph enhancement} GNN which works in quadratic space. It is in this context that I find the novelty lacking, given the plethora of existing "subgraph enhancement" GNNs already.
Experimentally, the most promising results are for the case k=2 and t=2. The "2-tuple" regime is also heavily researched with a fairly big crowd of similar architectures for aggregating information for 2-tuples: It is not clear at all if the empirical success of (2,2)-FWL+ can be attributed solely to the "quadratic space" inductive bias, or something else. It seems to me that the empirical success can be attributed mainly to the "+"-extension (i.e. choosing equivariant sets) rather than the "t"-extension (i.e. using small space).
It would be much better if the authors had built this paper around their solid contribution, namely the (2,2)-FWL+ algorithm, which speaks for itself with its strong performance on the ZINC dataset. Instead the authors have chosen to focus on the quadratic space perspective, which is not very insightful, at least to me, and forms an artificial framework that I find hard to justify. In its current form, the paper does not meet the NeurIPS bar in my opinion.
---
Reply to Comment 1.1.1:
Title: Follow-up discussion part 1
Comment: We thank the reviewer for the follow-up discussion. We are happy to further clarify them all.
> 1. The presented work (the “t”-extenstion and the “+”-extension) is more in the nature of feature extraction, rather than something which yields actual insights into how and why WL learns on graphs.
**Answer:**
Why WL can learn on graphs is a complex question. In $k$-WL/FWL, the isomorphism type of each $k$-tuple ($k$-subgraph) is encoded as the initial color. As $k$ increases, $k$-WL/FWL can naturally encode higher-order subgraph information about the graph and learn more fine-grained information through the message passing. However, in order to remain permutation invariant, $k$-WL/FWL **needs to encode all possible $k$-tuples**, which is why $k$-WL/FWL has the **concurrent growth of both the time and space complexity**. Our $(k, t)$-FWL is motivated exactly by this limitation as well as by the difference between $k$-WL and $k$-FWL: $k$-FWL can infer higher-order structure information through the **tuple-style** aggregation using the same space complexity as $k$-WL. The $(k, t)$-FWL is then designed to extend this **insight** to encode arbitrary higher-order structures (controlled by $t$) with fixed-length tuples (controlled by $k$). Through theoretical results (Propositions 3.1 and 3.3, Theorem 3.2), we show that $(k, t)$-FWL constructs **a full expressiveness hierarchy with any fixed-length tuples**. The $(k, t)$-FWL+ further extends the design space by allowing arbitrary equivariant sets to define neighbor tuples. Therefore, our work **at least provides the following insights** into WL: 1) the space complexity need not grow with the time complexity to define a full hierarchy, and 2) $k$-WL/FWL has a much broader design space than originally thought, obtained by extending it to $(k, t)$-FWL+. Our work is not in the nature of feature extraction, but rather a general new framework for designing arbitrarily expressive GNNs.
> 2. The claimed premise that “quadratic-space WL is the right inductive bias that unifies several existing architectures” is too strong a statement which requires much better evidence that what is presented.
**Answer:**
We never stated that “quadratic-space WL is the right inductive bias that unifies several existing architectures”. Instead, $(k, t)$-FWL is a framework that not only works in quadratic space, but can construct an expressiveness hierarchy given any fixed space complexity ($O(n^k), k \geq 2$). That is also one of the key reasons why $(k, t)$-FWL+ is very flexible and can unify several existing architectures. So far, we support our claim with a series of theoretical results (**Propositions 3.4-3.9**), which already cover most of the existing works discussed in our related work (Section 5). In fact, there might be a misunderstanding of the "$O(n^2)$ Space" in our title. We emphasize "$O(n^2)$ Space" mainly to deliver the message that the space complexity can be fixed, instead of growing with time, when defining a WL hierarchy. We did not mean that $O(n^2)$ space is the right or only inductive bias. Instead, $O(n^3)$ or $O(n^4)$ may work even better for different problems, as long as we can afford the memory cost.
> 3. Theoretically, the main results (Proposition 3.1, Theorem 3.2 and Proposition 3.3) and their proofs are either trivial or comprise trivial modifications to existing literature.
**Answer:**
Although our proof leverages some existing techniques used in previous works [1-2], we do want to highlight that the core part of our proof is how to represent high-dimensional tuples using low-dimensional tuples, which, to the best of our knowledge, does not exist in any previous work and is nontrivial to do. Furthermore, our main contributions rest on the design of the $(k, t)$-FWL+ framework with a theoretical guarantee, rather than on proposing novel proof strategies.
> 4. The main contribution of this re-parameterization is the following: the (k,t)-FWL algorithm performs "subgraph enhancement using subgraphs of size $t$" in "quadratic space" instead of "potentially $n^t$ space".
**Answer:**
To clarify, $(k, t)$-FWL can encode subgraphs of size $k+t$ with $O(n^k)$ space instead of the $O(n^{k+t})$ space required by $(k+t)$-WL. Only when $k=2$ do we have quadratic space complexity. The idea of encoding subgraphs of size $k+t$ with $O(n^k)$ space does not appear in any previous work and is a novel contribution of ours.
We kindly refer the reviewer to discussion part 2 for further clarification.
---
Reply to Comment 1.1.2:
Comment: Dear reviewer eNLB:
We would like to sincerely thank you again for your efforts and constructive comments on our work. As the discussion period ends soon, could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? We provide a detailed follow-up discussion on (1) comparing different subgraph enhancement methods (point 5), (2) additional ablation study on methods with different space complexity (point 6), and (3) further clarification of our work (points 1-4, 7). If there is no further concern, we kindly hope that the reviewer could consider re-evaluating our work. Thank you again.
Authors. | Summary: The authors propose generalizations of k-WL and k-FWL to (k, t)-FWL, and k-FWL+, which when combined, is notated as (k, t)-FWL+. They argue that (k, t)-FWL+ allow a more "flexible and fine-grained" space for exploring the graph expressiveness hierarchy, which is helpful in designing new GNN architectures. As a practical application, the authors propose Neighborhood^2-GNN which is based off a version of (2, 2)-FWL+, and outperforms state-of-the-art models on standard and synthetic tasks alike.
Strengths: The paper's strengths lie in the authors' extensions of FWL, Neighborhood^2-FWL (N^2-FWL), and its strong empirical performance with regards to baselines.
The new formulation of (k, t)-FWL+ allows researchers to easily derive new extensions of FWL algorithms (and lead to new GNN models as a result). N^2-FWL is an empirical demonstration of the power of (k, t)-FWL+, showing that even with O(n^2) space complexity, these algorithms can be highly expressive, which is supported with multiple experiments.
Weaknesses: The paper's weaknesses lie in its lack of clarity. Especially as a rather technical paper, there seems to be a lack of intuition-based explanations and motivations for a large portion of the paper, which makes it hard to follow at points. More figures may help as well.
In addition, the authors do not mention any details about time in the main paper (only in the Appendix), with a time complexity of $O(nd^{h+2})$, which they claim to be practical for real-world tasks. It would be nice to see empirical run-time, as well as memory consumption charts/tables, which would help solidify the empirical contribution.
Typo on line 181.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How was N^2-FWL motivated? Why was it chosen over other possible (k, t)-WFL+ configurations? It seems as if it is not the only configuration that gives such properties.
2. How does the runtime and memory consumption compare to node-based GNN models? And to traditional edge-based subgraph models?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations are addressed, but only in the Appendix. Most points are addressed in that section, but I would prefer to see them in the main paper.
No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and constructive comments! We have fixed the typo in the revision and reply to all other concerns below.
### Weaknesses:
>1. Lack of clarity.
**Answer:**
We apologize for the confusion regarding intuitions and motivations; our priority was to ensure the correctness and completeness of all theoretical results in the paper. To make the paper easier to follow, we also provide further explanations (e.g., the discussion at the beginning of Sections 3.1 and 3.2), examples (e.g., for many definitions), and figures (Figure 1). In general response point 1, we further clarify the main motivations and contributions of our paper. We will do our best to further improve the readability of the paper in the revision.
>2. Further analysis on the complexity and practical usage.
**Answer:**
In general response point 2, we provide further analysis and discussion on both time and memory complexity of $(k, t)$-FWL and $N^2$-GNN.
### Questions:
>1. The motivation behind the $N^2$-FWL.
**Answer:**
Thanks for your great comment! The motivation behind $N^2$-FWL is to demonstrate that $(k, t)$-FWL+ can be used to design a model that fits the complexity of real-world tasks. Since most current works evaluate models on molecular tasks, we target those as well. Success on molecular tasks depends heavily on substructure counting ability [1], and we chose $N^2$-FWL because of its provable power on substructure counting (Theorem 4.3).
>2. Practical usage of $N^2$-GNN.
**Answer:**
In general response point 2, we provide further analysis and discussion on both time and memory complexity of $(k, t)$-FWL and $N^2$-GNN.
**References**
[1] Huang et al., Boosting the cycle counting power of graph neural networks with i$^2$-GNNs, ICLR23.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. At the moment, there does not seem to be sufficient new information to increase or decrease my score, so I am inclined to keep it.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for diligently reading our responses and giving valuable suggestions. We will try our best to further improve the clarity of the paper in the future version by providing additional examples/figures. | Summary: The paper deals with supervised learning on graphs. Specifically, it proposes a more fine-grained version of the (folklore) k-WL (k-FWL) hierarchy. In turn, these variants are neuralized via standard techniques.
Specifically, the authors propose the $(k,t)$-FWL, which extends the $k$-FWL by the parameter $t$, which controls which tuples are used for refining the color of a tuple. That is, following Eq. (6), the authors consider $t$-tuples to control the cardinality of the neighborhood of a given $k$-tuple. Subsequently, building on **standard constructions** from the $k$-WL literature, they investigate how, for a fixed $k$, varying $t$ controls the expressivity in terms of distinguishing non-isomorphic graphs (Prop. 3.1 to 3.3).
They further propose a simple extension of the above, the $k$-FWL+, which uses a relaxed version of the neighborhood between a given $k$-tuple and a vertex in the underlying graph. Furthermore, they show that some recent GNN enhancements are subsumed by their framework.
The theory is complemented by an experimental study showing that the neuralized variants exhibit SOTA predictive performance on two common benchmark datasets.
Strengths: - The authors report good experimental results on established benchmark datasets. The experimental protocol looks meaningful.
Weaknesses: - The paper lacks clarity. For example, the exact definition of $Q^F_w(v)$ (line 136ff) is unclear. It is a central concept introduced in the paper, and it should be properly formalized. Moreover, the definitions in Section 3.3 are not entirely clear and make it hard to verify the formal proofs.
- The authors seem to **oversell their contribution**, e.g.,
- The discussion on the benefits of the k-FWL over the k-WL (lines 119 to 135) is folklore. There is no new insight. This also **holds for the quadratic complexity of their algorithm** (e.g., lines 60 to 67). Note that big O notation is wrongly used here. Big Omega should be appropriate here.
- Subsection 3.1 just contains a simple extension of established ideas, e.g., from [8,12].
- Results in Section 4.2 are obvious by standard results from the GNN literature
- ...
- The running time and memory complexity of the algorithms are **not formally proved**. This seems crucial for the present work. Also the authors seem to confuse big O with big Omega notation.
- Relevant related work such as https://arxiv.org/abs/2203.13913 seems missing.
Comments:
- It is not correct to say that the $k$-WL requires $\mathcal{O}(n^k)$ space; it is a **lower** bound, not an upper bound.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - What is the definition of "hierarchical multiset" (line 145)?
- Can you quantify the h in Theorem 4.1?
- What is the required h for the statement of Theorem 4.3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments! We will add corresponding references in the revision and reply to all other concerns below.
### Weaknesses:
>1. Further clarification of the definitions.
**Answer:**
We are sorry for the confusion. Briefly speaking, $Q^F_{\mathbf{w}}(\mathbf{v})$ contains all tuples obtained by picking $m=0, \ldots, \min(k, t)$ elements from a $t$-tuple $\mathbf{w}$ to replace elements of the $k$-tuple $\mathbf{v}$. Due to the page limit, we provide the formal definition of the neighborhood tuple $Q^F_{\mathbf{w}}(\mathbf{v})$ in Appendix A (as indicated in line 141). To further clarify it, we also provide an example in lines 140-141 and its construction procedure in Figure 1. For the definitions in Section 3.3, we assume the confusion concerns the definition of the equivariant set $ES$, as all other notations are standard; we provide the formal definition of $ES$ in lines 179-181. Please let us know if any confusion remains, and we would be glad to clarify further.
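To make the informal description concrete, here is a small Python sketch of one plausible reading of $Q^F_{\mathbf{w}}(\mathbf{v})$ (the formal definition is in Appendix A of the paper; the function name `neighborhood_tuples` and the exact replacement rule, replacing chosen positions of $\mathbf{v}$ with ordered selections from $\mathbf{w}$, are our illustrative assumptions):

```python
from itertools import combinations, permutations

def neighborhood_tuples(v, w):
    # All tuples obtained by choosing m = 0..min(k, t) positions of the
    # k-tuple v and replacing them with m elements drawn, in order, from
    # the t-tuple w.  This is only a plausible reading of the informal
    # description above; the paper's Appendix A gives the formal one.
    k, t = len(v), len(w)
    result = set()
    for m in range(min(k, t) + 1):
        for positions in combinations(range(k), m):
            for elems in permutations(w, m):
                u = list(v)
                for p, e in zip(positions, elems):
                    u[p] = e
                result.add(tuple(u))
    return result
```

For example, `neighborhood_tuples((1, 2), (3,))` yields the original tuple plus the two single-position replacements.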
>2. Further clarification on the novelties and big O notation.
**Answer:**
In Section 3.1, the folklore discussion mainly shows the advantage of $k$-FWL over $k$-WL: we can achieve higher expressiveness with the same space complexity by redesigning the neighborhood aggregation, which is the insight and motivation behind our proposed $(k, t)$-FWL. To the best of our knowledge, no previous work leverages this insight further to propose methods similar to ours. In our general response point 1, contribution (1), we further clarify the contribution of the proposed $(k, t)$-FWL.
The big Omega notation $\Omega(n^k)$ means the function is lower-bounded by $n^k$; it is typically used when an upper bound of $n^k$ cannot be proven. However, it is well known that $k$-WL has implementations whose complexity is exactly $n^k$, which obviously belongs to $O(n^k)$. If your argument were correct, no algorithm could ever be described with big O notation, since for any algorithm there exists some inefficient implementation whose complexity exceeds the known lower bound.
>3. Subsection 3.1 just contains a simple extension of established ideas, e.g., from [1,2].
**Answer:**
In Subsection 3.1, we introduce $(k, t)$-FWL, which is entirely different from the ideas in [1, 2]. In Subsection 3.2, we introduce the equivariant set $ES$ to replace the $V(G)$ used in the original $k$-WL/FWL; the methods in [1, 2] are only special cases of ours. Further, as discussed in general response point 1, contributions (2-3), this extension allows us to design highly expressive GNNs within a fine-grained design space, and it allows the proposed framework to unify the majority of existing highly expressive GNNs (**Propositions 3.4-3.9**), which is hard to achieve and new to the community. These obviously cannot be achieved by the methods proposed in [1, 2].
>4. Discussion on results in Section 4.2.
**Answer:**
The result in Section 4.2 (**Proposition 4.4**) is necessary for the completeness of our work. It shows that $N^2$-GNN can be as powerful as $N^2$-FWL.
>5. Further discussion on the complexity.
**Answer:**
In the general response point 2, we provide further analysis and discussion on both the time and memory complexity of $(k, t)$-FWL and $N^2$-GNN.
### Questions:
>1. What is the definition of "hierarchical multiset" (line 145)?
**Answer:**
We provide the definition and an example of hierarchical multisets in lines 145-147. In a hierarchical multiset $\{\{\mathbf{v} | \mathbf{v}\in V^t(G)\}\}_t$ over $t$-tuples, elements are grouped hierarchically according to the node order of the tuple. For example, to construct $\{\{(v_1, v_2, v_3)|(v_1, v_2, v_3)\in V^3(G) \}\}_3$ from $\{\{(v_1, v_2, v_3)|(v_1, v_2, v_3)\in V^3(G) \}\}$, we first group together all elements with the same $v_2$ and $v_3$. That is, $\forall v_2, v_3 \in V(G)$, we denote the grouped result by $t(v_2, v_3)=\{\{(v_1, v_2, v_3)|v_1 \in V(G)\}\}$. Next, using a similar procedure, we have $\forall v_3 \in V(G)$, $t(v_3) = \{\{t(v_2, v_3)|v_2 \in V(G)\}\}$. Finally, we group over all possible $v_3 \in V(G)$ to obtain $\{\{(v_1, v_2, v_3)|(v_1, v_2, v_3)\in V^3(G) \}\}_3 = \{\{t(v_3)|v_3 \in V(G)\}\}$.
>2. Can you quantify the h in Theorem 4.1?
**Answer:**
$h$ is a subgraph hop hyperparameter that restricts the subgraph scope within which neighbor tuples are found (hence the name subgraph GNN), as also used in previous work such as $I^2$-GNN [3]. From an implementation standpoint, it can be understood as the radius of the subgraph extracted for each tuple. It does not affect the expressive power analysis, since we can simply set $h$ large enough to cover the whole graph and match the theoretical full-graph-view setting. For SLFWL(2) [2], since it does not have the $h$ hyperparameter and operates on the whole graph (the full-graph-view setting), Theorem 4.1 needs $h$ to be large enough to cover the whole graph.
>3. What is required h for the statement of Theorem 4.3.
**Answer:**
The required $h$ is the same as stated in [3], since the model needs to see up to that many hops to capture the full picture of a particular substructure. That is, 1, 2, 2, 3, 2, 2, 1, 4, 2 for 3-cycles, 4-cycles, 5-cycles, 6-cycles, tailed triangles, chordal cycles, 4-cliques, 4-paths, and triangle-rectangles, respectively. We also use this setting in our experiments and will include this discussion in our revision.
**References:**
[1] Morris et al., Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings, NeurIPS20.
[2] Zhang et al., A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests, ICML23.
[3] Huang et al., Boosting the cycle counting power of graph neural networks with i$^2$-GNNs, ICLR23.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal.
Comment: I am very well aware of the definition of the big Omega. There does not exist a proof that the $k$-WL runs in $\mathcal{O}(n^k)$. Simply iterating over all possible $k$-tuples already takes $\Omega(n^k)$ time; note that this does not include the aggregation over neighboring $k$-tuples.
I do not believe that the work in its current form is ready for publication. I also strongly agree with the points mentioned by Reviewer eNLB.
The only merit of this work is its strong empirical performance.
---
Reply to Comment 1.1.1:
Title: Follow-up discussion
Comment: Thanks for your follow-up comments. We further clarify them as follows.
> Big O vs Big Omega
**Answer:** First, we rewrite the update function of $k$-WL here:
$\mathcal{C}^{l}(\mathbf{v}) = \text{HASH}\left( \mathcal{C}^{l-1}(\mathbf{v}), \left( \{\!\!\{ \mathcal{C}^{l-1}(\mathbf{u}) \mid \mathbf{u} \in Q_j(\mathbf{v}) \}\!\!\} \mid j \in [k] \right)\right)$,
where $Q_j(\mathbf{v})= \{\mathbf{v}_{w/j} \mid w \in V(G)\}$ and $\mathbf{v}_{w/j} = \left(v_{1}, \ldots, v_{j-1}, w, v_{j+1}, \ldots, v_{k} \right)$.
In our paper, we state that $k$-WL has a space complexity of $O(n^k)$ and a time complexity of $O(n^{k+1})$ (lines 33-34). Now, we show there exists an implementation of $k$-WL that satisfies these bounds. First, since the number of $k$-tuples in an $n$-node graph is $n^k$, storing the initial color of each $k$-tuple requires $n^k$ space.
At each iteration, to update the color of each $k$-tuple $\mathbf{v}$:
(1) We first create a new space to save all the updated colors. This results in another $n^k$ space.
(2) Then, $\forall i \in [k]$ and $(v_{1}, \ldots, v_{i-1}, v_{i+1}, \ldots, v_{k}) \in V^{k-1}(G)$, we compute the color of the aggregated multiset $\{\!\!\{\mathcal{C}^{l-1}(v_{1}, \ldots, v_{i-1}, u, v_{i+1}, \ldots, v_{k}) \mid u \in V(G) \}\!\!\}$ and save it as an intermediate result. Since there are in total $kn^{k-1}$ different multisets and each multiset takes $n$ time to compute, this step requires another $kn^{k-1}$ space and $kn^k$ time.
(3) Next, we update the color of each tuple $\mathbf{v}$. Since the color of each multiset was already computed and saved in step (2), we can use the saved results to directly update the color of each tuple $\mathbf{v}$, saving it to the space created in step (1). This takes $k+1$ operations per tuple ($k$ multisets plus the tuple itself). Thus, the total time complexity of this step is $(k+1)n^k \leq (n+1) n^{k} = O(n^{k+1})$.
(4) Finally, we delete all the saved colors except for the updated color for each tuple.
In summary, for space complexity, we have $n^k$ (saving old colors) + $n^k$ (saving new colors) + $kn^{k-1}$ (intermediate results) = $O(n^k)$. For time complexity, we have $kn^k$ (computing all possible multisets) + $O(n^{k+1})$ (updating the color of each $k$-tuple) = $O(n^{k+1})$. We are aware that the running time of $k$-WL can be more tightly bounded by $O(kn^k)$; however, the $O(n^{k+1})$ stated in the paper still holds.
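Steps (1)-(4) above can be sketched as one refinement iteration in Python. This is only an illustrative implementation: a tuple of cached multiset colors stands in for the injective HASH function, and the caching mirrors the $kn^{k-1}$ intermediate multisets described in step (2):

```python
from itertools import product

def kwl_iteration(colors, nodes, k):
    # Step (2): cache, for each position j and each (k-1)-tuple "rest",
    # the aggregated multiset of colours obtained by substituting every
    # node w at position j -- k * n^(k-1) entries, each computed in n time.
    agg = {}
    for j in range(k):
        for rest in product(nodes, repeat=k - 1):
            ms = tuple(sorted(colors[rest[:j] + (w,) + rest[j:]]
                              for w in nodes))
            agg[(j, rest)] = ms
    # Steps (1) + (3): update each of the n^k tuples from the cache.
    new_colors = {}
    for v in product(nodes, repeat=k):
        parts = tuple(agg[(j, v[:j] + v[j + 1:])] for j in range(k))
        new_colors[v] = (colors[v], parts)  # a tuple stands in for HASH(...)
    return new_colors
```

Running one iteration of 2-WL on the path graph 0-1-2, colors remain invariant under the graph's automorphism swapping nodes 0 and 2, as expected.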
In the "Comments" of your review, you stated "It is not correct saying that the $k$-WL requires $\mathcal{O}(n^k)$ space, it is a lower bound not an upper bound." So we assume your main concern is about space complexity. We have now shown that the $\mathcal{O}(n^k)$ space complexity is indeed achievable, so the big Omega notation is not needed.
> Strongly agree with the points mentioned by Reviewer eNLB.
**Answer:** We admit that the criteria for novelty can be subjective. In the general response point 1, we further clarified the main contributions of our paper. We kindly ask the reviewer to go through the response and inform us which points you are concerned about. We are more than glad to explain them further.
> The only merit of this work is its strong empirical performance.
**Answer**: We believe the strong empirical performance of $N^2$-GNN further proves that our proposed $(k, t)$-FWL+ framework has great potential for designing new expressive GNN variants, as evidenced by the two new SOTA results on ZINC and ZINC-full (general response point 1, contribution (4)). We strongly believe $(k, t)$-FWL+ can inspire more novel and expressive GNNs and broaden the design space of current high-expressiveness GNNs. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive and insightful comments. Here we respond to some general concerns mentioned by the reviewers. **All tables mentioned in the response are provided in the PDF**.
### 1. Further clarify the motivations and contributions.
**Motivations:** We try to tackle two intrinsic limitations of $k$-WL/FWL. Namely, (1) the concurrent growth of both the time and space complexity when increasing $k$ and (2) the huge expressiveness gap between two consecutive values of $k$.
**Contributions:**
(1) To tackle the first limitation, we propose $(k, t)$-FWL, which replaces the nodes to traverse (used for defining a $k$-tuple's neighbors) with $t$-tuples. We theoretically show that by varying $t$, we can construct an expressiveness hierarchy with a fixed $k$, i.e., fixed space complexity (**Propositions 3.1 and 3.3, Theorem 3.2**). Although it is true that we can solve the graph isomorphism problem by enumerating all possible mappings between two graphs (with $O(n^2)$ space complexity), no previous work theoretically establishes an expressiveness hierarchy at a fixed space complexity; this hierarchy enables **flexibly tuning the expressive power** (with theoretical guarantees) within a fixed space budget, which is **not feasible** with brute-force methods.
(2) To tackle the second limitation, we propose $k$-FWL+. In $k$-FWL+, we introduce the equivariant set $ES$ to replace the $V(G)$ used in the original $k$-WL/FWL. Although previous works [1, 2] also modified the $V(G)$, they only introduce a particular case, like 1-hop neighboring nodes of the $k$-tuple. Instead, we **generalize it to a much broader space**. This brings two advantages, (1) a new insight into how to design GNNs based on $k$-WL/FWL using fine-grained ES definitions and (2) unification of a wide range of previous GNNs. These cannot be achieved by any previous work like [1, 2].
(3) Combining (1-2) results in a new framework named $(k, t)$-FWL+. $(k, t)$-FWL+ is a very flexible framework that can unify the majority of previously proposed highly expressive GNNs (**Propositions 3.4-3.9**). Based on the $(k, t)$-FWL+ framework, we propose $N^2$-FWL and its neural version $N^2$-GNN, which is provably more powerful than all node-based subgraph GNNs and SLFWL(2) [2], as well as existing edge-based subgraph GNNs [3] (**Theorems 4.1-4.2**). We also prove its strong substructure counting power in **Theorem 4.3**.
(4) We evaluate $N^2$-GNN on many benchmark datasets. $N^2$-GNN achieves **new SOTA results** on the ZINC-subset/full datasets with the **smallest parameter size** (306k on ZINC-subset/ZINC-full compared to ~500k for other models). Specifically, on ZINC-subset, we improve the previous SOTA MAE **from 0.066 (Specformer) to 0.059**, the first time a model has reached below 0.06 MAE. On ZINC-full, the advantage is even larger, **with a new SOTA MAE of 0.013** compared to the previous 0.022. These results are **significant** for the community and further demonstrate the power and flexibility of $(k, t)$-FWL+ in designing GNNs that fit the complexity of real-world tasks.
### 2. Analysis of time and space complexity of the proposed methods.
**Theoretical analysis**: For $N^2$-GNN, we provide a complexity analysis in Appendix D.2. For $(k, t)$-FWL, the space complexity is $O(n^k)$, as we only need to store representations of all $k$-tuples. For time complexity, if $k$ and $t$ are relatively small, updating the color of each $k$-tuple requires $(k, t)$-FWL to aggregate over all $n^t$ possible neighborhood tuples in $V^t(G)$, resulting in $O(n^{k+t})$ time complexity. If both $k$ and $t$ are large enough (relative to $n$), the time complexity further increases to $O(n^{k+t}\cdot m\cdot(\frac{q!}{m!})^2)$, where $m=\min(k, t)$ and $q=\max(k, t)$. For $k$-FWL+ and $(k, t)$-FWL+, the space complexity is $O(n^k)$. However, since the time complexity depends heavily on the choice of $ES$, it is hard to provide a formal analysis for it.
**Empirical study**: Here we show the practical usage of $N^2$-GNN (Tables 1, 2) using the cycle counting datasets, with a similar parameter size for all models and a batch size of 256. We report training time, inference time, and maximum memory usage during inference for both $h=1$ and $h=2$, averaged over 20 epochs.
We can see that the empirical memory usage of $N^2$-GNN is almost the same as $I^2$-GNN, while its runtime is higher. As we mentioned in the limitations (Appendix D.3), to enjoy some degree of parallelism, the current implementation of $N^2$-GNN needs to save all neighbor indices, which introduces a constant-factor memory overhead. However, we want to highlight several points: (1) by changing the implementation, $N^2$-GNN can be run strictly within $O(n^2)$ space, so its merit still holds; (2) the experiment is conducted at the same parameter budget, but on real-world tasks $N^2$-GNN needs far fewer parameters to achieve SOTA results; (3) in this work, we not only propose a new model but also introduce a new flexible framework, $(k, t)$-FWL+, and the empirical success of $N^2$-GNN demonstrates the great potential of this framework for designing new expressive GNN variants.
### 3. Additional ablation study.
We provide an additional ablation study on $(2, t)$-FWL+ by varying $t$ and $ES$ (Tables 3 and 4). We select two important $ES$ mentioned in our paper: $Q_1(v)$ and $\mathcal{SP}(v_1, v_2)$. Note that the performance of $\mathcal{SP}$ or $\mathcal{SP} \times \mathcal{SP}$ on the counting dataset is similar to MPNN, so we omit it from Table 4.
**References:**
[1] Morris et al., Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings, NeurIPS20.
[2] Zhang et al., A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests, ICML23.
[3] Huang et al., Boosting the cycle counting power of graph neural networks with i$^2$-GNNs, ICLR23.
Pdf: /pdf/e490e990a1fa4c196f92a3b023df7dd200b70e39.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors start by considering the k-dim Weisfeiler-Lehman test (WL). To solve the well-known problems of k-WL/FWL (high space complexity $O(n^k)$ and rigid design space), the authors propose two extensions of the k-FWL, named (k, t)-FWL and k-FWL+.
The first extension is based on the simple observation that k-WL aggregates each color separately while k-FWL aggregates the tuple of colors. Expressiveness results for (k,t)-FWL are provided (3.1 - 3.3). The second extension is a simple modification of k-FWL: instead of considering the whole vertex set, it considers only a particular neighborhood. These two extensions are then combined into (k, t)-FWL+, for which expressiveness results are provided relative to the previous gallery of GNN/WL models (3.5 - 3.9). Lastly, the authors propose N2-GNN, an implementation of a version of (2, 2)-FWL+, as well as empirical results showing the efficacy of N2-GNN.
Strengths: - Despite the complex nature of the concepts, the exposition of their main algorithm and the conceptual design was very well done (especially Figure 1)
- I very much appreciate the simplicity of the idea that leads to space-efficient variants of (F)WL tests which are still quite powerful.
- Although I did not check the proofs in detail, the authors provide an impressive list of theoretical results regarding their proposed (k, t)-FWL+, all of which provide solid theoretical superiority over previous works.
- The experimental results seem promising.
Weaknesses: - Although the concepts are well delivered, all technical details, especially the proofs, are entirely relegated to the Appendix. Are there any notable technical novelties in the proofs of the myriad of theoretical results, or are they somewhat standard?
- It is expected that the expressive power of (k, t)-FWL+ depends heavily on the definition of ES, which the authors addressed via a myriad of propositions (3.4 - 3.9) for specific instances. However, if possible, it would be great to have some intuitions on the interplay of the choices of (k, t) and ES. Maybe the authors could choose a single proposition and try to explain the intuition behind it further (e.g., Proposition 3.4) via additional figures or discussions.
- Some of the references are in poor condition (e.g., [11], [12], [16]).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It states on pg. 4 lines 159-160, that "size of Q_wF(w) will grow exponentially with an increase in the size of t". Then does that mean that requiring O(n^2) is equivalent to confining t=2 (or at least small t)? Also, throughout the paper, the only time t is not 2 is in the theories when the authors prove a hierarchy w.r.t. t. Are there situations in which t should be set to 3 or higher?
2. Is there any way of comparing the expressiveness of (k, t)-FWL and (k', t')-FWL with k+t = k'+t'? For instance, can one compare the expressiveness of (2, 3)-FWL and (3, 2)-FWL?
3. Before combining (k,t)-FWL and FWL+, is it possible to also obtain expressiveness hierarchy results on FLW+ depending on the choice of ES? For instance, with fixed t=1 and k=2, can you say anything about the expressiveness across the choices of ES(v) as considered in Proposition 3.4 - 3.9?
4. It is mentioned in the conclusions that the practical complexity of N2-GNN can still be unbearable, especially for dense graphs. Thus I'm curious about the running times of all the algorithms in comparison. Is there a trade-off if one sets the training budget approximately the same for all the algorithms?
5. There was a mention of Puny et al. [43] that performs O(n^2) space 3-WL via graph polynomial features. Although the authors mentioned that they suffer from overfitting, I'm still curious. Are there any theoretical results (e.g., expressiveness hierarchy) or empirical results that compare [43] to (k, t)-FWL+?
(minor suggestion: it would be nice to have a table of contents, maybe at the start of the appendix, to make it easier to navigate)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Although this work needs not address any societal impact, the work has no mention of its limitations and possible future works. I would like to ask the authors to include some of the possible limitations of their work (possibly with possible solutions, but I won't be negative if there are none for now, which by the way is also quite natural in my eyes.)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and insightful comments! We revise all references and add contents for Appendix in our revision and reply to all other concerns below.
### Weaknesses:
>1. The novelties in the proofs.
**Answer:**
Thanks for raising this. Our proofs mainly follow the strategies used in [1, 2]. However, we want to highlight that most of our proofs involve representing high-dimensional tuples using low-dimensional tuples, which, to the best of our knowledge, does not exist in any previous work and is nontrivial to prove. For example, in Proposition 3.7, we match the expressiveness of edge-based subgraph GNNs using $(k, t)$-FWL+ with $k=2$ (2-tuple based). However, it is well known that edge-based subgraph GNNs are intrinsically 3-tuple based. Aligning each element of a 3-tuple to a 2-tuple-based method is non-trivial.
>2. The interplay between $k$, $t$, and $ES$ and design principles.
**Answer:**
This is a great question and we would like to answer it from two parts.
**Interplay between $(k, t)$ and $ES$:** In general, this is a hard question to tackle. The introduction of $(k, t)$-FWL allows us to trade off time against space complexity to achieve the desired expressive power: increasing either $k$ or $t$ increases the expressive power (as indicated in Proposition 3.3). The introduction of $ES$ makes the expressive power no longer bounded by either $k$ or $t$. For example, we can let $k=2$, $t=1$, and $ES(\mathbf{v})$ be all nodes in $4$-cliques passing through node $v_1$. This gives the model a better ability to count this particular substructure, even though it is well known that 4-cliques cannot be counted by 2-FWL or 3-WL [3].
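As a toy illustration of such a substructure-aware $ES$, here is a hedged Python sketch. The function name `four_clique_es` and the adjacency-dict representation are our own illustrative choices, not part of the paper:

```python
from itertools import combinations

def four_clique_es(adj, v1):
    # Hypothetical ES(v): all nodes lying in some 4-clique that passes
    # through node v1 (the illustrative choice discussed in the text).
    # adj maps each node to the set of its neighbours.
    nodes = set()
    for a, b, c in combinations(sorted(adj[v1]), 3):
        # v1's three neighbours form a 4-clique with v1 iff they are
        # pairwise adjacent to each other.
        if b in adj[a] and c in adj[a] and c in adj[b]:
            nodes.update((v1, a, b, c))
    return nodes
```

On the complete graph $K_4$ this returns all four nodes, while on a graph with no 4-clique through $v_1$ it returns the empty set.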
**Intuitions behind Propositions 3.4-3.9:** The most important principle when proving Propositions 3.4-3.9 is to find a variant that best matches the expressiveness of the existing method. Therefore, the choices of $k$, $t$, and $ES$ all serve this purpose. We believe multiple solutions exist for some methods and leave exploring them to future work.
### Questions:
>1. Further clarification on the space complexity.
**Answer:**
The space complexity of $(k, t)$-FWL depends only on $k$ (the size of the tuples, and thus the number of tuples $|V^k(G)|=O(n^k)$). If we fix $k=2$, the space complexity is $O(n^2)$ no matter which $t$ we use. When the memory budget is limited but a highly expressive GNN is still desired, a large $t$ is a good solution.
>2. The comparison on the expressiveness of $(k, t)$-FWL and $(k', t')$-FWL with $k+t = k'+t'$.
>
**Answer:**
Thanks for your insightful comment! We do believe there exists a further relationship between $(k, t)$-FWL and $(k', t')$-FWL with $k+t = k'+t'$ and have made some attempts to prove it. However, we find it non-trivial to give a formal theoretical proof and leave it to future work.
>3. The expressiveness of $ES$.
**Answer:**
Thanks for your great question! As we mentioned in the answer to weakness 2, the choice of $ES$ is key to the final expressive power of the model, and its power is not limited by either $k$ or $t$. In general response point 3, we further provide an additional ablation study and show the practical power of different choices of $ES$. However, it is currently hard to provide a canonical characterization of the expressive power of different $ES$.
>4. Practical usage of $N^2$-GNN.
**Answer:**
In general response point 2, we provide further analysis and discussion on both time and memory complexity of $(k, t)$-FWL and $N^2$-GNN.
>5. Discussion on Puny et al [4].
**Answer:**
Thanks for your insightful comment! [4] introduces a new way to evaluate the expressive power of graph models---the ability of the model to compute equivariant polynomials up to a certain degree. However, as mentioned in [4], equivariant polynomials are highly related to subgraph counting. Theoretically, we think equivariant polynomials are more suitable for comparison with models like CIN [5]; it is hard to directly connect $(k, t)$-FWL+ with equivariant polynomials. Empirically, the comparison depends on the implementation of $(k, t)$-FWL+ and the choice of equivariant polynomial. Puny et al. [4] do introduce PPGN++, which adds polynomial features up to degree 6 that cannot be computed by PPGN, and report its performance on ZINC-subset (0.071) and ZINC-full (0.020), which underperforms $N^2$-GNN.
### Limitations:
Thanks for your great question! Due to the page limit, we mainly discuss the limitations of $N^2$-GNN in Appendix D.3. Specifically: (1) although $N^2$-GNN achieves its theoretical space complexity, it can still be memory-costly on dense graphs due to the current implementation; this can be addressed by changing the implementation. (2) The model becomes hard to optimize with large $t$ on dense graphs. The key challenge is realizing the injectiveness of the multiset pooling function in the neural version; so far, we have only tested summation. We will investigate more powerful multiset pooling functions such as DeepSets [6] in the future. These are all future topics for this work. Meanwhile, as you mentioned, an in-depth theoretical analysis of different $ES$ is also very interesting and worth further investigation! We will provide a complete discussion of the limitations and future work in the revision.
**References:**
[1] Morris et al., Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings, NeurIPS 2020.
[2] Zhang et al., A complete expressiveness hierarchy for subgraph GNNs via subgraph Weisfeiler-Lehman tests, ICML 2023.
[3] Arvind et al., On Weisfeiler-Leman invariance: Subgraph counts and related graph properties, Journal of Computer and System Sciences, 2020.
[4] Puny et al., Equivariant polynomials for graph neural networks, ICML 2023.
[5] Bodnar et al., Weisfeiler and Lehman go cellular: CW networks, NeurIPS 2021.
[6] Zaheer et al., Deep Sets, NeurIPS 2017.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing detailed responses to my questions and comments. I'm satisfied with the responses and intend to keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank you again for your constructive feedback and suggestions!
Textually Pretrained Speech Language Models | Accept (poster) | Summary: The authors explain a method to improve the performance of a speech language model by reusing the weights of a language model trained on text.
Strengths: Using a well-trained text-based language model as an initial model seems like a good idea.
If the authors' claims generalize, the method could serve as a good starting point for speech language models, which are relatively difficult to train.
Weaknesses: The authors empirically discovered that their method achieves better performance at the same training step, but there seems to be a lack of theoretical analysis. The process of analyzing, estimating, and confirming evidence for why such improvement occurs is missing. A simple empirical discovery without analysis and validation of why it happens would make a limited contribution to the conference-level community.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I would like to ask the authors for additional comments regarding the following points and the weaknesses mentioned.
The audio samples provided by the authors are not sufficient. While the authors support their claims through various experiments in the manuscript, only some of the samples are included. While source code is the clearest evidence for the experiments and results, it is necessary to present other detailed experimental results when the source code is not available. Furthermore, in the field of speech synthesis, subjective evaluations hold more weight than objective evaluations, so it is essential to provide reviewers with the opportunity to listen to and evaluate the samples directly.
The method proposed by the authors is a form of transfer learning. However, when observing the training-phase perplexity (PPL) curve presented by the authors, it is difficult to consider the model as having converged at the 400k step where the authors claim to have ended training. Therefore, it is not clear whether the performance difference claimed by the authors is simply due to the difference in convergence time caused by the weight initialization, or due to advantages obtained from the text-based language model (such as having been trained on larger data, more diverse contexts, etc.).
The MMOS results are presented in a table. It is difficult to verify the confidence intervals (CI). If the CI cannot be verified, it should be indicated how many evaluators participated and with how many samples.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations have been well described.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that using text LMs as initialization for SpeechLMs is a good idea, as our empirical results demonstrate.
**Regarding theoretical justification:** We agree that providing theoretical justification of why TWIST outperforms Cold-Init models is interesting. However, such theoretical justification is far from trivial. Furthermore, offering techniques (such as TWIST) that result in consistent and substantial performance improvements, even in the absence of a well-defined theoretical rationale, holds considerable potential to make a meaningful and beneficial contribution to the broader community, for instance, see [1].
We hypothesize the benefit of TWIST comes from providing a better prior initialization of the model weights for building SpeechLMs. This allows better capturing of long-range dependencies in the input sequence (something we do not observe with image data, for instance; see Table 8 in the Appendix). In the attached rebuttal PDF file (Figure 1), we visualize the performance gap between TWIST and the Cold-Init model (350M params) using tStoryCloze while gradually decreasing the context length. We observe that the bigger the context, the bigger the gap between the models. We believe such findings will be valuable and of high interest to the community, especially at the intersection between written and spoken language.
**Regarding audio samples and source code:** In the appendix material, we provide 15 samples from the proposed method. Per the reviewer's request, we provide an additional 450 samples (a link was sent to the AC as per the NeurIPS guidelines); these are the samples that were sent for MMOS evaluation. As for source code, we will open-source both code and pretrained models.
However, we would like to emphasize that our contribution in this work is not synthesis quality but improved spoken language modeling (i.e., improving what is said rather than how it sounds). Improving speech generation is orthogonal to our method, and future research could improve generation quality on top of TWIST.
**Regarding model convergence:** We agree that with infinite compute and infinite data both models might converge to the same point. However, we note that we used the biggest publicly available speech dataset for training SpeechLMs so far (that we are aware of), so scaling further is far from trivial; moreover, our setup is already quite expensive as it is (training the 1.3B model for 400k steps takes ~9 days on 32 GPUs). Hence, for a fair comparison, we decided to limit each run to 400k steps (which is reasonable considering the change in the training loss). Under this setup we do observe better performance across all settings and faster convergence, as can be seen in Figure 2a.
Furthermore, we do train our LLaMA model until convergence, the graphs are presented in the PDF attached to the rebuttal (Figure 2). One can see that, even at full convergence, the TWIST model outperforms the Cold-Init model in terms of speech PPL (similarly to Figure 2b).
**Regarding the source of the performance difference:** we believe the performance improvement of TWIST is due to both better initialization and the data used to initialize the model: (i) for better initialization: we observe that the model achieves better loss values and overall scores (sWUGGY, sBLIMP, etc.) using TWIST from the beginning of training, hence we conclude that TWIST provides better prior weight initialization; (ii) for the type of data used for initialization: we do not observe a performance boost when we use ImageGPT (trained on image data) as initialization for TWIST, which suggests that the type of data also plays an important role in the performance improvement obtained by TWIST.
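The warm initialization itself can be sketched as follows. This is a hypothetical, minimal stdlib sketch and not the authors' code: the key names (`embed_tokens.weight`, `lm_head.weight`) and the plain-dict state representation are illustrative assumptions. The idea is simply to copy all transformer weights from the text LM while re-initializing the token embedding and output head for the new speech-unit vocabulary.

```python
import random

def twist_init(text_lm_state, speech_vocab_size, hidden_dim):
    """Warm-start a SpeechLM from a text LM: keep the transformer weights,
    but replace the token embedding / output head, since the vocabulary
    changes from text subwords to discrete speech units."""
    speech_state = {}
    for name, weight in text_lm_state.items():
        if name in ("embed_tokens.weight", "lm_head.weight"):
            # Vocabulary-dependent parameters: re-initialize from scratch
            # with the speech-unit vocabulary size.
            speech_state[name] = [
                [random.gauss(0.0, 0.02) for _ in range(hidden_dim)]
                for _ in range(speech_vocab_size)
            ]
        else:
            # All other weights (attention, MLPs, norms) are copied as-is.
            speech_state[name] = weight
    return speech_state

# Toy example: a "text LM" with a 3-word vocab, warm-started for 500 units.
text_lm = {
    "embed_tokens.weight": [[0.1, 0.2]] * 3,
    "layers.0.attn.weight": [[1.0, 0.0], [0.0, 1.0]],
    "lm_head.weight": [[0.3, 0.4]] * 3,
}
speech_lm = twist_init(text_lm, speech_vocab_size=500, hidden_dim=2)
assert len(speech_lm["embed_tokens.weight"]) == 500       # new speech vocab
assert speech_lm["layers.0.attn.weight"] == text_lm["layers.0.attn.weight"]
```

In a real framework the same idea amounts to loading the pretrained checkpoint and resizing/resetting only the embedding and output layers before continuing training on speech units.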
**Regarding CI and other details for MMOS:** we include below the full table of all MMOS scores used for Figure 3 (we will include these results in the manuscript). As for details regarding the number of raters and number of examples, please see line 175 in Section 3.3 and line 267 in Section 4 (human evaluation paragraph): we use 50 samples per dataset (150 in total) and enforce 10 raters per sample.
| Method | LS-clean Mean | LS-clean CI@95 | LS-other Mean | LS-other CI@95 | LL Mean | LL CI@95 |
|:-------------:|:--------:|:-----:|:--------:|:-----:|:----:|:-----:|
| Ref. | 4.11 | 0.04 | 4.23 | 0.04 | 4.05 | 0.05 |
| Resynth | 3.95 | 0.07 | 3.87 | 0.06 | 3.96 | 0.06 |
| no-TWIST 1.3B | 3.34 | 0.08 | 3.37 | 0.06 | 3.31 | 0.07 |
| TWIST 1.3B | 3.51 | 0.07 | 3.67 | 0.07 | 3.65 | 0.06 |
| TWIST 7B | 3.79 | 0.06 | 3.85 | 0.07 | 3.81 | 0.06 |
[1] Peters, Matthew E., et al. "Deep contextualized word representations." Proceedings of NAACL-HLT. 2018.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed explanations.
I have better understood the authors' work through the rebuttal and the attached audio samples. I agree that the base model trained with large-scale text data would have improved performance, and the proposed method showed good results. However, considering the level of the conference, further academic analysis and evidence may be needed beyond the empirical findings.
I agree with most authors' opinions and will update my score to 6.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your insightful feedback. We're delighted that our rebuttal and the additional audio samples proved helpful, and thankful for your input and the revised score. | Summary: This work proposes TWIST, Textually Warm-Initialized Speech Transformer-based LMs, a technique to initialize SpeechLMs with pretrained textual LMs. Different textual LMs, tokenizers, models and dataset sizes are evaluated using TWIST. The authors find that a warm start with a textual LM helps compared to a random initialization when evaluated on a number of metrics.
Strengths: - TWIST shows how a warm start with any pretrained textual LM (OPT, Bloom, Pythia) benefits a speech-based LM.
- This is one of the first works to use large-scale audio data (i.e., approximately 150K hours of speech) to train a speech-based LM.
- This work presents a new evaluation benchmark for speech-based LMs based on StoryCloze.
Weaknesses: Warm start with pretrained textual LMs was found to be effective for speechLMs. A more detailed analysis of the learned representations with and without TWIST would have been useful for the reader. I elaborate on this further in my questions below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - An analysis of the learned speech-based discrete tokens with and without the use of TWIST would be interesting to see. For example:
- Visualizing the representations learned via TWIST (i.e., a warm initialization from a textual LM) and those learned via a randomly initialized model. Color-coding representations based on phoneme labels (derived from a forced alignment) might reveal that phoneme distinctions are less pronounced with using TWIST.
- Training simple probing classifiers (e.g., to recognize phones or subword textual units) whose inputs are speech representations with and without TWIST and comparing their accuracies.
- Due to the warm start from textual LMs that model long-term textual dependencies, one would assume that TWIST would do well on measures like sBLIMP that look at the grammaticality of entire sentences. However, from Table 3, TWIST does not fare as well on sBLIMP as prior work from Borsos et al. Could the authors comment on this? Also, it would be useful to show numbers using a randomly initialized model (in direct comparison to TWIST) in Table 3.
- Are the results in Table 2 using TWIST with OPT-350M or OPT-1.3B? Please clarify. Assuming it is 1.3B, why is the sBLIMP score for TWIST-1.3B in Table 3 (60.3) different from that reported in Table 2 (59.3)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been addressed in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the extensive evaluation we conducted across different textual LMs, tokenizers, models and dataset sizes. We also appreciate the reviewer for noting that our work is one of the first ones to work on large-scale speech language modeling and the introduction of new benchmarks for speech language modeling.
**Regarding analysis of the proposed method:** We thank the reviewer for their suggestion; however, we would like to emphasize that both cold- and warm-started models use the same tokenizer (obtained from a HuBERT model). When analyzing such discrete representations, we already observe a strong correlation with phoneme states (even without any SpeechLM optimization involved). We can consider such inputs to the model as “pseudo-phonemes”. Similar findings can also be observed in prior work (see [1]). Hence, we do not expect to see strong trends in better capturing phoneme information between TWIST and non-TWIST models.
We hypothesize that the performance improvement we observe while using TWIST is due to better prior weight initialization, which provides better modeling of long sequences. Following the reviewer's request, we visualize the performance gap between the TWIST and non-TWIST models (350M params) while gradually decreasing the context length (see Figure 1 in the attached rebuttal PDF file). We observe that the bigger the context, the bigger the gap between the models.
We additionally provide below a few textual examples from tStoryCloze which the TWIST model succeeds in modeling, while the non-TWIST one does not. \
Each example is followed by the correct / incorrect suffix.
*Example 123:*
*Harry went to the theme park with his family. His dad and brother rode on a big roller coaster with him. He then rode on some smaller rides with his mom. The family ate at a restaurant at the park. \
Harry had a great time at the park. / After counting the cars, we went back home.*
*Example 223:*
*Jennifer forgot to close the front door when she got home. By the time she noticed her pet cat had disappeared. She walked around the neighborhood calling for the cat. She made flyers with her contact information. \
Jennifer's cat came home the next day, acting very hungry. / She decided to watch her favorite comedy show on Netflix.*
*Example 456:*
*The sea was frightening during a storm. Marcus hated the way the boat rose up and crashed down. It made him sick to his stomach. He cowered in his cabin and shut his eyes but it didn't help. \
Marcus then vomited from sea sickness. / The worker thanked Mark for his patronage.*
As can be seen, the provided examples consistently demonstrate that the correct suffix incorporates a name that offers a clue to the correctness of the suffix itself. Notice that the last appearance of the name in the prefix is positioned at a considerable distance from the suffix (in the first or second sentence of the prefix). We note that this property does not hold for all examples. This provides further evidence that TWIST models capture long-range dependencies better.
**Regarding sBLIMP scores:** The AudioLM (Borsos et al.) model consists of a cascade of three Transformer models and uses a different speech tokenizer (which is not publicly available). We hypothesize the different tokenizer is the main factor that affects the sBLIMP results, which is why we do a systematic comparison using the same tokenizer and same model, with and without TWIST throughout the paper (e.g., Tables 2 & 6). We also highlight that our contribution is orthogonal to that of AudioLM, which could additionally use TWIST and potentially get a further performance improvement.
**Regarding adding a randomly initialized model for Table 3:** Thank you for your comment, we will add that result to Table 3.
**Regarding results in Table 2:** We use the OPT-350M, we will clarify.
[1] Sicherman, Amitay, and Yossi Adi. "Analysing discrete self supervised speech representation for spoken language modeling." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer pWSQ,
We would like to thank you once again for taking the time to review our manuscript! Your comments and feedback are highly appreciated.
We would like to know whether our rebuttals addressed your previous comments. If they did, we would greatly value an increase of the score.
Please, let us know if you have any follow-up concerns or comments. We would be happy to clarify those. | Summary: This paper proposes a simple initialization method for speech LMs named, TWIST (Textually Warm-Initialized Speech Transformer Language Models). Instead of cold-initializing a speech LM, twist initializes it with LLM weights (minus the token embeddings, which are replaced with speech vocabulary). The authors show than on OPT models of size 125M to 1.3B (+ BLOOM, Pythia of 350M size), the proposed method of initialization helps in convergence and final perplexity evaluation, as well as shows benefits over cold initialization over human evaluation (MMOS) of generated completions. The authors also create two spoken versions of the StoryCloze dataset.
Strengths: 1. The proposed method is quite simple (just a straight-forward initialization trick) and the empirical results show small benefits across speech LMs of different sizes.
Weaknesses: 1. The experiments could be more comprehensive, e.g., only one size of BLOOM/Pythia is considered. Similarly, the authors do not report the PPL over the transcription of the generated speech. Even if the results are high-variance, it is an important aspect of evaluation as far as the quality and diversity of speech continuations is considered.
2. The proposed method is extremely simple, it just initializes the speech LM with LLM weights. There is prior empirical justification to do this; however, the weight spaces are quite incongruent, as acknowledged by the authors, and this problem hasn't been addressed at all. As such, it is hard to consider the contribution very solid in terms of its technical merit. The authors show that text-based initialization outperforms image-based initialization, but that again is not a very novel result.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why not include the PPL over speech transcription results in section 3.3? If only one component is varied (cold-init model vs TWIST), then the results could be quite useful?
2. The data scaling is highlighted as a major contribution, however, except OPT other models aren't varied upto the Billion scale.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s acknowledgement of the new benchmark contributions, the benefit of using TWIST in terms of convergence speed and quality while being simple and easy to use.
**Regarding reporting only one model size for Bloom/Pythia (weakness 1, question 2):** To have a fair comparison in the paper, we report results for different model sizes using the same family of model architecture, OPT. To evaluate the effect of model architecture, we additionally report results for Bloom and Pythia. We also report results for ImageGPT and LLaMA models. Hence, we believe the reported experiments are comprehensive and include a versatile set of models. We highlight that each such experiment is resource intensive (e.g., it takes ~9 days to train the 1.3B parameter model for 400k steps using 32 GPUs), which limits the number of experiments we can run. Nevertheless, we provide cold- and warm-init results for Bloom and Pythia using a 1.3B parameter models. Results show similar trends as we observe with the other reported models.
| TWIST | MODEL | PPL↓ | SWUGGY↑ | SBLIMP↑ |
|:-------------:|:-------------:|:--------:|:-----------:|:-----------:|
| ✗ | BLOOM 1.3B | 6.16 | 81.88 | 59.40 |
| ✓ | BLOOM 1.3B | 6.02 | 82.87 | 60.47 |
| ✗ | Pythia 1.3B | 6.13 | 81.47 | 59.72 |
| ✓ | Pythia 1.3B | 5.91 | 82.73 | 60.19 |
**Regarding reporting PPL over transcripts (weakness 1, question 1):**
As stated in the paper (lines 177-182), text-PPL scores over the transcribed speech generations are sensitive to modifications of the sampling parameters and to ASR errors [1]. Furthermore, text-PPL tells only one side of the story: we also need to account for diversity in the generation (e.g., auto-BLEU [1]). For instance, repeated speech gets good text-PPL scores at the cost of bad diversity measures.
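To make the repeated-speech failure mode concrete, here is a simplified, hypothetical repetition score in the spirit of auto-BLEU (the actual metric follows Lakhotia et al. [1]; this stdlib sketch only illustrates why a coherence metric must be paired with a diversity metric). It measures the fraction of within-utterance n-grams that also occur elsewhere in the same utterance.

```python
from collections import Counter

def repetition_score(tokens, n=2):
    """Fraction of n-grams that appear more than once in the utterance.
    Higher = more repetitive, i.e., lower generation diversity."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(1 for g in ngrams if counts[g] > 1)
    return repeated / len(ngrams)

# A degenerate looping generation (which could still score a good
# text-PPL) is maximally repetitive ...
assert repetition_score("the cat sat the cat sat the cat sat".split()) == 1.0
# ... while a non-repetitive sentence scores 0.
assert repetition_score("a quick brown fox jumps over the lazy dog".split()) == 0.0
```

This trade-off motivates reporting a diversity measure alongside text-PPL and calibrating the sampling temperature against it.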
To mitigate that, when computing the automatic metrics for generated speech, we threshold the temperature parameter using the ASR model confidence. Next, as there can be trade-offs in terms of sentence coherence and diversity, we calibrated the temperature so that the generated speech matches the transcription of the original utterance (as much as possible) in terms of generation diversity (auto-BLEU). Due to all of the above, we decided to drop this evaluation from our manuscript. Nevertheless, as per the reviewer’s request, below we report both text-PPL and auto-BLEU results for our main family of models (OPT architecture).
| TWIST | MODEL-SIZE | text-PPL↓ | auto-BLEU↓ |
|:-------------:|:------------------:|:-------------:|:--------------:|
| --- | orig. Transcript | 36.35 | 0.225 |
| ✗ | 125M param | 88.17 | 0.305 |
| ✓ | 125M param | 74.95 | 0.288 |
| ✗ | 350M param | 117.17 | 0.265 |
| ✓ | 350M param | 92.69 | 0.253 |
| ✗ | 1.3B param | 67.68 | 0.298 |
| ✓ | 1.3B param | 47.8 | 0.296 |
As can be seen, while the diversity metric is comparable across models (due to calibration), TWIST models outperform the cold-init. ones in terms of sentence coherence. These measures further support our claims: TWIST models outperform cold-init, and models generally improve with scale. All text-PPL measures were computed using a LLaMA-7B (text model).
Notice that the text-PPL scores for the 350M param models are worse than those of the 125M param models. However, as stated, the diversity of the generations (as computed using auto-BLEU) is better for the 350M models.
**Regarding the simplicity of the proposed method:** As noted by the reviewer themselves, the simplicity of our approach is one of its strengths, especially combined with consistent performance gains that it leads to.
**Regarding prior empirical justification:** To the best of our knowledge, we are the first work to empirically show this phenomenon for SpeechLMs (i.e., text initialization is beneficial for speech based LMs). Similarly, for ImageGPT, we are not aware of any prior work that shows similar findings in the past (see background and method sections). Additionally, the main purpose of the ImageGPT experiment is to better highlight the advantages of using text-based LMs for initialization. We believe such findings are interesting and would be valuable for the community. We would be happy if the reviewer could point to prior work that presents similar findings.
**Regarding data scaling (Question 2):** We believe there is a misunderstanding. Scaling the data does help the model achieve significantly better performance across all model sizes, except for the 125M param model using 10% or 100% of the data, where we observe only minor improvements, which we believe is due to model capacity (see Figure 2a). When considering model scaling, we also observe a significant improvement in performance when scaling the model, again with the exception of the 1% data setting, where the bigger model tends to overfit faster. Can the reviewer please clarify their question / concern?
In addition, we report the results for BLOOM and Pythia using the 1.3B parameter model and observe similar trends. See the table above.
[1] Lakhotia, Kushal, et al. "On generative spoken language modeling from raw audio." Transactions of the Association for Computational Linguistics 9 (2021): 1336-1354.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 6BNW,
We would like to thank you once again for taking the time to review our manuscript! Your comments and feedback are highly appreciated.
We would like to know whether our rebuttals addressed your previous comments. If they did, we would greatly value an increase of the score.
Please, let us know if you have any follow-up concerns or comments. We would be happy to clarify those. | Summary: This paper studies the effect of textual LM on SpeechLMs. They propose TWIST, which initializes the SpeechLM model with a pre-trained textual LM, and then finetune with the speech datasets.
This paper provides a complementary exploration of generative spoken language modeling, including front-end processing of speech tokenizer, the core component of SpeechLM, and back-end speech synthesis of Vocoder.
TWIST uses a warm-start for SpeechLM and outperforms the cold-start model in all experiments. In addition, this paper also provides empirical results across various pretrained models, and presents the large-scale 7B-sized Speech language model.
Strengths: 1. This paper provides insight into the design of SpeechLMs based on rapidly developing large- scale textual pretrained models.
2. This paper presents many empirical conclusions and findings, such as the HuBERT setting, results on different scales of training data.
3. This paper introduces new benchmarks for evaluating SpeechLMs.
Weaknesses: 1. Some details are missing. For example, warm-start allows better performance compared to cold-start, but training costs for both methods are not reported. These results help researchers estimate the training costs of large-scale SpeechLMs.
2. Automatic evaluation results are missing. This paper reports the human evaluation of MMOS for speech generation, but no automatic evaluation results are included, like WER for ASR results. Therefore, it is difficult to compare results across related work.
3. Although TWIST shows powerful capability of speech understanding, SpeechLMs still lack deep semantic understanding compared to textual models, as stated in Limitation. Therefore, the proposed method is somewhat of a compromise.
Despite these issues, I still believe that this work is highly valuable and can provide direction for the following researchers.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Does HuBERT use different vocoders to resynthesize speech for different clustering tokens and frequencies in Section A.2?
2. Can the authors provide an explanation for the poor performance of cold-start methods? Is it because there is insufficient training data to fully train the model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer believes our work is “highly valuable and can provide direction for following research.” We also thank the reviewer for acknowledging the contributions of our paper including the rigorous empirical results, the insights into the design considerations of SpeechLMs and the introduction of a new speech benchmark based on the StoryCloze benchmark.
**Regarding reporting training costs:** We highlight that both the cold- and warm-started models are trained using similar resources and have the same computational cost; for example, the 1.3B models are each trained on 32 GPUs for 400K steps (about 9 days). See Appendix A.1 for more details. Moreover, in Figure 2b we show that for some models our approach (TWIST) saves about 75% of the training time/cost compared to the cold-init baseline. Finally, we note that these numbers do not include the cost of pretraining the textual models, as we assume such models are readily available and do not require further processing on our behalf.
**Regarding lack of automatic evaluations:** We agree that additional automatic evaluations would make the paper stronger. Unfortunately, automatic evaluation is a weak point of the emerging field of SpeechLMs. In fact, one of our main contributions addresses this particular aspect: the introduction of tStoryCloze and sStoryCloze as additional objective metrics. In addition to these metrics, we provided the standard metrics for evaluating SpeechLMs: sWUGGY and sBLIMP (see [1-3]).
Furthermore, as discussed in lines 177-182, we tried to add metrics over text transcription (produced by an ASR model) of the generated speech such as sentence coherence (text-PPL) and generation diversity (auto-BLEU) as suggested in [1]. We observe that these metrics can be unstable and sensitive to different hyper-parameters. For instance, errors from the ASR model, such as word corrections for unintelligible speech, can affect the text-PPL metric. To mitigate that, we threshold the temperature parameter using the ASR model confidence. Next, as there can be trade-offs in terms of sentence coherence and diversity, e.g., repeated generations can get good text-PPL but bad diversity measures, we calibrated the temperature so that the generated speech matches the transcription of the original utterance (as much as possible) in terms of generation diversity. Due to all of the above, we decided to drop this evaluation from our manuscript. Nevertheless, as per the reviewer’s request, below we report both text-PPL and auto-BLEU results for our main family of models (OPT architecture).
| TWIST | MODEL-SIZE | text-PPL↓ | auto-BLEU↓ |
|:-------------:|:------------------:|:-------------:|:--------------:|
| --- | orig. Transcript | 36.35 | 0.225 |
| ✗ | 125M param | 88.17 | 0.305 |
| ✓ | 125M param | 74.95 | 0.288 |
| ✗ | 350M param | 117.17 | 0.265 |
| ✓ | 350M param | 92.69 | 0.253 |
| ✗ | 1.3B param | 67.68 | 0.298 |
| ✓ | 1.3B param | 47.8 | 0.296 |
As can be seen, while the diversity metric is comparable across models (due to calibration), TWIST models outperform the cold-init. ones in terms of sentence coherence. These measures further support our claims: TWIST models outperform cold-init, and models generally improve with scale. All text-PPL measures were computed using a LLaMA-7B (text model).
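For concreteness, below is a minimal sketch of a within-utterance diversity measure in the spirit of auto-BLEU. The exact definition used in [1] may differ; the fraction-of-repeated-n-grams score here is our illustrative assumption, where a lower score indicates more diverse generations:

```python
from collections import Counter

# Illustrative repetition score (an assumption, not necessarily the exact
# auto-BLEU definition from [1]): the fraction of n-grams in an utterance
# that occur more than once within it. Lower means more diverse.
def repetition_score(tokens, n=2):
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

diverse = "the quick brown fox jumps over the lazy dog".split()
looped = "the cat sat and the cat sat and the cat sat".split()
print(repetition_score(diverse), repetition_score(looped))  # → 0.0 1.0
```

Fully diverse token sequences score 0, while looped, repetitive generations approach 1, which is why repeated generations can obtain a good text-PPL but a bad diversity measure.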
If the reviewer has additional metric suggestions, we would be happy to incorporate them into the paper.
**Regarding SpeechLMs not on par with TextLMs:** We agree there is still room for improvement for speechLMs when compared to text-based models. While not perfect, we believe our work is a step in the right direction towards closing that gap, and hope it will allow others to build on our approach and make further progress.
**Answers to questions:**
1. Yes, each HuBERT model comes along with its “own” vocoder.
2. We can only speculate as to why cold-initialized models perform worse than warm-initialized ones. One option would be, as the reviewer suggests, that the amount of data is still insufficient for a cold-started model. This is backed by Figure 2.a which shows that for larger models using more data makes the gap between cold- and warm-initialized models smaller. This hypothesis is not trivial to test, as we used all publicly available speech data that we are aware of, so simply scaling the models with more data is challenging, which further highlights the need for alternative methods for improving SpeechLMs, such as our proposed method.
Having said that, the type of data used for pretraining plays an important role, as can be observed from the modality experiment, which used ImageGPT as warm initialization for TWIST (lines 237-241).
[1] Lakhotia, Kushal, et al. "On generative spoken language modeling from raw audio." Transactions of the Association for Computational Linguistics 9 (2021): 1336-1354.
[2] Borsos, Zalán, et al. "Audiolm: a language modeling approach to audio generation." IEEE/ACM Transactions on Audio, Speech, and Language Processing (2023).
[3] Qian, Kaizhi, et al. "Contentvec: An improved self-supervised speech representation by disentangling speakers." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer qFrm,
We would like to thank you once again for taking the time to review our manuscript! Your comments and feedback are highly appreciated.
We would like to know whether our rebuttals addressed your previous comments. Please, let us know if you have any follow-up concerns or comments. We would be happy to clarify those.
---
Rebuttal Comment 1.2:
Comment: Thanks for the detailed explanations. I'll keep my score, but I hope the authors can add more content as suggested by other reviewers. | Rebuttal 1:
Rebuttal: We thank all reviewers for their detailed responses and valuable feedback. We are happy the reviewers acknowledge that the use of our method (TWIST) benefits the training and quality of speech language models (SpeechLMs). Reviewer 6BNW mentioned that **TWIST is effective, simple and easy to use**, while reviewer 5UoA agreed that **using a warm initialization from text-LMs is a good idea**. We appreciate reviewers qFrm, 6BNW and pWSQ for acknowledging that **our work is one of the first works to perform large scale experiments on SpeechLMs, both in terms of data scale and model sizes**. We thank reviewers qFrm, pWSQ and 5UoA for emphasizing that **our paper includes extensive evaluations across different textual LMs, tokenizers, models and dataset sizes**. Furthermore, we thank reviewer qFrm for believing that our work is **“highly valuable and can provide direction for following research”**. We are grateful to reviewers qFrm, 6BNW and pWSQ for noting our contribution to the SpeechLM community by **presenting new benchmarks (tStoryCloze and sStoryCloze)**.
A specific response for each reviewer appears in the “personal” rebuttal section.
Pdf: /pdf/45d59d468c4ac535e218220d9e8726c72952b2a4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Approximate Heavy Tails in Offline (Multi-Pass) Stochastic Gradient Descent | Accept (spotlight) | Summary: This paper investigates the approximate heavy-tailed behavior of stochastic gradient descent (SGD) in local minima in practical settings and its correlation with overall performance. It shows that the stationary distribution of offline SGD exhibits an approximate power-law distribution, and the approximation error is controlled by how fast the empirical distribution of the training data converges to the true underlying data distribution in the Wasserstein metric. The main contribution is to fill the gap in understanding the underlying mechanism generating the reported heavy-tailed behavior in practical settings, where the amount of training data is finite. The paper also proves nonasymptotic Wasserstein convergence bounds for offline SGD to online SGD as the number of data points increases. The theory is verified with experiments.
Strengths: - **Originality**: The paper investigates the heavy-tailed behavior of SGD in practical settings where the amount of data is finite, which has not been well understood before. It proves a nonasymptotic Wasserstein convergence bound for offline SGD which reduces to online SGD as the number of data points increases. The difference from a true heavy-tailed distribution with a finite number of data is given by the Wasserstein distance.
- **Quality**: The paper provides rigorous theoretical analysis and proof to support its claims. It also conducts experiments to verify its theory.
- **Clarity**: The paper is well-written and easy to follow, with clear explanations of technical terms and concepts. The authors also provide many clear schematic illustrations to help readers understand their concepts and results.
- **Significance**: The theory of this paper is of both theoretical interest and direct practical significance. The results are easy to understand and bounds on the distance from a true heavy-tailed distribution are given explicitly with a clear form.
Weaknesses: - Possible evaluation method for the values of estimated tail indices is not provided. In Figure 5, the estimated tail indices depend on the step size $\eta$, the batch size $b$, or their ratio $\eta/b$, and of course the number of data $n$. It will be strong if an evaluation method for the tail indices, even approximately, can be provided.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In Line 267, the authors state that the tail indices depend on the ratio of the step size $\eta$ to the batch size $b$. However, the dependence on the ratio $\eta/b$ alone has been found to break down in online SGD with a large $\eta$, as discussed in Sec. 6.6 of Ref. [1]. A modification that is linear in the batch size $b$ is found for a simple 1-dimensional example. Could the authors elaborate on this result?
- Still in Sec.6.6 of Ref.[1], the tail index is estimated with an explicit expression in terms of $\eta$ and $b$ for a 1D example. Is it possible to compare the estimated tail indices predicted by your theorem with the explicit expression given in Ref.[1]?
[1] **Strength of Minibatch Noise in SGD**, Liu Ziyin, Kangqiao Liu, Takashi Mori, Masahito Ueda, *ICLR 2022*
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to invest in our paper and for the encouraging feedback. In the following, we try to address your remaining concerns. However, before proceeding, we would also like to thank you for pointing out the interesting and relevant paper. We apologize for not including it previously (the paper escaped our notice) - we will properly cite it in the next version and add it to the related work section.
> Possible evaluation method for the values of estimated tail indices is not provided. In Figure 5, the estimated tail indices depend on the step size $\eta$, the batch size $b$, or their ratio $\eta/b$, and of course the number of data $n$. It will be strong if an evaluation method for the tail indices, even approximately, can be provided.
As the analytical computation of the true tail exponent is not possible, we were unfortunately not able to perform such an evaluation. Instead, we utilized an estimator from [1], previously employed in [2, 3], which has been demonstrated to be asymptotically consistent (as per Corollary 2.4), ensuring that as the number of samples it uses increases, the estimate converges to the true value. Furthermore, an empirical evaluation of this estimator has already been undertaken in [3] (we invite the reader to examine their Figure 3(b), which can also be found in the attached PDF in the response to all reviewers), which shows encouraging results. We will mention this clearly in the next version.
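To illustrate the consistency property numerically, here is a minimal sketch using the classical univariate Hill estimator on synthetic Pareto data. This is an illustrative stand-in only, not the multivariate estimator of [1]:

```python
import numpy as np

# Classical Hill estimator of the tail index, based on the k largest order
# statistics of the sample magnitudes; consistent as the sample grows.
def hill_tail_index(samples, k):
    x = np.sort(np.abs(np.asarray(samples)))[::-1]  # descending magnitudes
    return 1.0 / np.mean(np.log(x[:k] / x[k]))

rng = np.random.default_rng(0)
alpha_true = 1.5
# Pareto(alpha) samples: P(X > t) = t^(-alpha) for t >= 1
samples = rng.uniform(size=100_000) ** (-1.0 / alpha_true)
alpha_hat = hill_tail_index(samples, k=1_000)
print(alpha_hat)  # close to the true tail index 1.5
```

With a known ground-truth index, the estimate lands near the true value, mirroring the consistency guarantee invoked above.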
> In Line 267, the authors state that the tail indices depend on the ratio of the step size $\eta$ to the batch size $b$. However, a mere dependence on the ratio $\eta/b$ has been found broken in online SGD with a large $\eta$ as discussed in Sec.6.6 of Ref.[1]. A modification that is linear in the batch size $b$ is found for a simple 1-dimensional example. Could the authors elaborate on this result?
While prior studies on heavy-tailed phenomena have explored the tail relationship with the ratio $\eta/b$, you are absolutely correct in pointing out that this dependence may not exhibit a monotonic behavior. In our experiments, we prioritized replicating the NN experimental setup outlined in [2] in order to conduct a comparative analysis between offline SGD with varying $n$ and its online counterpart.
However, your remark remains valid: we suspect that the reason why we still observe a monotonic relation between the tail exponent and the $\eta/b$ ratio might be that the range we used for $b$ is too narrow for the discrepancy result you mentioned to kick in. We will mention this as a footnote and cite the corresponding part of the paper you mentioned. Nonetheless, our primary focus is to convey the increasing similarity in behavior between offline and online SGD (as the sample size $n$ increases), and we would prefer to avoid a discussion about the monotonicity of the tail exponent with respect to the $\eta/b$ ratio.
> Still in Sec.6.6 of Ref.[1], the tail index is estimated with an explicit expression in terms of $\eta$ and $b$ for a 1D example. Is it possible to compare the estimated tail indices predicted by your theorem with the explicit expression given in Ref.[1]?
Although the expression discovered in [4] is highly captivating, particularly due to its correction term (which is absent in the analysis of [2]), it is important to emphasize that our theoretical findings are strictly orthogonal to estimating the tail index $\alpha$ or to the behavior of $\alpha$ with respect to $\eta$, $b$, and their ratio. This relation was established in an earlier paper by Gurbuzbalaban et al. [2]. Rather, our analysis focuses on characterizing the dissimilarities in behavior between online and offline SGD. In this respect, our theorems do not predict the tail index directly; they state that the tail behavior of offline SGD becomes increasingly power-law-like. Hence, a comparison between their estimate and the article you mentioned would be rather tangential to our problem setting (though interesting in its own right), and we would like to avoid it so as not to muddle our message.
If you have any remaining questions, we look forward to them and remain at your disposal.
[1] Mohammad Mohammadi, Adel Mohammadpour, and Hiroaki Ogata. On the tail index and the spectral measure of multivariate α-stable distributions. Metrika,381 78(5):549–561, 2015
[2] Mert Gurbuzbalaban, Umut Simsekli, and Lingjiong Zhu. The heavy-tail phenomenon in SGD, ICML 2021
[3] Umut Simsekli, Levent Sagun, and Mert Gurbuzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks, ICML 2019
[4] Strength of Minibatch Noise in SGD, Liu Ziyin, Kangqiao Liu, Takashi Mori, Masahito Ueda, ICLR 2022
---
Rebuttal Comment 1.1:
Comment: Your reply addresses my questions and comments. Thank you very much. I will keep my score as "Accept". I do not raise my score because a possible estimation of the tail index is beyond the scope of this paper. Anyway, I like your idea of using the Wasserstein distance.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for going over our rebuttal. | Summary: This manuscript considers the problem of multi-pass stochastic gradient descent with a finite batch-size and strongly convex objective. The key result is to show that when the stationary distribution of the parameters is heavy-tailed in the infinite data (one-pass) limit, then the stationary distribution of the finite batch limit is approximately heavy-tailed, in the sense of having a heavy-tailed component plus a correction depending on the 1-Wasserstein distance between empirical and population data distribution (hence decaying at worst as $\sim n^{-1/2}$). The authors also present numerical experiments that suggest the relevance of this theoretical result to more practical settings.
Strengths: Understanding SGD in its different flavours is an important problem in theoretical machine learning, and results on multi-pass SGD are scarce compared to the one-pass case. Therefore, the contribution is timely and significant. The technical part is sound. Moreover, the paper is well-written and the thread is relatively easy to follow: the motivation and goals are clearly stated and the main result addresses them. Finally, the numerical simulations supporting a broader scope of applicability are nice.
Weaknesses: There are two immediate shortcomings of the work. First, the setting is rather restricted (strongly convex goals). Second, while the authors cite related literature connecting heavy-tails with generalization, this is not explicitly explored in the context of the paper. Overall, one question that remains in the end of the reading is: the stationary distribution of the weights are approximately heavy-tailed, but so what?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - **[Q1]**: As a reader who is not very familiar with this line of work ([HM21, HSKM22, MM], etc.), I wonder how to reconcile these results with the classical literature [Fabian 1968, Kushner 1981, Pflug 1986] showing asymptotic normality of the stationary distribution under certain conditions. For instance, consider online SGD in the simple least-squares setting. Since the gradient is proportional to the residual $r_{i} = y_{i}-\langle a_{i}, x\rangle$:
$$
x^{k+1} = x^{k} + \gamma r_{k} a_{k}
$$
if the residuals are Gaussian close to the minimum (e.g. $y_{i}=\langle a_{i}, x_{\star}\rangle + z_{i}$ for some $x_{\star}$ and $z_{i}\sim\mathcal{N}(0,\sigma^2)$), won't the stationary distribution be Gaussian? What am I missing?
- **[Q2]**: Theorems 2 and 3 have an explicit dependence on the sample size $n$, but how do they depend on the batch size b?
**Minor comments**
- The following sentences are strong and misleading:
> *Previous works on this problem only considered, up to our knowledge, online (also called single-pass) SGD, which requires an infinite amount of data.*
> *However, all these results rely on exact heavy tails, which, to our current knowledge, can only occur in the online SGD regime where there is access to an infinite sequence of data points.*
While it is true that in one-pass SGD the maximum number of steps is limited by the availability of data, this doesn't necessarily mean that it requires an infinite amount of data. Note that the number of steps required generically depends on the task under consideration, and in some practical scenarios where data is abundant only a few epochs are required, see e.g. Table 2.2 of [[Kaplan et al. 2020]](https://arxiv.org/pdf/2005.14165.pdf). The authors even acknowledge this fact in L29-30.
- Please add the details of the plots to the captions of Figs. 2 & 5: which simulation do the reported tail indices correspond to?
- L128: You mean $[n] = \{1,\cdots, n\}$?
**References**
[[Fabian 1968]](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-39/issue-4/On-Asymptotic-Normality-in-Stochastic-Approximation/10.1214/aoms/1177698258.full) Vaclav Fabian. *On Asymptotic Normality in Stochastic Approximation*. Ann. Math. Statist. 39 (4) 1327 - 1332, August, 1968. https://doi.org/10.1214/aoms/1177698258
[[Kushner 1981]](https://epubs.siam.org/doi/10.1137/0319007) Harold J. Kushner and Hai Huang. *Asymptotic Properties of Stochastic Approximations with Constant Coefficients*. SIAM Journal on Control and Optimization, 19(1), 1981
[[Pflug 1986]](https://epubs.siam.org/doi/10.1137/0324039) Georg Ch. Pflug. *Stochastic minimization with constant step-size: Asymptotic laws*. SIAM Journal on Control and Optimization, 24(4):655–666, 1986
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I have discussed some of the limitations in the "Weaknesses" part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your in-depth review of the paper and the feedback you shared. In the following, we aim to respond to your concerns.
> First, the setting is rather restricted (strongly convex goals)
We agree that considering strongly convex objective functions may seem restricted. However, this setting is commonly singled out as a tractable proxy for studying the local behaviour of SGD, even around minima of non-convex objectives. As a result, analyses of stochastic optimization algorithms in strongly convex settings still attract significant attention, especially in under-examined contexts such as ours.
That being said, we believe that it is possible to extend our results to non-convex settings, where loss functions satisfy the so-called “dissipativity” property. This condition would essentially let us obtain contractions in the Wasserstein distance and would include several practical settings such as NNs [4] and functions strongly convex outside a bounded region (see [3, Sec. 4] for more examples). However, we leave analyses of such objectives as future work as it would make the paper much more technical and jeopardize the clarity of our main message. We will mention this future direction explicitly in the next version of the paper.
> The question remains: the stationary distribution of the weights are approximately heavy-tailed, but so what?
This is a fair question. As you mentioned, the connection between heavy tails and generalization has been established in certain “sterile” settings; however, an explicit connection for heavy tails in the sense of Thms. 1 and 2 is yet to be developed. Nevertheless, there has been empirical evidence that the tail exponents given in Thms. 1 and 2 may have direct links to generalization (see [5, Fig. 2]). Hence, we believe that our results will play a key role in understanding the generalization error of **offline** SGD, once the link between the tail exponent of **online** SGD and generalization is made explicit – which seems to be an easier task, as several quantities can be computed more easily in the online setting.
> [Q1]: … I wonder how to conciliate the results to the classical literature showing asymptotic normality of the stationary distribution...
You are right in your comment: **if the step-size is small enough**, the tail index $\alpha$ will be larger than 2, hence admitting a finite variance. In such cases, one can show that the (properly) averaged iterates will converge to a Gaussian law, which is the main message of the papers you mentioned. However, there is a caveat:
1) If the step-size is large enough, $\alpha$ can be strictly smaller than 2, and the averaged iterates will converge to an $\alpha$-stable law, where the studies you mentioned cannot be applied. This has been proven in [1, Corollary 11].
2) Even when $\alpha>2$, the iterates will have a power-law distribution, but their behavior can differ vastly from that of a Gaussian.
Given the surge of interest in large step-sizes and the edge of stability, we believe that point 1) has significance and illuminates under-explored regimes that arise in modern practical ML. We will mention this point more clearly in the next version.
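For intuition, point 1) can be illustrated with a small self-contained simulation (an illustrative sketch of the Kesten-type mechanism behind these results, not an experiment from our paper). In the 1D Gaussian least-squares recursion $x_{k+1} = (1 - \eta a_k^2)\,x_k + \eta a_k z_k$, a large step-size yields stationary iterates with far more extreme excursions than a small one, consistent with heavier tails:

```python
import numpy as np

def sgd_iterates(eta, n_steps, seed=0, burn_in=2_000):
    """Iterate x_{k+1} = (1 - eta * a_k^2) x_k + eta * a_k * z_k with
    a_k, z_k ~ N(0, 1); return the post-burn-in iterates."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(burn_in + n_steps)
    z = rng.standard_normal(burn_in + n_steps)
    x, out = 0.0, np.empty(n_steps)
    for k in range(burn_in + n_steps):
        x = (1.0 - eta * a[k] * a[k]) * x + eta * a[k] * z[k]
        if k >= burn_in:
            out[k - burn_in] = x
    return out

def extreme_fraction(xs):
    # fraction of iterates whose magnitude exceeds 10x the median magnitude;
    # a crude proxy for tail heaviness
    return float(np.mean(np.abs(xs) > 10 * np.median(np.abs(xs))))

small = extreme_fraction(sgd_iterates(eta=0.05, n_steps=200_000))
large = extreme_fraction(sgd_iterates(eta=1.2, n_steps=200_000))
print(small, large)  # the large step-size chain has many more extreme iterates
```

The chosen step-sizes (0.05 and 1.2) are illustrative: the small one keeps the stationary variance finite, while the large one pushes the chain into the infinite-variance (heavy-tailed) regime while still converging in distribution.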
> [Q2]: Theorems 2 and 3 have an explicit dependence on the sample size n, but how do they depend on the batch size b?
Firstly, due to Thm. 1 the tail index strictly increases in the batch size $b$ (see Thm. 4 of [1]). Furthermore, Thm. 2 establishes that reducing $b$ leads to heavier tails (see Sec. 4 of [2]).
Concerning our presented results, the error terms in the linear regression bound (Thm. 3) do not depend on $b$. This arises from the quadratic loss (i.e., linear derivative), the linearity of expectation, and Minkowski's inequality (see Appendix C2 (L496) in the supplement).
For Thm. 5, in order to maintain clarity and facilitate a better understanding of the proofs, we have set $b=1$ (as emphasized in L218). We believe increasing $b$ would introduce further notational complexity, potentially compromising the overall presentation.
> While in one-pass SGD the maximum number of steps is limited by data availability, this doesn't necessarily mean it requires an infinite amount of data.
We agree with you: in terms of convergence analysis, an infinite amount of data might not be needed for SGD to find a local minimum. However, to provide a rigorous theoretical analysis of SGD’s tail exponent, existing theory, unfortunately, requires an infinite amount of data.
Nonetheless, your concern is valid: the original sentence might have appeared too assertive. Therefore, we will reword it to highlight that the emergence of heavy tails in theoretical findings is contingent upon access to an infinite amount of data.
> Please add the details of the plots in the caption in Fig. 2 & 5: the reported tail indices correspond to what simulation?
Figures 2 and 5 correspond to the experiments described on L268. However, it appears certain plots lacked clarity; we have therefore given the relevant paragraph a title and referenced it in the corresponding figures, and added further details to the plots. Finally, we provide new sets of plots for Figures 2 and 3. The new figures and findings can be found in the PDF attached to our general response.
We hope these efforts will enhance the lucidity of our results.
> L128: You mean [n]=1,⋯,n ?
You are right. To make it more visible, we will update it as $[n] = \{1, \dots, n\}$.
[1] Gurbuzbalaban, et al. The heavy-tail phenomenon in SGD, ICML 2021
[2] Hodgkinson and Mahoney. Multiplicative noise and heavy tails in stochastic optimization, ICML 2021
[3] Erdogdu et al. "Convergence of Langevin Monte Carlo in chi-squared and Rényi divergence." AISTATS, 2022
[4] Akiyama, and Suzuki. "Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods.'' arXiv:2205.14818
[5] Raj, Anant, et al. "Algorithmic stability of heavy-tailed stochastic gradient descent on least squares." ALT, 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answer and for welcoming my suggestions.
> *If the step-size is large enough, $\alpha$ can be strictly smaller than 2, and the averaged iterates will converge to an $\alpha$-stable law, where the studies you mentioned cannot be applied. This has been proven in [1, Corollary 11].*
How large is large enough in this simple least-squares example? To my knowledge, for $\gamma\geq 2$ the mean-squared error $||x_{\star} - x_{k}||^{2}_{2} \to\infty$ as $k\to\infty$, so I am confused about how this can correlate with generalization in this case.
---
Reply to Comment 1.1.1:
Comment: Thank you for going over our rebuttal.
This is indeed a good question; however, we shall mention that the question rather concerns prior work and it is quite orthogonal to our contributions.
Nevertheless, let us try to provide an answer, which essentially relies on multiple "surprising" facts. We will try to summarize the picture as follows.
1. For the case of linear regression with Gaussian data, the step-size required for the iterates to converge to a distribution with infinite variance is given in arxiv:2006.04740 Proposition 5. For general loss functions, we are not aware such an explicit result.
2. You are right, when the step-size is large the iterates may diverge in L2 (as you mentioned); however, they can still converge in Lp with p < alpha < 2 (not to a point though, to a random vector instead), see arxiv:2006.04740 Theorem 8 and Theorem 6 (or you can see arxiv:2102.10346 Theorem 3 for a similar outcome in a different setting).
3. As we mentioned in our earlier response, the connection between heavy tails and generalization has been established in certain “sterile” settings. As an example for such a connection, let us consider arxiv:2206.01274 Theorem 4, where the authors showed that even when the original loss value diverges (e.g. your L2 example), the algorithm can generalize well under a "surrogate loss" (for instance the original loss can be L2 but the surrogate loss can be Lp -- this is what they considered in their result). There are other papers, which use bounded surrogate losses -- for instance using 0-1 loss (accuracy) as a surrogate loss whereas the algorithm tries to minimize another loss function (cross entropy etc).
We hope that this answers your question. We remain at your disposal if there is any further questions. | Summary: In this paper, the authors study the heavy-tail distribution for the parameters in offline stochastic gradient descent algorithm (SGD). Theoretical results are provided for a quadratic loss and a strongly convex problem, while numerical results cover more realistic cases such as fully connected NN or CNN. In the theoretical part of the paper, the authors link the tail of the distribution of the offline SGD to the online one. To be more specific, the authors show that the qualitative difference between the online and the offline tail is bounded by the Wasserstein distance between the generating and empirical distribution of the data. The author further bound the Wasserstein distance between the distribution of the online and the offline parameters by the same distance. The theoretical result is qualitatively supported by the numerical experiment.
Strengths: * Originality and significance: the authors extend the analysis of heavy tail in SGD from the online settings to offline settings. Personally, I find it really interesting of having a quantitative non-asymptotic bound of the difference between the two cases. It could make the existing and future online SGD analysis more relevant.
* Quality: In my personal opinion, the authors perform convincing theoretical and numerical analysis.
* Clarity: The paper is well written. The authors present their results in a self-contained way, with multiple examples and intuitions helping illustrating the idea behind their main results. And I appreciate that the code is attached to the paper.
Weaknesses: It appears to me that Fig. 1 and Fig. 3 are not as convincing as the rest of the paper. The axes are unlabeled in these figures, and the fact that the distribution has a heavy tail is not very obvious. For example, to my eye, the n=250 histogram looks more power-law-like compared to the n=1000 histogram.
The weaknesses and questions are well addressed by the authors in their rebuttal. The authors provided detailed explanations and additional figures.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I believe that it would be nice if the authors could include results, expressed in numbers, which characterizes the difference between the experimental tail and a real power-law tail.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the encouraging feedback and valuable comments you provided. Moving forward, we will address your primary concern:
> It appears to me that the fig.1 and fig.3 are not as convincing as the rest of the paper. The axes are unlabeled in these figures.
We have taken your feedback into account and therefore carefully labeled all the plots, aiming to improve the clarity and interpretability of the paper.
> The fact that the distribution has a heavy tail is not very obvious in these figures. For example, to my eye, the n=250 histogram looks more power-law-like than the n=1000 histogram.
In reference to the histograms in Figure 1, our assertion is that, as the number of data samples $n$ in offline SGD increases, the distributions exhibit the characteristics of heavy-tailed distributions more prominently. Specifically, we focus on a characteristic property of heavy-tailed distributions: as the tails get heavier, it becomes more likely to observe samples that lie far from the bulk of the distribution (these can be viewed as outliers, in a sense).
Accordingly, in Figure 1, we assert that the number of samples located far from the bulk of the distribution increases as the number of available samples $n$ increases.
However, your argument regarding the clarity of the results remains valid. Therefore, we have re-run the experiments and introduced a new set of plots. In Figure 1, we highlighted the mean and the mean + 2 std of each distribution, and marked the samples which exceed the mean + 2 std threshold. With this, our aim is twofold: firstly, it can be observed that the mean and the std of the distributions increase as $n$ increases. Secondly, it can be observed that both the number of samples exceeding the mean + 2 std threshold and their distance from the threshold increase. The added figures can be found in the attached PDF file in our response to all reviewers (Fig. 2(a)-(b)).
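The threshold-and-exceedance summary described above can be sketched in a few lines. This is a minimal hypothetical helper (not the paper's actual analysis code) that returns the samples lying beyond the mean + 2 std threshold:

```python
import numpy as np

def tail_exceedances(samples):
    """Return the samples lying beyond the mean + 2*std threshold,
    together with the threshold itself (hypothetical helper mirroring
    the markers added to the revised Figure 1)."""
    samples = np.asarray(samples, dtype=float)
    threshold = samples.mean() + 2.0 * samples.std()
    return samples[samples > threshold], threshold

# Toy check: one sample sits far from the bulk of the distribution.
data = np.array([1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 10.0])
outliers, thr = tail_exceedances(data)
# outliers -> [10.0]; heavier tails produce more (and larger) exceedances.
```

The count and magnitude of such exceedances are exactly the two quantities we track as $n$ grows.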
Regarding Figure 3, we have conducted another set of experiments to provide a clearer analysis of the distributions’ tails. The corresponding plots can also be found in our response to all reviewers (attached PDF, Fig. 1). Specifically, we have run a 1-dimensional linear regression experiment for both offline and online SGD and plotted the corresponding QQ plots of the (estimated) stabilizing distributions. From this, one can observe that:
* Online SGD with a sufficiently large learning rate exhibits heavy, non-Gaussian tails.
* Offline SGD exhibits increasingly heavier tails as the sample size $n$ increases.
We hope that these new experiments offer further evidence supporting the notion that the distribution's tail becomes heavier as the number of available samples increases.
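The QQ-plot diagnostic used in these experiments can be illustrated numerically: heavy-tailed data overshoots the standard-normal quantiles in the extremes, while Gaussian data tracks them closely. The sketch below is an illustrative assumption (synthetic Student-t samples, not our actual experiment):

```python
import numpy as np
from statistics import NormalDist

def upper_quantiles_vs_gaussian(samples, probs=(0.9, 0.99, 0.999)):
    """Compare upper empirical quantiles of standardized samples against
    the standard normal. In a QQ plot, heavy-tailed data bends away from
    the diagonal in the extremes, i.e., empirical quantiles overshoot
    the Gaussian ones."""
    z = (np.asarray(samples) - np.mean(samples)) / np.std(samples)
    emp = np.quantile(z, probs)
    theo = np.array([NormalDist().inv_cdf(p) for p in probs])
    return emp, theo

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=3, size=200_000)  # Student-t(3): heavy, non-Gaussian tails
gauss = rng.standard_normal(200_000)
emp_heavy, theo = upper_quantiles_vs_gaussian(heavy)
emp_gauss, _ = upper_quantiles_vs_gaussian(gauss)
# emp_heavy[-1] clearly exceeds theo[-1]; emp_gauss stays close to theo.
```

The same comparison, applied to the (estimated) stabilizing distributions of SGD iterates, is what the QQ plots in the attached PDF visualize.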
>I believe that it would be nice if the authors could include results, expressed in numbers, which characterizes the difference between the experimental tail and a real power-law tail.
Analytical computation of the true tail exponent, even in the linear regression setting, is unfortunately not possible to our knowledge. As we cannot compute the tail exponent analytically, we utilize the estimator from [1], which has been shown to be asymptotically consistent (Corollary 2.4 in their paper): as the number of samples used by the estimator grows, the estimated $\alpha$ value converges to its true value.
As the same estimator has been already used in various articles, some of its qualities have already been demonstrated. We invite you to see [2], Figure 3(b), where the authors empirically demonstrate the accuracy of the estimator on a synthetic task, which we believe is encouraging. In our response to all the reviewers, we have included the aforementioned figure (see Fig. 2(c)), as well as further justification of the estimator choice.
We hope the new experiments and plots have effectively addressed your concerns, and we remain at your disposal should there be further questions.
[1] Mohammad Mohammadi, Adel Mohammadpour, and Hiroaki Ogata. On the tail index and the spectral measure of multivariate α-stable distributions. Metrika, 78(5):549–561, 2015.
[2] Simsekli, Umut, Levent Sagun, and Mert Gurbuzbalaban. "A tail-index analysis of stochastic gradient noise in deep neural networks." International Conference on Machine Learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanation and additional figures. I have raised my rating to "Accept".
---
Reply to Comment 1.1.1:
Comment: Thank you very much. We are happy to see that our rebuttal has addressed your concerns. | Summary: This paper considers offline SGD and proves an approximate power-law tail behavior of the stochastic gradient for strongly convex objectives, confirming the heavy-tail heuristic encountered in practice. Explicit tail estimates are obtained and, as an intermediate result, nonasymptotic Wasserstein convergence results for offline SGD are proved.
Strengths: 1. The paper provides a good analysis of the behavior of offline SGD, and contributes to our understanding of heavy-tail phenomena in SGD training.
2. I have gone over the proofs and believe they are basically correct.
3. The paper is well-written and comfortable to read.
Weaknesses: I do not see apparent weaknesses in this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have no additional questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your encouraging feedback and for taking the time to understand the paper clearly and go through the mathematical proofs. If any questions arise regarding our work, we remain at your disposal. | Rebuttal 1:
Rebuttal: We want to extend our gratitude to the reviewers for their insightful feedback. In the following, we outline the modifications implemented in response to their input.
**In response to the clarity of the presented figures**:
* Every plot has been carefully labeled and the experimental paragraphs have been given clear headings, with the intention of improving the manuscript’s clarity and readability
* We have included new experiments and figures (see Fig. 1 and Fig. 2(a)-(b) in the attached PDF file), emphasizing even further how the tail heaviness of offline SGD iterates increases as the number of samples $n$ increases
More specifically, we have added the following two figures.
Fig. 1 depicts a 1-dimensional linear regression task, illustrating how the tail behavior differs between online and offline SGD (with varying $n$). To be precise, the depicted QQ plots of the corresponding (estimated) stabilizing distributions convey the following:
* Online SGD with a sufficiently large learning rate exhibits heavy, non-Gaussian tails.
* Offline SGD exhibits increasingly heavier tails as the sample size $n$ increases
Fig. 2(a)-(b) correspond to offline SGD applied to a 100-dimensional linear regression task and to an NN classification task utilizing a 3-layer fully-connected NN on the MNIST dataset. In both experiments, the following consistent patterns emerge:
* The means and standard deviations of the parameters’ estimated stabilizing distributions in offline SGD increase with $n$
* The quantity and magnitude of the parameters far from the bulk of the distribution increase with $n$
**In response to the validity of the chosen tail estimator**:
* We will add a paragraph in the Appendix that highlights the main strengths of the estimator and justifies its selection. The main arguments are as follows:
* The estimator's theoretical framework is established through its convergence in distribution to the true tail index (via a Central Limit Theorem result, detailed in Theorem 2.3 [1]), which is further complemented by its demonstrated asymptotic consistency (as per Corollary 2.4 [1]).
* The estimator has already been used in various articles and its qualities have been thoroughly examined. This can be observed in Figure 2.(c) in the attached PDF (taken from [2]), which, in our opinion, contains convincing estimation results (e.g., small error bars regardless of the magnitude of the true tail index).
We hope that the aforementioned points convince the reviewers as well as future readers of the estimator choice, a decision that has been endorsed both by our research and preceding academic studies [2,3].
**In response to the remaining points, we**:
* Carefully emphasized that the claims of our paper are predominantly related to offline SGD, in contrast to previous works that consider online SGD
* Included new references and further insights into related work
* Fixed minor notation issues
* Further elaborated on certain aspects, such as:
* Extending our results to the non-convex settings with “dissipative” loss functions
* The significance of our contribution towards linking heavy tails and generalization
* Discussed potential connections between the true step size and parameters such as $\eta$, $b$, and $n$
[1] Mohammad Mohammadi, Adel Mohammadpour, and Hiroaki Ogata. On the tail index and the spectral measure of multivariate α-stable distributions. Metrika, 78(5):549–561, 2015.
[2] Umut Simsekli, Levent Sagun, and Mert Gurbuzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks, ICML 2019
[3] Mert Gurbuzbalaban, Umut Simsekli, and Lingjiong Zhu. The heavy-tail phenomenon in SGD, ICML 2021
Pdf: /pdf/b3195d1ad42ce38472c055f5b3128396aa7f0a70.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Inner-Outer Aware Reconstruction Model for Monocular 3D Scene Reconstruction | Accept (poster) | Summary: The authors propose a method for 3D reconstruction given images and the respective camera poses. Existing methods predict a TSDF volume and convert it to a 3D mesh using marching cubes. In contrast, the authors propose a coarse-to-fine strategy which uses a classifier to classify each voxel as a surface, inner-surface, or outer-surface voxel. In doing so, they achieve better 3D reconstruction performance.
Strengths: - The main strength of the paper is the proposed coarse-to-fine strategy which aids in the reduction of the memory costs of reconstruction and better performance on the reconstructed 3D meshes.
- The method has been evaluated on reconstruction datasets such as ScanNet, ICL-NUIM, and TUM-RGBD
- Code has been provided for reference. (I did not run it, but I referred to it while reading the write-up)
- The related work has referenced all the relevant works in the area.
Weaknesses: - The impact of classifying the difference between inner-surface voxels and outer-surface voxels is not entirely clear, especially since the claim is that the classification of voxels into different classes brings major gains. Section 4.3 provides a write-up on this, but it is not entirely clear from the write-up that the cosine similarity between the averaged voxel features demonstrates meaningful differences.
Please address the typos:
1. Line 86 predicts*
2. Line 89 merges*
3. Line 177 get an*
4. Line 183 medium/fine*
5. Line 189 extracting*
6. Line 193 rest of the procedure*
7. Line 286 Since there are*
8. Line 304 What is more -> In addition,
There are further typos in the supplementary section which I have not listed here.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Perhaps section 4.3 could be written better?
- Could you provide details on the inference time, etc.?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The method is subject to limitations by occlusion and partial scans along with transparent surfaces as listed in Section B (supplementary)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The impact of classifying the difference between inner surface voxels and outer surface voxels is not entirely clear.**
**Re:** We showed the impact of classifying the difference between inner surface voxels and outer surface voxels with quantitative experimental results in the ablation study (Section 4.2). The results in Table 4 show that IOAR (line 4 of Table 4) outperforms the IOAR variant that groups the inner surface voxels and outer surface voxels into the same class (line 2 of Table 4). Specifically, the precision improves from 0.712 to 0.748, and the F-score improves from 0.654 to 0.663 when classifying the difference between inner surface voxels and outer surface voxels. This experimental result demonstrates that classifying the difference between inner surface voxels and outer surface voxels leads to a clear performance improvement.
**Q2: The authors write section 4.3 to show the difference between inner surface voxels and outer surface voxels, but it is not entirely clear. Perhaps it could be written better?**
**Re:** Thank you for your comment! We summarize the content of Section 4.3 in the following. If any part remains confusing, please ask us for more details. In the ablation study (Section 4.2), we experimentally demonstrated the effectiveness of the new coarse-to-fine strategy, which classifies inner-surface voxels, outer-surface voxels, and surface voxels into different classes. In the introduction section, we claim this strategy works because it solves the problem that the classifier has to waste its capability to bridge the gap between inner-surface voxels and outer-surface voxels. Grouping inner-surface and outer-surface voxels into the same class will force the model to waste its capability only when there is a gap between these two types of voxels. Therefore, the first part of Section 4.3 aims to show this gap. We provide qualitative and quantitative results to show the difference between inner-surface voxel features and outer-surface voxel features. First, we visualize the feature space of surface, inner-surface, and outer-surface voxels using t-SNE. As shown in Figure 3, inner-surface voxels (red points) are far from outer-surface voxels (blue points) in the feature space, thus demonstrating that they are quite different. Second, we measure the similarity between inner-surface and outer-surface voxels, the similarity between inner-surface and surface voxels, and the similarity between outer-surface and surface voxels. The result shows that the similarity between inner-surface and outer-surface voxels is the lowest of the three, which also demonstrates that inner-surface voxels are different from outer-surface voxels. In addition, we refer the reviewer to Questions 4 and 6 of reviewer xj9n. In response to Question 4 of reviewer xj9n, we conduct experiments to show the accuracy in discriminating voxels. The result shows that only 2.67% and 4.46% of inner and outer voxels are misclassified as outer and inner voxels, respectively, at the coarse level.
This also supports our assumption that there is an intrinsic gap between inner-surface voxels and outer-surface voxels since the model can easily distinguish them in most cases. In response to Question 6 of reviewer xj9n, we conduct another experiment to evaluate the similarity between different types of voxels. The result also supports that inner-surface voxels are different from outer-surface voxels.
**Q3: Could you provide details on the inference time?**
**Re:** IOAR costs 16 minutes and 18 seconds to reconstruct meshes for 100 scenes in the test set of the ScanNet dataset; on average, it costs 9.78 seconds per scene. The time for data loading, preprocessing, and postprocessing is included. If we only consider the inference time of the model, IOAR costs 14 minutes and 58 seconds to predict the TSDF volumes for the 100 scenes, i.e., 8.98 seconds per scene on average. We also provide details of the inference phase. In the inference phase, the input is a video in which all frames have corresponding camera intrinsics and extrinsics. Based on the camera extrinsics, camera intrinsics, and image resolution, the boundary of the space captured by these images can be estimated. We split the entire space volume into sub-volumes to save memory. Since it is computationally expensive to consider all input frames, we select $N=60$ frames from the video for each sub-volume. We follow the selection method in [9] to reduce redundancy. Specifically, this selection method ensures that, between any two selected frames, the camera has moved a sufficient distance and rotated by a sufficient angle. These selected frames are input into the CNN backbone to extract image features at different scales. With the help of the camera intrinsics and extrinsics, the image features are back-projected to form 3D feature volumes. A transformer module fuses the feature volumes from the $N$ frames into a single feature volume. The fused feature volumes of the sub-volumes are combined to form the fused feature volume of the global volume. The fused global feature volume is input into the TSDF branch and the inner-outer aware occupancy branch to extract the TSDF feature volume and the occupancy feature volume. The TSDF and occupancy feature volumes at the coarse and medium levels are passed to the next level to provide information from the previous scale.
At the coarse and medium levels, our model predicts the occupancy based on the occupancy feature volume and then filters out inner-surface and outer-surface voxels based on the predicted occupancy. Finally, our model predicts the TSDF volume of the scene based on the TSDF feature volume at the fine level and then generates the 3D meshes using the marching cubes algorithm [13].
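The frame-selection criterion mentioned above (selected frames must be far enough apart in position and viewing angle) can be sketched as a greedy filter. This is a hypothetical sketch in the spirit of the method in [9]; the function name and threshold values are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def select_keyframes(positions, view_dirs, min_dist=0.1, min_angle_deg=15.0):
    """Greedy keyframe selection: keep a new frame only if it has moved far
    enough, or rotated enough, relative to the last kept frame.
    (Illustrative sketch; thresholds are assumptions.)"""
    kept = [0]
    for i in range(1, len(positions)):
        j = kept[-1]
        dist = float(np.linalg.norm(positions[i] - positions[j]))
        cos_ang = float(np.clip(np.dot(view_dirs[i], view_dirs[j]), -1.0, 1.0))
        angle = float(np.degrees(np.arccos(cos_ang)))
        if dist > min_dist or angle > min_angle_deg:
            kept.append(i)
    return kept

# Camera moving along x with a fixed viewing direction: the middle frame
# is too close to the first one and gets dropped.
positions = np.array([[0.0, 0, 0], [0.05, 0, 0], [0.2, 0, 0]])
view_dirs = np.array([[0.0, 0, 1], [0.0, 0, 1], [0.0, 0, 1]])
# select_keyframes(positions, view_dirs) -> [0, 2]
```

Such a filter keeps the $N$ retained views informative rather than redundant.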
**Q4: There are some typos in the paper.**
**Re:** We will fix these typos in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: Thank you for addressing the concerns. I agree that rewriting some of the write-up to clear up the misunderstandings (as also pointed out by xj9n) would be helpful. I updated my review based on the rebuttal.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for taking the time to review our paper! We are pleased that our response has helped address your concerns. | Summary: The paper presents a multi-resolution method for volumetric 3D reconstruction from an input video of a static scene captured by a moving camera. Camera poses must be externally provided. The key observation is that non-surface voxels inside and outside the surfaces have different properties which have been ignored by previous methods for estimating occupancy or the signed distance function. Based on this observation, the paper proposes a three-label classification of space: inner, surface and outer voxels. The claim is that learning decision boundaries between these three types is an easier problem than learning to discriminate between surface and non-surface voxels. This claim is supported by experimental results on widely used data, in comparison with SOTA methods, and by analysis of the classifier.
** Post discussion update**
As I wrote during the discussion, I have been convinced by the other reviews and the authors' responses to upgrade my rating.
Strengths: S1. The largest strength of the paper is that the argument that inner and outer voxels with respect to the surface are different makes sense, is novel, and leads to superior quantitative results. My impression is that this distinction is stronger near the surfaces, where it is most useful. I would argue that inner and outer voxels far from the surfaces are just photo-inconsistent, as we used to say, but the critical decisions have to be made near the surface.
S2. Experiments are conducted on large, popular datasets, with carefully analyzed protocols, and the selection of baseline methods is satisfactory. Experimental results demonstrate that inner-outer distinction is powerful, as is the multi-resolution scheme. IOAR, the proposed method, outperforms the baselines by a wide margin on all metrics and all datasets.
S3. Good generalization performance is also shown. This is an important property to enable future deployment of learning-based systems.
Weaknesses: W1. There is a claim that occupancy and TSDF branches interfere with each other. In many other papers, it has been shown that multi-task learning is beneficial when the network learns related tasks. Here, there is a common 3D backbone that generates input for the occupancy and TSDF heads, so it is possible that synergies do exist and the point I am raising is due to lack of clarity. The corresponding ablation study, summarized in Table 4, does not shed enough light on how separate the TSDF and occupancy branches should be. Does the absence of branches mean that occupancy is predicted from the TSDF or vice versa? In any case, the differences are small.
W2. Equation (9), the loss, requires further explanation. Why are occupancy losses applied to the coarse and medium resolution and the TSDF loss to the fine resolution? I assume that this combination worked best in practice, but I would like to see the intuition behind it. Hopefully, the reasons/observations that led to this choice could be useful to other researchers.
W3. Despite criticism of volumetric methods in the abstract, the proposed approach is fully volumetric. Standard techniques to reduce the memory footprint, such as octrees and hashing, are applicable similarly to most other volumetric methods. The criticism in line 4 is unwarranted. (It is also not the best way to introduce the contribution of IOAR.)
W4. Some additional ablation studies would be informative. If the authors already have the relevant data even on parts of the datasets, I would like to see (i) the effects of the multi-resolution scheme compared to a single-resolution implementation, and (ii) the accuracy in discriminating inner vs outer voxels.
W5 (minor). Lines 195-196 contain: “At the fine level, all voxels are treated as potential surface voxels. Therefore, we do not set a fine-level occupancy branch.” This is unclear to me. Conceptually, there should be more empty voxels at a finer resolution since surfaces have zero thickness.
W6 (minor). Lines 286-291 describe an unsatisfactory procedure for measuring the similarity across the three voxel types. Assuming that the distributions are multi-modal, which is likely, representing each category by the average may be a poor approximation. Randomly sampling thousands of representatives from each category would have been better.
Minor Comments
The survey of related work is thorough. I have two suggestions, but their degree of overlap with the proposed method is similar to other methods.
Liu C, Gu J, Kim K, Narasimhan SG, Kautz J. Neural RGB->D sensing: Depth and uncertainty from a video camera. CVPR 2019.
Xie J, Lei C, Li Z, Li LE, Chen Q. Video depth estimation by fusing flow-to-depth proposals. IROS 2020.
34, 117: “marching cubes”
41: “their observations”
42: “there is” and “an intrinsic” – “intrinsical” is not a word and it appears several times in the paper.
58, 198, among other instances: “inter-outer” should be corrected.
86: “a 3D U-Net predicts”
98-105: I disagree that Occupancy Networks (please correct spelling) and DeepSDF focus at novel view synthesis rather than geometric accuracy.
Eqs. (1), (3), (4), (5), (7): large spaces between the two parts of these equations would help.
176: “state” would be better than “situation” here.
179: “operations… are”
193: “the rest procedure” should be corrected.
Many occurrences: there should be a space between a word and a reference in square brackets.
Table 3: the reference to VoRTX is wrong.
Supp. 42: “Broader”
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The distinction between inner and outer voxels is worthy of acceptance, but there are open questions. My preliminary ranking is low, but can be improved if there are satisfactory answers to the first three weaknesses pointed out above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is a paragraph in the supplement describing three challenging cases for the proposed algorithm. A sentence summarizing difficulties due to occlusion and transparency could have been included in the main paper. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The difference between using separate or shared TSDF and occupancy branches is not clear.**
**Re:** To clarify the difference, we explain how existing methods and IOAR predict occupancy and TSDF. Conventionally, a shared 3D CNN is used to refine the 3D feature volume. Then the TSDF and occupancy heads (i.e., two MLPs) predict the TSDF and occupancy for each voxel based on the same refined 3D feature volume. We notice that it is difficult for a 3D CNN to learn features versatile enough to provide information for both TSDF and occupancy prediction. Therefore, different from existing methods, IOAR contains two separate 3D CNNs for TSDF and occupancy prediction. We experimentally demonstrate the effectiveness of this design in the ablation study (Section 4.2). The full IOAR model with separate 3D CNNs for TSDF and occupancy heads (line 4 of Table 4) outperforms the IOAR variant using a shared 3D CNN for TSDF and occupancy heads (line 3 of Table 4).
**Q2: Why are occupancy losses applied to the coarse and medium resolutions and the TSDF loss to the fine resolution?**
**Re:** The occupancy prediction aims to reduce the memory cost by reducing the number of voxels in the next stage. Since we filter out non-surface voxels at the end of the coarse and medium levels, we only supervise the occupancy prediction at these two resolutions. Similarly, since we aim to reconstruct the surface at the fine resolution by predicting the TSDF, we only supervise the TSDF prediction at the fine resolution. We conducted experiments to evaluate different variants. Here, "IOAR" is the original IOAR model; "IOAR all Occ" is a variant with occupancy losses at all resolutions; "IOAR all TSDF" is a variant with TSDF losses at all resolutions; "IOAR all" is a variant with both occupancy and TSDF losses at all resolutions. All variants achieve comparable performance, except that the variant with all TSDF losses drops slightly on Recall.
|Model|Acc|Comp|Prec|Recall|F-score|
|-|-|-|-|-|-|
|IOAR|0.043|0.090|0.748|0.597|0.663|
|IOAR all Occ|0.044|0.093|0.750|0.592|0.660|
|IOAR all TSDF|0.042|0.100|0.751|0.579|0.652|
|IOAR all|0.044|0.089|0.743|0.593|0.658|
**Q3: The criticism in line 4 is unwarranted.**
**Re:** Line 4 does not aim to criticize volumetric-based methods for their cubically increasing memory cost. As one of the volumetric-based methods, the memory cost of IOAR also increases cubically. In fact, we aim to explain that a coarse-to-fine strategy is necessary for volumetric-based methods precisely because their memory cost increases cubically. This is a misunderstanding caused by our writing. We believe rewriting line 4 as "The memory cost of volumetric-based methods will grow cubically as the volume size increases, so a coarse-to-fine strategy is necessary for saving memory." can clear up the misunderstanding.
**Q4: I would like to see (i) a single-resolution implementation, and (ii) the accuracy of discriminating voxels.**
**Re:** For (i), a single-resolution implementation leads to out-of-memory errors, since the coarse-to-fine framework is necessary for volumetric-based methods [8,9,10,11,12] to reduce memory costs. As an alternative, we design an IOAR variant with only two levels. Without the medium resolution, the performance drops considerably.
|Model|Acc|Comp|Prec|Recall|F-score|
|-|-|-|-|-|-|
|two level|0.058|0.096|0.670|0.553|0.604|
|three level|0.043|0.090|0.748|0.597|0.663|
For (ii), we conduct experiments to evaluate the accuracy of discriminating voxels. At the coarse level, only 2.67% and 4.46% of inner and outer voxels are misclassified as outer and inner voxels, respectively. This supports our assumption that outer voxels are quite different from inner voxels, since the model can easily distinguish them in most cases. At the medium level, we observe that many inner and outer voxels are predicted as surface voxels. After filtering at the coarse level, most remaining voxels are close to the surface, so their features are similar to those of surface voxels. As a result, our model misclassifies them as surface voxels.
|Coarse|Inner|Outer|Surface|
|-|-|-|-|
|Predict as Inner|71.83%|4.46%|4.17%|
|Predict as Outer|2.67%|60.17%|0.77%|
|Predict as Surface|25.50%|35.37%|95.06%|
|Medium|Inner|Outer|Surface|
|-|-|-|-|
|Predict as Inner|47.30%|4.73%|6.14%|
|Predict as Outer|2.48%|48.40%|4.82%|
|Predict as Surface|50.22%|46.87%|89.04%|
**Q5: Why do the authors not set a fine-level occupancy branch to filter out more non-surface voxels?**
**Re:** First, the occupancy prediction at the coarse and medium levels aims to reduce the memory cost at the next level by reducing the number of voxels. Since there is no finer level, we do not set a fine-level occupancy branch. Second, as demonstrated in Q4, the voxels at the fine level are very close to the surface, so adding another occupancy branch could hardly filter out more non-surface voxels. Finally, the regions that do not contain a surface can be filtered out based on the predicted TSDF: intuitively, if the TSDF values of all eight voxels of a cell have the same sign, the cubic space with these voxels as its vertices does not contain a surface. We refer the reviewer to the marching cubes algorithm [13] for more details.
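The sign-change intuition can be stated as a tiny check. This is a minimal hypothetical sketch (not the paper's code): a cube cell can only contain part of the TSDF zero level set, i.e., a surface patch extracted by marching cubes, if its eight corner values do not all share the same sign.

```python
import numpy as np

def cell_may_contain_surface(tsdf_corners):
    """True iff the eight corner TSDF values of a cube cell change sign,
    i.e., the zero level set (the surface) may cross the cell.
    (Minimal sketch of the intuition, not the paper's implementation.)"""
    c = np.asarray(tsdf_corners, dtype=float)
    return bool(c.min() < 0.0 < c.max())

# All-positive corners: the cell lies entirely outside the surface.
outside = [0.2, 0.5, 0.3, 0.4, 0.6, 0.1, 0.2, 0.3]
# Mixed signs: the zero level set crosses this cell.
crossing = [0.2, -0.1, 0.3, 0.4, -0.2, 0.1, 0.2, 0.3]
# cell_may_contain_surface(outside) -> False
# cell_may_contain_surface(crossing) -> True
```

Cells failing this check are exactly the regions marching cubes leaves empty.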
**Q6: Randomly sampling thousands of representatives from each category is better for measuring the similarity.**
**Re:** We conducted experiments following the advice. In this setting, the similarity between inner and outer voxels is 0.4512, the similarity between inner and surface voxels is 0.5925, and the similarity between outer and surface voxels is 0.5441. These results also support that inner voxels are quite different from outer voxels.
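The sampling-based similarity measurement suggested by the reviewer can be sketched as follows. This is an illustrative assumption (function name, sampling scheme, and the synthetic clusters standing in for inner/outer voxel features are all hypothetical):

```python
import numpy as np

def mean_cross_similarity(feats_a, feats_b, n_samples=1000, seed=0):
    """Average cosine similarity between randomly sampled feature vectors
    from two voxel categories (hypothetical sketch of the reviewer's
    suggested procedure, not the paper's code)."""
    rng = np.random.default_rng(seed)
    a = feats_a[rng.integers(0, len(feats_a), n_samples)]
    b = feats_b[rng.integers(0, len(feats_b), n_samples)]
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

# Synthetic stand-ins for inner/outer voxel features: two well-separated
# clusters should yield a much lower cross-category similarity than the
# within-category similarity.
rng = np.random.default_rng(1)
inner = rng.normal(loc=1.0, scale=0.1, size=(500, 16))
outer = rng.normal(loc=-1.0, scale=0.1, size=(500, 16))
```

Unlike comparing category averages, averaging over many sampled pairs remains meaningful even when the per-category feature distributions are multi-modal.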
**Q7: There are two works related to your work.**
**Re:** NeuralRGBD and DeepV2D estimate depth for an RGB video and are thus related to depth-based 3D reconstruction methods. We will add an introduction of these works to the related work section.
**Q8: There are some typos in the paper.**
**Re:** We will fix these typos in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Below are my responses to the authors’ rebuttal. The first three weaknesses were the most important ones to be addressed.
I consider the response to W1 satisfactory. I will accept experimental results on the shortcomings of a multi-task implementation over my speculation.
The response to W2 is informative. I suggest integrating the first sentences into the paper because the new text seems clearer. I agree with the authors that rewriting the description of the coarse-to-fine approach would also eliminate W3 and W5.
The response to W4 is also satisfactory. Tuning the system to favor recall of surface voxels makes sense.
The rest of my comments were minor.
I have also read the other reviews and responses. I am now more positive towards the paper, but I will wait for the end of the discussion before modifying my recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and efforts in reviewing our paper! Your comments help us improve the quality of this work. | Summary: This paper modifies previous coarse-to-fine frameworks to classify voxels into outer-surface, inner-surface, and surface voxels. In addition, the TSDF branch is added to further improve the performance. Extensive experiments show the good performance of the proposed method.
Strengths: 1. The motivation is interesting.
2. Extensive experiments are conducted and the performance is good.
Weaknesses: 1. This paper proposes to predict TSDF and reformulate the occupancy into 3 cases. However, the basic framework is built based on 3D-Former and VoRTX (i.e., coarse-to-fine). Hence, the novelty is somehow limited.
2. In Line 55, it would be confusing to say that "IOAR explores a new coarse-to-fine strategy" since the coarse-to-fine framework is similar to previous works.
3. As a SoTA method, 3D-Former is not compared in the experiments.
Typos:
- Line 140: voRTX -> VoRTX
- Table 2: Vortx [8] -> VoRTX [11]
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations are mentioned in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The novelty of this paper is somehow limited since their model is built based on the coarse-to-fine framework like previous methods (e.g., 3D-Former and VoRTX).**
**Re:** Using the basic coarse-to-fine framework does not mean the novelty of IOAR is limited. As we have introduced in the abstract and introduction, using the coarse-to-fine framework to reduce the cubically increasing memory cost is necessary for volumetric-based methods [8,9,10,11,12]. After Atlas [8] proposed the basic coarse-to-fine framework, succeeding methods like NeuralRecon [9], TransformFusion [10], VoRTX [11], and 3D-Former [12] were all built on this basic framework. Although these works are based on the coarse-to-fine framework, their novelty is not limited, since they improve 3D reconstruction in different aspects. So does the proposed IOAR. To be more specific, we briefly introduce the novelty of IOAR and of the mentioned VoRTX and 3D-Former. VoRTX weights the contribution of image features from different views for each voxel to extract more informative feature volumes. 3D-Former designs a 3D Transformer to replace the 3D CNN, which provides a stronger 3D backbone for 3D reconstruction. Different from them, IOAR aims to classify the potential surface voxels more accurately with a new coarse-to-fine strategy, which classifies the inner-surface, outer-surface, and surface voxels into different classes. Since VoRTX, 3D-Former, and IOAR improve 3D reconstruction in different aspects, they are three complementary and orthogonal works, each with its own novelty.
**Q2: In Line 55, it would be confusing to say that "IOAR explores a new coarse-to-fine strategy" since the coarse-to-fine framework is similar to previous works.**
**Re:** IOAR does explore a new coarse-to-fine strategy. As we have explained in Q1, all volumetric-based methods [8,9,10,11,12] are built on the basic coarse-to-fine framework, which utilizes a coarse-to-fine strategy to decide which voxels to keep at the next stage. The conventional coarse-to-fine strategy distinguishes potential surface voxels from other voxels and reduces the memory cost by removing the voxels far from the surface. However, this strategy groups the inner-surface voxels and outer-surface voxels into the same class, which neglects the fact that the inner-surface voxels are different from the outer-surface voxels. As a result, the classifier has to spend its capability bridging this intrinsic gap, which can lead to overfitting and thus harm the model's generalization. To avoid this problem, IOAR explores a new coarse-to-fine strategy that classifies inner-surface, outer-surface, and surface voxels into different classes. Due to the intrinsic gap, the classifier can easily find the classification boundary between the inner-surface voxels and the outer-surface voxels. Therefore, the classifier can focus on finding the classification boundary between the surface voxels and the inner-surface voxels and that between the surface voxels and the outer-surface voxels. Thanks to the merits of the new coarse-to-fine strategy, IOAR can predict surface voxels more accurately. Our experiments support this claim with both quantitative and qualitative results.
**Q3: As a SoTA method, 3D-Former is not compared in the experiments.**
**Re:** As we have mentioned in the related work section, 3D-Former [12] is an excellent work that designs a 3D Transformer to replace the 3D CNN used by previous works. In their paper, they conducted an ablation study to demonstrate that 3D-Former is a stronger 3D backbone than a traditional 3D CNN. Therefore, replacing the 3D CNN with the 3D-Former can theoretically improve the performance of any existing volumetric-based 3D reconstruction method. This is intuitive, just as replacing LeNet with ResNet can theoretically improve the performance of any downstream task. However, all previous methods use a 3D CNN as the 3D backbone. Therefore, to have a fair comparison with existing works, we still use the 3D CNN as the 3D backbone. Since we treat 3D-Former as a stronger backbone, we have not listed it in the experiment tables. An ideal way to make the results more complete would be to split the experiment tables into two parts, reporting the results with the 3D CNN backbone and the results with the 3D-Former backbone. However, reproducing all existing methods with 3D-Former as the 3D backbone is too computationally expensive. Specifically, training a basic 3D-Former requires 8 V100 GPUs for 5,000 epochs (about 6,000 GPU hours), let alone combining it with other methods.
**Q4: There are two typos in the paper.**
**Re:** We will fix these two typos in the camera-ready version. | Summary: This paper proposes an inner-outer aware reconstruction (IOAR) model for monocular 3D scene reconstruction. Different from existing methods, IOAR can classify the inner-surface voxels and outer-surface voxels, which could lead to better occupancy prediction and TSDF prediction. More specifically it proposes a new inner-outer aware coarse-to-fine strategy and loss, which leads the model to learn to classifier the inner-surface voxels and outer-surface voxels. To avoid the mutual interference between TSDF prediction and occupancy prediction, IOAR separates the occupancy branch from the TSDF branch. Experimental results show IOAR achieves state-of-the-art performance on the large-scale indoor scene dataset ScanNet and can generalize reasonably on unseen scenes.
Strengths: Overall the idea of further classifying exterior and interior surface voxels has merit and seems to generate good results. Separate TSDF and occupancy also make sense. The results are good.
Weaknesses: only scannet is used for testing. would be nice to test on more.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How about other datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: the loss function of equation 9 is a sum of all losses, should it be weighted? How should the weights be set to?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: only ScanNet is used for testing. would be nice to test on more.**
**Re:** In the paper, we have tested our model on three different datasets: ScanNet (Tables 1 and 2), ICL-NUIM (Table 3), and TUM-RGBD (Table 3). Following previous methods [11, 12], we evaluate the performance of IOAR on the ICL-NUIM and TUM-RGBD datasets with a model trained on the ScanNet dataset. The reason our model and previous methods test on ICL-NUIM and TUM-RGBD in this manner is that these two datasets are too small. Specifically, the ICL-NUIM dataset contains only 8 scenes and the TUM-RGBD dataset contains 13 scenes. By contrast, the ScanNet dataset contains 1,613 scenes (1,513 for training and 100 for testing). Due to the lack of data, training on the ICL-NUIM and TUM-RGBD datasets and subsequently testing on the respective datasets may result in poor performance.
We conduct experiments to verify this assumption. We manually split ICL-NUIM and TUM-RGBD datasets, with 6 and 10 scenes for training and 2 and 3 scenes for testing, respectively. The experiment result is worse than expected. With limited data for training, the model failed to learn to reconstruct any surface in the test scenes. That is, no meshes are reconstructed for the test scenes. This result explains why previous methods do not evaluate their model under this setting.
**Q2: the loss function of equation 9 is a sum of all losses, should it be weighted? How should the weights be set to?**
**Re:** Assigning different weights to different losses may result in better performance, but we simply set equal weights for each loss since these losses are of the same order of magnitude. To evaluate the impact of assigning different weights, we split the losses into three groups: the occupancy losses, the TSDF loss, and the projective occupancy losses. We conduct experiments that set different weights for the three groups of losses. We denote the weight for the occupancy losses as $\lambda_1$, the weight for the TSDF loss as $\lambda_2$, and the weight for the projective occupancy losses as $\lambda_3$. So, the weighted loss is $\mathcal{L}^W = \lambda_1(\mathcal{L}^c_{OCC} + \mathcal{L}^m_{OCC}) + \lambda_2\mathcal{L}^f_{TSDF} + \lambda_3(\mathcal{L}^c_{P} + \mathcal{L}^m_{P} + \mathcal{L}^f_{P}).$ The results are reported in the following table. All variants achieve comparable performance except the one with $\lambda_1$ set to 0.5. In summary, assigning different weights to these losses can lead to slight performance variance, especially when decreasing the weight of the occupancy losses.
| $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | Acc | Comp | Prec | Recall | F-score |
| ----------- | ----------- | ----------- | ----- | ----- | ----- | ------ | ------- |
| 0.5 | 1 | 1 | 0.045 | 0.095 | 0.733 | 0.580 | 0.646 |
| 1 | 0.5 | 1 | 0.046 | 0.087 | 0.740 | 0.600 | 0.661 |
| 1 | 1 | 0.5 | 0.044 | 0.090 | 0.744 | 0.597 | 0.661 |
| 1 | 1 | 1 | 0.043 | 0.090 | 0.748 | 0.597 | 0.663 | | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel method for monocular 3D scene reconstruction, termed Inner-Outer Aware Reconstruction (IOAR). In contrast to prior works, IOAR incorporates a unique classification process for outer surfaces, inner surfaces, and surface voxels. Additionally, the method distinctly separates the occupancy branches for IOAR from the TSDF branches, effectively mitigating any mutual interference between them. Notably, the proposed method exhibits exemplary performance across multiple datasets, including ScanNet, ICL-NUIM, and TUM-RGBD. Furthermore, the model, when trained solely on the ScanNet dataset, still achieves superior performance in both ICL-NUIM and TUM-RGBD datasets.
Strengths: - The central concept underlying this paper is both logically sound and remarkably straightforward, yet it proves to be highly effective. The provided ablation study reinforces the efficacy of the proposed module.
- The IOAR method demonstrates unparalleled performance across a range of datasets, indicating its robustness and adaptability.
Weaknesses: - While the contributions are impactful, they are rather direct and simplistic, which could potentially limit the depth and breadth of the paper.
- There is an inconsistency between the numerical values presented in lines 250-251 on page 7 and those in Table 3. For instance, 0.596 is cited in the text, whereas 0.607 is reported in the table. Similar discrepancies are noted for 0.569 (text) vs. 0.564 (table), and 0.488 (text) vs. 0.507 (table). This discrepancy warrants clarification.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - What is the rationale behind defining the occupancy ground truth to range from 0 to 2? Additionally, how are other values, such as those ranging from -1 to 1, handled in Equation (6)?
- What is the performance of the model when trained on the ICL-NUIM and TUM-RGBD datasets and subsequently tested on the respective datasets?
- It appears that the proposed method continuously learns to discern between the three distinct regions, even during inference. If this assumption is correct, how does the neural network ascertain these three different regions within the voxels from just the initial few frames? It seems plausible that there may not be sufficient occupied 3D voxels to fully comprehend the 3D scene initially. As more frames are encoded into 3D voxels, the neural network may begin to recognize the three regions. Is the information primarily extracted from the visual input or from the coarsely reconstructed 3D voxels?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, in supplementary materials.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: While the contributions are impactful, they are rather direct and simplistic, which could potentially limit the depth and breadth of the paper.**
**Re:** The insight behind IOAR is intuitive, but we believe it can inspire future works on 3D reconstruction. The starting point of IOAR is the fact that classifying inner-surface voxels and outer-surface voxels into the same class will force the model to waste its capability to bridge the gap between the two types of voxels. IOAR explores a new coarse-to-fine strategy to solve this problem and achieves performance improvement. The value of IOAR is more than the performance improvement. Existing works improve the performance of 3D reconstruction in various ways, such as adding a transformer mechanism to weigh the features [10, 11] and designing a stronger backbone [12]. On a higher level, they focus on improving 3D reconstruction by increasing the model capability. Different from them, IOAR provides another direction, i.e., to reduce the waste of the model capability. IOAR proves that reducing the waste of model capability is a potential way to improve 3D reconstruction with limited cost. Therefore, IOAR can inspire future works to try to enhance 3D reconstruction by improving the usage of model capability instead of increasing the model capability. We believe this is more valuable than performance improvement since the extensive computational cost is a critical problem in 3D reconstruction, and generally, reducing the waste of model capability requires less computational cost than increasing the model capability.
**Q2: There is an inconsistency between the numerical values presented in lines 250-251 on page 7 and those in Table 3.**
**Re:** The values presented in Table 3 are correct. We updated the new experiment result in Table 3 while forgetting to update those on page 7. We will fix this problem in the camera-ready version.
**Q3: What is the rationale behind defining the occupancy ground truth to range from 0 to 2? Additionally, how are other values, such as those ranging from -1 to 1, handled in Equation (6)?**
**Re:** Conventionally, the Signed Distance Function (SDF) measures the distance between each voxel and its closest surface, and the Truncated Signed Distance Function (TSDF) truncates the distance at a given threshold (e.g., 12cm) and normalizes it to range from -1 to 1 (the sign indicates the inner or outer side). Based on the definition of TSDF, we can easily define the inner-surface, outer-surface, and surface voxels in Equation (6). We define the ground-truth occupancy of inner-surface, surface, and outer-surface voxels as 0, 1, and 2 because assigning ground-truth labels starting from 0 is conventional in classification tasks. In this case, we can easily indicate the probability that our model makes the correct prediction using a subscript (e.g., in Equation (8)). Theoretically, the ground-truth occupancy of inner-surface, surface, and outer-surface voxels can be defined as any discrete numbers (e.g., -1, 0, 1).
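As a toy illustration of the three-class labeling described above (the |TSDF| threshold `surface_band` is an assumed value for illustration, not from the paper):

```python
import numpy as np

def occupancy_labels(tsdf, surface_band=0.25):
    """Map normalized TSDF values in [-1, 1] to the three occupancy classes
    described above: 0 = inner-surface, 1 = surface, 2 = outer-surface.
    Negative TSDF is taken here to mean the inner side."""
    labels = np.ones(tsdf.shape, dtype=np.int64)   # surface by default
    labels[tsdf <= -surface_band] = 0              # inner-surface voxels
    labels[tsdf >= surface_band] = 2               # outer-surface voxels
    return labels

print(occupancy_labels(np.array([-1.0, -0.1, 0.0, 0.2, 0.9])))  # [0 1 1 1 2]
```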
**Q4: What is the performance of the model when trained on the ICL-NUIM and TUM-RGBD datasets and subsequently tested on the respective datasets?**
**Re:** Following previous methods [11,12], we evaluate on the ICL-NUIM and TUM-RGBD datasets with the model trained on the ScanNet dataset. The reason previous methods and our model test on ICL-NUIM and TUM-RGBD using a model trained on ScanNet is that these two datasets are too small. Specifically, the ICL-NUIM dataset contains only 8 scenes and the TUM-RGBD dataset contains only 13 scenes. By contrast, the ScanNet dataset contains 1,613 scenes (1,513 for training and 100 for testing). Because of the lack of data, training on the ICL-NUIM and TUM-RGBD datasets and subsequently testing on the respective datasets may result in poor performance.
We conduct experiments to verify this assumption. We manually split ICL-NUIM and TUM-RGBD datasets, with 6 and 10 scenes for training and 2 and 3 scenes for testing, respectively. The experiment result is worse than expected. With limited data for training, the model failed to learn to reconstruct any surface in the test scenes. That is, no meshes are reconstructed for the test scenes. This result explains why previous methods do not evaluate their model under this setting.
**Q5: How does the neural network ascertain these three different regions within the voxels from just the initial few frames in the inference phase? Is the information primarily extracted from the visual input or from the coarsely reconstructed 3D voxels?**
**Re:** In the inference phase, our model does not have to ascertain the three different regions from just the initial few frames. Following previous works [8,9,10,11,12], given an input video, we first select $N$ ($N=60$ at inference time) frames from the entire video. We follow the selection method in [9] to reduce redundancy, which ensures the distance and the angle between any two selected frames are large enough. Image features of all the selected frames are utilized to generate the 3D feature volumes. Therefore, our model can consider the information from all selected frames to predict occupancy (i.e., whether a voxel belongs to the inner region, the outer region, or the surface). Since images are the origin of all information, we believe the information is primarily extracted from the visual input.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: After carefully reading the rebuttal, I appreciate the clarifications provided. I have decided to keep my initial rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate you taking the time and effort to review our paper! Your comments help us improve the clarity and presentation of our paper. | null | null | null | null | null | null |
LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning | Accept (poster) | Summary: The paper proposes locally regularized Context Optimization for OOD detection, inspired by CoOp. Their primary claim is that the CLIP feature contains many ID-irrelevant nuisances, such as backgrounds; hence, pushing these embeddings and the ID embeddings apart leads to a better separation between ID and OOD samples. It is a few-shot OOD detection method that utilizes learnable prompts. The authors conducted experiments on zero/one/few-shot methods, which show that their proposed method outperforms existing approaches.
Strengths: The strengths of the paper are:
1. The paper is well-written and easy to follow.
2. It is a simple approach combined with CoOp, MCM, and GL-MCM scores for test-time prediction.
3. It gives SOTA performance in zero-shot settings.
4. The idea that pushing apart ID-irrelevant features in CLIP, similar to MaskCLIP, leads to better OOD classification is novel and has much potential.
5. The results are also good; it outperforms most existing models using both ResNet and ViT.
Weaknesses: There are a few small weaknesses of the paper which the authors need to address:
1. The prompt learning used here is the same as CoOP, then why is it novel?
2. Table 1. Zero-shot uses CLIP or CoOP, it is not clear.
3. The performance using CoOP with MCM/GL-MCM is already quite good, and LoCoOP improves an additional 1-2% gain in AUROC only in the one-shot or few-shot setting. Hence I doubt its efficacy in large-scale OOD detection or in near OOD detection.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Are there any results on near OOD or LargeScale datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No negative social impact and limitations are addressed adequately
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to them appropriately as follows. We will add suggested experiments and explanations in the updated manuscript.
> Q1. The novelty of LoCoOp
A1. While CoOp learns to bring the global image features and the GT text feature closer together, LoCoOp treats portions of CLIP's local features as OOD and performs OOD regularization in addition to CoOp's loss. Although this may seem like just adding an OOD regularization term to CoOp, it is very effective for OOD detection. Therefore, we consider that LoCoOp has solid novelty for few-shot OOD detection.
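In pseudo-Python, this combined objective could be sketched as below. This is a schematic reading of the idea, not the authors' code: the function name, the `top_k` extraction rule, and the `lam` weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def locoop_style_loss(global_logits, local_logits, target, top_k=200, lam=0.25):
    """Sketch: CoOp's ID cross-entropy plus an OOD regularizer on local
    features. `global_logits`: (num_classes,) image-text similarities;
    `local_logits`: (num_regions, num_classes); `target`: 0-dim long tensor."""
    # CoOp part: cross-entropy between global similarities and the GT label.
    id_loss = F.cross_entropy(global_logits.unsqueeze(0), target.unsqueeze(0))

    # Treat regions whose GT class is NOT among their top-k most similar
    # classes as ID-irrelevant (pseudo-OOD).
    ranks = local_logits.argsort(dim=1, descending=True)
    is_id_relevant = (ranks[:, :top_k] == target).any(dim=1)
    ood_regions = local_logits[~is_id_relevant]
    if ood_regions.numel() == 0:
        return id_loss

    # OOD regularization: maximize the entropy of these regions' class
    # distributions so they stay dissimilar from every ID class.
    probs = ood_regions.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    return id_loss - lam * entropy
```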
> Q2. Zero-shot method
A2. We use CLIP for zero-shot results.
> Q3. Performance improvements with LoCoOp
A3. It is true that LoCoOp brings a 1-2% gain in AUROC over CoOp, but we argue that this improvement is not small on ImageNet-1K OOD benchmarks. Existing work (NPOS [43]) reported that NPOS brings only a 0.03% gain over VOS [9] on ImageNet-1K benchmarks. Therefore, we consider a 1-2% gain to be very significant in this tight setting, where we have only a few labeled data with the large ImageNet benchmarks.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer epH7, we are in the middle of the discussion period. Please read the rebuttal and use the time to discuss with the authors.
---
Rebuttal 2:
Title: Looking Forward to Hearing from Reviewer epH7
Comment: Dear reviewer, we would like to thank you again for your careful review and constructive suggestions. We have provided additional explanations in response to your concerns.
As the deadline for the author-reviewer discussion period is approaching, we would like to discuss with you whether your concerns have been addressed. And if you have any additional comments, we are happy to answer them further. | Summary: This paper focused on the problem of vision-language prompt learning for few-shot OOD detection, i.e., using CLIP model to detect OOD images from unseen classes using only a few labeled in-distribution (ID) images. Previous zero-shot methods may encounter a domain gap with ID downstream data, while fully fine-tuned methods require enormous training costs. This work focused on the few-shot learning setting and extends CoOp, a few-shot vision-language prompt learning, to OOD detection. The main difference is to extract ID-irrelevant regions based on region-class similarity for OOD regularization. Experiments are conducted on iNaturalist, SUN, Places, and TEXTURE as OOD datasets and take ImageNet-1K as ID data. Experimental results verified the effectiveness of the proposed LoCoOp, even performing better than zero-shot and fully fine-tuned methods with 1-shot training data.
Strengths: + The problem of vision-language prompt learning for few-shot OOD detection is novel and interesting. Previous works focused on zero-shot or fully fine-tuning settings, while few-shot OOD detection is not well explored. Also, it makes sense to adapt few-shot prompt tuning method CoOp for the few-shot OOD detection.
+ The proposed method is simple but effective and intuitive. It directly extract pseudo OOD samples from few-shot ID samples. The motivation and solution are clear and inspiring.
+ The experimental results verified the effectiveness of the proposed method. Especially, it can perform better than zero-shot or fine-tuned methods even with one-shot training data. Compared to complex method CoCoOp, it can also achieve faster inference speed.
+ The writing quality is satisfying. It is easy to follow the idea and implementation.
Weaknesses: - Although Table 3 compares the ID accuracies of CoOp and LoCoOp, it would be interesting to know how other zero-shot and fine-tuned methods perform on ID accuracy, as CoOp may not achieve good ID performance due to the limited number of samples.
- As the motivation is to extract OOD regions from ID images, it would be interesting to visualize the OOD regions to illustrate how well CLIP can output OOD regions.
------------------
After rebuttal:
I thank the authors for providing further comparisons and discussions. The comparisons and discussions are comprehensive. I will keep my positive rating.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see weaknesses for my questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to them as follows. We will add suggested experiments and explanations in the updated manuscript.
> Q1. The ID accuracy with zero-shot and fully-supervised methods
A1. The results of the ID accuracy and OOD performance on the ImageNet validation set for the zero-shot, fully supervised, and prompt-based methods are shown below.
| | ID acc. (%) | AUROC (%) |
| --- | --- | --- |
| Zero-shot (CLIP [35]) | 67.01 | 90.83 |
| Fine-tune (NPOS [43]) | 79.42 | 90.37 |
| CoOp [56] | 72.10 | 91.82 |
| LoCoOp (ours) | 71.70 | 93.52 |
Even though MCM, LoCoOp, and CoOp are inferior to fully supervised methods in ID accuracy, they perform better in OOD detection.
> Q2. Visualization of OOD regions
A2. Thanks for the suggestion. We attached visualization results for some samples to demonstrate the effectiveness of our method in the global response. This shows that our method can correctly extract OOD regions.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer RaSR, we are in the middle of the discussion period. Please read the rebuttal and use the time to discuss with the authors.
---
Rebuttal Comment 1.2:
Title: ID and OOD accuracies in the same table
Comment: Thank the authors for the feedback. For the ID accuracy, could the authors show a new table combining both ID and OOD accuracies for a clearer comparison? It would be more straightforward to see how LoCoOp decreases ID accuracy and increases OOD accuracy, and whether it achieves a good trade-off or simply increases one by decreasing the other to the same extent.
---
Reply to Comment 1.2.1:
Title: Relationship between ID accuracy and OOD detection performance
Comment: Thanks for the suggestion. We show a new table combining both OOD detection performance (Table 1 in the main paper) and ID accuracies in the following. The column on ID accuracy is on the far right.
| | iNaturalist | | SUN | | Places | | Texture | | Average | | **ID acc.** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | FPR | AUROC | FPR | AUROC | FPR | AUROC | FPR | AUROC | FPR | AUROC | - |
| Zero-shot | | | | | | | | | | | |
| MCM | 30.94 | 94.61 | 37.67 | 92.56 | 44.76 | 89.76 | 57.91 | 86.10 | 42.82 | 90.76 | 67.01 |
| GL-MCM | 15.18 | 96.71 | 30.42 | 93.09 | 38.85 | 89.90 | 57.93 | 83.63 | 35.47 | 90.83 | 67.01 |
| Fine-tune | | | | | | | | | | | |
| ODIN | 30.22 | 94.65 | 54.04 | 87.17 | 55.06 | 85.54 | 51.67 | 87.85 | 47.75 | 88.80 | 79.64 |
| ViM | 32.19 | 93.16 | 54.01 | 87.19 | 60.67 | 83.75 | 53.94 | 87.18 | 50.20 | 87.82 | 79.64 |
| KNN | 29.17 | 94.52 | 35.62 | 92.67 | 39.61 | 91.02 | 64.35 | 85.67 | 42.19 | 90.97 | 79.64 |
| NPOS | 16.58 | 96.19 | 43.77 | 90.44 | 45.27 | 89.44 | 46.12 | 88.80 | 37.93 | 91.22 | 79.42 |
| Prompt learning | | | | | | | | | | | |
| CoOp w. MCM(1-shot) | 43.38 | 91.26 | 38.53 | 91.95 | 46.68 | 89.09 | 50.64 | 87.83 | 44.81 | 90.03 | 66.23 |
| CoOp w. GL-MCM(1-shot) | 21.30 | 95.27 | 31.66 | 92.16 | 40.44 | 89.31 | 52.93 | 84.25 | 36.58 | 90.25 | 66.23 |
| LoCoOp w. MCM (1-shot) | 38.49 | 92.49 | 33.27 | 93.67 | 39.23 | 91.07 | 49.25 | 89.13 | 40.17 | 91.53 | 66.88 |
| LoCoOp w. GL-MCM (1-shot) | 24.61 | 94.89 | 25.62 | 94.59 | 34.00 | 92.12 | 49.86 | 87.49 | 33.52 | 92.14 | 66.88 |
| CoOp w. MCM(16-shot) | 28.00 | 94.43 | 36.95 | 92.29 | 43.03 | 89.74 | 39.33 | 91.24 | 36.83 | 91.93 | 72.10 |
| CoOp w. GL-MCM(16-shot) | 14.60 | 96.62 | 28.48 | 92.65 | 36.49 | 89.98 | 43.13 | 88.03 | 30.67 | 91.82 | 72.10 |
| LoCoOp w. MCM (16-shot) | 23.06 | 95.45 | 32.70 | 93.35 | 39.92 | 90.64 | 40.23 | 91.32 | 33.98 | 92.69 | 71.70 |
| LoCoOp w. GL-MCM (16-shot) | 16.05 | 96.86 | 23.44 | 95.07 | 32.87 | 91.98 | 42.28 | 90.19 | 28.66 | 93.52 | 71.70 |
---
We discuss the relationships between ID accuracy and OOD detection performance in the following three points.
- Why zero-shot and prompt learning methods outperform fully-supervised methods in OOD detection performance while their ID accuracies are considerably lower.
The key point in OOD detection is to avoid incorrectly assigning a high confidence score to OOD samples. In this respect, zero-shot and prompt learning methods calculate confidence scores based on the similarity between the text and the image, so models are less likely to produce unnaturally high confidence scores for OOD samples. On the other hand, most fully-supervised methods do not use language and instead calculate confidence scores from the probability distribution produced by the last fc layer. Therefore, even if the ID accuracy is high, there is a higher possibility that the model will produce an incorrectly high confidence score for an OOD sample for some reason (e.g., a noisy activation signal [40]).
- Why LoCoOp has higher ID accuracy than CoOp in a 1-shot setting
This is because CoOp does not have enough training samples in a 1-shot setting. As shown in Fig. 2, CoOp and LoCoOp require about 16 image-label pairs per class to reach their upper-bound score.
On the other hand, even in a 1-shot setting, LoCoOp can learn from many OOD features, so LoCoOp outperforms CoOp in ID accuracy in a 1-shot setting.
- Why LoCoOp has lower ID accuracy than CoOp in a 16-shot setting
This reason is described in the Analysis section of the main paper.
In a 16-shot setting (sufficient training data for prompting methods), excluding OOD nuisances that are correlated with ID objects can degrade the ID accuracy. For example, in some images of dogs, the presence of green grass in the background may help identify the image as a dog. Learning to remove such background information therefore makes it harder to rely on those cues when classifying the image. However, this study reveals that excluding such backgrounds improves OOD detection performance.
As the reviewer says, it is intriguing to discuss the relationship between ID accuracy and OOD detection performance. On the other hand, we are concerned that incorporating ID accuracy into Table 1 would increase the amount of information to be discussed in a single table.
Therefore, in the final version, we will create another section discussing the relationship between ID accuracy and OOD detection performance and include the above table with both ID accuracy and detection performance.
Strengths: 1. The paper is well-organized and clearly presented. The proposed method is easy to follow.
2. LoCoOp is a novel method that uses OOD region regularization to suppress the ID similarity of the background components. The idea of ID-irrelevant local context suppression is interesting. The proposed method raises a valuable point that suppressing extraneous background helps distinguish ID from OOD data.
Weaknesses: 1. The method needs more analysis.
- The authors may provide some visualizations for ID and OOD samples to show which regions are suppressed. This kind of case study helps demonstrate that the proposed method works as the authors envision.
- The foreground object F and its background context B may tend to co-occur, so there may be a spurious correlation issue [1]. Hence, the GT label may also rank higher in the background context. For example, for a background like the beach, categories associated with the beach (e.g., beach chairs, surfboards) may rank higher than unrelated categories. Based on this, for some categories with spurious correlations, will this training strategy fail? The ground-truth label may occur in the top-k candidates for almost all background regions.
- The OOD regularization loss (Eqn. 5) needs to be further analyzed. The authors may investigate the training dynamics of the regularization loss. How does this loss affect prompt learning convergence? Will it cause instability and affect ID accuracy in a longer non-few-shot training schedule? How does it affect hard samples?
- The proposed method may need some theoretical analysis and support.
2. More experiments and validation are necessary.
- I suggest that the authors extensively verify the effectiveness of the proposed method on various OOD benchmarks. In particular, it is necessary to pay attention to whether the proposed OOD regularization loss behaves as expected when the network is trained with fewer categories (such as CIFAR-10).
- The authors only train the network with up to 16 samples per class. I'm curious if one can achieve better results by using more samples to finetune the network? Where is the bottleneck of OOD detection based on prompt learning? Can we unfreeze more layers to obtain stronger OOD regularization capabilities?
- In Table 1, why does one-shot prompt learning of CoOp+MCM result in lower performances compared to zero-shot MCM?
[1] Ming, Yifei, Hang Yin, and Yixuan Li. "On the impact of spurious correlation for out-of-distribution detection." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 36. No. 9. 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Some minor issues.
- The concept *OOD image features* in Line 146 is prone to ambiguity. Readers may think that it is a feature taken from the OOD image.
- There may be a typo in Figure 2 x-axis label *# of labels*. Perhaps *# of samples* is the correct one?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to them as follows. We will add suggested experiments and explanations in the updated manuscript.
> Q1. Visualization results
A1. Thanks for pointing this out. We have attached the visualization results, which show that the OOD regions are extracted correctly.
> Q2. Spurious correlation issue
A2. Thanks for the interesting point of view. We consider that the spurious correlation issue does not occur in CLIP’s local features. Unlike global features, CLIP's local features are features before pooling, so the feature in each region does not mix concepts. In other words, areas with beaches carry the concept of beaches only, and areas with beach chairs carry the concept of beach chairs only. Therefore, ground-truth labels are unlikely to be included in the top-k candidates in background regions. This may be more readily understood by examining the visualization results that we have attached: the unmasked areas in these examples are areas where the ground-truth label is not included in the top-200 predictions. Therefore, we consider that spurious correlation does not occur in the local features.
Based on this observation, in our view, LoCoOp can directly solve the problem of the spurious correlation issue. LoCoOp separates the background from the foreground, which results in the elimination of spurious correlations between things that are normally correlated, such as beach and beach chairs. We will document this discussion in detail in the final version.
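The top-k rule described above can be sketched in a few lines. This is a hedged illustration of the rank-based selection (a region is treated as ID-irrelevant when the ground-truth class falls outside the top-K classes for that region's local feature); function and variable names are hypothetical, and features are assumed L2-normalized.

```python
import numpy as np

def id_irrelevant_mask(local_feats, text_feats, gt_class, top_k):
    """Mark local regions where the ground-truth class is NOT among the
    top-K predictions as ID-irrelevant (OOD) regions.
    local_feats: (num_regions, D); text_feats: (num_classes, D)."""
    sims = local_feats @ text_feats.T              # (regions, classes)
    # rank of the GT class within each region (0 = best match)
    gt_rank = (sims > sims[:, [gt_class]]).sum(axis=1)
    return gt_rank >= top_k                        # True -> OOD region

# Toy example: region 0 matches the GT class (0), region 1 matches class 1.
texts = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
regions = np.array([[1.0, 0.0], [0.0, 1.0]])
print(id_irrelevant_mask(regions, texts, gt_class=0, top_k=1))  # [False  True]
```

Note how a larger `top_k` tolerates ranking errors: a region is only excluded when the ground-truth class is decisively absent from its top predictions.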
> Q3. Analysis of the OOD regularization loss
A3. The training convergence and instability for LoCoOp are similar to those of CoOp, and the training config is the same as that of CoOp.
> Q4. Theoretical analysis and support.
A4. Thanks for the important suggestion. In terms of the effectiveness of disentangling background information, our work is supported by existing theoretical studies [1, 2, 3], which state that when the classifier uses only foreground features, the optimal decision boundary can be obtained. Therefore, we will cite these theoretical studies [1, 2, 3] to reinforce the theoretical background in the final version.
[1] Ren et al., Likelihood Ratios for Out-of-Distribution Detection, NeurIPS 2019.
[2] Nagarajan et al., Understanding the Failure Modes of Out-of-Distribution Generalization, ICLR 2021.
[3] Ming et al., On the impact of spurious correlation for out-of-distribution detection, AAAI2022.
> Q5. The effectiveness of LoCoOp on small-scale datasets
A5. As for small-scale datasets, there are no existing studies applying CLIP to CIFAR-10, because CLIP is not well suited to low-resolution toy datasets such as CIFAR-10. Hence, we experiment with an ImageNet subset dataset.
Previous work [30] reported that the result on ImageNet-10 [30] (a 10-class subset of ImageNet) and ImageNet-20 [30] (a 20-class subset of ImageNet) with zero-shot MCM reached the upper-bound score (e.g., average AUROC is 99.78 on ImageNet-10). Therefore, we use ImageNet-100 (a 100-class subset of ImageNet) as the ID dataset. As for the OOD datasets, we adopt the same ones as the ImageNet-1K OOD datasets.
The results on the ImageNet-100 OOD datasets are as follows.
| | FPR (%) | AUROC (%) |
| --- | --- | --- |
| CoOp with MCM | 14.57 | 97.12 |
| CoOp with GL-MCM | 13.82 | 96.93 |
| LoCoOp with MCM (ours) | 12.68 | 97.49 |
| LoCoOp with GL-MCM (ours) | **10.77** | **97.67** |
From this result, LoCoOp outperforms CoOp on small-scale datasets.
> Q6. The performance of LoCoOp with more than 16-shot samples
A6. In this paper, we conducted experiments with up to 16 shots following the setting of CoOp. Figure 2 shows that the improvement in performance slows after 16-shot training. This is because CoOp and LoCoOp only train text prompts, which have only a few parameters, so a small number of samples is sufficient to reach the upper-bound score.
A key element for further improving performance is CLIP's vision encoder. LoCoOp freezes the vision encoder, so the image features may contain some OOD information. Removing OOD information from the image features is a possible way to further improve OOD detection performance.
> Q7. The lower performance of 1-shot CoOp with MCM than that of zero-shot MCM
A7. According to the CoOp paper, the classification accuracy of prompt learning methods with 1-shot training is inferior to the zero-shot performance because there are not enough samples to learn sufficiently discriminative representations in the prompts. Similarly, in the case of OOD detection, CoOp cannot learn enough features with only one sample. However, LoCoOp is capable of learning multiple OOD features from a single image. Therefore, even in a setting with only one image, it can learn a sufficient amount of features to achieve robust OOD detection performance.
> Minor issues: ambiguous description and typo
Thanks for pointing these out. We will fix them in the final version.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer wEUW, we are in the middle of the discussion period. Please read the rebuttal and use the time to discuss with the authors.
---
Rebuttal Comment 1.2:
Comment: Sorry for the late reply. The authors' responses address most of my concerns. I decided to raise my rating to 5. I still suggest that the author strengthen the theoretical modelling of the article.
---
Rebuttal 2:
Title: Looking Forward to Hearing from Reviewer wEUW
Comment: Dear reviewer, we would like to thank you again for your careful review and constructive remarks. We have provided additional explanations and experiments in response to your concerns.
As the deadline for the author-reviewer discussion period is approaching, we would like to discuss with you whether your concerns have been addressed. If you have any additional comments, we are happy to answer them further. | Summary: The task of this article is to use the CLIP model for image classification, and the target scene is few-shot data and out-of-distribution (OoD) detection. The author identifies the final OoD regions by querying the ID-irrelevant regions in the image, and further optimizes the model through these regions. Remarkably, the authors validated their method on the large-scale ImageNet-1K dataset. As far as I know, the task is a new one, and its application makes sense.
Strengths: - This paper proposed a new and meaningful task.
- The performance of the method is fine.
- The author completed method verification on a very challenging dataset (ImageNet-1K full dataset).
- It's an interesting idea to remove regions not related to IDs.
- The author provides the source code to ensure the reproducibility of the article.
Weaknesses: The idea of this article is very interesting and theoretically feasible. However, I have the following concerns:
- My biggest concern is the accuracy of locating regions not related to the ID classes. Because the authors did not provide an experimental metric for the accuracy of the ID-irrelevant region localization, it is only shown indirectly through the training and OoD detection results. My concern comes from another article I've read that points to the limitations of CLIP's current interpretation of location regions [1]. It is a pity not to see related visualization results in the article and supplementary material. If the authors can provide some metrics to prove that their method can locate ID-irrelevant regions well, I think it will be more convincing.
- In addition to OoD detection, the model should have Zero-Shot capability, should the author also compare the zero-shot performance with CoOp or CoCoOp methods?
[1] Li, Yi, et al. "CLIP surgery for better explainability with enhancement in open-vocabulary tasks." *arXiv preprint arXiv:2304.05653* (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My main questions are in the weaknesses section, and I would like to improve my score if the author can address my concerns convincingly.
In addition, I still want to know how long (how many hours or how many days) does this method take to train the ImageNet dataset on an A100GPU?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have listed the limitation in the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to them as follows. We will add suggested experiments and explanations in the updated manuscript.
> Q1. The accuracy of locating regions not related to IDs.
A1. Thanks for the question. That is correct, and the accuracy of the segmentation results is a major factor. However, unlike the usual segmentation task, in this setting, it is not necessary to correctly guess the segmentation label. Therefore, to compensate for errors in the segmentation results, we introduce ranks into equation (4) to allow for segmentation errors up to top-K predictions.
As for the evaluation metrics, the ImageNet dataset does not have segmentation masks, so quantitative evaluation is difficult. Therefore, we have attached visualization results for some samples in the global response to demonstrate the effectiveness of our method. This shows that our method can correctly extract OOD regions.
> Q2. Zero-shot generalization ID accuracy
A2. We report the zero-shot generalization ID accuracy (not OOD detection performance) of CoOp and LoCoOp on the Flower102, Food101, and Oxford-Pets datasets. Note that LoCoOp aims to improve OOD detection performance on specific ID datasets, so zero-shot generalization ID accuracy is not our objective.
| Method | Flower | Food | Pets |
|--|--------|-------|-------|
| CoOp | 65.63 | 84.00 | 87.53 |
| LoCoOp| 61.27 | 83.13 | 88.17 |
For the Flower dataset, the ID accuracy of LoCoOp is considerably lower than that of CoOp. This is because information similar to flowers (e.g., grass) might be removed as background during training on ImageNet. On the other hand, the ID accuracy of LoCoOp is higher than that of CoOp on the Pets dataset. The backgrounds in ImageNet are similar to those of Pets, and many of the images in Pets have similar backgrounds. Therefore, LoCoOp can remove unnecessary information (e.g., common backgrounds) when identifying fine-grained kinds of dogs, so the ID accuracy is improved.
> Q3. Training time
A3. The training efficiency is also one of the strengths of our LoCoOp.
For a 1-shot setting on ImageNet-1K, LoCoOp takes about 13 minutes with a single A100 GPU. For a 16-shot setting on ImageNet-1K, LoCoOp takes about 3.5 hours with a single A100 GPU.
Therefore, LoCoOp is easy to implement in environments with limited GPU resources.
---
Rebuttal Comment 1.1:
Title: Reply to the authors' rebuttal
Comment: Thanks to the authors for answering my doubts in detail. However, since the authors only provided very few visualizations, I still have concerns about the accuracy of the region localization, so I decided to maintain the current score.
Rebuttal: We would like to express our gratitude to the reviewers for giving excellent and positive comments and recognizing our contributions: “Few-shot out-of-distribution detection is interesting and practical” (**Lcii, uDwi, RaSR**), “LoCoOp is well motivated and interesting” (**Lcii, uDwi, wEUW, RaSR, epH7**), “The effectiveness of LoCoOp is clear” (**Lcii, uDwi, RaSR, epH7**), and “This paper is well written” (**wEUW, RaSR, epH7**).
We write responses to each reviewer in each thread. Visualization is a common question, and we share the visualization results here.
## Visualization of OOD regions
Thanks for the really important point. We have attached the visualization results of OOD regions LoCoOp extracts in the pdf. This shows that our method can correctly extract OOD regions.
As reviewers (**Lcii, uDwi, wEUW, RaSR**) pointed out, the quality of the segmentation results is key to our method. However, unlike normal segmentation tasks, it is not necessary to correctly guess the segmentation label in our setting. Therefore, to compensate for some errors in the segmentation results, we introduce Rank in Eq. (4) in the main paper, which allows segmentation mistakes up to the top-K predictions.
We will add this figure and this explanation in the final version.
Pdf: /pdf/f12d2aa457c1921a630e7f539c32f9b8d5fe264d.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a vision-language prompt learning method named local regularized context optimization (LoCoOp) for few-shot out-of-distribution detection. The proposed LoCoOp method performs OOD regularization that uses the portions of CLIP local features as OOD features during training. The experimental results show the effectiveness of the proposed LoCoOp method over zero-shot and fully supervised methods.
Strengths: - The addressed problem of few-shot out-of-distribution detection is interesting and practical.
- The proposed CLIP based few-shot OOD detection method with local regularized context optimization is well motivated and achieves good results.
Weaknesses: - The proposed local regularized context optimization method is not specific to the few-shot setting, and can actually also be applied to fully supervised OOD detection setting, where the ID-irrelevant nuisances could also be learned. Then will the proposed method also improve the results in fully supervised setting? And what are the results?
- Similar to the CLIP based segmentation task in [55], the ID-irrelevant regions are predicted based on CLIP prediction results. However, the segmentation results may not be good, will that be a key obstacle to the proposed method?
- The training process of the proposed method is a little bit confusing. Does the L_coop loss apply to the whole image or only the ID-relevant regions?
- In Table 1, it shows that the performance of one-shot LoCoOp method is even worse than the zero-shot CL-MCM method (and only comparable for 16-shot LoCoOp) on iNaturalist dataset. What is the reason?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Will the proposed method also improve the results in fully supervised setting? And what are the results?
- Will the region segmentation results be a key obstacle to the proposed method?
- What is the reason that the performance of one-shot LoCoOp method is even worse than the zero-shot CL-MCM method (and only comparable for 16-shot LoCoOp) on iNaturalist dataset?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and respond to them as follows. We will add suggested experiments and explanations in the updated manuscript.
> Q1. The effectiveness of LoCoOp in a fully supervised setting.
A1. As the reviewer pointed out, our LoCoOp can be applied to a fully supervised setting (i.e., using all training data). However, Figure 2 shows that the improvement in performance slows after 16-shot training. This is because, even if many images are used for training, CoOp and LoCoOp only train text prompts, which have only a few parameters, so a small number of samples is sufficient to reach the upper-bound score.
> Q2. Is the segmentation result a key obstacle to the proposed method?
A2. That is correct, and the accuracy of the segmentation results is a major factor.
In our setting, unlike typical segmentation tasks, correctly guessing the segmentation label is not necessary. Hence, to account for potential errors in the segmentation results, we introduce Rank in Eq.(4) to allow segmentation errors up to top-K.
We also attach some visualization results in the global response to show how well the OOD regions are extracted. This shows that our method can correctly extract OOD regions.
> Q3. Does the L_coop loss apply to the whole image or only the ID-relevant regions?
A3. Thanks for pointing this out. L_coop is the loss applied for the entire image. We will add an explanation in the final version.
> Q4. The reason for low performance on the iNaturalist dataset
A4. The reason for this may be difficult to analyze. In the case of ResNet in Table 4, the detection performance of LoCoOp with GL-MCM is superior to that of zero-shot GL-MCM on iNaturalist, so it cannot be said that LoCoOp's training strategy is the main cause of the problem.
Besides, comparisons of different backbone networks for OOD detection have often been explored [1, 2], but no definitive conclusions have been drawn because performance differs across datasets.
[1] Fort et al., Exploring the limits of out-of-distribution detection, NeurIPS 2021.
[2] Hendrycks et al., Scaling out-of-distribution detection for real-world settings, ICML 2022.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: The authors addressed most my concerns. So I keep my original rating and lean towards acceptance. | null | null | null | null | null | null |
Multi-Agent First Order Constrained Optimization in Policy Space | Accept (poster) | Summary: The paper introduces a fresh approach to tackle the problem of safe Multi-Agent Reinforcement Learning (MARL) within a fully-cooperative multi-agent environment where all agents share a common reward function. The authors propose a novel algorithm known as Multi-Agent First Order Constrained Optimization in Policy Space (MAFOCOPS). This algorithm aims to maximize the expected total reward while ensuring each agent adheres to its safety constraint. The authors claim that MAFOCOPS is easy to implement and provides an approximate upper bound for worst-case constraint violation. To demonstrate its effectiveness, the paper conducts experiments on two safe MARL benchmarks, Safe MAMuJoCo and Safe MAIG. The results show that MAFOCOPS outperforms MACPO.
Strengths: 1. The paper introduces a unique approach to multi-agent reinforcement learning, focusing on a fully-cooperative setting. It provides a new perspective on the problem by considering the influence of an agent's action on the total costs, even if it doesn't directly impact the costs of other agents. This approach captures the realistic multi-agent interactions in the real world, such as the disruption in traffic flow caused by a car running a red light.
2. The paper is well-structured and provides a clear explanation of the proposed method. It also includes a proof in the appendix, demonstrating the mathematical rigor of the work. The paper also compares its method with other algorithms, showing that it maintains a soft safety awareness, unlike other algorithms that reach safety via hard constraints.
3. The approach is significant as it provides a new perspective on multi-agent reinforcement learning, considering the influence of an agent's action on the total costs. The empirical results demonstrate the outstanding performance and computational efficiency of the proposed method, compared to more intricate second-order methods. The paper also suggests future work, planning to test the approach in more environments and physical settings.
Weaknesses: The paper does not evaluate the performance of the algorithm across multiple costs. The benchmarks adopted only return one cost for agents, which limits the comprehensiveness of the evaluation. This could potentially mask some weaknesses or limitations of the proposed method when dealing with multiple costs.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The paper acknowledges that solving the dual problems presented in the optimization process is computationally challenging when dealing with large state/action spaces. Calculating the partition function often involves evaluating a high-dimensional integral or sum, which can be computationally intensive. Moreover, the parameters λj and νj are dependent on the iteration k and need to be adjusted at every iteration to ensure the effectiveness of optimization. This adds to the complexity of the implementation. It would be beneficial if the authors could provide more details on how they plan to address these challenges in practical implementations. Are there any strategies or methods that could be used to reduce the computational complexity?
2. The paper mentions that the proposed method only employs first-order approximations, making it straightforward to implement. However, could the use of only first-order approximations limit the accuracy or effectiveness of the method in certain scenarios?
3. The paper mentions that the proposed method has an approximate upper bound for worst-case constraint violation. Could the authors elaborate more on what this upper bound is and how it was determined?
4. The paper discusses the use of a two-step process to solve the optimization problem. Could there be potential issues with this approach, such as the possibility of getting stuck in local optima?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Based on the information extracted from the paper, the authors have acknowledged some limitations of their work. They mention that due to the benchmarks they used, the performance of their algorithm across multiple costs was not evaluated. They also express their intention to test their approach in more environments and physical settings in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1*: It would be beneficial if the authors could provide more details on how they plan to address computational complexity in practice. Are there any strategies or methods that could be used to reduce the computational complexity?
*A1*: We describe the implementation of these two hyperparameters in Section 4.3. In fact, we note that $\lambda_j$ is similar to the temperature term used in maximum entropy RL [1]. Previous studies have shown that fixed values of $\lambda$ can yield reasonable outcomes [2]. Therefore, in practice, we adopt fixed values of $\lambda_j$ chosen via grid search, allowing us to strike a balance between computational efficiency and model performance.
As for $\nu_j$, we derive Corollary 2, which enables us to apply a gradient descent method to optimize this parameter. In practice, we use Equation (10) to update $\nu_j$. The equation exhibits an intuitive characteristic: it raises $\nu_j$ if $J_j^{i_h}(\boldsymbol{\pi_{\theta_k}}) > c_j^{i_h}$, indicating a violation of the cost constraint, and reduces $\nu_j$ otherwise. Aligning with the update rule employed in MACPO to estimate $J_j^{i_h}(\boldsymbol{\pi_{\theta_k}})$, $\nu_j$ can be updated during training, as shown in the Procedure of our algorithm in the Appendix. We believe these strategies greatly reduce the computational complexity, and the efficiency analysis also illustrates their effectiveness.
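The intuitive dual update described above can be sketched as a projected gradient step. This is a rough illustration of the general pattern, not the authors' exact Equation (10); the learning rate, clipping range, and names are hypothetical.

```python
def update_nu(nu, cost_estimate, cost_limit, lr=0.01, nu_max=2.0):
    """Projected gradient step on the dual variable: increase nu when
    the estimated cost exceeds its limit (constraint violated),
    decrease it otherwise, then project back onto [0, nu_max]."""
    nu = nu + lr * (cost_estimate - cost_limit)
    return min(max(nu, 0.0), nu_max)

# Constraint violated -> nu rises; constraint satisfied -> nu falls.
print(update_nu(1.0, cost_estimate=2.0, cost_limit=1.5, lr=0.1))  # 1.05
print(update_nu(1.0, cost_estimate=1.0, cost_limit=1.5, lr=0.1))  # 0.95
```

Keeping the dual variable non-negative via the projection is what lets it act as an adaptive penalty weight rather than a reward bonus when the constraint is slack.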
*Q2*: The paper mentions that the proposed method only employs first-order approximations, making it straightforward to implement. However, could the use of only first-order approximations limit the accuracy or effectiveness of the method in certain scenarios?
*A2*: We acknowledge that first-order approximations may introduce a trade-off between efficiency and accuracy. However, we would like to emphasize that despite employing first-order approximations, our method exhibits superior empirical performance compared to baseline algorithms. This indicates that the approximations are not detrimental to the overall effectiveness of our approach. Meanwhile, second-order methods will introduce many approximation errors, so whether they are more accurate than first-order methods is hard to determine. Moreover, one of the significant strengths of our method is its obvious improved efficiency compared to MACPO, where the use of first-order approximations is crucial. Even if in some certain scenarios using only first-order approximations may limit the accuracy, we believe that our approach can strike a well-balanced compromise between efficiency and accuracy.
*Q3*: The paper mentions that the proposed method has an approximate upper bound for worst-case constraint violation. Could the authors elaborate more on what this upper bound is and how it was determined?
*A3*: Due to the page limit, the approximate upper bound is given in the Appendix. Here we give a brief description due to the character limit. For each agent $i$, after obtaining the optimal joint update policy for all agents, $J_j^i(\boldsymbol{\pi^*}) \leq J_j^i(\boldsymbol{\pi_{\theta_k}}) + L_{j, \boldsymbol{\pi_{\theta_k}}}^i(\pi^{i_h*}) + \nu_j^{i_h} \sum_{l=1}^n D_{\mathrm{KL}}^{\max}(\pi^{l*}, \pi_{\theta_k}^l)$ can be obtained, where $L_{j, \boldsymbol{\pi}}^i(\bar{\pi}^i) = E_{s \sim \rho_{\boldsymbol{\pi}}, a^i \sim \bar{\pi}^i}[A_{j,\boldsymbol{\pi}}^i(s,a^i)]$. Considering the constraint in the optimization problem, we know that $J_j^i(\boldsymbol{\pi^*}) \leq c_j^{i_h} + \nu_j^{i_h} \sum_{l=1}^n D_{\mathrm{KL}}^{\max}(\pi^{l*}, \pi_{\theta_k}^l)$. Besides, the KL divergence for each $l$ has an upper bound, which we call $\delta^l$. To this end, we obtain $J_j^i(\boldsymbol{\pi^*}) \leq c_j^{i_h} + \frac{2 \gamma \max_{s,a^i}|A^i_{j,\boldsymbol{\pi}}(s,a^i)|}{(1-\gamma)^2} \sum_{l=1}^n \delta^l$, which is the upper bound for the worst-case constraint violation. This bound indicates that with more agents, the optimization becomes more challenging, aligning with our intuition.
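For intuition, the final bound can be evaluated numerically. The sketch below simply plugs illustrative (entirely made-up) values into the closed-form expression to show how the slack term grows linearly with the number of agents' KL radii.

```python
def worst_case_cost_bound(cost_limit, gamma, max_abs_adv, deltas):
    """Evaluate c + 2*gamma*max|A| / (1-gamma)^2 * sum(delta^l).
    'deltas' holds one trust-region (KL) radius per agent, so the
    slack beyond the cost limit scales with the number of agents."""
    return cost_limit + 2 * gamma * max_abs_adv / (1 - gamma) ** 2 * sum(deltas)

# With twice as many agents (same KL radius each), the slack term doubles.
b2 = worst_case_cost_bound(1.0, gamma=0.9, max_abs_adv=1.0, deltas=[0.01] * 2)
b4 = worst_case_cost_bound(1.0, gamma=0.9, max_abs_adv=1.0, deltas=[0.01] * 4)
print(b2, b4)
```

This mirrors the remark above: adding agents loosens the guarantee, so constraint satisfaction becomes harder as the team grows.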
*Q4*: The paper discusses the use of a two-step process to solve the optimization problem. Could there be potential issues with this approach, such as the possibility of getting stuck in local optima?
*A4*: We appreciate your comments and understand your concern. In fact, the idea of solving problems in a nonparametric space and then projecting back into the parameter space has been successfully applied to tackle various challenging problems. For instance, Abdolmaleki et al. [3] adopt an ''inference view'' of policy search and attempt to find the desired policy via the EM algorithm. Montgomery et al. [4] develop a guided policy update method and provide theoretical guarantees on the bounded error of the projection step. In addition, Supervised Policy Update (SPU) [5] solves a constrained optimization problem in the nonparametric policy space and then converts the optimal policy to a parameterized one using supervised regression. These previous works provide theoretical foundations for our methodology. Furthermore, the performance of our algorithm in extensive experiments supports the validity of our approach, showing that the two-step process does not cause our method to get stuck in local optima or suffer other significant issues.
[1] B. D. Ziebart et al., “Maximum entropy inverse reinforcement learning,” in AAAI, vol. 8, 2008, pp. 1433–1.
[2] T. Haarnoja et al., “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” in International Conference on Machine Learning, 2018, pp. 1861–1870.
[3] A. Abdolmaleki et al., “Maximum a posteriori policy optimisation,” in International Conference on Learning Representations, 2018.
[4] W. H. Montgomery and S. Levine, “Guided policy search via approximate mirror descent,” Advances in Neural Information Processing Systems, vol. 29, 2016.
[5] Q. Vuong et al., “Supervised policy update for deep reinforcement learning,” in International Conference on Learning Representations, 2018. | Summary: This paper proposes a first-order method to solve the safety-constrained multi-agent policy optimization problem. The method is evaluated on two benchmarks and shows better performance over the baselines.
Strengths: - The studied problem is important and might interest the community.
- The writing is easy to follow.
Weaknesses: - My main concern is about the correctness of the derivation in the proposed method. The proof of *Theorem 1* says both *Equation (7)*, the objective, and *Equation (8)*, the cost constraint, are **linear** *w.r.t.* $\pi^{i_h}$, which looks strange to me. The equations involve advantage estimation, which definitely has a non-linear relationship with $\pi^{i_h}$. Why would it be linear?
- When solving the optimization problem, in *Corollary 2*, the expectation of the advantage over the optimal policy $\pi^{i_h\ast}$ is assumed to be zero. The advantage means the performance improvement of the optimal policy. How can it be reasonable to assume it to be zero?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Please see the above *Weakness* section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The correctness of the proposed method seems questionable. Several important steps in the derivation are confusing or taken for granted without sufficient argument.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1*: My main concern is about the correctness of the derivation in the proposed method. The proof of Theorem 1 says both Equation (7), the objective and Equation (8), the cost constraint, are linear w.r.t. $\pi^{i_h}$, which looks strange to me. The equations involve advantage estimation which definitely has a non-linear relationship with $\pi^{i_h}$. Why would it be linear?
*A1*: We apologize for the confusion caused by the incorrect description in the proof of Theorem 1. As the policy of an agent is actually a distribution, it may be inappropriate to say the objective (Eq. 7) and cost constraint (Eq. 8) are linear w.r.t. $\pi^{i_h}$. We appreciate your attention to this matter. In fact, the core intention is to demonstrate that Problems (7) and (8) are convex. Here, because $\boldsymbol{\pi_{\theta_k}}$ and $\theta_{k+1}^{i_{1:h-1}}$ are given, $E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_{1:h-1}}\sim \pi_{\theta_{k+1}}^{i_{1:h-1}},a^{i_h}\sim \pi^{i_h}}[A_{\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_{1:h-1}}, a^{i_h})]$ is similar to $E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_h}\sim \pi^{i_h}}[A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})]$, so we only consider the latter. We can expand the formula as follows: $E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_h}\sim \pi^{i_h}}[A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})] =\sum_s \rho_{\boldsymbol{\pi_{\theta_k}}}(s) \sum_{a^{i_h}} \pi^{i_h}(a^{i_h}|s) A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})$, where $\rho_{\boldsymbol{\pi_{\theta_k}}}(s)$ represents state visitation frequencies. We can easily see that $\rho_{\boldsymbol{\pi_{\theta_k}}}(s)$ is not affected by $\pi^{i_h}$. Similarly, for each action of agent $i_h$, $A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})$ also does not depend on $\pi^{i_h}$. Therefore, when $\boldsymbol{\pi_{\theta_k}}$ and $\theta_{k+1}^{i_{1:h-1}}$ are given, $E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_h}\sim \pi^{i_h}}[A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})]$ is convex with respect to $\pi^{i_h}$. The same analysis applies to Equation (7) as well, confirming the convexity of both the objective and the cost constraint. Moreover, the strong empirical performance also supports the correctness of our algorithm to some degree.
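A tiny numerical check (toy sizes, illustrative only) of the argument above: once the state visitation frequencies and the advantages are held fixed, the expectation is linear, and hence convex, in the policy probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 3                                    # small toy state/action spaces
rho = rng.random(S); rho /= rho.sum()          # state visitation frequencies (fixed)
Adv = rng.standard_normal((S, A))              # advantages (fixed, independent of pi)

def expected_adv(pi):
    # E_{s~rho, a~pi}[A(s,a)] = sum_s rho(s) sum_a pi(a|s) A(s,a)
    return float(np.sum(rho[:, None] * pi * Adv))

def random_policy():
    p = rng.random((S, A))
    return p / p.sum(axis=1, keepdims=True)

pi1, pi2 = random_policy(), random_policy()
alpha = 0.3
mix = alpha * pi1 + (1 - alpha) * pi2          # convex combination of policies
lhs = expected_adv(mix)
rhs = alpha * expected_adv(pi1) + (1 - alpha) * expected_adv(pi2)
assert abs(lhs - rhs) < 1e-10                  # linear, hence convex, in pi
```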
*Q2*: When solving the optimization problem, in Corollary 2, the expectation of the advantage over the optimal policy $\pi^{i_h*}$ is assumed to be zero. The advantage means the performance improvement of the optimal policy. How can it be reasonable to assume it to be zero?
*A2*: We are sorry that we may have omitted some derivation steps in the manuscript. As we employ an intuitive heuristic based on primal-dual gradient methods [1], we apply gradient descent to estimate $\nu_j$ during the optimization for agent $i_h$. As for the last term in the gradient expression, direct evaluation may be infeasible since $\pi^{i_h*}$ may not lie in the parameterized policy space. However, we know that the optimal policy $\pi^{i_h*}$ satisfies the constraint of Equation (3). This allows us to assume that $\pi^{i_h*}$ and $\pi_{\theta_k}^{i_h}$ are very close. Following the standard definitions in reinforcement learning, $A_\pi(s,a)=Q_\pi(s,a)-V_\pi(s)$ and $V_\pi(s)=\sum_a \pi(s,a) Q_\pi(s,a) =E_{a \sim \pi}[Q_\pi(s,a) ]$, so $E_{a \sim \pi}[A_\pi(s,a)]=\sum_a \pi(s,a) A_\pi(s,a)=\sum_a \pi(s,a) Q_\pi(s,a)-\sum_a \pi(s,a) V_\pi(s)=\sum_a \pi(s,a) Q_\pi(s,a) -V_\pi(s)=0$. We can therefore adopt $\pi_{\theta_k}^{i_h}$ to approximate the expectation of the advantage over $\pi^{i_h*}$ and get $E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_h} \sim \pi^{i_h*}}[A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})] \approx E_{s \sim \rho_{\boldsymbol{\pi_{\theta_k}}},a^{i_h} \sim \pi_{\theta_k}^{i_h}}[A_{j,\boldsymbol{\pi_{\theta_k}}}^{i_h}(s, a^{i_h})]=0$.
This approximation may bring some error, so we also introduce a per-state acceptance indicator function $I(s_j)=\boldsymbol{1}\_{D\_{KL}(\pi\_\theta^{i_h},\pi\_{\theta_k}^{i_h}) \leq \delta}$ to enforce that the updated policy remains close to $\pi_{\theta_k}^{i_h}$. Our experimental results also demonstrate that the proposed approximation is reasonable and leads to satisfactory performance in practice.
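The identity $E_{a \sim \pi}[A_\pi(s,a)] = 0$ used in A2 can be verified numerically on a toy tabular setting (all sizes and values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 5, 4
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)  # a valid policy
Q = rng.standard_normal((S, A))                               # arbitrary Q-values

V = (pi * Q).sum(axis=1)            # V_pi(s) = E_{a~pi}[Q_pi(s,a)]
Adv = Q - V[:, None]                # A_pi(s,a) = Q_pi(s,a) - V_pi(s)

exp_adv = (pi * Adv).sum(axis=1)    # E_{a~pi}[A_pi(s,a)], per state
assert np.allclose(exp_adv, 0.0)    # zero in every state, as derived in A2
```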
Finally, thank you again for your comments. We will incorporate your suggestions and describe our method more clearly in our next revision. If your concerns have been addressed, we would be grateful if you would consider raising your rating; it is very important to us.
[1] D. P. Bertsekas, Constrained optimization and Lagrange multiplier methods. Academic press, 2014.
---
Rebuttal Comment 1.1:
Title: Similar to the FOCOPS paper
Comment: Thanks for the rebuttal. However, after reading the response and the cited works in the paper in more detail, I find the paper has mostly repeated the theory from the single-agent FOCOPS paper [1] while only briefly mentioning that paper. Roughly, the paper can be summarized as applying the first-order optimization method in [1] to the MARL framework in [2]. Moreover, the multiple importance ratio introduced in [2] may actually incur unstable and degraded performance.
As such, I would not recommend acceptance of the paper to the NeurIPS venue.
[1] Zhang, Yiming, Quan Vuong, and Keith Ross. "First order constrained optimization in policy space." Advances in Neural Information Processing Systems 33 (2020): 15338-15349.
[2] Kuba, Jakub Grudzien, et al. "Trust region policy optimisation in multi-agent reinforcement learning." arXiv preprint arXiv:2109.11251 (2021).
---
Reply to Comment 1.1.1:
Comment: Thanks for the comment. As for the multiple importance ratio, we believe the reviewer is referring to the $M^{i_{1:h}}(s, \pmb{a})$ in [1]. In fact, it is derived from the sequential update scheme used to model the correlations among agents in multi-agent problems, which has been adopted by quite a few works [1, 2, 3, 4]. This framework has been rigorously proven and shows good performance in solving multi-agent problems. We therefore believe it helps address the challenges of MARL rather than incurring unstable or degraded performance.
In addition, we would like to emphasize that our manuscript aims to extend and apply the first-order optimization method to the safe multi-agent problems, where there are very few studies. We believe our work offers new insights into the challenges and opportunities presented by multi-agent settings, which are considerably more complex than single-agent scenarios.
[1] J. G. Kuba, R. Chen, M. Wen, Y. Wen, F. Sun, J. Wang, and Y. Yang, “Trust region policy optimisation in multi-agent reinforcement learning,” in International Conference on Learning Representations, 2022.
[2] Wen, Muning, et al. "Multi-agent reinforcement learning is a sequence modeling problem." Advances in Neural Information Processing Systems 35 (2022): 16509-16521.
[3] S. Gu, J. G. Kuba, Y. Chen, Y. Du, L. Yang, A. Knoll, and Y. Yang, “Safe multi-agent reinforcement learning for multi-robot control,” Artificial Intelligence, p. 103905, 2023.
[4] Kuba, Jakub Grudzien, et al. "Heterogeneous-agent mirror learning: A continuum of solutions to cooperative marl." arXiv preprint arXiv:2208.01682 (2022). | Summary: The paper builds over previous work (1) to construct a safe policy optimization algorithm based on first-order methods. It shows empirical improvement over some baselines.
Strengths:
# quality
The quality of this work is fairly satisfactory, as all the needed theory is introduced and a practical counterpart is proposed.
# clarity
The exposition is clear
Weaknesses: # originality
The work does not seem particularly original, since it is a fairly direct extension of a previously cited paper.
# significance
I might say that the work does not seem significant enough compared to related works. For instance, one positive fact referred to in the abstract (upper bound violation) is a direct property of using the previous algorithm with an additional constraint.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I don't have questions
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I didn't find any section where the limitations of the proposed method were addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1*: The work does not seem particularly original, since it is a fairly direct extension of a previously cited paper.
*A1*: We believe our work is not merely an extension of a previously cited paper, which we take to be MACPO. As discussed in the paper, although MACPO offers an impressive solution for safe multi-agent reinforcement learning, it does have some limitations. For example, obtaining parameterized policies involves second-order optimization, resulting in complex computation and implementation. Furthermore, the approximation during the optimization process introduces non-negligible approximation error, necessitating additional steps during each update in the training process to recover from constraint violations. In contrast, our work proposes a fundamentally different way to tackle the optimization problem, for which we provide full derivations. We design a two-step approach to solve the optimization problem and prove that a first-order method suffices for the optimization. This significant departure from MACPO's methodology sets our work apart from the previously cited paper.
To reinforce the distinctiveness of our contribution, we conduct extensive experiments comparing our algorithm with MACPO, in both performance and computational efficiency. The results of these experiments also highlight the outstanding performance of our algorithm. To sum up, we think our work offers a novel and distinct contribution to the field of safe multi-agent reinforcement learning, as evidenced by the differences in methodology, proofs, and experimental results when compared to the previously cited paper.
*Q2*: I might say that the work does not seem significant enough compared to related works. For instance, one positive fact referred to in the abstract (upper bound violation) is a direct property of using the previous algorithm with an additional constraint.
*A2*: As introduced in our work, safe multi-agent reinforcement learning holds vital significance, as many applications require agents to refrain from taking certain actions or visiting particular states in the real world. Among related research, MACPO achieves state-of-the-art performance. Compared to this second-order method, our work not only achieves superior performance but also improves computational efficiency, with rigorous theoretical guarantees while relying exclusively on first-order optimization techniques. These advantages are crucial, particularly when dealing with large state/action spaces and complex scenarios. Regarding the mentioned upper bound on violation, it serves as an important safety guarantee in safe reinforcement learning, ensuring that the degree of constraint violation can be controlled within a certain range even in worst-case scenarios. It should be noted that this property is not a direct consequence of adding a constraint, since adding such a constraint could still lead to violations. Taking MAPPO-Lagrangian as an example: as a soft-constraint algorithm, whether it satisfies any worst-case constraint guarantee remains unknown. The empirical results in Safe Multi-Agent-Isaac-Gym clearly show that it exhibits a delay in guaranteeing safety and even a sudden drop in performance when enforcing the safety constraint. This indicates that the worst-case violation bound is important, which also demonstrates the significance of our work.
Finally, thank you again for your comments. If your concerns have been addressed, we would be grateful if you would consider raising your rating; it is very important to us.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the Authors for the clarifications; I believe all my doubts have been clarified and I am raising my score. I would suggest adding a section (maybe on the +1 page of the camera-ready version) that clarifies the relationship with MACPO, as in the Authors' reply. It would be of high value, not only to help the reader understand the main point more clearly, but also because I believe some of the contributions lie in the step forward with respect to it.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for raising your score after considering our clarifications. We appreciate your suggestion and agree that adding clarifications about the relationship between our algorithm and MACPO would be of high value to the readers. We will incorporate your suggestion and include the suggested section in the revised version of the paper. Thank you once again for your thoughtful comments and for helping us improve our manuscript. | Summary: The paper proposes a new method called first-order constraint optimization in multi-agent policy space (MAFOCOPS), which solves constraint optimization problems in a non-parametric policy space and then projects the updated policy back into the parametric policy space to achieve feasible strategies that meet safety constraints. Experimental results show that the proposed method has achieved remarkable performance while satisfying safe constraints on multiple safe MARL benchmarks.
Strengths: The paper proposes a new method called Multi-Agent First Order Constrained Optimization in Policy Space (MAFOCOPS) to address the challenge of developing safety-aware methods for multi-agent reinforcement learning (MARL). The proposed approach solves the constrained optimization problem in a non-parametric policy space and then projects the updated policy back into a parametric policy space, ensuring feasible policies that satisfy safety constraints.
The MAFOCOPS method has first-order characteristics, making it relatively easy to implement compared to more complex optimization methods. Additionally, it provides an approximate upper bound for constraint violations in worst-case scenarios. Experimental results demonstrate that the MAFOCOPS method achieves significant performance improvements in multiple safety MARL benchmarks while simultaneously satisfying safety constraints. This indicates that the proposed approach effectively balances performance and safety considerations. Unlike existing safety MARL methods that are often tailored to specific tasks or rely on specific assumptions and techniques, the MAFOCOPS method is designed to be applicable to a wide range of multi-agent systems and tasks. This generalizability enhances the practicality and versatility of the proposed approach.
Weaknesses: The performance of the method is sensitive to certain hyperparameters, such as the Lagrange multipliers and the safety bound. While the paper claims that the method is relatively insensitive to variations in these hyperparameter values, it would be helpful to provide a more detailed analysis of the sensitivity and its impact on performance.
The paper does not provide information on the implementation details of the proposed method, such as the specific algorithms used for policy optimization or the computational complexity of the method. Providing these details would help in assessing the practical feasibility and efficiency of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How can we select the best hyperparameter values for a new environment which hasn’t been met before, is there any search method like grid search or multi-arm bandit?
2. Is it possible to introduce more baseline algorithms for comparison?
3. I didn’t see a detailed explanation of whether MAFOCOPS uses Monte Carlo return or bootstrap return in the Safe-Multi-Agent-Isaac-Gym in the paper. I am curious if this factor will have a significant impact on the results. If possible, please provide experimental results using these different target returns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The penultimate section of the paper contains a discussion of the main limitations. The method itself should not have a direct negative outcome. However, note that in practical applications they could occur. Some examples and ways to address them could be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1*: The performance of the method is sensitive to certain hyperparameters, such as the Lagrange multipliers and the safety bound. While the paper claims that the method is relatively insensitive to variations in these hyperparameter values, it would be helpful to provide a more detailed analysis of the sensitivity and its impact on performance.
*A1*: Due to the page limit, the results of our sensitivity analysis are presented in the appendix. Considering the complexity of multi-agent environments, establishing a precise correlation between the performance of our algorithm and the Lagrange multipliers $\lambda_j$ and $\nu_{max}$ is difficult. From the results, it can be observed that different scenarios have different sensitivities to these hyperparameters. Overall, the reward achieved under different settings is relatively insensitive, as even setting these parameters across a broad range only leads to an average degradation of less than 10\%. On the other hand, the cost may be more sensitive to these parameters, highlighting to some degree the inherent challenges in ensuring safety guarantees in safe multi-agent reinforcement learning.
As for the safety bound, we can see that setting it too high could lead to increased oscillations in cost performance, although it may yield better reward performance. This observation is reasonable, since higher safety bounds may allow agents to explore actions with potentially higher returns but less safety. If the safety bound is low enough, the agents will take actions that are definitely safe, leading to less oscillation. Nevertheless, from a global perspective, the effectiveness of our algorithm remains consistent across these different safety levels. In general, we need to strike a balance between ensuring safety and achieving good reward performance when applying the method.
*Q2*: The paper does not provide information on the implementation details of the proposed method, such as the specific algorithms used for policy optimization or the computational complexity of the method. Providing these details would help in assessing the practical feasibility and efficiency of the proposed approach.
*A2*: Thank you for your advice; we will provide more analysis in a future revision. In fact, we introduce some details about our algorithm in Section 4.3 and the full algorithm in the appendix. Here we also give some basic information for convenience. For the training process, after initialising the needed parameters and networks, we utilize the models to generate data and then estimate the advantages. Then the update of $\nu_j$ is performed using Equation (10) in the manuscript. After that, we update the value networks using a Mean Square Error loss and update the policy networks using the gradient $\nabla_\theta L(\theta)$ expressed in Equation (11). During training, our algorithm also employs an early-stopping criterion to ensure that the updated policy satisfies the trust-region constraint. In addition, the computational complexity of our algorithm is $\mathcal{O}(NKMHP)$, where $N$ denotes the number of agents, $K$ the number of updates per epoch, $M$ the number of training steps, $H$ the number of constraints, and $P$ the number of parameters.
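As an illustration only, and not the exact Equation (10) from the manuscript, a generic primal-dual multiplier update of the kind referenced in A2 might look like the following sketch (the learning rate and `nu_max` values are arbitrary placeholders):

```python
import numpy as np

def dual_update(nu, cost_estimate, cost_limit, lr=0.01, nu_max=2.0):
    """Generic primal-dual ascent step on a Lagrange multiplier:
    increase nu when the cost constraint is violated, decrease it
    otherwise, then project back onto [0, nu_max]."""
    return float(np.clip(nu + lr * (cost_estimate - cost_limit), 0.0, nu_max))

nu = 0.5
# Constraint violated (cost 1.2 > limit 1.0): the multiplier grows.
nu = dual_update(nu, cost_estimate=1.2, cost_limit=1.0)
assert nu > 0.5
# Constraint satisfied: the multiplier shrinks, but never below 0.
nu = dual_update(nu, cost_estimate=0.2, cost_limit=1.0, lr=1.0)
assert nu >= 0.0
```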
*Q3*: How can we select the best hyperparameter values for a new environment which hasn't been met before, is there any search method like grid search or multi-arm bandit?
*A3*: In our work, when we conduct the experiments in a new environment, we use grid search to select the hyperparameters, which is a commonly used method for hyperparameter tuning.
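A minimal grid-search skeleton of the kind mentioned in A3; all hyperparameter names and the `evaluate` stub are hypothetical placeholders, not values from the paper:

```python
from itertools import product

# Hypothetical hyperparameter grid for a new environment (names illustrative).
grid = {
    "lr":         [3e-4, 1e-3],
    "nu_max":     [1.0, 2.0],
    "cost_limit": [0.5, 1.0],
}

def evaluate(cfg):
    # Placeholder for a short training run returning (reward, cost).
    return sum(cfg.values()), cfg["cost_limit"] * 0.5

best = None
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    reward, cost = evaluate(cfg)
    # Keep the best-reward config among those satisfying the cost constraint.
    if cost <= cfg["cost_limit"] and (best is None or reward > best[0]):
        best = (reward, cfg)

assert best is not None
```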
*Q4*: Is it possible to introduce more baseline algorithms for comparison?
*A4*: As we mentioned in our work, safe multi-agent reinforcement learning is a nascent area of research and there are few related works available. Moreover, most current algorithms have certain limitations or are tailored to robotics tasks, whose settings are very different from ours. Therefore, we adopt the state-of-the-art (SOTA) algorithms in this field, namely MACPO and MAPPO-Lagrangian [1], as the baseline algorithms; these were published in the well-known journal Artificial Intelligence. We think that the comparison with these two baseline methods convincingly demonstrates the effectiveness of our algorithm.
*Q5*: I didn't see a detailed explanation of whether MAFOCOPS uses Monte Carlo return or bootstrap return in the Safe-Multi-Agent-Isaac-Gym in the paper. I am curious if this factor will have a significant impact on the results. If possible, please provide experimental results using these different target returns.
*A5*: Thank you for your advice. In fact, as mentioned in our work, we use the Generalized Advantage Estimator (GAE) [2] to estimate the advantages. As the Monte Carlo return is known to have low bias but high variance while the bootstrap return has low variance but high bias, GAE achieves a balance between the two, making it widely adopted for return estimation in policy-based methods. Considering that both baseline algorithms use this method for advantage estimation, we also adopt it based on its effectiveness and compatibility with our approach. As the hyperparameters of GAE remain the same across our experiments, we believe that the superior performance of our approach is sufficient evidence of its effectiveness.
[1] S. Gu, J. G. Kuba, Y. Chen, Y. Du, L. Yang, A. Knoll, and Y. Yang, “Safe multi-agent reinforcement learning for multi-robot control,” Artificial Intelligence, p. 103905, 2023.
[2] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel, “High-dimensional continuous control using generalized advantage estimation,” arXiv preprint arXiv:1506.02438, 2015.
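For reference, the GAE estimator from [2] discussed in A5 can be sketched as follows; at $\lambda = 1$ it reduces to the (high-variance) Monte Carlo advantage and at $\lambda = 0$ to the (high-bias) one-step bootstrap residual. The reward and value numbers are arbitrary:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (Schulman et al., 2015):
    A_t = sum_l (gamma * lam)^l * delta_{t+l}, with TD residual
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
    `values` carries one extra entry for the bootstrap value V(s_T)."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

r = np.array([1.0, 0.0, 1.0])
v = np.array([0.5, 0.4, 0.3, 0.2])
# lam=0 recovers the one-step TD residual at every timestep.
a_td = gae(r, v, lam=0.0)
assert np.isclose(a_td[0], 1.0 + 0.99 * 0.4 - 0.5)
```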
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing detailed answers to the questions I raised. However, since the performance demonstrated by multi-agent reinforcement learning (MARL) algorithms mostly relies on hyperparameter tuning, adding safety settings to a MARL algorithm further raises the level of "tricky" tuning. I would actually prefer to see the authors present more ablation results under different hyperparameter search strategies. Based on the above, I choose to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and comments. We understand your interest in seeing more ablation results under different hyperparameter search strategies in MARL research. In fact, few works pay attention to hyperparameter tuning in MARL, while many works still show good effectiveness. We agree that this is a valuable question, and we will conduct more experiments and research in this direction in the future.
Rebuttal: Thank all reviewers for your time and valuable suggestions. We hope our rebuttal could address your concerns. We would appreciate it if you could re-evaluate our submission and we are looking forward to discussions if you have any other concerns. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets | Accept (poster) | Summary: This manuscript is motivated by training a universal object detector on a mixture of an unlimited number of datasets. Concretely, previous works cannot scale MoE due to budget, and conventional MoE with balanced routing suffers from knowledge-sharing issues. To tackle these issues, the authors devise a Dataset Aware Mixture of Experts (DAMEX), in which a novel MoE loss is proposed to learn the dataset index distribution. Extensive experiments are then conducted in various settings to show the effectiveness.
Strengths: 1. The paper is well-organized and well-motivated. The authors provide a detailed explanation of the conventional MoE and discuss the motivation.
2. The idea of this manuscript is self-consistent: the motivation is to tackle the knowledge-distribution shift among datasets for existing MoE, and $\mathcal{L}_{DAMEX}$, which pushes the router to learn the dataset index distribution, seems to work intuitively.
3. The experiments are extensive and adequate to demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The contribution of this manuscript is limited. All of the technical contributions are summarized in the DAMEX loss function, which does not seem to settle the dataset distribution shift well. Concretely, as shown in Tab. 1, DAMEX fails to balance the results among datasets. As far as I am concerned, the dataset shift can be treated as hardness; to balance it, a larger improvement should be gained on harder datasets. Compared with the baseline DINO-MoE, the improvement of DAMEX concentrates on KITTI (3.9), COCO (1.1), DOTA (1.3), Clipart (2.4), Watercolor (3.6). There is no strong correlation with dataset hardness. Further discussion is welcome.
2. The manuscript contains an overstated claim. The authors claim that they outperform the state of the art by almost 10% AP. I cannot agree that a paper published in 2019 can be called SOTA; DINO appears to be the SOTA.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Why do the authors repeatedly mention GPUs? Is there any efficiency-oriented design in DAMEX? Since the multi-GPU MoE implementation is described in Sec. 3, is it necessary to keep mentioning the GPU setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As discussed in Weakness, I don' t agree with the authors tackle the dataset shift. But this can be further discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for investing their time in reviewing our work and suggesting improvements.
Q. “The contributions of …”
We want to thank the reviewer for their feedback and wish to re-highlight our contributions.
- DAMEX does not require test-time dataset labels, as DAMEX learns to route to the appropriate dataset expert during training itself, as mentioned in L226 under “Baselines and Metrics”. This is a more challenging setup compared to Wang et al. [1] or individual detectors, which have test-time dataset source information. Our observation is that providing supervision over expert selection in DAMEX eliminates the need for the dataset source during inference, as each expert learns dataset-specific features.
- DAMEX avoids mode collapse of vanilla MoE by learning dataset-specific experts. Furthermore, DAMEX provides better representation learning by utilizing human-prior over dataset characteristics as a part of its dataset to expert mapping. This can be seen in Table A2 (rebuttal pdf) where DAMEX mapping is compared against a random assignment and vanilla MoE. We would like to extend our gratitude to R1 for suggesting this experiment and making our manuscript better.
- The number of parameters of DAMEX is smaller than the baselines, with individual detectors having 11x more parameters and previous works [2] having 1.5x more parameters. Yet, we observe consistent improvements from DAMEX on (1) the UODB set, (2) limited data availability, (3) disparate domains, and (4) divergent label sets.
We note the reviewer’s point of view regarding harder datasets and observe that DAMEX gains are distributed across multiple dataset domains: natural images, traffic images, aerial images, and style images. Further, prior to this work, MoE was considered only a scalability tool, and DINO-MoE had not been implemented in this context before. We believe that DINO-MoE is a strong baseline and is part of our contribution in showcasing that MoE can handle mixtures of datasets well. A fair comparison would be against DINO, where we observe large gains across the UODB set.
Q. “The manuscript has …”
Duly noted. We discuss this concern in the common answers and plan to update the paper accordingly.
Q. “Why the authors …”
Thanks for bringing this to our notice. As you pointed out, the implementation is discussed in Sec. 3. In the final draft, we will reduce the repeated mentions of the GPU setting.
[1] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7289–7298, 2019 | Summary: The paper aims to develop a universal object detector that is applicable to a wide range of object detection datasets. For this aim, the authors proposes Dataset-aware Mixture of Experts (DAMEX). DAMEX is an extension of the vanilla Mixture of Experts layer to the multi-dataset scenario, in which each dataset is aimed to be routed to a specific expert. The authors also suggest replacing the fully connected layer in the transformer blocks by DAMEX layers. They incorporate such transformer blocks into DINO object detector. The experiments show that the approach improves upon previous SOTA as well as training a detector specifically for each dataset.
Strengths: - The paper adapts MoEs to obtain universal object detection, which is intuitive.
- The approach does not increase the number of parameters significantly while obtaining a MoE for several datasets.
- The gains in terms of AP are significant over the previous baseline on the UODB benchmark. Also, the improvement over training a different DINO for each dataset is notable at ~4 AP. The method also outperforms using a vanilla MoE by 1.3 AP.
- The analysis part is comprehensive and it shows that the method does work as expected. For example, the model learns to assign the experts to the datasets properly.
- Generally speaking, the paper is clearly written.
Weaknesses: 1. I think this task is closely related to open vocabulary object detection, on which there are already several different baseline methods. Such models are also evaluated on a set of multiple object detection datasets, e.g. using Object Detection in the Wild benchmark [A]. As an example method, GLIP [A] can easily be prompted given the union of the label spaces of each dataset in UODB benchmark. By considering recent advances, I'd expect at least an example of these models to be considered as a baseline.
2. The authors use the UODB benchmark (compared to Wang et al., 2019) to evaluate their method. This benchmark normally uses Pascal VOC-style AP. Here, the authors use COCO-style AP. As a result, the AP values in this paper are significantly lower than those in Wang et al., 2019, owing to the choice of evaluation measure. It took me a while to dig into this and see the difference in evaluation. I'd expect this to be made very clear in the metrics section.
3. Please note that the index k defining the summation is not used in p_i or e_i. I'd recommend checking that equation.
[A] Grounded Language-Image Pre-training, CVPR 2022
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Do you think open vocabulary object detection methods are also valid baselines for this task? Specifically, why don't you use a model such as GLIP to compare your method against?
2. You mention that each GPU stores the weights of one expert. Following from this, do you pay specific attention to sampling the batch? What happens if all images (16 in total across all GPUs) come from the same dataset, in which case they should be sent to the same GPU, potentially causing memory issues?
3. The learning rate of the proposed method is tuned to be 1.414e-4. Did you tune the learning rate for each dataset while obtaining the single domain models (the first two lines in Table 1)? If so, how?
4. Why did you prefer using COCO-style AP instead of the performance measure of the benchmark that is Pascal VOC style AP?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time in reviewing our work.
Q. “I think this …”
We note the review feedback and their suggestion for GLIP. We answer the question under common answers and report the results in Table A1 (rebuttal pdf).
Q. “The authors use UODB…”
We understand the reviewer's concerns regarding the metrics used in our work and in Wang et al. [2]. We consistently report the COCO AP score as the metric throughout the paper (L233 and L238), as it is an average over multiple IoU thresholds between 0.5 (coarse localization) and 0.95 (near-perfect localization). The Pascal-style AP score is calculated at a fixed IoU threshold of 0.5. Nevertheless, to remove any doubts, we have provided Pascal-style AP scores of DAMEX against all the baselines in Table A3 of the rebuttal pdf.
Q. “Please note…”
We thank the reviewer for catching this typo, we will fix it in the camera-ready version.
Q. “Do you think …”
We covered this question previously.
Q. “You mention that …”
We are grateful to the reviewer for raising this insightful point. Since the datasets are extremely imbalanced, with Watercolor having only 500 train images vs. the 23k images of DeepLesion, we found in our initial experiments that uniform sampling across training images worked best for overall performance; for example, sampling based on inverse dataset size hurts overall performance. However, we agree that there may be a better sampling strategy for UODB that could improve DAMEX performance further, and this is orthogonal to our contributions.
In the scenario mentioned, the method does not suffer from memory issues but a decrease in speed as all tokens are processed through a single expert. This bottleneck exists in vanilla MoE also and is an inherent feature of Mixture-of-Experts.
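To make the sampling point above concrete, here is a toy sketch (the dataset sizes are rounded from the rebuttal; the names and draw count are our own illustrative choices) of why uniform sampling over the pooled training images lets the larger dataset dominate each batch:

```python
import random
from collections import Counter

# toy sizes echoing the rebuttal: Watercolor ~500 vs. DeepLesion ~23k train images
sizes = {"watercolor": 500, "deeplesion": 23000}

# pool every training image; uniform sampling is then a uniform draw over this pool
pool = [name for name, n in sizes.items() for _ in range(n)]

random.seed(0)
draws = Counter(random.choice(pool) for _ in range(10_000))

# uniform-over-images sampling: DeepLesion appears roughly 46x as often as Watercolor,
# which is the imbalance the authors say nevertheless worked best overall
print(draws)
```

Inverse-dataset-size sampling would instead draw each dataset name with equal probability, which the rebuttal reports hurt overall performance.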
Q. “The learning rate …”
Duly noted. We set the learning rate to sqrt(2)*1e-4 following Krizhevsky's [1] recommendation that the learning rate scale with the square root of the batch size, as we used a batch size of 2 during training. We followed the same strategy for all baselines for a fair comparison.
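As a sketch of that scaling rule (the base batch size of 1 is our assumption here, chosen so that the numbers match the rebuttal):

```python
import math

def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Square-root learning-rate scaling (Krizhevsky, 2014):
    the learning rate grows with the square root of the batch-size ratio."""
    return base_lr * math.sqrt(batch / base_batch)

# a base lr of 1e-4 at batch size 1, scaled to the batch size of 2 used in
# training, reproduces the sqrt(2)*1e-4 ≈ 1.414e-4 quoted above
print(scaled_lr(1e-4, 1, 2))
```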
Q. “Why did you prefer …”
Answered previously.
We hope it will resolve all the concerns!
[1] Krizhevsky, A., 2014. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997.
[2] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7289–7298, 2019
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledged
Comment: Thanks for your time and effort to address my concerns. I am happy to keep my tendency for acceptance. | Summary: This work, motivated by the goal of developing a “universal detector,” seeks to understand how best to train a model across a large set of existing curated, labeled datasets that might differ in collection strategy, labeling standards, categories of interest, etc. They posit that the best strategy is to train dataset-specific features within a model that can be used to jointly predict on new images via mixture-of-experts, where dataset tokens are used to route images through the network based on the “expertise” of the learned features effectively and with many fewer parameters than prior multi-headed approaches. They show nice improvement using their method on the Universal Object-Detection Benchmark, and demonstrate that their method reduces representation collapse, a common issue with MoE training.
After reading the authors' rebuttal and discussing with them, I will maintain my score of 6.
Strengths: This work, which extends MoE-based dataset mixing to object detection, is differentiated from prior work using MoE for dataset mixing in image classification via the inclusion of specific per-dataset-expertise-routing layers within a larger shared-across-datasets architecture. They distribute expertise via a load balancing loss (similar to prior work), which they adapt to the detection setting by applying it only to foreground tokens (reducing the influence of background on expertise routing). They use explicit dataset tokens to route information to specific experts during training and inference, which requires train- and test-time dataset labels. DAMEX particularly seems to do well over vanilla MoE in a few-shot setting, as seen in the figure in Table 2.
Weaknesses: The claims of performance improvements of 10% on average are slightly misleading, as there are probably many additional contributing factors, not attributable to their method, that led to prior reported numbers being lower. They see only a 2.3% improvement on average over vanilla DINO, and their contribution of a dataset-specific routing layer only improves performance over DINO-MoE by 1.3%. On many test sets their method underperforms by a significant margin. I would appreciate a more nuanced and representative claim, and some additional investigation and analysis as to why their method performs better on, e.g., DOTA, and much worse on Watercolor or LISA.
Since the dataset tokens are explicit, I believe the model would need to be completely retrained to add a single additional source of data, which is (perhaps prohibitively) inefficient. Maybe an extension of this work would be to enable add-on routing/expertise generation for new mixing datasets without end-to-end retraining across all datasets.
Nit:
I’m not sure the citation style matches the NeurIPS standard?
Grammar is inconsistent and frequently in error throughout, with too many small mistakes to reasonably capture in this review. I would recommend significant text editing to increase the readability of a camera-ready manuscript for this work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How would you add an additional dataset into the mix with this method? Would it require training from scratch?
It seems in figure 3 that despite the load balancing, expertise is heavily relying on “expert 1” for many of the classes. How is this related to the class distribution across datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: More nuanced evaluation and discussion of failure modes of their method on some datasets but high performance on others would help the reader gain intuition as to when this method would be worth implementing and applying. What about the individual dataset structure contributes to when this dataset-specific routing helps vs hurts?
I do appreciate the discussion of label unification as a current limitation.
The comment “Going forward it is important that we ensure that the universal object detector are unbiased and maintain fairness in their predictions” feels like a throwaway. How would this work possibly contribute to that or how could it be adapted to contribute to that goal? How is the current paradigm possibly entrenching bias?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time in understanding our work and providing feedback to improve the manuscript.
Q. “The claims of performance…”
Duly noted. As mentioned in common concerns, we will change our writing to better convey the performance gains to the readers.
We understand the reviewer's concern regarding why some baselines are better on a few datasets, and we believe it is because:
- Individual detectors tend to overfit on their datasets and have many more parameters.
- We chose DINO as our detection pipeline as it is the SOTA unimodal object-detection method. With DINO as the base architecture, we observe that DAMEX improves further on the base numbers, and we see consistent gains across datasets with the best overall performance. Note that DAMEX is an architecture-agnostic idea of using MoE in a dataset-aware setting.
- Individual detectors and Wang et al. have knowledge of dataset-source during test time which is absent for DAMEX (L105-107).
We thank the reviewer for this insightful question and will update the writing to share these nuanced reasons.
Q. “Since the dataset tokens…”
Yes, this is a very good point by the reviewer. In our current setup, the addition of a new dataset requires re-training from scratch. We agree that an extension of this work would be the addition of a new dataset in a continual-learning setting, by introducing a new expert alongside some frozen layers in the shared network. We avoided delving into this scenario as it would have diluted the core idea of DAMEX. We encourage the community to pursue this as future work.
Q. Question regarding Fig 3.
We understand the issues with fig 3 and have cleared them up under common answers by providing a new figure (Figure A1, rebuttal pdf).
Q. “What about the individual …”
We agree with the reviewer's thinking. Individual dataset structure is important in assessing the dataset-expert mapping. In our experience, assigning datasets from similar domains to the same expert tends to help performance, as does keeping disparate domains on separate experts. We will add this to the camera-ready manuscript.
Q. Nits
We will adopt the suggested citation style and improve the grammar in the camera-ready manuscript. We will also make the changes to the Limitations section suggested by the reviewer.
---
Rebuttal Comment 1.1:
Title: What about the potential implications of the work on bias and fairness, and mentioned in the text by the authors?
Comment: As I mentioned in my review, "The comment “Going forward it is important that we ensure that the universal object detector are unbiased and maintain fairness in their predictions” feels like a throwaway. How would this work possibly contribute to that or how could it be adapted to contribute to that goal? How is the current paradigm possibly entrenching bias?"
I would appreciate if the authors respond to this comment, as I believe engaging more deeply with broader questions about the implications of our work is an important and necessary opportunity for growth in our field.
I thank the authors for their thoughtful responses to my other questions and concerns.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their engagement and discussion.
Universal object detectors should be unbiased and fair. Here, unbiased refers to classes with very few training examples or under domain shift, which can be called underrepresented classes. The detectors should work on classes with large, medium, and few training samples in any environment. Similarly, for fairness, a universal object detector should be able to handle objects from different geographical regions, races, genders, etc. These are issues that we believe are ameliorated by methods focused on learning under imbalance or domain shift.
As a result, we feel DAMEX could be a method that takes a step towards this where each dataset, even with very few samples can be represented by different experts. We can potentially divide a large dataset into different sets which cater to at least one aspect of fairness and bias. However, we understand the need for further analysis and research before we have a truly unbiased and fair universal object detector and implore the community to do so too. | Summary: This paper introduces a DAMEX layer based on the idea of assigning samples conforming to the charactersitiscs of a dataset to the corresponding expert. The underlying thought is to build dataset-relevant modules and ensemble them all together. Previous approaches leverage Mixture of Experts to scale their model while maintaining approximately the same inference speed. This work deals with the scenario of training multiple datasets together for object detection.
Strengths: The writing is clear. Results seem to prove the effectiveness of the proposed approach. The added number of parameters is small.
Weaknesses: 1. There is no introduced novelty in the proposed approach. It is good to know that dataset-specific experts improve performance on object detection, but the proposed DAMEX is in principle a linear routing layer that maps the input to different experts. The training procedure is also very straightforward: training different datasets on different GPUs. Essentially, this is equivalent to a model ensemble with a per-block router. Most of the credit should go to GShard and Tutel.
2. More clarification is needed on the improvement in performance. Throughout the results, explanations of why the proposed method is better than the others are missing. For example, it is unclear why DAMEX can deal with imbalanced datasets, or why it has better domain adaptation than DINO.
3. Scalability is limited. From the experimental setting and the results, the maximum number of experts is 8, which is pretty small. It seems that it is highly correlated with the number of GPUs as well. From Tab. 5, we can see that when multiple experts are distributed to a single dataset, the performance drop is significant, which demonstrates that the approach is not scalable.
4. Important experimental details are missing. For example, in Tab. 5, which two datasets are distributed together when # Experts = 2? The number of iterations needed for training is not given.
5. From the visualization in Fig. 3, DAMEX has more condensed clusters, such as Expert 1. This may indicate that the other experts are not well exploited, and some experts might be deleted with no obvious decrease in performance. This could raise robustness issues. Again, it is more like majority voting from the ensemble perspective.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses please.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No. The authors mention the negative societal impact but do not have a solution yet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback.
Q. “There is no introduced novelty…”
We believe that performance gain and novelty are inter-related.
Sparsely activated networks are composed of components (experts), each of which learns to handle a subset of the complete set of training cases.
The key here is the division of training set into different subsets, or in other words, routing to different experts.
GShard [1], Tutel [2] and Switch [3] learn the expert division at train time.
We take a different approach: we utilize the extra information available in the form of the dataset/domain to guide the division. The training involves letting one expert handle samples from one dataset.
- As noted above, the idea is different from what is being done commonly in the form of MoEs. And thus, it is novel in the context of this problem.
- While it seems simple at first glance, it is quite intriguing why it works so well, as noted by the reviewer.
- The fascinating part is that no such dataset label is provided at test time, yet our training enables the network to recognize the dataset/domain. Thus, our novel loss function and model design allow the network to aggregate knowledge across datasets in the shared parts and learn dataset-specific features in the dataset experts.
- The dataset-expert mapping adds human-prior in the expert training which further helps performance. This can be seen in Table A2 (rebuttal pdf) where DAMEX mapping is compared against a random assignment and vanilla MoE. We would like to extend our gratitude to R1 for suggesting this experiment and making our manuscript better.
- To summarize, this seemingly simple form of work division allows us to learn a good multi-dataset object detector. This finding is very useful for the community. This could lead the community to explore other kinds of expert division rather than just learning them at train time.
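To illustrate the division of work described above, here is a minimal, self-contained toy sketch of dataset-supervised routing. All names, dimensions, and the training loop are our own illustrative assumptions, not the paper's implementation: during training, a cross-entropy loss pushes the gate to send each token to the expert assigned to its source dataset; at inference, no dataset label is needed, because the learned gate recognizes the domain from the features alone.

```python
import math, random

random.seed(0)
DIM, N_EXPERTS = 2, 2

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class DatasetAwareRouter:
    """Toy sketch of dataset-guided expert routing (our illustration):
    a linear gate supervised so that tokens from dataset d go to expert d."""
    def __init__(self):
        self.w = [[random.gauss(0, 0.1) for _ in range(N_EXPERTS)] for _ in range(DIM)]

    def gate(self, x):
        logits = [sum(x[d] * self.w[d][e] for d in range(DIM)) for e in range(N_EXPERTS)]
        return softmax(logits)

    def route(self, x):
        # inference: argmax over the learned gate, no dataset label required
        p = self.gate(x)
        return p.index(max(p))

    def train_step(self, x, expert_id, lr=0.5):
        # cross-entropy supervision over expert selection:
        # gradient of softmax CE wrt the gate weights is (p - onehot) * x
        p = self.gate(x)
        for e in range(N_EXPERTS):
            g = p[e] - (1.0 if e == expert_id else 0.0)
            for d in range(DIM):
                self.w[d][e] -= lr * g * x[d]

# two toy "datasets" with distinct feature statistics, mapped one-to-one to experts
def sample(dataset_id):
    mu = 1.0 if dataset_id == 0 else -1.0
    return [random.gauss(mu, 0.3) for _ in range(DIM)]

router = DatasetAwareRouter()
for _ in range(500):
    ds = random.randrange(2)
    router.train_step(sample(ds), ds)

acc = sum(router.route(sample(ds)) == ds for ds in [0, 1] for _ in range(100)) / 200
print(f"routing accuracy without dataset labels: {acc:.2f}")
```

A vanilla MoE would instead learn this gate only from a load-balancing auxiliary loss; the supervised variant sketched here is what fixes the dataset-to-expert mapping in advance.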
Q. “More clarification is needed…”
Thank you for your feedback. DAMEX is better than the baselines on domain adaptation and imbalanced datasets because the dataset-aware experts learn dataset-specific features while the shared part of the network aggregates common information from the datasets. This again connects to our contributions: (a) human-prior expert-dataset mapping, (b) dataset-specific expert training, and (c) correct expert selection during inference. We will further improve the writing in the corresponding sections of the manuscript to emphasize these points.
Q. “Scalability is limited…”
We thank the reviewer for their question. We believe scalability can be viewed from two perspectives: (a) compute and (b) datasets. We focus on the latter as the focus of our work. In DAMEX, the number of datasets is known at training time, so we can scale the number of experts by matching it to the number of datasets. Instead of 1 expert/GPU, we can have 2 experts/GPU to accommodate a higher expert load on the same compute. However, if we have fewer training datasets (say 4), then as shown in Tab. 5, the optimal setup uses the same number of experts as datasets; having more experts (say 8, Tab. 5) does not help performance.
Q. “Important experimental …”
Duly noted. In Tab. 5, VOC and KITTI share the same expert, following the same strategy as Tab. 1. Other hyperparameters, including the number of epochs, were set to be the same as DINO (36 epochs), as mentioned at L223. We accept the reviewer’s suggestions on writing and will clarify this in the camera-ready version.
Q. “From the visualization …”
We understand reviewer's concern for Fig 3 and have clarified it in common answers. We provide an updated Fig 3 as Fig A1 in rebuttal pdf.
[1] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with condi tional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
[2] Changho Hwang, Wei Cui, Yifan Xiong, Ziyue Yang, Ze Liu, Han Hu, Zilong Wang, Rafael Salas, Jithin Jose, Prabhat Ram, Joe Chau, Peng Cheng, Fan Yang, Mao Yang, and Yongqiang Xiong. Tutel: Adaptive mixture-of-experts at scale. CoRR, abs/2206.03382, June 2022.
[3] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–5270, 2022.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I've read the authors' response. The author response addresses most of my concerns. I'm happy to increase my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback on DAMEX. Through their comments, we have gained valuable insight into the paper from a reader’s perspective, and we are thankful for their suggestions on improving our manuscript.
1. **Comparison against Vision-Language Foundation models (R1, R4)**
Our method and baselines are vision-only, which is why we did not compare against VLFMs, in line with recently published prior works [1,2]. However, following the request of R1 and R4, we ran zero-shot GLIP [3] on the UODB benchmark and report the performance in Table A1 (rebuttal pdf). GLIP [3] has been pre-trained on large datasets and shows very good performance on natural-image datasets like VOC and COCO, but fails on other domains, notably DeepLesion (a medical dataset). Finally, we want to highlight that DAMEX is an architectural solution for mixing datasets and can also be applied to VLFMs like GLIP, which can be explored as future work.
2. **Explanation of Fig 3 (R2, R3)**
Fig. 3 shows the expert selection with respect to all the classes in the mixture of 11 datasets in UODB. As mentioned in the caption of Fig. 3, the majority of classes come from the VOC and COCO datasets, which are mapped to Expert 1; hence Expert 1's utilization is higher, as these two datasets form the majority of the set. In contrast, in vanilla MoE we can observe that MoE Layer 1 collapses onto Experts 0 and 1 for all the datasets. We thank the reviewers for their suggestions; we realize that splitting expert utilization by class resulted in a confusing figure. Please refer to Figure A1 (rebuttal pdf), where we have updated Figure 3: we compare datasets against expert IDs and observe high expert utilization for each dataset against its assigned expert.
3. **Reporting of improvement over prev. SOTA (R3, R5)**
We try to be transparent about our performance gains by introducing DINO-based baselines for comparison with DAMEX. However, we understand the reviewers' concerns and will rephrase the writing to help readers understand our performance gains accurately.
[1] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M Ni, and Heung-Yeung Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605, 2022.
[2] Xingyi Zhou, Vladlen Koltun, and Philipp Krähenbühl. Simple multi-dataset detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7571–7580, 2022.
[3] Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. Grounded language-image pre-training. In CVPR, 2022
Pdf: /pdf/08bb468bfb35d1566dc030e06bc2169b85ae2263.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper tackles the problem of mixture-of-datasets training for object detection. The authors propose a mixture-of-expert-based (MoE) model that utilizes dataset-specific features to tackle mixing of heterogeneous datasets. The backbone is based on DINO transformer and one expert is assigned to one dataset. Results on the UODB dataset shows the efficacy of the proposed method.
Strengths: 1. The idea is straightforward and the intuition of using one expert per dataset makes sense. The method seems easy to implement and the authors promise to release code so the reproducibility is good.
2. The proposed method is better than the previous SOTA by a large margin.
Weaknesses: 1. The design of assigning one expert for each dataset seems limited. The MoE model learns to route the tokens to a specific dataset expert, but in the real world there are many more than 11 datasets/styles. For images that are watercolor-comic etc., would the model get confused? Also, please discuss the need for a universal object detector trained on limited supervised datasets when large vision-language foundation models (VLMs) like [4*] can already get 60+ mAP (vs. 41 in this paper) on COCO. It would help to see the performance of the VLMs on these lesser-tested datasets like DOTA and Watercolor.
2. The performance on common object detection benchmarks (MSCOCO, VOC, etc.) is lower than baselines. This may suggest that some features are difficult to learn under the MoE setting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The line of work on mixture-of-datasets for video understanding should also be discussed [1*, 2*, 3*] in related work. Also, open-vocabulary object detection based on large vision-language models (VLMs) [4*] should also be considered.
2. An inference speed comparison is needed. What is the FPS, for example, of the proposed method vs. others?
3. Other minor comments:
The citation format makes some sentences strange: “Firstly, these datasets have been collected over time Everingham et al. [2015] and …” (Line 29)
The text in Figure 2 is too small to read.
[1*] Akbari, H., Yuan, L., Qian, R., Chuang, W. H., Chang, S. F., Cui, Y., & Gong, B. (2021). Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems, 34, 24206-24221.
[2*] Duan, H., Zhao, Y., Xiong, Y., Liu, W., & Lin, D. (2020, August). Omni-sourced webly-supervised learning for video recognition. In European Conference on Computer Vision (pp. 670-688). Cham: Springer International Publishing.
[3*] Liang, J., Zhang, E., Zhang, J., & Shen, C. (2022). Multi-dataset Training of Transformers for Robust Action Recognition. Advances in Neural Information Processing Systems, 35, 14475-14488.
[4*] Yuan, L., Chen, D., Chen, Y. L., Codella, N., Dai, X., Gao, J., ... & Zhang, P. (2021). Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432.
--------------------------Post rebuttal
I have read the author rebuttal and other reviewers. The authors have addressed my concerns and questions. I'm increasing my score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitation and potential issues of the research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and investing their time in understanding our work.
Q. “The design of assigning…”.
We would like to thank the reviewer for this question, which prompted us to run a new experiment and improve our manuscript further. DAMEX allows the user to incorporate a human prior in mapping datasets to experts, which in turn helps leverage similar features from larger datasets. Table A2 (rebuttal pdf) shows that DAMEX with random expert-dataset mapping performs worse than DAMEX with human-prior mapping, but is still better than experts learned through the load-balancing loss as in vanilla MoE. This shows that the mapping plays an important part in performance, and a human prior in the mapping can boost model performance. Further, DAMEX is not limited to one dataset per expert, as shown in Table 1, where Expert 1 is shared by the COCO, VOC, and Clipart datasets. We will add a section on the mapping to the camera-ready manuscript based on these results.
Q. “For images that are watercolor-comic etc.,...?”
For watercolor-comic images, we believe such an image will be routed to either the watercolor or the comic expert and should obtain equally good representations from either.
Q. Need for VLFM.
Thank you for the question. We have answered this question under common answers.
Q. “A inference speed …”
Thank you for the question. We find that the inference speed of DAMEX (2.99 FPS) is very similar to that of MoE (3.46 FPS) and believe it can be further improved through code optimizations, as the goal of our work is not higher inference speed but a better-performing mixture-of-datasets object detector. Also, our implementation library (Tutel) is catered towards vanilla MoE, not DAMEX.
Q. “The performance on common…”
Yes, we agree with the reviewer in pointing this out. We observe that the single-dataset detectors for COCO and VOC outperform DAMEX. This is a trend we also notice in all baseline dataset-mixing papers [1,2], which have non-MoE architectures. Our understanding is that the natural-image dataset distribution is significantly impacted by the other datasets' distributions due to differences in lighting, camera views and scale. Further, most of the parameters are shared between the datasets, and only a small fraction of the parameters are dataset-specific.
Q. Writing suggestions.
Thanks for the suggestions. As suggested, we will add a section on mixture-of-datasets for video understanding to the related works. We acknowledge the issues with the citation style and the font size in Figure 2. We will fix the manuscript and incorporate the changes.
[1] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7289–7298, 2019
[2] Xingyi Zhou, Vladlen Koltun, and Philipp Krähenbühl. Simple multi-dataset detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7571–7580, 2022. | null | null | null | null | null | null |
When can Regression-Adjusted Control Variate Help? Rare Events, Sobolev Embedding and Minimax Optimality | Accept (poster) | Summary: This paper presents theoretical results on estimating the integral of $f^q$, and on when regression-adjusted control variates can lead to better Monte Carlo estimators.
Strengths: The results look to be novel and improve existing ones in this area. Furthermore, this is an area that is important in practice and for which strong theory can be impactful.
I am not familiar with the historical work on this problem, and thus find it hard to say just how impressive the theoretical developments are -- thus the main focus of my review has been on their practical usefulness. It seemed that there is potential for these to be important, but (see Weaknesses) the presentation currently limits this.
Weaknesses: The paper's presentation limits its accessibility to those who are very familiar with the theoretical area, which means that it is unlikely that the work would have impact on practice. As just one example, the introduction is written under the assumption that the reader knows what a Sobolev space is.
Similarly, there is little attempt to relate the theory to practice. E.g. how would a user know what Sobolev space their function is in? This may be straightforward in some situations (but is not explained). In general settings where we want to estimate E_pi(f(X)) and are using MCMC to simulate from pi, things are complicated if the domain of X is unbounded, as I believe the theoretical results require transforming X to e.g. lie on the unit hyper-cube. The expectation is still an integral of a function over the hyper-cube, but perhaps it is less clear what the function is/what its properties are.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The main thing that would improve my opinion of the work would be if the authors could add some details linking the theory to practice.
These were questions/comments I had when reading the paper that relate to the current presentation, and are suggestions that may make the paper more accessible to a wider audience:
The paper would be more accessible if an informal definition and intuition for a Sobolev space is given.
I like the idea of something like Figure 1 to try and get some intuition about the dichotomy between situations where rare/extreme events are and are not possible. But at the moment I think this is of limited use — it just shows a curve with and without an extreme event. Could more be said, i.e., how do the Sobolev spaces in (a) and (b) differ (e.g. (a) has larger s perhaps?) and what does this mean about the functions (e.g. (a) is smoother than (b))? It may not be clear why smoothness as defined by the Sobolev space immediately means that you do not get extreme events as in the figure, as compared to assumptions on the derivatives of the function. Also the definition of extreme event seems to depend on q. Again some intuition here would help.
Presumably there is some link in that if f is in a Sobolev space, then f^q is in a different space — and really what matters are the properties of f^q?
What if Omega is unbounded (as you use a uniform distribution for the points in Theorem 2.1)? Or equivalently how does the distribution on the points affect the results in (2.1). At one point in the introduction you mention x_i uniform on the unit cube, but initially you just say that Omega is in R^d. I think things could be clearer.
Figure 2 mentions truncated Monte Carlo — has this been defined at this stage (even briefly/informally)?
Section 4: I think the introduction here could be clearer. That is, stating that you now consider situations where you observe f(x_i) with error, and evaluating how the noise level impacts performance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback on our work. Below are our responses to the questions raised in the review:
1. Connection between application and our theoretical work
We already discussed related applications in subsection 1.1 "Regression-Adjusted Control Variate (RACV)" of the paper. Here we list the most important applications of RACV along with their references: gradient estimation [1], statistical inference in biology [2], causal inference [3], estimation of the normalization factor and sampling [4], and MCMC simulation [5].
Moreover, we would like to emphasize that our main contribution is to provide a theoretical understanding of RACV. We examine all possible cases and pinpoint the regimes when RACV can help us obtain a minimax optimal estimator. We will make sure to clarify our main contributions in our next version.
We admit that our theoretical results are derived under a few assumptions (the given function is in a Sobolev space; the data points are uniformly sampled from a unit cube; etc.). We will discuss how these assumptions limit the application of our theory when revising the conclusion section.
2. Intuitive Explanation of mathematical concepts (Sobolev spaces and embedding)
We have given the definition of the Sobolev space in Section 1.3. Intuitively, one may interpret $W^{s,p}(\Omega)$ as the space formed by all functions $f$ satisfying the following two conditions: (1) $f$ can be differentiated up to $s$ times; (2) the $i$-th derivative of $f$ has finite $L^p$ norm for any $0 \leq i \leq s$.
Moreover, we also list a simplified version of the Sobolev Embedding Theorem below:
For any non-negative integer $s$ and $1 \leq p < r \leq \infty$ satisfying $\frac{1}{p}-\frac{s}{d} = \frac{1}{r}$, we have $W^{s,p}(\Omega) \subset L^r(\Omega)$, $\textit{i.e.,}$ every Sobolev function in $W^{s,p}$ is also an $L^r$ function.
Now let's consider dividing the unit cube $\Omega$ into $n$ grid cells, each of which has side length $n^{-1/d}$. We will then use a bump function supported on one cell $\Omega'$ as an example to provide some intuition on the connection between the existence of rare events and the smoothness parameter $s$. For simplicity, here we just take $s=\frac{d}{p}$ to be the threshold. Our bump function is given by
$$
g(x) = n^{(-\frac{s}{d}+\frac{1}{p})}K(n^{\frac{1}{d}}(x-c))
$$
where $c$ denotes center of the grid and $K(y) := \prod_{i=1}^{d}\exp(-(1-y_{i}^2)^{-1})$ is compactly supported on $[-1,1]^d$. We will then proceed to verify that $g \in W^{s,p}(\Omega)$. In fact, for any $|t| \leq s$, we may use change of variable to directly compute the $L^p$ norm $\|D^{t}g\|_{L^p}$ of the $t$-th derivative $D^{t}g$, which is given by:
$$n^{\frac{|t|-s}{d}} \|D^{t}K\|_{L^p}$$
Now let's consider different values of $s$. On the one hand, when $s >\frac{d}{p}$, we have that $f \in L^{\infty}(\Omega)$ for any Sobolev function $f \in W^{s,p}(\Omega)$ via the Sobolev Embedding Theorem above. This corresponds to Part (a) of Figure 1 in our paper, where the function is uniformly bounded without any extreme event. On the other hand, when $s <\frac{d}{p}$, from the bump function above we have that the power $-\frac{s}{d}+\frac{1}{p}$ in the leading coefficient $n^{-\frac{s}{d}+\frac{1}{p}}$ is positive. As the bump function $g$ constructed above lies in the Sobolev space $W^{s,p}(\Omega)$ for any $n$, we may take the limit $n \rightarrow \infty$ to deduce that a peak (rare event) can exist in this bump function, which corresponds to Part (b) of Figure 1 in our paper. Essentially speaking, Part (a) corresponds to $W^{s,p}$ with larger $s$, while Part (b) corresponds to $W^{s,p}$ with smaller $s$.
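To make this dichotomy concrete, here is a tiny numerical sketch (our own toy check, not from the rebuttal; we take $d=1$, $p=2$, so the threshold is $d/p=1/2$, and use the integer smoothness levels $s=0$ and $s=1$ on either side of it). The peak height of the bump function $g$ above is $n^{-s/d+1/p}K(0)$ with $K(0)=e^{-1}$.

```python
import numpy as np

# Peak height of the bump function g(x) = n^(-s/d + 1/p) * K(n^(1/d)(x - c)),
# where K(0) = e^{-1}; d = 1, p = 2 gives the threshold d/p = 0.5.
def peak_height(n, s, p, d=1):
    return n ** (-s / d + 1.0 / p) * np.exp(-1.0)

for n in [10, 1_000, 100_000]:
    print(n,
          peak_height(n, s=0, p=2),   # s < d/p: prefactor n^{1/2}, peak diverges (spike)
          peak_height(n, s=1, p=2))   # s > d/p: prefactor n^{-1/2}, peak vanishes
```

For $s<d/p$ the prefactor $n^{1/p-s/d}$ grows without bound, matching the spike in Part (b) of Figure 1; for $s>d/p$ it shrinks, consistent with the uniformly bounded Part (a).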
Furthermore, we explain how the Sobolev Embedding Theorem above can help us find the space that $f^q$ lies in. Let $r = (\frac{1}{p}-\frac{s}{d})^{-1}$. Then we have $f \in L^{r}(\Omega)$, which implies that $f^q$ is an $L^{\frac{r}{q}}$ function. When proving that our estimator is minimax optimal, our main strategy is to embed the influence function $f^{q-1}$ and the error function $f-\hat{f}$ into "dual" $L^p$ spaces. Hence, it's crucial to figure out how to use some $\hat{f}$ to approximate the original $f$, which is also the main technical contribution we made in the paper.
3. Miscellaneous questions
The phrase "Truncated Monte Carlo" is defined in Subsection 3.2 of our paper. Regarding the introduction of Section 4, we will follow your advice to revise it in the new version.
We will take all suggestions into account while revising our manuscript. Finally, we would like to express our gratitude to the reviewer for your time and dedication, which helps us improve the quality of our manuscript.
[1] Shi, J., Zhou, Y., Hwang, J., Titsias, M. and Mackey, L., 2022. Gradient estimation with discrete Stein operators. Advances in neural information processing systems, 35, pp.25829-25841.
[2] Angelopoulos, A.N., Bates, S., Fannjiang, C., Jordan, M.I. and Zrnic, T., 2023. Prediction-powered inference. arXiv preprint arXiv:2301.09633.
[3] Liu, H. and Yang, Y., 2020. Regression-adjusted average treatment effect estimates in stratified randomized experiments. Biometrika, 107(4), pp.935-948.
[4] Holzmüller, D. and Bach, F., 2023. Convergence rates for non-log-concave sampling and log-partition estimation. arXiv preprint arXiv:2303.03237.
[5] Belomestny, D., Goldman, A., Naumov, A. and Samsonov, S., 2023. Theoretical guarantees for neural control variates in MCMC. arXiv preprint arXiv:2304.01111.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. However, I do not think it addressed my main concern -- about making the paper accessible to a wider audience, giving clearer intuition and linking the results in with practice a bit more clearly.
Looking at the other reviews -- I think this was also the main concern of the most negative review. By comparison, those who are most familiar with the underlying mathematics are more positive about the paper. I accept that it has some new theoretical results, to me it is just a shame that the impact of the work will be limited by a lack of interest in making these accessible to practitioners, as this would increase the likely impact of the work.
Given the theoretical aspects of the work are good, I have increased my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your response. We are grateful for your detailed review, which helped us improve the presentation of our paper a lot. We believe the new version of our manuscript has already become more approachable for the general audience. We also think that it will capture broader attention if it gets accepted. After receiving your review, we’ve already included all the explanations about the link between our work and applications in the rebuttal and added a broader impact section to the main text. Additionally, we also added a background section to the appendix, which provides preliminaries for researchers without sufficient mathematical background. Just as we pointed out earlier, there were lots of related papers (already cited in the manuscript) studying regression-adjusted control variates (RACV) from an application perspective. Hence, we believe that the story presented in our paper will catch the interest of the general NeurIPS audience.
Here's what we've already changed to our manuscript:
We've extended the description in lines 79-91 to discuss in detail the relationship of our work to the empirical works [1-7] and changed the subtitle from "Regression-adjusted Control Variate" to "Relationship to empirical works". We believe this subtitle will make it easier for practitioners to find insights for their applications.
We'll add the following description to Section 1.2 (Contribution) and Section 5 (Conclusion):
All previous works made the strong assumption that $s>d/p$, which makes the function uniformly bounded and neglects the possibility of spike functions. As a result, they were unable to discover the transition between the two regimes that our paper found. Our paper also introduces a new technique using the Sobolev Embedding Theorem to embed the influence function into the dual norm of the function-estimation evaluation norm. This technique could have a significant impact on the semi-parametric literature.
We've added a section to the appendix introducing the Sobolev space and the Sobolev Embedding Theorem. This is an extension of our previous rebuttal.
[1] Shi, J., Zhou, Y., Hwang, J., Titsias, M. and Mackey, L., 2022. Gradient estimation with discrete Stein operators. Advances in neural information processing systems, 35, pp.25829-25841.
[2] Angelopoulos, A.N., Bates, S., Fannjiang, C., Jordan, M.I. and Zrnic, T., 2023. Prediction-powered inference. arXiv preprint arXiv:2301.09633.
[3] Liu, H. and Yang, Y., 2020. Regression-adjusted average treatment effect estimates in stratified randomized experiments. Biometrika, 107(4), pp.935-948.
[4] Holzmüller, D. and Bach, F., 2023. Convergence rates for non-log-concave sampling and log-partition estimation. arXiv preprint arXiv:2303.03237.
[5] Sun, Z., Barp, A. and Briol, F.X., 2023. Vector-valued control variates. International Conference on Machine Learning, PMLR, pp. 32819-32846.
[6] Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. Advances in neural information processing systems, 32, 2019.
[7] Sobczyk, A. and Luisier, M., 2022. Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss. NeurIPS 2022. | Summary: This paper studies whether we can learn a control variate to reduce variance in Monte Carlo sampling.
Strengths: This paper studies whether we can learn a control variate to reduce variance in Monte Carlo sampling.
Weaknesses: To someone who's not familiar with this area of research, the paper lacks introduction and is very confusing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Is there a simple and intuitive toy example to better understand problem and proposed solutions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: To someone who's not familiar with this area of research, the paper lacks introduction and is very confusing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | null | Summary: The authors study the theoretical properties of Monte Carlo estimators of the moments of a Sobolev function with nonparametric regression-adjusted control variates. In particular, they show that when a certain smoothness assumption is satisfied, then the regression-adjusted rule achieves the minimax optimal rate (where the minimum is over estimators, the maximum over test functions/integrands). When the smoothness assumption considered is not satisfied, the Monte Carlo estimate of the moments has infinite variance, and the authors show that to again achieve the minimax optimal rate one needs to resort to a truncated estimator. Further, the authors study the performance of the regression-adjusted estimator for integral estimation with *noisy* observations, a somewhat nonstandard setting, and provide some lower and upper bounds for the estimator's absolute error.
Strengths: - A good theoretical contribution to the literature on minimax-like results for Monte Carlo estimators
- Good effort in making the notation and results accessible to a wider audience
- Interesting setting with noisy function evaluations
Weaknesses: - The paper reads a bit like a laundry list where results are presented in a sequence without much of a "story" connecting them; in particular, sections 3 and 4 feel disconnected, the former being about minimax results with noiseless observations specifically about moment estimation, the latter being about integral estimation ($q=1$?) with noisy observations. Why does the setting change; what is the motivation?
- Regarding your contribution, you mention "existing proof techniques are only applicable in scenarios where there is either no noise or a constant level of noise" - here are you talking about results for moment estimation or integral estimation? Can you clarify, in both settings, which aspects are missing from the existing literature and how your results fill that gap?
- While I understand that the contribution is of theoretical nature, could you make more effort motivating why the setting of noisy observations is interesting / relevant to applications of Monte Carlo ? Further on this point, could you give intuitions for when the main smoothness assumption is expected to hold or not hold (beyond rare events) ?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - What are other valid choices beyond KNN for Section 4?
- It would improve the paper to give example classes of $f$'s of interest
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors should add a couple of sentences about limitations in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are really grateful for the reviewer's valuable feedback on our work. For questions raised in the review, we list our answers below:
1. A coherent story connecting all the results
We have already built a "story" connecting the results presented in the paper, as the reviewer suggested. The story is to investigate whether a machine-learning-based control variate can help improve Monte Carlo methods in terms of convergence rate. We find that if the random variable simulated via the Monte Carlo method has finite variance, then we can always use a non-parametric control variate to improve the convergence rate. However, if rare events of infinite variance exist, a non-parametric control variate no longer helps to improve the convergence rate. This "story" is repeated several times in the title, abstract, introduction, and contribution sections.
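As a hedged toy illustration of this story (our own sketch with made-up ingredients, not the paper's estimator): in the finite-variance regime, a simple regression-adjusted control variate — here a piecewise-constant regressor fit on half the samples, with plain Monte Carlo run only on the residual — can reduce the error of estimating $I=\int_0^1 f(x)\,dx$ relative to plain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x   # smooth, finite-variance toy integrand
true_I = 1.0                                    # int sin^2 = 1/2 plus int x = 1/2

def estimators(n, bins=32):
    x = rng.uniform(size=n)
    x_fit, x_mc = x[: n // 2], x[n // 2 :]
    # Piecewise-constant regression: average of f within each equal-width bin.
    idx = np.minimum((x_fit * bins).astype(int), bins - 1)
    means = np.array([f(x_fit[idx == b]).mean() if np.any(idx == b) else 0.0
                      for b in range(bins)])
    f_hat = lambda z: means[np.minimum((z * bins).astype(int), bins - 1)]
    int_f_hat = means.mean()                            # exact integral of f_hat
    racv = int_f_hat + np.mean(f(x_mc) - f_hat(x_mc))   # regression-adjusted estimator
    plain = np.mean(f(x_mc))                            # plain Monte Carlo
    return plain, racv

errs = np.array([[abs(e - true_I) for e in estimators(2000)] for _ in range(200)])
print("plain MC mean abs error:", errs[:, 0].mean())
print("adjusted mean abs error:", errs[:, 1].mean())
```

With this seed and bin count, the adjusted estimator's average error is noticeably below plain Monte Carlo's, because the residual $f-\hat{f}$ has much smaller variance than $f$ itself — exactly the finite-variance regime in which the rebuttal says a non-parametric control variate helps.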
2. Our contributions in the paper
We've already listed all our contributions in section 1.2 (which is the "story" introduced before) in an intuitive way. We discuss our contributions in proof techniques here.
Firstly, we emphasize that our paper is the first to consider the case when the underlying function $f$ is in the Sobolev space $W^{s,p}(\Omega)$, where $s < \frac{d}{p}$. In this regime, $f \in W^{s,p}(\Omega)$ may not be embedded in $L^{\infty}(\Omega)$. Different from the literature on doubly robust estimators, here we are considering a nonlinear functional, so the influence function is not exactly zero in our estimator. Therefore, we need to use the Sobolev Embedding Theorem to embed the influence function $f^{q-1}$ and the error function $f-\hat{f}$ into "dual" $L^p$ spaces to obtain the desired upper bound. For the sake of completeness, we list a simplified version of the Sobolev Embedding Theorem below:
For any non-negative integer $s$ and $1 \leq p < r \leq \infty$ satisfying $\frac{1}{p}-\frac{s}{d} = \frac{1}{r}$, we have $W^{s,p}(\Omega) \subset L^r(\Omega)$, $\textit{i.e.,}$ every Sobolev function in $W^{s,p}$ is also an $L^r$ function.
Secondly, the technique used in our proof of the information-theoretic lower bounds is a bit different from that of previous work [1]. In one case of our proof, we pick the two priors to be two discrete distributions of functions such that any function’s sign on each grid in the divided domain is a discrete random variable whose distribution depends on $n$ (the number of data points). This enables us to calculate the amount of information required to distinguish between the two hypotheses even when there is no observational noise. We think that the technique we used here might be useful for the nonparametric statistics community.
In addition, for the problem of estimating integral based on noisy observations, our contribution is to characterize the minimax optimal convergence rate for any noise level ranging from zero to $O(1)$.
3. Applications of our theoretical analysis
We already discussed related applications in subsection 1.1, related work "Regression-Adjusted Control Variate (RACV)", of the paper. Here we list the most important applications of RACV along with their references: gradient estimation [2], statistical inference in biology [3], causal inference [4], estimation of the normalization factor and sampling [5], and MCMC simulation [6].
We admit that our theoretical results are derived under a few assumptions (the given function is in a Sobolev space; the data points are uniformly sampled from a unit cube; etc.). We will discuss how these assumptions limit the application of our theory when revising the conclusion section.
Finally, we would like to thank the reviewer again for all the helpful comments, which help us improve the quality of our manuscript.
[1] Tsybakov, A.B., 2004. Introduction to nonparametric estimation, 2009. URL https://doi.org/10.1007/b13794.
[2] Shi, J., Zhou, Y., Hwang, J., Titsias, M. and Mackey, L., 2022. Gradient estimation with discrete Stein operators. Advances in neural information processing systems, 35, pp.25829-25841.
[3] Angelopoulos, A.N., Bates, S., Fannjiang, C., Jordan, M.I. and Zrnic, T., 2023. Prediction-powered inference. arXiv preprint arXiv:2301.09633.
[4] Liu, H. and Yang, Y., 2020. Regression-adjusted average treatment effect estimates in stratified randomized experiments. Biometrika, 107(4), pp.935-948.
[5] Holzmüller, D. and Bach, F., 2023. Convergence rates for non-log-concave sampling and log-partition estimation. arXiv preprint arXiv:2303.03237.
[6] Belomestny, D., Goldman, A., Naumov, A. and Samsonov, S., 2023. Theoretical guarantees for neural control variates in MCMC. arXiv preprint arXiv:2304.01111.
---
Rebuttal Comment 1.1:
Title: Reply to authors' rebuttal
Comment: I am satisfied with the response to my questions and therefore increase my score.
I still highly recommend to the authors to not simply cite relevant potential applications (such as gradient estimation), but to actually instantiate a realistic problems with equations in an example paragraph (without necessarily an experiment needed, actually).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and feedback on our work. We appreciate it! | Summary: In this papers the authors consider estimating the $q^{\rm th}$-moment of a function $f$ (i.e. $\int f^q(x)dx$) by observing samples of $x_i,f(x_i)$ when $x_i$-essentially follows a uniform distribution over (a compact) domain of integration. The paper first develops an information theoretic lower bound (using standard testing arguments) for estimating $\int f^q(x)dx$ for $f\in W^{s,p}$ (Sobolev Space of smoothness $s\in \mathbb{N}$ and order $p\geq 1$). In order to match the lower bounds the authors subsequently specialize the results in two domains -- high and low smoothness respectively. In the high smoothness regime, a suitably bias corrected estimator based on an initial ML based estimator of $f$ (that satisfies some desirable initial properties) is shown to be optimal whereas in low smoothness regimes, owing to existence of rare and extreme events due to unboundedness of the $2q^{\rm th}$-moment of $f$, the problem needs to addressed differently and the authors show that a truncated version of the classical Monte Carlo method can provide optimality guarantees. Finally the paper also provides some details on the noisy version of the problem (i.e. when one observes $x_i,f(x_i)+\varepsilon_i$, $\varepsilon_i\sim N(0,\sigma_n^2)$) to obtain connections to the nonparametric regression literature.
Strengths: (i) A complete paper with minimax upper and lower bounds for estimating $q^{\rm th}$-moment of a function $f$ (i.e. $\int f^q(x)dx$) by observing samples of $x_i,f(x_i)$.
(ii) Considers both high and low smoothness regimes of the problem.
Weaknesses: Not much discussion of the very well developed non-parametric theory of estimating $\int f^q(x)dx$ by observing samples $x_i, f(x_i)+\epsilon_i$, $\epsilon_i\sim N(0,\sigma^2)$, where both $1/\sqrt{n}$ and slower-than-$1/\sqrt{n}$ rates, along with $1/\sqrt{n}$ efficiency bounds, are presented.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If one is allowed to choose the design points carefully from a suitable class of distributions, can the authors discuss how the minimax rates change based on the class of distributions of $x$ -- indeed for the uniform distribution one understands the problem, and the problem does not change much for compactly supported (known/sufficiently regular) densities of $x$ which are bounded away from $0$ on their compact support. However, when one goes beyond this class, some properties might/might not change (e.g. for entropy estimation in density models it makes a fundamental difference whether the density is bounded away from $0$ or not), since not having uniform-distribution-like samples in all sub-domains renders the integration complexity different.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None noted.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful feedback on our work. Below are our responses to the questions raised in the review:
1. Relation between our work and classical non-parametric theory
Regarding a missing discussion on the classical non-parametric theory of estimating functionals via noisy data samples, we would like to mention that previous work on non-parametric estimation of functionals have been cited and discussed in line 97-105 of the paper. Below we also provide a more detailed discussion on the relation between our work and previous work on the estimation of nonlinear functionals (moments).
We remark that our work is the first to consider the case when the underlying function $f$ is in the Sobolev space $W^{s,p}(\Omega)$, where $s < \frac{d}{p}$. This implies that $f$ may not be embedded in $L^{\infty}(\Omega)$ and may have a spike. In contrast, previous work all assumed that the underlying function is sufficiently smooth.
Secondly, the technique used in our proof of the information-theoretic lower bounds is a bit different from that of previous work [1,2]. In one case of our proof, we pick the two priors to be two discrete distributions of functions such that any function’s sign on each grid in the divided domain is a discrete random variable whose distribution depends on $n$ (the number of data points). This enables us to calculate the amount of information required to distinguish between the two hypotheses even when there is no observational noise. We think that the technique we used here might be useful for the nonparametric statistics community.
In addition, our proof of the upper bound illustrates how the idea of doubly robust estimation can help us design estimators to achieve minimax optimality when there’s no observational noise. Under the other regime when observational noise exists, however, the behavior is completely different. As the influence function is not exactly zero in our estimator, we use Sobolev Embedding Theorem to embed the influence function $f^{q-1}$ and the error function $f-\hat{f}$ into different $L^p$ spaces, which yields the desired upper bound. For the sake of completeness, we list a simplified version of the Sobolev Embedding Theorem below:
For any non-negative integer $s$ and $1 \leq p < r \leq \infty$ satisfying $\frac{1}{p}-\frac{s}{d} = \frac{1}{r}$, we have $W^{s,p}(\Omega) \subset L^r(\Omega)$, $\textit{i.e.,}$ every Sobolev function in $W^{s,p}$ is also an $L^r$ function.
2. Design of points
As a response to the question on the design of points, we would like to point out that if the given density supported on the unit cube $\Omega$ is both upper bounded and lower bounded by fixed constants, we can claim that the convergence rate remains unchanged and minimax optimal. However, the difficult part is that we don't have any prior information about the distribution from the given data points.
Moreover, the design of points is in fact related to one important take-home message emphasized in our work. Specifically, the minimax optimal convergence rate derived in our paper matches that of the Quasi-Monte Carlo method. From the perspective of information theory, our result reveals that a set of uniformly sampled points doesn't lead to any loss in information compared to a set of carefully chosen points. For the Quasi-Monte Carlo method, most of the computational resources are used to locate the positions of the desired points. After the points are picked, we may simply take the sample average of the function's values at those points as our estimate. However, when we choose to use a set of uniformly sampled points, we no longer need to spend resources locating the desired points. The tradeoff is that our algorithm for estimation is more complicated, which leads to an extra computational cost.
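A small sketch of this tradeoff on a toy integrand (our own example, not from the paper; a simple midpoint grid stands in for the carefully chosen Quasi-Monte Carlo points):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x   # smooth toy integrand on [0, 1]
true_I = 1.0

n = 1024
grid_est = f((np.arange(n) + 0.5) / n).mean()  # chosen points + simple average
mc_est = f(rng.uniform(size=n)).mean()         # uniform random points + simple average
print("grid error:", abs(grid_est - true_I))
print("MC error:  ", abs(mc_est - true_I))
```

The chosen-point estimate is essentially exact on this smooth toy example, while plain averaging of uniform samples only attains the $n^{-1/2}$ Monte Carlo rate — illustrating that, with uniformly sampled points, matching the accuracy of chosen points requires a more sophisticated estimation algorithm, which is the tradeoff described above.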
As a side note, we also provide some intuition that may help explain the tradeoff between algorithms and the design of points in a Reproducing Kernel Hilbert Space (RKHS). The authors of [3,4] showed that one can compress the data points to represent functions in an RKHS with convergence rate $n^{-1}$. Once the data points are selected, one only needs to average the function values at the selected points to estimate the function's integral. In our algorithm, we first use half of the data to run kernel regression, which has a convergence rate of $n^{-\frac{1}{2}}$. Then we use the Monte Carlo method to estimate the error between the original function and the result given by kernel regression. Thus, the convergence rate of our algorithm is also $n^{-1} = n^{-\frac{1}{2}} \cdot n^{-\frac{1}{2}}$, where the first $n^{-\frac{1}{2}}$ is the magnitude of the error induced by kernel regression and the second corresponds to the convergence rate of the Monte Carlo method.
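To make the two-stage scheme concrete, here is a minimal 1-D numerical sketch: fit a cheap surrogate on half of the uniform samples, integrate the surrogate in closed form, and Monte-Carlo the residual on the other half. The degree-3 polynomial fit and the test integrand $\sin$ are our illustrative choices standing in for the kernel regression above, not the actual estimator from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_integral(f, n):
    """Estimate the integral of f over [0, 1] from n uniform samples:
    regress on half the data, then Monte-Carlo the residual f - f_hat."""
    x = rng.uniform(0.0, 1.0, n)
    x_fit, x_mc = x[: n // 2], x[n // 2 :]
    # Stage 1: cheap surrogate (a degree-3 polynomial stands in for
    # the kernel regression described in the rebuttal).
    f_hat = np.poly1d(np.polyfit(x_fit, f(x_fit), deg=3))
    antider = np.polyint(f_hat)
    surrogate_integral = antider(1.0) - antider(0.0)  # exact for the surrogate
    # Stage 2: Monte-Carlo estimate of the residual's integral.
    residual = np.mean(f(x_mc) - f_hat(x_mc))
    return surrogate_integral + residual

est = two_stage_integral(np.sin, 10_000)
true_val = 1.0 - np.cos(1.0)  # closed-form integral of sin on [0, 1]
```

The surrogate soaks up most of the integrand's variance, so the Monte-Carlo stage only has to average a small residual, which is the intuition behind the improved rate.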
We will take all the suggestions given by the reviewer into account while revising our manuscript. Finally, we would like to thank the reviewers again for their time and efforts, which help us improve the quality of our manuscript.
[1] Han, Y., Jiao, J. and Mukherjee, R., 2020. On estimation of $l_r$-norms in Gaussian white noise models. Probability Theory and Related Fields, 177(3-4), pp.1243-1294.
[2] Tsybakov, A.B., 2009. Introduction to Nonparametric Estimation. Springer. URL https://doi.org/10.1007/b13794.
[3] Dwivedi, R. and Mackey, L., 2021. Kernel thinning. arXiv preprint arXiv:2105.05842.
[4] Dwivedi, R. and Mackey, L., 2021. Generalized kernel thinning. arXiv preprint arXiv:2110.01593. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Double Gumbel Q-Learning | Accept (spotlight) | Summary: The authors propose a novel off-policy algorithm DoubleGum based on their new noise model which is claimed to be more stable for training and reaches higher performance for various discrete and continuous environments.
Strengths: The paper is well-written and well organized. The authors proposed a novel noise model and off-policy DRL algorithm and analyse their strength through a set of extensive experiments. Some experimental results are interesting and strong. They also provide nice math supporting the paper.
Weaknesses: Figures are a bit confusing. For example, it is hard to tell from the figures alone how stable the new algorithm is compared to the existing state of the art. Also, I saw TD3, DDPG, and MoG-DDPG in Figure 3, but then just "DDPG (best w/wo Twin Networks)" in Figure 5. This left me confused about whether DoubleGum with adjusted pessimism outperforms all the other algorithms or just DDPG.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Curious why SAC is not included in the experiments. To my knowledge, it is a very strong off-policy algorithm. It is stable and outperforms DDPG and TD3 in a bunch of continuous tasks.
2. Can we claim DoubleGum with adjusted pessimism is the new state of the art?
3. Is there a way to measure how much more stable DoubleGum (with fixed pessimism) is across different tasks? With fixed pessimism, as shown in Figure 3, performance seems to be sacrificed.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As authors mentioned in the paper, the proposed algorithm seems to be very sensitive to hyper-parameters, which is not very applicable if we need to tune pessimism per task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review!
We are happy to read that you found our algorithm novel and that our paper was well-written and organized.
> TD3, DDPG MoG-DDPG in figure3, then it is just DDPG (best w/wo Twin Networks) in figure 5
Figures 3 and 5 correspond to the two ways in which we benchmark DoubleGum: 1) keeping hyperparameters constant (at their default values) across all tasks and suites, and 2) varying hyperparameters between suites while keeping them constant across all tasks within a suite. We would like to point out that we never vary hyperparameters across tasks belonging to the same suite, and thus we do not tune pessimism per task. We realize that we did not make this clear in our original text, and, if accepted, would be happy to change it.
The hyperparameter that we tune is pessimism. For DoubleGum, this corresponds to changing pessimism factor $c$. For DDPG, this corresponds to whether we use Twin Networks or not, resulting in our caption of "DDPG (best w/wo Twin Networks)". As our implementations of DDPG with Twin Networks and TD3 are equivalent, we have updated the aforementioned caption to "Best of DDPG/TD3" in our 1-page `.pdf` whenever necessary.
> whether DoubleGum with adjusting pessimism outperforms all the other algorithms or just DDPG?
We have uploaded Figures 14 and 15 in our attached 1-page `.pdf`, which show aggregate learning curves across all 33 continuous control tasks. In these graphs, DoubleGum outperforms all benchmark algorithms, including the newly benchmarked SAC and XQL. More details are given in point 1 of our global rebuttal.
> Curious why SAC is not included in the experiments
We now benchmark against SAC and show that in Updated Figures 3 and 5, as well as Figures 14 and 15 in our attached 1-page `.pdf`, DoubleGum is empirically stronger. More details are presented in Point 2 of our global rebuttal.
> Can we claim DoubleGum with adjusting pessimism the new state-of-the art?
We are hesitant to claim that DoubleGum is the new SOTA, due to the wide recent algorithmic innovations such as [1, 2], and the fact that most recent papers do not benchmark on as many tasks as we do, making comparisons difficult. However, we would like to claim that DoubleGum is the best-performing core algorithm over all continuous control tasks in aggregate (Figures 14, 15), and would thus like to recommend that DoubleGum form the new core algorithm for many tasks, just as SAC was the core algorithm for [1], or TD3 was for [2].
[1] P. D’Oro, M. Schwarzer, E. Nikishin, P.-L. Bacon, M. G. Bellemare, and A. Courville. Sample-efficient reinforcement learning by breaking the replay ratio barrier.
[2] S. Fujimoto, W. D. Chang, E. J. Smith, S. S. Gu, D. Precup, and D. Meger. For SALE: State-Action Representation Learning for Deep Reinforcement Learning.
> a way to measure how much more stable is DoubleGum(with fixed pessimism) across different tasks?
In our 1-page `.pdf`, Figures 14 and 15 show learning curves aggregated across all 33 continuous control tasks we benchmark on. These learning curves show how consistently the algorithms perform across all tasks, and thus how stable the algorithms are between tasks. DoubleGum achieves a higher learning curve than the baselines indicating that it is more stable across tasks than the baseline algorithms.
> the proposed algorithm seems to be very sensitive to hyper-parameters
We would like to point out that while we have shown that DoubleGum is sensitive to the pessimism factor $c$, we do not think DoubleGum is any more sensitive to any other hyperparameters than our baselines are.
> [DoubleGum] not very applicable if we need to tune pessimism per task.
We would like to reiterate that in our evaluations, we do not tune per task and only per suite. If accepted, we will update our text to make this clearer. As we intend $c$ in DoubleGum to be similar to how the learning rate is used in stochastic gradient descent optimizers, we evaluate in two different ways: untuned with default parameters, and tuned per suite. We go into more detail in Point 4 of our global rebuttal.
> in fixed pessimism, as shown in fig3, the performance seems to be sacrificed
We would expect the default parameter to have poorer performance than if we tuned it specifically for performance per task/suite. Nevertheless, the aggregate learning curve for DoubleGum with fixed default pessimism still outperforms all baseline methods (Figure 14).
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank the authors for the response to my questions and add additional experiments for SAC. I think I'll maintain my score. | Summary: The paper provides empirical evidence that it is inaccurate to assume the temporal difference error follows a homoscedastic normal distribution. The paper proposes to use Gumbel distribution to make a replacement. The optimal action value is considered as the learned value estimation plus a noise sampled from a Gumbel distribution. The Bellman Equation has changed accordingly. Based on this idea, a new algorithm for both discrete and continuous control tasks is derived, called DoubleGum. The paper empirically tested DoubleGum in various tasks, including several relatively complex environments.
Strengths: - The paper provides clear empirical evidence that the temporal difference error is better fitted by a Gumbel distribution than by a normal distribution.
- The paper has a relatively complete related work list. The related work section considered other analyses of the noise in Q-learning and works proposing similar methods.
- DoubleGum empirically works on both discrete and continuous control tasks. To support this, in the experiment section, the paper empirically tested it on multiple environments, including both simple and complex tasks.
- Detailed hyperparameters and computational resources needed are reported, improving the reproducibility.
Weaknesses: - The main concern comes from the gap between the theory and the practical algorithm. The whole idea is based on one assumption, which is, estimating the temporal difference error with a Gumbel distribution is better than with a Normal distribution. The Bellman equation is also modified according to the same intuition. However, when deriving the practical algorithm, the paper points out that the Q-function cannot be learned by Gumbel or logistic regression because no sufficient statistical estimator is available. So, DoubleGum instead uses a generalized method of moments to match the logistic with a normal distribution.
- Although the empirical results did suggest DoubleGum learned faster than the baseline (DDPG), it is not clear to me if the advantage does come from the Gumbel distribution, as the practical algorithm is not using the Gumbel distribution as previous sections discuss.
- The other concern is mainly about the baseline choice. SAC is another algorithm that empirically works well for online learning. It estimates the action value while accounting for entropy. Its use of a softmax instead of a max is consistent with part of the proposed method in the paper. It might be worth checking SAC’s performance on the tasks listed in the paper and comparing it with DoubleGum.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I would appreciate it if the authors could explain the abovementioned concerns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review!
We are delighted to read your positive feedback on our experimental methodology and reproducibility, which we care deeply about.
> The whole idea is based on one assumption, which is, estimating the temporal difference error with a Gumbel distribution is better than with a Normal distribution.
We would like to point out that we estimate TD errors with *two* heteroscedastic Gumbel distributions (Equation 15), which we rewrite into a heteroscedastic Logistic (Equation 19).
> DoubleGum instead used a generalized method of moments to match the logistic with a normal distribution.
We would like to point out that treating the heteroscedastic Logistic distribution (Equation 19) as a *homo*scedastic Normal (Equation 5) is problematic, but generalized moment matching with a *hetero*scedastic Normal is not. We realize that we did not make the distinction between hetero/homo clear enough in our paper, and will update the text if accepted.
The generalized method of moments estimator has been widely used in the statistics and econometrics literature. We were specifically inspired to use the generalized method of moments from Equations 42-45 of [1], which estimates the location and spread parameters of a Gumbel distribution in this manner. The same method is presented in [2] to demonstrate sampling from a Gumbel distribution in the NumPy package.
[1] Jowitt, P. W. (1979). The extreme-value type-1 distribution and the principle of maximum entropy.
[2] Code sample at the bottom of the documentation for numpy.random.gumbel
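The moment-matching estimator referenced in [1, 2] can be sketched in a few lines: match the sample mean and variance to the Gumbel's $\mathbb{E}[X] = \mu + \gamma \beta$ and $\mathrm{Var}[X] = \pi^2 \beta^2 / 6$, where $\gamma$ is the Euler–Mascheroni constant. The function name is ours; this is only an illustration of the estimator, not code from our paper.

```python
import numpy as np

EULER_MASCHERONI = 0.5772156649015329

def fit_gumbel_moments(samples):
    """Generalized-method-of-moments fit of Gumbel(mu, beta):
    Var[X] = pi^2 beta^2 / 6  and  E[X] = mu + gamma * beta."""
    beta = np.std(samples) * np.sqrt(6.0) / np.pi
    mu = np.mean(samples) - EULER_MASCHERONI * beta
    return mu, beta

# Sanity check on synthetic data drawn from a known Gumbel.
rng = np.random.default_rng(0)
samples = rng.gumbel(loc=2.0, scale=0.5, size=200_000)
mu_hat, beta_hat = fit_gumbel_moments(samples)
```

With enough samples, the recovered location and scale are close to the true `loc=2.0` and `scale=0.5`.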
> not clear to me if the advantage does come from [modelling] the Gumbel distribution
The advantage comes from modeling the Q-values heteroscedastically, which is motivated by our theory revealing that the TD errors are heteroscedastic, not homoscedastic. Learning a sample-dependent variance is not typically done in Q-learning and is absent from our baselines; it is therefore the source of our empirical advantage in the experiments with default hyperparameters (Updated Figure 3, Figure 14).
> checking SAC’s performance in the tasks listed in paper, and comparing it with DoubleGum
We now benchmark against SAC and show in Updated Figures 3 and 5, as well as Figures 14 and 15 in our attached 1-page `.pdf`, that DoubleGum is empirically stronger. More details are given in point 2 of our global rebuttal.
We hope that our response helps bridge the gap between our mathematical analysis and the resulting algorithm, and shows stronger empirical evidence for the effectiveness of DoubleGum.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Sorry for the late reply! I would like to thank the author for the detailed reply and attached experiment results. It addressed my concern, and I increased my score from 4 to 6. | Summary: The paper proposes a combination of Gumbel noise with Q-learning. The idea is based on the high level observation that regular L-2 loss based Q-learning can be understood as maximum likelihood estimation with Gaussian noise. The paper derives the practical algorithm and shows some improvements over baselines.
Strengths: The paper takes a relatively novel perspective on interpreting L-2 loss based Q-learning algorithm as maximum likelihood estimation, and derives an empirically novel algorithm based on the Gumbel noise in place of Gaussian noise. The resulting algorithm looks easy to implement and showcases some practical improvements.
Weaknesses: Since the algorithm is heuristically motivated, the paper seems to lack a solid theoretical grounding as to whether the newly proposed algorithm would converge or not (not formally stated or proved in the paper). The empirical improvements are interesting but can use more results to further demonstrate the promise of this new approach.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: #### === **Definition of Q-function and random variables** ===
The definition of the equality in distribution looks a bit confusing to me in Eqn 9,
$$Q_\theta(s,a) \overset{d}{=} Q^*(s,a) - g_\theta(s,a)$$
Here $g_\theta(s,a)$ is a random variable whose distribution depends on $(s,a)$. As a result, the RHS is a random variable with a fixed probability law (once $\theta$ is fixed). However, the Q-function on the LHS is a single scalar at $(s,a)$. In general, how should we expect such an equality to hold? If we parameterize LHS by a neural network, it outputs a single scalar and the equality cannot hold in general.
Related to this, the equality of Eqn 10 seems to be an equality in scalar, i.e. both LHS and RHS are scalar quantities. However, $g_\theta(s,a)$ on the LHS is a random variable, and is in general not a scalar? The definition of $y^*$ in Eqn 14, on the other hand, confirms that Eqn 10 should have been an equality in scalar since the RHS $y^*(s,a)$ is a scalar.
#### === **Parameterization of Q-function** ===
Following the above discussion, in line 127 there is a use of distributional equality again, where LHS is $Q_\theta^\text{new}$. For this equality to hold, we do have to parameterize the Q-function output as a distribution at $(s,a)$ instead of a single scalar? A further clarification on this would be very helpful.
#### === **Convergence guarantee** ===
The algorithm proposed is closely linked to max-entropy RL, whose convergence properties have been established. The L-2 based Q-learning algorithm is also convergent under suitable conditions. I wonder if there is a similar guarantee here, and if so it would be good to have in the main paper.
#### === **Empirical results** ===
Gumbel DDPG seems to improve over DDPG but not over TD3. I guess a main reason might be that TD3 uses double Q-learning, which is orthogonal to what's proposed here. Can we combine the Gumbel trick with TD3 and show improvements over TD3? DDPG is in general a much weaker baseline than TD3 due to the over-estimation bias, so it'd be nice to showcase improvements over TD3 too.
From Fig 4 it seems that the performance is sensitive to the hyper-parameter $c$. How is the choice of $c=0$ selected in other experiments in comparison to alternative baselines? Is $c=0$ the best-performing hyper-parameter in general?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Discussed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review!
We are very happy that you found both theoretical and empirical novelty within our paper, and that our resulting DoubleGum algorithm was practical and easy to implement.
> The definition of the equality in distribution look a bit confusing to me in Eqn 9
We have realized we should use an equality in samples $=$ wherever we have used a distributional equality $\overset{d}{=}$. Equation 9 constructs a new random variable $Q_\theta(s, a) \sim Q(s, \cdot)$ by defining a data-generating process that subtracts a random variable $g_{\theta, a}(s) \sim g(s)$ from the optimal $Q^\star(s, a)$, given a $\theta$ and an index $a$. As Equation 9 concerns samples and not random variables, it is not appropriate to use a distributional equality.
$Q_\theta$ is a random variable because it varies throughout training. In supervised learning, it is standard to model the output of a neural network as a random variable -- in regression the neural network estimate is a random variable that differs from the optimal value by Normal noise. We work through the mathematics of this example in Appendix A.1. There should again be a sample equality in the equation between lines 472-3.
> equality of Eqn 10 seems to be an equality in scalar i.e. both LHS and RHS are scalar quantities.
The LHS of Eqn 10 involves samples from two random variables whose sum is constructed to be a constant, which is the RHS.
> we do have to parameterize the Q-function output as a distribution at $(s, a)$ instead of a single scalar?
$Q_{\theta}^\text{new}$ is a scalar. In our anonymous codebase (link in appendix lines 547-8), line 36 of `policies_cont/networks/gaussian_critic.py` shows this.
> TD3 uses double Q-learning, which is orthogonal to what's proposed here
We do not think DoubleGum is orthogonal to TD3 as both algorithms induce pessimism. The pessimism of DoubleGum is adjusted by changing the pessimism factor $c$. TD3 computes a pessimistic estimate of the bootstrapped target by taking a sample-wise minimum from the ensemble. Pessimism of DDPG can be varied by using Twin Critics/not.
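As a quick illustration of why a sample-wise minimum over an ensemble induces pessimism (a standard fact about clipped double Q-learning; the numbers below are made up for illustration and are not from our experiments): for two independent unbiased estimates of the same value, the expectation of their minimum lies below the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, unbiased, noisy estimates of the same true value (0 here),
# standing in for the two critics of a twin-network ensemble.
q1 = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
q2 = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

single_estimate = q1.mean()                    # unbiased: close to 0
twin_min_estimate = np.minimum(q1, q2).mean()  # pessimistic: E[min] = -1/sqrt(pi)
```

The single critic averages to roughly the true value, while the twin-minimum sits about $1/\sqrt{\pi} \approx 0.56$ standard deviations below it; DoubleGum instead exposes this pessimism level as the continuous factor $c$.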
> it'd be nice to showcase improvements over TD3 too
Our implementations of DDPG with Twin Networks and TD3 are equivalent, and we have thus changed "DDPG (best w/wo Twin Networks)" to "Best of DDPG/TD3" in our 1-page `.pdf` whenever necessary. Thus, in aggregate, DoubleGum with default hyperparameters outperforms all benchmark algorithms, including TD3 (Figure 14). Also, the pessimism factor hyperparameter $c$ may be adjusted to outperform TD3 both in aggregate across suites (Figure 15) and per suite (Updated Figure 5). More details are presented in Point 1 of our global rebuttal.
> it seems that the performance is sensitive to the hyper-parameter $c$
We intend $c$ in DoubleGum to be similar to how the learning rate is used in stochastic gradient descent optimizers, so sensitivity is preferred. We cover this in Point 4 of our global rebuttal.
> How is $c$ selected? Is $c=0$ the best-performing hyper-parameter in general?
In our attached 1-page `.pdf`, we have created Figure 16, which is Figure 4 aggregated into one graph. We have now changed the default pessimism factor to $c = -0.1$, as it is the value with best aggregate performance.
> lack a solid theoretical grounding as to whether the newly proposed algorithm would converge or not (not formally stated or proved in the paper)
To the best of our knowledge, there are two types of convergence analysis in Q-Learning: 1) operator-theoretic analysis over tabular Q-functions, and 2) training dynamics of neural network parameters. We believe the second is more appropriate for DoubleGum, because our theory addresses issues in using neural networks (rather than tables) for Q-learning. DoubleGum uses variance networks, but we are unaware of any literature analyzing their training dynamics. We thus believe convergence guarantees are beyond the scope of this rebuttal (and our paper).
Nevertheless, we are happy to provide an intuition about convergence, which we would be glad to include in our paper. First, consider the following loss function, numerically equivalent to that of DoubleGum in Equation 27: $\sigma_{\bar{\theta}} \log \sigma_\theta + \frac{\epsilon_\theta^2}{\sigma_\theta}$. For each value of $\epsilon_\theta^2$, this function has a unique minimum with respect to the numerical value of $\sigma_\theta$. If $\sigma_\theta$ is large, the loss is dominated by $\sigma_{\bar{\theta}} \log \sigma_\theta$ and gradient descent will decrease $\sigma_\theta$. If $\sigma_\theta$ is small, the loss is dominated by $\frac{\epsilon_\theta^2}{\sigma_\theta}$, and we can then rely on the convergence guarantees of l2 Q-learning.
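The stationary point of this per-sample loss can be checked numerically; the values of $\sigma_{\bar{\theta}}$ and $\epsilon_\theta^2$ below are arbitrary illustrative choices. Setting the derivative $\sigma_{\bar{\theta}}/\sigma - \epsilon^2/\sigma^2$ to zero predicts a minimizer at $\sigma^\star = \epsilon^2 / \sigma_{\bar{\theta}}$:

```python
import numpy as np

def sigma_loss(sigma, sigma_target, eps_sq):
    """Per-sample loss in sigma from the convergence intuition above."""
    return sigma_target * np.log(sigma) + eps_sq / sigma

sigma_target, eps_sq = 1.5, 0.4  # arbitrary illustrative values
sigmas = np.linspace(1e-3, 10.0, 100_000)
losses = sigma_loss(sigmas, sigma_target, eps_sq)
sigma_argmin = sigmas[np.argmin(losses)]

# Zero of the derivative: sigma* = eps_sq / sigma_target.
sigma_star = eps_sq / sigma_target
```

The grid minimum agrees with the analytic stationary point, matching the intuition that gradient descent drives $\sigma_\theta$ toward a finite value rather than collapsing or diverging.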
The guarantee we use is DR3 [2], which states convergence if $\phi_\theta(s, a)^\top \phi_\theta(s, a) - \gamma \phi_\theta(s, a)^\top \phi_\theta(s, a) \geq 0$, where $\phi(s, a)$ is the penultimate layer of the network. (This was also found by [3].) Follow-up work [4] satisfied this condition by normalizing the layer before the value head. To do this, we used GroupNorm with 16 groups as it had empirically stronger performance. In 100 training runs on three of the hardest environments (humanoid-run, Humanoid-v4, and BipedalWalkerHardcore-v3), all seeds resulted in positive learning curves. We would be happy to include this ablation in our appendix.
[1] Seitzer, M. et al. On the pitfalls of heteroscedastic uncertainty estimation with probabilistic neural networks.
[2] Aviral Kumar et al. DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization.
[3] Wang, Z. T. and Ueda, M. A convergent and efficient deep Q network algorithm.
[4] Kumar, A. et al. Offline q-learning on diverse multi-task data both scales and generalizes.
To reiterate, for the reasons outlined above, we believe that theoretical guarantees for the convergence of DoubleGum are outside the scope of our work.
Nevertheless, we hope the additional results we have presented showcase the promise of our new approach.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thank you for the reply.
== **$Q_\theta(s)$ is a random variable because it varies throughout training.** ==
The fact that $Q_\theta$ varies during training due to changes in $\theta$ does not mean that we model it as a random variable. For any fixed training run, the sequence of $\theta$ is deterministic. Aggregating over all possible training runs, and accounting for randomness in the training process, we can say $Q_\theta$ has a marginal distribution at any fixed iteration during training. It is not clear to me whether the aim here is to model such a distribution, and how fundamentally one can do this, since we only have one training run in practice.
== **In supervised learning, it is standard to model the output of a neural network as a random variable, in regression the neural network estimate is a random variable that differs from the optimal value by Normal noise.**==
Do you have references for this statement? It is not clear to me why it is fundamentally possible to model the prediction $Q_\theta(s)$ as being just a Gaussian variable away from the optimal Q-value. What is the cause of this randomness? According to the authors' response, it seems that this randomness is due to the training process? Can we always guarantee that $Q_\theta(s)-Q^*$ is Gaussian distributed and hence zero-mean? It should be fairly easy to construct examples where the difference $Q_\theta(s)-Q^*$ is mostly negative and cannot be Gaussian distributed.
=== **The guarantee we use is DR3 [2], which states convergence if ...** ===
I am not super familiar with the reference DR3 [2] but the condition $\phi(s,a)^T\phi(s,a)-\gamma\phi(s,a)^T\phi(s,a)\geq 0$ is trivially satisfied as long as $\gamma\in[0,1]$, no? Since $\phi(s,a)^T\phi(s,a)$ is a non-negative scalar. I might be missing something obvious here or the statement is slightly off?
=== **To the best of our knowledge, there are two types of convergence analysis in Q-Learning: 1) operator-theoretic analysis over tabular Q-functions, and 2) training dynamics of neural network parameters** ===
Case (2) should provide an even stronger and more general statement than case (1), and oftentimes the theoretical guarantee of case (2) is obtained via the properties of the contractive operator (e.g., linear TD). The algorithm proposed by the authors seems to be agnostic to the parameterization $Q_\theta$, and in the case when the parameterization is tabular, we should arguably be able to obtain tabular convergence to $Q^*$. This would be a valuable result to have.
Overall, I think it would be necessary to rigorously state the theoretical convergence of the algorithm as a major part of the paper. Otherwise, my overall impression is that the paper has proposed an interesting algorithm with some improvements (with empirical caveats), but it is not clear if this algorithm is principled enough and has a solid guarantee.
Since the various assumptions made in the paper (e.g., $Q_\theta - Q^*$ is Gaussian distributed) are not necessarily grounded, I'd love to see more theoretical properties entailed by the algorithm. At the end of the day, if the algorithm is provably convergent, we can still see it as a principled improvement over baselines. However, in the current stage, it feels like the paper still falls short in this aspect. The paper does provide empirical gains in certain cases, but to me it would be more satisfying to see such improvements are not just "tricks", but rather principled improvements.
I'd like to keep my score as a result.
---
Reply to Comment 1.1.1:
Title: Convergence proof in the tabular setting; a more rigorous convergence discussion in the deep learning setting
Comment: Many thanks for your detailed response!
> $Q_\theta$ is a random variable
$Q_\theta$ is defined as a random variable in Equation 9. Our motivation for the form of Equation 9 comes from the equation at the top of Page 3 of Thrun and Schwartz, 1993, *Issues in Using Function Approximation for Reinforcement Learning*. This equation treats $Q_\theta$, the $Q$-function with function approximation, as a random variable produced by $Q_\theta(s, a) = Q^\star(s, a) + y_{s, a}$ (notation edited to match ours for clarity), where $y_{s, a}$ is a noise source whose distribution the authors do not specify, but which we argue is heteroscedastic Gumbel (Equation 9). The authors say that the noise is introduced by the function approximator (bottom of Page 2).
> For any fixed training run, the sequence of $\theta$ is deterministic.
This is not true because we update $\theta$ by stochastic gradient descent which involves (probabilistically) sampling a minibatch from the replay buffer.
> Do you have references for this statement?
Section 3.1.1 *Maximum Likelihood and Least Squares* from Bishop, Pattern Recognition And Machine Learning, Springer 2006 has Equation 3.8: $p(t \mid x, \mathbf{w}, \beta) = N(t \mid y(x, \mathbf{w}), \beta^{-1})$, which may also be found in Section 5.5.1 *Conditional Log-Likelihood and Mean Squared Error* from Goodfellow et al., Deep Learning, MIT Press 2016, and Equation 11.1 from the draft `.pdf` of Murphy, *Probabilistic Machine Learning: An Introduction* MIT Press, March 2022. This equation is equivalent to $t = \epsilon + y(x, \mathbf{w}), \epsilon \sim N(\epsilon \mid 0, \beta^{-1})$. If $y(x, \mathbf{w})$ is a neural network with parameters $\mathbf{w}$ and the optimal value is produced from x by an unknown underlying optimal function, then the neural network estimate differs from the optimal value by Normal noise.
However, we realise that as our work concerns RL, drawing an analogy to supervised learning may be confusing, and if accepted, we will remove Appendix A.1 and instead use Thrun and Schwartz, 1993 as precedence for modelling the function approximator as a random variable differing from the optimal model by another random variable.
> It is not clear to me why it is fundamentally possible to model the prediction $Q_\theta(s)$ as being just a Gaussian variable away from the optimal Q-value.
We would like to point out that we assume $Q_\theta$ and $Q^\star$ differ by a *Gumbel*, not a *Normal/Gaussian*, variable (Equation 9 in our paper). To quote your original review, we show that "regular L-2 loss based Q-learning can be understood as maximum-likelihood estimation with Gaussian noise"; we share your concern that using a Normal/Gaussian is problematic, and indeed show histograms (Updated Figure 1 in the 1-page `.pdf`) demonstrating that this assumption does not hold.
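For intuition about why a Normal fit to maxima is problematic, a standard extreme-value-theory demonstration (our illustration, not an experiment from the paper) shows that the maximum over noisy estimates is right-skewed, unlike any Normal, and tends toward a Gumbel as the number of maximands grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Max over 30 "actions" of iid standard-Normal value noise, many repeats.
max_noise = rng.standard_normal(size=(100_000, 30)).max(axis=1)

def skewness(x):
    centered = x - x.mean()
    return np.mean(centered**3) / np.mean(centered**2) ** 1.5

# Any Normal has skewness exactly 0; a Gumbel has skewness ~ 1.14.
# The max of Normals sits in between, clearly right-skewed.
skew_of_max = skewness(max_noise)
```

The clearly positive skew is one way to see why a symmetric Normal noise model on maxed Q-values is mis-specified.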
> the DR3 statement is slightly off?
It should be $\phi(s, a)^\top \phi(s, a) - \gamma \phi(s, a)^\top \phi(s', a') \geqslant 0$, with $s'$ and $a'$ occurring at the next timestep from $s$ and $a$.
We recently found [1], which provides PAC-bounds backing up our convergence intuition. [1] states that a heteroscedastic regression loss (ours is based on this -- line 167) converges if $Q$ is near $y^\star$ (Section 4, Paragraph 1). Note that our use of DR3 has $Q$ converging to $y^\star$.
[1] Zhang et al., 2023, Risk Bounds on Aleatoric Uncertainty Recovery
> and in the case when the parameterization is tabular
DoubleGum learns $Q_\theta$ and $\sigma_\theta$, so in the tabular setting, two tables would need to be learned, $Q(s, a)$ and $\sigma(s)$. Like SAC, DoubleGum would follow the operation of soft policy iteration and its convergence would have a similar proof. The policy improvement step remains the same as SAC (as discussed in Lines 217 - 221, Appendices A.3, A.4) and would not need to use table $\sigma$. The policy evaluation step would update the two tables $Q$ and $\sigma$.
$\sigma(s) \leftarrow Variance_a[y^\star(s, a)]$ and $Q(s, a) \leftarrow y^\star(s, a)$, where the target $y^\star$ is defined in Equation 22. Equation 22 may be rewritten with an augmented reward $r_{\text{aug}} = r + \gamma \beta(s') c$, where $\beta(s) = \frac{\sqrt{3}}{\pi} \sigma(s)$ (rearranged from Equation 27), so convergence may be guaranteed by a similar argument to Lemma 1 of [2], regardless of how $\sigma$ is updated, just as SAC in the tabular case is guaranteed to converge regardless of how its temperature parameter changes. Note that in DoubleGum, $\beta(s')$ may be thought of as a state-dependent temperature parameter, as previously mentioned in Lines 218-8 and Appendix A.4.
[2] Haarnoja et al., 2018, Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
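For reference, the $\frac{\sqrt{3}}{\pi}$ factor in $\beta(s)$ is consistent with a moment-matching reading (our assumption about Equation 27, which is not reproduced here): if $\sigma(s)$ is the standard deviation of a Logistic with scale $\beta(s)$, then

```latex
% Variance of a Logistic distribution with scale parameter \beta:
\operatorname{Var}[X] = \frac{\pi^2 \beta^2}{3}
\quad\Longrightarrow\quad
\sigma = \frac{\pi \beta}{\sqrt{3}}
\quad\Longrightarrow\quad
\beta = \frac{\sqrt{3}}{\pi}\,\sigma .
```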
We would be happy to include both the above discussions about convergence in our work.
We would also be very grateful if you could quickly respond to these points, as there is not much time left for discussion! | Summary: Typically, TD learning assumes that the TD error follows a normal distribution with a fixed variance (induced by the Bellman squared loss). The authors argue that this assumption is too coarse in practice since the maximization of noisy Q-values across actions (when backing up the value in TD learning) is usually not Gaussian. To fix this, the authors discuss the proper limiting distribution of the Q-function when accounting for the Q-function maximization, and provide a practical temporal-difference backup algorithm that can accurately capture the distribution in the discrete control case and approximately capture it in the continuous control case. On a range of simulated robotic and control tasks, the proposed method is able to achieve better performance compared with existing approaches that naively back up the noisy target (without considering the interaction between the noise and the action maximization).
Strengths: To the best of my knowledge, this is the first paper that uses Gumbel distributions to model TD errors. The main strength of the paper is in the theoretical analysis around this TD error model and a theoretically justified, novel algorithm that can perform approximate TD backups with this new TD error model. The writing is clear and very easy to follow.
Weaknesses: The main weakness of the paper is the empirical evaluation. Overall, I am not convinced that 1) assuming that TD errors follow a homoscedastic normal distribution is problematic on the domains of tasks being tested, 2) the proposed method is able to improve upon existing approaches on these tasks.
- Section 4: I am not sure I am convinced that the Logistic model fits the empirical data better than the Gaussian model (from Figure 1). It makes me wonder how much of this is actually needed in practice to accurately account for the Gumbel noise. When do you expect the gap between these two models to be bigger?
- L196-197: How is the error bar computed? It seems a bit problematic to claim that the normal distribution is a suitable approximation given that their error bars overlap since it could be just due to the distribution variance.
- Just from Figure 3 it is hard to tell whether DoubleGum is better than MoG-DDPG. DoubleGum is slightly better on MuJoCo but slightly worse on both DMC and Box2D.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - L170: $\sigma(s)$ => $\sigma(s, a)$?
- I wonder how much of the problem in the assumption (that TD error is assumed to be Gaussian) can already be addressed by using distributional RL.
- It is very interesting to see that the method has a parameter that can adjust the degree of pessimism more smoothly than the twin networks in TD3. There are also methods like RedQ that can adjust this with more fine-grained control, which have not been considered by the authors (though admittedly RedQ would be more expensive as it requires more critic ensemble elements).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review!
We are very happy you highlighted our strong theoretical analysis of TD-errors, our justification for our resultant algorithm (DoubleGum), and that you found the writing clear and very easy to follow.
> [not convinced that] the proposed method is able to improve upon existing approaches
We have uploaded Figures 14 and 15 in our attached 1-page `.pdf`, which show aggregate learning curves across all 33 continuous control tasks. In these graphs, DoubleGum outperforms all benchmark algorithms. More details are given in point 1 of our global rebuttal.
> not sure I am convinced that the Logistic model fits the empirical data better than the Gaussian model (from Figure 1)
We expect little difference in fit between the *hetero*scedastic Normal and *hetero*scedastic Logistic. However, we also expect a vast difference in fit between a *hetero*scedastic Logistic and the *homo*scedastic Normal. We have updated Figure 1 to show this, and it is presented most prominently in Figure 1c. Updated Figure 1a fits homoscedastic distributions and Updated Figure 1b fits heteroscedastic distributions. Visually, heteroscedastic distributions fit far better than homoscedastic ones. More details are in point 3 of our global rebuttal.
> How much is actually needed in practice to accurately account for the Gumbel noise
Due to Updated Figure 1c, we believe we can do this with the heteroscedastic Normal.
> When do you expect the gap between these two models to be bigger?
In Updated Figure 1c, the gap between the homoscedastic Normal and the heteroscedastic distributions increases during training. However, the gap between both heteroscedastic distributions remains close throughout.
> L196-197: How is the error bar computed?
As mentioned in lines 185-6, "The line and error bars in Figure 1c reflect the empirical mean and standard deviation of the fitted NLLs computed over 12 training runs". The same procedure is used in Updated Figure 1.
> It seems a bit problematic to claim that the normal distribution is a suitable approximation given that their error bars overlap?
Due to Updated Figure 1c, we claim that the heteroscedastic Normal is a suitable approximation, but the homoscedastic Normal is not.
> Just from Figure 3 it is hard to tell whether DoubleGum is better than MoG-DDPG.
Figure 14 in our 1-page `.pdf` is an aggregate version of Figure 3. This new graph shows that in aggregate, DoubleGum outperforms MoG-DDPG. More details are presented in Point 1 of our global rebuttal.
> L170: $\sigma(s)$ => $\sigma(s, a)$?
This is indeed a typo!
> how much of the problem in the assumption (that TD error is assumed to be Gaussian) can already be addressed by using distributional RL
We added a new baseline: QR-DDPG, which combines Quantile Regression [1] with DDPG. Following the original implementation, we learn 201 discrete quantiles. Pessimism of QR-DDPG is adjusted by deciding whether to use Twin Critics/not. Twin-QR-DDPG trains an ensemble of two Quantile Critics, and selects the minimum of two quantile estimates to compute the bootstrapped estimates. Results are reported in our 1-page `.pdf`.
DoubleGum outperforms QR-DDPG with default hyperparameters (Figure 14) and tuned pessimism per-suite (Figure 15). With default hyperparameters, DoubleGum outperforms QR-DDPG, with a bit of overlap in DeepMind Control and MetaWorld (Updated Figure 3), and comfortably outperforms QR-DDPG with tuned pessimism per suite (Updated Figure 5). In aggregate, MoG-DDPG outperformed QR-DDPG (Figure 14). Note that MoG-DDPG is a form of distributional RL and as mentioned in lines 236-7, DoubleGum may be considered a special case of MoG-DDPG.
Our theory showed that TD-errors follow a heteroscedastic Logistic, and we believe that modeling this distribution should be sufficient for distributional RL. We hypothesize that more complex distributions considered by QR and MoG might overfit, but this is a question beyond the scope of our work.
Finally, there are empirically larger gains in performance by adjusting pessimism (ie choosing DDPG or TD3) vs choosing to use distributional RL/not (QR-DDPG vs DDPG) (Figure 13, attached `.pdf`). While it is important to model the distribution of TD-errors, another important component is adjusting pessimism, and DoubleGum presents a computationally efficient way of doing both.
[1] Dabney, W., Rowland, M., Bellemare, M. G., and Munos, R. Distributional reinforcement learning with quantile regression.
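For readers unfamiliar with [1], quantile regression estimates each quantile by minimising the pinball loss; a generic numpy sketch of that idea (our illustration, not the QR-DDPG implementation — the data, grid, and names below are arbitrary):

```python
import numpy as np

def pinball_loss(u, tau):
    """Quantile-regression (pinball) loss on residuals u = target - prediction."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(0)
returns = rng.normal(loc=2.0, scale=1.0, size=20000)

# Minimising the mean pinball loss at a given tau recovers the tau-quantile;
# here tau = 0.5 should land near the empirical median of `returns`.
candidates = np.linspace(-2.0, 6.0, 801)
losses = [pinball_loss(returns - c, tau=0.5).mean() for c in candidates]
best = candidates[np.argmin(losses)]
```

QR-style critics learn many such quantiles jointly (201 in the rebuttal above), typically with a Huber-smoothed version of this loss.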
> methods like RedQ that can adjust this [pessimism] with more fine-grain controls which has not been considered by the authors
As REDQ involves an increased replay ratio, we did not have the time during the rebuttal period to benchmark it. Instead, we introduce a new algorithm which we name FinerTD3. FinerTD3 trains an ensemble of critics and selects the $i$th smallest value within the ensemble as the bootstrapped target per sample. Note that TD3 is a special case with an ensemble of 2 that selects the smallest value. We used an ensemble size of 5. Over all continuous control tasks, we found that it was best to select the second-smallest value in the ensemble. These values were used in Updated Figure 3 and Figure 14 (graphs with default hyperparameters). For each suite in DeepMind Control, MuJoCo, MetaWorld and Box2D, the best values to select were respectively 5 (largest), 2 (second smallest), 4 (fourth smallest), and 2 (second smallest). These values were used in Updated Figure 4 and Figure 15 (graphs where pessimism is tuned per suite).
In aggregate (Figures 14 and 15), FinerTD3 performs marginally worse than DoubleGum. We believe this is because FinerTD3 adjusts pessimism more finely than other baselines, but still not as finely as the continuous scalar in DoubleGum.
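For concreteness, the FinerTD3 target-selection rule described above can be sketched as follows (our own illustration, not the authors' code):

```python
import numpy as np

def finer_target(q_ensemble, i):
    """Select the i-th smallest Q-estimate per sample.

    i = 1 recovers the TD3-style per-sample minimum; larger i is less pessimistic.
    q_ensemble: (ensemble_size, batch) array of bootstrapped Q-estimates.
    """
    return np.sort(q_ensemble, axis=0)[i - 1]

# Ensemble of 3 critics, batch of 2 samples.
q = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [2.0, 0.5]])

most_pessimistic = finer_target(q, 1)  # per-sample minimum: [1.0, 0.5]
milder = finer_target(q, 2)            # second smallest:    [2.0, 1.0]
```

The ensemble index $i$ thus acts as a discrete pessimism knob, in contrast to the continuous scalar $c$ in DoubleGum.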
We hope that our response better motivates the problem of modeling TD-errors as homoscedastic normal distribution and better showcases the empirical improvements of DoubleGum. | Rebuttal 1:
Rebuttal: Many thanks to all reviewers for their comments and feedback!
We are delighted that all five reviewers mentioned the strong experimental analysis of our DoubleGum algorithm and that four reviewers highlighted the novelty of our theoretical analysis (dXQf, WFvy, uPm8, Mh9L). In addition, we are also very happy that four reviewers marked the presentation as 3 (dXQf, WFvy, je5E, Mh9L), with two explicitly mentioning the clarity of writing (WFvy, Mh9L).
There were four common issues raised by our reviewers.
**1. Three reviewers (dXQf, WFvy, Mh9L) were concerned that we did not clearly show empirical improvements of our algorithm (DoubleGum) over baselines.** We have better presented DoubleGum's empirical improvements over all baselines in Figures 14 and 15 in our 1-page `.pdf`. These graphs aggregate all learning curves over all tasks in all suites into one graph. They show that in aggregate, DoubleGum outperforms all baselines when all methods use default hyperparameters (Figure 14) and also when all methods' pessimism hyperparameters are tuned per suite (Figure 15).
**2. Three reviewers (dXQf, je5E, and Mh9L) suggested benchmarking the algorithm SAC.** We have benchmarked SAC and added its aggregate learning curves to Updated Figures 3 and 5 as well as Figures 14 and 15 in our 1-page `.pdf`. Pessimism of SAC is tuned by varying the use of Twin Networks/not, just as we do with TD3/DDPG in Updated Figure 3, as specified in line 250. In aggregate, DoubleGum outperforms SAC with both default hyperparameters (Figure 14) and tuned pessimism per suite (Figure 15). Finally, over each suite, DoubleGum outperforms SAC in all suites, both with default hyperparameters (Updated Figure 3) and tuned pessimism per suite (Updated Figure 5). If accepted, we will upload individual learning curves from each task in the appendix.
**3. Two reviewers (dXQf, WFvy) did not find Figure 1 convincing.** Figure 1 aims to present empirical evidence showing that the heteroscedastic Normal is a far better approximation of the heteroscedastic Logistic than a homoscedastic Normal. We have updated Figure 1 in our 1-page `.pdf` to better present this empirical evidence. No data has been changed between the original and updated versions of Figure 1. Instead, we have changed the labels to show which distributions are hetero- and which are homoscedastic, removed distributions that we do not discuss in the main paper, and added a homoscedastic Normal NLL curve to subgraph 'c' so that the goodness of fit of all three aforementioned distributions may be compared.
**4. Two reviewers (uPm8, Mh9L) queried the sensitivity and stability of our algorithm with respect to our pessimism hyperparameter $c$.** We intend $c$ in DoubleGum to be used in a similar way to the learning rate in gradient descent optimizers. We thus intend DoubleGum to be sensitive to $c$ such that performance may be improved by tuning $c$, but we have also presented a default value of $c=-0.1$ (updated from $c=0$ in our paper) that showcases good aggregate performance across all tasks (Figure 14). The default value is inspired by optimizers that ship with a default learning rate that performs well in aggregate and can be naively tried (e.g. Adam with a learning rate of 3e-4).
Individual issues raised by the reviewers include suggesting more algorithms to be benchmarked (XQL, distributional RL, an algorithm with finer pessimism control), empirical evidence for the effectiveness of our pessimism factor hyperparameter, a discussion of theoretical convergence guarantees for our algorithm DoubleGum, and clarifications of our mathematics. We address all individual issues in individual rebuttals.
Pdf: /pdf/5f66ef8d0b1e95f98ce87816cc97e03eedf15502.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Instead of modeling the TD error with a homoscedastic normal distribution, this paper tries to utilize two heteroscedastic Gumbel distributions for more complex and accurate error modeling. Based on this assumption, the authors presents a modified Q-learning algorithm, DoubleGum for solving discrete and continuous control tasks. In particular, DoubleGum can also achieve the effect of pessimism so as to avoid the overestimation. Empirical results show more stable training and competitive performance across classic discrete and continuous tasks.
Strengths: It's a novel perspective to model the TD-error following the Gumbel distribution.
Weaknesses: 1. Double error modeling in section 3.1 seems a little redundant and not intuitive enough;
2. Though linking the DoubleGum for continuous control to the pessimistic value estimation, the empirical evidence is lacking.
3. Experimental results don't show outstanding performance improvement across multiple benchmarks, and lack a comparison with similar algorithms, e.g. extreme Q-learning [1]. In addition, it is shown that the Gumbel error model helps reduce the overestimation issue, which is quite natural considering its close connection with SAC. So why not compare its behavior with SAC in more detail?
[1] Garg, Divyansh, et al. "Extreme Q-Learning: MaxEnt RL without Entropy." The Eleventh International Conference on Learning Representations. 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What's the motivation of using an additional Gumbel noise for the new function approximator $Q^{new}$ in Sec.3.1? Because this modeling does not seem to involve the Extreme Value Theorem, why to model the error using the same Gumbel distribution?
2. About DoubleGum for continuous control, it is said that, for ease of implementation, the pessimism factor $c$ is set to 0; if so, it doesn't look much different from common off-policy methods;
3. Sec.3.2 tries to connect DoubleGum with pessimistic value estimation, but the experimental results only contain the performance comparison for different choices of $c$; it would be better to compare the magnitude of the value function directly;
4. Motivated by a similar idea, extreme Q-learning also models the TD-error as a Gumbel distribution, so it would be better to make a detailed comparison between the two algorithms, including methodological differences and experimental performance.
5. About Fig.1a and Fig.1b, there is little difference between the fits of the Logistic and Normal distributions. I am not sure that this is good evidence to justify the proposed error model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your review!
We are very happy that you found our modeling of the TD-error novel.
> Experimental results don't show outstanding performance improvements across multiple benchmarks
We have uploaded Figures 14 and 15 in our attached 1-page `.pdf`, which show aggregate learning curves across all 33 continuous control tasks. In these graphs, DoubleGum outperforms all benchmark algorithms. More details are given in point 1 of our global rebuttal.
> Lack of comparison with similar algorithms [extreme Q-learning (XQL), SAC].
We have added the SAC and XQL benchmarks in Updated Figures 3 and 5 as well as Figures 14 and 15 in our attached 1-page `.pdf`. In brief, DoubleGum outperforms SAC and XQL in aggregate. More details of how SAC was benchmarked are given in point 2 of our global rebuttal.
Our XQL benchmark was based on TD3 as it marginally outperformed SAC (Last line of Appendix D.6). Pessimism of XQL was varied by the use of Twin Networks/not, respectively yielding Twin-XQL and XQL (equivalent to what the authors name X-TD3 and X-TD3-DQ). The value of $\beta$ in XQL varies per task, but we vary them per suite, to be consistent with our other algorithms. We sweep over $\beta$s of 3, 4, 10, 20 for Twin-XQL and 1, 2, 5 for XQL, consistent with Appendix D.6 in XQL. Over all continuous control tasks, the best $\beta$ values were 20 and 5 for Twin-XQL and XQL respectively. These values were used in Updated Figure 3 and Figure 14 (graphs with default hyperparameters). For each suite in DeepMind Control, MuJoCo, MetaWorld, and Box2D, the best $\beta$ values were respectively 3, 20, 20, and 4 for Twin-XQL and 1, 5, 5, and 5 for XQL. These values were used in Updated Figure 4 and Figure 15 (graphs where pessimism is tuned per suite).
DoubleGum outperforms XQL with both default hyperparameters (Figure 14) and tuned pessimism per suite (Figure 15). Over each suite, DoubleGum outperforms XQL in all suites apart from Box2D, both with default hyperparameters (Updated Figure 3) and tuned pessimism per suite (Updated Figure 5).
> What's the motivation of using an additional Gumbel noise for the new function approximator $Q^\text{new}$?
We introduced $Q_\theta^\text{new}$ to write a Bellman Equation with one heteroscedastic Logistic noise source (Equation 19), instead of a Bellman Equation written with $Q_\theta$ with two heteroscedastic Gumbel noise sources (Equation 15). The additional Gumbel noise source shifts $Q_\theta$ into $Q_\theta^\text{new}$ using the property of the log-sum-exp (Appendix B.2). We realize that a simpler way of presenting this is to absorb the Gumbel noise source, simplifying Section 3.1 by removing Equations 16-18.
> it's said that, for ease of implementation, the pessimism factor is set to 0; if so, it doesn't look much different from common off-policy methods
We have now changed the default pessimism factor to $c = -0.1$ as this gives us empirically better per-suite performance (Updated Figure 3) and aggregate performance with fixed hyperparameters (Figure 14). We would also like to point out that, unlike common off-policy methods, our algorithm functionally uses variance networks, which are not common in Q-Learning.
> Empirical evidence is lacking [for pessimistic value estimation ...] compare the magnitude of the value function directly
This is a great suggestion, and we have added Figure 17 in our 1-page `.pdf`. This graph shows how the magnitude of the target value function, averaged over 256 training samples, varies during training. The line and error bars show the mean $\pm$ standard deviation of the averaged target value function. Four graphs are presented, showing results of one task from each suite. In each graph, throughout training, the magnitude of the target value function is smaller for a smaller $c$, although there is some overlap between the error bars.
> Make a detailed comparison between [our algorithm DoubleGum and XQL], including methodological differences and experimental performance
We would be happy to include a comparison of XQL and DoubleGum. We have detailed the experimental comparison above. Methodologically, in brief, XQL models TD-errors using one homoscedastic Gumbel, whereas DoubleGum models the same errors with two heteroscedastic Gumbels. This leads to a different loss function for the critic: XQL uses Gumbel regression with a loss derived from the MLE of a Gumbel, whereas our loss function is derived from moment matching the heteroscedastic Logistic (derived by combining the two heteroscedastic Gumbels) with a heteroscedastic Gaussian. While DoubleGum learns the heteroscedastic spread parameter continually throughout training, the spread parameter in XQL is a hyperparameter defined at the beginning of training and fixed throughout.
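Up to constants, the moment-matched heteroscedastic Gaussian objective described above is the standard heteroscedastic negative log-likelihood; a generic sketch of that loss (our illustration, not the authors' implementation):

```python
import numpy as np

def hetero_gaussian_nll(y, mu, sigma):
    """Per-sample negative log-likelihood of y under N(mu, sigma^2),
    where sigma is learned and input-dependent (heteroscedastic)."""
    return 0.5 * np.log(2.0 * np.pi) + np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2

# With a fixed (homoscedastic) sigma, the log(sigma) term is a constant and
# the objective reduces to the usual L2 / MSE critic loss.
val = hetero_gaussian_nll(y=1.0, mu=1.0, sigma=1.0)  # 0.5 * log(2*pi) ≈ 0.9189
```

In this framing, the XQL spread hyperparameter corresponds to fixing `sigma` in advance, while DoubleGum-style training would update it continually.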
> About Fig.1a and Fig.1b, there is little difference for fittings by Logistic distribution or Normal distribution.
We expect to see little difference in fit between a *hetero*scedastic Normal and the *hetero*scedastic Logistic. However, we also expect to see a vast difference in fit between a *hetero*scedastic Logistic and the *homo*scedastic Normal. We believe that we did not make the distinction between hetero- and homoscedastic Normals clear enough in Figure 1. Therefore, we have updated Figure 1 in our attached 1-page `.pdf`. More details are in point 3 of our global rebuttal.
Updated Fig.1a fits homoscedastic distributions and Updated Fig.1b fits heteroscedastic distributions. Visually, the heteroscedastic distributions fit the underlying histogram far better than the homoscedastic distributions. Also, Updated Fig.1c matches our expectation and shows little difference between hetero-Logistic and hetero-Normal, but a vast difference with the homo-Normal.
We hope that our response clarifies the motivation for the mathematics in Section 3.1, and presents compelling evidence for DoubleGum's pessimistic value estimation.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response and I would maintain my original score. | null | null | null | null | null | null
MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks | Accept (poster) | Summary: The paper examines the causal and moral judgments made by large language models (LLMs) and their alignment with human intuitions. To do this, the researchers created a dataset of stories from 24 cognitive science papers, annotating each story with factors that influence people's judgments, such as norm violations and the avoidability or inevitability of harm.
The authors find that, on an aggregate level, the alignment between LLMs and human intuition has improved with newer models. However, statistical analyses reveal that LLMs and humans weigh these factors differently when making judgments.
Strengths:
**Originality**: The paper presents an interesting approach to evaluating large language models (LLMs) by testing their ability to handle tasks related to causal judgments and moral permissibility. The authors have transcribed stories from various papers and used them to test the LLMs, focusing on several factors that influence people's causal judgments and moral dilemmas. This approach is original and provides a new perspective on the capabilities of LLMs.
**Quality**: The authors have meticulously transcribed stories from a number of papers, ensuring a wide range of scenarios for testing the LLMs. They have also collected responses for each story from a crowd-sourcing platform, ensuring a diverse set of responses for analysis.
**Clarity**: The paper is written in a clear and understandable manner. The authors have explained their methodology and the factors they focused on in a detailed and comprehensible way.
**Significance**: The paper contributes to understanding how LLMs handle complex tasks related to causality and morality. This is an important area of research, given the increasing use of LLMs in various applications. The insights gained from this study could be useful for improving these models in the future. The paper also opens up new avenues for research in this area.
Weaknesses: There are a few areas where it could be improved:
1. **Evaluating with more models**: The paper is a bit skewed towards OpenAI models (GPT3 and beyond) for the evaluation. Including more diverse models could provide a more comprehensive understanding of how different LLMs perform on the tasks. This could also help identify whether the observed behaviors are specific to these models or are more generally applicable to LLMs.
2. **Comparing with human performance**: While the paper does a good job of comparing the performance of different LLMs, it does not provide a clear comparison with human performance. This makes it difficult to assess how close the models are to human-level performance on these tasks. Including a human baseline could provide a more meaningful context for the results.
3. **Analyzing incorrect predictions**: The paper could benefit from a more detailed analysis of the models' incorrect predictions. This could help identify common patterns or biases in the models' errors, which could provide insights for improving the models.
4. **Generalizability of findings**: The paper's findings are based on a specific set of stories and tasks. It's unclear how generalizable these findings are to other tasks or domains. The authors could address this by testing the models on a wider range of tasks or by discussing the limitations of their approach in more detail.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. **Evaluating with more models**: The paper primarily focuses on GPT-type models and its variants. Could the authors elaborate on why they chose to focus on these models? Would the inclusion of other language models provide different insights?
2. **Comparing with human performance**: The paper lacks a clear comparison with human performance. Could the authors provide a human baseline for these tasks? This could help in understanding how close the models are to achieving human-level performance.
3. **Analyzing incorrect predictions**: The paper could benefit from a more detailed analysis of the models' incorrect predictions. Could the authors provide more insights into the common patterns or biases in the models' errors? This could potentially help in improving the models.
4. **Generalizability of findings**: The findings of the paper are based on a specific set of stories and tasks. Could the authors discuss how generalizable these findings are to other tasks or domains?
5. **Interpretability and transparency**: The paper presents an analysis of how LLMs reason about causality and morality. However, it's not clear how these insights can be used to improve the interpretability and transparency of these models. Could the authors provide some thoughts on this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper addresses the limitations of the work and potential negative societal impact, although not in a dedicated section. The authors acknowledge that their focus is narrow and only on certain aspects of alignment with humans. They caution that their work should not be used to make sweeping and general statements about AI-human alignment. They also note that their moral permissibility task is not a certification task and should not be used as a flat benchmark to beat.
The authors also discuss the ethical considerations of their work, emphasizing the importance of assessing implicit intuitions underlying commonsense reasoning abilities in large language models (LLMs), especially in cases related to morality. They acknowledge that even if a model is not explicitly given the responsibility to make moral judgments, these judgments can appear across many forms of freely generated text. They also recognize the potential for replicating human biases in LLMs and state that this is something they would want to avoid.
In terms of potential negative societal impact, the authors don't explicitly discuss this. However, they do acknowledge the potential for misuse of LLMs and the ethical considerations that come with their use. They also discuss the importance of transparency and consent in their data collection process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful questions and comments. We will make sure that we incorporate all the feedback.
> Evaluating with more models: The paper primarily focuses on GPT-type models and its variants. Could the authors elaborate on why they chose to focus on these models? Would the inclusion of other language models provide different insights?
We have also included non-GPT-type models such as RoBERTa-large, Electra-gen-large, and Delphi. As far as we know, GPT-type large language models are being widely adopted by the industry for various applications and use scenarios. There is an urgent need to evaluate the behavior of these LLMs to ensure safety and alignment.
Also, some of the latest language modeling training techniques, such as instruction fine-tuning and RLHF, have been mostly implemented on GPT-type models. For example, Alpaca-7B is instruction fine-tuned, and text-davinci-003 and claude-v1 are RLHF fine-tuned. Even though, on a low level, these models seem to share the same GPT-like architecture, the difference in training methods and how that can cause behavioral shifts in LLMs is what we aim to investigate.
> Comparing with human performance: The paper lacks a clear comparison with human performance. Could the authors provide a human baseline for these tasks? This could help in understanding how close the models are to achieving human-level performance.
Thank you for this suggestion. Due to time constraints, we plan to include a human baseline for the final version of our work.
> Analyzing incorrect predictions: The paper could benefit from a more detailed analysis of the models' incorrect predictions. Could the authors provide more insights into the common patterns or biases in the models' errors? This could potentially help in improving the models.
We did a quick error analysis by annotating 80 examples: 10 examples on which each of 4 models (text-curie-001, claude-v1, gpt-3.5-turbo, gpt-4) made mistakes, across 2 tasks (causal/moral). We ask each model why it made its choice on the original story (prompting for an explanation). Examining the explanations, we conduct two analyses: 1) a quantitative analysis of model hallucination; 2) a preliminary qualitative analysis of what tendencies the models show.
| Model | Causal | Moral |
|----------------|--------|-------|
| text-curie-001 | 8/10 | 3/10 |
| claude-v1 | 2/10 | 2/10 |
| gpt-3.5-turbo | 2/10 | 0/10 |
| gpt-4 | 0/10 | 0/10 |
For the quantitative analysis, we first annotate for hallucination. A model hallucinates when it re-states the core story in a way that is inconsistent with the facts provided in the story. For example, if an action is not performed by character A, and the model thinks it is performed by character A, we count this as a hallucination. Due to time constraints, we could only annotate 80 examples, but we can clearly see that, among the mistakes the models make, smaller models tend to hallucinate more.
For the qualitative analysis, we read the model explanations on examples where no hallucination is found, and we conclude that larger models indeed have a different preference compared to humans in these stories. On moral stories, Claude-v1, GPT-3.5-turbo, and GPT-4 all seem to be bound by pre-entered moral principles and choose the same action regardless of the story circumstances (we speculate it to be the influence of AI constitution) [1], while humans tend to consider a few more factors (i.e., nuances in the story). For example, when faced with intricate choices, language models tend to take a passive stance. This is true even if acting could save a greater number, even when a smaller number would face certain peril regardless of the action. However, when self-sacrifice is in question, the language model is inclined to metaphorically 'sacrifice' itself. Our observation is based on a limited set of examples. We hope to include a more comprehensive analysis for the final version of the paper.
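The per-(model, task) counts in the table above come from a simple tally over the annotated examples. The sketch below is an illustrative reconstruction, not the authors' code; the annotation records in it are invented.

```python
# Hypothetical sketch of the hallucination tally described above: each
# annotated example records which model and task it came from and whether
# the model's explanation was judged a hallucination. Records are invented.
from collections import defaultdict

annotations = [
    # (model, task, is_hallucination)
    ("text-curie-001", "causal", True),
    ("text-curie-001", "causal", False),
    ("gpt-4", "moral", False),
    # ... in the actual analysis, 80 annotated examples (4 models x 2 tasks x 10)
]

# (model, task) -> [hallucination count, total annotated]
counts = defaultdict(lambda: [0, 0])
for model, task, is_hall in annotations:
    counts[(model, task)][0] += int(is_hall)
    counts[(model, task)][1] += 1

for (model, task), (h, t) in sorted(counts.items()):
    print(f"{model:15s} {task:6s} {h}/{t}")
```

With the full 80 records, this produces exactly the "8/10"-style cells shown in the table.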
> Generalizability of findings: The findings of the paper are based on a specific set of stories and tasks. Could the authors discuss how generalizable these findings are to other tasks or domains?
Thank you for your point. Our study offers a specialized evaluation suite for causal and moral judgment tasks, filling a significant gap left by benchmarks like HELM [2] that cover many tasks but overlook these critical areas. While our focus is specific, our benchmark complements others, aiming to enrich the holistic evaluation of language models across various domains.
> Interpretability and transparency: The paper presents an analysis of how LLMs reason about causality and morality. However, it's not clear how these insights can be used to improve the interpretability and transparency of these models. Could the authors provide some thoughts on this?
Thank you for the question. We provide additional error analyses in the response. By pinpointing where LLMs diverge from human judgments, we hope to lay the groundwork for future research to improve model interpretability. Identifying these discrepancies can guide refinements in LLM training, potentially leading to models that resonate better with human intuitions and are more transparent in their reasoning.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
[1] Bai, Yuntao, et al. "Constitutional ai: Harmlessness from ai feedback." arXiv preprint arXiv:2212.08073 (2022).
[2] Liang, Percy, et al. "Holistic evaluation of language models." arXiv preprint arXiv:2211.09110 (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal! The answers to my own questions seem persuasive, and I feel that the new analysis is useful. I continue to support the acceptance of this paper. | Summary:
This paper investigates to what extent LLMs can align with human intuitions when making causal and moral judgments. To do this, they collected a dataset of stories from 24 cognitive science papers and created a causal and moral judgment challenge set. They evaluate different LLMs about their alignment with humans and reveal that the implicit preferences can be different even for LLMs trained with the same technique. They find that increasing model sizes actually impact those models’ aggregate-level alignment differently.
Strengths: With the wide spread of LLMs, to understand the alignment between humans and models is an important topic. In this paper:
- They have provided a dataset to understand the human-model alignment, especially in the causal and moral judgment perspective.
- The resources are from cognitive studies, which makes them more reliable than normal text resources.
Weaknesses: The paper wants to analyze the alignment between humans and models; however, it lacks a description of how the human study was conducted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - For the factors in Table 2, are they from existing literature reviews or summarized by the authors?
- For the dataset, are those factor labels annotated by the authors or by original cognitive scientists?
- For the human participants, do you have any criteria to select who can participate in the survey?
- Did you educate the participants about different factors before you conduct the survey?
- It seems only part of Fig.2 is visible on my side. Please check that.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful questions and comments. We will make sure that we incorporate all the feedback.
> The paper wants to analyze the alignment between humans and models, however it lacks some description of how they conducted the human study.
Thank you for bringing this up. We have provided a description of our human subject recruitment and how we conducted our human study in Sec A.14 Crowd Sourced Voting Experiment Design and Interface. We will include a screenshot of our study interface and provide a more detailed description in the final paper.
> For the factors in Table 2, are they from existing literature reviews or summarized by the authors?
All the factors mentioned in Table 2 are taken directly from the papers that conducted the original experiments. A graduate student and a professor in moral/causal psychology were involved in carefully discussing and designing these factors. A thorough literature review was conducted to make sure these factors are comprehensive enough.
> For the dataset, are those factor labels annotated by the authors or by original cognitive scientists?
Factor labels for each story are annotated by two data annotators who have backgrounds in psychology. An annotation guideline is included in the supplementary code zip file. Two annotators reached >0.8 inter-annotator agreement (see Appendix Sec A.7 Annotation Guidelines and Sec A.13 Annotation Agreement Calculation).
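As an aside on the >0.8 agreement figure: a common way to compute inter-annotator agreement on binary factor labels is Cohen's kappa. The sketch below is illustrative only; the label lists are invented, and Cohen's kappa is an assumption here (the paper's Appendix Sec A.13 defines the actual calculation).

```python
# Illustrative sketch (not the paper's code): Cohen's kappa for two
# annotators assigning binary factor labels. Label lists are invented.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label lists."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each annotator's label marginals.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
ann2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(ann1, ann2), 3))  # -> 0.783
```

Values above 0.8 are conventionally read as near-perfect agreement, which is why the reported figure supports the annotation quality claim.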
> For the human participants, do you have any criteria to select who can participate in the survey?
We only recruit participants with English as their first language. Studies that investigate causal and moral intuitions in cross-cultural contexts have often been conducted with translated stories. We believe this is out of the scope of our current study. This information is included in Appendix Sec A.14 Crowd Sourced Voting Experiment Design and Interface.
> Did you educate the participants about different factors before you conduct the survey?
No. Participants were only asked to read a story and select a binary response. We conduct the experiment as close to the original paper’s experimental setup as possible. We do note that some papers did not fully report their experimental setup or participant selection strategy.
> It seems only part of Fig.2 is visible on my side. Please check that.
We apologize for the inconvenience. It seems that this is an issue caused by the LaTeX compilation. A possible solution could be to open the PDF file with Adobe Reader. We will fix this for the final version of our paper.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
---
Rebuttal Comment 1.1:
Comment: Thanks! I have read the authors' rebuttal and do not have further questions. | Summary: This paper presents a new challenge set of hard edge cases intended to test models' understanding of the nuances of the directness of causation and moral culpability, by collecting them from a set of cognitive science papers. This has the clever effect of not only getting challenging stories, but also ones that vary along specific features important to humans.
They test LLMs on these stories to measure agreement with human intuitions, and annotate those cases for a set of features so that one could draw insight from the disagreements.
Strengths: It is a well-written and well-considered paper which both presents a new useful challenge set, and utilizes it to provide interesting analysis of LLM tendencies in causal culpability and moral judgments. It could clearly lead to further uses both in the evaluation of new models and in further analysis. The literature review is, as far as I could tell, comprehensive.
The work seems rigorous throughout - I appreciate the thorough explorations with personas and automatic prompt engineering, which alleviate worries about the normal fickleness of prompt choice.
Weaknesses: - The size of the challenge set (around 200 stories, I believe) is somewhat limited; I don't think that's too much of a worry for such a challenge set, so I wouldn't view it as a major weakness.
- quibble: A seemingly left-over note on line 232: " This is very very interesing, make the flow better."
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - I'd be very curious about which personas and prompts would lead to worst-case performance for various models, since that might give insight into how the models go awry.
- The improvements in alignment with human judgements from adopting a utilitarian/consequentialist framing are fascinating. However, that doesn't mean that all humans have a utilitarian framing. Is there any concern that measuring against the average of human judgements might ignore variance between different humans on such judgement tasks?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The ethical considerations section seems thoughtful, and I see no unaddressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful questions and comments. We will make sure that we incorporate all the feedback.
> quibble: A seemingly left-over note on line 232: " This is very very interesing, make the flow better."
Thank you for pointing this out – we have removed this in our paper.
> I'd be very curious about which personas and prompts would lead to worst-case performance for various models, since that might give insight into how the models go awry.
We describe the best and worst persona in Appendix Sec A.4, lines 696-700. We have copied and pasted the section below for ease of reading:
For both models, the persona that most closely aligned with our collected responses for Causal Judgment Task is “Emily White is a Republican who got fed up with the party”, and the least aligned persona is “Angela Campbell is a woman who likes to party until the sun comes up”. For Moral Permissibility Task, the most aligned persona for both models is “Allen Lee is a really cool guy”, and the least aligned persona is “Paul Brown is an anti-vaccine activist”.
We find this very interesting. We used Prolific (a crowd-sourcing website) to provide labels for our challenge set. This is also a popular website that psychology studies use to recruit participants. Whether there is political bias in LLMs is a fascinating topic and beyond the scope of our paper, but we would like to investigate it in the future.
> However, that doesn't mean that all humans have a utilitarian framing. Is there any concern that measuring against the average of human judgements might ignore variance between different humans on such judgement tasks?
There is a difference between what prompt aligns LLMs best with humans and the actual responses produced for each story. The prompt that makes LLMs align best with human reasoning is “adopt a utilitarian/consequentialist framing”, but this does not mean that human participants explicitly considered consequentialist principles when providing judgments for the different scenarios. Indeed, there are scenarios where human participants make judgments that go against utilitarian principles. For example, in one of the stories, participants are asked whether it's morally permissible to kill 5 children of a particular race in an orphanage in order to save hundreds of other children in WWII. Here, participants overwhelmingly refuse to kill even if it means that this would have resulted in the greatest number of lives saved. So, as the reviewer suggests, while utilitarian principles certainly seem to play an important part in human moral reasoning, it's not the only consideration that matters to humans.
The reviewer is also right in pointing out that different people's moral intuitions vary substantially. For some individuals, utilitarian principles matter a lot, whereas, for other individuals, deontological rules that prescribe what actions are right or wrong (e.g., do not kill) have a strong impact on their judgments. Currently, our evaluation of LLM alignment is against the aggregate of human judgments, and here it looks like prompting the LLM to adopt a utilitarian framing leads to the best alignment. That said, it's possible that different prompts would help the model to align better with different subgroups of human participants. While we are ultimately interested in better capturing the variance in human moral and causal intuitions, we chose not to do this for the current paper.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
---
Rebuttal Comment 1.1:
Comment: Hi authors! Thank you for the rebuttal; the answers to my own questions seem persuasive, and I feel that the new analysis (in the rebuttal to other reviews) is useful . I continue to support the acceptance of this paper. | Summary: This paper focused on large language models' causal and moral intuitions and investigated the alignment between LLMs and humans' causal and moral judgments. For this purpose, the authors collected story datasets from the field of cognitive science and manually annotated each story with human judgments and underlying latent factors. Based on this dataset, a diverse range of LLMs with different model scales and training methods are evaluated. The authors then statistically revealed that LLMs weigh factors differently than humans, indicating divergent implicit preferences and emphasizing the importance of curated datasets and cognitive science insights in understanding model preferences and alignment.
Strengths: * This paper is well-motivated by philosophy and cognitive science and focused on an exciting topic, LLMs' causal and moral intuitions. Such an interdisciplinary insight would benefit the better understanding of LLMs' behaviours.
* The authors summarized a systematic framework of the underlying latent factors of causal and moral judgements based on cognitive science, which might help improve the interpretability of LLMs.
* The constructed judgment dataset is high-quality, with a well-designed annotation protocol and high inter-rater agreement (>0.8).
* The authors benchmarked the alignment level between humans and a wide range of LLMs. They also conducted comprehensive analyses and made inspiring conclusions like those in Sec. 4.2.2, e.g., differences in Benefits.
Weaknesses: * The constructed dataset is too small, and the coverage is limited. Two hundred and six instances are highly insufficient for investigating LLMs' properties, which might make the conclusions biased. This can be observed in Table 3 (a): the relatively wide bootstrapped confidence intervals indicate high variance and unreliable results. This is my biggest concern with this work.
* Some essential results need more in-depth analysis and explanation. (1) The unnatural results in Table 3(a) need more analysis. Why did the aligned and larger Alpaca-7B get lower Agg than GPT3-curie-v1 on Causal Judgement? Why did davinci-002 outperform the well-aligned davinci-003 on moral judgement? (2) The authors should provide some (even initial) analysis of the differences introduced in Sec. 4.2.2 though they are attractive.
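For readers unfamiliar with the bootstrapped confidence intervals the reviewer refers to: on a couple hundred binary agreement outcomes, a percentile bootstrap of the mean necessarily yields a fairly wide interval. The sketch below is a minimal illustration with invented outcome counts, not the paper's evaluation code.

```python
# Minimal percentile-bootstrap sketch (not the paper's code) showing why
# agreement estimates on a small challenge set come with wide intervals.
# The outcome counts below are invented for illustration.
import random

def bootstrap_ci(outcomes, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(outcomes)
    means = sorted(
        sum(rng.choice(outcomes) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# e.g. 45 agreements out of 62 moral-judgment stories (invented numbers)
agreements = [1] * 45 + [0] * 17
lo, hi = bootstrap_ci(agreements)
print(f"mean={sum(agreements) / len(agreements):.3f}  95% CI=[{lo:.3f}, {hi:.3f}]")
```

With only 62 outcomes, the interval spans roughly 0.2 of agreement, which is the scale of variance the reviewer is worried about.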
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * How do you explain the unnatural results in Table 3(a): the aligned and larger Alpaca-7B got lower Agg than GPT3-curie-v1 on Causal Judgement; GPT-4 performed even worse than davinci-003 on Causal Judgement; davinci-002 outperformed the well-aligned davinci-003 on moral judgement.
* Would you release your Judgement dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the ethical considerations in Appendix A. However, the authors should also include more discussions of limitations, like the small dataset and variance of the results, as stated above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback and for appreciating that a carefully curated set of examples and insights from philosophy and cognitive science can help provide a deeper understanding of the implicit biases and tendencies in LLMs. We will make sure to incorporate all the suggestions you have.
> The constructed dataset is too small, and the coverage is limited.
Indeed, our dataset size is on the smaller side compared to typical datasets used to evaluate LLMs. However, each of our stories comes from a published, well-sourced paper that investigates a specific part of human intuition on causal or moral reasoning. Each of these stories was carefully designed and, in the original study, was used to collect responses from 50-100 human participants. This is drastically different from typical machine learning / natural language processing datasets, where data are often crowd-sourced and annotated by only 2 or 3 annotators.
Moreover, our collected set of stories represents a wide range of factors (intuitions) that make up the human moral/causal reasoning process. Our coverage of the spectrum of known biases/intuitions is more than sufficient. We believe our work can inspire the community to design and curate more datasets like ours to evaluate LLM behaviors.
> The authors should provide some (even initial) analysis of the differences introduced in Sec. 4.2.2 though they are attractive.
We designed a quick analysis experiment to investigate whether the preference difference is due to actual preference difference or model hallucinations.
We annotated 80 examples: 10 examples on which each of 4 models (text-curie-001, claude-v1, gpt-3.5-turbo, gpt-4) made mistakes, across 2 tasks (causal/moral). We ask each model why it made its choice on the original story (prompting for an explanation). Examining the explanations, we conduct two analyses: 1) a quantitative analysis of model hallucination; 2) a preliminary qualitative analysis of what tendencies the models show.
|Model|Causal|Moral|
|-|-|-|
|text-curie-001|8/10|3/10|
|claude-v1|2/10|2/10|
|gpt-3.5-turbo|2/10|0/10|
|gpt-4|0/10|0/10|
For the quantitative analysis, we first annotate for hallucination. A model hallucinates when it re-states the core story in a way that is inconsistent with the facts provided in the story. For example, if an action is not performed by character A, and the model thinks it is performed by character A, we count this as a hallucination. Due to time constraints, we could only annotate 80 examples, but we can clearly see that, among the mistakes the models make, smaller models tend to hallucinate more.
For the qualitative analysis, we read the model explanations on examples where no hallucination is found, and we conclude that larger models indeed have a different preference compared to humans in these stories. On moral stories, Claude-v1, GPT-3.5-turbo, and GPT-4 all seem to be bound by pre-entered moral principles and choose the same action regardless of the story circumstances (we speculate it to be the influence of AI constitution) [1], while humans tend to consider a few more factors (i.e., nuances in the story). For example, when faced with intricate choices, language models tend to take a passive stance. This is true even if acting could save a greater number, even when a smaller number would face certain peril regardless of the action. However, when self-sacrifice is in question, the language model is inclined to metaphorically 'sacrifice' itself. Our observation is based on a limited set of examples. We hope to include a more comprehensive analysis for the final version of the paper.
Beyond performing some error analyses by asking models to self-explain, we additionally found the following larger trends across models:
1). Non-monotonicity: the alignment to human biases does not necessarily increase when the model size increases. We speculate that alignment is an area where the inverse scaling law applies [2].
2). Heterogeneity: interestingly, but perhaps not surprisingly, models that used the same “training method” (such as RLHF) and fine-tuned for human preferences do not have the same implicit biases (see the difference between Claude-v1 and GPT3.5-turbo).
3). Egocentric Bias vs. Preference of Others: Humans are ego-centric and often make self-beneficial decisions. However, when asked to judge the behaviors of others, we prefer other people to be altruistic (see [3] for examples of human preferences vs. self-choices in scenarios involving self-driving cars). This difference will make models trained on human preferences differ from human biases. This highlights the importance of a challenge dataset like ours that can measure the implicit bias of the LLMs, in order to accurately determine where models and humans align (is it with human preference, or is it with human intuition/bias).
> Some essential results need more in-depth analysis and explanation. Table 3(a): Why did the aligned and larger Alpaca-7B get lower Agg than GPT3-curie-v1 on Causal Judgement? Why did davinci-002 outperform the well-aligned davinci-003 on moral judgement?
Alpaca-7B is fine-tuned on a dataset to be aligned with RLHF-trained text-davinci-003 model. However, it is actually smaller than GPT3-curie-v1, which has 13B parameters. It is unclear exactly why the differences exist – but as discussed above, we believe alignment with human intuition does not follow the traditional scaling law. We also want to point out that alignment with human preference is not the same as aligning with human intuitions.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
[1] Bai, Yuntao, et al. "Constitutional ai: Harmlessness from ai feedback."
[2] McKenzie, Ian R., et al. "Inverse Scaling: When Bigger Isn't Better."
[3] Kallioinen, Noa, et al. "Moral judgements on the actions of self-driving cars and human drivers in dilemma situations from different perspectives."
---
Rebuttal Comment 1.1:
Title: Thanks for response
Comment: Thanks for the detailed response. I think most of my concerns have been addressed, and I have raised my score to support the acceptance. Please include your additional experiments (more sampled examples would be better) in your final version, which would make this work more convincing. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The motivation of this paper is that people constantly make causal and moral judgments to reason about who did what and why. This paper contributes a dataset of stories compiled from cognitive science papers, with detailed annotation of the factors that contribute to the human judgment. Then, the paper looks at how LLMs make judgments, and checks their alignment with humans.
Strengths: - The paper addresses an important topic to check the causal and moral reasoning and the alignment of LLMs with humans
- The proposed dataset looks solid and well-annotated
- The analysis provides insights to the community to develop safer and more aligned LLMs.
Weaknesses: - The size of the dataset is a bit limited, 144 causal stories and 62 moral stories, making the insights drawn upon them be not extensive enough
- The yes/no binary answer is reasonable, but analyzing LLM behavior via a binary classification task might have a low signal-to-noise ratio. Substantial human annotation would be needed to evaluate the reasoning quality of LLMs, and whether any misaligned or unsafe reasoning was produced apart from the binary answer.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Are there domain experts in moral psychology / philosophy involved in the design process of this paper? How do you make sure the factors in 2a and 2b are comprehensive and can explain for all the judgment decisions? I saw the appendix A.1, but I would like to see one dedicated paragraph for each of Table 2a and 2b, describing the rationale behind each factor and how they correlate with human intuitions in the main text in the next version of the paper.
2. Can the authors let LLMs output their reasoning, and then annotate what types of tendencies LLMs show in their reasoning (maybe on a subset, e.g., 50 samples)?
[I have read the rebuttal, and acknowledge the author's effort into it. I'm supportive of the acceptance of this paper.]
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thoughtful review and the positive feedback on our intention to build a challenge set to evaluate and understand the causal and moral reasoning of LLMs and their alignment with humans. We will make sure to incorporate all the suggestions.
> The size of the dataset is a bit limited
> A binary classification task might have a little signal-noise ratio
Thank you for bringing this up! We agree that these are limitations of our proposed dataset. However, each of our stories comes from a published, well-sourced paper that investigates a specific part of human intuition on causal or moral reasoning. Each of these stories was carefully designed and, in the original study, was used to collect responses from 50-100 human participants. This is drastically different from typical machine learning / natural language processing datasets, where data are often crowd-sourced and annotated by only 2 or 3 annotators.
We do hope to conduct future work to move beyond binary tasks and come up with other alignment comparisons/tests between LLMs and human responses.
> Are there domain experts in moral psychology / philosophy involved in the design process of this paper? How do you make sure the factors in 2a and 2b are comprehensive and can explain for all the judgment decisions? I saw the appendix A.1, but I would like to see one dedicated paragraph for each of Table 2a and 2b, describing the rationale behind each factor and how they correlate with human intuitions in the main text in the next version of the paper.
All the factors mentioned in Table 2 are taken directly from the papers that conducted the original experiments. A graduate student and a professor in moral/causal psychology were involved in carefully discussing and designing these factors. A thorough literature review was conducted to make sure these factors are comprehensive enough. We agree that Table A1 only has a truncated description of each factor. However, we do provide a full paragraph to describe each factor in Appendix Sec A.1 and Sec A.2. Please let us know if the descriptions are still unclear and if further clarifications would be helpful.
> Can the authors let LLMs to output its reasoning, and then annotate what type of tendencies LLMs show in its reasoning (maybe doing it on a subset, e.g., 50 samples)?
Thank you for the suggestions. We annotated 80 examples: 10 examples on which each of 4 models (text-curie-001, claude-v1, gpt-3.5-turbo, gpt-4) made mistakes, across 2 tasks (causal/moral). We ask each model why it made its choice on the original story (prompting for an explanation). Examining the explanations, we conduct two analyses: 1) a quantitative analysis of model hallucination; 2) a preliminary qualitative analysis of what tendencies the models show.
| Model | Causal | Moral |
|----------------|--------|-------|
| text-curie-001 | 8/10 | 3/10 |
| claude-v1 | 2/10 | 2/10 |
| gpt-3.5-turbo | 2/10 | 0/10 |
| gpt-4 | 0/10 | 0/10 |
For the quantitative analysis, we first annotate for hallucination. A model hallucinates when it re-states the core story in a way that is inconsistent with the facts provided in the story. For example, if an action is not performed by character A, and the model thinks it is performed by character A, we count this as a hallucination. Due to time constraints, we could only annotate 80 examples, but we can clearly see that, among the mistakes the models make, smaller models tend to hallucinate more.
For the qualitative analysis, we read the model explanations on examples where no hallucination is found, and we conclude that larger models indeed have a different preference compared to humans in these stories. On moral stories, Claude-v1, GPT-3.5-turbo, and GPT-4 all seem to be bound by pre-entered moral principles and choose the same action regardless of the story circumstances (we speculate it to be the influence of AI constitution) [1], while humans tend to consider a few more factors (i.e., nuances in the story). For example, when faced with intricate choices, language models tend to take a passive stance. This is true even if acting could save a greater number, even when a smaller number would face certain peril regardless of the action. However, when self-sacrifice is in question, the language model is inclined to metaphorically 'sacrifice' itself. Our observation is based on a limited set of examples. We hope to include a more comprehensive analysis for the final version of the paper.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
[1] Bai, Yuntao, et al. "Constitutional ai: Harmlessness from ai feedback." arXiv preprint arXiv:2212.08073 (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the rebuttal and additional evaluation. I'd support acceptance for this paper. | null | null | null | null | null | null |
Generating Images with Multimodal Language Models | Accept (poster) | Summary: The paper introduces a new method called GILL that effectively integrates frozen LLMs with pre-trained image encoder and decoder models to create coherent image and text outputs. The authors show GILL's superior performance over baseline generation models in tasks involving longer and more complex text and its capability for image retrieval and generation at inference. GILL extends the capabilities of pre-trained LLMs to multimodal by mapping the embedding spaces of the LLMs to visual models with a mapping network. The proposed framework is inspiring and interesting. It may have a large impact on unified multimodal pre-training.
Strengths: * The proposed approach is innovative and inspiring, combining frozen LLMs with visual models to generate coherent image and text outputs. This approach is efficient, as it doesn't require training the image generation model from scratch.
* The paper is well-written, logically structured, and accessible.
* Experimental results show that GILL is effective in processing long-form text and generating images that are more closely matched to the text than baselines. It can process arbitrarily interleaved image-text inputs and generate retrieved images, novel images, and text, which expands its capabilities compared to previous models.
* The paper’s approach to aligning visual tokens with LLMs is promising, and the proposed solutions are insightful.
Weaknesses: * The paper lacks clear implementation details. Despite Eq. 5 explaining the overall objective to optimize, the parameters to be updated are not clearly stated in the equations. This makes it unclear which parts of the model require training and which parts remain fixed.
* While the GILL framework is novel, the GILLMapper idea is not new. Aligning a pre-trained/new encoder with the CLIPText encoder of the Stable Diffusion model has been previously explored [1,2], and this related literature is not discussed in the paper. [1] discussed altering the language encoder to extend the multilingual capacities of Stable Diffusion. [2] plugs in multimodal encoders to replace the CLIPText of Stable Diffusion. Eq. 3 of the GILL paper is similar to Eq. 3 of [2], even though GILL applies visual tokens here.
[1] AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities. arXiv preprint arXiv:2211.06679 (2022).
[2] GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation. arXiv preprint arXiv:2303.10056 (2023)
* The paper lacks comparison with state-of-the-art multimodal methods such as BLIP.
* The paper doesn’t discuss other relevant works, such as one [3] that unifies both text-to-image and image-to-text in one framework.
[3] CoBIT: A Contrastive Bi-directional Image-Text Generation Model." arXiv preprint arXiv:2303.13455 (2023).
* The paper lacks discussion of the inference speed, which is an especially essential factor for diffusion-based image generation task.
* The paper does not address the possibility of the OPT model not perfectly aligning with CLIPText, which could create cross-model gaps that are challenging to bridge. It's unclear how the authors would tackle such challenges and whether these difficult-to-address domain gaps would cause a decrease in performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weakness above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: See weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. We are glad that the reviewer found our proposed approach innovative and inspiring, and recognized that it is efficient. We are pleased that the reviewer found the paper well-written, and appreciated GILL’s improved capabilities over baseline models.
## 1. Implementation Details
> Despite Eq. 5 explaining the overall objective to optimize, the parameters to be updated are not clearly stated in the equations
We describe the trainable parameters in L178-180, which are the linear layers $\\mathbf{W}\_{\\text{i2t}}$, $\\mathbf{W}\_{\\text{t2i}}$, $\\mathbf{W}\_{\\text{cap}}$, the IMG embedding matrix $\\mathbf{E}\_{\\text{img}}$, and the GILLMapper parameters $\omega$ and query vectors $q\_{1:L}$ (while everything else, including the LLM backbone, the visual encoder, and the SD generator, remain frozen), which is also illustrated in Fig. 2. We will also update Eq. 5 to read:
$$\\min\_{\\mathbf{W}\_{\\text{i2t}}, \\mathbf{W}\_{\\text{t2i}}, \\mathbf{W}\_{\\text{cap}}, \\mathbf{E}\_{\\text{img}}, \\omega, q\_{1:L}}
\\frac{1}{N} \\sum\_{i=1}^N \\big(l\_c(\\mathbf{x}\_i, \\mathbf{y}\_i) + l\_p(\\mathbf{y}\_i) + l\_g(\\mathbf{y}\_i) + l\_{r}(\\mathbf{x}\_i, \\mathbf{y}\_i) \\big) $$
which hopefully clarifies. We will also release **pretrained model weights and the full code** for training a model from scratch, to account for any other remaining code and low-level implementation details that are difficult to explain within the space limitations of the paper.
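To make the structure of the updated Eq. 5 concrete, here is a minimal Python sketch (our own illustration, not the authors' implementation; the function and key names are hypothetical) of how the objective averages the sum of the four per-example loss terms over a batch of size N:

```python
def eq5_objective(per_example_losses):
    # per_example_losses: list of dicts holding the four loss terms
    # from Eq. 5 per example: captioning l_c, [IMG]-production l_p,
    # generation l_g, and retrieval l_r (notation from the rebuttal).
    n = len(per_example_losses)
    total = sum(l["l_c"] + l["l_p"] + l["l_g"] + l["l_r"]
                for l in per_example_losses)
    return total / n

# Toy values: two examples whose four loss terms sum to 4.0 and 8.0.
losses = [
    {"l_c": 1.0, "l_p": 1.0, "l_g": 1.0, "l_r": 1.0},
    {"l_c": 2.0, "l_p": 2.0, "l_g": 2.0, "l_r": 2.0},
]
print(eq5_objective(losses))  # → 6.0
```

In the actual model, this scalar would be minimized only over the adapter parameters listed above, with the LLM, visual encoder, and SD generator frozen.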
## 2. Discussion of prior work
> While the GILL framework is novel, the GILLMapper idea is not new …
> The paper lacks comparison with state-of-the-art multimodal methods such as BLIP.
Thanks for the pointers, and we are glad that the reviewer recognized the novelty of the overall GILL framework. We will update the paper to include these references in our discussion on related work with respect to the GILLMapper module.
In this paper, we primarily focus on the ability of GILL to generate images conditioned on interleaved image-text inputs, since most existing multimodal models such as BLIP cannot do this. For these reasons, we primarily compared against Stable Diffusion to test their image generation abilities. We also compared retrieval performance against FROMAGe, which is one of the few prior approaches which can process text + image inputs to generate text + image outputs (although FROMAGe retrieves rather than generates). We showed that GILL is better than SD at generating images with longer contexts (Table 1, 2) and as good as FROMAGe in retrieving images (Table 5). We will also include results on VQAv2 and MS-COCO captioning (see response to reviewer QYwk), which show that GILL is competitive with models of similar sizes and amount of training on image-to-text generation.
> The paper doesn’t discuss other relevant works, such as one [3] that unifies both text-to-image and image-to-text in one framework.
CoBIT is concurrent work according to NeurIPS guidelines (< 2 months of the deadline), but we will update the paper to briefly discuss it. To the best of our understanding, CoBIT is capable of generating text *or* image, but does not generate *interleaved* text and image outputs. It does not include methods to determine when to synthesize text vs. images. Our work is capable of generating text and images (by representing images as `[IMG]` tokens). This allows us to generate multimodal dialogue-like outputs consisting of both images and text.
GILL is a proof of concept for multimodal models that can process image + text and produce image + text, trained in an efficient manner (4 GPU days). Scaling it up to the data/compute scale of BLIP-2 (144 GPU days) and CoBIT (6144 TPU days) is a promising direction for future work.
## 3. Inference Speed
Generating text has the same throughput as a regular LM of the same size (i.e., that of OPT 6.7B). The main increase in inference time occurs when the model produces `[IMG]` tokens. For a batch size of 1, if the model decides to retrieve images, the additional inference overhead is minimal (< 0.001s on average) as image retrieval is fast (requiring a single matrix multiplication followed by a max operation). If the model predicts to generate an image, it takes 3.5s per image on average on a single A6000 GPU, which is the time for SD to generate a single image.
Overall, GILL’s inference speed is bottlenecked by the frequency of image generation, which is dependent on the application domain. In the case of generating dialogue-like text, we observed that images are usually generated or retrieved once or twice in a natural conversation. Amortized over a long conversation, we believe it does not lead to a significant increase compared to a text-only LLM, though exact numbers would depend on the application.
## 4. Alignment
Our experiments on VIST and VisDial (out of domain w.r.t. the CC3M finetuning data) suggest that our current approach does well at aligning the `[IMG]` representations with the CLIP text encoder. With respect to domain shift, our approach does not simply overfit to CC3M: our model surpasses SD on VIST and VisDial. With respect to the modality gap, both SD and our approach use the same image generation backbone, and the only difference is in the text encoder. If the learnt mapping was poor, we would not be able to outperform SD on these benchmarks. These results suggest that the current approach of training the GILLMapper module aligns it well to the CLIP text space.
However, we agree that it is possible that on more difficult or more out of distribution tasks, the `[IMG]` representations may not be mapped well. A possible method to alleviate this would be to include the SD generator into the pipeline, and train the whole model end-to-end with the image generation loss rather than the L2 loss on the text encoding. This would be useful to explore in future work, but would likely require significantly more GPU memory.
---
Rebuttal Comment 1.1:
Title: After Rebuttal
Comment: Thanks to the detailed explanation from authors. Most of my concerns are addressed and I'd like to raise the score to WA.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that we've addressed most of the concerns, and we thank the reviewer again for the feedback! Please let us know if there are any further clarifications we can provide. | Summary: The paper proposes a novel method to stitch together LLMs and text conditional image diffusion models, to process interleaved image-text and output interleaved image-text. The paper evaluates their methodological contributions on several tasks where outputting images is required.
Strengths: S1. The ability to output generated image or text OR retrieve images to satisfy a user’s query is not something I have seen in prior work and speaks to the generality of the formulation.
S2. The paper explores how to stitch together large pre-trained models (i.e., LLMs and image diffusion models). It represents a creative exploration of stitching these models together using a combination of learnable modules, captioning losses, contrastive losses, and distillation losses, which is non-trivial exploration.
S3. Generative applications of the model appear impressive in the demo figures.
S4. Ablations related to image generation and retrieval are generally strong, thoughtful, and cover many different key design decisions.
S5. Fine-tuning cost is cheap and reasonably accessible (~96 A6000 GPU hours).
Weaknesses: W1. Missing citation. The following paper conducts probes showing that pre-trained image and text representations can be aligned: [1] Ilharco et al. Probing Contextual Language Models for Common Ground with Visual Representations. 2021.
W2. Missing citations. Since one potential use-case of the model is image captioning (i.e., the model can handle image input and textual output), I also suggest citing recent captioning work: [2] Yu et al. CoCa: Contrastive Captioners are Image-Text Foundation Models. 2022.; [3] Li et al. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. 2022.; [4] Li et al. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. 2023; [5] Dai et al. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. 2023.
W3. Missing citation, unsupported claim. Recently, unified models have been proposed that take image and text inputs and generate image and text outputs for a variety of tasks, e.g., [6] Lu et al. Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks. 2022. Hence, lines 4-6 in the abstract seem a bit overstated: “Ours is the first approach capable of conditioning on arbitrarily interleaved image and text inputs to generate coherent image (and text) outputs.”
W4. Presentation. Consider adding percentage point improvements over baselines in the abstract and intro to give the reader an idea early on of what kinds of gains can be expected by using your model over competitors.
W5. Presentation. Consider giving some more intuition about why you are making certain design decisions, instead of just saying what you are doing. For example, I found the motivation for your introduced IMG1, IMG2, …, IMGr tokens to be a bit confusing. It seems here, the supervision encourages learning when to produce IMG tokens, but not what content should be captured in the representation space, which would be different depending on the image. Why should this design decision lead to learning good last-layer hidden representations for arbitrary images?
W6. Clarity. The Image retrieval section seems incomplete. An objective is provided, but it is not clear to me how the model is ultimately used to retrieve images at evaluation time.
W7. Clarity. The work proposes to use a learned mapping module (GILLMapper) to go from text hidden states (of an LLM) to vision model embedding space (of a text-conditioned image diffusion model). However, it is not clear this module is needed. An alternative strategy might be to feed generated LLM text directly to the text-conditioned diffusion model, thereby bypassing the need to map representations.
W8. Missing evaluation, unsupported claims. The paper claims that the model can generate images or text, or retrieve images. However, the experiments test only the abilities related to outputting images. How does the model perform on tasks that require outputting text (e.g., COCO CIDEr and VQAv2 accuracy)? I consider such evals to be critical to verify claims that the model can generate sensible text outputs based on image inputs. Such evaluations also allow for comparison with Flamingo-like models.
W9. Generality of the method. Deciding whether to retrieve or generate images depends on a linear classifier that is dataset-specific. This means that a practitioner wanting to use this model out-of-the-box on data of their own may have to train their own classifier (annotate data, etc.) to use the model.
W10. Clarity. How is CC3M turned into an interleaved training dataset for model training? Are there any data sampling strategies that are important here? How many images and captions are sampled? Are non-interleaved sequences also trained on (i.e., image or text only)?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Here are my main questions distilled from the weaknesses above:
Q1. Can some of the clarity related questions be addressed, specifically W5, W6, W7, W10?
Q2. How does your model perform on COCO captioning and VQAv2 (W8)? These also seem like useful evaluations to make sure that your model interprets the content in images.
Q3. Can authors address the concerns related to the generality of the method for image generation vs. retrieval (W9)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations or failure analysis is presented in the main paper. I suggest discussing conditions under which the model fails.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments. We are glad that the reviewer recognized the creativity and generality of our approach in combining pretrained LLMs and visual models, and appreciated the impressive qualitative results, strong ablations, and accessible finetuning cost. We address specific queries below, and will incorporate all feedback.
## 1. Clarity Related Queries
### W5
Our approach includes losses to supervise both *when* to produce `[IMG]` tokens and *what* representations the `[IMG]` tokens should carry (detailed in L120-140 of the main paper), the latter through minimizing the L2 loss against CLIP text embeddings. The representations depend on context, so different text inputs produce different `[IMG]` token hidden states (due to the LM attending to different things). This allows us to produce the appropriate representations for different text inputs. We will update Sec 3.2 and the caption of Fig. 2 to make this clear.
### W6
During inference, we follow standard procedure [7] and retrieve the image with the highest cosine similarity (between image embeddings and `[IMG]` token embeddings) from a dataset. For VIST (Tab. 5), this is from its val set (consistent with [8]). For other qualitative results, we retrieve from CC3M.
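As a minimal sketch of this inference-time retrieval step (our own illustration with hypothetical names, not the released code), the candidate with the highest cosine similarity to the `[IMG]` token embedding is returned:

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(img_token_emb, candidate_embs):
    # Return the index of the candidate image embedding with the
    # highest cosine similarity to the [IMG] token embedding.
    sims = [cosine_sim(img_token_emb, c) for c in candidate_embs]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy example: the second candidate points in the same direction as
# the query, so it is retrieved regardless of its magnitude.
query = [1.0, 0.0]
candidates = [[0.0, 1.0], [2.0, 0.0], [-1.0, 0.0]]
print(retrieve(query, candidates))  # → 1
```

With normalized embeddings this reduces to the single matrix multiplication followed by a max that the rebuttal describes.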
### W7
One benefit of our approach is that it treats image outputs as continuous embeddings to be directly fed into an image generator. This allows us to leverage the capabilities of LLMs, such as in-context learning and the ability to handle longer inputs than can be handled by the SD text encoder (i.e., 77 tokens) for generating images. Continuous representations bypass the text bottleneck, and can be optimized completely end-to-end. Feeding generated text would require RL-like approaches, which may not be as straightforward. Please also see our response to reviewer bnn7, which shows that a text-only approach (using generated text from much larger GPT-3.5/4 models) underperforms GILL on VIST, suggesting that using generated text may be insufficient for more complicated tasks.
### W10
We follow [8] in packing two random examples during training with probability 0.5. This means that 50% of the time, input data is `<img1><txt1><img2><txt2>`, while the other 50% of the time, it consists of single examples `<img1><txt1>`. Although the examples are distinct (`<img1>` is in general unrelated to `<img2>`), we find this helps encourage the model to attend to the correct image, rather than always attending to the first image.
> Are non-interleaved sequences also trained on (i.e., image or text only)?
We do not train on image or text-only sequences, though the frozen LLM was originally pretrained on text-only data.
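To make the packing scheme above concrete, here is a minimal sketch (our own illustration under the stated 0.5 probability, not the authors' data loader; function names are hypothetical):

```python
import random

def pack_examples(examples, pack_prob=0.5, rng=None):
    # With probability pack_prob, concatenate two distinct (image, text)
    # examples into one <img1><txt1><img2><txt2> sequence; otherwise
    # emit a single <img1><txt1> example.
    rng = rng or random.Random()
    packed, i = [], 0
    while i < len(examples):
        if i + 1 < len(examples) and rng.random() < pack_prob:
            packed.append(examples[i] + examples[i + 1])
            i += 2
        else:
            packed.append(examples[i])
            i += 1
    return packed

examples = [("<img1>", "<txt1>"), ("<img2>", "<txt2>"), ("<img3>", "<txt3>")]
# With pack_prob=1.0, the first two examples are always packed together,
# leaving the third as a single example.
seqs = pack_examples(examples, pack_prob=1.0, rng=random.Random(0))
```

Even though the two packed examples are unrelated, this forces the model to attend to the correct image in a multi-image sequence.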
## 2. VQAv2 and COCO
We ran evaluations on VQAv2 and MS-COCO captioning, and found that GILL is comparable to models trained with similar compute and data. On VQAv2, we achieve a zero-shot val accuracy of 31.78, which is slightly better than prior approaches of similar model sizes and compute: FROMAGe achieves zero-shot accuracy of 28.51, Frozen achieves 25.53, and MAGMA achieves 28.35. On MS-COCO, GILL achieves a BLEU@4 of 0.1059 and METEOR of 0.2529, which is comparable to FROMAGe (BLEU@4 of 0.1023 and METEOR of 0.2873). GILL is also capable of a wider set of tasks (e.g., generating interleaved image and text outputs) compared to these models.
We note that these scores are lower than SOTA models, as they are usually much larger and trained with significantly more compute and data (e.g., Flamingo uses 23,040 TPU days, BLIP-2 uses 144 GPU days, while ours takes 4 GPU days). Scaling up GILL to similar data and parameter scales to further push its capabilities is an exciting avenue for future work.
## 3. Generality of the Decision Method (W9)
Our evaluations were conducted on PartiPrompts (P2), and we find that the decision classifier trained on P2 does reasonably well on a held out subset (Tab. 1 in the appendix).
> This means that a practitioner … may have to train their own classifier
We agree that this is a fair point, and whether it would generalize would likely depend heavily on the actual application’s data distribution, and the amount of data available. P2 is best suited to generalizing to domains that require separating factual from non-factual captions, which we believe is a key consideration for many downstream tasks. However, there is likely room for improvement in future work, e.g., by taking into account the quality of the generated image before making a decision.
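For intuition, the generate-vs-retrieve decision described here is just a linear rule over an embedding; a minimal sketch (our own illustration, with hypothetical names and threshold, not the trained classifier):

```python
def decide(img_emb, weights, bias):
    # Hypothetical linear decision rule: a positive score means
    # "generate" a novel image, otherwise "retrieve" one. In the paper
    # the classifier is trained on PartiPrompts (P2); the weights,
    # bias, and threshold here are illustrative assumptions.
    score = sum(w * x for w, x in zip(weights, img_emb)) + bias
    return "generate" if score > 0 else "retrieve"

print(decide([1.0, -0.5], [2.0, 1.0], 0.0))  # → generate
```

A practitioner adapting the model to a new domain would retrain only this small rule, not the full model.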
## 4. Missing Citations
### W1 and W2
We will update the intro to include [1]. We will also update the related work section to discuss image captioning approaches as suggested.
### W3
> Recently, unified models take image and text inputs and generate image and text outputs for a variety of tasks. e.g., [6] …
Thanks for the pointer! To the best of our knowledge, ours is still the first approach capable of generating coherent images and text. Unified-IO generates images *or* text, but without modifications, does not seem capable of producing outputs that are interleaved image and text, or decide when to generate text and when to generate images.
In contrast, GILL generates text interleaved with images, and automatically determines when to produce images instead of text. CM3 and FROMAGe are capable of doing so, but CM3 frequently generates incoherent or low quality images (see appendix of [8]). We are able to do better as we leverage a strong pretrained diffusion model. FROMAGe retrieves images but does not generate novel images.
## 5. Limitations / Failure Analysis
We included discussions on limitations, failure modes, potential future directions to alleviate these, and the broader impact of our model in Appendices A and B.
### References
[7] Radford, Alec, et al. "Learning Transferable Visual Models From Natural Language Supervision." ICML, 2021.
[8] Koh, Jing Yu. et al. “Grounding Language Models to Images for Multimodal Inputs and Outputs”. ICML, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their extensive rebuttal. I generally feel more positively about the paper, especially with the newly reported numbers on VQAv2 and CoCo (thanks for running this!). I think it is important that the authors add these results to the main paper. I still think that the domain specific classifier is a fundamental weakness, but do not feel this is sufficient grounds for rejection. I am happy to raise my score to a 7.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their detailed feedback, and are happy to hear that our rebuttal addressed their concerns! We will definitely add these text generation results to the next version of the paper, and add some discussion about the domain specific classifier.
Please let us know if there are any further clarifications we can provide. | Summary: The authors train adapters to map embeddings of pre-trained image encoders and decoders to pre-trained LLMs. This allows them to input interleaved images with text into a pre-trained LLM and also make the LLM generate [IMG] tokens as required, which can be fed into a decoder to generate images or can be used to retrieve from a set of images based on cosine similarity.
Strengths: - Clean idea that introduces a fundamental novelty in the capability of text-to-image models - the ability to interleave image and text to generate images.
- Adapting image to text space to use with pre-trained LLM's is neat since we don't need to train yet another large model. This idea has been shown to work well in contemporary models like LLaVA and miniGPT-4, but they show it works well for generation as well - a capability these other contemporary models lack.
- The results look very promising. They show longer context helps, further justifying the need of such models that can handle long chains of interleaved image and text.
- Such a line of work can have multiple interesting follow-up works on evaluating compositionality, etc.
Weaknesses: - One of the strengths of interleaving image and text is that one can expect to extract different concepts from different images to compose a new image. For instance: "a cup that looks like <image_of_come_cup>, but on a table that looks like <image_of_some_table>" Such an analysis could have been great similar to the goal of this paper: https://arxiv.org/pdf/2212.04488.pdf.
- Some other recent methods aim to solve similar goals as shown in the qualitative results. For instance, for the top row of Figure 5, papers like Prompt-to-Prompt and Imagic aim to do this. Although those methods cannot handle arbitrary interleaved images and text, a qualitative comparison on the applications that both methods can handle would make the paper very strong.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Is it necessary to train for both generation and retrieval? How does it work if we just train for generation and not include the retrieval loss? The ablation in the supplemental Table 1 is just during inference, correct?
During training, the authors say they use CC 3M. This only has one image and text pair. Do they use any data that has multiple images and text? Are results on Visual Stories without training on it?
What are the [IMG_r] tokens learning exactly, and how does varying `r` (the number of image tokens) affect results? I am guessing each of the tokens is learning some image concepts that appear frequently among the images in the data? An analysis of this would have been great!
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Authors discuss limitations in the supplemental.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and valuable comments. We are glad that the reviewer appreciated the promising results from our paper, the possibility for interesting follow-up work, and recognized that it is a clean idea which introduces fundamental capabilities (image generation) to multimodal language models. We address specific queries below, and will incorporate all feedback.
## 1. Training a generation-only model
We ran the suggested experiment, removing the retrieval objective from the training losses. On the VIST evaluation (5 captions, 4 images), this ablated model achieves CLIP similarity of 0.636 and LPIPS of 0.694, which are comparable to that of the original model (CLIP similarity of 0.641 and LPIPS of 0.693). This suggests that it is not necessary to include the retrieval loss, although such a model would only be able to generate images and text and not retrieve images. These results also suggest that the model is not bottlenecked by including the retrieval objective, and that there is sufficient capacity for the model to perform both generation and retrieval.
## 2. Training data
Our model is trained on just the CC3M dataset (which contains only single image-text pairs). We follow [1] in randomly packing two distinct examples together during training with probability 0.5. This means that 50% of the time, examples consist of `<image1><text1><image2><text2>`, while the other 50% of the time, they consist of single examples `<image1><text1>`. Although the two examples are distinct (`<image2>` is in general unrelated to `<image1>`), we find this also helps encourage the model to attend to the correct image in the sequence, rather than simply always attending to the first image in the input sequence.
> Are results on Visual Stories without training on it?
This is correct. Despite not using explicitly interleaved data with multiple related images (such as Flamingo [2] or CM3 [3]), GILL is capable of processing Visual Stories zero-shot (which consists of 5 captions, 4 images, more than even the 2 images + 2 captions we randomly introduce through packing). We attribute this to the ability of the pretrained frozen LLM to generalize to multiple images, as we learn to map the images to embeddings in the LM space.
## 3. Analysis on the $r$ image tokens
In the main paper, we included ablations and discussion on how varying the number of tokens $r$ affects generation results (Table 4, and L275-277 on page 9). We find that lower values of $r$ (e.g., $r=1$ or $r=2$) tend to result in worse results on the VIST task, as the model is less expressive, which motivates our design choice of using $r=8$ tokens for improved generated image quality and image-text match (as measured on VIST).
> I am guessing each of the tokens are learning some image concepts present frequently among the images in data
We largely agree with this intuition: the $r$ image tokens are used as inputs to GILLMapper (trained by minimizing the L2 loss of its outputs against the CLIP text embeddings), whose outputs are used by the Stable Diffusion image generator. The “concepts” learnt by the image tokens in GILL would likely correspond to the “concepts” used by SD to produce image tokens. Further and more comprehensive analysis would likely be necessary to verify this (e.g., learning linear probes, or finding a way to visualize the embeddings of the individual tokens without significantly altering the SD pipeline), which we believe might be out of the scope of this paper, but would be very interesting to study in future work.
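As a pointer to what "trained by minimizing the L2 loss against the CLIP text embeddings" amounts to, here is a minimal sketch (our own illustration with assumed list-of-vectors shapes, not the actual GILLMapper training code):

```python
def l2_alignment_loss(mapper_outputs, clip_text_embs):
    # mapper_outputs: GILLMapper outputs, one embedding vector per
    # example; clip_text_embs: target CLIP text-encoder embeddings.
    # Returns the mean squared L2 distance over the batch.
    assert len(mapper_outputs) == len(clip_text_embs)
    total = 0.0
    for pred, target in zip(mapper_outputs, clip_text_embs):
        total += sum((p - t) ** 2 for p, t in zip(pred, target))
    return total / len(mapper_outputs)

# Toy example: one pair of 2-d embeddings at squared distance 2.
print(l2_alignment_loss([[0.0, 0.0]], [[1.0, 1.0]]))  # → 2.0
```

Gradients of this loss flow back through the mapper and the $r$ token embeddings, which is how the tokens come to encode the "concepts" Stable Diffusion expects.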
## 4. Extracting concepts from images
> One of the strengths of interleaving image and text is that one can expect to extract different concepts from different images to compose a new image. For instance: "a cup that looks like <image_of_come_cup>, but on a table that looks like <image_of_some_table>" Such an analysis could have been great similar to the goal of this paper: https://arxiv.org/pdf/2212.04488.pdf.
We agree that this is a very exciting direction for follow up work! One of the limitations of GILL (discussed further in Appendix A) is that it has somewhat limited visual processing capabilities. Hence, we don’t expect that it will do as well as the referenced paper or prompt-to-prompt on tasks that involve finegrained image editing.
In GILL, input images are represented as $k = 4$ visual vectors. Although this improves compute efficiency (and allows us to train the model on just 2 A6000 GPUs), it results in the loss of some finegrained visual information. Scaling the model up to adopt pretraining objectives that encode more explicit information (e.g., longer sequences of ViT patches [4]) would be promising directions for future work.
### References
[1] Koh, J.Y., Salakhutdinov, R. and Fried, D. “Grounding Language Models to Images for Multimodal Inputs and Outputs”. ICML, 2023.
[2] Alayrac, Jean-Baptiste, et al. "Flamingo: a visual language model for few-shot learning." NeurIPS, 2022.
[3] Aghajanyan, Armen, et al. "CM3: A causal masked multimodal model of the internet." arXiv preprint arXiv:2201.07520, 2022.
[4] Liu, Haotian, et al. "Visual instruction tuning." arXiv preprint arXiv:2304.08485 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Thanks for running the experiment without the retrieval loss to show that it is not bottlenecked by it! I believe the paper's results are very promising and adapting multimodal LLMs without too many resources is an exciting area. Hence, I maintain my score as a strong accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful comments, and we are glad you liked the paper! We will also include the retrieval loss ablation results in the appendix of the next version of the paper. Please let us know if there are any further clarifications we can provide. | Summary: This paper proposes to use an LLM to do image generation. Their approach consists of two stages of training.
In the first stage, they learn a linear layer to make the ViT visual feature space compatible with the LLM space.
In the second stage, they learn $r$ new tokens representing the image. The hidden states of these 'image tokens' are used for retrieval or image generation. For image generation, they train a GILLMapper to map these hidden states into the CLIP text space (the input to the UNet of SD), so that the mapped features can be used directly as input to SD. They also found that a simple design for GILLMapper, such as a linear layer, does not work well; thus they train an encoder-decoder model (like DETR).
Strengths: The writing is clear and I like their idea and motivation in general.
Weaknesses: 1. I feel it may not be appropriate to claim their method can 'generate images'. The generation part still comes from SD; what they are doing is using a multimodal language model to combine image and text information into the CLIP text space. This is reflected in their training: neither stage 1 nor stage 2 actually involves image generation.
2. For Table 1, I think the current 1-caption case does not make sense, and maybe they can just skip it, as it is not using any previous information at all. For the 5-caption case, I don't know exactly how they feed them in (maybe just stack them?). I think a better way is to ask an LLM such as GPT-4 to combine all captions and generate a better prompt.
3. For the 5-captions-and-4-images case in Table 1, SD can also do this. A simple approach is to use an image captioning model such as BLIP-2 or LLaVA to convert the images into captions, and prompt GPT-4 to generate the final caption given the 5 captions and 4 images (in caption form).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: na
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In my opinion, their comparison is not complete, as a simple extension (see Weaknesses) may potentially improve SD performance.
I will raise my score if they conduct this experiment. If they do, I hope they carefully prompt an LLM and evaluate with different captioning models.
------------------------------------------------------------------------------------------------------------
The authors address my major concerns in their rebuttal, and I recommend that they add these stronger baselines in their final version.
W8 from reviewer QYwk seems a valid concern which I did not notice before. However, I am not very familiar with evaluation on text, thus I cannot confidently evaluate their rebuttal on that question.
Overall, I lean towards accept, but will not be surprised if the paper is rejected due to other weaknesses raised by other reviewers
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments. We are pleased that the reviewer found the writing clear, and that they liked the idea and motivation of the paper. We address specific queries below, and will incorporate all feedback.
## 1. Whether GILL Can ‘Generate Images’
We consider GILL to be the combination of the visual encoder, LLM, and visual decoder (i.e., the SD generator), not just the LLM. One of our contributions is that this combined model can process interleaved image + text inputs, and can generate text, generate images, retrieve images, and interleave images and text in its outputs. Thus, we think that it is reasonable to claim that GILL can generate images, as this is one of its capabilities. If the reviewer has suggestions for specific lines in the paper that they believe could be re-written to be more accurate, we are also happy to consider rephrasing them.
## 2. 1 Caption Setting
For the VIST evaluation, the single caption corresponds to the last image. Since the goal is to generate the last image, it is possible to produce a reasonable image given just this one caption. This lets us compare the model given only the single caption for the last image against the model given the full sequence of captions for all images, and it verifies that having access to the full context helps. For these reasons, we think that this is still a useful baseline to include.
## 3. 5 Captions + GPT Baseline
The results in Tables 1 and 2 concatenate the 5 captions for both SD and our model. We also ran the experiment suggested by the reviewer, where we provided GPT-3.5 (turbo) with the 5 story captions and asked it to generate a prompt. We find that our GILL approach outperforms this baseline; see the results below. (Below, in 4., we also find that our approach outperforms using GPT-4 as the LLM.) We prompt the LLM as follows:
```
There are five images and five story parts that make up a full visual story. The five story parts are provided. Generate a caption that will be used as input to a text-to-image generation model to generate the last image in the sequence. The caption should relate to the full story sequence, but also describe the content that should be in the last image in the sequence.
Story Part for Image 1: "my sister arrived early to help me with the family bar bq ."
**<parts 2 to 4 omitted for brevity>**
Story Part for Image 5: "we ended the day shooting off some fireworks ."
Final image description for text-to-image generation:
```
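For concreteness, the prompt above can be assembled programmatically; the sketch below is our own illustration (the helper name and the placeholder story parts are hypothetical, only the instruction text is copied from the prompt above):

```python
# Hypothetical sketch of the prompt assembly described above; only the
# instruction text is from the rebuttal, the helper name is ours.
INSTRUCTION = (
    "There are five images and five story parts that make up a full visual story. "
    "The five story parts are provided. Generate a caption that will be used as "
    "input to a text-to-image generation model to generate the last image in the "
    "sequence. The caption should relate to the full story sequence, but also "
    "describe the content that should be in the last image in the sequence."
)

def build_prompt(story_parts):
    # one "Story Part for Image i" line per caption, then the completion cue
    lines = [f'Story Part for Image {i}: "{part}"'
             for i, part in enumerate(story_parts, start=1)]
    return "\n".join(
        [INSTRUCTION, *lines, "Final image description for text-to-image generation:"]
    )

prompt = build_prompt([
    "my sister arrived early to help me with the family bar bq .",
    "<story part 2>", "<story part 3>", "<story part 4>",
    "we ended the day shooting off some fireworks .",
])
```

The same template extends to the captioning baseline in 4. below by interleaving "Image Description" lines.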
We find that this generally produces reasonable captions. For the above example, the completion by GPT-3.5 (turbo) is “a group of people gathered around a bonfire, watching fireworks light up the night sky”. We use this completion to generate an image with SD, and run the same VIST evaluation on it. The results are as follows:
| Model | CLIP Similarity ($\uparrow$) | LPIPS ($\downarrow$) |
| ----------- | ----------- | ----------- |
| Ours (5 captions + 4 images) | **0.641** | **0.693** |
| Ours (5 captions) | 0.612 | 0.696 |
| SD (5 captions, concatenated) | 0.598 | 0.704 |
| GPT-3.5 + SD (5 captions) | 0.605 | 0.707 |
We see that while this improves over simply concatenating the captions, it still underperforms our method even when image information is not provided (despite GPT-3.5 using a much larger LLM). For reasons of cost, we do not evaluate GPT-4 in this setting, but do so in 4. below.
## 4. Captioning Model + GPT + SD extension
As suggested, we ran this baseline. We used various captioning models to generate captions for the 4 images and, similar to the above, prompted GPT-4 / GPT-3.5 as:
```
There are five images and five story parts that make up a full visual story. The descriptions of the first four images and the five story parts are provided. Generate a caption that will be used as input to a text-to-image generation model to generate the last image in the sequence. The caption should relate to the full story sequence, but also describe the content that should be in the last image in the sequence.
Story Part for Image 1: "my sister arrived early to help me with the family bar bq ."
Image Description 1: "a woman sitting in the driver's seat of a car"
**<parts 2 to 3 omitted for brevity>**
Story Part for Image 4: "there was so much food and it was all delicious ."
Image Description 4: "a person is holding a knife and a burger"
Story Part for Image 5: "we ended the day shooting off some fireworks ."
Final image description for text-to-image generation:
```
where the “Image Descriptions” are the outputs of an image captioning model. The results of this experiment on VIST are as follows (all models use 5 captions + 4 images):
| Model | CLIP Similarity ($\uparrow$) | LPIPS ($\downarrow$) |
| ----------- | ----------- | ----------- |
| Ours | **0.641** | **0.693** |
| BLIP-2 (OPT-6.7B) + GPT-3.5 + SD | 0.620 | 0.705 |
| BLIP-2 (T5-XL) + GPT-3.5 + SD | 0.622 | 0.705 |
| LLaVA (LLaMA-2-13B) + GPT-3.5 + SD | 0.620 | 0.704 |
| BLIP-2 (T5-XL) + GPT-4 + SD | 0.630 | 0.700 |
We find that this baseline improves results over the SD baseline (which does not use image captioning models), but the results are still worse compared to our model. This is despite our model being significantly smaller and finetuned with less data.
Aside from these quantitative results, there are other benefits to using an approach such as ours. GILL generates latent features that are provided to the SD image generator, which bypasses the text bottleneck that models such as the GPT baseline above introduce. With our approach, there is no requirement to autoregressively generate text for the generator.
We hope that this clarifies the reviewer’s queries on the suggested SD + GPT baselines. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. We are glad that all reviewers appreciated the creativity and novelty of our method, and found it innovative and inspiring (reviewer Gk8n), appreciated its creative exploration (reviewer QYwk), ideas and motivation (reviewer bnn7), and recognized that it introduces a fundamental novelty in the capabilities of multimodal language models that other contemporary models lack (reviewer 5sAf).
We are pleased that the reviewers found the paper well-written (reviewers bnn7, Gk8n), and the results and applications impressive and very promising (reviewers Gk8n, 5sAf). We are glad that the reviewers found our proposed modeling approaches insightful (reviewer Gk8n), and recognized its potential for multiple interesting follow-up works (reviewer 5sAf).
We address specific queries and run the suggested baselines in responses to individual reviewers below. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Identification of Nonlinear Latent Hierarchical Models | Accept (poster) | Summary: The paper introduces a class of latent DAG models which allow for hierarchical structure between latent variables. The class of models allows for more general structure than previously considered classes (e.g., tree-structured latent models). The model also allows for general nonlinear relationships between variables. The authors prove that, under additional conditions on the Jacobians of the causal mechanisms, the latent variables in these models are identifiable up to a component-wise invertible transformation. They conclude with experiments showing that the latent variables are recovered to an acceptably high level of accuracy.
Strengths: **Significance**: The allowance for non-linear relationships and the generality of the latent structure are substantive improvements over previous works.
Weaknesses: ## Related work
The authors mostly do a solid job reviewing related works, with an additional section in the appendix dedicated to such a review. However, they conspicuously leave out a strongly related line of recent work which identifies latent causal graphs from interventional data, e.g. [1,2,3]. I think the comparison with this line of works is important context: there is a tradeoff between structural assumptions and the assumption that interventional data is available. The approach to identifiability which is used in this paper is not the only viable approach, and might be a worse fit for some applications than the intervention-based approach.
[1] Ahuja, K., Wang, Y., Mahajan, D., & Bengio, Y. (2022). Interventional Causal Representation Learning.
[2] Squires, C., Seigal, A., Bhate, S. S., & Uhler, C. (2023). Linear Causal Disentanglement via Interventions.
[3] Varici, B., Acarturk, E., Shanmugam, K., Kumar, A., & Tajer, A. (2023). Score-based Causal Representation Learning with Interventions.
## Clarity
The most significant weakness of this paper is in terms of clarity; in my opinion, the paper needs substantial re-writing efforts to improve clarity. Here are some of the points which need to be clarified, from most important to least:
1. **Deterministic functions vs. conditional probabilities:** Is each variable a deterministic function of its parent, or is there randomness? It seems from Assumption 3.1(ii) and Assumption 3.3(i), that the variables need to be deterministically related. However, deterministic relations introduce problems for faithfulness, which is assumed in Definition 2.1(iii). Thus, it isn't clear if the assumptions for the theorems are even mutually consistent, which is a **major** problem for this work.
2. **Algorithm 1:** Several new ideas are introduced all at once in the description of Algorithm 1. The paper would benefit from a more gradual development of these ideas. For example, Stage 3 of the algorithm, which discusses detecting and merging super-variables, is necessary due to issues which had not been introduced. For any of the steps which are essential to include in the main paper, there should be an accompanying example that demonstrates the necessity of the step. The authors should reflect on which steps are essential for intuitively understanding the algorithm, and should consider moving other steps to the appendix.
3. **Other:** On line 215, the authors say that Theorem 3.4 can be applied to models with arbitrary latent structure, but this seems to contradict the fact that the Theorem requires Condition 2.3; what are the authors trying to say here?
## Experimental Results
Section 4 has a couple of major weaknesses.
First, the description of the experiments is not sufficiently detailed (unless I have missed some descriptions, in which case the paper should be re-organized so that the relevant details are easier to find). For example, when performing hierarchical model identification via Algorithm 1, there are several steps where one must test whether one variable can "perfectly predict" another variable. How is this converted into a test when running the algorithm in a finite-sample regime? As another example, how many instances (random seeds) are the synthetic results averaged over? This detail should be in the figure caption.
Second, I don't find that the experiments are comprehensive enough. The experiments should, at the least, evaluate the performance of the method across a range of sample sizes. This is important for demonstrating that the algorithm is consistent: despite the identifiability guarantees, the algorithm may fail to be consistent since it relies on perfect optimization of an autoencoder.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please provide detailed responses to the points raised in "Weaknesses". My initial opinion is that the paper is not ready for publication due to the combination of (1) the re-writing required for acceptable clarity and (2) the insufficiency of the experimental results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We respond to your concerns as follows and will include your suggestions as indicated.
**Q1: Related work on interventional-based causal identification.**
Thank you for the suggestion! We agree that there are two possible ways to learn causal representation: utilizing specific graph conditions as in our paper or assuming and leveraging interventional data for which you provide valuable references. We will include the following in our related work.
``We note that our work focuses on identifying latent causal models from observational data and structure conditions. Another important line of work [A,B,C,D,E,F,G] utilizes interventional data for this purpose. Specifically, these works leverage multiple data distributions generated by one causal model under distinct interventions, which are accessible in applications like biological experimentation. The accessibility of interventional data can allow for relaxed structure conditions. Hence, one should consider this tradeoff when faced with causal identification problems.’’
**Q2: Deterministic functions vs. conditional probabilities.**
Thank you for the question. We are wondering if there is a misunderstanding of the structure causal model (SCM) and our assumption, and let us now provide clarification.
The SCMs we use in Equation 1 are standard ones with exogenous noise, i.e., the mappings from parents $\text{Pa}(x_{i})$ to the children $x_{i}$ are not deterministic due to exogenous variables $\epsilon_{x_{i}}$, although $g_{x_{i}}$ itself is a deterministic mapping from $ (\text{Pa}(x_{i}), \epsilon_{x_{i}}) $ to $ x_{i} $.
Similarly, $g$ in Assumption 3.1 (ii) is the mapping from all the input variables $(z, s_{1}, s_{2})$ to $(v_{1}, v_{2})$, i.e., $(v_{1}, v_{2}) := g( z, s_{1}, s_{2} ) $, where exogenous variables $\epsilon$ can be considered contained in $s_1$ and $s_2$. Thus, the mapping from $z$ to $(v_{1}, v_{2})$ is random, although $g$ itself is a deterministic function.
Regarding Assumption 3.3 (i), consider a simple causal model $ x := g(z, \epsilon) $ where $g$ is an invertible function and $x$ is the child of $z$. Although we can express $z$ as a function of $x$, i.e., the first $d_{z}$ dimensions of the inverse function $g^{-1}(x)$ as in Assumption 3.3 (i), the causal module $p(x | z)$ is non-degenerate due to the randomness in $\epsilon$. The suggested work [A] assumes a similar generating process, where the latent variables can be recovered from the observed variables.
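This distinction can be made concrete with a small numerical check (our own illustrative construction with a 2-dimensional $x$ and Gaussian noise, not the model from the paper): $z$ is recoverable as a deterministic function of $x$, yet $p(x \mid z)$ remains non-degenerate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)      # latent variable
eps = rng.normal(size=n)    # exogenous noise
# invertible generating function g: (z, eps) -> x = (x1, x2)
x1, x2 = z, z + eps

# z equals the first d_z dimensions of g^{-1}(x), so it is a
# deterministic function of the observed x ...
z_rec = x1

# ... yet p(x | z) is non-degenerate: x2 still varies when z is (nearly) fixed
near_zero = np.abs(z) < 0.05
spread = x2[near_zero].std()   # roughly the std of eps
```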
**Q3: Algorithm 1 development.**
Thank you for the constructive suggestion. We will add more illustrative examples to demonstrate each step's necessity as you suggested. For instance, we will append the following sentence to "...the concatenation $(z_{2}, z_{3})$" in line 248.
"Leaving this super-variable untouched will be problematic, as we would generate a false causal structure $\tilde{z} \to z_{4}$ where $\tilde{z}$ is the estimated super-variable $(z_{2}, z_{3})$, rather than recognizing $z_{4}$ is the child of two identified variables $z_{2}$ and $z_{3}$."
**Q4: Line 215 contradicts Condition 2.3.**
We note that the remark in line 215 is made on measurement models, as specified in this line and the paragraph title in line 213.
For measurement models (which means that each latent variable has enough measured children, see [D]), our theory can handle arbitrary structures among latent variables as long as each latent variable has enough pure children, which are used to identify latent variables with our approach.
**Q5: Experiment details.**
Thank you for the detailed feedback. We evaluate the $R^{2}$ score with kernel regression (line 314) over 8192 samples within each estimated variable pair. As shown in Table 2 and Figure 6b, the matched pairs give significantly higher scores than the unmatched ones. For instance, in the first row of Table 2, the estimated latent variable with $x_{1}$ as $v_{1}$ and that with $x_{2}$ as $v_{1}$ both correspond to $z_{2}$ in Figure 4b and thus achieve $0.85$, which is significantly higher than the other scores (around $0.55$) in that row. This gap allows us to set a threshold for each experiment to distinguish the matched estimated variables (e.g., a threshold of $0.6$ suffices for the experiment in Table 2).
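To illustrate this matching procedure, the following sketch (our own construction; the transform, noise level, and bandwidth are hypothetical) computes leave-one-out Nadaraya-Watson kernel-regression $R^{2}$ scores for a matched pair (an invertible transform of the same latent) and an unmatched pair:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
z = rng.normal(size=n)                           # ground-truth latent
z_hat = np.tanh(z) + 0.05 * rng.normal(size=n)   # matched estimate: invertible transform + small noise
w = rng.normal(size=n)                           # unrelated variable (unmatched pair)

def kernel_r2(x, y, bandwidth=0.3):
    """R^2 of a leave-one-out Nadaraya-Watson kernel regression of y on x."""
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    np.fill_diagonal(k, 0.0)                     # leave-one-out: exclude self
    y_pred = k @ y / k.sum(axis=1)
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

r2_matched = kernel_r2(z, z_hat)    # high: z_hat is an (invertible) function of z
r2_unmatched = kernel_r2(w, z_hat)  # near zero: w carries no information about z_hat
```

On this synthetic data, a threshold such as $0.6$ cleanly separates the two cases, mirroring the gap described above.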
We will include detailed steps in the revised version.
We repeated each experiment for at least 3 random seeds, as written in line 315 of the experiment setup. We will further include this in the caption to make it more visible.
**Q6: Experiments not comprehensive enough.**
Thank you for the suggestion. We have extended our experiments to make the results more informative. For instance, as you suggested, we vary the sample size for the basis model experiments where $d_{z}=d_{s_{1}}=d_{s_{2}}=2$. The plot is included in the PDF file. Our algorithm can achieve decent identification scores when the sample number is larger than 5000. The performance peaks with little variance when the sample size is over 10000, which suggests that the approach enjoys good asymptotic performance.
**References:**
[A] Interventional Causal Representation Learning. Ahuja et al.
[B] Linear Causal Disentanglement via Interventions. Squires et al.
[C] Score-based Causal Representation Learning with Interventions. Varici et al.
[D] Learning nonparametric latent causal graphs with unknown interventions. Jiang and Aragam.
[E] Causal Component Analysis. Liang et al.
[F] Learning linear causal representations from interventions under general nonlinear mixing. Buchholz et al.
[G] Identifiability Guarantees for Causal Disentanglement from Soft Interventions. Zhang et al.
[H] Causality, 2nd edition. Pearl.
Please let us know if you have further comments – thank you so much!
---
Rebuttal Comment 1.1:
Comment: ### General response
I appreciate the author's receptiveness to my suggestions from the rebuttal. I think that their proposed changes will increase the clarity of the paper and largely alleviate my concerns.
I have increased my score by one point, from 3 to 4. I have one remaining concern (**Q2** below); if this can be appropriately addressed, then I would be confident enough to raise my score to a 5.
### Point-by-point response
**Q1:** Thank you, I really like the suggested addition to the text and how it acknowledges the tradeoffs.
**Q2:** I agree that it is possible for $p(x|z)$ to be non-degenerate, while still letting $z$ be a deterministic function of $x$, e.g. if $p(x|z)$ has a different support for each different value of $z$. However, there still seems to be an issue regarding the compatibility of (1) having deterministic relations and (2) faithfulness.
Take for example $A \to B \to C$ with:
- $A$ is $+1$ w.p. $1/2$ and $-1$ w.p. $1/2$,
- $B = A + A \cdot \varepsilon_b$ with $\varepsilon_b \sim Unif([0, 1])$, and
- $C = sign(B) \cdot \varepsilon_c$ with $\varepsilon_c \sim Unif([0,1])$.
Then $p(B \mid A)$ and $p(C \mid B)$ are both non-degenerate, and we have deterministically that $A = sign(B)$ and $A = sign(C)$. We also have that $C \perp A \mid B$, violating faithfulness.
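A quick simulation (ours, purely illustrative) confirms the behavior of this example: it exhibits a conditional independence, $B$ and $C$ independent given $A$, that is not entailed by the graph $A \to B \to C$, even though $B$ and $C$ are marginally dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.choice([-1.0, 1.0], size=n)
B = A + A * rng.uniform(0.0, 1.0, size=n)       # B = A * (1 + eps_b), so sign(B) = A
C = np.sign(B) * rng.uniform(0.0, 1.0, size=n)  # C = sign(B) * eps_c = A * eps_c

marginal = np.corrcoef(B, C)[0, 1]              # clearly nonzero: both driven by A
conditional = np.corrcoef(B[A == 1.0], C[A == 1.0])[0, 1]  # ~0: eps_b never reaches C
```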
In particular, the incompatibility of deterministic relationships and faithfulness has been an important point of study in the foundations of causality, see e.g. [1]. It is **possible** that this incompatibility is not a problem in the existing setup, but I think this needs to be carefully argued. Notably, [2] does **not** use the faithfulness assumption over the full set of latent and observed variables, so the comparison is not relevant.
**Q3:** Thank you in advance for adding clarifications on each step, I think this will help a lot with clarity.
**Q4:** Sorry for my confusion, in my experience I'm not sure that the terminology of a "measurement model" is used with 100% consistency across the literature. Could you please add a definition for how you use the term "measurement model" so that the statement on line 215 can be interpreted precisely?
**Q5:** Thank you for including the details of the $R^2$ thresholding, which will be a helpful addition to the paper. By the way, I think many more than 3 random seeds should be used, unless there are major computational issues (which should then be discussed). 30 to 100 random seeds would ensure better statistical significance of the results.
**Q6:** Thank you very much for exploring the results across a range of sample sizes! The resulting plot gives me much stronger confidence in the method.
[1] *Faithfulness, Coordination and Causal Coincidences.* Weinberger (2018).\
[2] *Interventional Causal Representation Learning.* Ahuja et al. (2023)
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful evaluation of our responses and constructive comments. We will include the updates/results as indicated to improve our work, thanks to your insightful questions. We address the remaining concerns as follows.
**Q2: Deterministic relations.**
Thank you for this interesting example and the valuable reference on this topic. (By the way, perhaps there is a typo – $ B \perp C | A $ rather than $ C \perp A | B $.)
Clearly, this example is different from our case. Specifically, it does not satisfy Assumption 3.3 (i). The incompatibility in the example originates from the fact that the newly introduced randomness in $B$, i.e., $ \epsilon_{b} $, is entirely missing in the observed variable $C$. Concretely, $C$ takes on the value $ \text{sign}(A) \cdot \epsilon_{c} $ (due to $ \text{sign}(A) = \text{sign}(B) $), regardless of the realization of $ \epsilon_{b} $. Consequently, we have that $B$ and $C$ are independent given $A$, which follows from the independence of $ \epsilon_{b} $ and $ \epsilon_{c} $. However, Assumption 3.3 (i) in our work entails that the information of exogenous variables should be preserved (i.e., invertible) in the observed downstream variables (i.e., $C$ here), as opposed to this instance where the information of $\epsilon_{b}$ is lost in the observed variable $C$.
Our condition is a deterministic relation between exogenous variables (including root cause variables) and the observed variables, which, in general, doesn’t lead to the violation of faithfulness.
Let’s use a simple example to illustrate this.
If Assumption 3.3 (i) holds for the $ A \to B \to C$ model where $B:=g_{B}(A, \epsilon_{B})$ and $ C:= g_{C} (B, \epsilon_{C}) $ and $C$ is the observed variable, then $C$ can be written as an *invertible* function of $ (A, \epsilon_{b}, \epsilon_{c}) $ (i.e., it preserves the randomness of $A$, $\epsilon_{b}$, and $\epsilon_{c}$) and $B$ can be written as an *invertible* function of $ (A, \epsilon_{b})$ (i.e., it preserves the randomness from $A$ and $\epsilon_{b}$) – it follows that $ B $ and $ C $ are *dependent* given $A$, due to the shared information from $ \epsilon_{b} $.
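To contrast with the earlier counterexample, here is a small simulation (our own hypothetical construction, with a 2-dimensional child $C$ chosen so that it is invertible in $(B, \epsilon_{c})$): when the randomness of $\epsilon_{b}$ is preserved in the observed child, $B$ and $C$ remain dependent given $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
A = rng.choice([-1.0, 1.0], size=n)
eps_b = rng.uniform(0.0, 1.0, size=n)
eps_c = rng.uniform(0.0, 1.0, size=n)
B = A * (1.0 + eps_b)          # invertible in (A, eps_b): A = sign(B), eps_b = |B| - 1
C = np.stack([B, B + eps_c])   # 2-dim observed child, invertible in (B, eps_c)

# the root cause A is still a deterministic function of the observed C ...
A_rec = np.sign(C[0])
# ... yet B and C stay dependent given A, because eps_b survives in C
conditional = np.corrcoef(B[A == 1.0], C[1][A == 1.0])[0, 1]
```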
We hope we are on the same page – please kindly let us know what you think. We will include the reference you provide and a discussion with the above example in our revision to make it more transparent to the reader. Thank you for the insightful question.
**Q4: Measurement models.**
Thank you for the helpful suggestion. We will give the definition of the specific type of measurement models we use in a footnote as follows. “We refer to [a] for a general measurement model definition. Here, we are considering a popular type of measurement models that has been widely used in the literature (see Definition 1 in [b]) in which observed variables do not cause any other variables.”
**Q5: More random seeds.**
We completely agree with you and appreciate that you raised this point. We will push repetitions over 30 in the revision, as you suggested. We will update you on this as soon as some results are available during the discussion phase.
We really appreciate your meticulousness in your examination of our work and believe this is indispensable to the advancement of fundamental research. Please kindly let us know if we have addressed your concerns – many thanks!
**References:**
[a] Learning the Structure of Linear Latent Variable Models. Silva et al.
[b] Generalized Independent Noise Condition for Estimating Latent Variable Causal Graphs. Xie et al. | Summary: The goal of this paper is to identify the hierarchical graph structure and latent variables for general nonlinear latent hierarchical causal models. The paper reduce the problem to identification of the so-called basis model and proves the connection between latent hierarchical model and basis model.
Strengths: The theoretical results seems solid and is backed by experiment on both synthetic and real world datasets.
Weaknesses: My main concerns with the paper are that it relies on strong assumptions but the identifiability results are relatively weak:
1. Faithfulness assumption is a rather strong assumption.
2. Learning latent variables up to invertible transformations is quite weak. A stronger identifiability result in the literature usually learns latent variables up to affine transformation or permutation. But I am not positive if that is possible with this problem setting.
3. The pure child assumption is also a strong assumption.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In the definition of assumption 3.2 ii, is T just any matrix with the right support?
2. Assumption 3.3 i, are the functions supposed to be invertible to match assumption 3.1?
3. In general, it’s hard to figure out how necessary are the conditions in Assumption 3.3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are addressed in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your careful assessment and valuable feedback! Below, we respond to your concerns raised in Weaknesses and Questions.
**Q1: Strengths of the assumptions.**
Thank you for the feedback! We admit that our approach relies on a number of assumptions; this is because our approach aims to learn representations from a single distribution. This problem is inherently challenging, and without such generally sensible assumptions it is generally impossible to achieve nontrivial results.
As one such essential assumption, faithfulness is adopted in all causal discovery literature (unless further assumptions are introduced) to eliminate cases where the data distribution does not faithfully describe the causal graph. It is a reasonable assumption for many real-world applications -- faithfulness attributes the statistical independence to the graph structure rather than unlikely coincidence, as articulated in Section 3.5.2 in [30] and [C].
Similarly, the pure child assumption eliminates certain unidentifiable cases where two latent variables share the same set of children. For instance, $z_{1} \to X$ and $z_{2} \to X$, then $z_{1}$ and $z_{2}$ cannot be identified without further assumptions. The pure children assumption has been widely adopted in causal discovery literature [4,9,15,25,35], often more stringent than ours (e.g., tree structures).
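The unidentifiability in this shared-children example can be seen numerically. In the linear-Gaussian sketch below (our own illustration, not from the paper), two different pairs of latent variables induce exactly the same observed distribution for a shared child $X = z_{1} + z_{2}$, so no method can distinguish them from data on $X$ alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# model 1: z1, z2 ~ N(0, 1) independent, shared child X = z1 + z2
x_a = rng.normal(0.0, 1.0, size=n) + rng.normal(0.0, 1.0, size=n)

# model 2: different latent variances (1.5 and 0.5), same child X
x_b = rng.normal(0.0, np.sqrt(1.5), size=n) + rng.normal(0.0, np.sqrt(0.5), size=n)

# both induce X ~ N(0, 2); since both sums are zero-mean Gaussians, matching
# variances means matching distributions, so X alone cannot identify (z1, z2)
var_a, var_b = x_a.var(), x_b.var()
```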
**Q2: Strength of the identifiability.**
Regarding identifiability, we are concerned about representation vectors corresponding to each individual variable. As far as we know, one has to assume multiple distributions or other additional assumptions to obtain stronger identifiability, whereas we only assume a single distribution. From this perspective, our assumption is weaker since the result is applicable even with a single distribution.
**Q3: Definition of $T$.**
Great question! $T$ is a fixed matrix sharing the support of the matrix-valued function $T(c)$ and satisfying Assumption 3.1 (ii). This assumption is also adopted in [B]. We will make this point explicit in the revision.
**Q4: Assumption 3.3 (i) and invertibility.**
Good question! Assumption 3.3 (i) requires that each latent variable $z$ and exogenous variable $\epsilon$ can be expressed as functions of all observed variables $X$, which is essentially an invertibility assumption on the generating process from $(z, \epsilon)$ to $X$.
**References:**
[A] Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA. Lachapelle et al.
[B] On the Identifiability of Nonlinear ICA: Sparsity and Beyond. Zheng et al.
[C] Replacing Causal Faithfulness with Algorithmic Independence of Conditionals. Lemeire and Janzing.
Please let us know if you have any further concerns, and please consider raising the score if existing concerns are addressed -- thank you very much!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response!
Overall, my concerns about the assumption still stand.
I understand what faithfulness, pure child, and assumption 3.3 (i) are and why they are made for the theory to work. Still, it does not convince me why they are needed for your work. On the other hand, there are also causal representation learning papers that do not rely on the faithfulness assumption [a], although [a] does use a linearity assumption.
Because of the limitations of assumptions and the lack of justification for necessity, I am currently keeping my score.
[a] Seigal, Anna, Chandler Squires, and Caroline Uhler. "Linear causal disentanglement via interventions." arXiv preprint arXiv:2211.16467 (2022).
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response and the valuable reference. We completely agree that generally speaking, when interventions or multiple distributions are available, the original faithfulness assumption is usually not needed (some much weaker assumptions might suffice, thanks to the availability of multiple distributions), as shown in the paper you mention. On the other hand, we humbly believe that to learn the underlying latent variables with identifiability guarantees from independent and identically distributed (i.i.d.) data, stronger assumptions would be needed. We discuss the roles of Assumption 3.3 (i), faithfulness, and the pure child assumption individually as follows.
**Assumption 3.3 (i)**:
As you correctly pointed out, Assumption 3.3 (i) is a form of invertibility of the mapping from the latent variables and exogenous variables to the observed variables in the hierarchical model, which we inherit from Assumption 3.1 (i) for the basis model.
As far as we know, the existing literature on causal variable identification for nonlinear models [18,19,21,24,36,37] universally assumes the invertibility of such a mapping, including [a]. Without this assumption, we cannot guarantee the identification of latent variables from observed variables in the nonlinear case (perhaps one day this would not be the case anymore, if overcomplete nonlinear ICA theories were properly developed). In fact, one contribution of our work is to relax the invertibility of the basis model to eliminate duplication of the shared variable $z$, as discussed in line 151. In case you have seen any developments with a more general invertibility assumption in the nonlinear case, please kindly let us know.
**The faithfulness assumption**:
Thank you for directing us to this question. In fact, our theorems will still hold even if we relax the faithfulness assumption (Definition 2.1 (iii)) to the structural minimality condition (Definition 6.3.3 in [b]):
> A distribution satisfies causal minimality with respect to $G$ if it is Markovian with respect to $G$, but not to any proper subgraph of $G$.
This relaxation is possible because the only place where we infer the graph structure from conditional independence relations is in the proof of Theorem 3.4. There we show that the estimated variable $\hat{z}$ from the basis model with $ v_{1} = \\{ x_{i} \\}$ and $v_{2} = X \\setminus \\{x_{i} \\} $ contains at least the parent variables of $x_{i}$, i.e., $ \text{Pa}( x_{i} ) $, due to the conditional independence $ v_{1} \perp v_{2} | z $ in the basis model. This reasoning step goes through under the structural minimality condition, which requires that the parents of the variable $x_{i}$ be conditioned on to make $x_{i}$ and its siblings independent.
We are grateful for your question and will update Definition 2.1 (iii) to be the structural minimality given above. We note that structural minimality is generally considered necessary for causal structure identification – see the discussion following Proposition 6.36 in [b]. (To us, faithfulness is more well-known in the community, so we adopted it in the submission for the simplicity of the presentation.) | Summary: This paper addresses the problem of identifying latent variables and causal structures from observational data in the context of nonlinear latent hierarchical causal models. Such models are common in real-world applications involving biological, medical, and unstructured data such as images and languages. The paper presents novel identifiability guarantees and an estimation procedure for both the causal structure and latent variables in these models, under mild assumptions on causal structures and structural functions.
Compared to the previous methods, the main contributions are
1. The authors propose the basis model with theoretical guarantees (Theorem 3.2) as a foundation for constructing identifiability for general nonlinear hierarchical causal models.
2. They show structure identification guarantees for general latent hierarchical models admitting continuous multi-dimensional variables, general nonlinear structural functions, and general graph structures, which go beyond the limitations of previous works that assume linear functions or discrete variables.
3. An estimation method is presented that can asymptotically identify the causal structure and latent variables for nonlinear latent hierarchical models. The authors validate their method on synthetic and real-world datasets.
Strengths: ## Originality
The paper presents several original contributions in the context of causal discovery for nonlinear latent hierarchical models. The authors develop a novel identifiability theory (Theorem 3.2) for the basis model, which serves as a fundamental criterion for locating and identifying latent variables in general nonlinear hierarchical causal models. This theory relaxes some of the limitations of previous works. Moreover, the paper provides an estimation procedure for general hierarchical structures as well as for latent variables up to an invertible mapping.
## Clarity
The paper is logically structured and well-written. I especially appreciated the intuitive explanation behind each assumption and condition, which makes the paper accessible to a general audience. The intuitive explanation of the algorithm is particularly helpful, allowing the reader to capture the main ideas behind the estimation procedure.
## Significance
The paper addresses the causal discovery problem in the context of nonlinear latent hierarchical models.
The proposed identifiability guarantees and estimation procedure relax some of the limitations compared to the previous work, which itself is a great contribution. This paper is likely to have a notable impact on the field of causal discovery and inspire further research in this area.
Weaknesses: This paper is generally well-written, however, I still have some questions:
1. Scalability: The authors admit the computational limitations. I still think a discussion of the computational complexity should be added. It would also be very helpful to report wall-clock comparisons to the other baselines on high-dimensional problems.
2. Joint invertible mapping: One of the basic assumptions is that the latent and observational variables are jointly invertible. You mentioned that if this is not satisfied, identifiability cannot be established; could you point out the relevant references? Also, in the experiments, why do you use an MLP as the function $g$? In general an MLP is not invertible; for example, we can construct an MLP that maps $\mathbb{R}^n$ to a fixed point, in which case you cannot recover the original latent information.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The questions are mentioned in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors included a discussion of the limitations regarding the computational complexity, but I think more statistics, such as wall-clock time, and a more detailed discussion of the complexity should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your encouraging comments and valuable suggestions! We will include your feedback and the discussions in our revision. Below are our responses to individual questions.
**Q1: Computation complexity and wall-clock times.**
Thank you for the wonderful suggestion – this will help future readers better understand the complexity aspect! We will include the following discussion on computational complexity in our work:
"The dominant complexity term in our proposed algorithm is the deep learning training, which is of complexity $O( R \cdot n )$, where $R$ is the number of levels in the hierarchical model and $n$ is the number of observed variables. In contrast, recent works on linear hierarchical models provide algorithms with complexity $O( R \cdot (1+n)^{p+1} )$ [15] and $O(R \cdot n!)$ [35], where $p$ is the largest number of variables that share children. Our algorithm achieves the lowest algorithmic complexity. As we allow for multi-dimensional variables, the dimensionality of each variable also contributes to the overall wall-clock time through the deep learning model dimension, which is a hyperparameter for the experimenter. This cost can grow significantly for high-dimensional datasets like ImageNet."
In the attached PDF file, we report detailed wall-clock times for all experiments. We note that the wall-clock time can vary significantly depending on CPU availability, as we run kernel regression on CPUs for the $R^{2}$ score computation. Most of our experiments were launched on a group server and experienced contending CPU usage to a certain extent.
Our basis model estimation takes roughly the same time as the baseline method in Table 1.
As no existing causal discovery approaches are designed for nonlinear hierarchical models with multi-dimensional variables (to the best of our knowledge), we cannot find a baseline to compare with on this aspect.
**Q2: Joint invertible mapping and MLPs.**
Great questions! We’d like to note that, to the best of our knowledge, the existing latent variable identification literature universally adopts the invertibility (or, more broadly, injectivity) assumption. Please see [18,19,21,24,36,37].
The fundamental issue is as follows. The observed variables are generated as a function of latent variables, i.e., $ x := g(z_{1}, ..., z_{i}, ..., z_{n})$ where $x$ is the observed variable, $z_{i}$ is the latent variable, and $g$ is the function.
Identification results typically entail that we recover/identify each latent variable $z_{i}$ from the observed variable $x$, i.e., $z_{i} = f_{ z_{i} } (x)$ via a learned function $ f_{z_{i}} $.
To express each $z_{i}$ as a function of $x$, it is necessary that $x$ preserves all original information of $z_{1}$, ..., $z_{n}$, which is equivalent to the mapping $g$ being invertible (injective).
Regarding MLPs, we fully agree that general MLPs can be non-invertible.
In our experiments, we follow the implementation of prior work [18,19,21] to adopt leaky-ReLU and dimension-preserving MLP layer weights with large condition numbers to facilitate invertibility. We will expand on this in our paper to make it clearer -- thank you so much for the suggestion!
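As an illustrative sketch of the construction described above (not the authors' actual implementation; the dimensions, slope, and weight construction below are hypothetical), a dimension-preserving MLP with leaky-ReLU activations and well-conditioned square weight matrices can be inverted analytically layer by layer:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.2  # leaky-ReLU negative slope (hypothetical value)

def leaky_relu(x):
    # strictly monotone elementwise activation, hence invertible
    return np.where(x > 0, x, a * x)

def leaky_relu_inv(y):
    return np.where(y > 0, y, y / a)

d = 3
# Dimension-preserving square weights; mixing random matrices with the
# identity keeps them well-conditioned and numerically invertible.
W1 = 0.5 * rng.standard_normal((d, d)) + np.eye(d)
W2 = 0.5 * rng.standard_normal((d, d)) + np.eye(d)

def g(z):
    # two-layer leaky-ReLU MLP: z -> x
    return W2 @ leaky_relu(W1 @ z)

def g_inv(x):
    # invert each layer in reverse order
    return np.linalg.solve(W1, leaky_relu_inv(np.linalg.solve(W2, x)))

z = rng.standard_normal(d)
assert np.allclose(g_inv(g(z)), z)  # round-trip recovers the latent exactly
```

This is only a sketch of why such an architecture preserves all latent information; a general MLP with non-square weights or saturating activations would not admit such an inverse.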
Please let us know if you have further concerns – thank you!
---
Rebuttal 2:
Comment: Thanks for the authors' reply. It addresses my concerns. I will keep my original score.
---
Rebuttal Comment 2.1:
Comment: Thank you so much for the time, effort, and acknowledgment of our work! | Summary: This paper presents an identification strategy that allows the unique reconstruction of a latent hierarchical model, including both the graphical structure and (up to invertible transformations) the values of the latent variables. Assumptions include faithfulness, some structural assumptions (weak compared to related work), and other more technical assumptions. The algorithm is positively evaluated on various datasets.
Strengths: * Identifying latent structure (and even the values of latent variables themselves) is very powerful and could be used in many applications.
* There are only weak assumptions on the graphical structure.
* The experimental results look very good.
Weaknesses: * Differentiability is assumed, but not mentioned explicitly (another assumption refers to Jacobians). The strength of a continuity/differentiability assumption is compounded by the assumption of invertibility.
* I find it hard to assess how strong the subspace span condition is, in part because it is not written up clearly; see questions below.
* Footnote 1 / appendix C.2 address the case where some non-leaf variables are observed. For this case, I think the structural assumptions become quite strong, so I do not consider this a strong part of the paper's contribution.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * line 97-99, "a latent variable with no pure children can be merged into its children without affecting the overall generating process": this merging appears to not be possible for the middle z in $x \leftarrow z \rightarrow x \leftarrow z \rightarrow x \leftarrow z \rightarrow x$. Could you explain what you mean exactly?
* Assumption 3.1(ii) and text before it: The definition of T depends on c, but for what value? Further, how is T quantified over in assumption 3.1(ii)?
* If $g$ is linear, the subspace span condition (ii) will not hold (except in 1-dimensional cases, I suppose). Can this be addressed by something like "restructure latent variables by merging redundant components" (line 156)?
* Related: in line 200-202, the identity function doesn't satisfy assumption 3.1(ii) if $d_z > 1$
* Assumption 3.3(ii): Is the independence in 1) automatically satisfied if $A_z$ and $X$ have nonempty intersection? (Please make this explicit or rewrite to avoid this case.) I suppose condition 2) refers to the induced subgraph of $A_z$, as a set of nodes does not, strictly speaking, contain any paths. Is there a minimum length to these paths? A 0-length path always exists, and if you mean a 1-length path, you could simply ask if there is an arrow between two nodes in $A_z$.
* Assumption 3.3(iii): The "function between each latent variable $z$ and each of its children $z'$ (I assume separately for each $z'$?) also depends on other parents of $z'$. Should this assumption hold for some/most/all of the values of those parents?
* Algorithm line 17: remove variables independent of what? In the example, the only role of this line appears to be to clear A if it contains just the root. What else should this line do?
Things to clarify:
* latent variable identification: I'd recommend mentioning early on that this is up to transformations. Also, when this is first mentioned on the top of page 3, the objective is impossible to obtain without the assumption 3.3(i), which appears quite a bit later.
* Condition 2.3 (ii): I recommend to give a definition for "siblings" here
* In the algorithm, how do you detect equivalence of two constructed latent variables? (Mutual perfect predictability I suppose?)
* Algorithm line 9: There needs to be a "$\setminus \{\hat{z}\}$" after $P$.
* Line 352-353: the part after "whereas" doesn't match the results
Textual comments:
* line 47 "variable variables"
* line 171: "descents" -> "descendants"
* line 211-212: "$v_2$ always contains" -> "there always exists a $v_2$ [$\neq v_1$]"
* line 248 & 337: "an variable" -> "a"
* line 266 & 524: "an one-to-one mapping" -> "a"
* line 523: "proof" should instead refer to theorem / assumptions
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper has many assumptions, which are hard to interpret. While some effort is made to discuss most of these (exceptions: see weaknesses), this discussion leaves some questions (see above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your thorough reading and valuable insights! Below, we provide individual responses to your comments.
**Q1: Differentiability is not explicit.**
Thank you for pointing this out. This condition should definitely be given, and we have made this assumption explicit in our revision. We'd like to note that the differentiability assumption and invertibility are adopted in almost all continuous latent-variable identification papers [18,19,21,24,36,37].
**Q2: The observed non-leaf case is not a strong contribution.**
We agree with you – Footnote 1 is only meant to illustrate that our theorem applies naturally to models with observed non-leaf variables beyond our main discussion, where all non-leaf variables are latent.
**Q3: ``Line 97-99,..., the merging seems not possible….’’**
Thank you for raising the point. We meant to describe scenarios where latent variables are unidentifiable for lack of unique footprints. For instance, if two latent variables $z_{1}$ and $z_{2}$ share the same set of children $X$, i.e., $ z_{1} \to X$ and $z_{2} \to X$, then the two latent variables cannot be identified without further assumptions, while pure children would help in this case.
In light of your remark, we will replace the sentence in lines 97-99 “for instance, …” with “in a toy graph $\{z_{1}, z_{2}\} \to X$, two latent variables $z_{1}$ and $z_{2}$ cannot be distinguished without further assumptions.”
**Q4: The definition of $T$.**
Thank you for the question. In lines 135-136, we define $T$ as a fixed matrix that shares the support (defined in lines 130-132) of the matrix-valued function $T(c)$. Therefore, $T$ and $T(c)$ refer to two different objects, and $T$ is not dependent on $c$. This assumption is adopted in prior work [A]. Sorry for the confusion – we will replace the fixed matrix $T$ with $Q$ in the revision.
**Q5: ``If $g$ is linear, the subspace span condition (ii) will not hold….’’**
Great observation! The subspace span assumption (ii) stipulates that the generating function's Jacobian should vary sufficiently within its support and is specialized for nonlinear functions, as discussed in [A,B]. Prior work [A] derives counterpart assumptions for the linear case (see Proposition 1, Assumption ii in [A]). These conditions can be adapted to our setup for the linear case. In addition, Prior work [B] supplies an explanation of the subspace span assumption (ii) and an example of functions that satisfies such an assumption (see Section 2.4.1. in [B]). In light of your question, we will include this discussion and the example in our revision to make our work thorough.
Regarding the identity function, we note that its Jacobian matrix $G$ is an identity matrix. The resultant subspace $R_{ G_{ i , :} }^{d_{c}}$ is only a 1-d subspace (see the subspace definition in line 128) in $R^{d_{c}}$, because row $i$ only has one nonzero element $G_{i,i} = 1$. Consequently, Assumption 3.1 (ii) is met even if $d_{c} > 1$.
**Q6: Assumption 3.3 (ii) and path lengths.**
Great question! When $A_{z} $ and $ X$ have a nonempty intersection, the independence condition 1) does not necessarily hold. Take a toy graph $z \to x_{1}$, $z \to x_{2}$ and set $A_{z} = \\{ x_{1} \\}$. We have $ X \cap A_{z} = \\{ x_{1}\\} \neq \emptyset$ and $ z \not\perp X | A_{z} $ because the path $z \to x_{2}$ is unblocked.
As you correctly identified, we refer to the subgraph induced by variables in $A_{z}$, and the paths refer to paths with nonzero lengths. Thank you for making this distinction, and we will explicitly make this point in our manuscript.
**Q7: Assumption 3.3 (iii).**
Great question! As this assumption is made on the support of the Jacobian matrix (see lines 130-132), it is satisfied as long as the component $z_{j}$ has a nonzero influence (i.e., partial derivative) on its child's component $z'_{i}$ at *some* values of parents of $z’$.
**Q8: Algorithm line 17.**
Algorithm line 17 removes variable $a \in A$ if $a$ is independent of all other variables in $A$, i.e., the complement set $ A \backslash \\{ a \\} $.
For graphs with multiple root variables, some root variables may enter $A$ before others. Line 17 removes these identified root variables from the active set $A$ before the iteration. One instance would be $z_{1} \to \\{ x_{1}, x_{2}, x_{3}\\}$, $z_{2} \to \\{ x_{3}, x_{4}, x_{5}\\}$, $z_{3} \to \\{ x_{6}, x_{7}\\}$, and $ z_{4} \to \\{ z_{2}, z_{3} \\} $. In this case, one root variable $z_{1}$ would enter $A$ before the other root variable $z_{4}$ and would get cleared earlier. We will improve this in the manuscript, thanks to your feedback.
**Q9: Mention “up to transformations” early on and the definition of siblings.**
Wonderful suggestions -- this will definitely help us improve the readability! We will mention "up to invertible transformations" in the introduction and abstract, and we will add a footnote for the objective to indicate that assumptions such as Assumption 3.3 (i) are necessary. We will include the definition of siblings right before introducing Condition 2.3.
**Q10: Equivalence detection.**
You are totally right! We use mutual predictions to detect the equivalence between two estimated variables. In our implementation, this is done by kernel regression (line 314).
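The mutual-prediction check mentioned above can be illustrated with a small sketch (not the paper's actual code; the kernel, bandwidth, regularization strength, and thresholds are all assumptions): fit a kernel ridge regression in each direction and compare held-out $R^2$ scores, treating two estimated variables as equivalent when each predicts the other well.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=1.0):
    # RBF kernel between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_r2(u, v, lam=1e-2):
    """Held-out R^2 of predicting v from u via kernel ridge regression."""
    ntr = len(u) // 2
    utr, ute, vtr, vte = u[:ntr], u[ntr:], v[:ntr], v[ntr:]
    alpha = np.linalg.solve(rbf(utr, utr) + lam * np.eye(ntr), vtr)
    pred = rbf(ute, utr) @ alpha
    ss_res = ((vte - pred) ** 2).sum()
    ss_tot = ((vte - vte.mean(0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

n = 200
z1 = rng.standard_normal((n, 1))
z2 = np.tanh(z1) + 0.3 * z1        # invertible transform: the "same" variable
z3 = rng.standard_normal((n, 1))   # an independent variable

# mutually predictable -> the two estimates are treated as equivalent
assert kernel_r2(z1, z2) > 0.8 and kernel_r2(z2, z1) > 0.8
# not predictable -> distinct variables
assert kernel_r2(z1, z3) < 0.5
```

Since identification is only up to invertible transformations, high $R^2$ in both directions is exactly the right notion of "the same variable" here.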
**Q11: Algorithm line 9.**
Thank you for pointing this out! We will correct it as you suggested.
**Q12: ``Line 352, …, does not match…’’.**
Thank you! We will replace "closely" with "remotely".
**Q13: Typos.**
We are grateful for all the listed typos, and we will correct them in the paper. Thank you for your time and effort!
**References:**
[A] On the Identifiability of Nonlinear ICA: Sparsity and Beyond. Zheng et al.
[B] Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA. Lachapelle et al.
Please let us know if you have further concerns, and please consider raising the score if we have cleared existing concerns – thank you so much!
---
Rebuttal Comment 1.1:
Comment: Thank you for your replies, which have addressed most of my concerns.
Regarding Q4: It might be better to leave the name $T$ as it is. My confusion stems from the fact that $T$ is defined in terms of $T(c)$, but it was not clear to me why $T(c)$'s support should be the same for general $c$, because I didn't recognize this construction.
Knowing that these assumptions have appeared in the prior work mentioned in the rebuttal, somewhat alleviates my concerns about the clarity of the assumptions' exposition in this paper. While I would have preferred to have known these references during the review, I am raising my score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and thoughtfulness. We’ve improved the definition and included the references in our revision to make the construction transparent to the readers.
The revised texts are as follows:
Line 135: “We denote Jacobian matrices of $g$ and $\hat{g}$ as $J_{g}$, and their supports as $G$ and $\hat{G}$, respectively.
Further, we denote as $T$ a fixed matrix that has the same support as the matrix-valued function $J_{g}^{-1} (c) J_{\hat{g}} (\hat{c})$.”
Line 153: “Assumption 3.1 (ii), which is given in prior work [A,B], guarantees that the influence of $z$ changes adequately across its domain.”
We were wondering if all your concerns had been properly addressed – please kindly let us know if you have any further concerns. Many thanks for your time and dedication! | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback and dedicated time! We are encouraged that they find our theoretical contribution significant/substantive (wEjX, gbV3, mbLW, jcJm) and well-supported by experimental results (wEjX, mbLW). We address the individual comments in separate responses and will incorporate the reviewers’ suggestions in our revision.
Pdf: /pdf/0dfad32ca9c888229ca5bd2f868fbae227c2adbd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Active Negative Loss Functions for Learning with Noisy Labels | Accept (poster) | Summary: In this work, the authors propose a new class of theoretically robust passive loss functions named Normalized Negative Loss Functions (NNLFs). By replacing the MAE in APL with the proposed NNLFs, this paper improves APL and proposes a new framework called Active Negative Loss (ANL).
Strengths: 1. The motivation is clearly stated. The writing of this paper is also clear.
2. The theoretical study in this paper is solid.
Weaknesses: 1. More real-world datasets are needed for algorithm verification.
2. This paper contains many typos.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Some mathematical notations should be clearly defined, such as N in Section 2.1.
2. The authors should carefully proofread their paper. Some typos should be corrected, such as "at at least one other class position...". In supplementary material, the equation index is missing in "So, from Eq () we have..." above Eq. 25.
3. This paper only investigates the proposed method on one real-world dataset, namely WebVision, which I think is not adequate. In fact, there are many benchmark datasets with real-world noise, such as Animal-10N, Clothing-1M, etc.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See my comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestion from the reviewer and will incorporate it into the updated version of our paper.
1. Typos and mathematical notations.
We thank the reviewer for pointing out the typos and the unclear mathematical notations. We will check our paper carefully and correct the relevant errors. We apologize for the inconvenience caused by this.
2. More experiments.
Please see the global response.
---
Rebuttal Comment 1.1:
Title: I have read the authors' rebuttal
Comment: Thanks for the authors' rebuttal. I do not have major concerns on this paper, and vote for an acceptance. | Summary: This paper proposes a robust loss function for learning from label noise. The authors first find that negative losses proposed in APL are scaled versions of MAE, which is not training-friendly. To solve this problem, the authors propose the Active Negative Loss (ANL) framework which improves the APL loss by replacing the passive losses with a newly designed set of negative losses. Theoretically, the ANL losses are proved to be noise tolerant and the designed negative loss is shown to focus more on the well-learned classes and samples compared with MAE. Experimentally, the ANL framework outperforms the state-of-the-art robust losses on multiple datasets.
Strengths: - This paper is well-written and easy to read.
- This paper provides new insights into the negative losses proposed in APL.
- The proposed loss framework is provably robust and its gradient is theoretically understood.
- The experiments are extensive and sound.
Weaknesses: - In Section 3.1, the motivation for incorporating the three components in NNLF is lacking. Why do we need the three components in NNLF? And why do the three components make NNLF a good robust loss function?
- The theoretical analysis is less original. Most discussions of the relationship between symmetric loss function and noise tolerance can be found in previous works, e.g. APL. Accordingly, the proofs of Theorems 2, 3, and 4 could be simplified.
- Typos. Lines 196, 201, 220. "theorem" should start with a capital T.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What motivates the three components of NNLF?
- In Eq. (7), is $A$ a constant over all $p(k|x)$? If not, the derivation of gradients in Eq. (15) should consider the dependency of $A$ on $p(k|x)$.
- The proposed ANL framework performs less well on the MNIST dataset (Tables 2, 6, 7). Are there any explanations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestion from the reviewer and will incorporate it into the updated version of our paper.
1. The three components of NNLF.
Our goal is to create a new loss function to replace the MAE in APL. This means that: 1. The loss function must conform to the definition of passive loss functions in APL. 2. The loss function must differ from MAE, which treats all samples equally and thereby leads to training difficulties. 3. The loss must be robust to noisy labels. The three components of NNLF (complementary label learning, “vertical flipping”, and the normalization operation) correspond to these three motivations, respectively. We believe that NNLF is better than MAE because it can make the model pay more attention to clean samples that have already been memorized, as described in our discussion of Theorem 5 and Theorem 6. Theorems 5 and 6 follow from the “vertical flipping” operation, which means that the gradient increases as the prediction probability of non-labeled classes decreases.
2. The theoretical analysis of noise tolerance is less original.
We appreciate your comment on the originality of our discussion on noise tolerance. We kept this part for the sake of completeness of our proofs, but we agree that it can be simplified. We will revise our paper accordingly.
3. Is $A$ a constant ?
Yes, $A$ is a constant.
4. ANL framework performs less well on the MNIST.
We appreciate the reviewer’s insightful question. We conduct experiments on the MNIST dataset with 0.8 symmetric noise rate using ANL-CE and NCE+AGCE, and calculate the entropy $H=-\sum_i p(i|x) \log p(i|x)$ of the model’s prediction probability on clean samples at each step. The lower the entropy, the closer the prediction probability is to a one-hot vector, indicating that the model is more confident in its prediction results. We find that in the first 5 epochs, the entropy values of both methods are very close and have similar trends. After 5 epochs, the entropy of NCE+AGCE stabilizes at around 0.09 and keeps oscillating, while that of ANL-CE shows a continuous decreasing trend and approaches 0. This indicates that our model is too confident in its prediction results.
We believe that this phenomenon is caused by our Theorem 5 and Theorem 6, which force the loss function to continuously make the model learn already learned samples, causing overfitting of the model to clean samples in the training set and loss of generalization performance. Although we use L1 regularization to alleviate this problem, considering that MNIST is a very simple dataset, the effectiveness of L1 regularization may be affected. As described in the Limitations section, we believe that regularization methods are the main limitation of our method.
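The entropy diagnostic used in this analysis can be reproduced in a few lines. The sketch below is our own illustration (the function name and batch layout are not from the paper):

```python
import numpy as np

def mean_prediction_entropy(probs, eps=1e-12):
    """Mean of H = -sum_i p(i|x) log p(i|x) over a batch of prediction
    probability vectors (one row per sample, rows summing to 1)."""
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

confident = np.array([[0.98, 0.01, 0.01]])  # near one-hot prediction
uniform = np.array([[1/3, 1/3, 1/3]])       # maximally uncertain prediction
print(mean_prediction_entropy(confident))   # ≈ 0.112 (low: confident)
print(mean_prediction_entropy(uniform))     # ≈ log(3) ≈ 1.0986 (high)
```

An entropy that keeps decreasing toward 0, as observed for ANL-CE, means the predictions approach one-hot vectors.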
5. Typos
Thank you for pointing out the typos in our paper. We will carefully check our paper and correct all the spelling and grammar errors. We apologize for the inconvenience caused by this.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The reply resolved my concerns. I will keep my score and vote for acceptance. | Summary: This paper proposes a robust loss function to improve the training of DNNs with noisy labels, improving on the previously proposed Active Passive Loss (APL) framework.
Strengths: The paper is well written and contains the theoretical foundation of the proposed work. There are many derivations, which give a broad understanding of loss functions. Table 2 shows improved performance of the proposed loss functions compared to [6] and [18] in some cases, though the improvement is not consistent.
Weaknesses: 1. The terminology used in this paper is mainly based on the previous work of reference [6]. Terms like active and passive loss functions do not clearly separate the loss functions. As shown in the appendix, $\mathrm{MAE}=2(1-p(y|x))$, which shows that MAE also aims to maximise $p(y|x)$; therefore it is also an active loss function. In [6], MAE was considered a passive loss function, which is not mathematically correct when we are dealing with predictions satisfying $\sum_k p(k|x)=1$. This paper inherits the same definitions from [6].
2. Changing the sign of a loss function also changes max to min: if we are maximising $L$, we should be minimising $-L$. Therefore a change of sign may not be considered to make any fundamental difference. The term `Vertical Flip' is derived from the graph; mathematically, there is no such thing as a vertical flip.
3. The definition of $A$ in Eq. (7) is not clear. How can a loss be computed between a vector $[\ldots, p_{\min}, \ldots]$ and $y$? In the appendix, $A$ is defined as a constant value.
4. As shown in the appendix, the normalisation proposed by [6] is a scaling, and the same is inherited in the current work. If all loss values are scaled by the same number, then how will it be robust to noise? The losses generated by noisy and clean samples will be scaled equally, and noisy labels are not known at training time.
5. In Eq. (9), the denominator can be simplified as $\sum_k (A + \log p(k|x)) = KA + 1$, reducing Eq. (9) to $\mathrm{NNLF} = 1 - \frac{A}{KA+1} - \frac{\log p(y|x)}{1+KA}$. It is again a scaled version of the original function plus a constant value. What fundamental change makes it robust to noise?
6. The range of asymmetric noise rates is kept quite small $\{0.2, 0.3, 0.4\}$ compared to symmetric noise $\{0.4, 0.6, 0.8\}$ in Table 2. A larger range would have revealed more insights.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The same as above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More theoretical insights are required for the proposed loss functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestion from the reviewer and will incorporate it into the updated version of our paper.
1. Active and passive loss functions.
We agree that there is no clear distinction between active and passive loss functions in [1]. Since our work follows [1], we directly use these definitions. Regarding why to distinguish between active and passive loss functions, we suggest referring directly to [1], which has a detailed discussion on this.
2. Vertical flip.
Firstly, the name “vertical flip” does come directly from the graph because it is very intuitive. Secondly, taking CE as an example, $A-(-\log p(i|x))$ obtained by “vertical flip” is not the same as $-(-\log p(i|x))$ obtained by simply changing the sign because we do not directly minimize it but also apply normalization to it. Specifically, after normalizing $\sum_{i \ne y}^K-(-\log p(i|x))$, we get $\frac{1}{K-1}\cdot(1-\frac{-\log p(y|x)}{\sum_k - \log p(k|x)})$, which shows that when we minimize it, we are actually minimizing $p(y|x)$ and maximizing $p(i\ne y|x)$, which contradicts our optimization goal. In contrast, after normalizing $\sum_{i \ne y}^K A - (- \log p(i|x))$, we get $\frac{1}{K-1}\cdot (1-\frac{A-(-\log p(y|x))}{\sum_k A - (-\log p(k|x))})$, which shows that when we minimize it, we are actually maximizing $p(y|x)$ and minimizing $p(i\ne y|x)$, which is consistent with our optimization goal.
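The sign-vs-flip argument can be checked numerically. The sketch below is our own illustration with an arbitrary constant $A$; the $\frac{1}{K-1}$ factor is dropped since it does not affect the direction of optimization:

```python
import numpy as np

def normalized_loss(p, y, A=None):
    """Normalized passive loss built from per-class CE terms l(i) = -log p(i|x).
    With A=None the terms are simply negated (-l(i)); with a constant A they
    are 'vertically flipped' (A - l(i)) before normalization."""
    l = -np.log(p)
    terms = -l if A is None else A - l
    return np.sum(np.delete(terms, y)) / np.sum(terms)

y = 0
low = np.array([0.4, 0.3, 0.3])   # less confident in the true class
high = np.array([0.8, 0.1, 0.1])  # more confident in the true class
A = 10.0  # any constant large enough to keep A - l(i) positive

# Simply negated: loss increases as p(y|x) grows -- minimizing it would
# push p(y|x) down, the wrong direction.
print(normalized_loss(low, y), normalized_loss(high, y))
# Vertically flipped: loss decreases as p(y|x) grows -- minimizing it
# pushes p(y|x) up, consistent with the optimization goal.
print(normalized_loss(low, y, A), normalized_loss(high, y, A))
```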
3. Definition of $A$.
Thanks to the reviewer for pointing out that the definition of $A$ is unclear. $A$ is indeed a constant, and we will revise its definition to make it more clear and precise.
4. Normalization.
The normalization proposed by [1] does not mean that all the loss values are scaled by the same number. For example, for $NCE=\frac{1}{\sum_k - \log p(k|x)} \cdot \big(- \log p(y|x)\big)$, the factor $\frac{1}{\sum_k - \log p(k|x)}$ varies for different samples $x$ with different prediction probabilities $p(i|x), i \in [1, \cdots, K]$. Moreover, the noise robustness of the normalized loss function is determined by its symmetry: $\sum_{y=1}^K \Big( \frac{1}{\sum_k - \log p(k|x)} \cdot \big(- \log p(y|x)\big)\Big) = 1$, and [2] has proved that the symmetric loss functions are robust to noise.
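The symmetry condition $\sum_{y=1}^K \mathrm{NCE}(x, y) = 1$ can be verified directly. A minimal sketch (our own helper, not the paper's code):

```python
import numpy as np

def nce(p, y):
    """Normalized cross entropy for one sample: (-log p(y|x)) / sum_k (-log p(k|x))."""
    return -np.log(p[y]) / np.sum(-np.log(p))

p = np.array([0.5, 0.3, 0.15, 0.05])  # any prediction distribution
total = sum(nce(p, y) for y in range(len(p)))
print(total)  # ≈ 1.0: the symmetry condition behind noise robustness
```

The per-sample factor $\frac{1}{\sum_k -\log p(k|x)}$ also makes clear that the scaling varies across samples rather than being a single global constant.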
5. The denominator of Eq (9).
The denominator of Eq (9) can be expressed as: $\sum_k (A + \log p(k|x) ) = K \cdot A + \sum_k \log p(k|x)$. We know that $\sum_k p(k|x)=1$ and $p(k|x) \in [0, 1]$. Therefore, we can conclude that $\sum_k \log p(k|x) \in (-\infty, K\cdot \log \frac{1}{K}]$ and $\sum_k ( A + \log p(k|x) ) \ne K \cdot A + 1$.
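A concrete distribution makes the point immediately (arbitrary illustrative values of $A$ and $p$, since $\sum_k \log p(k|x) \ne 1$ in general):

```python
import numpy as np

A = 7.0                           # an arbitrary constant
p = np.array([0.7, 0.2, 0.1])     # a valid prediction distribution
K = len(p)
denominator = np.sum(A + np.log(p))  # = K*A + sum_k log p(k|x)
claimed = K * A + 1.0                # the suggested simplification
print(denominator, claimed)  # ≈ 16.73 vs 22.0: not equal
```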
6. The range of asymmetric noise.
For synthetic asymmetric noise, we often mislabel one class as a specific other class. For instance, on CIFAR-10 with synthetic asymmetric noise, we randomly label bird images as planes with probability $\eta$ (noise rate). If $\eta$ is larger than 0.5, it means that most of the birds are labeled as planes, which is very rare in the real world. To the best of our knowledge, no previous work has experimented with synthetic asymmetric noise at noise rates $\eta$ greater than 0.5.
[1] Normalized Loss Functions for Deep Learning with Noisy Labels, ICML, 2020
[2] Robust Loss Functions under Label Noise for Deep Neural Networks, AAAI, 2017
---
Rebuttal Comment 1.1:
Comment: The authors have followed the active and passive terminology used by previous papers. Adding intuition for why the normalisation results in robustness would improve the paper. The proposed loss is demonstrated on standard DNNs, while it remains unclear whether it will be effective for other deep architectures, such as Vision Transformers, which use attention mechanisms. The network architectures used in the different experiments must be mentioned. Nevertheless, the paper may be considered for publication. | Summary: This paper introduces a novel type of loss function called Active Negative Loss (ANL), which builds upon the Active Passive Loss (APL) framework. The authors identify a limitation in APL, where the passive loss function, being a scaled version of Mean Absolute Error (MAE), can lead to slower convergence and underfitting issues. To address this, the authors propose replacing the passive loss in APL with a normalized negative loss, drawing inspiration from negative learning and vertical flipping techniques. The paper also provides theoretical justifications for the superiority of ANL over APL. The proposed ANL framework is evaluated through experiments conducted on CIFAR10 and CIFAR100 datasets with synthetic label noise, as well as ILSVRC12 and WebVision datasets with real-world label noise.
Strengths: 1. The motivation is clearly articulated and accompanied by well-explained reasoning.
2. The proposed ANL framework can be considered an enhanced version of the APL framework, offering improved performance with theoretical backing. Specifically, it demonstrates robustness against both symmetric and asymmetric label noise under certain conditions.
3. Code is provided to facilitate reproducibility.
Weaknesses: 1. Although the proposed loss is theoretically sound, many of the techniques or analyses employed are not very new. The ANL framework appears to be a simple combination of the NL loss [R1] and APL loss [R2]. While I acknowledge the validity of this combination, I do not believe it constitutes a significant contribution to the LNL community.
2. The authors mention that regularization, such as L1 regularization, is applied to the ANL framework. I am curious to know if other approaches in your experiments also utilize this regularization. Additionally, the absence of ablation studies investigating the impact of regularization on the ANL framework is worth considering.
3. I believe conducting further experiments to verify the effectiveness of the ANL framework is necessary. For instance, experiments involving CIFAR with controlled instance-dependent label noise, CIFAR-10N, CIFAR-100N, and the Clothing1m dataset would provide valuable insights.
[R1] NLNL: Negative Learning for Noisy Labels
[R2] Normalized Loss Functions for Deep Learning with Noisy Labels
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See **weaknesses**
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think this work does not bring negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestion from the reviewer and will incorporate it into the updated version of our paper.
1. A simple combination of the NL and APL.
First of all, although our ANL may seem like a simple combination of existing techniques, we have a strong motivation for doing so: finding a passive loss function that performs better than MAE to enhance APL.
Secondly, our NLF and NL are still fundamentally different. Take NLF(CE) as an example:
$$
L_\text{NLF}=\sum_{k=1}^K (1-q(k|x))\big(A-(-\log p(k|x))\big).
$$
And the NL loss is:
$$
L_\text{NL}=\sum_{k=1}^K (1-q(k|x)) \big( - \log (1-p(k|x)) \big).
$$
Although the $q(k|x)$ in NL is actually obtained by randomly selecting complementary labels rather than directly from the given labels as we did, we use the same form for comparison purposes.
Next, let's take a closer look at the gradient of these two losses on $p(j|x), j \ne y$. For NLF, we have:
$$
\frac{\partial}{\partial p(j|x)} L_\text{NLF} = \frac{1}{p(j|x)}.
$$
And for NL loss, we have:
$$
\frac{\partial}{\partial p(j|x)} L_\text{NL} = \frac{1}{1-p(j|x)}.
$$
We can see that for NL loss, its gradient decreases as $p(j|x)$ decreases. However, for our NLF, the gradient increases. This means that our NLF can keep focusing on well-learned samples. This property holds even after normalization (as shown in Theorem 5 and 6). In fact, if we were to use NL instead of our NLF, Theorem 5 and 6 would not hold.
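The contrasting gradient behavior is easy to verify numerically (a sketch; the helper names are ours):

```python
def grad_nlf(p_j):
    """d/dp(j|x) of the NLF term (A - (-log p(j|x))) for j != y."""
    return 1.0 / p_j

def grad_nl(p_j):
    """d/dp(j|x) of the NL term (-log(1 - p(j|x))) for j != y."""
    return 1.0 / (1.0 - p_j)

# As p(j|x) shrinks (a well-learned sample), NLF's gradient grows
# while NL's shrinks toward 1.
for p_j in (0.5, 0.1, 0.01):
    print(p_j, grad_nlf(p_j), grad_nl(p_j))
```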
2. Regularization.
For the experiments of other methods, we followed their original papers and used L2 regularization. However, to ensure fairness and investigate whether other regularizations would have positive effects on other methods, we also conducted a set of experiments using L1 regularization for APL. The details and results of these experiments are presented in Appendix on L530. We found that changing the regularization methods did not significantly affect the performance of APL. Moreover, regarding the ablation study of the impact of regularization on the ANL framework, we have already done the relevant experiments and discussions on L247 of the paper.
3. More experiments.
Please see the global response.
---
Rebuttal Comment 1.1:
Title: Thank you for your prompt response and thorough explanation.
Comment: The majority of my concerns have been adequately addressed. However, I would like to note that, in my opinion, the performance exhibited in both the main paper and the Rebuttal phase does not match the competitiveness of state-of-the-art methods like the ELR loss [R1]. Nevertheless, this work does make contributions in analyzing robust losses within the domain of Learning with Noisy Labels (classification). In light of this, I decide to maintain my rating at 5. I strongly encourage the authors to include rebuttal experiments involving additional methods such as ELR, Peer Loss in the upcoming version.
[R1] Early-Learning Regularization Prevents Memorization of Noisy Labels | Rebuttal 1:
Rebuttal: Following the suggestions of the reviewers, and in order to further validate the effectiveness of our method, we conducted a set of experiments on CIFAR-10N [1], CIFAR-100N [1], Animal-10N [2], and Clothing-1M [3]. For some experiments, we can only compare a few methods due to time constraints. The table of the experimental results can be found in the pdf file. Overall, our method outperforms all the other compared methods on all four datasets.
**CIFAR-10N and CIFAR-100N**
We use the same experimental settings and parameters as CIFAR-10 and CIFAR-100 for CIFAR-10N and CIFAR-100N, since the only difference is the noise label distribution. The results are reported in Table 1 and Table 2.
**Animal-10N**
We follow the experimental setting in previous works [2]. We use VGG-19 with batch normalization. The SGD optimizer is employed. We train the network for 100 epochs and use an initial learning rate of 0.1, which is divided by 5 at 50% and 75% of the total number of epochs. Batch size is set to 128. Typical data augmentations including random horizontal flip are applied.
We compare our ANL-CE with CE and GCE. We use L2 regularization (weight decay) for GCE, and L1 regularization for ANL-CE. We denote the regularization coefficient by $\delta$. We tune the parameters $\\{\delta\\}, \\{q, \delta\\}$ and $\\{\alpha, \beta, \delta\\}$ for CE, GCE and ANL-CE respectively. We use the best parameters $\\{10^{-3}\\}, \\{0.5, 10^{-4}\\}, \\{0.5, 0.1, 10^{-6}\\}$ for each method in our experiments. The results are reported in Table 3.
Moreover, we experiment with NCE+RCE on this dataset and tune the parameters $\\{\alpha,\beta,\delta\\}$, but we find that the performance is very poor for some unknown reason. The best test accuracy we achieve is 28.28 with $\\{10.0, 0.1, 5 \times 10^{-6}\\}$. Since this result is too low and inconsistent with its performance on other datasets, we do not include it in the table for comparison.
**Clothing-1M**
We follow the experimental setting in previous works [4]. We use the $14k$ and $10k$ clean data for validation and test, respectively, and we do not use the $50k$ clean training data. We use ResNet-50 pre-trained on ImageNet. For preprocessing, we resized the images to $256 \times 256$, performed mean subtraction, and cropped the middle $224 \times 224$. We used SGD with a momentum of 0.9, a weight decay of $10^{-3}$, and batch size of 32. We train the network for 10 epochs with learning rate $10^{-3}$ and $10^{-4}$ for 5 epochs each. Typical data augmentations including random horizontal flip are applied.
We compare our ANL-CE with CE, GCE and NCE+RCE. In this experiment, we use L2 regularization (weight decay) for ANL-CE and set the coefficient the same as those for CE, GCE and NCE+RCE. We tune the parameters $\\{q\\}, \\{\alpha, \beta\\}$ and $\\{\alpha, \beta\\}$ for GCE, NCE+RCE and ANL-CE respectively. We use the best parameters $\\{0.6\\}, \\{10.0, 1.0\\}$ and $\\{5.0, 0.1\\}$ for each method in our experiments. The results are reported in Table 4.
[1] Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations, ICLR, 2022
[2] SELFIE: Refurbishing Unclean Samples for Robust Deep Learning, ICML, 2019
[3] Learning From Massive Noisy Labeled Data for Image Classification, CVPR, 2015
[4] Joint Optimization Framework for Learning with Noisy Labels, CVPR, 2018
Pdf: /pdf/3191b563feb24e39a75d6367fc321b48fd6ceb3d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Model-enhanced Vector Index | Accept (poster) | Summary: In this study, the authors present a new approach called Model-enhanced Vector Index (MEVI) that combines ideas from both autoregressive sequence-to-sequence models for indexing and dense retrieval models (twin-tower architectures), and includes Residual Quantization. This approach offers significant advantages and proves to be highly effective in real-world applications. MEVI demonstrates nice performance gains in terms of achieving both high recall accuracy and faster retrieval speed, even when dealing with large-scale corpora.
Strengths: * The method introduced combines two recent approaches for document retrieval: dense retrieval and generative neural indexing. It showcases the power of the two together and is extensively evaluated, both offline as usual and online in real-world scenarios (commercial advertising). They also include a nice latency evaluation.
* The paper is well-written and easy to follow in general.
Weaknesses: Major comments:
* MSMARCO is the only benchmark which was evaluated in this work. I’d love to see if the method generalizes well across more datasets. This is required in order to convince readers to favour this method.
* While an online A/B testing experiment is appreciated, there is no guarantee about the quality of this kind of testing, whereas controlled crowdsourcing does provide an infrastructure for filtering out bad annotations.
* L152: I am not convinced that the tree structure is problematic because of decoding time. It can actually alleviate the decoding if using controlled decoding instead of plain beam search. Also, the hierarchical nature of the structure of embeddings proposed in this work is not that different (as stated in L167).
* The authors report results using MRR@10, Recall@50, and Recall@1000, but it would be interesting to see both more fine-grained retrieval results (Recall@1), following previous works, as well as a figure showing how the results progress as k is increased.
The training processes of the twin-tower model and the sequence-to-sequence model are distinct, which may be very problematic. But I do acknowledge this was stated in the limitations section.
Minor comments:
* Please elaborate more on the proposed MEVI method, both in the contributions part of the introduction section and in the introduction itself. After reading these, I didn’t have any clue about the method, which should be described much earlier in the text.
* Related work section - please state how your method differs from other recent works.
* L146: insert a space after T5-ANCE. Similar comment for the paragraph starting at line 149 and the rest of the paper.
* L157-166: The mathematical notations lack clarity in terms of explicitly defining the dimensions of the vectors/matrices and what they represent.
* Table 4: please include detailed caption so the table can be understood standalone. The same applies for the rest of the tables.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and suggestions.
We address the points you mentioned in the weaknesses part.
Major comments
1. We add another dataset Natural Questions (NQ). We take AR2 (https://arxiv.org/abs/2110.03611) as the dense retriever and HNSW as the ANN algorithm. The results are listed in the table below. MEVI also achieves better performance than baseline methods on NQ dataset.
| Method | R@5 | R@20 | R@100 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | / | 59.1 | 73.7 |
| AR2 + HNSW | 70.89 | 78.50 | 83.02 |
||||
| MEVI Top-10 Clus | 59.61 | 66.45 | 71.63 |
| MEVI Top-100 Clus | 70.33 | 77.23 | 81.77 |
| MEVI Top-1000 Clus | 75.57 | 82.83 | 87.31 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 74.10 | 82.11 | 86.43 |
| MEVI Top-100 Clus & AR2(HNSW) | 74.43 | 82.71 | 87.51 |
| MEVI Top-1000 Clus & AR2(HNSW) | **75.93** | **83.96** | **88.98** |
2. The online A/B test results come from our company's real online service business, which has brought higher advertising revenue with higher model quality. We present this experimental result to show that MEVI is not only applicable to academic datasets, but also effective in real industrial applications.
3. NCI's high latency does not come from the tree structure, but from its long decoding process. We add a latency comparison between NCI and MEVI in the first table below. We decouple the problem into two parts: the difference between MEVI and NCI/DSI, and the difference between RQ and normal k-means. For the first part, each minimal cluster in MEVI contains a set of documents, while each minimal cluster in NCI and DSI contains only one document. This design gives MEVI a smaller number of clustering layers, leading to smaller serving latency; also, if the doc-id is too long, the model cannot learn it well, so the results of MEVI are better than those of NCI.
For the second part, we use RQ instead of normal k-means in MEVI because RQ performs better. In RQ, each layer uses the residual vectors from the last clustering layer and conducts k-means clustering among all the vectors. Compared to hierarchical k-means, which focuses on more fine-grained clustering within large clusters, RQ adopts residual information to address the errors of the last clustering layer, making the clustering results more robust. We conduct an experiment to compare RQ with normal hierarchical k-means: we test MEVI-RQ and MEVI-KMeans on the MSMARCO Passage dataset, setting the layer depth to 4 and the number of clusters per layer to 32. The results are listed in the second table below, showing that RQ generally achieves better recall than hierarchical k-means under the same number of layers.
| Method | MRR@10 | Latency (ms) |
| -------- | :-----: | :-----: |
| T5-ANCE (HNSW) | 33.21 | 19.71 |
| NCI | 26.18 | 2899.17 |
| Top-10 & HNSW | 35.22 | 96.87 |
| Top-100 & HNSW | 35.60 | 222.55 |
| Top-1000 & HNSW | 35.83 | 1662.84 |
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| RQ MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| RQ MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| RQ MEVI Top-1000 Clus | **35.76** | **82.37** | **95.17** |
||||
| K-Means MEVI Top-10 Clus | 31.62 | 65.09 | 68.25 |
| K-Means MEVI Top-100 Clus | 34.82 | 77.30 | 87.93 |
| K-Means MEVI Top-1000 Clus | 35.65 | 81.01 | 94.17 |
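To make the residual-quantization step concrete, here is a minimal NumPy sketch (our own simplified implementation, not the paper's code; a toy k-means stands in for the production clustering):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; returns (centroids, assignments)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        a = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C, a

def residual_quantize(X, layers, k):
    """Each layer clusters the residuals left by the previous layer; a
    document's cluster-id is its sequence of per-layer codes."""
    codes, residual = [], X.astype(float).copy()
    for _ in range(layers):
        C, a = kmeans(residual, k)
        codes.append(a)
        residual = residual - C[a]  # pass the residual error to the next layer
    return np.stack(codes, axis=1), residual

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
ids, residual = residual_quantize(X, layers=3, k=4)
print(ids.shape)  # (200, 3): one code per layer forms the cluster-id
print(np.linalg.norm(residual) < np.linalg.norm(X))  # quantization error shrinks
```

Each layer operates on residuals of the whole corpus, unlike hierarchical k-means, which re-clusters only within each parent cluster.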
4. In the main experiments, we align with existing work (e.g. coCondenser https://arxiv.org/abs/2108.05540, RocketQA https://arxiv.org/abs/2010.08191, AR2 https://arxiv.org/abs/2110.03611) and report MRR@10, Recall@50, Recall@1000. Following your suggestion, we add Figure 2 into the attached PDF to show how the results vary with different $k$, where the ensemble method consistently outperforms T5-ANCE+HNSW. We currently separate the training of the two-tower model and the sequence-to-sequence model to reduce the overall training cost. We plan to explore joint-training optimization in the future.
Minor comments
1. We incorporate the information from the sequence-to-sequence model into dense retrievers, achieving better model quality on large datasets with acceptable serving latency. For the generative sequence-to-sequence model, we construct an RQ codebook to cluster the documents, limiting the decoding steps as well as improving the clustering quality. For the ensembling, we take the retrieved documents from the dense retriever and the generative model, then re-rank the documents with a proper score. In experiments, MEVI exhibits better recall performance than the baseline dense retriever and generative model. We have also applied MEVI in real industrial applications, bringing higher advertising revenue.
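The re-ranking step over the two candidate lists can be sketched as a linear interpolation over their union. This is a hypothetical simplification: the exact scoring function and the weights (the $\alpha$, $\beta$ a reviewer asks about) are not specified here.

```python
def ensemble_rank(dense_scores, cluster_scores, alpha, beta, default=0.0):
    """Re-rank the union of candidates from the dense retriever and the
    generative (cluster) model with an interpolated score.
    dense_scores / cluster_scores: dict doc_id -> score from each retriever.
    alpha, beta: interpolation weights (hyperparameters)."""
    docs = set(dense_scores) | set(cluster_scores)
    scored = {
        d: alpha * dense_scores.get(d, default) + beta * cluster_scores.get(d, default)
        for d in docs
    }
    return sorted(scored, key=scored.get, reverse=True)

dense = {"d1": 0.9, "d2": 0.7, "d3": 0.2}  # toy scores
clus = {"d2": 0.8, "d4": 0.6}
print(ensemble_rank(dense, clus, alpha=1.0, beta=0.5))  # ['d2', 'd1', 'd4', 'd3']
```

Documents retrieved by both models ("d2" above) naturally float to the top, which is the intended ensembling effect.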
2. Existing state-of-the-art methods can be classified into dense retrievers and generative models, where the former applies to large datasets and the latter performs better on small datasets. MEVI incorporates information from generative models into dense retrievers, achieving better model quality on large datasets.
3. Thanks for your suggestions in items 3-5. We will modify the typos and improve the captions in the next version of this paper. We have also added a table for notations, which will be placed in the appendix later.
Notations
| Notation | Explanation |
| ----- | ----- |
| $n$ | The number of documents |
| $d$ | Embedding dimension |
| $b$ | The number of bits for clustering |
| $m$ | The number of clustering layers, i.e. the length of cluster-id |
| $X \in \mathbb{R}^{n\times d}$ | All document embedding |
| $B_i\in\mathbb{R}^{b\times d}, i=1,...,m$ | Centroid embeddings of clusters |
| $B\in\mathbb{R}^{mb\times d}$ | Overall centroid embeddings |
| $C_i\in\{0,1\}^{n\times b}, i=1,...,m$ | Code-style assignment of documents |
| $C\in\{0,1\}^{n\times mb}$ | Overall cluster assignment |
| $\widetilde{C}\in\{0,1,2,...,2^b-1\}^{n\times m}$ | Compact cluster assignment (typos in the original paper will be modified) |
| $R_i\in \mathbb{R}^{n\times d},i=1,...,m-1$ | Residual embeddings after clustering |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. Since the additional experiments that the authors attached include more extensive evaluations, over more than just MSMARCO and NQ, I’m willing to raise my score. That said, I still think that the A/B test doesn’t fit here, mostly because it is not reproducible.
---
Reply to Comment 1.1.1:
Comment: Thanks for your recognition of our revision. Based on your suggestions, in the next version we will move the new reproducible experiments to the main body of the paper and the A/B test section to the appendix. | Summary: This work introduces a residual quantisation codebook into k-means clustering to generate semantic IDs. These semantic IDs are used to provide an initial cluster-level ranking and to prune the corpus to a subset of a few thousand documents. The interpolation between the dual-encoder similarity and a derived cluster score is used as the final ranking. The experiments are on the MS MARCO passage dataset.
Strengths: - An novel idea is implemented and examined to exploit the semantic ids from generative retrievers to introduce clusters into dual-encoder
- The writing is easy to follow and the method is insightful
- Promising results are reported on the large MS MARCO dataset and on an in-house online system
Weaknesses: - The cluster-based retrieval has been long studied and this paper should be put into context better, e.g., [1, 2] . Besides, is this the first work to introduce clusters into dual-encoder retriever?
- The effectiveness improvement in Table 1 comes from the fine-grained clustering information. Does such a boost still exist when using state-of-the-art dual-encoder retrievers, like RocketQAv2 and GTR, where the document similarity is of higher quality? Actually, in Table 1, it can be seen that the use of HNSW could boost the recall considerably. Also, the results of the online system are on top of weak dual-encoder models (four layers). Thus, it is beneficial to better understand when and where the clustering information could help.
- For the dynamic update of documents, is this the first work that investigates this problem? What are the existing methods tackling this problem and how they design the experiments? The results from Table 2 look promising, but I am not sure how convincing it is to support that claims of dynamic updates.
- As ablation results, Table 3 is hard to read: the configuration differences among rows involve more than one of the three factors (subgroup, bits, top-k), and it is hard to tell whether increasing the subgroup or k helps.
- Some formatting issues: a space is required between the text and its follow-up citation, e.g. line 150
1. https://dl.acm.org/doi/pdf/10.1145/253495.253524
2. https://arxiv.org/abs/2008.00150
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the configuration of \alpha and \beta and how sensitive is the performance relative to their configs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time you took to review our paper.
First, we address the five points you mentioned in the weaknesses part.
1. From the taxonomy perspective, our proposed MEVI is a generation-enhanced dense retrieval method, not a traditional cluster-based retrieval method. The clustering process here forms doc-ids (or say cluster-ids) to build outputs of a generative language model. We would like to claim this is the first work to introduce information from generative model into dual-encoder retriever. We will reference the cluster-retrieval based methods and clarify their difference in the next version of our paper.
2. We have conducted experiments with another state-of-the-art dense retrieval model, AR2 (https://arxiv.org/abs/2110.03611), in the table below. AR2 performs better than RocketQA-v2. The comparison between the vanilla dense retriever and the ensembling method in the table below shows the gain from introducing generation-based retrieval information.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | 18.7 | 59.2 | 85.7 |
| T5-ANCE + HNSW | 33.21 | 77.30 | 88.61 |
| AR2 + HNSW | 35.54 | 78.80 | 87.11 |
| NCI | 26.18 | 74.68 | 92.44 |
||||
| MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI Top-1000 Clus | 35.76 | 82.37 | 95.17 |
||||
| MEVI Top-10 Clus & T5-ANCE(HNSW) | 35.22 | 81.29 | 93.21 |
| MEVI Top-100 Clus & T5-ANCE(HNSW) | 35.60 | 82.27 | 95.62 |
| MEVI Top-1000 Clus & T5-ANCE(HNSW) | 35.82 | 82.77 | 97.12 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 37.00 | 82.64 | 93.46 |
| MEVI Top-100 Clus & AR2(HNSW) | 38.42 | 84.52 | 96.23 |
| MEVI Top-1000 Clus & AR2(HNSW) | **39.16** | **86.12** | **97.65** |
3. Dynamic update is not a problem in dense retrieval, since the use of embeddings already accommodates unseen documents as long as the new ones do not change the overall distribution drastically. However, existing generation-based methods convert documents to fixed doc-ids, so re-training is required when new documents arrive. Therefore, existing dense retrieval methods handle new documents well, while existing generation-based methods cannot support dynamic updates. Since we aim to incorporate the information from generative models and take advantage of dense retrieval methods simultaneously, we train the model with only 90% of the documents and treat the rest as new documents to validate MEVI's ability to handle them. In our experiments, MEVI achieves results comparable to dense retrieval methods, exhibiting a similar ability to support dynamic updates.
4. In Table 3 of the original paper, our aim is to examine the recall performance of different configurations with almost the same number of candidate documents per query. To keep the number of documents per query on the same order of magnitude, we have to adjust the number of clusters, which is calculated as (2 ^ #subgroup) ^ #bits. The subgroup and bits values together determine the size of the cluster-id space, while k determines the number of documents per query. From RQ(3,4) to RQ(5,5), the number of clusters increases and the number of documents in each cluster decreases, so we enlarge $k$ to keep the number of documents per query comparable. In general, the larger the number of candidate documents, the more likely the model is to recall the correct document. Comparing RQ(3,4), RQ(4,4), and RQ(4,5), even though the number of documents per query is reduced, the recall performance increases due to finer-grained learning of cluster-id generation. In the RQ(5,5) setting, the cost of generation becomes unacceptable as the number of clusters increases. In summary, RQ(4,5) achieves the best trade-off between model performance and cluster-id space size.
5. Thanks for your valuable suggestions. We will modify the typos in the next version.
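The cluster-count arithmetic in point 4 can be illustrated with a short sketch (the helper name `cluster_space_size` is ours, not the paper's):

```python
# Illustration of how the cluster-id space size scales with the RQ
# configuration: #clusters = (2 ** subgroup) ** bits, as stated above.
def cluster_space_size(subgroup: int, bits: int) -> int:
    """Number of distinct cluster-ids for an RQ(subgroup, bits) configuration."""
    return (2 ** subgroup) ** bits

for subgroup, bits in [(3, 4), (4, 4), (4, 5), (5, 5)]:
    print(f"RQ({subgroup},{bits}): {cluster_space_size(subgroup, bits):,} clusters")
```

This makes the trade-off concrete: RQ(5,5) yields over 33 million cluster-ids, which is why generation cost becomes unacceptable there, while RQ(4,5) stays at about one million.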
Then we answer the questions below.
1. What is the configuration of $\alpha$ and $\beta$ and how sensitive is the performance relative to their configs?
- $\alpha$ determines the overall contribution ratio of the clustering information, while $\beta$ determines how large the clustering score is with respect to the cluster rank. In practice, we directly search for and choose the best configuration, since the ensembling does not incur additional training or inference and the search overhead is negligible. Figure 1 in the attached PDF shows the MRR@10 for different values of $\alpha$ and $\beta$. With the right configuration, the ensemble result achieves optimal performance, as it fully leverages the two components. The best hyperparameters searched on the dev set are also optimal on the test set.
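The search procedure described above can be sketched as follows. This is a hedged illustration: the helper names (`combine`, `grid_search`, `mrr_at_10`) and the exact ensemble form are our own assumptions, since the rebuttal does not restate the paper's Section 3.2 equation; only the idea of precomputing scores once and cheaply re-scoring for each $(\alpha, \beta)$ pair is taken from the text.

```python
import itertools

def mrr_at_10(ranked_ids_per_query, gold_ids):
    """Mean reciprocal rank, truncated at position 10."""
    total = 0.0
    for ranked, gold in zip(ranked_ids_per_query, gold_ids):
        for pos, doc in enumerate(ranked[:10], start=1):
            if doc == gold:
                total += 1.0 / pos
                break
    return total / len(gold_ids)

def combine(dense_score, cluster_rank, alpha, beta):
    # Assumed ensemble form (NOT the paper's exact equation): a convex mix of
    # the dense score and a cluster score that decays with the cluster's rank.
    return (1 - alpha) * dense_score + alpha * (1.0 - beta * cluster_rank)

def grid_search(dev, alphas, betas):
    """dev: list of (candidates, gold_id), where candidates maps
    doc_id -> (dense_score, cluster_rank). Returns (best_mrr, alpha, beta)."""
    best = None
    for alpha, beta in itertools.product(alphas, betas):
        # No model calls here: only re-ranking of precomputed scores.
        ranked = [sorted(cands, key=lambda d: -combine(*cands[d], alpha, beta))
                  for cands, _ in dev]
        score = mrr_at_10(ranked, [gold for _, gold in dev])
        if best is None or score > best[0]:
            best = (score, alpha, beta)
    return best
```

Because each grid point only re-sorts cached `(dense_score, cluster_rank)` pairs, the search cost is indeed negligible compared to training or inference.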
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response and the additional results. I am willing to increase my rating.
For 1 and 2, that makes sense to me. It seems with AR2 the performance gets even better (more boost?).
For 3 and 4, thanks for the explanation and that makes sense. For 3, the update is indeed an issue since it is hard for the gen model to provide cluster id to the unseen doc, but I think it is not of concern for me and the 90% experiment looks good.
For the attached PDF, can you claim robustness to different choices of hyperparameters from these results (the range of the y-axis is relatively small)?
---
Reply to Comment 1.1.1:
Comment: Thanks for your recognition of our revision.
MEVI achieves significant gains on both the T5-ANCE and AR2 retrievers. The performance gain of MEVI+AR2 is larger, probably because AR2 is a more effective retriever with a higher upper bound, and the information it contains is fully utilized in the ensemble process.
Our ensemble method is robust to hyperparameters. As shown in Figure 1, the ensemble result outperforms the baseline (35.76) when $0.3\le\alpha\le0.7$ and $0.01\le\beta\le0.03$. This finding will be added to the next version of our paper. | Summary: This paper proposes to improve dense neural IR models by adding an RQ structure (hierarchical clustering based on residuals at each step) before the ANN search. The structure is used to construct a semantic identifier string for each document. The authors thus use a generative approach to retrieval combined with a dense model.
This approach has two advantages: first, faster retrieval (since the ANN search is restricted to a subset of the documents), and second, better retrieval (by exploiting a cluster-based score). Experiments conducted on MS MARCO (passages) and a proprietary advertisement dataset show improvements over T5-ANCE.
Strengths: The paper is well written and technical details ensuring reproduction are given.
The approach is promising as it both allows for more effective and efficient retrieval - which is always a challenge in IR. The proposed approach, based on RQ clusters, is original and combines the strength of generative and dense models in IR.
It also shows that adding documents is not as problematic as for other generative approaches (although the performance is then worse than T5-ANCE, at a higher latency).
Weaknesses: - Comparison with state-of-the-art models (AR2/Adversarial Retriever-Ranker for dense models, SPLADE for sparse ones) is missing - it would have been good to see how the proposed approach evolves with better dense models.
- With respect to generative models, several approaches have better results than NCI nowadays (e.g. Ultron) or the more recent “Learning to tokenise” (SIGIR’23). At least Ultron should be reported (ArXiv 2022) for comparison purposes.
- The authors have chosen to use a proprietary dataset (section 4.7) for the second evaluation. It would have been much wiser to use publicly available ones as nothing can be checked on this one (e.g. BEIR) - reporting proprietary results in the appendix would have been appropriate.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - what is the performance of ANCE without HNSW? This would have been a proper baseline.
- in section 3.2, how are $\alpha$ and $\beta$ set?
- What it the advantage of using RQ over hierarchical clustering?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitation section contradicts a bit the initial claims (“… the capacity of the sequence-to-sequence model is still insufficient to cope with large-scale corpus within acceptable inference latency …”), but correctly lists the different limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review.
First, we address the three points you mentioned in the weaknesses part.
1. On the MSMARCO Passage dataset, we add the experiment results with another state-of-the-art dense retrieval model, AR2 (https://arxiv.org/abs/2110.03611), in the following table. We also add SPLADE results for comparison. MEVI achieves better performance than AR2 (with HNSW as the ANN algorithm). Note that AR2 without HNSW performs brute-force ranking over all candidates, which yields better results but is not applicable to online applications.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | 18.7 | 59.2 | 85.7 |
| SPLADE | 32.2 | / | 95.5 |
| T5-ANCE | 35.73 | 82.96 | 97.21 |
| T5-ANCE + HNSW | 33.21 | 77.30 | 88.61 |
| AR2 | 39.50 | 86.98 | 98.44 |
| AR2 + HNSW | 35.54 | 78.80 | 87.11 |
| NCI | 26.18 | 74.68 | 92.44 |
||||
| MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI Top-1000 Clus | 35.76 | 82.37 | 95.17 |
||||
| MEVI Top-10 Clus & T5-ANCE(HNSW) | 35.22 | 81.29 | 93.21 |
| MEVI Top-100 Clus & T5-ANCE(HNSW) | 35.60 | 82.27 | 95.62 |
| MEVI Top-1000 Clus & T5-ANCE(HNSW) | 35.82 | 82.77 | 97.12 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 37.00 | 82.64 | 93.46 |
| MEVI Top-100 Clus & AR2(HNSW) | 38.42 | 84.52 | 96.23 |
| MEVI Top-1000 Clus & AR2(HNSW) | 39.16 | 86.12 | 97.65 |
2. Ultron conducts experiments on the MSMARCO DOCUMENT and NQ-320K datasets, which have far fewer documents than the MSMARCO PASSAGE and NQ datasets used in our experiments (we add experiments on NQ in the second table). Due to the high decoding cost of generative models, Ultron is not friendly to larger datasets; therefore, we do not compare to Ultron in our experiments. Moreover, the results presented in the Ultron paper are no better than NCI's. As for the paper you mentioned as "learning to generate", we cannot find it on arXiv or in the SIGIR'23 proceedings. Could you provide a link to this paper for our reference?
3. We conduct experiments on another public dataset Natural Questions (NQ) and the results are listed in the table below. MEVI still achieves better performance than baseline methods.
| Method | R@5 | R@20 | R@100 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | / | 59.1 | 73.7 |
| AR2 | 77.78 | 85.98 | 90.03 |
| AR2 + HNSW | 70.89 | 78.50 | 83.02 |
||||
| MEVI Top-10 Clus | 59.61 | 66.45 | 71.63 |
| MEVI Top-100 Clus | 70.33 | 77.23 | 81.77 |
| MEVI Top-1000 Clus | 75.57 | 82.83 | 87.31 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 74.10 | 82.11 | 86.43 |
| MEVI Top-100 Clus & AR2(HNSW) | 74.43 | 82.71 | 87.51 |
| MEVI Top-1000 Clus & AR2(HNSW) | 75.93 | 83.96 | 88.98 |
Then we answer the questions below.
1. What is the performance of ANCE without HNSW? This would have been a proper baseline.
- We add the performance of ANCE without HNSW in the tables above. Though brute-force ANCE achieves good recall performance, especially when k is large, it is not applicable in real industrial scenarios due to its large serving latency (> 3500 ms), so we did not explicitly include it in our paper. MEVI can reach better or nearly the same model quality with lower serving latency.
2. In section 3.2, how are $\alpha$ and $\beta$ set?
- We search for $\alpha$ and $\beta$ within a proper range and set them according to the model metric. Since the ensemble process does not incur additional training or inference, the search time is negligible. Figure 1 in the attached PDF shows the MRR@10 for different values of $\alpha$ and $\beta$. With the right configuration, the ensemble result achieves optimal performance, as it fully leverages both components. In general, for the same dataset, the best hyperparameters searched on the dev set are also optimal on the test set.
3. What is the advantage of using RQ over hierarchical clustering?
- In RQ, each layer uses the residual vectors from the previous clustering layer and conducts k-means clustering over all the vectors. Compared to hierarchical k-means, which focuses on more fine-grained clustering within large clusters, RQ uses the residual information to correct the errors of the previous clustering layer, making the clustering results more robust. We conducted an experiment to compare RQ with normal hierarchical k-means: we test MEVI-RQ and MEVI-KMeans on the MSMARCO Passage dataset, setting the layer depth to 4 and the number of clusters per layer to 32. The results are listed in the table below, showing that RQ generally achieves better recall than hierarchical k-means with the same number of layers.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| RQ MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| RQ MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| RQ MEVI Top-1000 Clus | **35.76** | **82.37** | **95.17** |
||||
| K-Means MEVI Top-10 Clus | 31.62 | 65.09 | 68.25 |
| K-Means MEVI Top-100 Clus | 34.82 | 77.30 | 87.93 |
| K-Means MEVI Top-1000 Clus | 35.65 | 81.01 | 94.17 |
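The RQ-versus-hierarchical-k-means distinction described above can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming (a toy `kmeans` and `residual_quantize`), not the paper's implementation: the key point is that each RQ layer re-clusters the residuals of *all* vectors, rather than splitting each parent cluster separately.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's-algorithm k-means; returns (centroids, assignments)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared L2 distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Recompute centroids; keep the old centroid if a cluster empties.
        for c in range(k):
            if (assign == c).any():
                centroids[c] = X[assign == c].mean(0)
    return centroids, assign

def residual_quantize(X, k, depth):
    """Each layer clusters the residuals left over by the previous layer,
    across ALL vectors -- unlike hierarchical k-means, which re-clusters
    only within each parent cluster."""
    residual = X.astype(float)
    codes = []
    for _ in range(depth):
        centroids, assign = kmeans(residual, k)
        codes.append(assign)
        residual = residual - centroids[assign]  # pass residuals to next layer
    return np.stack(codes, axis=1)  # one depth-length cluster-id per vector
```

With depth 4 and 32 clusters per layer, as in the ablation above, each document receives a 4-symbol cluster-id over a 32-symbol vocabulary per position, and later layers explicitly model the quantization error of earlier ones.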
---
Rebuttal Comment 1.1:
Comment: Many thanks for the answers – I think the complementary results are nice (there was no answer regarding the use of a proprietary dataset).
Few other notes:
- SPLADE has different versions – the one you report is v1 and is far from optimal
- efficiency results should be reported along with effectiveness ones (when applicable)
- the "learning to tokenize" (sorry, not "to generate") paper: http://arxiv.org/abs/2304.04171
---
Reply to Comment 1.1.1:
Comment: Thanks for your recognition of our revision. We make some supplementary explanations on the above questions.
1. Currently we have used two large-scale public datasets, MSMARCO Passage and Natural Questions, which are commonly used in related works such as coCondenser, RocketQA, and AR2. They are also adopted in BEIR benckmark (NQ in BEIR is smaller; here we use the version with large-scale documents as in previous works). In the next version, we will place the experiments on public datasets in the paper, and move the A/B test section to appendix.
2. We add the results of SPLADE-v2 to the table of MSMARCO Passage below.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | 18.7 | 59.2 | 85.7 |
| SPLADE | 32.2 | / | 95.5 |
| SPLADE-v2 | 36.8 | / | 97.9 |
| T5-ANCE | 35.73 | 82.96 | 97.21 |
| T5-ANCE + HNSW | 33.21 | 77.30 | 88.61 |
| AR2 | 39.50 | 86.98 | 98.44 |
| AR2 + HNSW | 35.54 | 78.80 | 87.11 |
| NCI | 26.18 | 74.68 | 92.44 |
||||
| MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI Top-1000 Clus | 35.76 | 82.37 | 95.17 |
||||
| MEVI Top-10 Clus & T5-ANCE(HNSW) | 35.22 | 81.29 | 93.21 |
| MEVI Top-100 Clus & T5-ANCE(HNSW) | 35.60 | 82.27 | 95.62 |
| MEVI Top-1000 Clus & T5-ANCE(HNSW) | 35.82 | 82.77 | 97.12 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 37.00 | 82.64 | 93.46 |
| MEVI Top-100 Clus & AR2(HNSW) | 38.42 | 84.52 | 96.23 |
| MEVI Top-1000 Clus & AR2(HNSW) | 39.16 | 86.12 | 97.65 |
3. We have also conducted efficiency experiments, and the results are shown below.
| Method | MRR@10 | Latency (ms) |
| -------- | :-----: | :-----: |
| T5-ANCE (HNSW) | 33.21 | 19.71 |
| NCI | 26.18 | 2899.17 |
| MEVI Top-10 & HNSW | 35.22 | 96.87 |
| MEVI Top-100 & HNSW | 35.60 | 222.55 |
| MEVI Top-1000 & HNSW | 35.83 | 1662.84 |
4. We notice that "learning to tokenize" is a preprint that is still a work in progress, and its code is not open-sourced, so we are unable to conduct experiments based on it. It uses the NQ-320K and sampled MSMARCO Passage datasets, which are much smaller than the NQ and full MSMARCO Passage datasets in our experiments, so the results are not comparable. Since the components in MEVI are highly decoupled, we believe that MEVI is also applicable to generative models other than NCI, which we consider future work. | Summary: This paper presents a new method for ensembling generative retrieval and embedding-based dense retrieval. Prior generative retrieval work relies solely on a generation model to generate the retrieved document, which is difficult to scale to a large corpus. In contrast, this paper uses the generation model to predict candidate document clusters but still adopts a dense retriever to produce the final document rankings, where documents from different clusters are weighted according to the generation model's output.
The authors first evaluated this method on MS MARCO. The proposed method shows improvements over the dense retriever T5-ANCE, as well as good support for corpus updates. The authors also tested this method in a production ad system, where improvements were also observed over their production dense retrieval model (a 4-layer transformer dual-encoder).
Strengths: - This paper proposes a simple idea for ensembling a generative retrieval model and embedding-based dense retrieval. In the proposed method, the generative retrieval model essentially predicts document clusters from a query, which are then used for reweighting or pruning the dense retrieval results. This approach is a lot more practical and scalable compared to existing generative retrieval approaches that need to generate the exact document identifier.
- Experiments show promising results.
- Paper is well written.
Weaknesses: - The authors did not compare to any published neural retrieval baselines on MS MARCO, as T5-ANCE and NCI are both their own implementations. I'm wondering how the method works with SOTA dense retrievers such as RocketQA v2 or coCondenser, whose results seem stronger than T5-ANCE on MS MARCO.
- It would be nice to add another dataset, such as Natural Questions.
- It is unclear to me what is the advantage of RQ compared to other hierarchical clustering approaches. It would be nice to add some discussion and ablation studies.
- There is some confusion around T5-ANCE. First of all, T5-ANCE does not seem to exactly follow the ANCE recipe, which uses dynamic indexing for hard negative mining. Second, the model size and hyperparameters of this model are unclear.
- I think it is an over-claim to say "For the first time, we demonstrate that a novel-designed generation-based model is able to handle a large corpus with millions of documents, reaching high recall performance and low serving latency at the same time." From my understanding, this method mostly relies on dense retrieval, and the generation-based model only provides cluster-level weights. To justify this claim, the authors should report the performance without ensembling with dense retrieval.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does the model compares or works with SOTA dense retrievers on MS MARCO?
- What is model size of the generation model? How does it affect serving latency?
- How's the performance with just the generation model?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
First, we address the five points you mentioned in the weaknesses part.
1. On the MSMARCO Passage dataset, we add another state-of-the-art dense retrieval model AR2 (https://arxiv.org/abs/2110.03611) in addition to T5-ANCE. AR2 performs better than RocketQA-v2 and coCondenser. From the experiment results in the table below, we can see that MEVI also achieves a better performance than AR2+HNSW.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | 18.7 | 59.2 | 85.7 |
| T5-ANCE + HNSW | 33.21 | 77.30 | 88.61 |
| AR2 + HNSW | 35.54 | 78.80 | 87.11 |
| NCI | 26.18 | 74.68 | 92.44 |
||||
| MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI Top-1000 Clus | 35.76 | 82.37 | 95.17 |
||||
| MEVI Top-10 Clus & T5-ANCE(HNSW) | 35.22 | 81.29 | 93.21 |
| MEVI Top-100 Clus & T5-ANCE(HNSW) | 35.60 | 82.27 | 95.62 |
| MEVI Top-1000 Clus & T5-ANCE(HNSW) | 35.82 | 82.77 | 97.12 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 37.00 | 82.64 | 93.46 |
| MEVI Top-100 Clus & AR2(HNSW) | 38.42 | 84.52 | 96.23 |
| MEVI Top-1000 Clus & AR2(HNSW) | **39.16** | **86.12** | **97.65** |
2. We conduct experiments on Natural Questions (NQ) and the results are listed here. We take AR2 as the dense retriever and HNSW as the ANN algorithm. MEVI achieves better performance than baseline methods on NQ.
| Method | R@5 | R@20 | R@100 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | / | 59.1 | 73.7 |
| AR2 + HNSW | 70.89 | 78.50 | 83.02 |
||||
| MEVI Top-10 Clus | 59.61 | 66.45 | 71.63 |
| MEVI Top-100 Clus | 70.33 | 77.23 | 81.77 |
| MEVI Top-1000 Clus | 75.57 | 82.83 | 87.31 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 74.10 | 82.11 | 86.43 |
| MEVI Top-100 Clus & AR2(HNSW) | 74.43 | 82.71 | 87.51 |
| MEVI Top-1000 Clus & AR2(HNSW) | **75.93** | **83.96** | **88.98** |
3. We find RQ performs better than normal hierarchical K-Means. In RQ, each layer uses the residual vectors from the previous clustering layer and conducts k-means clustering over all the vectors. Compared to hierarchical k-means, which focuses on more fine-grained clustering within large clusters, RQ uses the residual information to correct the errors of the previous clustering layer, making the clustering results more robust. We conducted an experiment to compare RQ with normal hierarchical k-means: we test MEVI-RQ and MEVI-KMeans on the MSMARCO Passage dataset, setting the cluster depth to 4 and the number of clusters per layer to 32. The results are listed in the table below, showing that RQ generally achieves better recall than hierarchical k-means with the same number of layers.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| MEVI-RQ Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI-RQ Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI-RQ Top-1000 Clus | **35.76** | **82.37** | **95.17** |
||||
| MEVI-KMeans Top-10 Clus | 31.62 | 65.09 | 68.25 |
| MEVI-KMeans Top-100 Clus | 34.82 | 77.30 | 87.93 |
| MEVI-KMeans Top-1000 Clus | 35.65 | 81.01 | 94.17 |
4. T5-ANCE is another model released by the authors of ANCE. The hyperparameters and the whole training process are described at https://openmatch.readthedocs.io/en/latest/models/t5-ance.html. T5-ANCE follows two-round training, where the first round uses BM25 negative samples and the second round uses hard negative samples. It uses T5-base as the backbone model. More detailed hyperparameters can be found on the website above.
5. Our expression here is a little bit confusing. We will revise it in the next version. We would like to demonstrate that MEVI is the first generation-ENHANCED model to handle a large corpus with millions of documents, with high recall performance and acceptable serving latency. The performance without ensembling has already been reported in Table 1 from the original version, named "MEVI Top-K Clus". The recall and MRR metrics are generally better than T5-ANCE+HNSW when K is 100 or 1000.
Then here are the answers to your questions.
1. How does the model compares or works with SOTA dense retrievers on MS MARCO?
- Already presented in weaknesses part. We add an experiment with sota dense retriever AR2 in the first table.
2. What is model size of the generation model? How does it affect serving latency?
- Currently we use T5-base as the backbone generation model. We add an experiment on the serving latency of different model sizes in the following table. When the model size becomes larger, the latency becomes unacceptable for online serving.
| Method | Latency (ms) |
| -------- | :-----: |
| T5-base Top-10 | 96.87 |
| T5-base Top-100 | 222.55 |
| T5-base Top-1000 | 1662.84 |
| T5-large Top-10 | 124.56 |
| T5-large Top-100 | 253.23 |
| T5-large Top-1000 | 1665.47 |
3. How's the performance with just the generation model?
- The performance is presented in the first table above, named "MEVI Top-k Clus". The generation model already achieves good recall performance without ensembling when K is 1000.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and the additional result! These are good results and should be put into the paper. I'm willing to raise my rating to 6. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for taking time in reading our paper and providing valuable comments. We briefly address common questions here.
1. We add another dataset, Natural Questions (full document set version from DPR https://arxiv.org/abs/2004.04906), for comparison. We take AR2 (https://arxiv.org/abs/2110.03611) as the dense retriever and HNSW as the ANN algorithm. As shown in the table below, MEVI performs better than the baseline methods.
| Method | R@5 | R@20 | R@100 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | / | 59.1 | 73.7 |
| AR2 + HNSW | 70.89 | 78.50 | 83.02 |
||||
| MEVI Top-10 Clus | 59.61 | 66.45 | 71.63 |
| MEVI Top-100 Clus | 70.33 | 77.23 | 81.77 |
| MEVI Top-1000 Clus | 75.57 | 82.83 | 87.31 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 74.10 | 82.11 | 86.43 |
| MEVI Top-100 Clus & AR2(HNSW) | 74.43 | 82.71 | 87.51 |
| MEVI Top-1000 Clus & AR2(HNSW) | **75.93** | **83.96** | **88.98** |
2. We would like to clarify the difference between MEVI and NCI/DSI, and the difference between RQ and k-means for identifier generation. For the first difference, each minimal cluster in MEVI contains a set of documents, while each minimal cluster in NCI and DSI contains only one document. This design gives MEVI a smaller number of decoding steps, leading to lower serving latency. Also, if the doc-id is too long, the model cannot be trained effectively; consequently, MEVI achieves better results than NCI on the MSMARCO Passage dataset while having lower serving latency. For the second difference, we choose RQ instead of K-Means in MEVI for its better performance. In RQ, each layer uses the residual vectors from the previous clustering layer and conducts k-means clustering over all the vectors. Compared to hierarchical k-means, which focuses on more fine-grained clustering within large clusters in each layer, RQ uses the residual information to correct the errors of the previous layer, making the clustering results more precise and robust. We conducted an experiment to compare RQ with normal hierarchical k-means: we test MEVI-RQ and MEVI-KMeans on the MSMARCO Passage dataset, setting the layer depth to 4 and the number of clusters per layer to 32. The results are listed in the following table, showing that RQ generally achieves better recall than hierarchical k-means with the same number of layers.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| MEVI-RQ Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI-RQ Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI-RQ Top-1000 Clus | **35.76** | **82.37** | **95.17** |
||||
| MEVI-KMeans Top-10 Clus | 31.62 | 65.09 | 68.25 |
| MEVI-KMeans Top-100 Clus | 34.82 | 77.30 | 87.93 |
| MEVI-KMeans Top-1000 Clus | 35.65 | 81.01 | 94.17 |
3. On the MSMARCO Passage dataset, we add an experiment on the state-of-the-art dense retriever AR2 (https://arxiv.org/abs/2110.03611) in addition to T5-ANCE. From the result table below, we can see that MEVI+AR2 achieves a better performance than the corresponding baselines.
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | 18.7 | 59.2 | 85.7 |
| T5-ANCE + HNSW | 33.21 | 77.30 | 88.61 |
| AR2 + HNSW | 35.54 | 78.80 | 87.11 |
| NCI | 26.18 | 74.68 | 92.44 |
||||
| MEVI Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI Top-1000 Clus | 35.76 | 82.37 | 95.17 |
||||
| MEVI Top-10 Clus & T5-ANCE(HNSW) | 35.22 | 81.29 | 93.21 |
| MEVI Top-100 Clus & T5-ANCE(HNSW) | 35.60 | 82.27 | 95.62 |
| MEVI Top-1000 Clus & T5-ANCE(HNSW) | 35.82 | 82.77 | 97.12 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 37.00 | 82.64 | 93.46 |
| MEVI Top-100 Clus & AR2(HNSW) | 38.42 | 84.52 | 96.23 |
| MEVI Top-1000 Clus & AR2(HNSW) | **39.16** | **86.12** | **97.65** |
4. To determine the values of $\alpha$ and $\beta$, we search within a proper range and set them according to the model metrics. Since the ensemble process does not incur additional training or inference cost, the time for grid search is negligible. We show the MRR@10 for different values of $\alpha$ and $\beta$ in the attached PDF. With the best configuration, the ensemble result achieves optimal performance, as it fully leverages both components. On the MSMARCO dataset, the best hyperparameters searched on the dev set are also optimal on the test set.
Pdf: /pdf/3b3be6c97ab5343af7781bde13afa0c2d6e8a92f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This research paper introduces a deep text retrieval model that possesses the capability to effectively manage a large corpus comprising millions of documents. The model achieves remarkable recall performance while maintaining relatively low latency. Moreover, in addition to surpassing the performance of existing methods in a document retrieval benchmark, the paper also showcases the successful integration of deep text retrieval models into a commercial advertising system, demonstrating their practical value in an industrial setting.
Strengths: 1. S1: They are supposed to be the first to demonstrate the successful implementation of deep text retrieval models in the practical application.
2. S2: They investigate and test the effectiveness of Residual Quantization within the deep text retrieval model.
3. S3: The experiment results on MSMARCO dataset display the effectiveness of the proposed method.
Weaknesses: W1: They conduct experiments on a single dataset. It is recommended that their model be evaluated on commonly used datasets such as Natural Questions, which is used in both the DSI and NCI research. In addition, they point out that existing methods are limited by unacceptable serving latency, but they do not conduct experiments to prove this. Since their method uses Residual Quantization (RQ) as its clustering method, it may be more time-consuming than other clustering methods during training.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In dynamic update experiments, if more documents come in, their codebook may not be "optimal", so how can the codebook be adjusted to preserve similar effectiveness and robustness? Otherwise, the resulting method may not be good. In addition, their Top-10 results are worse than T5-ANCE.
2. More experiments are needed. They should also compare their method with one another datasets used in DSI or NCI. And the latency experiment should include other baseline models.
3. If RQ is applied to DSI and NCI, does their method achieve better performance, or does the utilization of normal k-means in the proposed method potentially diminish its effectiveness to a significant extent?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors pointed out the limitations of their model including not jointly learning between twin-tower model and seq2seq model, and their method is still unacceptable inference latency in the large-scale corpus. They had provided potential solutions to address the above issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
First, we address the points you mentioned in the weaknesses part.
1. We conduct experiments on another popular dataset, Natural Questions (NQ). We take AR2 (https://arxiv.org/abs/2110.03611) as the dense retriever and HNSW as the ANN algorithm. As shown in the table below, the MEVI method still achieves better performance than the baseline methods.
| Method | R@5 | R@20 | R@100 |
| -------- | :-----: | :-----: | :-----: |
| BM25 | / | 59.1 | 73.7 |
| AR2 + HNSW | 70.89 | 78.50 | 83.02 |
||||
| MEVI Top-10 Clus | 59.61 | 66.45 | 71.63 |
| MEVI Top-100 Clus | 70.33 | 77.23 | 81.77 |
| MEVI Top-1000 Clus | 75.57 | 82.83 | 87.31 |
||||
| MEVI Top-10 Clus & AR2(HNSW) | 74.10 | 82.11 | 86.43 |
| MEVI Top-100 Clus & AR2(HNSW) | 74.43 | 82.71 | 87.51 |
| MEVI Top-1000 Clus & AR2(HNSW) | **75.93** | **83.96** | **88.98** |
2. We add the latency of NCI here for comparison. It is worth noting that the clustering process is performed in advance, while the training process and the serving process are performed on the constant RQ structure. The serving latency here is mainly determined by the number of decoding steps in the autoregressive generation. Since MEVI allows each doc-id to represent a set of documents, while NCI only allows one doc-id to represent one document, MEVI requires far fewer decoding steps than NCI's 10, resulting in lower latency.
| Method | MRR@10 | Latency (ms) |
| -------- | :-----: | :-----: |
| T5-ANCE (HNSW) | 33.21 | 19.71 |
| NCI | 26.18 | 2899.17 |
| MEVI Top-10 & HNSW | 35.22 | 96.87 |
| MEVI Top-100 & HNSW | 35.60 | 222.55 |
| MEVI Top-1000 & HNSW | 35.83 | 1662.84 |
Then we answer the questions below.
1. In dynamic update experiments, if more documents come in, their codebook may not be "optimal", so how can the codebook be adjusted to preserve similar effectiveness and robustness? Otherwise, the resulting method may not be good. In addition, their Top-10 results are worse than T5-ANCE.
- If more documents come in and the distribution of documents has changed drastically, all existing dense retrieval methods, as well as MEVI, have to re-train the model and re-construct the document index. In other words, dynamic updates only support a small number of new documents that do not change the distribution much. In future work we may extend RQ to an adaptive structure that captures the distribution of new documents during training. When MEVI only searches documents within the top-10 clusters, the number of documents to be ranked is very small, leading to poor recall performance. After enlarging the search to the top-100 or top-1000 clusters, MEVI performs better than the baseline in both the basic experiment and the dynamic update experiment.
2. More experiments are needed. They should also compare their method on other datasets used in DSI or NCI, and the latency experiment should include other baseline models.
- This question has been addressed in the weaknesses part. The experimental results on the NQ dataset (full document set version from DPR) are listed in the first table; MEVI performs better than the baseline models. For latency comparison, the results are listed in the second table. Dense retrieval based on bi-encoders has the smallest latency, and MEVI significantly outperforms NCI while maintaining affordable latency.
3. If RQ is applied to DSI and NCI, does their method achieve better performance, or does the utilization of normal k-means in the proposed method potentially diminish its effectiveness to a significant extent?
- We would like to break this problem down into two parts: one is the difference between MEVI and NCI/DSI, and the other is the difference between RQ and normal k-means. For the first, each minimal cluster in MEVI contains a set of documents, while each minimal cluster in NCI and DSI contains only one document. This design gives MEVI a smaller number of clustering layers, leading to lower serving latency. Also, if the doc-id is too long, the model cannot learn effectively within a given time budget, so the results of MEVI on MSMARCO Passage are much better than those of NCI. For the second, we use RQ instead of normal k-means in MEVI because RQ performs better. In RQ, each layer takes the residual vectors from the previous clustering layer and conducts k-means clustering over all the vectors. Compared to hierarchical k-means, which focuses on more fine-grained clustering within large clusters, RQ uses the residual information to correct the errors of the previous clustering layer, making the clustering results more robust. We conducted an experiment to compare RQ with normal hierarchical k-means: we test MEVI-RQ and MEVI-KMeans on the MSMARCO Passage dataset, setting the layer depth to 4 and the number of clusters per layer to 32. The results are listed in the following table, showing that RQ generally achieves better recall than hierarchical k-means under the same number of layers (corresponding to comparable latency).
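For concreteness, the residual quantization procedure described above can be sketched as follows. This is an illustrative toy version only: the tiny k-means helper, the layer/cluster sizes, and all names are ours for exposition and are not the actual MEVI implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # minimal Lloyd's k-means: returns centers and cluster assignments
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

def fit_rq(X, n_layers=2, k=8):
    # residual quantization: each layer clusters the residuals left over
    # after subtracting the previous layer's assigned centroid, unlike
    # hierarchical k-means, which re-clusters within each cluster
    residuals = X.astype(float).copy()
    codes = []
    for layer in range(n_layers):
        centers, labels = kmeans(residuals, k, seed=layer)
        codes.append(labels)
        residuals -= centers[labels]
    # each document's doc-id is its sequence of per-layer cluster tokens
    return np.stack(codes, axis=1), residuals

rng = np.random.default_rng(0)
docs = rng.normal(size=(200, 8))  # toy document embeddings
doc_ids, err = fit_rq(docs)
print(doc_ids.shape)  # (200, 2): one 2-token doc-id per document
```

Because each doc-id token sequence is shared by many documents, the autoregressive decoder needs only `n_layers` decoding steps, which is the source of the latency advantage over NCI discussed above.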
| Method | MRR@10 | Recall@50 | Recall@1000 |
| -------- | :-----: | :-----: | :-----: |
| MEVI-RQ Top-10 Clus | 32.05 | 63.25 | 66.82 |
| MEVI-RQ Top-100 Clus | 35.16 | 79.14 | 88.22 |
| MEVI-RQ Top-1000 Clus | **35.76** | **82.37** | **95.17** |
||||
| MEVI-KMeans Top-10 Clus | 31.62 | 65.09 | 68.25 |
| MEVI-KMeans Top-100 Clus | 34.82 | 77.30 | 87.93 |
| MEVI-KMeans Top-1000 Clus | 35.65 | 81.01 | 94.17 | | null | null | null | null | null | null |
Generative modeling for RNA splicing code predictions and design | Reject | Summary: This paper presents a tissue specific transformer based splicing prediction model, TrASPr along with a Bayesian Optimization algorithm, BOS, capable of designing RNA with desired properties. The authors start by demonstrating the performance of TrASPr on RNA splicing data from both mouse and human tissues. Next, the authors assess the model’s ability to detect condition specific regulatory elements using ENCODE data involving three RBP Knockdown (KD) in two human cell lines, and data for tissue-specific regulatory elements from a mini-gene reporter assay. At last, TrASPr is used as an Oracle for BOS to generate AS event sequences with desired properties and an evaluation of BOS performance is presented.
Strengths: [-] Originality; This paper utilizes recent advances in large language models (LLMs) to define a splicing prediction model. While this is not the first attempt to use LLMs for nucleotide encoding, the authors improved existing models and incorporated existing prior knowledge through dedicated features. Furthermore, the authors use the model as an oracle for a generative model for RNA splicing design.
[-] Quality; The paper is well-written and presented. The framework seems well thought through; combining both expert prior knowledge regarding the problems tackled and state-of-the-art computational models.
[-] Clarity; The paper is self-inclusive, presenting the reader with all necessary information from the background regarding the biological problem, its complexity, and motivation regarding its importance. Following that, all framework parts are clearly presented.
[-] Significance; This work provides promising results towards utilizing LLMs for predicting splicing events and further using these to train an RNA design model. While this work is only an initial step towards obtaining a robust, reliable framework that solves this task, it is of great significance as it advances the field.
Weaknesses: [-] Reproducibility code; the authors claim reproducibility; however, no code was provided. Providing the code could improve the understanding and evaluation of the presented framework.
[-] Structure; the paper is generally well structured in providing all necessary information; however, the organization could be improved. For example, the background section already contains model implementation details, and prior attempts are split between the introduction and related work. In line with the latter, it would be beneficial if the authors could provide a more elaborate description of previous work, specifically for the prediction task.
[-] Evaluation; the authors’ explanation regarding the degradation in performance over the “Strict” test set is not convincing, and the rationale behind the necessity of the “strict” test set raises some concerns. It would be insightful if the authors could look more into this point, potentially testing on alternative data to obtain a better understanding of this.
[-] minor (these are provided to improve the manuscript’s readability);
[--] incomplete/unreadable sentences; a few sentences in the text are incomplete or unclear (e.g. line 78, lines 256-259)
[--] Space after TrASPr is omitted in many places in the manuscript (probably as a result of latex macro).
[--] Figure 1; Figure 1a is never referenced (and its components are hence not explained), the relationship between 1b-1c could be depicted better (in line with many transformer visualization models).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could the authors provide a code base for evaluation?
2. Could the authors extend the analysis on differences in performance over the MGP dataset or provide an additional example?
2. Can the authors suggest an alternative baseline, similar to DEN but which can be evaluated in the same setting, to compare the performance of the design class?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer Kz6p
**Weaknesses**:
==========
[-] **Reproducibility code**; the authors claim reproducibility; however, no code was provided. Providing the code could improve the understanding and evaluation of the presented framework.
**Reply**:
Agreed! Code will be made available after publication.
[-] **Structure**; the paper is generally well structured in providing all necessary information however the organization could be improved. For example, the background section already contains model implementation details and prior attempts are split between the introduction and related work. In line with the latter, it would be beneficial if the authors could provide a more elaborate description of previous work, specifically for the prediction task.
**Reply**:
Thank you for the suggestion. The model implementation details will be moved to the model introduction part and prior attempts will be combined together. In terms of previous work, we plan to include more work about RNA splicing prediction with other structures and other biological prediction tasks with BERT related models (see also response to other reviewers).
[-] **Evaluation**; the author’s explanation regarding the degradation in performance over the “Strict” test set is not convincing, and following the rationale behind the necessity of the “strict” test set raises some concerns. It would be insightful if the authors could look more into this point, potentially testing on alternative data to obtain a better understanding of this.
**Reply**:
We are unsure about the “not convincing” comment. The focus of this work is on the new model, TrASPr, and we clearly show that with a more strict threshold we get somewhat of a drop in performance, which makes sense. We assume the comment refers to the old AE+MLP model compared against. We agree that the result for that model showing improvement was not immediately clear to us either. We don’t have additional data for this model (the model was published with the analyzed dataset included here; any additional data would require creating it based on the original model's curated feature set). We think part of the explanation could be that only a subset of splicing patterns are conserved across species (please see the response to reviewer XeDB for more details), combined with the fact that the AE+MLP model only used a limited set of pre-computed features. One way we could potentially look into this further is to examine the splicing patterns of the specific samples which were removed between the “strict” and more “permissive” thresholds: are those similar or different compared to the matching ones in the training set? We plan to add that analysis for the final version.
===========
[-] **minor** (these are provided to improve the manuscript’s readability);
[--] incomplete/unreadable sentences; a few sentences in the text are incomplete or unclear (e.g. line 78, lines 256-259)
[--] Space after TrASPr is omitted in many places in the manuscript (probably as a result of latex macro).
[--] Figure 1; Figure 1a is never referenced (and its components are hence not explained), the relationship between 1b-1c could be depicted better (in line with many transformer visualization models).
**Reply**:
Thank you! We will work to address all of the above comments in the revision.
**Questions**:
Could the authors provide a code base for evaluation?
**Reply**:
Yes - please see the response above.
**Questions**:
Could the authors extend the analysis on differences in performance over the MGP dataset or provide an additional example?
**Reply**:
In the paper, we provided some results on both MGP and the GTEx (human) dataset. In addition to that, we have done another round of ablation study to better understand the importance of each component of the model and inputs (see response to reviewer aUkC). See also comment above about analyzing performance differences in the MGP data. We definitely want to keep introducing additional datasets/conditions including genetic variants but these are unlikely to make it into this current manuscript.
**Questions**:Can the authors suggest an alternative baseline, similar to DEN but which can be evaluated in the same setting, to compare the performance of the design class?
**Reply**:
Yes - please see above response to reviewer XeDB on question “BO part of paper barely discussed…” for details.
---
Rebuttal Comment 1.1:
Title: post-rebuttal comments
Comment: I would like to thank the authors for taking the time to respond to my review.
While the authors have directly addressed some of my concerns, a majority of the response relies on changes or inclusions that will be provided in a revised version.
Unfortunately, this makes it hard for me to evaluate the quality of these, and based on the response and comments from other reviewers, I tend to keep my current evaluation. | Summary: The authors develop a new framework to predict alternative splicing of RNA. They then deploy it with adaptations and Bayesian Optimization to design new sequences.
Strengths: I find the validation using the RBP KD experiment interesting. It is great that the knowledge of the biological system can be used to inform your computational experiments.
“We repeated this process 5 times with different random mutations and the prediction results were then averaged and compared to the wild type (WT) sequence prediction.“ - Good to do lots of permutations!
Good to try to remove batch effects with ENCODE data, but going to be hard.
Levenshtein distance between designed sequences is good!
Figure 2 comparison to Pangolin is pretty good and convincing
Weaknesses: Table 1 results of rAUPRC and AUROC are confusing. Can the “feature” and “Model” terms be a bit better defined?
Figure 4b is a bit confusing, and I feel like we need a bit more context. Should things be above or below the line? Can we have a legend for the figure as well?
BO part of paper barely discussed, required changes to core algorithm, and not validated–I would remove. Moreover, the baseline algorithm to compare BO (random mutation) is not a sufficient baseline. What about evolutionary strategies? What about Gibbs sampling?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: “...∼100bp long reads that are mapped back to the genome using dedicated tools (e.g., STAR).”- Citation please?
Section 2.2 What are the “...classification ([CLS]), padding ([PAD]), separation ([SEP]), mask ([MASK]) and unknown ([UNK])....” characters, or what do they mean? Perhaps they are discussed in more detail in the paper, but their relative importance and disambiguation should be described when they are introduced
To what extent is there synteny between mouse and human chromosomes on the held out set? For human chromosomes 8 and 14, and mouse 4 and 11, it seems like there will be some overlap in homologous sequences in the training and test sets. How many sequences are then removed during the BLAT step? Are the filters strict enough? Should BLAT just be done on exonic regions, because they are more conserved?
Line 280 - Should be Fig 3 and Table 1
Section 6.2 - “... To achieve this, we randomly mutated sequences in the same set of exons, selected from the same distribution of distances as the original motifs…” What does distance refer to here? Linear genome distance? Levenshtein distance?
“We then ’removed’ the effect of these RBPs on the set of AS targets by randomly mutating the identified binding motifs.” What is random here? Did you preserve GC content? Could you just permute the bases?
What is figure 5b trying to show? Is the y axis the number of times a mutation shows up during optimization? Also, is this the figure you are referring to in line 322, where you say figure 4b)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It is unclear how much the BO section is needed. With a sufficient predictor, do evolutionary strategies work?
“Specifically, we only…” Background section is not complete…sort of just trails off.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer XeDB
Weaknesses:
===========
(1) Table 1 results of rAUPRC and AUROC are confusing. Can the “feature” and “Model” terms be a bit better defined?
Reply:
Agreed. The “feature Model” terms should be removed there. They simply refer to the fact that the AE+MLP model is based on predefined/manually curated features.
(2) Figure 4b is a bit confusing, and I feel like we need a bit more context. Should things be above or below the line? Can we have a legend for the figure as well?
Reply:
Again - agreed. Things should be along the 45 deg line i.e. perfect agreement between predicted and measured PSI values. Most observed values tend to reside around 0 or 1, which is a known phenomenon for splicing. We will improve the figure and add a legend in the final version.
(3) BO part of paper barely discussed, required changes to core algorithm, and not validated–I would remove. Moreover, the baseline algorithm to compare BO (random mutation) is not a sufficient baseline. What about evolutionary strategies? What about Gibbs sampling?
Reply:
We agree the BO presentation and discussion is limited. We will work to improve on that in the final version where we will include additional supplementary material as suggested by other reviewers. That said, we believe that introducing BO in this context is novel and can potentially garner interest from NeurIPS researchers working on design problems but are not aware of RNA related tasks.
The reviewer raises a good question about additional baselines for the design task. First, we note that since we are now introducing this as a design problem (which was not done before) it means, by definition, there are no available baselines to compare against. This means any other baseline we come up with would require creating another algorithm for this task. Specifically, it is not immediately clear to us how Gibbs sampling/MCMC would be applied here. That said, using evolutionary algorithms directly over proposed sequences seems doable and there are existing packages for those that we could try. We weren't able to complete this in the rebuttal week but we plan to include some runs with those in the final version.
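As a sketch of what such an evolutionary baseline could look like, consider the following hypothetical code: `oracle` stands in for a trained black-box predictor such as TrASPr, the toy oracle is purely illustrative, and none of these names are from our actual codebase.

```python
import random

BASES = "ACGU"

def mutate(seq, n_mut=2, rng=random):
    """Return a copy of seq with n_mut random single-base substitutions."""
    s = list(seq)
    for i in rng.sample(range(len(s)), n_mut):
        s[i] = rng.choice(BASES.replace(s[i], ""))
    return "".join(s)

def evolve(seed_seq, oracle, pop_size=20, generations=30, rng=None):
    """(1+lambda)-style evolutionary search: keep the best-scoring
    sequence under the oracle and mutate it to form each new population."""
    rng = rng or random.Random(0)
    best = seed_seq
    for _ in range(generations):
        pop = [mutate(best, rng=rng) for _ in range(pop_size)] + [best]
        best = max(pop, key=oracle)  # retaining best makes scores monotone
    return best

# toy oracle: fraction of 'G' bases stands in for a predicted PSI score
toy_oracle = lambda s: s.count("G") / len(s)
designed = evolve("ACGUACGUACGUACGU", toy_oracle)
print(toy_oracle(designed))
```

A hill climber like this keeps the best oracle score non-decreasing across generations, which would make it a natural, stronger comparison point than purely random mutation.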
Questions:
===========
(1) “...∼100bp long reads that are mapped back to the genome using 70 dedicated tools (e.g., STAR).”- Citation please?
Reply:
Will add.
(2) Section 2.2 What are the “...classification ([CLS]), padding ([PAD]), separation ([SEP]), mask ([MASK]) and unknown ([UNK])....” characters, or what do they mean? Perhaps they are discussed more in detail in the paper, but their relative importance and disambiguation should be described when they are introduced
Reply:
Good point. This description assumes prior knowledge which is not appropriate - we will make it more clear in the revised text.
(3) To what extent is there synteny between mouse and human chromosomes on the held out set? For human chromosomes 8 and 14, and mouse 4 and 11, it seems like there will be some overlap in homologous sequences in the training and test sets. How many sequences are then removed during the BLAT step? Are the filters strict enough? Should BLAT just be done on exonic regions, because they are more conserved?
Reply:
It is important to keep in mind that, unlike gene expression, splicing patterns are generally *not* conserved across species but rather across tissues in the same species (see Barbosa-Morais et al., Science 2012 and Merkin et al., Science 2012). Nonetheless, as these papers show, a subset is highly conserved across evolution. Because we were worried about homologous sequences, we created a stricter definition of the filters when using both human and mouse data. The number of samples (i.e., specific AS events observed in specific tissues) removed is 6,328.
(4) Line 280 - Should be Fig 3 and Table 1
Section 6.2 - “... To achieve this, we randomly mutated sequences in the same set of exons, selected from the same distribution of distances as the original motifs…” What does distance refer to here? Linear genome distance? Levenstein distance?
Reply:
Linear genome distance
(5) “We then ’removed’ the effect of these RBPs on the set of AS targets by randomly mutating the identified binding motifs.” What is random here? Did you preserve GC content? Could you just permute the bases?
Reply:
Good question. We simply introduced ten random sequences and computed the average over those. We didn’t try to permute just the bases in that region or to preserve the GC content. While these regions are intronic (non-coding) and the motifs are far from those of TFs, the above suggestions make sense and could potentially help produce more stable/accurate results; we should try those for the final version.
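For example, a permutation-based version of this perturbation, which destroys the motif while preserving GC (and overall base) composition exactly, could look like the following sketch; the helper name and coordinates are hypothetical, not the code used in the paper.

```python
import random

def permute_motif(seq, start, end, rng=None):
    """Replace seq[start:end] with a random permutation of its own bases,
    scrambling the motif while keeping base (and hence GC) content fixed."""
    rng = rng or random.Random(0)
    motif = list(seq[start:end])
    rng.shuffle(motif)
    return seq[:start] + "".join(motif) + seq[end:]

wt = "AAGGUCUCUCUCGCAU"            # toy wild-type sequence
mut = permute_motif(wt, 4, 12)     # scramble a putative binding site
assert sorted(mut) == sorted(wt)   # composition is unchanged
print(mut)
```

Averaging model predictions over several such permutations (different seeds) would give a composition-matched estimate of the motif's contribution.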
(6) What is figure 5b trying to show? Is the y axis the number of times a mutation shows up during optimization? Also, is this the figure you are referring to in line 322, where you say figure 4b)?
Reply:
In Fig 5b the y axis shows how many times an optimization hits the position. This marginal, which is far from uniform, is sensible given what we know about splicing core machinery and positional bias of intronic regulatory elements. In line 322 we are referring to figure 4b as written. The description there is about which exons were used to test BOS, and those exons are the ones shown in Fig 4b to have low inclusion.
Limitations:
===========
It is unclear how much the BO section is needed. With a sufficient predictor, do evolutionary strategies work?
Reply: Please see the above response/discussion.
“Specifically, we only…” Background section is not complete…sort of just trails off.
Reply:
Thank you, we will fix this.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Thank you for your comments and continued work. I have read the comments and will keep the same scores. | Summary: The authors propose a new machine learning framework called TrASPr, which is a transformer-based architecture with pretrained RNA language models that is tailored for the prediction and design task of RNA alternative splicing. The authors demonstrate that TrASPr accurately predicts tissue-specific `percent spliced in’ (PSI) scores, and the trained model can facilitate RNA design for specific RNA splicing outcomes.
Strengths: - Overall, the authors introduce and explain the problem of alternative splicing, its significance, and the dataset they used to study this problem quite well for a general reader at a machine learning conference.
- The evaluation setting is generally rigorous, as the authors carefully curated the test set to avoid any information leakage.
Weaknesses: - Some additional work and its relationship to this research should be discussed, such as "RNA Alternative Splicing Prediction with Discrete Compositional Energy Network," which also focuses on the prediction of PSI scores in a tissue-specific setting.
- When evaluating the effect of RBP KD and mutations, the authors first identify a set of RBP motif matching sites that might affect alternative splicing and then check if their model can accurately predict those sites through in-silico mutations. However, this measurement only assesses the model's ability to recover positive samples. The authors should also evaluate and present examples of in-silico mutations on non-regulating motif matches and demonstrate their models' predictions. This is important to show that the model is genuinely learning context-dependent sequence features and not just memorizing motif matches.
- While the formulation of the RNA design problem and the utilization of language models for RNA sequences can be considered novel in the field of RNA splicing, similar techniques have been introduced and used in protein sequence design and protein language models before; it would be nice to discuss some of those (e.g., for a review, see https://doi.org/10.1016/j.cels.2021.05.017), perhaps in Related work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions: from line 83-87, there are some empirical results, such as “we considered different tokenizing strategies of RNA sequences which are composed of 4 types of ribonucleotide bases (‘A’,‘C’,‘G’,‘U’). We settled on overlapping k-mers of length 6,” and “However, we found the DNABERT architecture to be unstable and opted to pre-train a lighter BERT model with only six layers as described below.” Are there results or analysis supporting these decisions and claims?
Some suggestions:
- It would be helpful to include a subfigure when introducing the concept of alternative splicing, splicing junction, etc. The authors make good efforts in the main text for getting ML audience to be familiar with the concepts, it would be easier if there are figures or illustrations involved.
- Some typos: e.g. line 9 “on Oracle” -> “an Oracle”, line 278 “Autoencodeer” -> “Autoencoder”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitations of the noisy labels obtained from biological experiments and concluded that this work "should be viewed more as a proof-of-concept outlining exciting directions for future developments rather than a finished product." It would be helpful if the authors could comment on how many datasets exist and are expected to be generated, and whether these limitations would be addressed with more data or more careful model design.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer LDfP
Weaknesses:
=========
(1) Some additional work and its relationship to this research should be discussed, such as "RNA Alternative Splicing Prediction with Discrete Compositional Energy Network," which also focuses on the prediction of PSI scores in a tissue-specific setting.
Reply:
Yes, we will add a reference to the above-mentioned work. We note this work by Chan et al is quite different from the one presented here. Chan et al predict whole transcripts, assume the same genomic sequence, and use additional side information about RBP gene expression levels to then predict (sample-specific) tissue-specific isoforms (termed PSI as well, but the measured entity is different). These characteristics make the work quite different in terms of the task addressed and closer to DARTS (Zhang et al., Nat Methods 2019). Nonetheless we agree it adds more general context and we are happy to include it in the related work discussion.
(2) When evaluating the effect of RBP KD and mutations, the authors first identify a set of RBP motif matching sites that might affect alternative splicing and then check if their model can accurately predict those sites through in-silico mutations. However, this measurement only assesses the model's ability to recover positive samples. The authors should also evaluate and present examples of in-silico mutations on non-regulating motif matches and demonstrate their models' predictions. This is important to show that the model is genuinely learning context-dependent sequence features and not just memorizing motif matches.
Reply:
The reviewer raises some important points which we elaborate on below. First, let us clarify the term “positive samples”. We note that in the cases we evaluate the motifs’ occurrences may push inclusion either up (“positive”) or down (“negative”). However, we think the reviewer means here “negative” as in sequence elements which are non-functional (“non-regulating motif”). We very much agree with the need for such an evaluation but note that the only way to test such sequence elements is by direct mutagenesis experiments (wet-lab), which are labor intensive, low-throughput, and well beyond the scope of a NeurIPS paper.
(3) While the formulation of the RNA design problem and the utilization of language models for RNA sequences can be considered novel in the field of RNA splicing, similar techniques have been introduced and used in protein sequence design and protein language models before; it would be nice to discuss some of those (e.g., for a review, see https://doi.org/10.1016/j.cels.2021.05.017), perhaps in Related work.
Reply:
We agree. While these works/problems are quite different these are definitely worth pointing out for the interested reader and we will add context/citations as suggested in the revised discussion of related work.
Questions:
=========
From line 83-87, there are some empirical results, such as “we considered different tokenizing strategies of RNA sequences which are composed of 4 types of ribonucleotide bases (‘A’,‘C’,‘G’,‘U’). We settled on overlapping k-mers of length 6,” and “However, we found the DNABERT architecture to be unstable and opted to pre-train a lighter BERT model with only six layers as described below.” Are there results or analysis supporting these decisions and claims?
Reply:
Please see details in the answers to reviewer aUkC on question “How important is the pre-training…”
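For readers less familiar with the tokenization the question refers to, overlapping k-mer tokenization of an RNA sequence can be sketched as follows (illustrative code only, not the pre-training pipeline itself):

```python
def kmer_tokenize(seq, k=6):
    """Split an RNA sequence into overlapping k-mers with stride 1,
    so adjacent tokens share k-1 bases."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmer_tokenize("ACGUACGUA", k=6)
print(tokens)  # ['ACGUAC', 'CGUACG', 'GUACGU', 'UACGUA']
```

A sequence of length L thus yields L - k + 1 tokens, which is the vocabulary/sequence-length trade-off weighed when choosing k.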
Some suggestions:
==============
(1) It would be helpful to include a subfigure when introducing the concept of alternative splicing, splicing junction, etc. The authors make good efforts in the main text for getting ML audience to be familiar with the concepts, it would be easier if there are figures or illustrations involved.
Reply:
That’s a good suggestion! We will create a more substantial introduction as supplementary material, with such a figure, to make the topic more accessible to new readers.
(2) Some typos: e.g. line 9 “on Oracle” -> “an Oracle”, line 278 “Autoencodeer” -> “Autoencoder”
Limitations:
The authors discussed the limitations of the noisy labels obtained from biological experiments and concluded that this work "should be viewed more as a proof-of-concept outlining exciting directions for future developments rather than a finished product." It would be helpful if the authors could comment on how many datasets exist and are expected to be generated, and whether these limitations would be addressed with more data or more careful model design.
Reply:
In general, we are aware of several datasets and more data being produced - we will add references to such. We also think that there is still room for improvement, modeling wise, which is an active area in several groups across the world, including ours. We will add that point to the discussion as well.
---
Rebuttal Comment 1.1:
Title: Response to rebuttals
Comment: The authors have addressed all my questions, and explained those they cannot address (requiring wet lab experiments). By looking at other reviewer’s comments I agree that a lot of the suggested revisions or changes would make the manuscript better. Taking all this into consideration, I wouldn’t change the score or rating; a weak accept would still be my rating for the current revision. | Summary: The paper tackles two tasks in alternative splicing of pre-mRNA, where multiple unique mRNAs are produced by including different segments. First, the authors proposed a transformer-based splicing prediction model, TrASPr. A 6-layer transformer model is pre-trained with 1.5M pre-mRNA sequences centered in splice sites. TrASPr utilizes multiple pre-trained transformer encoders for the splicing prediction in a tissue-specific manner. Second, the authors used TrASPr as an Oracle to train and evaluate splicing sequence design based on the Bayesian Optimization algorithm. Through the experiment results, they argue that TrASPr significantly outperforms state-of-the-art models and BOS can generate sequences with desired characteristics.
Strengths: - The authors tackle important problems in RNA biology. In particular, they argue that the sequence design for RNA splicing is a novel task and it can benefit therapeutics for correcting splicing defects and synthetic biology.
- To tackle the problems, they adopt a couple of machine learning methods that have shown successes in other domains: pre-training and fine-tuning of language models for biological sequences, and latent space Bayesian optimization (LSBO) over structured and discrete inputs.
Weaknesses: Major comments:
- [Originality] While the methods are novel for their first use for RNA alternative splicing, they still seem like mostly direct applications of widely known machine learning methods. For example, pre-training and fine-tuning of language models seem trivial even for the biological sequences. As referenced in the paper, DNABERT proposed a pre-trained language model for DNAs. There are also plenty of previous works that use pre-trained language models for various protein biology tasks.
- [Quality] While the paper includes a couple of experiment results, I do not think they are sufficient to verify the effectiveness of the proposed methods. It lacks strong baseline models for comparison and ablation studies for thoroughly understanding the proposed methods.
- [Significance] The paper does not bring significant and novel ideas that would be valuable to the broader NeurIPS community.
- [Clarity] I don’t think this is the best version of the paper, considering the broad NeurIPS community is not familiar with computational biology. The explanations are not detailed enough to easily understand the problem and significance of the experiment results.
Minor comments:
- Typo in L10: on Oracle -> an Oracle
- Sec 2.1: Incomplete. It suddenly ends with “Specifically, we only.”
- Sec 7: The authors mostly use the Discussion section for summarizing their contributions rather than discussing notable observations, limitations, and future directions. (except for the last paragraph where they discuss the inherent limitation of experiment data)
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: - How well is the pre-training conducted? Can you provide the training curve and evaluation (e.g. perplexity) of the pre-trained model? How much does it differ from the DNABERT model?
- How do you include conservation values for each k-mer for the Transformer model? Since the conservation values are not considered in the pre-training, the inputs for the Transformer encoder will be different from the pre-training and possibly incur out-of-distribution problems.
- Do Pangolin and TrASPr share the same training data? If not, is it possible to compare TrASPr with Pangolin trained on the same training data? It would more clearly show the effect of the proposed methods.
- How important is the pre-training? Is it possible to use the pre-trained DNABERT instead? How does the model perform with a randomly-initialized Transformer model? How important is the 6-mer tokenization compared to 1-mer tokenization?
- How important is the Transformer architecture? How does the model perform with other model types?
- How important are the event features?
- It seems TrASPr is used as an Oracle for both training and evaluation of the sequence design. Wouldn’t it produce over-optimistic results?
- How important is the LSBO? Can you provide comparisons with other baseline methods?
- The explanations of the biological problem and interpretations of the experiment results should be easier and more intuitive to understand. In addition, more detailed and friendly backgrounds should be included as supplementary.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors adequately addressed that the experiment data are inherently noisy and limited in number, such that this work should be viewed more as a proof-of-concept rather than a finished product.
---Post-Rebuttal Comments---
Overall, I am inclined to believe that incorporating the authors' responses would indeed enhance the manuscript's quality. Consequently, I have adjusted my rating from 3 to 4. Upon reviewing the comments from other reviewers and the authors' clarifications, it seems I'm not the only one who has struggled to understand the authors' contributions and has concerns about the presentation, especially regarding the BO for sequence design. This suggests that substantial revisions are needed beyond the inclusion of additional experimental results. Although the authors' responses have been comprehensive, I could not give a higher score as I respectfully believe resubmission after revision is more appropriate for this manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer aUkC
Weaknesses:
Major comments:
**Originality**: While the methods are novel for their first use for RNA alternative splicing, they still seem like mostly direct applications of widely known machine learning methods. For example, pre-training and fine-tuning of language models seem trivial even for the biological sequences. As referenced in the paper, DNABERT proposed a pre-trained language model for DNAs. There are also plenty of previous works that use pre-trained language models for various protein biology tasks.
**Reply**: While we would agree our paper does not focus on new ML methodology/theory, we would like to respectfully push back on the above criticism. In our minds, this criticism mostly reflects a failure on our part to explain what is new/original in our work to those who are not familiar with RNA splicing modeling. Let us try to amend this here:
It is very true that LLMs have been applied in Genomics, with the aforementioned DNABERT as an example. However, unlike DNABERT, which is an “off the shelf” BERT model applied to several Genomic datasets, our work goes beyond direct application of an existing model. Previous attempts at modeling condition-specific alternative splicing used either precomputed, manually derived features (the latest of those being Jha et al. 2017, with an AE+MLP) or very large CNNs (e.g., SpliceAI and Pangolin use 10 Kb windows). The first type of model is limited in scope/ability to generalize to any RNASeq/condition. The latter modeling approach ignores any knowledge about the structure of the underlying transcripts, making those models harder to interpret. Furthermore, even with those large CNN window sizes, those models are unable to fully capture around 25% of the AS events we use here, since in humans exons are typically short (~147b long) while introns can span many thousands of bases. It was not immediately clear, at least to us, how to model such large-scale events using Transformers, which are typically limited in their sequence length. We tried “off the shelf” sparse transformers of several kinds without much success early on, likely because much of the sequence is irrelevant for splicing decisions. Thus, we came up with what we consider an original solution to the problem: instead of ignoring annotation as in recent work (SpliceAI, Pangolin, etc.), we utilize it to first identify “templates” of AS events across the transcriptome, then train multiple transformers, each focused around a specific splice site, and combine those (along with additional features) into an MLP. This allows the overall sequence length to grow linearly with the number of junctions involved (J=4 in this case), such that the complexity of the model is 4xL^2 rather than (4L)^2, or much more if you just take the entire genomic sequence as in SpliceAI/Pangolin.
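The complexity claim above can be illustrated with a little arithmetic. This is an illustrative sketch (the window length L below is a made-up placeholder, not the paper's actual setting):

```python
# Illustrative arithmetic for the rebuttal's complexity claim (not the
# authors' code): self-attention cost grows quadratically in sequence length,
# so J separate windows of length L cost J*L^2 token-pair interactions versus
# (J*L)^2 for a single transformer over the concatenated sequence.

def attention_pairs(seq_len: int) -> int:
    """Number of token-pair interactions in one self-attention layer."""
    return seq_len * seq_len

J, L = 4, 512  # J = number of splice-site windows (4 junctions), L = window length (hypothetical)

per_window = J * attention_pairs(L)     # J independent splice-site transformers
single_window = attention_pairs(J * L)  # one transformer over the concatenation

print(per_window, single_window, single_window / per_window)
```

In general the ratio is exactly J, so splitting into J splice-site windows cuts per-layer attention cost by a factor of J relative to one concatenated window, before even considering the full genomic sequence.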
In addition, we made adaptations to standard BO and used the above model as an Oracle to resolve the issue of BO training. While none of this is novel ML we do believe our modeling approach is new and, as we demonstrate, clearly improves over current SOTA.
**Quality**: While the paper includes a couple of experiment results, I do not think they are sufficient to verify the effectiveness of the proposed methods. It lacks strong baseline models for comparison and ablation studies for thoroughly understanding the proposed methods.
**Reply**: We argue that strong baselines are already included in our work: we used the latest model for tissue-specific splicing (Pangolin) for any given RNASeq/condition, and we used the best tissue-specific model to date that used curated features. Still, the reviewer makes a good point about the lack of ablation studies, and we worked to add those (see below).
**Significance**: The paper does not bring significant and novel ideas that would be valuable to the broader NeurIPS community.
**Reply**: Please see the discussion above. In addition to demonstrating how Transformers could be adapted to RNA splicing modeling, we believe that framing RNA splicing design as a BO problem should be of interest to other groups working on either BO or Genomics.
**Clarity**: I don’t think this is the best version of the paper, considering the broad NeurIPS community is not familiar with computational biology. The explanations are not detailed enough to easily understand the problem and significance of the experiment results.
**Reply**:
We appreciate the candid criticism and will make a serious effort to have both the problem and the contributions more clear to the non (RNA) CompBio audience in the revised version.
===========
**Minor comments**:
**Sec 7**: The authors mostly use the Discussion section for summarizing their contributions rather than discussing notable observations, limitations, and future directions. (except for the last paragraph where they discuss the inherent limitation of experiment data)
**Reply**: We will work to improve the discussion by pointing out other possible directions for extensions, but also the applications of these models. A good example to note, which highlights the applicability of the proposed models to significant biological problems, is the very recent Wagner et al., Nature Genetics 2023. There, the authors apply two similar splicing models (MMSplice and SpliceAI; we compared against Pangolin, which came out later, used the SpliceAI architecture, and demonstrated improved performance) to predicting the effect of genetic variants in genomic sequences of patients with rare undiagnosed disease. The authors clearly demonstrate improved performance for detecting variations in splicing when the variations in RNA occur in non-CAT (clinically accessible tissue) samples; see Fig. 5c in their study. This application also serves to demonstrate why the models presented here are not just a theoretical exercise but can have immediate applications if tuned and packaged correctly.
(due to space limitations we upload a separate PDF with additional figure, table and response to the questions)
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Comments
Comment: I appreciate the authors' detailed responses. They have addressed many of my concerns about the paper. It seems like I misunderstood some parts of the manuscript. Some of my follow-up questions are as follows:
- In reference to Table 1 in the separate PDF, there appear to be discrepancies between the results presented in that table and those in Table 1 of the main manuscript. Could you kindly elaborate on whether the ablation studies were conducted under different setups? Further elucidation on these ablation studies would be beneficial.
- While the authors have elucidated that the utilization of pre-trained DNABERT yielded unsatisfactory results, it would be helpful to have access to the quantitative outcomes of this comparison. Additionally, could you expound on whether, during the fine-tuning phase, the entire Transformer model is retrained alongside the additional MLPs? Would it be more effective and robust to hyperparameters if the pre-trained Transformer were frozen, with only the MLPs undergoing fine-tuning?
- As far as I understand, the claimed main reason for the newly trained Transformer model outperforming the pretrained DNABERT is that it is only pretrained from the splice site sequences rather than the entire genome. To support the claim, would it be feasible to demonstrate the shortcomings of TrASPr when pre-trained with an equivalent volume of sequences randomly sampled from the entire genome?
- Regarding the evaluation of BOS sequence generation, in L 320-234, the authors stated "From the generated 214 sequences with increased inclusion, our BOS algorithm significantly increased PSI for 46 of them." However, I am still struggling to comprehend how the authors can determine that the generated sequences truly increased PSI, if they did not use the TrASPr as an Oracle to evaluate them.
**Overall**, I am inclined to believe that incorporating the authors' responses would indeed enhance the manuscript's quality. Consequently, I have adjusted my rating from 3 to 4. Upon reviewing the comments from other reviewers and the authors' clarifications, it seems I'm not the only one who has struggled to understand the authors' contributions and has concerns about the presentation, especially regarding the BO for sequence design. This suggests that substantial revisions are needed beyond the inclusion of additional experimental results. Although the authors' responses have been comprehensive, I could not give a higher score as I respectfully believe resubmission after revision is more appropriate for this manuscript.
---
Reply to Comment 1.1.1:
Title: Reply to Post-Rebuttal Comments
Comment: Comment:
I appreciate the authors' detailed responses.... addressed many of my concerns ..... seems like I misunderstood some parts
Reply:
We thank the reviewer for their detailed rebuttal response. We are glad the reviewer found we have addressed many of their concerns and clarified some misunderstandings about our work. We address the follow-up questions below.
Comment:
In reference to Table 1....appear to be discrepancies between the results....
Reply:
We apologize for the confusion regarding the different experiment settings. For the prediction of PSI/dPSI in wild-type samples, we have two kinds of datasets. The first dataset is for mouse, with samples generated from the mouse genome project (MGP). This data was used in the original AE+MLP paper, so we took the same settings and showed the results in Table 1. In the ablation study, we used the human dataset from GTEx, which is far more extensive. The main purpose of the ablation study is to further understand the effects of the different components of the model and features.
Comment:
....pre-trained DNABERT yielded unsatisfactory results, ...have access to the quantitative outcomes ...during the fine-tuning phase, the entire Transformer model is retrained....only the MLPs undergoing fine-tuning?
Reply:
The reviewer listed several interesting questions here. Starting from the end: yes, the whole model is retrained with a much smaller learning rate. We tested freezing the transformers and training only the MLP part. However, the performance was worse. This is to be expected: the pre-training is only on splice site recognition, but what makes a region condition-specific still needs to be learned. Freezing the Transformer hurts that learning.
Regarding a (quantitative) comparison to using DNABERT as the underlying Transformer model: following the reviewer’s specific request, we ran such a model for several days. There were several issues with it. First, using the published DNABERT parameter settings completely failed. As we noted before, we found DNABERT to be highly finicky, which made it hard to use in the first place. After a parameter search we were able to get it to run successfully, and the results were as follows:
AUPRC: [0.05311496 0.04932114]
Spearman: 0.11426922300496299
AUROC: [0.71877873 0.72784285]
We think the very low AUPRC (but reasonable AUROC) has to do with the model preferring to give more weight to the majority class (non-changing events) during training. This in turn could be due to the fact that the significantly larger model required us to reduce the batch size from 32 to 16. More experiments and hyperparameter exploration would be needed to look into this, but regardless, we stress that pre-training a different BERT was never the main focus of this paper.
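The contrast between AUPRC and AUROC under class imbalance can be illustrated with a toy calculation. The numbers below are hypothetical (5% positives and a deliberately uninformative ranking), and the metrics are implemented from scratch; the takeaway is that the chance baseline of average precision equals the positive-class prevalence, while the chance baseline of AUROC stays at 0.5 regardless of imbalance:

```python
# Toy illustration (not the paper's data): with heavy class imbalance, an
# uninformative ranking yields AUROC ~ 0.5 but average precision near the
# positive-class prevalence, so AUPRC values like 0.05 can sit close to chance.

n, n_pos = 1000, 50  # 5% positives (hypothetical numbers)
# Positives spread evenly through the ranking => the ranking carries no signal.
pos_ranks = [20 * k - 10 for k in range(1, n_pos + 1)]  # 1-based ranks 10, 30, ..., 990
n_neg = n - n_pos

# AUROC via the Mann-Whitney U statistic: fraction of (positive, negative)
# pairs where the positive is ranked above (better than) the negative.
u = sum((n - r) - (n_pos - k) for k, r in enumerate(pos_ranks, start=1))
auroc = u / (n_pos * n_neg)

# Average precision: mean of precision@rank evaluated at each positive.
ap = sum(k / r for k, r in enumerate(pos_ranks, start=1)) / n_pos

print(f"AUROC ~ {auroc:.3f}, AP ~ {ap:.3f}, prevalence = {n_pos / n:.3f}")
```

So an AUROC well above 0.5 alongside an AUPRC near the prevalence is exactly the signature of a model with some ranking signal that still neglects the minority class.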
Comment:
...the claimed main reason for ...outperforming ...DNABERT is ...pretrained from the splice site sequences....feasible to demonstrate the shortcomings of TrASPr ...randomly sampled from the entire genome?
Reply:
We would like to clarify a few things here. First, it is unclear to us exactly what combination of factors led our pre-trained model to outperform the pre-trained DNABERT as described above. For splice-site prediction alone, DNABERT got slightly worse results compared to TrASPr, and for the actual tissue-specific predictions, much worse results (see above). It could be the finicky nature of the model, the reduced batch size, or the fact that it was trained on irrelevant genomic data. We think asking us to pre-train TrASPr for several weeks on random genomic data just to delve further into this question is an unreasonable request, given that this is not a major point in this paper.
Comment:
...L 320-234, the authors stated... I am still struggling to comprehend...they did not use the TrASPr as an Oracle to evaluate them.
Reply:
We apologize for failing to make this clear previously. This analysis is based on TrASPr predictions. The point of this analysis is to use TrASPr as an Oracle, combined with BOS, to efficiently find good candidate sequences to achieve the desired splicing change. The part of the assessment not related to TrASPr is the overlap with the known regulatory motifs around these events and the distribution of locations in terms of whether these make biological sense as we describe in the main text.
In conclusion:
The standing issues are (a) better writing/explanations and (b) testing evolutionary algorithms. Both are addressable, and any result from testing those would be new/interesting. Other reviewers indeed raised that concern but suggested taking the BO out while still giving a higher score. We hope our new set of clarifications/details will help increase the reviewer's confidence in our work rather than lead to a recommendation of rejection. That said, we understand agreement is not always reached, and regardless we very much appreciate the reviewer’s informative comments and the time spent to ultimately make our work better. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their valuable comments. In particular, we appreciate the comments on the importance of the problem we address, the improvement in RNA splicing prediction, and the experiment design supporting our claims, as well as the positive feedback regarding the flow of our paper. Furthermore, we thank all reviewers for pointing out potential weaknesses and unclear elements. To save reviewers' time/effort, we addressed all questions and concerns of the reviews in separate responses but pointed to shared elements where relevant. We also uploaded the attached PDF with additional results/figures to address specific questions/concerns raised. We hope you will find our response suitable and look forward to any subsequent inspiring discussion.
NOTE: We were unable to address all of reviewer aUkC questions (which also relate to the uploaded figure and table) so we include those in the attached PDF together with the additional figure and table.
Sincerely,
the authors
Pdf: /pdf/94755090fa2e78740a9906fa5917d8819de37f90.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose two approaches to deal with problems in alternative splicing (AS): a transformer architecture-based tissue-specific splicing code model, TrASPr, and a Bayesian Optimization algorithm performed on the latent variable space of a VAE to address the design of RNA sequences with specific splicing characteristics. The architecture is not so novel, but applying LLMs to AS is worth noting.
Strengths: The proposed methods are exciting and computationally novel. Using BERT with masking to pre-train the model was a nice touch. Using VAE with Bayesian Optimization is interesting.
Weaknesses: The proposed method is a nice modeling experiment exploring the use of LLMs in a novel application. However, it lacks reliability as a tool for biomedical problems that can be used in biological research. The evaluation the authors provide shows that the method fails. The authors mention that TrASPr significantly outperforms AE-MLP in a particular situation but also point out that the performance degrades based on the filtering criteria. There also seems to be a performance difference in BOS sequence generation based on the edit distance, a hyperparameter that lay users would not know how to tune on their particular problem.
Minor comment:
Line 78 is incomplete.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Which version of MAJIQ did you use?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The proposed method has the potential to be a hypothesis generation method. However, it lacks biological soundness, and it is unclear how to pinpoint problems when they arise. The authors are very casual about the evaluation and about the problems that arise when the hyperparameters are chosen inappropriately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding MAJIQ version - we used 2.2
Regarding Weaknesses and Limitations listed by reviewer wcPZ:
We are unsure how to interpret the “biological soundness” and “lack of reliability” criticisms. We have worked hard to show that the model predictions not only outperform the current SOTA but also correspond to known biological motifs where we are able to test for those. Furthermore, we believe there is actual utility in “hypothesis generation” by this method; we give one concrete example for rare undiagnosed diseases (see response to reviewer aUkC). Regarding the critique related to the choice of hyperparameter settings: we would like to clarify that we do not view edit distance as a hyperparameter of BOS per se, because its impact on optimization performance is theoretically monotonic: larger edit distances afford the optimizer more freedom to modify the original sequence. To see that performance is monotonic, simply observe that any sequence with up to k edits is of course a valid sequence with up to k+1 edits, and therefore allowing k+1 edits cannot perform worse than allowing k edits. Therefore, since it is common to prefer parsimonious solutions to problems, edit distance should simply be taken as small as possible while achieving whatever the sequence designer believes to be an adequate change in splicing for their own task. The actual definition of this parameter can be the result of constraints/preferences related to the task. For example, if one were to design RNA edits for therapeutic purposes using base editors (e.g., “fixing” a genetic disorder), they may want to limit the number of edit locations, their positions, and their sequence compositions. Here we only explored a simple constraint, which is the total number of edits. Since we clearly failed to convey this, we will try to make this point more clear in the revised version.
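The monotonicity argument can be checked by brute force on a toy problem. The sketch below is not the BOS pipeline: it uses a made-up length-6 RNA string, substitutions only, and an arbitrary stand-in score in place of the Oracle:

```python
# Minimal brute-force check (toy, not the paper's BOS pipeline) of the
# monotonicity argument: sequences within k edits are a subset of sequences
# within k+1 edits, so the best achievable score cannot decrease with k.
from itertools import product

BASES = "ACGU"
wild_type = "ACGUAC"  # hypothetical wild-type sequence

def hamming(a: str, b: str) -> int:
    """Substitution-only edit distance between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def score(seq: str) -> float:
    # Arbitrary stand-in for the Oracle's predicted splicing change.
    return seq.count("GU") + 0.1 * seq.count("A")

best_at_k = []
for k in range(len(wild_type) + 1):
    feasible = (s for s in map("".join, product(BASES, repeat=len(wild_type)))
                if hamming(s, wild_type) <= k)
    best_at_k.append(max(score(s) for s in feasible))

# Nested feasible sets => non-decreasing optimum over k.
assert all(a <= b for a, b in zip(best_at_k, best_at_k[1:]))
print(best_at_k)
```

The same nesting holds for any score function, which is why larger edit budgets can only help the optimizer, and why the budget should then be chosen as small as adequate.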
Decompose Novel into Known: Part Concept Learning For 3D Novel Class Discovery | Accept (poster) | Summary: - This paper proposes a novel part-based algorithm for 3D novel class discovery (NCD). Authors propose Decompose Novel Into Known parts (DNIK) that leverages knowledge about parts of known objects to discover novel classes.
- Authors identify that the main problem with learning 3D features for object discovery is that the features are heavily biased towards the known classes. This work shows that this can be prevented by using the well known part-based modeling approach.
- DNIK is trained to learn a part concept bank that can be used to compose different known and novel objects. Three different regularizations are proposed to prevent the collapse of this part bank.
- Extensive experiments show the deficiencies of existing 2D NCD literature and the effectiveness of DNIK to overcome these issues.
Strengths: - This paper builds on an age old, part-based models in visual recognition and shows impressive improvements over single holistic representations currently in use in the 2D category discovery methods. As shown in the experiments this has significant merits for identifying and grouping new classes.
- The paper writing was smooth and was very easy to follow. An experienced engineer would be able to reproduce the work with the given details.
- Authors support all the claims made in the paper with experiments on real world datasets or on toy problems. Sec 3.1 and Fig. 4.a were particularly interesting to understand what the authors were trying to convey.
- The effectiveness of the method was shown with the impressive experimental results.
Weaknesses: - While the problem tackled by the authors is relevant and important, the setup adopted by the authors is outdated. Generalized category discovery, as done in [1][2] is a more realistic setting and it is not clear why the current method is not suited for this setup or the authors advice against it? It is my strong suggestion to the authors to answer this question and compare with the relevant work (cited below) to justify this work among existing literature.
- The toy example in Sec. 3.1 is not fair for the following reason. In L86, the setup states that the classes in known and unknown sets share some similarities, but in the example authors choose {table, sofa, stool} as known and {chair, bench, bathtub, plant} as unknown objects. In this case, bathtub and plant do not share any commonalities with the known objects. It looks like the authors intentionally exaggerate the problem to make their point. While this is acceptable, it is not clear how much of a serious issue is the "overfitting to the known classes" problem.
- Authors propose to use a supervised contrastive loss to learn more parts from the known shapes. The motivation and explanation do not justify why this would be the case. In L204 the authors pool the features along the last dimension, which basically yields a "shape" feature as opposed to a "part" feature. It is not clear how the contrastive loss helps learn more part features when the loss is applied to the "shape" features.
- Table 4 demonstrates the performance improvement for each of the proposed components, but experiments showing that the regularizations on the part features actually operate as the authors claim are missing. What happens when the diversity loss is missing? Do all the part features in the bank collapse to fewer representations? This can be quantified by the cosine similarity between the part features. Similar analysis on the remaining two regularization terms is warranted.
[1] Sagar Vaze, Kai Han, Andrea Vedaldi, Andrew Zisserman, Generalized Category Discovery, CVPR 2022.
[2] Sai Saketh Rambhatla, Rama Chellappa, Abhinav Shrivastava, The pursuit of knowledge: Discovering and localizing novel categories using dual memory, ICCV 2021.
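The collapse diagnostic suggested in the weaknesses above could be implemented as the mean pairwise cosine similarity over the part-concept bank. The sketch below is hypothetical (random matrices stand in for learned part features, and the bank sizes are made up); a value near 1 would indicate collapse to few distinct directions:

```python
# Hypothetical collapse diagnostic (not from the paper): mean pairwise cosine
# similarity across a part-concept bank. Random matrices stand in for the
# learned part features here.
import numpy as np

def mean_pairwise_cosine(bank: np.ndarray) -> float:
    """bank: (num_parts, dim) matrix of part-concept vectors."""
    unit = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = unit @ unit.T
    off_diag = sims[~np.eye(len(bank), dtype=bool)]  # drop self-similarities
    return float(off_diag.mean())

rng = np.random.default_rng(0)
diverse = rng.normal(size=(64, 128))       # random directions: near-orthogonal
collapsed = rng.normal(size=(1, 128)) + 0.01 * rng.normal(size=(64, 128))

print(mean_pairwise_cosine(diverse), mean_pairwise_cosine(collapsed))
```

Reporting this statistic with and without each regularization term would directly test whether the diversity loss prevents collapse.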
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can Fig. 5b, 5c be combined into one plot? Two separate plots make it hard to understand what values of K, Q are being used for each of these.
- Legend for Fig. 6 is missing.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Authors have addressed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and helpful feedback! We sincerely appreciate your positive feedback that our work "has impressive improvements" and "is very easy to follow". Your suggestions provide valuable guidance for us to improve this study. We address your thoughts point by point below.
>Q1: Generalized category discovery is a more realistic setting and it's not clear why the method is not suited for this setup or the authors advice against it? It is my strong suggestion to the authors to answer this question and compare with the relevant work to justify this work among existing literature.
A1: We appreciate the reviewer raising this thoughtful point. Please see our response to this question in the General Response.
>Q2: The toy example in Sec. 3.1 is not fair for the following reason. In L86, the setup states that the classes in known and unknown sets share some similarities, but in the example authors choose table, sofa, stool as known and chair, bench, bathtub, plant as unknown objects. In this case, bathtub and plant do not share any commonalities with the known objects. It looks like the authors intentionally exaggerate the problem to make their point. While this is acceptable, it is not clear how much of a serious issue is the "overfitting to the known classes" problem.
A2: We are sorry for the confusion. Our intention for the toy experiment is to create an illustrative example that helps readers better understand the feature bias issue. We would like to clarify, however, that the feature bias issue illustrated in the toy example is widely present in real datasets rather than a contrived corner case. As evidenced in Supplementary Fig. III on the high-similarity ModelNet task, the same trend of decreasing novel accuracy can be observed, confirming that the problem persists in real-world data. Furthermore, Fig. 6 and Supplementary Fig. II visualize baseline features learned on ModelNet, showing high confusion between different novel classes due to a lack of discriminability.
>Q3: Author propose to use supervised contrastive loss to learn more parts from the known shapes. The motivation and explanation doesn’t justify why this would be the case. In L204 authors pool the features along the last dimension which basically is a "shape" feature as opposed to a "part" feature. It is not clear how the contrastive loss helps learn more part features when the loss is being applied on the "shape" features.
A3: We apologize for the unclear motivation behind the contrastive loss for discovering more part concepts. Basically, our assumption is that **the accumulated part activation map should be more similar for objects from the same class than objects from different classes.** Therefore encouraging the inter-class discrepancy of their part concept activation maps (i.e., the contrastive loss) can activate more part concepts so that the part activation maps between different classes can be pushed apart as much as possible. In our method, we use the shape concept map *T* (i.e., the "shape" feature) to represent the accumulated part concept activation maps of an object.
If we do not consider the contrastive loss, only **a small subset of part concepts** are needed to distinguish different known classes based on their shape concept maps as shown in Fig. 4(c) red box. For example, the maps only need to activate concepts like \[surface, table corner\] and \[surface, stool back\] for the recognition of table and stool, while other part concepts like \[table leg\] and \[stool base\] are ignored. Without the contrastive loss, the concept bank lacks the impetus to learn additional concepts beyond those minimally needed parts to distinguish known classes.
By pushing maps of different classes apart and enforcing the similarity of shape concept maps within each known class via contrastive loss, more part concepts get discovered to make the histograms more distinct and discriminative so contrastive loss is further minimized. For instance, concepts like \[table leg\] and \[stool base\] will be learned to make the shape maps clearly separable as shown in Fig. 4(c) green box.
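The intuition above corresponds to a standard supervised contrastive loss applied to the pooled shape concept maps. The sketch below is a generic NumPy illustration with hypothetical names and shapes (the paper's actual implementation may differ):

```python
import numpy as np

def supcon_loss(shape_maps: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss over per-object shape concept maps.

    shape_maps: (B, K) accumulated part-concept activations, one row per object.
    labels:     (B,)   known-class labels.
    Maps of the same class are pulled together; maps of different classes are
    pushed apart, which pressures the bank to activate more distinguishing parts.
    """
    z = shape_maps / np.linalg.norm(shape_maps, axis=1, keepdims=True)
    logits = z @ z.T / tau                      # (B, B) pairwise similarities
    B = len(z)
    pos = (labels[:, None] == labels[None, :]).astype(float) - np.eye(B)
    logits = logits - 1e9 * np.eye(B)           # mask out self-pairs
    # row-wise log-softmax (numerically stable)
    m = logits.max(axis=1, keepdims=True)
    log_prob = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    per_sample = -(pos * log_prob).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(per_sample.mean())
```

When the shape maps within each class are identical and orthogonal across classes, the loss is near zero; any within-class spread or cross-class overlap raises it, which is exactly the pressure that drives the bank to discover additional part concepts.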
>Q4: Table 4 demonstrates the performance improvement for each of the proposed components but experiments to demonstrate that show that the regularizations on the part features actually operate as the authors claim is missing. What happens when diversity loss is missing? This can be quantified by cosine similarity between the part features.
A4: We sincerely appreciate the valuable comments from the reviewer. Firstly, as shown in Tab.4-\[6,7,8\] and Supplementary Tab. I-\[3,4,5,6\], we detailed the effect of each loss term on the results. Secondly, following the reviewer’s suggestion, we added the following table to demonstrate the average cosine similarity between part concepts when using different losses:
| | w/o ( $L_{sd}$, $L_{sc}$, $L_{cd}$ ) | w/ $L_{sd}$ | w/ $L_{sc}$ | w/ $L_{cd}$ |
|:------------------|:-------:|:--------------------:|:--------------------:|:--------------------:|
| Cosine Similarity | 0.251 | 0.022 | 0.045 | -0.007 |
It can be seen that adding the diversity loss $L_{cd}$ leads to a significant decrease in the similarity between part concepts, indicating increased distinction between concepts and verifying the effectiveness of the diversity loss. Adding the remaining two regularization losses ( $L_{sd}, L_{sc}$ ) also decreases the similarities to varying degrees. These results further validate the positive effects of the various loss terms on model training. We sincerely appreciate the reviewer's suggestions and will include these additional analyses in our revised manuscript to more comprehensively demonstrate the efficacy of our method.
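The average pairwise cosine similarity used above can be computed in a few lines; this is a generic sketch (the `(K, D)` layout of the concept bank is our assumption, not the authors' code):

```python
import numpy as np

def mean_pairwise_cosine(concepts: np.ndarray) -> float:
    """Mean cosine similarity over all distinct pairs of part-concept vectors.

    concepts: (K, D) array, one row per learned part concept.
    """
    normed = concepts / np.linalg.norm(concepts, axis=1, keepdims=True)
    sims = normed @ normed.T                  # (K, K) cosine-similarity matrix
    K = len(concepts)
    # Drop the diagonal (self-similarity is always 1); off-diagonal entries
    # count each unordered pair twice, so divide by K*(K-1).
    return float((sims.sum() - np.trace(sims)) / (K * (K - 1)))
```

Mutually orthogonal concepts give a value of 0, collinear concepts a value of 1, so lower values indicate a more diverse concept bank.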
>Q5: Can Fig. 5b, c be combined? Legend for Fig. 6 is missing.
A5: We will fix the confusion in Fig. 5 and Fig. 6 in the revision.
---
Rebuttal 2:
Comment: The authors' response to my questions is satisfactory, but I have a major concern about the authors' general response. While the authors claim that their setting is indeed GCD, there is no mention of that in the main paper. The additional experiments provided in the authors' general response are sufficient, however. The only thing I am concerned about is the changes that need to be made to the paper (changing the formulation to GCD instead of NCD). This requires a lot of change, and I am curious to hear what the authors think about this. After looking at the authors' response, I am willing to improve my rating.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer eZuw,
We are glad that the responses address your concerns. We appreciate your positive feedback and constructive suggestions on the GCD setting.
As for the revision of the GCD setting, there are two choices.
For the first choice, we can keep the NCD setting and treat the GCD setting as a more generalized setting. The changes will be as follows:
1. Basically, the only difference between the NCD setting and the GCD setting is the inference stage, while the training stage is the same. We will add an additional subsection at the end of Sec. 3 to discuss the different inference modes of the GCD and the NCD and highlight that the GCD setting is more realistic and universal.
2. Related works about GCD will be added to Sec. 2.
3. Experiments in Tab. 1-3 will be updated by reporting results of both the NCD setting and the GCD setting.
In this way, the changes to the manuscript are minimized and most content remains intact. However, as GCD is a more generalized setting than NCD, devoting most space to the NCD setting may confuse the readers.
For the second choice, we can change all formulations to the GCD setting which is more practical and common than the NCD setting in real applications. The changes are summarized as follows.
1. As the method is fundamentally designed for the GCD setting, all problem analysis and descriptions about NCD can be easily changed to the GCD setting. The problem statement part will also be reframed for the GCD setting. In addition, Sec. 3.1 will be changed by using a GCD method for illustration, where our results suggest that the training curve of SimGCD is similar to the trend in Fig. 3 of UNO.
2. Related works about GCD will be added to Sec. 2.
3. Results in Tab. 1-3 will be updated by extending all compared NCD methods to the GCD setting and adding results of GCD and SimGCD. In fact, we have finished these experiments of comparison methods and even greater improvements are observed. As the results of Tab. 4 and Fig. 5 are already based on the GCD setting, we do not need to change them.
For this choice, we need to thoroughly change our description of the NCD setting. However, the benefit is that the addressed problem can be unified under the GCD setting without mentioning the old NCD setting. **In this way, the organization of the paper can be clearer and the addressed problem more practical and general. We prefer the second choice for its clarity.** | Summary: In this work, they address 3D novel class discovery (NCD), which discovers novel classes from an unlabeled dataset by leveraging the knowledge of disjoint known classes. The key challenge of 3D NCD is that features learned from known class recognition are heavily biased and hinder generalization to novel classes. Since geometric parts are more generalizable across different classes, the authors propose to decompose novel shapes into known parts, coined DNIK, to mitigate the above problems.
Strengths: 1. The paper is well written and the motivation (separating instances into repeatable parts) is pretty good.
2. The model design is reasonable and the improvement is satisfactory.
3. The experimental analysis is sufficient.
Weaknesses: 1. This paper does not consider hierarchical part representation.
2. Why does the Part Relation Encoder work for novel classes?
3. Does the improved representation work for some scene-level tasks, such as novel class segmentation for point clouds?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your attentive comments! We are glad you thought “the paper is well written and motivation is pretty good”. We address your feedback point by point below.
>Q1: This paper does not consider hierarchical part representation.
A1: We thank the reviewer for the insightful comment. The focus of this paper is leveraging part concepts to obtain generalized features for improved novel category discovery, so we did not consider hierarchical parts in the present work. We do agree that using multi-scale parts can further improve performance, as noted in the Limitations section and suggested by recent works exploring hierarchical point cloud parts\[1,2\] for 3D recognition. In future work, we will explore this idea further.
- \[1\] Wei et al., Multi-scale Geometry-aware Transformer for 3D Point Cloud Classification. 2023.
- \[2\] Yang et al., PointCAT: Cross-Attention Transformer for point cloud. 2023
>Q2: Why does Part Relation Encoder work for novel classes?
A2: Thank you for raising this important question. The PRE module is a point cloud transformer based on local patches, following prior works \[1-4\]. These studies show that self-attention can capture spatial relationships between patches effectively. We propose PRE to complement the global shape information that part concepts alone cannot precisely encode. For example, a table and a desk contain similar parts like a flat top and legs, but PRE may capture finer spatial relationships such as a table’s peripheral leg layout versus a desk’s set-back legs. So despite similar parts, PRE’s positional encodings help distinguish subtle structural differences across object classes. As shown in Table 4-9, PRE further enhances the performance (+1.6% on average).
- \[1\] Pang et al., Masked Autoencoders for Point Cloud Self-supervised Learning. ECCV 2022
- \[2\] Yu et al., Point-bert: Pre-training 3d point cloud transformers with masked point modeling. CVPR 2022
- \[3\] Zhang et al., Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training. NeurIPS 2022
- \[4\] Yang et al., PointCAT: Cross-Attention Transformer for point cloud. 2023
>Q3: Does the improved representation work for some scene-level tasks, such as novel class segmentation for point cloud?
A3: Novel class segmentation is an intriguing new direction, with very few explorations\[1\] so far even in 2D vision. As demonstrated by our good performance on 3D novel class discovery, learned shape parts and their compositions can bridge the gap between seen shapes and unseen shapes and generalize learned features to domains with semantic shifts. For semantic segmentation, this assumption still holds, and novel semantic shapes can also be seen as compositions of shared shape parts. Therefore, we believe integrating our approach with part-based segmentation is a promising path to novel class segmentation in future work. We appreciate this thoughtful suggestion that brings insights advancing our approach to novel tasks.
- \[1\] Zhao et al., Novel class discovery in semantic segmentation. CVPR 2022.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer dey2
Comment: Thanks for the response. The authors address my concerns regarding the Part Relation Encoder and the other two questions. Because of the interesting idea of this paper, I am willing to improve my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer dey2,
We are glad to receive your reply. Thank you for your positive feedback and for pointing out further applications of our method. | Summary: This paper tackles the problem of novel category discovery in the 3D shape recognition domain. A framework leveraging 3D parts and part-wise relations is proposed; the motivation is that learning parts from the known classes could help the model capture more transferrable features or concepts for the novel categories.
This motivation is validated using experiments, and overall the framework shows better performance than some baselines.
Strengths: 1. The idea of decomposing a category into parts is interesting.
2. I like the organization of this paper: it starts with an analysis of the problem of previous methods, and then proposes new ones based on that analysis.
3. The paper also explored a bit on the design choices for 3D novel category discovery, which could be helpful.
Weaknesses: 1. This paper still considers the novel category discovery problem while a more generalized setting exists, generalized category discovery [R1, R2]. I would suggest the paper include more discussion and experiments on this generalized setting.
2. It seems that the total number of categories in the datasets is quite small compared to 2D NCD. I am wondering if Objaverse [R3] can be used for this task?
[R1] Generalized Category Discovery, CVPR 2022
[R2] Parametric Classification for Generalized Category Discovery: A Baseline Study, arXiv.
[R3] https://objaverse.allenai.org/
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. In the v1 version of SimGCD Fig. 10 [R4], it is shown that the accuracy on known classes first increases and then drops while the novel class accuracy keeps improving. This contradicts the observation made in this paper. I am wondering if this is because of the setting (generalized category discovery vs. novel category discovery), the data (2D vs. 3D), or the number of categories (200 vs. 7)? Considering that this observation is the motivation for this paper, this question is my biggest concern.
[R4] https://arxiv.org/pdf/2211.11727v1.pdf
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: I think the main limitation is that the number of categories is small; thus, conclusions drawn from these small datasets may not generalize to larger-scale datasets.
Overall I think this paper is very clear and could be of interest to the community; however, the concerns I raise in the questions should be addressed first.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful observations on the generalized category discovery (GCD) problem. Your careful analysis has given us many new perspectives to consider! We sincerely appreciate that you thought "the idea \[...\] is interesting" and "like the organization of our paper". We address your thoughts point by point below.
>Q1: This paper still considers the novel category discovery problem while a more generalized setting exists, generalized category discovery \[R1, R2\], I would suggest the paper to include more discussion and experiment on this generalized setting.
A1: We appreciate the reviewer raising this thoughtful point. Please see our response to this question in the General Response.
>Q2: It seems that the total number of categories in the datasets is quite small compared to 2D NCD, I am wondering if Objaverse \[R3\] can be used for this task.
A2: We appreciate the reviewer’s suggestion to leverage larger-scale 3D datasets. The large-scale 3D datasets (MVImgNet and Objaverse) were not available when we wrote this paper. Due to time constraints, we only conducted **an MVImgNet50-100 task following SimGCD, as shown in the table below**. In this task, we selected 50 classes as known classes and another 100 classes as novel classes from the large-scale MVImgNet dataset. Despite the challenge of increasing the number of unknown classes, our method still outperforms the baselines by a large margin. We will add more experiments in future versions.
| | Novel | Known |
|:----------|:---------------:|:-------------:|
| AutoNovel | 12.8 | 47.8 |
| NCL | 11.3 | 53.9 |
| UNO | 10.7 | 44.3 |
| IIC | 12.5 | 58.2 |
| Ours | **43.9** (+31.1) | **64.0** (+5.8) |
>Q3: In the v1 version of SimGCD fig 10 \[R4\], it is shown that the accuracy on known classes first increases and then drops while the novel class accuracy keeps improving, this contradicts the observation made in this paper, I am wondering if this is because of the setting (generalized category discovery v.s. novel category discovery), the data (2D v.s. 3D), or the number of categories(200 v.s. 7)? Consider this observation is the motivation for this paper, this question will be the biggest concern of mine.
A3: We thank the reviewer for the insightful observation. Our analysis is as follows.
- Firstly, **the different setting of GCD and NCD is not the cause** as our method basically follows the GCD setting.
- Secondly, **3D data makes things more challenging** as the 3D pretrained model does not generalize well. For example, SimGCD[1] uses the powerful DINO model pre-trained on millions of images as the encoder and only fine-tunes a very small part of the network during training. This training strategy makes the extracted image features more generalizable, thus producing better results. In fact, previous work[2] suggests that DINO is very important to the generalization to unseen concepts. In contrast, the 3D field lacks both large-scale 3D shapes as well as high-quality pre-trained models. We also tried using some pre-trained models like OcCo[3] but did not achieve good results.
- Thirdly, we would like to clarify that **the inconsistent accuracy trend is not caused by the number of categories in the toy experiment.** Supplementary Fig. III shows the accuracy curves of various methods on ModelNet with more categories (40 classes), where we can observe the same decreasing novel accuracy curve as in the toy experiment.
We believe that **the main reason is still the feature bias problem** described in Sec. 3.1. As visualized in the t-SNE plots of Fig. 6 and Supplementary Fig. II, the feature distributions of different novel classes learned by baseline methods are highly overlapping and confused, indicating a lack of discriminability. Therefore the accuracy of pseudo-labels generated by clustering algorithms like Sinkhorn and K-means decreases as training proceeds, **as shown in the table below**. In Section 3 of SimGCD, the authors also concluded that "the key to previous parametric classifiers’ degraded performance is unreliable pseudo labels". In fact, we attempted to improve the quality of the pseudo labels with various methods, such as self-distillation and entropy regularization, but in the presence of feature bias we did not manage to raise them to good quality.
We would like to highlight that our method uses part concepts to obtain more generalizable features without the requirement of any large-scale pre-trained model like SimGCD. The results show this achieves better performance, presenting an interesting alternative direction for learning generically informative 3D shape representations. In the revision, we will add this discussion.
| Epoch | 75 | 125 | 175 | 225 |
|:---------|:----:|:----:|:----:|:----:|
| Kmeans | 58.1 | 52.6 | 49.3 | 47.2 |
| Sinkhorn | 50.9 | 48.6 | 47.8 | 45.1 |
- \[1\] Wen et al., A Simple Parametric Classification Baseline for Generalized Category Discovery. arXiv 2022
- \[2\] Sariyildiz et al., Concept Generalization in Visual Representation Learning. ICCV 2021
- \[3\] Wang et al., Unsupervised Point Cloud Pre-training via Occlusion Completion. CVPR 2021.
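The pseudo-label accuracies in the table above are of the kind typically obtained by clustering features and then matching clusters to ground-truth classes. A minimal, generic illustration of that evaluation step follows (all names are hypothetical; real implementations usually use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, instead of brute force):

```python
from itertools import permutations

import numpy as np

def cluster_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Accuracy of cluster assignments under the best cluster-to-class mapping.

    y_pred holds cluster indices from e.g. K-means or Sinkhorn pseudo-labelling;
    cluster IDs are arbitrary, so we search for the mapping that maximizes
    agreement. Brute force over permutations is fine for a handful of classes.
    """
    n = max(int(y_true.max()), int(y_pred.max())) + 1
    best = 0.0
    for perm in permutations(range(n)):
        remapped = np.array([perm[p] for p in y_pred])
        best = max(best, float((remapped == y_true).mean()))
    return best
```

Tracking this score over training epochs, as in the table, shows whether the clustering-based pseudo labels are degrading as the features become biased.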
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I would strongly recommend the paper change its title to generalized category discovery, or highlight this somewhere in the text, since several reviewers were confused about this.
My concerns about the number of categories still remain. Since the 3D datasets are all small in size, I would argue that the real-world use cases of 3D category discovery are limited, or at least not ready.
I would hope the paper gives more motivation on why 3D category discovery is a real problem that people would work on.
The discussion on my question 3 is not very convincing. If the cause of the mismatch between the SimGCD paper and this paper is feature bias (which should also exist in 2D GCD methods), why does the SimGCD paper have different observations?
To me, the real reason should be the 3D vs. 2D problem. As mentioned in the response, 3D models do not have strong pretrained representations; maybe the mismatch happens here, as SimGCD uses DINO as the pretrained model.
Are there any 3D pretrained models that can be used for this task? I think this kind of experiment could settle the question.
---
Reply to Comment 1.1.1:
Title: Replying to Comment
Comment: Dear Reviewer mbrM,
Thank you for your reply. We address your concerns below.
> Q4: Since the 3D datasets are all small in size, I would argue that the real-world use cases of 3D category discovery are limited, or at least is not ready. I would hope the paper gives more motivation on why 3D category discovery is a real problem that people would work on?
A4: The reviewer may notice that the number of classes in most datasets (CIFAR10/CIFAR100/ImageNet/Stanford Cars/FGVC-Aircraft) used in [1-2] is less than 200. In our response, the results on MVImgNet in **A2** are validated on about 100 classes, so the number of classes is comparable to the datasets used in the 2D GCD task. Moreover, you may also notice that several much larger 3D datasets, such as OmniObject3D, Objaverse, and MVImgNet, have been proposed this year.
As for the motivation, we do not see why the motivation of GCD and NCD should depend on the scale of datasets. As described in [1], we can name many cases where recognizing novel classes is necessary, which also applies to 3D objects. For instance, tools in workshops, products in stores, furniture in homes, and many other objects often have novel designs or categories that cannot all be predefined. In some cases, the number of known classes can be small, e.g., furniture in homes, and recognizing novel classes is still in great demand in this case. Essentially, the core motivation behind generalized category discovery is to develop flexible visual intelligence that is more like human beings. **A baby usually knows very few semantic concepts but she can learn new objects from unlabeled instances by associating them with known abstract concepts.** Therefore, learning generalizable 3D part concepts that can be transferred across categories remains crucial for handling novelty discoveries. Our work focuses on developing methods that can learn novel classes in this open-world scenario and we believe that pursuing this capability is an important direction in 3D recognition research with many potential applications.
> Q5: The cause of the mismatch between the SimGCD paper and this paper
A5: We agree that the difference between 2D data and 3D data is one of the main causes as analyzed in Q3. **In order to evaluate the impact of 3D pre-trained models**, we present results using two different self-supervised pre-trained backbones under the GCD setting in the table below. Following SimGCD, we only fine-tune the last three layers of the PointNet++ backbone and the classification heads.
The results demonstrate that utilizing more generalized features from pre-training improves the performance of the baseline methods. However, there remains a substantial gap compared to our method. This indicates that existing 3D pre-trained models have not yet learned representations as powerful as those captured by DINO. We will include these detailed ablations in the revision to provide a more thorough analysis.
| | Scratch | OcCo[1] | CrossPoint[2] |
|--------|---------|---------|---------------|
| UNO+ | 41.7 | 42.7 | 45.3 |
| SimGCD | 47.2 | 48.9 | 51.2 |
| Ours   | 70.6    | -       | -             |
Moreover, we would like to politely highlight that relying heavily on large-scale pretraining data could be a limitation of existing methods, while our method can still achieve good performance without any reliance on large-scale pre-trained models.
[1] Wang et al., Unsupervised Point Cloud Pre-training via Occlusion Completion. CVPR 2021.
[2] Afham et al. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. CVPR 2022.
---
Rebuttal Comment 1.2:
Title: Factual Error in the Rebuttal
Comment: Given the factual error in the author's rebuttal to reviewer eZuw, where the author claims that:
> Basically, the only difference between the NCD setting and the GCD setting is the inference stage, while the training stage is the same.
Which is not correct: the difference between the NCD setting and the GCD setting is not only at the inference stage, it is also at the training stage. In NCD, it is assumed that the unlabelled data only has novel categories, yet in GCD, the unlabelled data has both novel and seen categories. [a, b]
This factual error questions the fairness and correctness of the experiments in the rebuttal, and maybe some claims in the main paper.
I would hope there is an explanation, otherwise, I would lower my score.
[a] Generalized Category Discovery, CVPR 2022
[b] Parametric Classification for Generalized Category Discovery: A Baseline Study. arXiv 2022
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer,
Thank you for pointing this out. We would like to further clarify our experimental setup in the General Response and the main paper. In our experiments, we assume there exist a labeled dataset of known classes and an unlabeled dataset with shapes only from novel classes during the training phase. During testing, we assume the test dataset can contain shapes from both known classes and novel classes. Note that experiment results marked with 'GCD setting' in the General Response indeed follow our present setting. Therefore the major difference between our present setting and the GCD setting is the assumption of the unlabeled training set.
We sincerely apologize for the misunderstanding of the training process for the GCD setting, and we will **immediately update the results for the GCD setting in the generic response once we get the results.**
We agree that our current setting is less generalized than the full GCD formulation. However, the present setting also makes sense and can be used for novel class discovery.
More importantly, the main novelty of the paper lies in analyzing the shape-level feature bias problem and addressing it by leveraging more generalized part concept compositions for recognizing novel classes. Therefore, the limitations of our setting do not detract from the primary contributions, which focus on developing part-based representations for better generalization.
**We highly appreciate your feedback to help improve our paper. We will revise the overall setting to the GCD form to avoid confusion.** | Summary: This work presents a framework, called Decompose Novel Into Known parts (DNIK), that addresses the challenge of 3D Novel Class Discovery (NCD) – identifying new classes from an unlabeled dataset using the knowledge of known classes. Current methods, heavily biased towards known classes, struggle to generalize to novel classes. By leveraging more generalizable geometric parts across different classes, DNIK mitigates this issue. It constructs a part concept bank encoding rich geometric patterns from known classes, which is used to represent novel 3D shapes as part concept compositions, thus facilitating cross-category generalization. DNIK also leverages part-wise spatial relations for improved recognition. The method has been tested through three 3D NCD tasks, consistently outperforming state-of-the-art baselines.
---- after rebuttal ----
As the authors' rebuttal resolved some of my concerns, I raised my score to 5. However, I still feel the studied task is a bit simple, and there is room to improve the current manuscript. I will not fight for its acceptance.
Strengths: The studied direction is important as we need to understand parts well to play with 3D objects generalizable. This paper takes a step towards open 3D object recognition via part understanding. Overall, the components used in the proposed framework are sound and reasonable. The paper is easy to follow.
Extensive results shown in Table 1 & 2 demonstrate the strength of the proposed method. The proposed DNIK generally achieved state-of-the-art performance. Some detailed ablation studies are included in Table 4. The cross-domain task is interesting to see the transfer performance.
Weaknesses: That parts are sharable across different 3D object categories has been studied in previous literature [1,2]. Those papers exploited "harder" tasks, such as segmentation. Since the proposed framework can address novel class classification via known part concepts, can the framework be extended to ground where those known parts are in a novel object? Or to other applications beyond object recognition?
[1] Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories
[2] 3D Compositional Zero-Shot Learning with DeCompositional Consensus
How does the framework handle two different object categories with limited shared parts, such as airplane and chair? Will training the framework on multiple object categories benefit novel object discovery? It would be good to include some failure cases to analyze, to give readers a sense of the limitations of the proposed framework.
L46~47 said the framework can help use part relation features. Can the framework be extended to discover part relationships?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the concerns raised above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Lines 319~320 analyze one minor limitation. I feel there are potentially more:
1. If the parts are not sharable between different categories, such as lamp -> chair or table -> faucet, can the framework still handle them?
2. The only application shown is recognition, which limits the practical use of the proposed framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your attentive comments! We are glad you thought “the studied direction is important” and “the components \[...\] are sound and reasonable.” We address your feedback point by point below.
>Q1: Previous literature exploited a "harder" task, such as segmentation. Can the framework be extended to ground where those known parts are in a novel object?
A1: We appreciate you raising the possibility of extending our framework, but find the specific comparisons to supervised techniques [1,2] highly inapt and misleading.
- First, these works fundamentally address different problems from novel class discovery.
- Second, despite their similarities in using shape parts, technical details are fundamentally different, and two related works [1,2] cannot be directly used for the 3D NCD task.
For example, the part segmentation produced by [1] has no semantic information, making it less useful for knowledge transfer.
The goal of [2] is compositional zero-shot learning, where the part combinations of novel classes are predefined.
- Thirdly, we would like to clarify that our method actually faces a harder challenge compared to the cited literature [1,2].
Those works rely on ground truth part annotations (PartNet), while **we learn parts in a completely unsupervised way without any part labels**, which also makes our approach more widely applicable.
We believe the method can be extended to part discovery.
We provide extensive part visualization results of both seen shapes and unseen shapes in Supplementary Fig.I.
Note that our method can roughly locate shape parts in unseen shapes learned from seen shapes without part annotation.
Therefore our method can be extended to ground the known parts in novel shapes.
We encourage the reviewer to see more detail there.
In the revision, we will add some discussions about this comment.
- \[1\] Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories
- \[2\] 3D Compositional Zero-Shot Learning with DeCompositional Consensus
>Q2: How does the framework handle two different object categories with limited shared parts, such as airplane and chair? If the parts are not sharable between different categories, such as lamp -> chair or table -> faucet, can the framework still handle them?
A2: We thank the reviewer for raising this important case.
- Firstly, as stated in Line 165, the part concept activations are distinct for dissimilar objects like airplanes and chairs, providing discriminative information to distinguish the novel class. We also concatenate the original part features so that novel parts dissimilar to existing concepts can be properly handled.
- Secondly, as shown in Table 2’s low similarity tasks, our method achieves substantial gains over baselines (17.5% on average) when known and novel classes have limited similarities. This validates that the generalized part concepts are also beneficial for knowledge transfer across categories with few shared parts.
>Q3: Will the framework train on multiple object categories benefit novel object discovery?
A3: Yes, training on more object categories can certainly benefit novel object discovery. As shown in Table 1, increasing the number of known classes leads to improved performance. This is because with multiple categories we can learn more diverse part concepts, which in turn help us recognize novel objects better. **In the table below, we evaluate performance when training the method with an increasing number of known categories and a fixed number of novel categories.** The results show that training the method with more categories does enhance the recognition of novel shapes.
| ModelNet | 5-10 | 10-10 | 20-10 | 30-10 |
|:----------|:----:|:-----:|:-----:|:-----:|
| Novel Acc | 57.9 | 61.3 | 64.0 | 66.2 |
>Q4: It would be good to include some failure cases to analyze and provide readers a sense for the limitation of the proposed framework.
A4: Thank you for emphasizing the need for failure case analysis. Our model does have some challenging cases in handling nearly identical parts, such as chairs and benches containing similar seating surfaces, legs and arms, and tables and desks sharing analogous tabletops and supporting legs. These situations test the model’s ability to discriminate. We will add those failure cases in the revision. We greatly appreciate your valuable guidance on analyzing and presenting the model’s limitations concisely.
>Q5: Lines 46-47 said the framework can help use part relation features. Can the framework be extended to discover part relationships?
A5: The PRE module is a point cloud transformer based on local patches, following prior works like \[1\-4\]. These studies have shown that self-attention can capture spatial relationships between patches effectively. We propose PRE only to complement global shape information that part concepts alone cannot precisely encode and benefit the NCD task. Note that part relations are implicitly encoded and we don’t see how the framework can be extended to discover explicit part relationships. We agree that explicitly modeling part relationships is an exciting direction for future work.
- \[1\] Pang et al., Masked Autoencoders for Point Cloud Self-supervised Learning. ECCV 2022
- \[2\] Yu et al., Point-bert: Pre-training 3d point cloud transformers with masked point modeling. CVPR 2022
- \[3\] Zhang et al., Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training. NeurIPS 2022
- \[4\] Yang et al., PointCAT: Cross-Attention Transformer for point cloud. 2023
>Q6: If the parts are not sharable between different categories, such as lamp -> chair or table -> faucet, can the framework still handle them?
A6: Thank you for raising this case. Please see Q2 for the response.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: I appreciate the efforts made by the authors. I believe the rebuttal addressed my major concerns. I hope the authors can incorporate our discussion into the revision, especially the failure cases.
Besides, I have another question about "our method achieves substantial gains over baselines (17.5% on average) when known and novel classes have limited similarities". If the known and novel classes have limited similarities, how could the method learn transferable knowledge? Could the authors provide some qualitative and quantitative results?
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for the constructive suggestions and we will incorporate the discussions into the revision. We respond to the new question below.
> Q7: If the known and novel classes have limited similarities, how could the method learn transferable knowledge?
A7:
As visualized in Supplementary Fig. I, part concepts can learn some basic primitives as generic LEGO blocks (e.g., planes, arcs, and cubes). These part concepts are widely present in different shapes and are easier to generalize than overall shape-level features. For example, guitars and airplanes both have cylindrical shapes (the neck of a guitar and the fuselage of an airplane).
In fact, we can approximate different shapes as a composition of simple primitives as done in 3D shape abstraction.
Thus, the network can recognize the low-similarity novel shapes based on different activations of these generic part concepts. As shown in Table I-7 of the Supplementary Material, using part concepts learned from known classes can improve the performance by 25.4\% compared to the baseline (Table 4-1 37.9\%) for the low-similarity task.
Furthermore, we also concatenate the original local part features to preserve the distinctive part information of the dissimilar shapes as shown in line#167, which further improves the performance by 1.8\% as shown in Table I-8 of the Supplementary Material. | Rebuttal 1:
Rebuttal: # **General Response**
Dear reviewers and AC,
We sincerely appreciate your valuable time and efforts spent reviewing our manuscript. We are grateful that reviewers found "the proposed method outperforms all baselines consistently and significantly on all metrics" (Reviewer zZT9), "the studied direction is important" (Reviewer qzzG), "the idea \[...\] is interesting" (Reviewer mbrM), "motivation is pretty good" (Reviewer dey2), and "impressive improvements over single holistic representations" (Reviewer eZuw). We also thank the reviewers for their appraisal of the paper’s organization and clarity. We respond to each reviewer in a separate response window and address the common concern below.
**Several reviewers would like to see more analysis about the setting of our method and that of the generalized category discovery (GCD) task. In fact, our method follows the setting of the GCD task** and is fundamentally designed to address the more challenging GCD task, where test samples include both known and novel classes. As described in line#221, during inference, test samples from both known and novel classes are passed to both classifiers $\phi^l$ and $\phi^u$ to obtain the corresponding logits; the two logits are then concatenated and fed into a softmax layer to obtain the class-wise probability over all categories. Thus **all the results of our method reported in the paper are based on the GCD setting**. The compared methods in the paper follow the NCD setting, where known and novel classes have separate classification heads when computing accuracy. This setting benefits these methods by avoiding confusion between known and novel classes. Therefore, results under the NCD setting are better than those under the GCD setting for the compared methods, as shown in the table below. We also show the results of our method under the NCD setting for reference.
We do not deliberately distinguish these two settings (NCD and GCD) in the paper because we focus on solving the key feature bias problem present in both NCD and GCD. However, we agree with the reviewers that specifically reporting GCD results can better showcase our method’s capabilities. To that end, we conducted additional GCD experiments on the high-similarity task of the ModelNet dataset and compared our method with the GCD methods AutoNovel+, UNO+, GCD\[1\], and SimGCD\[2\]. We extend AutoNovel and UNO for GCD tasks (denoted as AutoNovel+ and UNO+) by concatenating logits of known classes and unknown classes following \[2\]. **The results are shown in the table below.** Our method substantially outperforms the extensions of 2D GCD methods in novel class recognition and also achieves comparable results on known classes. Note that since the pre-trained DINO model used in GCD and SimGCD is not available in 3D, these two methods are trained in an end-to-end way.
| NCD settings | Novel | Known |
|:------------------|:---------------:|:--------------:|
| AutoNovel | 39.2 | 94.8 |
| NCL | 56.6 | **95.2** |
| UNO | 49.8 | 95.1 |
| IIC | 59.6 | 95.0 |
| **Ours** | **74.8** (+15.2) | 95.0 (-0.2) |
| GCD settings | Novel | Known |
|:------------------|:---------------:|:--------------:|
| AutoNovel+ | 34.3 | 93.4 |
| UNO+ | 42.5 | 86.3 |
| GCD \[1\] | 57.7 | 92.6 |
| SimGCD \[2\] | 46.1 | 94.2 |
| **Ours** | **73.2** (+15.5) | **95.2** (+1.0) |
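For clarity, the GCD-style inference described above (concatenating the logits of the known-class and novel-class heads before a single softmax, so known and novel classes compete directly) can be sketched as follows; the function and variable names are illustrative, not from our code:

```python
import numpy as np

def gcd_predict(logits_known, logits_novel):
    """Concatenate per-head logits and take a single softmax over ALL
    classes; a predicted index < n_known corresponds to a known class."""
    logits = np.concatenate([logits_known, logits_novel], axis=-1)
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1)
```

Under the NCD setting, by contrast, each sample would be routed to only one head, so known and novel classes never compete in the same softmax.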
In the revision, we will add a separate section dedicated to GCD
experiments to thoroughly illustrate the strengths of our method under
this important and widely applicable setting. We hope our responses
address the reviewers’ concerns.
Thank you very much.
Best regards,
Authors.
- \[1\] Vaze et al., Generalized Category Discovery. CVPR 2022
- \[2\] Wen et al., A Simple Parametric Classification Baseline for Generalized Category Discovery. arXiv 2022
Strengths: (1) The proposed part concept bank and part relation encoding module can effectively bridge the gaps between known and novel shapes and mitigate feature bias.
(2) The experiments show that the proposed method outperforms all baselines consistently and significantly on all metrics.
(3) The paper is generally well-written and easy to follow.
Weaknesses: (1) The unseen class number is assumed to be known, which makes the method less practical.
(2) The effectiveness of the proposed PRE module is not well demonstrated. The performance is not shown by using only the part position feature from the PRE. Therefore, it is not clear about the individual role of PCB and PRE. It would be good to at least ablate the effectiveness that only uses PRE in Table 4.
(3) The study of NCD has been extended to consider the case where the unlabelled data contains objects from seen and unseen classes [A]. It is more convincing to also show results under this more general and practical case.
[A] Vaze et al, Generalized Category Discovery, CVPR 2022
(4) Each part in Part Set Q has the same number of points, that is, K neighbors, which may be dataset dependent and affected by the scale of the objects, while a fixed value of K=64 is selected in the paper. This is unlikely to generalize well to other datasets and objects of different scales. It would be good to show how the method works on instances from the same categories but of different scales. More investigation on this is expected.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: (1) How to ensure the features from PRE include the position relationship of each part? The feature extraction by PRE seems like a process through a black box.
(2) How are the Nq parts like in the initial point cloud? How different/similar are they? The initialization may also heavily affect the results. e.g., if the initial parts are too similar, they are unlikely to be well separated in the end. However, in the beginning, we have little (or no) control over this.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has described the potential limitation of multi-scale objects, not mentioning much about the societal impact, but I did not see any major problem here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and useful suggestions! We are glad you think the proposed method "can bridge the gaps between known and novel shapes effectively" and "outperforms baselines consistently". We address your thoughts point by point below.
> Q1: The unseen class number is assumed to be known, which makes the method less practical.
A1: Thank you. We fixed the number of unseen classes following the baseline methods. However, the number of novel classes can be estimated by following the strategy in AutoNovel and GCD, where semi-supervised k-means and a validation set can be used to select the optimal number of unseen classes. **We estimate the number of clusters using the above method on the ModelNet similarity task as shown in the table below.** The results show the estimation method may slightly underestimate the novel class number when similarities between novel classes and known classes are high. This is because some highly related classes are clustered together. Meanwhile, it slightly overestimates the number of novel classes when similarities decrease. We will incorporate discussions of this finding in the revised manuscript to enrich the analysis.
| | High | Medium | Low |
|:-------------|:----:|:------:|:---:|
| Ground Truth | 26 | 27 | 27 |
| Ours | 24 | 30 | 31 |
> Q2: How to ensure the features from PRE include the position relationship of each part? The feature extraction by PRE seems like a process through a black box. The performance is not shown by using only the part position feature from the PRE. Therefore, it is not clear about the individual role of PCB and PRE.
A2: Thank you. We employ the standard Transformer encoder \[1\] for the PRE module (see model/PDG\*.py Line#188 in the supplementary code for reference). Similar schemes to encode relations of local patches can be found in prior works \[2-4\], where the Transformer encoder can capture spatial relationships between patches effectively. In the revision, we will add an illustration in the supplementary for clarity.
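As a minimal illustration of how self-attention relates parts to one another, a single scaled dot-product attention step can be written in numpy as below. This is a simplified stand-in, not the PRE module itself, which uses learned Q/K/V projections, multiple heads, and stacked layers:

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention over part tokens (rows of X): each
    part aggregates the other parts in proportion to their affinity."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                 # part-to-part affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ X                               # relation-aware features
```

Each output row is a convex combination of the input part features, which is how pairwise (spatial) relations between parts enter the representation.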
**Following the reviewer’s suggestion, we evaluate the performance of only using the PRE module as shown in the table below.** The results show that using PRE alone can already achieve considerable performance gains (4.75% on average).
| | High | Low |
|:---------------|:--------:|:--------:|
| Baseline | 48.8 | 37.8 |
| Baseline + PRE | 52.4 | 43.7 |
| Ours | **73.2** | **66.4** |
Also, combining the PRE module with other modules can enhance the performance ( + 1.6% on average) as shown in Table 4-9. We will add the result to Table 4 in the revision.
- \[1\] Vaswani et al. Attention is all you need. NeurIPS 2017.
- \[2\] Pang et al., Masked Autoencoders for Point Cloud Self-supervised Learning. ECCV 2022
- \[3\] Yu et al., Point-bert: Pre-training 3d point cloud transformers with masked point modeling. CVPR 2022
- \[4\] Zhang et al., Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training. NeurIPS 2022
> Q3: The study of NCD has been extended to generalized category discovery. It is more convincing to also show results under this more general and practical case.
A3: We appreciate the reviewer raising this thoughtful point. Please see our response to this question in the General Response.
> Q4: Each part in Part Set Q has the same number of points, which may be dataset dependent and affected by the scale. This is unlikely to generalize well to other datasets and objects of different scales. It would be good to show how the method works on instances from the same categories but of different scales.
A4: We thank the reviewer for raising this important issue regarding scale invariance. To address it, we would like to highlight three aspects.
- Firstly, our method is invariant to different initial scales, because all samples are normalized to a canonical scale in our pipeline as a preliminary step.
- Secondly, **the results in the table below** show that our method is able to maintain comparable performance even for unseen point densities. In this experiment, we evaluated the robustness of our approach by testing the trained model on testing shapes with varying point densities using the ModelNet-C dataset. We believe the performance can be further improved once we train the method with shapes of diverse densities.
- Thirdly, our method can even work well for more challenging cross-domain cases as shown in Tab. 3. Note that multi-view scans from Co3D and ScanObjectNN present more challenging variances, such as point noises, and occlusions. Our method still achieves consistent improvements (more than +9.5%) in these cross-domain tests.
| Density(point number) | 700 | 800 | 900 | 1024 |
|:----------------------|:----:|:----:|:----:|:----:|
| Novel Acc | 69.5 | 72.9 | 72.6 | 73.2 |
> Q5: How are the Nq parts like in the initial point cloud? The initialization may also heavily affect the results. e.g., if the initial parts are too similar, they are unlikely to be well separated in the end. However, in the beginning, we have little control over this.
A5: Thank you for raising this important point. Our model is robust to part initialization. The key is that farthest point sampling (FPS) ensures the initial parts are as far apart as possible, so the sampled parts cover different local geometric structures across the whole shape and are diverse enough. Moreover, since the sampled parts for the same shape differ across epochs during training, the model learns to identify parts with diverse structures. We evaluated the variance over five random runs on the high-similarity task on ModelNet. The variance is 0.0228, while our improvement is 13.6. Therefore, different initializations do not have a big impact on the results. We will add an illustration of the sampled parts and report variances for the results in the supplementary.
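For reference, farthest point sampling can be sketched in a few lines of numpy (an illustrative implementation; the function name and arguments are ours):

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedily pick n_samples indices so that each new pick is as far
    as possible from every point chosen so far."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]  # random first seed point
    # distance from every point to its nearest chosen point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(n_samples - 1):
        nxt = int(np.argmax(dist))   # farthest from all chosen so far
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)
```

Because the first seed is random and the greedy picks depend on it, the sampled parts differ across runs and epochs, which is the diversity property invoked above.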
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. I have also read other reviewers' comments. I am convinced with the responses to Q1-Q3.
The proposed method alone can not handle the unknown class number case. Hence, this remains a limitation, but it is okay to apply others' methods for practical use. For Q4 and Q5, it will be very helpful to see the visualization of how the learned parts change over time. Considering all factors, I would like to keep the original rating. | null | null | null | null | null | null |
Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks | Accept (poster) | Summary: The paper proposes a method to learn low-dimensional continuous-time representations of network nodes, based on the collection of interaction events among them. More precisely, the events are in the form of $(i,j,t)$, where $(i,j)$ is the pair of nodes involved in the interaction event, and $t$ is the occurrence time. The proposed method first estimate the intensity function $\lambda_{i,j}(t)$ of events between each pair of nodes $(i,j)$ at every time instant $t$, then project the intensities of each node at time $t$ onto a learned lower dimensional subspace to obtain a representation. Theoretical results on the recovery error of the representation is provided. Numerical experiments using real data shows the effectiveness of the proposed method.
Strengths: The paper proposes to estimate the representation of nodes using continuous-time events, which seems to be a novel type of data.
Weaknesses: I find the presentation of the paper generally vague and hand-wavy. See the following.
1. The introduction is way too high-level. The authors should be more specific about the problem setting in this paper, for example, why we care about dynamic models, continuous-time event data, low-dimensional representation of nodes etc.
2. The related work is not specific. The authors should use a sentence to summarize the contribution of the mentioned papers and explain the difference from your work.
3. Lemma 1 is not correct. $\widehat U_d$ minimizes the residual sum of squares at $B$ chosen time instants, but not the integrated one.
4. In Section 3, notation part, what is the difference between $\gg$ and $\gtrsim$? Also is the universal constant multiplicative or additive?
5. It's not clear what `$\approx$' means in Section 4.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: .
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> The introduction is way too high-level. The authors should be more specific about the problem setting in this paper, for example, why we care about dynamic models, continuous-time event data, low-dimensional representation of nodes etc.
See global response.
> The related work is not specific. The authors should use a sentence to summarize the contribution of the mentioned papers and explain the difference from your work.
See global response.
> Lemma 1 is not correct. $\hat U_d$ minimizes the residual sum of squares at $B$ chosen time instants, but not the integrated one.
Lemma 1 is correct as stated. Please see the proof in Section D of the supplementary materials.
> In Section 3, notation part, what is the difference between ≫ and ≳? Also is the universal constant multiplicative or additive?
Thanks for pointing this out, we will make this explicit in the revision. The symbols $\gg$ and $\gtrsim$ denote the inequalities $>$ and $\geq$ that hide multiplicative constants.
> It's not clear what '≈' means in Section 4.
The "$\approx$ " symbol means "approximately equal to" and is deliberately left informal in the descriptions of the "Structure preserving" and "Temporally coherent" properties. In the paragraph before Lemma 2, $\hat X_i(s) \approx \hat X_j(t)$ is formally defined to mean that $X_i(s) = X_j(t)$, so that the lemma is a mathematically rigorous statement. | Summary: The paper presents a framework called Intensity Profile Projection (IPP) for continuous-time representation learning in dynamic networks. The authors aim to address the challenge of capturing temporal dynamics and evolving relationships in dynamic networks with both high statistical precision and interpretability. The model leverages the concept of intensity profiles, which encode the temporal changes and interactions between nodes in a network. The model provides a uniform error bound for learned node representations and preserves a novel "temporal coherence" property compared to existing baselines. Empirical results on real-world dynamic network datasets demonstrate that IPP outperforms existing methods in various tasks such, highlighting its ability to capture continuous-time representations and uncover temporal patterns in dynamic networks.
Strengths: 1. The paper introduces the Intensity Profile Projection (IPP) framework, which offers a unique and innovative approach to continuous-time representation learning for dynamic networks. It introduces the concept of intensity profiles and effectively utilizes them to capture temporal dynamics.
2. Theoretical analysis towards the model shows that the model can achieve high statistical precision and preserve interpretability in terms of ""temporal coherence".
3. The paper is in general easy to follow.
Weaknesses: 1. Lack of comparison with state-of-the-art methods: Although the paper claims improved performance over existing methods, it does not provide a comprehensive comparison with some existing continuous models such as GraphODEs[1,2,3,4] which combines neuralODE with GNNs to model network evolution over time.
2. Scalability: The scalability of the IPP framework is not extensively discussed. It would be valuable to address the computational requirements and scalability limitations of the proposed approach, especially when dealing with large-scale dynamic networks.
3. The related work section is too short to provide a comprehensive background of the research topic.
[1] Huang, Zijie, Yizhou Sun, and Wei Wang. "Learning continuous system dynamics from irregularly-sampled partial observations." Advances in Neural Information Processing Systems 33 (2020): 16177-16187.
[2] Song Wen, Hao Wang, and Dimitris Metaxas. 2022. Social ODE: Multi-agent Trajectory Forecasting with Neural Ordinary Differential Equations. In Computer Vision–ECCV 2022: 17th European Conference.
[3]Zijie Huang, Yizhou Sun, and Wei Wang. Coupled graph ode for learning interacting system dynamics. In
401 ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 705–715, 2021.
[4] Zang, Chengxi, and Fei Wang. "Neural dynamics on complex networks." In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 892-902. 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What would be the time complexity of the proposed method?
2. How would the model performance be affected by different network topology?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> Lack of comparison with state-of-the-art methods: Although the paper claims improved performance over existing methods, it does not provide a comprehensive comparison with some existing continuous models such as GraphODEs[1,2,3,4] which combines neuralODE with GNNs to model network evolution over time.
We thank the reviewer for pointing us to GraphODEs. From what we understand, these methods work on multivariate time series on the nodes of a graph which captures the influence of agent $i$ on the dynamics of agent $j$. Our method works on point processes observed between pairs of nodes. Therefore, these methods are not directly comparable. We are only aware of one existing method (CLPM in our method comparison) which does continuous-time representation learning with the type of data we consider. This is the reason we additionally compare to a collection of discrete-time methods.
> Scalability: The scalability of the IPP framework is not extensively discussed. It would be valuable to address the computational requirements and scalability limitations of the proposed approach, especially when dealing with large-scale dynamic networks. / What would be the time complexity of the proposed method?
This is a great question which we will address in Section 2.2 after discussing the numerical approximation. One of the advantages of the IPP framework is that it is highly scalable. If the network is sparse, the method is scalable to networks with potentially tens of millions of nodes.
Suppose we use a kernel with finite support of width $2h$, then the expected time complexity of the KDE for a single edge at a single point in time is $O(h \lambda_{\max})$. Evaluation of $\hat{\mathbf{\Lambda}}(t)$ is $O(n^2 h \lambda_{\max})$ and of $\hat{\mathbf{\Lambda}}$ is $O(Bn^2 h \lambda_{\max})$. Consider the special case discussed in Section 3.1 that $\lambda_{ij} \asymp \rho$ for all $i,j$ and $L \asymp \rho L_0$. Assuming $\mu, d$, and $L_0$ are fixed, our theory suggests choosing $h \asymp (n\rho)^{-1/3}$, and if we suppose $n \rho \asymp \log^3 n$, the sparsest regime our theory allows, then evaluation of $\hat{\mathbf{\Lambda}}$ is $O(B n \log^2 n)$, i.e. log-linear in $n$.
In practice the top singular vectors of $\hat{\mathbf{\Lambda}}$ can be computed using the Augmented Implicitly Restarted Lanczos Bidiagonalization algorithm implemented in the `irlba` package in R, or the `irlbpy` package in Python. The time complexity of this algorithm has not been studied theoretically, although in practice it can be incredibly fast. For example, the package author performs a simulated experiment in which they compute the first 2 singular vectors of a sparse 10M x 10M matrix with 1M non-zero entries, which takes approximately 6 seconds on a computer with two Intel Xeon CPU E5-2650 processors (16 physical CPU cores) running at 2 GHz equipped with 128 GB of ECC DDR3 RAM (see `https://bwlewis.github.io/irlba/comparison.html`).
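For illustration, the same truncated SVD can be reproduced with `scipy.sparse.linalg.svds`, a widely available Lanczos-type routine that we use here only as a stand-in for `irlba`/`irlbpy`; the matrix size and density below are arbitrary:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A sparse 1000 x 1000 matrix standing in for the (much larger) stacked
# intensity estimate; only the top-d singular triplets are computed, so
# the full dense SVD is never formed.
A = sparse_random(1000, 1000, density=0.001, format="csr", random_state=0)
U, s, Vt = svds(A, k=2)  # top-2 singular triplets
```

The cost is dominated by sparse matrix-vector products, which is why the approach scales to very large, sparse networks.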
> The related work section is too short to provide a comprehensive background of the research topic.
See global response.
> How would the model performance be affected by different network topology?
Our method is able to capture both homophilic and heterophilic network topologies. Fundamentally, the quality of the learned representations depends on how well the intensity profiles are approximated on a d-dimensional subspace.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Thanks for the authors' response; my concern about the scalability issue has been addressed. One question I still have is why GraphODE methods cannot be directly compared, as they are also continuous-time network models, as mentioned in your general response. Illustrating the performance comparison among continuous dynamic network models would be beneficial to show how well your model performs.
---
Reply to Comment 1.1.1:
Comment: To clarify, our method takes as input a list of triples $\{ (i,j,t) \}$, representing events which occur between pairs of nodes, while GraphODE methods take as input either complete or partial observations of the state of each node (a vector-valued trajectory) at either regularly or irregularly spaced time intervals, and either a static graph or a series of graphs which encode the dependencies in the ODEs which drive the evolution of the node states. We are happy to clarify this in the revision, but since GraphODE methods take as input a different type of data to that which is considered in our paper, we aren't sure what meaningful numerical comparison could be made. Does the reviewer have something specific in mind? | Summary: To represent continuous dynamic networks, the authors provide a framework based on the intensity profile. First, the intensity between nodes is estimated, which produces the intensity profile. Dimension reduction via SVD is applied to the intensities, and each node embedding is then obtained from the low-dimensional subspace.
The authors also provide various theoretical analyses of the error bound and the bias-variance trade-off. Theoretical analysis as well as empirical analysis on simulated data demonstrates that the proposed method satisfies structure-preserving and temporally coherent properties. A case study on real data is conducted to explain the outcome of the proposed framework qualitatively.
Strengths: - A simple but powerful method is proposed
- Based on the mathematical model, a theoretical bound is analyzed and explained.
- IPP can capture the behavior of a bifurcating block model.
Weaknesses: - The proposed method is not novel enough. SVD decomposition is a very common technique for the reduction of dimensions, and it often suffers from the long-tailed singular values.
- Comparison is too limited. The analysis has been made only for the simulated data with figures. More experiments as well as some qualitative results would be great to have.
- SVD decomposition does not prevent producing negative values at the reconstruction.
- The proposed projection space is very dependent on the fixed dataset. At least, how to leverage the given embeddings for predictions is not straightforward. Given this, the potential application value is not very clear.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Figure numbers are all wrong.
- Section 4 is true for any global subspace projection. Also, both properties could be debatable, not necessarily ideal. For instance, when $\Lambda_{i}(s) = \Lambda_{i}(t)$, $X_{i}(s) = \alpha * X_{i}(t)$ could be more ideal, depending on the interactions among the other nodes.
- It would be great if authors compare the embedding trajectory for more real data, beyond the specific simulated ones.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Often, the meaning of each dimension from the SVD decomposition is not clear. This interpretability is not necessarily required for the representation, but this should be addressed when presenting the case study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The proposed method is not novel enough. SVD decomposition is a very common technique for the reduction of dimensions,
This is unreasonable. Dismissing our algorithm as "not novel" because it contains an SVD is like dismissing an optimisation algorithm because it uses SGD. The novelty of the algorithm lies in the whole framework we propose for learning representations, which employs the SVD as the appropriate tool for optimising the $\hat R^2$ objective function discussed in Section 2 and Lemma 1. In particular, we point out the careful construction of the matrix $\hat{\mathbf{\Sigma}}$ from the data, which is entirely non-trivial.
The novelty of the method is exemplified in the following points:
- To our knowledge, IPP is the first provably consistent representation learning algorithm for continuous time dynamic networks in the literature. The uniform error bound in Theorem 1 is the first of its kind for data of this kind. We note additionally that our algorithm is non-parametric, and all existing methods are either model-based (e.g. fit a latent position model) and lack statistical estimation theory, or are entirely heuristic.
- IPP is the first representation learning algorithm for continuous time dynamic networks which satisfies the desirable properties of "structure preservation" and "temporal coherence" (defined in Section 4), which are necessary to make even the most basic temporal inferences about node behaviours (e.g. "do nodes $i$ and $j$ behave in the same way at time $t$?", and "does node $i$ change its behaviour between times $t$ and $s$?")
> It [the SVD] often suffers from the long-tailed singular values.
This is a good point. In much of the data that we have experimented with (e.g. the school children data in Section 5), the eigenvalues of $\hat{\mathbf{\Sigma}}$ decay quickly; however, the reviewer correctly observes that in some real-world data they might not. In our theory, this corresponds to Assumptions 2 or 3 being violated.
In the revision, we will add discussion about how this problem can be identified in practice (e.g. by plotting the singular values) and some possible solutions:
- taking a transformation of the intensity profiles (such as the square root) to temper heavy tails [1].
- employing a robust subspace estimator, such as robust PCA [2].
[1] Ian Gallagher, Andrew Jones, Anna Bertiger, Carey E. Priebe & Patrick Rubin-Delanchy (2023) Spectral Embedding of Weighted Graphs, Journal of the American Statistical Association.
[2] Candès, E. J., Li, X., Ma, Y., & Wright, J. (2011). Robust principal component analysis?. Journal of the ACM (JACM).
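The diagnostic step mentioned above (inspecting the singular-value decay) can be sketched as follows; this is a hypothetical illustration with made-up names and a made-up gap threshold, not the authors' procedure:

```python
import numpy as np

def has_spectral_gap(M, d, ratio=10.0):
    """Heuristic check: return True if the d-th singular value is at
    least `ratio` times the (d+1)-th, i.e. the tail decays quickly."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[d - 1] / s[d] >= ratio

# Rank-2 signal plus small noise: fast decay after the second value
rng = np.random.default_rng(0)
M = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 100))
M += 1e-3 * rng.normal(size=(100, 100))
print(has_spectral_gap(M, d=2))  # True: clear gap at rank 2
```

When no such gap appears, transformations such as the square root of the intensity profiles, or a robust subspace estimator, are the kinds of remedies discussed above.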
> Comparison is too limited. / It would be great if authors compare the embedding trajectory for more real data, beyond the specific simulated ones.
We believe this simple simulation is sufficient to demonstrate the diverse failure mechanisms of each of the rival methods. We have implemented the rival methods on real data, but simply chose not to include them: with no ground truth we are unable to evaluate them objectively. In the included PDF, we include a plot of the omnibus method (Fig. 1a) and aligned spectral embeddings (Fig. 1b) on real data which, at least visually, seem to fail in the same way as they do in simulated data. We will add these to the supplementary material.
> SVD decomposition does not prevent producing negative values at the reconstruction.
From the point of view of representation learning, negative values in the intensity reconstructions are immaterial, since they are never used directly. If one desires intensity estimates which are non-negative, negative intensity values can be set to zero, which can only lower the reconstruction error.
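The claim that truncating negative values can only lower the reconstruction error is easy to verify numerically, since the true intensities are non-negative by definition (entrywise, $|\max(x,0)-y| \le |x-y|$ whenever $y \ge 0$); a hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = rng.uniform(0, 1, size=(50, 50))                  # true (non-negative) intensities
Lam_hat = Lam + rng.normal(scale=0.3, size=Lam.shape)   # noisy reconstruction, may go negative

err = np.linalg.norm(Lam_hat - Lam)
err_clipped = np.linalg.norm(np.maximum(Lam_hat, 0) - Lam)
print(err_clipped <= err)  # True: clipping moves no entry further from Lam
```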
> The proposed projection space is very dependent on the fixed dataset. At least, how to leverage the given embeddings for predictions is not straightforward. Given this, the potential application value is not very clear.
Due to space constraints, we have mainly focused on the application of the learned representations to unsupervised learning (e.g. clustering, trend estimation, etc.) in the real data section and discussion. However, the structure preservation and temporal coherence properties of the embeddings make them particularly suited to dynamic prediction tasks. Our theory not only suggests that a classifier trained on node representations at time $t$ would perform well at predicting unlabelled nodes at that time (see e.g. [3]), but the "temporal coherence" property suggests that the same classifier would perform well at predicting node labels at some time $s$, in the future. This is something that to our knowledge no other continuous-time dynamic network embedding algorithm can achieve (see the method comparison in Section 4). We will add a comment about the value of our method for predictive applications to the discussion.
[3] Minh Tang. Daniel L. Sussman. Carey E. Priebe. (2013) Universally consistent vertex classification for latent positions graphs. Ann. Statist.
> Section 4 is true for any global subspace projection. Also, both properties could be debatable, not necessarily ideal. For instance, when $\Lambda_{i}(s) = \Lambda_{i}(t), X_{i}(s) = \alpha * X_{i}(t)$ could be more ideal, depending on the interactions among the other nodes.
The reviewer correctly identifies that these properties hold for any global subspace projection, and this is precisely the part of the algorithm which we are justifying in this section. Justification of the precise global subspace projection we choose is the subject of Lemma 1 in Section 2.
We think there may be a typo here, and that the reviewer meant: $\Lambda_{i}(s) = \alpha * \Lambda_{i}(t)$ implies $X_{i}(s) = \alpha * X_{i}(t)$. This is a very interesting suggestion and, remarkably, we *do* in fact guarantee that this property holds! This follows from Lemma 2, combined with the linearity of the projection. We will add this observation to the revision. | Summary: The authors propose an approach for learning time-varying node embeddings from continuous-time dynamic network data, which consist of a set of instantaneous timestamped relational events between nodes (e.g., messages from one social media user to another). Their proposed approach learns a projection that minimizes reconstruction error of the pairwise intensities between nodes and comes with theoretical guarantees on estimation error. They also show that their approach generates embeddings that both preserve network structure at a given time and is temporally coherent. They demonstrate strong empirical performance on simulated data compared to other dynamic network embeddings. Furthermore, they use their approach to analyze a real network data set on face-to-face interactions of primary school students, which is quite enlightening due to the interpretability of their model.
*After rebuttal:* The authors have clarified the one question I had about the meaning of "inductive" in their setting. I continue to strongly support the paper.
Strengths: - Proposed approach learns time-varying node embeddings from continuous-time networks with theoretical guarantees, which is among the first, if not the first, in the literature.
- Proposed embeddings can satisfy two good properties of structure preservation and temporal coherence.
- Very well written and organized paper that provides highlights of theoretical analysis in the main paper followed by details, including proofs, in the supplementary.
Weaknesses: - There's a large body of related literature on probabilistic generative models for continuous-time networks using point process models such as Hawkes processes that should be discussed. Many of these models are based on stochastic block models or latent space models and are thus also learning node embeddings. See suggested references below.
- No quantitative evaluation. This is only a minor weakness in my opinion because I view the main contribution to be theoretical.
Typos and minor issues:
- Supplementary Section C heading: Visualsation -> Visualisation
References:
- Arastuie, M., Paul, S., & Xu, K. S. (2020). CHIP: A Hawkes process model for continuous-time networks with scalable and consistent estimation. In Advances in Neural Information Processing Systems 33 (pp. 16983-16996).
- Corneli, M., Latouche, P., & Rossi, F. (2018). Multiple change points detection and clustering in dynamic networks. Statistics and Computing, 28(5), 989-1007. doi:10.1007/s11222-017-9775-1
- Huang, Z., Soliman, H., Paul, S., & Xu, K. S. (2022). A mutually exciting latent space Hawkes process model for continuous-time networks. In Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (Vol. 180, pp. 863-873).
- Junuthula, R. R., Haghdan, M., Xu, K. S., & Devabhaktuni, V. K. (2019). The Block Point Process Model for continuous-time event-based dynamic networks. In Proceedings of the World Wide Web Conference (pp. 829-839).
- Matias, C., Rebafka, T., & Villers, F. (2018). A semiparametric extension of the stochastic block model for longitudinal networks. Biometrika, 105(3), 665-680. doi:10.1093/biomet/asy016
- Yang, J., Rao, V., & Neville, J. (2017). Decoupling homophily and reciprocity with latent space network models. In Proceedings of the Conference on Uncertainty in Artificial Intelligence.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The authors mention several times that their approach is inductive, allowing one to obtain a node representation profile outside of the training sample. If the task is to obtain the node representation for the future, how would the Intensity Profile Projection approach handle it? Would it require some data from other nodes at that future time?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are thoroughly discussed in Section 6. I commend the authors for being very forthcoming with these limitations. I don't view the limitations as weaknesses, because they are mostly limitations that apply to all unsupervised problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > There's a large body of related literature on probabilistic generative models for continuous-time networks using point process models such as Hawkes processes that should be discussed. Many of these models are based on stochastic block models or latent space models and are thus also learning node embeddings. See suggested references below.
Thanks for pointing us to these interesting references, which we will point to in the introduction. (See global response.)
> The authors mention several times that their approach is inductive, allowing one to obtain a node representation profile outside of the training sample. If the task is to obtain the node representation for the future, how would the Intensity Profile Projection approach handle it? Would it require some data from other nodes at that future time?
To obtain a representation of a node, $i$, at a time, $t$, outside the training sample, one just needs the node's intensity profile vector $\hat \Lambda_i(t) = (\hat \lambda_{i1}(t),\ldots,\hat \lambda_{in}(t))^\top$, which is then projected onto the subspace spanned by $\hat{\mathbf{U}}_d$. The construction of $\hat\Lambda_i(t)$ requires data from the interaction events involving node $i$ around time $t$.
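The out-of-sample step described above amounts to a single matrix-vector product; here is a hypothetical sketch with made-up inputs, where $\hat{\mathbf{U}}_d$ stands in for the basis obtained from the training-sample SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5

# Stand-in for U_d: an n x d matrix with orthonormal columns,
# as produced by the truncated SVD on the training sample.
U_d = np.linalg.qr(rng.normal(size=(n, d)))[0]

# Node i's estimated intensity profile at an out-of-sample time t
lam_i_t = rng.uniform(0, 1, size=n)

# d-dimensional representation: project the profile onto the subspace
X_i_t = U_d.T @ lam_i_t
print(X_i_t.shape)  # (5,)
```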
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I continue to support the paper, primarily based on the novelty. This is the first paper I am aware of that provides these types of theoretical guarantees for node embeddings from continuous-time networks. | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and expertise. We summarise some of the positive comments made by reviewers: "a unique and innovative approach to continuous-time representation learning for dynamic networks", "Simple but powerful method", "among the first, if not the first, in the literature.", "Very well written and organized paper", "The paper is in general easy to follow.", "I commend the authors for being very forthcoming with these limitations".
Based on the reviews, we feel we should better convey the concrete possibilities offered by this new algorithm for important applications such as cyber-security [1], combating human-trafficking [2], fraud and corruption [3]. We briefly mentioned these in the conclusion, but in our revision we will discuss these further at the outset.
In addition, we will expand the 'related work' section to cover the broader literature which puts this work in context, for example on estimation of stochastic block models [4-10] and latent position models for continuous time networks [11-16], spectral methods for discrete-time dynamic networks [17-19] (in the absence of existing spectral methods for continuous-time networks), probabilistic modelling of network point processes [20-23] (such as Hawkes processes), neural-network based embedding algorithms [24-26] and relevant surveys [27-31].
[1] Kent, A. D. (2015). Cybersecurity Data Sources for Dynamic Network Research. In Dynamic Networks in Cyber-security. Imperial College Press.
[2] Szekely, P., Knoblock, C. A., Slepicka, J., Philpot, A., Singh, A., Yin, C., Kapoor, D., Natarajan, P., Marcu, D., Knight, K. et al. (2015). Building and using a knowledge graph to combat human trafficking. In The Semantic Web-ISWC 2015: 14th International Semantic Web Conference. Springer.
[3] Microsoft Research. (2021, December 9). Revealing the Hidden Structure of Corruption. https://www.microsoft.com/en-us/research/group/societal-resilience/articles/revealing-the-hidden-structure-of-corruption/.
[4] Blundell, C., Beck, J., and Heller, K. A. (2012). Modelling reciprocating relationships with hawkes processes. Advances in Neural Information Processing Systems.
[5] DuBois, C., Butts, C., and Smyth, P. (2013). Stochastic blockmodeling of relational event dynamics. In Artificial intelligence and statistics. PMLR.
[6] Corneli, M., Latouche, P., and Rossi, F. (2016). Block modelling in dynamic networks with non-homogeneous Poisson processes and exact ICL. Social Network Analysis and Mining.
[7] Matias, C., Rebafka, T., and Villers, F. (2018). A semiparametric extension of the stochastic block model for longitudinal networks. Biometrika.
[8] Corneli, M., Latouche, P., & Rossi, F. (2018). Multiple change points detection and clustering in dynamic networks. Statistics and Computing.
[9] Junuthula, R., Haghdan, M., Xu, K. S., and Devabhaktuni, V. (2019). The block point process model for continuous-time event-based dynamic networks. In The world wide web conference.
[10] Arastuie, M., Paul, S., and Xu, K. (2020). CHIP: A Hawkes process model for continuous-time networks with scalable and consistent estimation. Advances in Neural Information Processing Systems.
[11] Durante, D. and Dunson, D. B. (2014). Nonparametric Bayes Dynamic Modelling of Relational Data. Biometrika.
[12] Durante, D., & Dunson, D. B. (2016). Locally Adaptive Dynamic Networks. The Annals of Applied Statistics.
[13] Yang, J., Rao, V., & Neville, J. (2017). Decoupling homophily and reciprocity with latent space network models. In Proceedings of the Conference on Uncertainty in Artificial Intelligence.
[14] Rastelli, R. and Corneli, M. (2021). Continuous latent position models for instantaneous interactions. arXiv preprint arXiv:2103.17146.
[15] Huang, Z., Soliman, H., Paul, S., & Xu, K. S. (2022). A mutually exciting latent space Hawkes process model for continuous-time networks. In Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence.
[16] Artico, I. and Wit, E. (2023). Fast inference of latent space dynamics in huge relational event networks. arXiv preprint arXiv:2303.17460.
[17] Liu, F., Choi, D., Xie, L., and Roeder, K. (2018). Global spectral clustering in dynamic networks. Proceedings of the National Academy of Sciences.
[18] Cape, J. (2021). Spectral analysis of networks with latent space dynamics and signs.
[19] Gallagher, I., Jones, A., & Rubin-Delanchy, P. (2021). Spectral embedding for dynamic networks with stability guarantees. Advances in Neural Information Processing Systems.
[20] Butts, C. T. (2008). A relational event framework for social action. Sociological Methodology.
[21] Vu, D., Hunter, D., Smyth, P., and Asuncion, A. (2011). Continuous-time regression models for longitudinal networks. Advances in Neural Information Processing Systems.
[22] Perry, P. O. and Wolfe, P. J. (2013). Point process modelling for directed interaction networks. Journal of the Royal Statistical Society: SERIES B: Statistical Methodology.
[23] Passino, F. S. and Heard, N. A. (2022). Mutually exciting point process graphs for modeling dynamic networks. Journal of Computational and Graphical Statistics.
[24] Nguyen, G. H., Lee, J. B., Rossi, R. A., Ahmed, N. K., Koh, E., and Kim, S. (2018). Continuous-time dynamic network embeddings. In Companion Proceedings of the Web Conference 2018.
[25] Du, L., Wang, Y., Song, G., Lu, Z., and Wang, J. (2018). Dynamic network embedding: An extended approach for skip-gram based network embedding. In IJCAI.
[26] Xu, D., Ruan, C., Korpeoglu, E., Kumar, S., and Achan, K. (2020). Inductive representation learning on temporal graphs. arXiv preprint arXiv:2002.07962.
[27] Spiliopoulou, M. (2011). Evolution in social networks: A survey. Social network data analytics.
[28] Holme, P. and Saramäki, J. (2012). Temporal networks. Physics Reports.
... [29-31] omitted due to character limit.
Pdf: /pdf/6a63daeefea52b3a06e650a339d4a79eac2fb1a3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Expressive Sign Equivariant Networks for Spectral Geometric Learning | Accept (spotlight) | Summary: The authors proposed a novel sign-equivariant model that is equivariant to sign flips of eigenvectors for spectral graph learning. They demonstrated that the proposed architecture can capture information in cases where sign-invariant models fail. The authors then made theoretical analyses of the sign-equivariant polynomial space and also proposed constructible neural network architectures with sign-equivariance guaranteed by design. Experimental results on both synthetic and real-world datasets demonstrate superior performance on link prediction, especially when automorphisms exist.
Strengths: 1. The paper is well-organized and easy to follow. The authors first provided an example to demonstrate the limitation of the previous "sign-invariant" models and then made theoretical analyses. Experiments were followed to demonstrate superior performance over the baselines.
2. The problem that the authors tried to address is novel. The authors pointed out the limitation of sign-invariant models and mathematically supported their claim in Proposition 1. The proposed "sign-equivariant" architecture can properly address the limitation, both theoretically and empirically.
3. The mathematics are solid and the claims are well-supported. The definitions of sign-equivariance and -invariance are well-formulated. Sign-equivariance of polynomials and neural networks is proven in the appendices. Universality is also proven and guaranteed.
4. Experiments on graphs with high symmetries demonstrated the effectiveness of the proposed model architecture. State-of-the-art performance was also achieved.
Weaknesses: 1. Proposition 1 lacks some detail (See Question 1 & 2).
2. Limitation of the proposed architecture is not explicitly discussed.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In Proposition 1, the first result claims that if $f$ is sign invariant and the eigenvalues are distinct, then the node-wise representations are the same for automorphic nodes. Consider the complete graph of 3 nodes, the adjacency matrix is
$$
\begin{pmatrix}
0&1&1\\\\
1&0&1\\\\
1&1&0\\\\
\end{pmatrix}
$$
We have eigenvector $v_1=(1,1,1)$ belonging to $\lambda=2$ and $v_2=(0,1,-1)$ belong to $\lambda=-1$ (normalization is ignored for conciseness). Let $f$ be the pointwise absolute function, then $f$ is clearly sign-invariant (but not necessarily basis-invariant). In this graph, all three nodes are automorphic, but we have $z_1=(1,0)$ and $z_2=(1,1)$. This is probably because the eigenspace corresponding to $\lambda=-1$ is 2-dimensional. Therefore, I believe basis-invariance is also a necessary condition. The first claim is also not directly proven in Appendix A.3.2. Please provide a more detailed justification.
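The reviewer's counterexample can be checked directly (a minimal numerical sketch of the quantities described above):

```python
import numpy as np

# Adjacency matrix of the complete graph on 3 nodes
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

v1 = np.array([1, 1, 1])   # eigenvector for eigenvalue 2
v2 = np.array([0, 1, -1])  # eigenvector for eigenvalue -1 (2-dim eigenspace)
assert np.allclose(A @ v1, 2 * v1) and np.allclose(A @ v2, -1 * v2)

# Sign-invariant f = pointwise absolute value; rows are node embeddings
Z = np.abs(np.column_stack([v1, v2]))
print(Z[0], Z[1])  # [1 0] [1 1] -- automorphic nodes get different embeddings
```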
2. Similarly, the scenario when the dimension of the eigenvalue is greater than 1 is not explicitly discussed. In the above example, the sign-invariant model may still be capable of distinguishing the automorphic nodes. In footnote 2, the authors mentioned that sign invariant embeddings maintain "some positional information". The authors may provide a more accurate description using mathematical formulations similar to Proposition 1.
3. What are the potential limitations of the proposed model architecture?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No potential negative societal impact is expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > “Proposition 1 lacks some detail (See Question 1 & 2).”
> “In Proposition 1, the first result claims that if $f$ is sign invariant and the eigenvalues are distinct, then the node-wise representations are the same for automorphic nodes. Consider the complete graph of 3 nodes, the adjacency matrix is…”
Indeed, this issue that you note here arises because the eigenspace corresponding to $\lambda=-1$ is two-dimensional. This misunderstanding is because we do not exactly define what we mean in Proposition 1 when we say “the eigenvalues associated with the $v_l$ are distinct.” By this, we mean that the eigenvalues $\lambda_l$ are all simple eigenvalues, meaning that they all belong to one-dimensional eigenspaces, and also no two of the eigenvectors belong to the same eigenvalue. We will make this clear in the revision.
With this correct interpretation of our assumptions, we are confident that the proposition is correct in the sign invariance case: we only need that the eigenvalues corresponding to the input $v_1, \ldots, v_k$ are all simple eigenvalues. The proof of the sign invariant case is exactly the same as the general basis invariant case; the orthogonal matrices $Q_t$ are just $1$ dimensional and hence they are signs.
> “Similarly, the scenario when the dimension of the eigenvalue is greater than 1 is not explicitly discussed. In the above example, the sign-invariant model may still be capable of distinguishing the automorphic nodes. In footnote 2, the authors mentioned that sign invariant embeddings maintain "some positional information". The authors may provide a more accurate description using mathematical formulations similar to Proposition 1.”
True, the repeated eigenvalues do arise in practice, and sign invariant embeddings are partially positional in this case since they are not fully basis invariant. It is hard to say too much in this setting, since the sign invariant embeddings are not even well-defined / deterministic here (the embeddings would depend on the choice of basis of the eigenspaces used as input). This also means that sign invariant embeddings would not lead to structural node-pair representations when there are repeated eigenvalues. We will make this latter point in the revision.
> “Limitation of the proposed architecture is not explicitly discussed.”
> “What are the potential limitations of the proposed model architecture?”
Thanks for the suggestion, see general comment to all reviewers for discussion of limitations.
---
Rebuttal Comment 1.1:
Title: Comment on Authors' Rebuttal
Comment: I appreciate your comprehensive response regarding my concerns about some theorems on sign equivariance. You have also frankly and meticulously explained the potential limitation of the proposed model when the eigenvalues coincide. Though the proposed model may fail in this case, to the best of my knowledge, I believe that the authors have paved the road for a new sub-realm for further work on sign equivariance, which is important for tasks like link prediction where node-level GNNs would fail. In this sense, **I raise my score from 6 to 7**. | Summary: The paper proposes a sign-equivariant design for processing spectral features in geometric deep learning.
This is particularly useful to process the eigenvectors of graph Laplacians and generate graph positional encodings, especially for link prediction tasks.
The equivariant, rather than invariant, approach guarantees improved expressivity of the models, which is verified in a number of experiments.
Strengths: The paper is clearly written and well motivated.
I liked this simple yet effective design and I think the proposed method could be useful for many future works in the field.
Weaknesses: I don't see major flaws in the paper.
See "Questions" for other comments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I don't think the claim that the Geometric Deep Learning approach fails here is correct.
The issue you describe is due to a bad design choice of intermediate representation within the network (i.e. the same as the input and output representation).
Note that this choice is not common in other equivariant networks either; for instance, in group-convolution NNs (GCNNs), the input representation is an image (this is a quotient representation of $SE(2)$) but intermediate features transform like a regular representation (the output of a group convolution is a function over the full group, not only the pixels).
This step is fundamental to achieve universal approximation.
The limited expressive power you mention in line 48 is only due to the choice of intermediate representations (similarly, in a GCNN, if the intermediate features are constrained to transform as images, equivariance requires isotropic filters, which are indeed less expressive).
In any case, while a group-convolution approach is probably a poor choice here (the group $\{-1, 1\}^k$ has size $2^k$), smaller representations which can provide a better trade-off might exist.
Your method seems to provide a nice solution to this, so I would argue it still fits within the GDL framework but provides a more efficient solution than its naive implementation.
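The sign-equivariance constraint under discussion, $f(v \odot s) = f(v) \odot s$ for all $s \in \{-1,1\}^k$, can be checked exhaustively for small $k$; a hypothetical sketch using a pointwise odd nonlinearity, which is one simple (though not the paper's) way to satisfy it:

```python
import numpy as np
from itertools import product

def f(V):
    """Pointwise odd nonlinearity: tanh(-x) = -tanh(x), so applying it
    channel-wise is trivially sign-equivariant."""
    return np.tanh(V)

rng = np.random.default_rng(0)
k, n = 3, 4
V = rng.normal(size=(n, k))  # n nodes, k eigenvector channels

# Check f(V * s) == f(V) * s over all 2^k sign flips of the channels
ok = all(np.allclose(f(V * s), f(V) * s)
         for s in (np.array(p) for p in product([-1, 1], repeat=k)))
print(ok)  # True
```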
Isn't computing the eigenvectors of the graph Laplacian an expensive preprocessing step?
Estimating these features has $O(n^3)$ complexity, which seems to exceed the $O(n)$ operations in the DSS layers (or $O(n^2)$ in an attention-based or message-passing-based model).
In Appendix A.2 you discuss the computational complexity of your method but do not include this aspect.
Could you comment on this?
Could you report the table with all baselines and their performance in Section 4.2?
Also, is there any more recent work on this dataset you could include?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I did not see an explicit discussion of the limitations in the paper, which I think would be a nice addition.
For example, the authors could comment further on the complexity of computing the eigenvectors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments on the choice of group representations in intermediate layers. Here, we respond to the comments; we will add discussion about this to the main paper, which we think will be interesting to people in the equivariant machine learning community.
> “I don't think the claim that the Geometric Deep Learning approach fails here is correct. The issue you describe is due to a bad design choice of intermediate representation within the network … “
The reviewer makes a great point here: the claim is more nuanced than how we wrote it in this paper. We will soften our wording and capture more of the nuance (namely, the approach does not work for some choices of group representation, but other group representations may allow it to succeed). We also have additional results that we did not end up including in the paper, which we now plan to add to support this point.
In particular, we derived the sign equivariant linear maps between tensor representations, and found that it would not be efficient to use tensor representations either. That is, at $n=1$ we considered the sign equivariant maps from $\mathbb{R}^{k^{d_1}}$ to $\mathbb{R}^{k^{d_2}}$ for $d_1, d_2 > 0$. The sign group acts on $\mathbb{R}^{k^{d_1}}$ via $((s_1, \ldots, s_k) \cdot x)_{i_1, \ldots, i_{d_1}} = s_{i_1} \cdots s_{i_{d_1}} x_{i_1, \ldots, i_{d_1}}$. Note that the $d_1 = d_2 = 1$ case is the one considered in the current paper, where the only equivariant linear maps are diagonal matrices, so the space of equivariant linear maps has dimension $k$.
We can show that there are no sign equivariant linear maps (besides the zero map) from $\mathbb{R}^{k^{d_1}}$ to $\mathbb{R}^{k^{d_2}}$ when $d_1 + d_2$ is an odd number. Note that this is also the case for the orthogonal group, as can be seen numerically in Table 6 of Finzi et al. 2021. In particular, this means that there is no linear way to map $\mathbb{R}^{k} \to \mathbb{R}^{k^2}$ or back $\mathbb{R}^{k^2} \to \mathbb{R}^k$, so if we want to lift our $\mathbb{R}^{k}$ input to a tensor representation, we have to go to at least order 3 tensors (in $\mathbb{R}^{k^3}$). This would already be quite expensive, and the linear maps between order 3 tensors ($\mathbb{R}^{k^3} \to \mathbb{R}^{k^3}$) are also expensive, since they span a space of dimension 5888 when $k=10$.
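These dimension counts can be sanity-checked for small cases by brute-force enumeration over the sign group (this toy script is our own illustration, not code from the paper): since the group acts diagonally, an entry of an equivariant linear map can be nonzero only if its sign character is trivial for every sign vector.

```python
import itertools
import numpy as np

def sign_equivariant_dim(k, d_in, d_out):
    """Dimension of the space of sign equivariant linear maps
    R^(k^d_in) -> R^(k^d_out), under the action
    ((s_1,...,s_k) . x)_{i_1..i_d} = s_{i_1} ... s_{i_d} x_{i_1..i_d}.
    Because the action is diagonal, an entry L[a, b] can be nonzero
    iff its sign character equals +1 for every sign vector s."""
    in_idx = list(itertools.product(range(k), repeat=d_in))
    out_idx = list(itertools.product(range(k), repeat=d_out))
    signs = list(itertools.product([-1, 1], repeat=k))
    return sum(
        all(np.prod([s[i] for i in a + b]) == 1 for s in signs)
        for a in out_idx for b in in_idx
    )

print(sign_equivariant_dim(3, 1, 1))  # diagonal maps only: k = 3
print(sign_equivariant_dim(3, 1, 2))  # d_in + d_out odd: only the zero map -> 0
print(sign_equivariant_dim(3, 2, 1))  # also 0
```

The $d_1 = d_2 = 1$ case recovers the diagonal maps of dimension $k$, and any odd $d_1 + d_2$ gives only the zero map, matching the statements above.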
This made us feel that the equivariant linear maps approach with tensor representations is too inefficient, so we did not pursue it further. We will add proofs of these points in the revised version, as others may find it interesting or useful.
> “smaller representations which can provide a better trade off might exist. Your method seems to provide a nice solution to this, so I would argue it still fits within the GDL framework but provides a more efficient solution than its naive implementation.”
This is a good point! As argued above, the natural tensor representations (as used e.g. in [Maron et al. 2019] https://arxiv.org/abs/1812.09902 and [Finzi et al. 2021] https://arxiv.org/abs/2104.09459) are not a good choice for equivariant linear map based architectures for the sign group. We agree with you that regular representations of size $2^k$ are also too expensive without any tricks. However, other representations may be useful, and we will note this possibility in the main paper.
> “Isn't computing the eigenvectors of the graph Laplacian an expensive preprocessing step? … Estimating these features has $O(n^3)$ complexity…”
As is commonly done in practice, we only use some subset of the Laplacian eigenvectors corresponding to the smallest $k$ eigenvalues (e.g. $k=16$ for our link prediction experiments). Then standard iterative eigensolvers can be used (e.g. scipy.sparse.linalg.eigsh), which are very efficient for sparse graphs. The time complexity is closer to $O(|E| k)$, where $|E|$ is the number of edges, which is usually close to linear in the number of nodes. For instance, on a standard laptop, computing the smallest $k=16$ eigenvectors of an Erdos-Renyi graph with average degree 10 takes 0.3 seconds for 10,000 nodes and 9.0 seconds for 100,000 nodes.
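As a concrete sketch of this pipeline (our own toy graph, not the paper's data; shift-invert around a small negative `sigma` avoids factorizing the singular Laplacian itself):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy sparse graph with modest average degree, standing in for a real dataset.
n, avg_deg, k = 2000, 10, 16
A = sp.random(n, n, density=avg_deg / n, random_state=0, format="csr")
A = ((A + A.T) > 0).astype(float)   # symmetric 0/1 adjacency
A.setdiag(0)
A.eliminate_zeros()
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A  # graph Laplacian D - A

# Iterative solver for the k smallest eigenpairs; with a shift, which="LM"
# returns the eigenvalues nearest sigma, i.e. the smallest ones here.
eigvals, eigvecs = eigsh(L, k=k, sigma=-0.01, which="LM")
print(eigvals.shape, eigvecs.shape)  # (16,) (2000, 16)
```

The cost per iteration is dominated by sparse solves/multiplies with $L$, which is why the overall scaling behaves like $O(|E|k)$ rather than the dense $O(n^3)$.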
For the n-body experiments, the covariance matrix is $d \times d$, where $d$ is the ambient dimension, which is typically quite small. Thus, there are no efficiency issues here either.
> “Could you report the table with all baselines and their performance in Section 4.2? Also, is there any more recent work on this dataset you could include?”
As suggested by Reviewer YXFy, we can also add the Kaba et al. paper just published at ICML 2023 https://arxiv.org/abs/2211.06489; this method achieves .0043 MSE in $d=3$ dimensions (hence outperforming our method in accuracy here), but the runtime scaling in $d$ may be worse, since their canonicalization network has to output a $d \times d$ matrix that must then be orthogonalized via Gram-Schmidt, which has $O(d^3)$ complexity. Further, they do not test their methods on $d > 3$, and we do not have access to their code.
For the other baselines, we will include a table (possibly in the Appendix due to space constraints), but this would only include $d=3$ (as many methods only test on $d=3$, or only work on $d=3$).
> “I did not see an explicit discussion of the limitations in the paper, which I think would be a nice addition. For example, the authors could comment further on the complexity of computing the eigenvectors.”
Thank you for the suggestion, see the general comment to all reviewers for limitations.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed answer.
I have also appreciated the discussion regarding my first question about the GDL framework and encourage the authors to include it in the paper.
In particular, I am very interested in reading the theoretical results about linear equivariant maps between tensor product representations.
I maintain my recommendation | Summary: This paper contributes the construction and analysis of sign equivariant neural network architectures for processing eigenvectors while respecting their symmetries. While a similar approach has been proposed by prior work (Lim et al., 2023), this work is motivated by the fact that sign invariance, pursued in prior work, is in general not sufficient for multi-node tasks: sign invariance leads to structural node representations that do not distinguish automorphic nodes, while multi-node tasks require positional node representations that provide a form of node identification (Srinivasan et al., 2019). The authors provide a construction of a sign equivariant neural network architecture with analysis and provable guarantees on expressive power, and also propose its application as an alternative to PCA-based frame averaging: the proposed approach requires only a single evaluation of the base model (whereas frame averaging requires 2^k) while guaranteeing equivariance to orthogonal groups in arbitrary dimensions. The developed architectures are empirically demonstrated for link prediction on graphs, n-body dynamics prediction, and node clustering on SBM graphs, and the results support the main claims of the paper.
Lim et al., Sign and Basis Invariant Networks for Spectral Graph Representation Learning (2023)
Srinivasan et al., On the equivalence between positional node embeddings and structural graph representations (2019)
Strengths: S1. The paper tackles an important and practical problem of developing expressive neural architectures that respect symmetry of eigenvectors, and successfully complements and improves key prior work (Lim et al., 2023) by achieving sign equivariance. Also, the paper is overall well written and easy to read, and the provided illustrations are clear.
S2. The motivation for sign equivariance over invariance only is clearly presented in Section 2.1 where multi-node tasks are considered, and I find the discussion sound and informative.
S3. In addition to multi-node tasks, a novel (as far as I can tell) and powerful alternative based on sign equivariance to PCA based frame averaging for modeling O(k) equivariant functions is provided. The approach trades off generality, as the base model (h) should satisfy sign equivariance unlike frame averaging that works with arbitrary base functions, but more efficient as it eliminates the requirement of frame averaging for 2^k forward passes.
S4. The proposed parameterization of sign equivariant polynomial is novel and theoretically sound as it is grounded to the theory of equivariant polynomials. A strong theoretical guarantee on universality is provided for the non permutation equivariant case as well.
S5. Overall, the experimental results overall seem sound and support the main claims of the paper.
Lim et al., Sign and Basis Invariant Networks for Spectral Graph Representation Learning (2023)
Weaknesses: W1. The description of how the sign equivariant network is applied in Appendix D.4 was a bit unclear to me. In the equation e_ij = V_i^T D V_j, does V refer to the output of the sign equivariant module within the given GraphGPS layer? Is V passed and updated across GraphGPS layers?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have no specific questions for now other than W1.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As far as I can tell, the authors didn't address the limitations and potential societal impact of the work. I encourage the authors to discuss them in the next revision of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We are especially glad that the reviewer liked section 2.1 on multi-node representations and link prediction, as we originally had trouble writing it, but spent time iterating on it.
> “W1. The description of how sign equivariant network is applied in Appendix D.4 was a bit unclear to me. In in the equation e_ij = V_i^TDV_j, does the V refer to the output of sign equivariant module within the given GraphGPS layer? Is V passed and updated across GraphGPS layers?”
Thank you for pointing this out. Yes, $V \in \mathbb{R}^{n \times k}$ are eigenvector representations, which we compute using our sign equivariant module. This $V$ is passed and updated in each GraphGPS layer using our module.
To make this clearer, in the revision we will denote the eigenvector representation of layer $l$ as $V^{(l)}$. Then $V^{(0)}$ are the original eigenvectors, and $V^{(l)} = \mathrm{SignEquivariant}^{(l)}(V^{(l-1)})$. In contrast, the prior work PEG [Wang et al. 2022] takes $V^{(l)} = V^{(l-1)}$, meaning they do not update eigenvector representations.
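A minimal numeric sketch of this layerwise update (our own illustration with a toy sign equivariant map of the product form "equivariant linear part times invariant part", not the paper's exact module):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, n_layers = 8, 4, 3

# One toy layer: V times a sign invariant function of V
# (tanh of a linear map of the elementwise square of V).
Ws = [rng.standard_normal((k, k)) for _ in range(n_layers)]

def sign_equivariant_layer(V, W):
    return V * np.tanh((V ** 2) @ W)

def forward(V0):
    V = V0                                # V^(0): the raw eigenvectors
    for W in Ws:
        V = sign_equivariant_layer(V, W)  # V^(l) = SignEquivariant^(l)(V^(l-1))
    return V

V0 = rng.standard_normal((n, k))
s = np.array([1.0, -1.0, 1.0, -1.0])      # flip the signs of two eigenvectors

# Equivariance propagates through the whole stack:
# forward(V0 @ diag(s)) == forward(V0) @ diag(s)
assert np.allclose(forward(V0 * s), forward(V0) * s)
print("sign equivariance holds through", n_layers, "layers")
```

In contrast, a PEG-style model would set each `V` to `V0` in every layer instead of updating it.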
> “As far as I can tell, the authors didn't address the limitations and potential societal impact of the work. I encourage the authors to discuss them in the next revision of the paper.”
Thanks for the suggestion, we include a discussion in the general reviewer comment, and will add discussion to our paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the response. I have no questions for now and will discuss with other reviewers. | Summary: This paper focuses on addressing the sign ambiguity problem of eigenvectors. It argues that previous sign-invariant models are insufficient for some applications, e.g., link prediction and multi-node tasks. To solve this problem, the authors propose a sign-equivariant neural network with provable expressiveness guarantees, which is more powerful than the sign-invariant models and sign-equivariant linear maps. Experiments on various tasks validate the superiority of the proposed method.
Strengths: 1. This paper is the first to study the sign-equivariance problem of eigenvectors. It finds that the signs of eigenvectors contain meaningful positional information but the sign-invariant models can only preserve the structural information. Therefore, the sign-invariant models cannot distinguish automorphic nodes and are insufficient for link prediction or multi-node tasks.
2. This paper proves that any sign equivariant polynomial can be implemented as the elementwise product of a linear sign-equivariant polynomial and a general sign-invariant polynomial. The linear sign-equivariant polynomial cannot learn the interactions between eigenvectors, and the sign-invariant polynomial cannot preserve positional information. This theoretical discovery combines the advantages of both methods and addresses their shortcomings.
3. Based on the characterization of sign-equivariant polynomial functions, this paper provides a general framework for analyzing the sign-symmetry models and gives a new perspective on the universality of SignNet.
Weaknesses: The major weakness of this paper is that it lacks experiments on real-world data. This paper uses three tasks, e.g., link prediction, n-body problem, and node clustering, to validate the effectiveness of the proposed method. However, all the experiments are based on synthetic data, which makes the results unconvincing. I think there are many real-world datasets for both link prediction and node clustering tasks, and the authors should give more empirical results on at least these two tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This paper considers both sign-equivariance and permutation-equivariance. I wonder why the authors do not discuss the basis-equivariance for eigenvectors.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper should discuss the limitation of sign-equivariance from the perspectives of efficiency and scalability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their work in improving our paper! We are glad that the reviewer appreciates our theoretical characterization of sign equivariant functions and improvements over sign invariant networks. Here we address the comments:
> “The major weakness of this paper is that it lacks experiments on real-world data. This paper uses three tasks, e.g., link prediction, n-body problem, and node clustering, to validate the effectiveness of the proposed method. However, all the experiments are based on synthetic data …”
True, even though the n-body task and node clustering tasks are standard benchmarks, they do consist of synthetic data. We did not manage to run more experiments for the rebuttal, but we believe that our experiments do support our theoretical insights, and note that other reviewers appreciate our experiments.
> “This paper considers both sign-equivariance and permutation-equivariance. I wonder why the authors do not discuss the basis-equivariance for eigenvectors.”
This is a good point and a good idea for future work. The neural architecture would be more difficult to derive for the continuous symmetries in basis equivariance, and it would probably require different methods than our sign equivariant networks, which is why we do not cover it much. Basis equivariance would indeed be useful to handle repeated eigenvalues. For instance, basis equivariance could be used in Section 2.1 to obtain structural node-pair representations when the graphs have repeated eigenvalues. We will add discussion of this latter point to the revised paper.
> “This paper should discuss the limitation of sign-equivariance from the perspectives of efficiency and scalability.”
We are not sure what the reviewer means here, please let us know if we have misunderstood. In our paper, we explain that for both of our main application areas, sign equivariant networks have efficiency and scalability benefits over certain baselines. In link prediction, sign equivariant networks only require one forward pass whereas most subgraph-based methods need a forward pass and graph construction for each predicted edge (Section 2.1, last paragraph). For orthogonal equivariance, sign equivariant networks only require one forward pass for inference on $d$-dimensional point clouds, whereas frame averaging requires $2^d$ forward passes (see also Figure 4 and Section 2.2, paragraph before the Proposition). See also Appendix A.2, A.2.1, and A.2.2 for discussion of complexity.
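To make the efficiency comparison concrete, here is a toy numeric sketch (our own construction, with a stand-in odd function as the sign equivariant base model): every one of the $2^d$ sign completions of a PCA frame yields the same single-pass output, whereas frame averaging would run the base model once per completion.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.standard_normal((n, d))
U, _, _ = np.linalg.svd(X.T @ X)   # PCA axes; each column is defined only up to sign

def h_eq(Y):
    # toy sign equivariant base model: odd in each coordinate
    return Y * np.tanh(Y ** 2)

def predict(U_signed):
    # a single forward pass in the (sign-ambiguous) PCA frame
    return h_eq(X @ U_signed) @ U_signed.T

# All 2^d = 8 sign completions of the frame give the same output,
# so one pass suffices; frame averaging would need all 8 passes.
outs = [predict(U * np.array(s))
        for s in itertools.product([-1.0, 1.0], repeat=d)]
assert all(np.allclose(o, outs[0]) for o in outs)
print(len(outs), "sign choices, one identical output")
```

The sign equivariance of `h_eq` is exactly what cancels the frame's sign ambiguity, which is why a single evaluation replaces the $2^d$-fold average.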
As for general limitations, we will add them to the revision, see general reviewer comment for more information.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Hi, thanks for addressing my concerns.
After carefully checking the appendix, I think the sign-equivariance networks have some benefits in terms of both efficiency and scalability.
Also, I agree with the authors that the sign-equivariant layers can improve the sign-invariant readout functions, e.g., SignNet. The combination of local equivariant and global invariant is important for geometric deep learning.
Although it is a pity that the authors do not provide more empirical results, I think the clear motivation and important theoretical results deserve a good score. Therefore, I raise my score to 6.
Rebuttal: We would like to sincerely thank all of the reviewers for the work they put into their reviews. The reviews are thoughtful, and the reviewers are each clearly experts in at least some of the several subject areas related to the submission (geometric deep learning, orthogonally equivariant models, spectral graph theory, graph neural networks); we very much appreciate these different viewpoints.
The reviewers suggested adding limitations (and potential societal impacts) several times, so we address them here. We will add these points to the revised version:
1. Limitation: we did not develop architectures for basis equivariance in the case of repeated eigenvalues. Repeated eigenvalues are known to occur in many real-world graphs [Lim et al. 2023], so future work in this area could be useful.
2. Limitation: though we expect the most gain in some node-level and multi-node level tasks on graphs, we do not have theoretical reasons to expect as much impact of sign equivariance on graph-level representations tasks, which for instance is common in molecular processing.
3. Limitation: our theoretical results and model design are focused on expressive power. We do not have results on optimization (e.g. [Xu et al. 2021]), robustness (e.g. [Wang et al. 2022]), or generalization (e.g. [Keriven & Vaiter 2023]). These latter three properties are very important for learning, so future work should address them.
4. Limitation: while we can achieve universal approximation in the non-permutation-equivariant setting, we do not know the exact expressive power in the permutation-equivariant setting. We cannot directly apply existing results from Maron et al. 2020, because the sign group does not act via permutation matrices here. Note that Lim et al. 2023 also face this issue, and the exact expressive power of their permutation equivariant and sign invariant networks is unknown.
5. Societal impact: we do not foresee direct societal impacts from our work. This project is primarily theoretical and aims to improve models for two general application areas: multi-node representation learning and orthogonal equivariant models. Potential societal impacts may arise in downstream applications that may be affected by general progress in geometric machine learning, such as social network analysis and recommender systems. These two applications are known to have negative societal impacts in certain circumstances, so care must be taken in future related work to avoid major negative consequences.
**References**
[Lim et al. 2023] Sign and Basis Invariant Networks for Spectral Graph Representation Learning.
[Xu et al. 2021] Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth.
[Wang et al. 2022] Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks
[Keriven & Vaiter 2023] What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes to build sign equivariant neural networks with applications of the method in O(n) equivariant modeling and graph representation learning. They show that corresponding equivariant MLPs are inexpressive, by contrast to the proposed method, which is universal (non-permutation equivariant version) or plausibly much more expressive (permutation equivariant version). Experiments are performed on artificial datasets and show that the proposed method achieves comparable or better results than alternatives.
Strengths: This is a well written and motivated paper. I also appreciate the detailed appendix.
The experimental protocol seems rigorous and results are presented for a few different settings.
The paper highlights interesting and potentially useful applications of sign (O(1)) equivariance.
Weaknesses: (W1) I think the main result of this paper could be obtained from the results of [Villar 2021]. They already show how to use invariants to build equivariant functions. Sign equivariance is O(1) equivariance, so it is a particular case of O(n) equivariance. This connection is not mentioned by the authors and would be important to investigate. As such, polynomials may not really be needed; the results from invariant theory could be sufficient. Also, Proposition 11 from the aforementioned work may offer a way to obtain the desired permutation equivariance with universality.
(W2) More generally, given the previous comment, I feel like the paper should situate itself more within the literature on O(n) equivariance, since sign and basis equivariance are instances of it. I appreciate that some of the applications considered included processing eigenvectors, which is different from most of what is done in that literature, but this was already put forward by [Lim 2023].
(W3) The experiment on CLUSTERS is interesting but the improvement to other methods is not too important and has not convinced me of the practical usefulness of the method. Especially, since CLUSTERS is a synthetic task. It would also be great to have results on real-world graphs to see if the method brings a performance increase there.
(W4) It seems like the proposal for orthogonal equivariance is a realization of the partial canonicalization framework of [Kaba 2023]. Partial canonicalization is performed with respect to SO(3), with the remainder handled by a sign equivariant function. Proposition 2 follows from their results. The results on the N-body problem seem worse than learned canonicalization, which should be included for comparison. There are also many other methods outside of those presented in [Puny 2022] that could be compared against.
(W5) The paper [Sachs 1983] is relevant to the Proposition 1 presented in this paper and should be mentioned.
(W6) I think the application on link prediction is the most promising aspect of the submission. The authors should compare with the setup/methods that was proposed by [Sartorras 2021] in their autoencoding experiment. In that paper, the method was used to solve a similar issue caused by automorphic nodes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (Q1) Can the authors provide a comparison of their theoretical results with [Villar 2021] and discuss how the two works relate?
(Q2) Would you elaborate on some limitations of this work and potential future directions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their in-depth comments and suggestions for our paper! Here we address the comments:
> “(W1) I think the main result of this paper could be obtained from the results of [Villar 2021]. ... Sign equivariance is O(1) equivariance, so it is a particular case of O(n) equivariance… ”
We kindly disagree. The key difference is that they only consider $O(d)$ (or $O(1)$), whereas the main group we consider is a direct product $O(1) \times \ldots \times O(1)$ (for sign equivariance) or $O(d_1) \times \ldots \times O(d_l)$ (for basis invariance).
These are related, but the direct product group has specific challenges of its own. For instance, interactions between eigenspaces cannot be obtained by simple methods like equivariant linear maps (as explained in Section 3.1).
We are not aware of any way to directly obtain our results using the results of [Villar et al. 2021]. But we are open to more specific suggestions if the reviewer has any.
> ”such polynomials may not really be needed, the results from invariant theory could be sufficient”
We are not aware of a result in invariant theory from which our results follow directly. Invariant theory is (in large part) the study of equivariant or invariant polynomials, and our derivations allow us to find a form specifically for the sign equivariant polynomials.
> “Also, Proposition 11 from the aforementioned work may offer a way to obtain the desired permutation equivariance with universality.”
Thanks for the suggestion. We were very much aware of this work and these results, but we did not figure out a way to use them for our case. Also, this result does not lead to an efficient network architecture (directly parameterizing the functions would require $nd$ neural networks, with permutation constraints that must be enforced as in equation 13 of their arxiv version).
> “(W2) … paper should situate itself more within the literature on O(n) equvariance, since sign and basis equivariance are that. I appreciate that some of the applications considered included processing eigenvectors … but this was already put forward by [Lim 2023].”
Sign and basis equivariance are not quite $O(d)$ equivariance, but rather $O(d_1) \times \ldots \times O(d_k)$ equivariance (as mentioned above). They are related though, and we will include more information in the revision about the connection to $O(d)$ equivariance literature. At the moment we have some relevant content on this in Appendix A.2.1. Some key differences in our case are the proof techniques (the direct product adds complexity, but the fact that we mostly consider $d=1$ for the sign group allows us to use simpler linear algebraic techniques).
We indeed do process eigenvectors as in [Lim 2023], but as noted in Sections 2.1 and 2.2, sign invariance [Lim 2023] is provably limited in learning multi-node representations or $O(d)$ equivariant functions, whereas our sign equivariance methods are provably expressive here.
> “(W4) It seems like the proposal for orthogonal equivariance is a realization of the partial canonicalization framework of [Kaba 2023] ... Proposition 2 follows from their results. The results on the N-body problem seem worst compared to learned canonicalization, which should be included for comparison … .”
Thanks for this, you are correct in that our method works within the [Kaba et al. 2023] framework of partial canonicalization, in which our subgroup is $K = \\{-1, 1\\}^d$ of the orthogonal group $G = O(d)$. Moreover, our Proposition 2 does follow from their Theorems 3.1 and 3.3. We will note this nice connection in our revision, though we will keep our own proof as well for clarity.
Our novel contributions in relation to [Kaba et al. 2023] are:
1. We choose $K = \\{-1, 1\\}^d$. This is an important design choice, and Kaba et al only choose the trivial group $K = I$ when dealing with $G = O(d)$ in practice.
2. We choose an unlearned canonicalization up to $K$, via PCA frames [Puny et al. 2022]. Kaba et al. achieve their best results with learned canonicalizations.
3. We are the first to design a $K = \\{-1, 1\\}^d$ equivariant network. This is a necessary component, and we are also able to do this in a provably universal way, which is required to apply Theorem 3.3 of Kaba et al.
We will add discussion of this to the revision, and we will add comparison to more recent $O(d)$ equivariant models like Kaba et al.
> “(W5) The paper [Sachs 1983] is relevant to the Proposition 1 presented in this paper and should be mentioned.”
We assume you are referring to the paper “Automorphism group and spectrum of a graph.” In our opinion it is not necessarily so relevant to our Proposition 1, in the sense that one can understand either without understanding the other.
However, what is certainly relevant is the general connection between graph automorphisms and repeated eigenvalues. We will add some discussion to our paper about this. In summary, graph automorphisms often lead to repeated eigenvalues, but not always. We will add this reference and discussion to our paper.
> “(W6) I think the application on link prediction is the most promising aspect of the submission. The authors should compare with the setup/methods that was proposed by [Sartorras 2021] in their autoencoding experiment ...”
Great point, perhaps our method could be used for their autoencoding experiment, but we did not have time to try it; our method would take eigenvectors as additional node features to break node symmetry (instead of random noise), and decode in a similar sign invariant way. This is nice, as our method would learn structural node-pair representations if eigenvalues are distinct, whereas their method is not exactly permutation equivariant (since they use random noise). We may try this experiment in a later version of our paper.
> “(Q1) Can the authors provide a comparaison of their theoretical results with [Villar 2021] ...”
Yes, see above. We will add further discussion to our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Most of my comments have been addressed, although I still think more results on the experimental side would strengthen the paper significantly. I was indeed mistaken about the group of interest, or at least underestimated the difficulty of tackling the product in a general way. It is worth emphasizing this more explicitly in the paper for clarity. As such, the paper makes a significant contribution. I am therefore happy to increase my score and recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reconsideration and reply. Indeed, upon rereading we realize that we can make the symmetry groups under consideration more clear. Thank you for this suggestion, we will do this in the revision. | null | null | null | null | null | null |
PAC-Bayes Generalization Certificates for Learned Inductive Conformal Prediction | Accept (poster) | Summary: Inductive Conformal Prediction (ICP) provides a coverage guarantee for constructed prediction sets, but does not provide any guarantee on their efficiency. The efficiency depends on a conformity score function, and direct approaches optimize the score function to maximize the efficiency (i.e., minimize the size) of prediction sets in two steps: (1) given a hold-out set from a calibration set, optimize the score function, and (2) given another hold-out set from a calibration set, run an ICP algorithm. However, the direct approaches do not provide a generalization guarantee for efficiency. The proposed approach combines ICP and PAC-Bayes to provide generalization guarantees for both coverage and efficiency. In particular, the paper provides novel generalization bounds for coverage and efficiency and proposes an algorithm based on them, whose efficacy is evaluated on regression and image classification problems.
Strengths: **Originality**: The paper proposes new generalization bounds on coverage and efficiency.
**Quality**: The proposed algorithm is supported by theorems, making the algorithm rigorous.
**Clarity**: I like the paper structure. It first provides the main theories, and then explains an algorithm with practical details (appreciate it).
**Significance**: I believe the PAC-Bayes interpretation introduces an interesting, novel view on conformal prediction, which could trigger interesting CP algorithms.
Weaknesses: Combining PAC-Bayes and ICP is very interesting, though I have a few concerns.
1. It is not easy to see the benefit of the bound on efficiency (i.e., Thm 2). The bound is novel, but Algorithm 1 only uses the first term of (8). What is the algorithmic benefit of the efficiency bound in Thm 2?
2. I assume that this paper uses the PAC-style ICP (4) as the baselines (i.e., “the standard ICP approach” and “a learned ICP approach”). However, (4) is known to be loose (as clearly mentioned in Vovk [2012]). The same paper also provides a tighter ICP (i.e., Proposition 2b in Vovk [2012]), whose tightness (and efficiency) is well demonstrated in deep learning applications in [R1]. If the authors used the tighter PAC-style ICP, could “the standard ICP” and “a learned ICP” baselines be better than the proposed approach in Figure 4?
3. It is unclear what the benefit of PAC-Bayes is in achieving an efficient prediction set. Based on Line 50 (i.e., “our framework allows us to utilize the entire calibration dataset…”), I initially thought that the proposed approach fully utilizes the calibration set (while direct approaches require splitting calibration sets into two). However, the proposed approach also needs to split the calibration sets for prior learning (i.e., Line 238). At this point, I was unsure of the benefit of combining PAC-Bayes with ICP. My guess is that simultaneous optimization of a score function distribution Q and threshold \hat{tau} is one factor in achieving better efficiency than the baselines, but I want to hear the authors' clarification.
4. The nonconformity score function of the standard ICP in regression is quite weak, which may mislead readers. The score function with only the residual (i.e., Line 282) is not recommended, as it does not encode per-sample uncertainty (as also mentioned in the paper). Instead, a variance-normalized score function (i.e., (8) in [R2], 3.6 in [R1], or 2.2.1 in [R3]) is a better choice, where the variance is simply trained via a training set. Given this better score function, I don’t think “the standard ICP” result in Figure 3 would be as poor as shown in the paper.
[R1] https://arxiv.org/abs/2001.00106
[R2] http://proceedings.mlr.press/v91/vovk18a/vovk18a.pdf
[R3] https://arxiv.org/abs/1612.01474
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Each question is associated with each weakness in Weaknesses.
1. What is the algorithmic benefit of the efficiency bound in Thm2?
2. If the authors used the tighter PAC-style ICP (i.e., Proposition 2b in Vovk [2012]), could “the standard ICP” and “a learned ICP” baselines be better than the proposed approach in Figure 4?
3. Could you clarify the benefit of PAC-Bayes in achieving an efficient prediction set?
4. If the authors use the variance normalized score function, do we still observe the same limitation of “the standard ICP” in Figure 3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are clearly stated in Conclusion, and I agree with that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and valuable feedback. We address each of your questions below:
- *Efficiency bound (Theorem 2):* As you noted, we don’t directly optimize the generalization bound in Theorem 2 in our algorithm. This is due to the fact that the optimizable part of the second term in (8) is just the KL divergence, and the coverage constraint already regularizes the KL value. We found that in practice, optimizing through the bound did not improve performance, and excluding it led to a simpler algorithm. However, we did use the generalization bound when comparing between hyperparameter settings, reporting test-time results on only the hyperparameters that achieved the best generalization bound according to Theorem 2. We use a union bound argument, using $\delta’=\delta / K$ when evaluating bounds, where K is the number of hyperparameter choices.
- *Vovk Prop 2b*: First, we note that the mapping from our notation to that in [Vovk 2012] is $\delta \rightarrow \delta, \hat{\alpha} \rightarrow \epsilon$, and $\alpha \rightarrow E$. We will use our notation to discuss comparisons between our choices and those in Vovk 2012.
- Indeed for the learned and standard ICP baselines in our experiments, we follow the recommendation of [Vovk 2012] and use the result of Prop 2a as a recipe to construct a $(\alpha, \delta)$-valid conformal predictor. Specifically, given $\alpha$ and $\delta$, this recipe involves constructing a set predictor to achieve $1-\hat{\alpha}$ coverage on the calibration dataset, where $\hat{\alpha} = \alpha - \sqrt{(-\ln \delta)/(2n)}$. Then, we apply Prop 2b once using the chosen $\hat{\alpha}$ and $\delta$ to guarantee $(\alpha,\delta)$-validity of the resulting predictor.
- We implemented the Prop 2b bound and used it similarly to [R1] to find the largest $\hat{\alpha}$ for which the equation in Prop 2b holds for our desired $\alpha$ and $\delta$, and include the results in the attached pdf under the label (Vovk 2b). While the less conservative $\hat{\alpha}$ leads to tighter prediction sets, empirically we find that the coverage guarantee is violated, especially for the learned version. Indeed, applying a binomial test on the rate of test-time coverage violation for the Learned (Vovk 2b) trial evaluated on N=2500 calibration datapoints split at a ratio of 0.5/0.5 over 6 independent seeds, we obtain a p-value of 0.0327 for the null hypothesis that the probability of failure (coverage < $1- \alpha = 0.90$) is less than $\delta=0.05$, indicating we should reject the null hypothesis that the coverage guarantee is valid for this trial. We are investigating the reason for this discrepancy, to identify if there is an aspect of our approach which breaks the assumptions required by Prop 2b. In any case, we thank the reviewer for highlighting this tighter bound, and we will investigate if a similar tightening can be translated to a PAC-Bayes analysis.
- *Benefit of PAC-Bayes:* Prior works learning score functions for conformal prediction use only a portion of the calibration dataset to optimize the score function to achieve good efficiency in prediction sets whose threshold $\tau$ is computed on the same portion of the calibration dataset. However, especially in the low-data regime, this can lead to cases where the score function over-fits to the portion of calibration data – then, when recalibrating the model (i.e. re-computing the threshold $\tau$ on the remaining portion of the dataset), the score function may behave erratically, leading to a different $\tau$ value and potentially worse efficiency. In contrast, the PAC-Bayes approach allows further optimization of the score function on the second part of the dataset. In this way, it can mitigate the impacts of overfitting in the low-data regime.
- *Variance-normalized score function:* Our aim in this work is to address situations where calibration data faithfully representing test-time conditions is limited, and one may want to simultaneously fine-tune models on this data while also producing calibrated prediction sets. To illustrate this set-up, we considered a simplified synthetic example in which the pretrained model did not produce variance outputs, but the heteroskedasticity in the calibration dataset was informative and, if effectively incorporated, could lead to more efficient prediction sets. Our goal with the illustrative model was not to demonstrate the real-world use of the method, but rather to highlight that our approach can more effectively learn new concepts and produce calibrated prediction sets from a limited set of calibration data. While in practice, training a model to produce variance outputs would yield tighter prediction sets to begin with, this would not illustrate the utility of further fine-tuning a model on calibration data. Indeed, the utility of this approach is highest when calibration data is a scarce resource, e.g., when we only have a limited set of samples from a shifted test data-distribution. We will make this point clearer in the revision.
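To make the contrast between the plain-residual score and the variance-normalized score concrete, the following minimal sketch (a hypothetical stand-in, not our actual implementation; the data generator, $\mu$, and $\sigma$ are illustrative) calibrates the normalized score $|y - \mu(x)| / \sigma(x)$ on synthetic heteroskedastic data, using the conservative level $\hat{\alpha} = \alpha - \sqrt{(-\ln \delta)/(2n)}$ from the Prop 2a recipe discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic heteroskedastic regression: noise scale grows with |x|.
    x = rng.uniform(-2.0, 2.0, n)
    y = np.sin(x) + rng.normal(0.0, 0.1 + 0.4 * np.abs(x))
    return x, y

mu = np.sin                                # stand-in pretrained mean model
sigma = lambda x: 0.1 + 0.4 * np.abs(x)    # stand-in per-sample scale model

alpha, delta = 0.10, 0.05
x_cal, y_cal = make_data(1000)
n = len(x_cal)

# Conservative miscoverage level from the Prop 2a recipe.
alpha_hat = alpha - np.sqrt(-np.log(delta) / (2 * n))

# Variance-normalized nonconformity score and its calibration quantile.
scores = np.abs(y_cal - mu(x_cal)) / sigma(x_cal)
k = int(np.ceil((n + 1) * (1 - alpha_hat)))
tau = np.sort(scores)[k - 1]

# Prediction set at x is mu(x) +/- tau * sigma(x): its width adapts to
# the local noise level, unlike the plain-residual score.
x_test, y_test = make_data(5000)
covered = np.abs(y_test - mu(x_test)) / sigma(x_test) <= tau
print(covered.mean())  # exceeds 1 - alpha = 0.90 with high probability
```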
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thanks for the detailed responses. I wish to initiate the discussion.
- *Efficiency bound (Theorem 2)*: the answer is a bit confusing. Based on the answer, the second term is useful in hyper-parameter selection but not useful for test-time performance, which sounds inconsistent. Usually, we use a loss function in hyper-parameter selection, and the best model is also good w.r.t. the same loss function. Can you clarify the counter-intuitive usage? What is the hyper-parameter selection procedure? What is $K$ (which does not appear in the paper)? Can I say that you're using $\delta'$ in Algorithm 1, instead of $\delta$?
- *Vovk Prop 2b*: To my understanding (based on the empirical coverage violation), the authors' implementation of *Vovk Prop 2b* has some issues. I'd recommend drawing the whisker plot such that the open interval below the bottom tip of the whisker contains $100\delta$% of samples --- this is the meaning of $\delta$. If the bottom of the whisker is still below the desired coverage rate, then that is a sign of implementation bugs. I think a comparison to this tighter baseline should be provided to justify the efficacy of the proposed approach.
- *Benefit of PAC-Bayes*: I'm not quite convinced by the provided intuitive explanation. For example, consider the claim that "the score function over-fits to the portion of calibration data – then, when recalibrating the model". For the low-data regime, I agree that the score function overfits, but during the recalibration, the low sample size is still accounted for (i.e., Vovk Prop 2a/2b are a function of the sample size). I think we need to see empirical benefits of PAC-Bayes compared to the **fixed** "Learned (Vovk 2b)" first for further discussion.
- *Variance-normalized score function*: I guess my concern was not addressed. In other words, my point is that "Learned ICP + Recal" essentially wastes calibration samples, and that's why its efficiency is poor (i.e., it is a weaker baseline). A better way is considering the variance-normalized score function and recalibration with a **full** calibration set (i.e., a stronger baseline). This should be the starting point. Given this stronger baseline, I would be excited if the PAC-Bayes results outperformed the stronger baseline.
In short, I found the baseline used in this paper is weak in two senses: (1) a weak scoring function (i.e., the paper does not use the variance-normalized score function) and (2) a weak ICP PAC bound (i.e., the paper does not use Vovk Proposition 2b). This does not convince myself whether the PAC-Bayes can serve as an exciting new bound. So, I'll maintain my score at this point, but I hope to see a better PAC-Bayes bound!
---
Reply to Comment 1.1.1:
Title: Response to questions
Comment: Thanks for starting the discussion. Below are our responses to your questions, which we hope provide some clarity.
*Efficiency bound* We wish to clarify that the model with the lowest efficiency bound in general also had the best test-time performance; hence the bound served well as a loss to select between hyperparameter settings. Our claim in our comment was addressing why we did not use the full bound in our optimization procedure: we found that differentiating through the full bound during optimization did not significantly improve performance, as the constraint imposed by Theorem 1 ensured the KL divergence remained small.
$K$ is the number of hyperparameter options considered. For the hyperparameter selection, we considered choosing a value for $\hat{\alpha}$ among $K=5$ values between 0.02 and 0.08. To do so, we optimized a score function for each setting on the same dataset using $\delta’ = \delta/K = 0.05/5 = 0.01$. Then, we evaluated the efficiency bound for each of the 5 models using this value of $\delta’$ in place of $\delta$. In this way, the bound holds for each of the models independently with probability $1 - \delta’$, i.e. $1-\delta/K$. Therefore, by the union bound, the probability of all $K=5$ bounds holding jointly must be greater than $1-\delta$, which is our desired guarantee. Thank you for pointing out that our discussion of hyperparameter selection can be further clarified; we will update it in the revised manuscript.
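The union-bound argument can be checked numerically with a small sketch (illustrative only, not code from our implementation): if each of the $K$ per-hyperparameter bounds fails with probability at most $\delta' = \delta/K$, the chance that at least one fails is at most $\delta$.

```python
import numpy as np

rng = np.random.default_rng(1)

delta, K = 0.05, 5
delta_prime = delta / K  # each per-hyperparameter bound uses delta' = 0.01

# Monte-Carlo check of the union bound: in the worst case each of the K
# bounds fails independently with probability delta'; the chance that at
# least one fails is then 1 - (1 - delta')**K <= K * delta' = delta.
trials = 200_000
fails = rng.random((trials, K)) < delta_prime
any_fail = fails.any(axis=1).mean()
print(any_fail)  # close to 1 - 0.99**5 ~= 0.049, below delta = 0.05

# Selection then simply picks the hyperparameter whose efficiency bound
# (evaluated at delta_prime) is smallest; the chosen model's bound still
# holds with probability at least 1 - delta.
```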
*Benefit of PAC-Bayes* While the recalibration procedure accounts for the low sample size by choosing a more conservative choice of $\hat{\alpha}$, this is to ensure that the *coverage* guarantees still hold on test-data. The sample-size considerations in Vovk 2a and 2b make no claims to ensure the efficiency of the sets (e.g. their size) also generalizes well. This is what we are referring to by overfitting – we might learn a score function that achieves efficient set sizes on the particular data points that are in the calibration set, but produce inefficient sets on test data. In our problem setting, our aim is to learn sets whose coverage *and* efficiency both generalize well to the test data distribution. We hope this provides some clarity on the novelty of our approach over prior work.
*Variance-normalized score function* Indeed, the Learned ICP + Recal appears to “waste” samples by only learning the variance normalization on part of the calibration data, and using the rest to select the threshold. However, prior work **requires** this recalibration on a **held-out** dataset in order to provide the PAC guarantee on test-time coverage. If we were to use the full calibration dataset to learn the variance normalization, and then recalibrate (i.e. compute the threshold $\tau$) on the same dataset, the guarantees on test-time coverage would not hold. This is because after optimization, the score function is no longer independent of the calibration dataset. This apparent “waste” is precisely the limitation of prior work we aim to address; by leveraging PAC-Bayes theory, we can use the **full** dataset to optimize the score function for efficiency, without losing test-time guarantees.
Nevertheless, we agree that a comparison to this version would help make this point more clear, and we will update our manuscript to include these results.
*Vovk 2b* Thank you for your suggestion. We will update the visualization for the results we include in the revised version. | Summary: This work first derives coverage and efficiency generalization bound using PAC-Bayes for inductive conformal prediction. Furthermore, a practical algorithm, basing bayesian learning and conformal training, is proposed to learn efficient nonconformity score functions. In experiment section, the proposed algorithm is shown to outperform two baselines (original ICP and learned ICP) in some scenarios.
Strengths: 1, This work is mathematically solid, combining PAC-Bayes theory with inductive conformal prediction. In particular, it derives an efficiency loss bound using PAC-Bayes for ICP.
2, In the experiment section, the proposed method outperforms other two baselines (original ICP and learned ICP) in many scenarios, demonstrating high efficiency, especially when limited calibration data is available.
3, The manuscript is well-written; and proposed method is mathematically sound.
Weaknesses: 1, In the abstract, it is claimed that "the entire calibration dataset" can be utilized. In the practical algorithm, splitting the calibration dataset is still needed (to derive a good prior). As a result, in my opinion, the mentioned claim is at least controversial.
2, Though not mentioned, the proposed algorithm could be expensive in training and inference, as multiple models need to be sampled.
3, The empirical evaluation is insufficient. Only one synthetic dataset and one practical dataset are used. It would be appreciated if experiments on more practical datasets were included. With more empirical results, it would be more convincing to claim that the proposed method produces more efficient prediction sets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1, How are the two derived bounds connected to the proposed algorithm?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1, The computational complexity might be so high that the proposed method cannot trivially extend to larger datasets and models.
2, Without good prior, the derived bounds might be too loose.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and helpful comments. We address your criticisms and questions below:
- *Data-splitting:* As you point out, in the experiments we split the data, using part of it to tune the prior, and the rest to simultaneously tune the posterior and also achieve test-time coverage guarantees. However, both splits of data are used to optimize the score function. This is in contrast to prior work in learned conformal prediction, which optimizes the score function on one set of data, and then holds it fixed when using the remainder to compute the threshold needed to achieve test-time coverage guarantees.
- *Sampling multiple models:* Indeed, PAC-Bayes style generalization bounds require reasoning over stochastic models rather than individual hypotheses. However, at test time, each data point is only evaluated under a single model: test-time computational complexity at a single-sample level is therefore similar to a non-stochastic approach. Indeed, our guarantee on test-time coverage holds in expectation over samples drawn from the posterior and data drawn from the data distribution, so as long as each new input is evaluated using a new sample, our guarantee holds. However, one important limitation is that we require recomputing the threshold $\tau$ for each new sample. For small calibration sets and models, this can be done in parallel at test time. Alternatively, one can pre-sample K models from the posterior, and use the calibration data to pre-compute the threshold $\tau$ for each sample. This offloads the computational burden from test-time evaluation, but adds to the memory burden, as multiple samples of model parameters need to be stored. We will emphasize this limitation in the updated manuscript and more clearly discuss this tradeoff.
- *Empirical evaluation:* We believe the core of this paper is our theoretical contributions. In particular, in contrast to standard ML settings where the loss is defined per-datapoint, in conformal prediction, the efficiency and coverage on a particular data point depends on the threshold, which itself is computed as a function of the entire calibration dataset. This dependency precludes direct application of standard PAC-Bayes proofs, and requires a novel theoretical approach, which we consider the main contribution of our work. As such, while we agree that more experiments would always be better, we would like to stress that our goal in the empirical evaluation was not to demonstrate the immediate practical applicability of this approach, but rather to highlight the potential of our theoretical contributions. We believe our experiments demonstrate that our approach can already yield modest benefits in terms of efficiency in the low-data regime, and these benefits have the potential to become more pronounced thanks to future advances in the tightness of PAC bounds and advances in prior and posterior selection.
- *How are generalization bounds used?* We produce two main generalization bounds. The bound on coverage (Theorem 1) is used to derive Corollary 2.1, which is used directly in the algorithm to compute the KL budget allowed for optimizing the posterior as a function of the desired test-time coverage guarantee. The bound on efficiency (Theorem 2) is used to select between different hyperparameter settings: we choose the model with the best bound on generalization efficiency among K hyperparameter choices. In our experiments, we evaluate these bounds using $\delta’ = \delta/K$, ensuring through a union bound argument that the generalization bounds hold for the best-performing model with probability greater than $1-\delta$.
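To illustrate the pre-sampling strategy mentioned in our response (drawing K posterior samples ahead of time and pre-computing a threshold for each), here is a toy sketch; the "posterior" over score functions and the linear regressor are hypothetical stand-ins chosen only to keep the example self-contained, not our actual model:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_hat, n_cal, K = 0.08, 500, 10

def sample_score_fn():
    # Stand-in for drawing score-function parameters from the posterior Q:
    # each sample is a slightly perturbed regressor with residual score.
    w = rng.normal(1.0, 0.05)
    return lambda x, y: np.abs(y - w * x)

x_cal = rng.normal(size=n_cal)
y_cal = x_cal + rng.normal(0.0, 0.5, n_cal)

# Pre-sample K score functions and pre-compute a threshold tau_k for each,
# so no calibration pass is needed at test time (at the cost of storing
# K parameter samples).
models, taus = [], []
k_idx = int(np.ceil((n_cal + 1) * (1 - alpha_hat)))
for _ in range(K):
    s = sample_score_fn()
    models.append(s)
    taus.append(np.sort(s(x_cal, y_cal))[k_idx - 1])

# At test time, each new input is evaluated under one pre-drawn sample.
x_t = rng.normal(size=2000)
y_t = x_t + rng.normal(0.0, 0.5, 2000)
idx = rng.integers(0, K, size=2000)
covered = np.array([models[i](xv, yv) <= taus[i]
                    for i, xv, yv in zip(idx, x_t, y_t)])
print(covered.mean())  # close to 1 - alpha_hat = 0.92 in expectation
```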
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my questions and concerns!
I agree that the core of this paper is the theoretical contribution, which is novel by itself.
However, since the results are only on simple datasets, it is still unclear to me how useful these bounds would be, considering that this method would induce higher computational complexity.
As a result, I will raise my score to borderline accept. | Summary: The efficiency of set-valued predictors is crucial and less explored. This paper uses the framework of PAC-Bayes to obtain generalization bounds on both the coverage and the efficiency of set-valued predictors. The authors also propose a novel algorithm to optimize the efficiency without a separate hold-out dataset. They provide a theoretical guarantee for the proposed method and empirically verify its superiority.
Strengths: - This paper uses the framework of PAC-Bayes to analyze the efficiency of inductive conformal prediction, which is novel and less explored.
- The algorithm proposed by this paper does not require a separate hold-out dataset, while still having a theoretical guarantee of coverage.
- This paper is well organized and the presentations are clear to me.
Weaknesses: - To achieve the test-time coverage, the key constraint is shown in Corollary 2.1. However, the authors do not show whether it is satisfied in the experiments.
- In the experiments, the authors show the average size of the prediction set, but I think the marginal coverage evaluated on the test data in the experiments is also important, and does a smaller average size of the prediction set empirically lead to worse coverage?
- In the experiments, the description of the ***learned*** baseline is unclear. Do the authors train the model using the method proposed in [1]? Meanwhile, the authors miss a recent work [2] that aims to output a smaller conformal prediction set with higher conditional coverage. [1] and [2] should be chosen as baselines in the experiments.
[1] David Stutz, Krishnamurthy Dvijotham, Ali Taylan Cemgil, Arnaud Doucet. Learning Optimal Conformal Classifiers. ICLR 2022.
[2] Bat-Sheva Einbinder, Yaniv Romano, Matteo Sesia, Yanfei Zhou. Training Uncertainty-Aware Classifiers with Conformalized Deep Learning. NeurIPS 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A limitation of this paper is that the theoretical results and proposed method can only be applied for a stochastic model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed feedback. We address your questions below:
- *Constraint Satisfaction:* The constrained optimization solver we use in our experiment ensures that solutions satisfy Corollary 2.1 by construction. In particular, we solve the constrained optimization problem with an augmented Lagrangian algorithm (approximating the inner optimization with stochastic gradient descent), and only return a solution if it is feasible (i.e., if the solution satisfies Corollary 2.1). We will clarify this in the updated manuscript, and include results on constraint values in the appendix.
- *Coverage results:* We focused on presenting efficiency results because all approaches yielded similar coverage values that exceeded the desired value; however, we did include marginal coverage results in the supplementary material – we will update the main paper to highlight that these results are included in the appendix. We agree that the relationship between test-time coverage and test-time efficiency is interesting, and we will include these results (shown in the attached pdf, Fig. 2) in the updated version of the manuscript.
- *Learned baseline:* Indeed, the learned baseline is a reimplementation of ConfTr [Stutz et al, 2022], extended to the regression case by using the radius of the prediction set as an efficiency loss. We will make this more clear in the updated version of the manuscript. We also thank the reviewer for pointing out the work of [Einbinder et al, 2022] – we will update our related work section to discuss it. Indeed, they tackle a similar problem as [Stutz et al, 2022] of training ML models to yield efficient sets after conformalization; however, their approach is limited to a specific nonconformity function designed for classification. In contrast, our work (as well as [Stutz et al, 2022]) works with any nonconformity function and extends beyond classification problems. For this reason, we cannot directly compare against [Einbinder et al, 2022] in our experiments.
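To sketch the augmented Lagrangian scheme described above on a toy problem (a hypothetical stand-in: our real objective is the efficiency loss over posterior parameters with the constraint of Corollary 2.1; here a scalar problem keeps the sketch self-contained):

```python
# Toy instance: minimize f(t) = t^2 subject to g(t) = 1 - t <= 0,
# whose constrained optimum is t* = 1.
f = lambda t: t ** 2
g = lambda t: 1.0 - t

lam, rho, t = 0.0, 10.0, 0.0   # multiplier, penalty weight, iterate
for _ in range(50):            # outer loop: multiplier updates
    for _ in range(200):       # inner loop: (stochastic) gradient descent
        viol = max(g(t) + lam / rho, 0.0)
        grad = 2.0 * t - rho * viol   # d/dt [ f(t) + (rho/2) * viol**2 ]
        t -= 0.01 * grad
    lam = max(0.0, lam + rho * g(t))  # standard multiplier update

# Return the solution only if it is feasible (up to tolerance), mirroring
# the feasibility check in our implementation.
feasible = g(t) <= 1e-6
print(t, feasible)
```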
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, and I decide to raise my score to a weak accept. | Summary: Accurate uncertainty estimation is crucial in building robust and trustworthy machine learning systems. This paper utilizes PAC-Bayes theory and inductive conformal prediction (ICP) to develop a practical algorithm that fine-tunes parameters and score functions using calibration data, while ensuring inference coverage and efficiency. Empirical validation of this work shows that it is effective and outperforms prior methods.
Strengths: The paper presents a well-executed translation from theory to practical algorithm and provides a comprehensive analysis of related work. It also discusses the practical considerations in their proposed algorithm and effectively demonstrates its ability to make efficient predictions and achieve good generalization certificates in two experiments.
Weaknesses: Here are some suggestions for improving the original content:
1. It would be beneficial to provide more information and explanation in section 5.2 to help readers better understand the algorithm.
2. Consider including an experiment that explores the tradeoff between desired coverage and prediction performance. This would provide valuable insights into the algorithm's effectiveness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am interested in understanding the extent to which distribution shift can affect the performance of calibration and its coverage. Adding more noise to MNIST images can increase the level of uncertainty, making it more challenging to ensure accuracy.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper identifies various limitations and proposes practical solutions to address them. The significance of this work lies in its contribution to achieving a more reliable and secure ML system. Particularly, it demonstrates its effectiveness in providing assurance to enhance the calibration of uncertainty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and insightful criticisms and comments. We address specific comments below:
- *Section 5.2:* We acknowledge that section 5.2 is a very succinct summary of the optimization approach. We will update our manuscript to go into more details of the algorithm, and add a section in the appendix which goes into more detail and reintroduces the concepts from [Stutz et al., 2022] that we re-use in our implementation to make the manuscript more self-contained.
- *Coverage/Efficiency Tradeoff:* The coverage/efficiency tradeoff is a fundamental tradeoff for all conformal prediction approaches, and we agree that plots showing this tradeoff would be a welcome addition to the results presented in our paper. We plotted this trade-off across all dataset sizes in the pdf attached to the main response (Fig. 3), and we will include it in our updated version. Notably, while all methods see a tradeoff between coverage and efficiency, our PAC-Bayes method consistently lies on the Pareto frontier.
- *Distribution Shifts:* Indeed, distribution shifts can have an impact on calibration. Prior work evaluating uncertainty quantification under distribution shift has demonstrated that neural network uncertainty estimates can struggle to generalize as the data distribution is shifted [1]. For this reason, we chose a distribution shift example to study our approach: while a network trained on clean MNIST does produce heuristic uncertainty estimates (through its predicted class probabilities), these estimates are unlikely to generalize well to corrupted data. Hence, if we have a limited amount of data from the test (shifted) domain, we would ideally like to use this data both to tune the network to produce better uncertainty signals and to produce calibrated uncertainty sets, as directly using the untuned network could yield loose prediction sets.
Now, a separate challenge is handling distribution shift *after* the calibration procedure. Robustifying standard conformal prediction to future distribution shift is an active area of research [2], as is literature extending PAC-Bayes to settings where the test-data distribution differs from the training data distribution [3]. We considered this challenge to be out-of-scope for this work, but we wholeheartedly agree that this is an important problem and a worthwhile direction for future extensions of this work.
[1] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift." Advances in neural information processing systems 32 (2019).
[2] Tibshirani, Ryan J., et al. "Conformal prediction under covariate shift." Advances in neural information processing systems 32 (2019).
[3] Germain, Pascal, et al. "A new PAC-Bayesian perspective on domain adaptation." International conference on machine learning. PMLR, 2016.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Dear Authors,
Thanks for your response. I have read your Figure 3 about the coverage/efficiency tradeoff and your clarifications about distribution shifts. I keep my score and support accepting this paper. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their time and valuable feedback on our manuscript. We are glad the reviewers all found our paper clear, sound, and well-structured. Furthermore, we are glad to hear that the reviewers appreciated the novelty of our approach in applying PAC-Bayes generalization bounds to the setting of conformal prediction.
First, we would like to emphasize that our primary goal, as reviewer jcN9 highlighted, is to make a theoretical contribution which allows **self-certified learning of conformal predictors**. Indeed, all prior work on optimizing conformal predictors using data required heuristically splitting data and using part of the data to tune the model and score function parameters, and the rest to compute the threshold needed to achieve the desired test-time coverage guarantees. Our work uses PAC-Bayes generalization theory to show that data-splitting is **not** strictly required to achieve test-time coverage: instead, a coverage guarantee can be achieved by constraining the data-driven optimization in terms of KL divergence to a prior. We would like to reiterate that these results are novel, and more than a direct application of standard PAC-Bayes results: unlike typical loss functions which are defined per-datapoint, the coverage and efficiency of the prediction for a single input depend on the entire calibration dataset, since the threshold defining the prediction sets is computed as an empirical quantile. Handling this required novel extensions to classical PAC-Bayes theory.
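To make the data-splitting baseline concrete: standard split/inductive conformal prediction computes its threshold as an empirical quantile of calibration scores, which is exactly the dataset-wide dependence discussed above. The following is a generic sketch of that classical calibration step (illustrative only, not our PAC-Bayes method; the score function and labels are placeholders):

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    # Finite-sample conformal quantile: with n calibration scores, take the
    # ceil((n + 1) * (1 - alpha))-th smallest score as the threshold tau,
    # which yields marginal coverage >= 1 - alpha for exchangeable data.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(score_fn, x, labels, tau):
    # The prediction set: all candidate labels whose nonconformity
    # score falls within the calibrated threshold.
    return [y for y in labels if score_fn(x, y) <= tau]
```

Note how `tau` depends on the entire calibration set at once, rather than on any single datapoint, which is the property that precludes a direct per-datapoint PAC-Bayes argument.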
Next, we would like to address shared concerns regarding the empirical evaluation of our practical algorithm. Many reviewers pointed out that in our evaluation, we considered using a portion of the available data to first optimize a prior, before then performing a second phase of optimization wherein we retain generalization guarantees on coverage and efficiency by enforcing a KL constraint to the (learned) prior. Indeed, as we state in the paper, this is a common practice for many works applying PAC-Bayes techniques to neural network models: while generalization bounds on loss are valid for any fixed prior, state-of-the-art methods achieving generalization performance competitive with empirical risk minimization (ERM) often use a portion of data to tune a prior [1,2]. Admittedly, this has the consequence of reducing the data that is used to compute the threshold for the conformal prediction sets. Nevertheless, we would like to stress that, in contrast to the data-splitting employed in the learned ICP baseline (and prior works), our approach uses **all** available data to optimize the score function and model parameters, while existing approaches only optimize parameters on one fraction of the dataset. Specifically, we use the first part to optimize the prior for the score function, and then use the second part to optimize the posterior. Earlier data-splitting techniques hold the score function fixed in the second step. Finally, we believe that future advances in applying PAC-Bayes to neural networks, both in terms of prior selection as well as posterior representation, will be complementary to the techniques in our paper and help enhance the practical utility of the algorithm.
Overall, we believe our core theoretical contributions are novel and of interest to the broader NeurIPS community. Furthermore, we believe our experiments demonstrate that even with a very restrictive class of prior and posterior (diagonal Gaussian), our theoretical contributions yield an algorithm for learning conformal predictors that produces efficient prediction sets in practice with guaranteed test-time coverage, demonstrating the potential of the approach to be useful in the low-data regime. We refer the reviewers to our individual responses for answers to the other questions raised in the reviews.
[1] Ambroladze, Amiran, Emilio Parrado-Hernández, and John Shawe-Taylor. "Tighter pac-bayes bounds." Advances in neural information processing systems 19 (2006).
[2] Perez-Ortiz, Maria, et al. "Learning PAC-Bayes priors for probabilistic neural networks." arXiv preprint arXiv:2109.10304 (2021).
Pdf: /pdf/c47cda24cee789bf796ac204bfc7fa01b90923e1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers the setting of inductive conformal prediction (ICP). In this setting, given a learned point-predictor, a calibration set and a scoring function is used to obtain a set-valued predictor that contains the correct label with high probability. A drawback of previous approaches is that, for ICP guarantees to be valid, the scoring function has to be fixed, or part of the calibration set needs to be used to learn the scoring function, while the remaining part is used to learn the coverage sets. In order to solve this, the paper proposes to use PAC-Bayesian bounds for self-certified learning, so that the scoring function can be learned from the calibration data, while using the same calibration data to obtain test-time bounds by using generalization guarantees. The approach relies on new bounds, which are similar to known PAC-Bayesian bounds but specialized to the ICP setting, and data-dependent Gaussian priors according to a standard procedure. Numerical results demonstrate the potential of the proposed approach to improve ICP in low-data settings.
**Post-Rebuttal edit**: I read the author rebuttal, which addressed my questions, and consequently updated the rating (as stated below).
Strengths: — The paper proposes a creative and relevant use of the PAC-Bayesian framework for a practical use-case
— The presentation is pedagogic, providing an essentially self-contained summary of the necessary background from ICP and PAC-Bayes
— In addition to just presenting bounds, the paper suggests a well-motivated practical algorithm that is shown to have potential to improve results for certain settings. The interpretation of the PAC-Bayesian bound as providing a "KL budget" is neat.
Weaknesses: — The discussion of the PAC-Bayesian bounds does not make it entirely clear what the relation to prior work is (see questions below).
— The improvements that are seen in the numerical experiments appear to be marginal, and mostly within the error bars. The non-monotonicity in the order between "Learned" and "PAC-Bayes" in Figure 4 (middle) appears to highlight that they are essentially indistinguishable. Also, it was not entirely clear to me how Figure 3 demonstrated an improvement for PAC-Bayes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: — Theorem 1, in terms of the binary KL, is very similar to the Maurer-Langford-Seeger bound (see, e.g., Maurer (2004)). In Maurer's result, a smaller $B(N)$ is found. However, I don't think that the two arguments of your binary KL are related by an expectation as in Maurer's derivation, which necessitates the path that you take. Is this interpretation correct? I think the paper would benefit by clarifying the relation to these classical PAC-Bayesian bounds.
— The derivation of Theorem 3, once the step has been taken from $\mathcal L$ to $R$, appears to be the same as a standard PAC-Bayesian bound via "sub-Gaussian"-type concentration for bounded random variables. Is there any other difference in terms of the proof?
— In the introduction, you refer to PAC-Bayesian priors as data-independent, which may cause some confusion since you use data-dependent priors in your experiments. Perhaps a note should be made that they can be data-dependent in certain circumstances.
---
Maurer, A Note on the PAC Bayesian Theorem, 2004
___
---
Minor comments:
Eq (5): Should this be averaged over the posterior? This is typically the case for PAC-Bayes bounds.
Line 196: “for all for all”
Line 197: bounuded -> bounded in footnote
Line 326: “we found that optimizing the yielded a poor prior” incomplete sentence
Donkser -> Donsker
Overfull margins in appendix
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately discussed. Perhaps it can be made clearer that the proposed approach does not yield particularly significant improvements over the "Learned" baseline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very thorough review and helpful comments! Below are answers to your specific questions:
- *Theorem 1:* This is a great observation. Indeed, in the Maurer-Langford-Seeger bound, the two arguments to the binary KL are the empirical risk and the generalization risk: for a fixed hypothesis, the generalization risk has no dependence on the empirical risk. However, in conformal prediction, the test-time prediction sets depend on both the hypothesis (in this case, the score function) as well as the threshold, which is a function of the sampled data. Indeed, unlike typical loss functions that are defined per-datapoint, coverage and efficiency of conformal predictors depend on the threshold computed as a quantile of the entire calibration dataset. This is the key difference that precludes the direct application of a Maurer style PAC-Bayesian bound, and necessitates a different approach which yields a different B(n) term. We will more explicitly highlight these similarities and differences in our updated manuscript.
- *Theorem 3:* Yes, our proof follows the standard steps after translating from $\mathcal{L}$ to $R$. We will make this clear in the updated manuscript.
- *Wording in introduction:* We agree that our original wording is a source of confusion. We will update our writing to clarify that the restriction is only that the prior is chosen independently from the data used to compute the generalization bound.
*Limitations:* We will update our discussion of limitations to make clear that our approach yields modest improvements relative to the learned baseline; we will instead highlight the theoretical contributions as noted above, and the potential of the approach to become more useful with future complementary advances in PAC-Bayes literature. Finally, thank you for pointing out the small errors; we apologize for not catching these in our submission and will correct them in our updated draft.
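As shared background for the binary-KL discussion above: MLS-style bounds are typically converted to explicit risk bounds by numerically inverting the binary KL divergence. The sketch below shows this standard machinery (it is generic, and independent of the specific $B(n)$ term derived in the paper):

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli parameters, with the 0 * log 0 = 0 convention.
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(q, p) + term(1 - q, 1 - p)

def kl_inverse_upper(q, budget, tol=1e-10):
    # Largest p >= q with kl(q || p) <= budget, found by bisection.
    # This turns a bound of the form kl(empirical, true) <= B into an
    # explicit upper bound on the true risk.
    lo, hi = q, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(q, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, with zero empirical risk the inverse has the closed form $1 - e^{-B}$, which the bisection recovers numerically.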
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. All my questions are clarified. As I am positive about this work and potential future extensions, I have increased my rating. | null | null | null | null | null | null |
SpikeBERT: A Language Spikformer Trained with Two-Stage Knowledge Distillation from BERT | Reject | Summary: This paper proposes SpikeBERT, a spiking-based BERT model for text classification. It employs LIF spiking neurons and surrogate gradients for backpropagation. The training method consists of a two-stage distillation process (pre-training + task-specific). The experiments conducted on different benchmarks for English and Chinese show that the proposed SpikeBERT achieves higher accuracy than prior spike-based language models and lower energy consumption than the (non-spiking) BERT.
Strengths: 1. The proposed method is novel and relevant to the community.
2. The technical sections are described clearly.
3. The experiments provide good-quality results.
Weaknesses: 1. The design decisions are not discussed in detail.
2. Unlike non-spiking BERT, there are scalability issues when increasing the depth.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Section 3, please discuss the design decisions made to devise the proposed method. Why choosing LIF neurons instead of other neuron models? Why these choices for the architectural parameters?
2. Since the SpikeBERT does not achieve a significant accuracy increase when increasing the depth, can the proposed method scale up and acquire similar properties like the emergent abilities in (non-spiking) large language models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have not been discussed by the authors, but there are no major limitations in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
#### **Q1: Why choosing LIF neurons instead of other neuron models? Why these choices for the architectural parameters?**
R1: Thank you for the reminder.
Firstly, the LIF neuron is one of the most widely used spiking neurons, and other neuron models can be seen as variants of it. Prior works [1][2] on spiking neural networks and natural language processing also chose the LIF neuron, so we follow them in using it.
Secondly, in order to distill knowledge from the teacher model more effectively, we use the same architectural parameters as the teacher model BERT-base: for example, the hidden dimension is 768 and the model depth is 12. However, the sentence length is not 512 due to limited GPU memory (see Q3).
We have added these explanations in the revised manuscript.
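For illustration, the discrete-time LIF dynamics referenced above can be sketched as follows. This is a generic hard-reset formulation commonly used in SNN software simulation; the hyperparameters are illustrative defaults, not our exact implementation:

```python
def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    # Discrete-time leaky integrate-and-fire (LIF) neuron: the membrane
    # potential leaks toward v_reset, integrates the input current at each
    # time step, and emits a spike (with hard reset) on crossing the threshold.
    v = v_reset
    spikes = []
    for x in inputs:          # one input current per time step
        v = v + (x - (v - v_reset)) / tau   # leaky integration
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset       # hard reset after firing
        else:
            spikes.append(0)
    return spikes
```

Because the threshold step is non-differentiable, training replaces its gradient with a smooth surrogate; the forward dynamics above are unchanged by that choice.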
#### **Q2: Can the proposed method scale up and acquire similar properties like the emergent abilities in (non-spiking) large language models?**
R2: Firstly, as discussed in Section 4.5, the accuracy of SpikeBERT is generally insensitive to model depth because gradient errors may accumulate with increasing depth due to the surrogate gradients. Moreover, the depths we used in the experiments were only 8, 12, and 18. Should you wish to explore potential performance fluctuations across a broader range of depths, we would be glad to conduct additional experiments.
Secondly, the emergent abilities of large language models were described by [3]. Large language models usually refer to large **generative** language models, which are mostly decoder-only and mainly used for text generation, such as ChatGPT. SpikeBERT is an encoder-only language **representation** model for language understanding tasks, similar to BERT. However, SpikeBERT can be easily extended to decoder-only models because both are Transformer-based.
#### **Q3: It seems the limitations have not been discussed by the authors.**
R3: Thank you very much for strong recognition of our work. Our Limitations section is not within the main body of the manuscript, but rather in **Appendix D** (Page 15 of the PDF file). In this section, we primarily discuss the following limitations:
Firstly, there are many neuromorphic event-based image datasets, such as CIFAR-10-DVS and DVS-128-Gesture, which perfectly align with the characteristics of SNN networks. However, such datasets are lacking in the natural language processing tasks.
Secondly, the data used for SNN training introduces an additional temporal dimension (T dimension) compared to traditional data. Limited by GPU memory, we had to reduce the length of input sentences, which significantly constrains the performance of our models.
##### Reference:
[1] Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
[2] Zhu R J, Zhao Q, Eshraghian J K. Spikegpt: Generative pre-trained language model with spiking neural networks[J]. arXiv preprint arXiv:2302.13939, 2023.
[3] Wei J, Tay Y, Bommasani R, et al. Emergent abilities of large language models[J]. arXiv preprint arXiv:2206.07682, 2022.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for answering the reviewers' questions.
In light of all the reviewers' comments and author responses, my score is confirmed. | Summary: This paper presents SpikeBERT, an improved version of the Spikformer spiking transformer model for language tasks. SpikeBERT utilizes a two-stage knowledge distillation method that combines pre-training with BERT and fine-tuning with task-specific data. Experimental results show that SpikeBERT outperforms state-of-the-art spiking neural networks and achieves comparable results to BERT on text classification tasks for English and Chinese, while consuming significantly less energy.
Strengths: 1 The writing is commendable.
2 The proposed approach is captivating and innovative, and I believe it will generate significant interest within the machine learning community.
3 The results are promising, particularly in achieving comparable performance to BERT in text classification.
Weaknesses: 1 Although SpikeBERT significantly reduces energy consumption during inference, the two-stage knowledge distillation process may introduce additional costs. It would be helpful if the authors could provide insights on this matter.
2 Considering other model architectures such as the OPT model families, does SpikeBERT possess the potential to be extended to these models? It would be interesting to explore the applicability of SpikeBERT beyond the BERT architecture.
3 I am curious about the evaluations of SpikeBERT on additional tasks such as GLUE, RACE, and SQuAD. It would provide valuable insights into the model's performance across a broader range of language processing tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
#### **Q1: Although SpikeBERT significantly reduces energy consumption during inference, the two-stage knowledge distillation process may introduce additional costs. It would be helpful if the authors could provide insights on this matter.**
R1: Yes, the energy consumption is mainly reduced at inference time. Once spiking neural networks (SNNs) are well trained in software, they can be deployed on neuromorphic hardware for energy-efficient computing. However, mature on-chip training solutions are not yet available, and on-chip training remains a great challenge due to the lack of efficient training algorithms, even in a software training environment. Thank you for pointing this out; we have made this clear in the revised version.
#### **Q2: Does SpikeBERT possess the potential to be extended to the models like OPT?**
R2: Yes, SpikeBERT can be easily extended to generative language models like OPT. As shown in Figure 1 (b), we have improved Spikformer in its architecture, making it possible to process language. Our SpikeBERT is an encoder-only language **representation** model for language understanding tasks. Therefore, we can easily extend it to decoder-only generative language models by (1) replacing the spiking self-attention module (bidirectional) with a spiking **masked** self-attention module (unidirectional), and (2) utilizing the pre-training strategies and corpora of GPT [2][3][4]. In fact, we have started research on language generation with SNNs, and we are also interested in training large language models (LLMs) based on SNNs.
#### **Q3: The evaluations of SpikeBERT on additional tasks such as GLUE, RACE, and SQuAD.**
R3: Thank you for your good suggestion.
(1) The selection of datasets in Table 1 is based on the fact that previous SNN-related work [5] also used these datasets, which facilitates meaningful comparisons.
(2) Actually, we measured the performance of SpikeBERT on the GLUE benchmark for our original manuscript, but we found that the performance of our baseline, SNN-TextCNN [5], on GLUE was extremely poor, and it even failed to converge on some tasks, so we think such a comparison would be unfair.
(3) The performances of the baseline model and SpikeBERT on GLUE benchmark are shown in the following table:
| | SST2 | MRPC | RTE | QNLI | MNLI-(m/mm) | QQP | CoLA | STS-B |
| -------------- | ----- | ----- | ----- | ----- | ----------- | ----- | --------------------- | ---------------------- |
| Metric | Acc | F1 | Acc | Acc | Acc | F1 | Matthew’s correlation | Spearman’s correlation |
| FT BERT | 92.31 | 89.80 | 69.31 | 90.70 | 83.82/83.41 | 90.51 | 60.00 | 89.41 |
| SNN-TextCNN[5] | 80.91 | 80.62 | 47.29 | 56.23 | 64.91/63.69 | 0.00 | -5.28 | 0.00 |
| SpikeBERT | 85.39 | 81.98 | 57.47 | 66.37 | 71.42/70.95 | 68.17 | 16.86 | 18.73 |
We find that the performance of the Natural Language Inference (NLI) task (QQP, QNLI, RTE) is not satisfactory. The possible reason is that we mainly focus on the semantic representation of a single sentence in the pre-training distillation stage. In the future, we intend to explore the incorporation of novel pre-training loss functions to enhance the model's ability to model sentence entailment effectively.
Reference:
[1] Zhou Z, Zhu Y, He C, et al. Spikformer: When spiking neural network meets transformer[J]. arXiv preprint arXiv:2209.15425, 2022.
[2] Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[J]. 2018.
[3] Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
[4] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in neural information processing systems, 2020, 33: 1877-1901.
[5] Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I would like to thank the authors for their rebuttal. I will maintain my original rating. | Summary: The paper presents SpikeBERT, an implementation of BERT-based models on a Spiking Neural Network (SNN) architecture, motivated by theoretical energy efficiency benefits.
The paper presents the transformer architecture and a two-stage distillation approach which first distills a general purpose BERT model into a general purpose SpikeBERT, then finetunes the SpikeBERT by distilling a finetuned BERT model.
The approach is evaluated on several text classification tasks on English and Chinese datasets, resulting in accuracies lower than, but comparable to, a standard finetuned BERT classifier.
A theoretical energy efficiency improvement is calculated, which I however find dubious (see weaknesses)
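For readers, the logit-matching objective underlying two-stage knowledge distillation pipelines of this kind is typically of the following form. This is a generic Hinton-style sketch; the paper's actual two-stage losses (e.g., any feature-alignment terms) may differ:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly matches the teacher's logits and strictly positive otherwise, which is what drives the student toward the teacher in both the pre-training and task-specific stages.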
Strengths: - Interesting work on an unusual architecture
Weaknesses: I am especially concerned about the claims of improved energy efficiency, which serve as the main motivation of the paper.
Starting from the introduction, where the author claim: "However, it requires too much computational power and energy to train and deploy state-of-the-art ANN models, leading to a consistent increase of energy consumption per model over the past decade. The energy consumption of large language models, such as ChatGPT[OpenAI, 2022] and GPT-4[OpenAI, 2023], is unfathomable even during inference."
It is clearly not true that "it requires too much computational power and energy to train and deploy state-of-the-art ANN models" since these models are in fact trained and deployed.
More concerning is the theoretical energy comparison of SpikeBERT and BERT (Section 4.4 and Appendix C), where the authors compare FLOPs for BERT and SOPs (spiking operations) for SpikeBERT, multiply by theoretical energy costs, and declare SpikeBERT the winner. The theoretical energy costs seem to be copied from other papers; following the citation chain, they appear to come from Yao et al. 2022, "Attention Spiking Neural Networks", where they are computed using data from Horowitz 2014, "1.1 Computing's energy problem (and what we can do about it)", under the assumption of 32-bit floating point operations on 45nm hardware. Modern GPUs use 7nm processes, and inference is often done with 8-bit floating point operations or less, so I wonder whether these numbers are obsolete.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Table 2: mJ is a measure of energy, not power
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Q1: The claims of improved energy efficiency.**
R1: We apologize for any confusion this may have caused.
Our claim of improved energy efficiency is that the energy consumption is mainly reduced **at inference time**. Once spiking neural networks (SNNs) are well trained in software, they can be deployed on neuromorphic hardware for energy-efficient computing. However, mature on-chip training solutions are not yet available, and on-chip training remains a great challenge due to the lack of efficient training algorithms, even in a software training environment. Thank you for pointing this out; we have made this clear in the revised version.
#### **Q2: The energy calculation equation is obsolete.**
R2: Thank you for your suggestion. We have revised the content of Appendix C based on "Attention Spiking Neural Networks" [1] and have incorporated the results of energy consumption calculations based on the new standards you suggested:
$
E = E_{MAC}\times\mathrm{FLOP}_{\text{SNN Conv}}^{1} + E_{AC}\times\left(\sum_{n=2}^{N}\mathrm{SOP}_{\text{SNN Conv}}^{n} + \sum_{m=1}^{M}\mathrm{SOP}_{\text{SNN FC}}^{m} + \sum_{l=1}^{L}\mathrm{SOP}_{\text{SSA}}^{l}\right)
$
, where $E_{MAC} = 4.6\,pJ$ and $E_{AC} = 0.9\,pJ$.
The original energy consumption figures have also been retained for reference.
#### **Q3: mJ is a measure of energy, not power**
R3: Thank you for the reminder. After a thorough investigation, we found your observation to be accurate. Similar errors exist in the works we have followed, such as [2] and [3]. We will rectify this issue in the revised manuscript.
Reference:
[1]Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023.
[2]Zhou Z, Zhu Y, He C, et al. Spikformer: When spiking neural network meets transformer[J]. arXiv preprint arXiv:2209.15425, 2022.
[3]Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Title: Theoretical Energy Consumption
Comment: According to [1][2], for spiking neural networks (SNNs), the theoretical energy consumption of layer $l$ can be calculated as: $Energy(l) = E_{AC}\times SOPs(l)$, where SOPs is the number of spike-based accumulate (AC) operations.
For classical artificial neural networks, the theoretical energy consumption required by the layer $b$ can be estimated by $Energy(b) = E_{MAC}\times FLOPs(b)$, where FLOPs is the floating point operations of $b$, which is the number of multiply-and-accumulate (MAC) operations.
We assume that the MAC and AC operations are implemented on 45nm hardware [3], where $E_{MAC} = 4.6\,pJ$ and $E_{AC} = 0.9\,pJ$.
The number of synaptic operations at layer $\xi$ of an SNN is estimated as $SOPs(\xi) = T \times \gamma \times FLOPs(\xi)$, where $T$ is the number of time steps required in the simulation and $\gamma$ is the firing rate of the input spike train of layer $\xi$.
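Numerically, the per-layer formulas above can be sketched as follows. The layer size, number of time steps, and firing rate below are illustrative values, not measurements from SpikeBERT:

```python
E_MAC = 4.6e-12  # joules per multiply-and-accumulate (45nm, Horowitz 2014)
E_AC = 0.9e-12   # joules per accumulate

def ann_layer_energy(flops):
    # Conventional ANN layer: every FLOP is a MAC operation.
    return E_MAC * flops

def snn_layer_energy(flops, T, firing_rate):
    # Spiking layer: SOPs = T * gamma * FLOPs, each costing one AC operation.
    return E_AC * T * firing_rate * flops

# Illustrative (made-up) numbers: a 1 GFLOP layer, T = 4 steps, 20% firing rate
ann_mj = ann_layer_energy(1e9) * 1e3                        # about 4.6 mJ
snn_mj = snn_layer_energy(1e9, T=4, firing_rate=0.2) * 1e3  # about 0.72 mJ
```

The spiking layer's advantage in this model thus scales with the MAC/AC cost ratio divided by $T \times \gamma$; with a large $T$ or a high firing rate, the advantage can shrink or vanish.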
Therefore, we estimate the theoretical energy consumption of SpikeBERT as follows:
$
E_{SpikeBERT} = E_{MAC}\times\mathrm{FLOP}_{\text{Emb}} + E_{AC}\times\left(\sum_{m=1}^{M}\mathrm{SOP}_{\text{SNN FC}}^{m} + \sum_{l=1}^{L}\mathrm{SOP}_{\text{SSA}}^{l}\right)
$
where $\mathrm{FLOP}_{\text{Emb}}$ is the number of floating point operations in the embedding layer of SpikeBERT.
The SOPs of the $M$ SNN fully connected (FC) layers and the $L$ SSA layers are then summed and multiplied by $E_{AC}$.
The energy consumption per sample of fine-tuned BERT and SpikeBERT during inference on 6 text classification benchmarks is as follows:
| Dataset | Model | Parameters(M) | FLOPs/SOPs(G) | Energy(mJ) | Energy Reduction | Accuracy(%) |
| -------- | --------- | ------------- | ------------- | ---------- | ----------------- | -------- |
| ChnSenti | FT BERT | 109 | 22.46 | 103.38 | | 89.48 |
| | SpikeBERT | 109 | 28.47 | 30.51 | 70.49%↓ | 86.36 |
| Waimai | FT BERT | 109 | 22.46 | 103.38 | | 90.27 |
| | SpikeBERT | 109 | 27.81 | 29.90 | 71.08%↓ | 89.66 |
| MR | FT BERT | 109 | 22.23 | 102.24 | | 87.63 |
| | SpikeBERT | 109 | 26.94 | 28.03 | 72.58%↓ | 80.69 |
| SST-2 | FT BERT | 109 | 22.23 | 102.24 | | 92.31 |
| | SpikeBERT | 109 | 27.46 | 28.54 | 72.09%↓ | 85.39 |
| Subj | FT BERT | 109 | 22.23 | 102.24 | | 95.90 |
| | SpikeBERT | 109 | 25.92 | 26.96 | 73.63%↓ | 93.00 |
| SST-5 | FT BERT | 109 | 22.23 | 102.24 | | 50.41 |
| | SpikeBERT | 109 | 26.01 | 27.33 | 73.27%↓ | 46.11 |
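The "Energy Reduction" column follows directly from the two energy values; a quick check of ours (helper name is hypothetical):

```python
def energy_reduction_pct(e_ann_mj, e_snn_mj):
    """Percent energy saved by the SNN relative to the ANN baseline."""
    return round((1 - e_snn_mj / e_ann_mj) * 100, 2)

print(energy_reduction_pct(103.38, 30.51))  # 70.49 — the ChnSenti row
print(energy_reduction_pct(103.38, 29.90))  # 71.08 — the Waimai row
```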
We will rewrite Table 2 and Appendix C in the revised manuscript.
Reference:
[1]Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023.
[2]Zhou Z, Zhu Y, He C, et al. Spikformer: When spiking neural network meets transformer[J]. arXiv preprint arXiv:2209.15425, 2022.
[3]Horowitz M. 1.1 computing's energy problem (and what we can do about it)[C]//2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC). IEEE, 2014: 10-14.
---
Reply to Comment 1.1.1:
Title: We look forward to hearing your insightful thoughts!
Comment: Dear reviewer XMQ4:
We greatly appreciate your time and effort in reviewing our work. We have carefully considered your suggestions and made the necessary revisions.
Specifically, we have incorporated your advice by modifying the energy consumption formula as per your recommendation. Additionally, we have recalculated the data presented in Table 2. We believe these updates address the concerns you raised in your previous review.
Please see our Official Comments named **Theoretical Energy Consumption** for more details.
Your expertise and insights are highly valued, and we are eager to ensure that our research meets the highest standards.
We are looking forward to hearing more about your insightful thoughts, and would be happy to answer more follow-up questions. | Summary: This work develops SpikeBERT, which extends Spikformer to perform language processing tasks, and proposes a two-stage knowledge distillation method for better training it. Experiments validate the improved accuracy of SpikeBERT over previous SNNs and the improved efficiency over vanilla BERT.
Strengths: 1. This work is the first transformer-based SNN for language processing tasks.
2. Experiments validate the achieved efficiency improvement over vanilla BERT.
Weaknesses: I have the following concerns about this work:
1. The novelty and technical contributions of this work are limited: the modifications from Spikformer to SpikeBERT are minor, and similar distillation schemes have been widely studied and adopted in efficient BERT works, e.g., TinyBERT, TernaryBERT, FastBERT, DistilBERT, etc. It is hard to tell the key technical contribution of this work.
2. The experimental validation is insufficient: It only validates the improved efficiency over vanilla BERT while the aforementioned efficient BERT variants are not benchmarked, making it not clear whether SpikeBERT is a practical efficient BERT option. In addition, the theoretical power is not enough for indicating the real-device efficiency and on-device measurement is highly desirable for benchmarking the aforementioned efficient BERT variants.
3. This work may violate the formatting regulations, i.e., it adds an extra appendix in the main manuscript. In addition, the citation format seems to not follow the official template.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I have listed my questions in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This work may violate the formatting regulations, i.e., it adds an extra appendix in the main manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
#### **Q1: The novelty and technical contributions of this work are limited.**
R1: (1) Model Architecture: As shown in Figure 1, in addition to introducing the word embedding layer and the layer normalization module, we also change the shape of the attention map yielded by Spiking Self Attention (SSA) to N × N rather than D × D, where D and N denote the dimensionality of the hidden layers and the length of the input, respectively. This change is crucial.
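To illustrate the shape argument, here is a toy sketch of ours (not the paper's code; all sizes are made up): with binary spike tensors, the attention scores $QK^\top$ contract over the hidden dimension D, so the map is N × N per time step.

```python
import numpy as np

# Toy dimensions: sequence length N, hidden size D, time steps T (our choices).
N, D, T = 8, 16, 4
rng = np.random.default_rng(0)
Q = (rng.random((T, N, D)) < 0.3).astype(float)  # binary spike tensor
K = (rng.random((T, N, D)) < 0.3).astype(float)  # binary spike tensor

attn_map = Q @ K.transpose(0, 2, 1)  # contract over D for each time step
print(attn_map.shape)                # (4, 8, 8): an N x N map per time step, not D x D
```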
(2) Knowledge Distillation: As discussed in the Introduction section, deep spiking neural networks (SNNs) directly trained with backpropagation through time (BPTT) using surrogate gradients can suffer from vanishing or exploding gradients due to “self-accumulating dynamics”. Meanwhile, through pre-experiments, we found that directly trained deep SNN language models struggle to converge, but they can learn better representations through distillation. Therefore, we chose knowledge distillation as our training method.
Additionally, this work is the first transformer-based SNN for language processing tasks and is among the first to show the feasibility of transferring the knowledge of BERT-like language models to spiking-based architectures that can achieve comparable results with much less energy consumption.
#### **Q2: The experimental validation is insufficient. The aforementioned efficient BERT variants are not benchmarked.**
R2: Efficient BERT variants like TinyBERT[1], DistilBERT[2], FastBERT[3] reduce energy consumption by reducing the number of model parameters during inference. As the model parameters decrease, FLOPs decrease as well.
However, our SpikeBERT has the same number of model parameters as BERT. Spiking neural networks (SNNs) do not lead to a reduction in the number of model parameters; however, they do effectively decrease synaptic operations (SOPs) so that they can reduce the energy consumption as mentioned in Appendix C. Once SNNs are well software-trained, they can be deployed on **neuromorphic hardware** for energy-efficient computing.
If we increased the number of model parameters of TinyBERT or DistilBERT to match ours, they would no longer lead to a reduction in energy consumption.
Thank you for the valuable reminder. We will add this explanation to the revised manuscript.
#### **Q3: This work may violate the formatting regulations**
R3: (1) Regarding the extra appendix in the main manuscript, we apologize for any confusion this may have caused. This was not intentional and can be attributed to an oversight during the final editing process. We assure you that we will promptly rectify this issue by removing the extra appendix and ensuring that the document adheres to the formatting regulations in the revised manuscript.
(2) Regarding the extra appendix, after conducting a thorough investigation, we find that the citation format we used is a common one, also used by [4][5], etc. NeurIPS does not provide a uniform citation format. For example, [6] uses numbers like “[1][2][3]” to cite papers in the manuscript, while [4][5] use author names and years like “(Ramesh et al., 2022)”. We will change our citation format as suggested in the revised version.
Reference:
[1] Jiao X, Yin Y, Shang L, et al. Tinybert: Distilling bert for natural language understanding[J]. arXiv preprint arXiv:1909.10351, 2019.
[2] Sanh V, Debut L, Chaumond J, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter[J]. arXiv preprint arXiv:1910.01108, 2019.
[3] Liu W, Zhou P, Zhao Z, et al. Fastbert: a self-distilling bert with adaptive inference time[J]. arXiv preprint arXiv:2004.02178, 2020.
[4] Wang L, Zhou Y, Wang Y, et al. Regularized molecular conformation fields[J]. Advances in Neural Information Processing Systems, 2022, 35: 18929-18941.
[5] Artemev A, An Y, Roeder T, et al. Memory safe computations with XLA compiler[J]. Advances in Neural Information Processing Systems, 2022, 35: 18970-18982.
[6] Liu F, Yang B, You C, et al. Retrieve, reason, and refine: Generating accurate and faithful patient instructions[J]. Advances in Neural Information Processing Systems, 2022, 35: 18864-18877.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank the author for their efforts in providing the rebuttal. However, my concerns regarding the novelty and the benchmark with efficient BERT variants are not well solved.
In particular, regarding the novelty, the size of the attention map in commonly adopted transformers is typically NxN and thus the proposed modification is intuitive; and knowledge distillation is also a widely adopted (almost "default") setting for model compression.
When it comes to benchmarking against efficient BERT variants, the authors' explanation did not address my concern. This is because it remains unclear what rationale they have for benchmarking energy consumption under the same number of parameters, rather than evaluating the overall energy-accuracy trade-off. The authors are expected to benchmark this trade-off to justify whether SpikeBERT is the way to go as compared to previous efficient BERTs.
I tend to hold my score for now and I'm willing to discuss with other reviewers to further adjust my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your insightful comments and the opportunity to provide further clarification:
**1. Novelty**:
(1) In previous research [1], only a simple TextCNN network was employed for single-sentence text classification experiments, which demonstrated the feasibility of using Spiking Neural Networks (SNNs) in Natural Language Processing (NLP) tasks. Our work, however, elevates the application of SNNs in NLP to a new level. Specifically, we have successfully implemented large-scale pretraining via knowledge distillation and achieved state-of-the-art (SOTA) results on a single-sentence classification dataset. This represents a significant advancement over the initial proof-of-concept provided in the earlier study. This work is the first Transformer-based SNN for language processing tasks, and we believe that this field will continue to evolve and advance.
(2) While it may seem like a natural idea to use knowledge distillation after direct training fails, applying it to spiking neural networks still poses significant challenges. First, how do we align the spiking signals of the student model with the floating-point signals of the teacher model? We addressed this issue by introducing an external “MLP+LayerNorm” layer to convert the spiking signals. Second, training spiking neural networks typically requires specific techniques to stabilize and accelerate convergence, so traditional knowledge distillation methods may not adapt well to these techniques, resulting in training difficulties or suboptimal performance. We employed many training tricks, some of which were not explicitly mentioned in the paper: dynamically adjusting the alignment signal weight ratios based on loss ratios, selectively ignoring representations from certain layers, and using longer warm-up periods. In practice, achieving a convergent spiking neural network language model is challenging because traditional knowledge distillation methods and SNN training methods are ineffective in these scenarios.
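The alignment step described above can be sketched roughly as follows. This is our own toy NumPy illustration, not the paper's module: the single linear projection standing in for the "MLP", the averaging over time steps, and all shapes are assumptions.

```python
import numpy as np

# Toy sizes (ours): sequence length N, hidden size D, time steps T.
N, D, T = 8, 16, 4
rng = np.random.default_rng(1)
student_spikes = (rng.random((T, N, D)) < 0.3).astype(float)  # binary, extra T dim
teacher_feats = rng.standard_normal((N, D))                   # floating-point features

# Collapse the time dimension (rate-coding average is our assumption),
# project, then apply a LayerNorm-style normalization:
W = 0.1 * rng.standard_normal((D, D))
h = student_spikes.mean(axis=0) @ W
h = (h - h.mean(-1, keepdims=True)) / (h.std(-1, keepdims=True) + 1e-5)

# MSE loss aligning converted student features to teacher features:
mse_alignment_loss = ((h - teacher_feats) ** 2).mean()
print(h.shape)  # (8, 16): same space as the teacher features
```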
**2. Benchmarking against efficient BERT variants**:
(1) We conducted experiments on the SST-2 dataset to compare our SpikeBERT with other BERT variants:
| Dataset | Model | **Parameters(M)** | FLOPs/SOPs(G) | Energy(mJ) | Energy Reduction | Accuracy(%) |
| ------- | ------------- | ----------------- | ------------- | ---------- | ----------------- | ----------- |
| SST-2 | FT BERT | 109.0 | 22.23 | 102.24 | - | 92.31 |
| | SpikeBERT | 109.0 | 27.46 | 28.54 | 72.09%↓ | 85.39 |
| | TinyBERT[2] | 67.0 | 11.30 | 52.01 | 49.13%↓ | 91.60 |
| | DistilBERT[3] | 52.2 | 7.60 | 34.98 | 65.78%↓ | 90.40 |
The results of the energy consumption calculations are based on the new standard Reviewer XMQ4 suggested.
(2) We believe that the energy efficiency achieved by spiking neural networks (SNNs) is distinct from methods such as knowledge distillation or model pruning, which aim to reduce model parameters. They represent **different technological pathways**. Spiking neural networks do not alter the model parameters but instead **introduce temporal signals** to enable the model to operate in a more biologically plausible manner on neuromorphic hardware. The energy reduction of spiking neural networks is still an estimate, and future advancements in hardware are expected to further decrease energy consumption while potentially accelerating inference [4]. This represents a promising avenue for artificial intelligence. In the future, we aim to reduce the firing rate of spikes in the network while maintaining accuracy, or to achieve higher performance at similar energy consumption levels.
We hope that these explanations address your concerns effectively, and we look forward to hearing your insightful thoughts! Thank you!
Reference:
[1]Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
[2] Jiao X, Yin Y, Shang L, et al. Tinybert: Distilling bert for natural language understanding[J]. arXiv preprint arXiv:1909.10351, 2019.
[3] Sanh V, Debut L, Chaumond J, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter[J]. arXiv preprint arXiv:1910.01108, 2019.
[4]Horowitz M. 1.1 computing's energy problem (and what we can do about it)[C]. // 2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC). IEEE, 2014: 10-14. | Rebuttal 1:
Rebuttal: We thank the reviewers for your insightful comments, which helped us to significantly improve the manuscript. The following major changes have been made in the revised manuscript:
(1) We have added the performance of SpikeBERT and baseline models on the GLUE benchmark. However, the performance of the baseline model, SNN-TextCNN, on the GLUE benchmark is extremely poor, and it even failed to converge on some tasks. In the future, we intend to enhance the model's ability to model sentence entailment effectively.
(2) We have added some text to explain how the design decisions behind the proposed method were made. Through pre-experiments, we found that directly trained deep SNN language models struggle to converge, but they can learn better representations through distillation; therefore, we chose knowledge distillation as our training method. Our model has the same number of parameters as BERT.
(3) We have explained our claims of energy efficiency in more detail. In contrast to artificial neural networks (ANNs), the reduction in energy consumption of spiking neural networks (SNNs) is primarily observed at the inference time. Once SNNs are well software-trained, they can be deployed on neuromorphic hardware for energy-efficient computing.
(4) We have conducted a set of experiments to re-calculate the energy consumption based on the new standard Reviewer XMQ4 suggested. Once the experimental results are obtained, we will report them.
(5) All the reviewers’ comments have been addressed in the revised version.
(6) We have revised the paper thoroughly and carefully. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose SpikeBERT, an energy-efficient Spiking Neural Network (SNN) for natural language representation. The architectural design of SpikeBERT is inspired by Spikformer, an SNN for computer vision, with the following major changes:
- Spiking Patch Splitting for images is replaced by embedding layer to encode words/tokens into vectors.
- BatchNorm replaced by LayerNorm
- Shape of Spiking Self Attention is changed to adapt to language tasks by basing its size on the length of the input instead of the dimensionality of the hidden layers.
The model is trained using a two-step knowledge distillation approach:
- General purpose: The model is trained using embedding and intermediate hidden layer representations of BERT on unlabeled natural language texts.
- Task specific: The model is tuned using task-specific logits from a fine-tuned BERT model.
Strengths: - The paper is well written and easy to follow. The authors provide the necessary background on SNNs including the advantages and challenges in training them.
- The motivation is clear and the energy consumption presented in the results section helps convey the same.
- The metrics indicate that the proposed approach outperforms the previous spiking network based baseline developed for natural language understanding and standard SNN training mechanism using surrogate gradients.
- The authors present a thorough ablation analysis for all the contributions presented in the paper.
Weaknesses: - The novelty of the paper is limited:
○ The architecture is mostly derived from Spikformer: the usage of word embeddings instead of image patches, and using layer normalization in transformer is a standard approach.
○ The two-step knowledge distillation approach has been used widely in the past for distilling BERT and GPT style transformer models to smaller/specific architectures.
○ The usage of hidden layer representations in the distillation process is also a standard practice for BERT style models.
○ Most of the formulations and approaches related to spiking neurons, its derivatives and feature transformations are adapted from previous work.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - BERT was introduced several years ago, and since then, multiple adaptions of its architecture and training tasks have been proposed that outperform it such as RoBERTa. Why have the authors chosen BERT as a teacher model for SpikeBERT?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: - The datasets used for evaluation seem limited. With models like BERT, it's a standard approach to present results on all GLUE tasks or at least the ones such as MNLI as they are considered to be good & reliable indicators of model quality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
#### **Q1: The novelty (4 aspects) of the paper is limited.**
R1: (1) As shown in Figure 1, in addition to introducing the word embedding layer and the layer normalization module, we also change the shape of the attention map yielded by Spiking Self Attention (SSA) to N × N rather than D × D, where D and N denote the dimensionality of the hidden layers and the length of the input, respectively. This change is crucial.
(2) As discussed in the Introduction section, deep spiking neural networks (SNNs) directly trained with backpropagation through time (BPTT) using surrogate gradients can suffer from vanishing or exploding gradients due to “self-accumulating dynamics”. Meanwhile, through pre-experiments, we found that directly trained deep SNN language models struggle to converge, but they can learn better representations through distillation, which is consistent with [1]. Therefore, we chose knowledge distillation as our training method.
(3) Although the usage of hidden features is a standard practice in knowledge distillation, aligning hidden features between ANNs and SNNs is quite difficult in our work. As discussed in Section 3.3.1, because the teacher model's features are in floating-point format while the student model's features are integers with an additional time-step dimension T, we need external modules to align the features (see Equation 6). Even so, we still find it hard to align the features generated by the student model with those generated by BERT for the first few layers.
(4) We apologize for the lack of precision in our formulas. We find that most papers related to knowledge distillation [1][2][3] have similar formulas, but we will rewrite ours in the revised manuscript as suggested.
Additionally, this work is the first transformer-based SNN for language processing tasks and is among the first to show the feasibility of transferring the knowledge of BERT-like language models to spiking-based architectures that can achieve comparable results with much less energy consumption.
#### **Q2: Why have the authors chosen BERT as a teacher model for SpikeBERT?**
R2: Our approach is applicable to any model similar to BERT and RoBERTa. Existing literature [4] indicates that BERT and RoBERTa exhibit minimal substantive differences in downstream tasks, and both models share a similar network architecture. The divergence between these models lies primarily in masking strategy, input tokenization, and training strategy [5], and these differences do not affect the conclusions drawn in this paper. Moreover, as BERT serves as a representative example of large pre-trained models, we chose BERT as the teacher model.
#### **Q3: The datasets used for evaluation seem limited. The evaluations of SpikeBERT on GLUE.**
R3: Thank you for your good suggestion.
(1) The selection of the datasets on Table 1 is based on the fact that previous SNNs-related work[6] also used these datasets, which facilitates meaningful comparisons.
(2) Actually, we reported the performance of SpikeBERT on the GLUE benchmark in our original manuscript, but we found that the performance of our baseline, SNN-TextCNN [6], on GLUE was extremely poor, and it even failed to converge on some tasks, so we felt such a comparison would be unfair.
(3) The performances of the baseline model and SpikeBERT on GLUE benchmark are shown in the following table:
| | SST2 | MRPC | RTE | QNLI | MNLI-(m/mm) | QQP | CoLA | STS-B |
| -------------- | ----- | ----- | ----- | ----- | ----------- | ----- | --------------------- | ---------------------- |
| Metric | Acc | F1 | Acc | Acc | Acc | F1 | Matthew’s correlation | Spearman’s correlation |
| FT BERT | 92.31 | 89.80 | 69.31 | 90.70 | 83.82/83.41 | 90.51 | 60.00 | 89.41 |
| SNN-TextCNN[6] | 80.91 | 80.62 | 47.29 | 56.23 | 64.91/63.69 | 0.00 | -5.28 | 0.00 |
| SpikeBERT | 85.39 | 81.98 | 57.47 | 66.37 | 71.42/70.95 | 68.17 | 16.86 | 18.73 |
We find that the performance on the Natural Language Inference (NLI) tasks (QQP, QNLI, RTE) is not satisfactory. A possible reason is that we mainly focus on the semantic representation of a single sentence in the pre-training distillation stage. In the future, we intend to explore the incorporation of novel pre-training loss functions to enhance the model's ability to model sentence entailment effectively.
Reference:
[1]Qiu H, Ning M, Yuan L, et al. Self-Architectural Knowledge Distillation for Spiking Neural Networks[J]. 2022.
[2]Tang R, Lu Y, Liu L, et al. Distilling task-specific knowledge from bert into simple neural networks[J]. arXiv preprint arXiv:1903.12136, 2019.
[3]Jiao X, Yin Y, Shang L, et al. Tinybert: Distilling bert for natural language understanding[J]. arXiv preprint arXiv:1909.10351, 2019.
[4]Qiu X, Sun T, Xu Y, et al. Pre-trained models for natural language processing: A survey[J]. Science China Technological Sciences, 2020, 63(10): 1872-1897.
[5]Liu Y, Ott M, Goyal N, et al. Roberta: A robustly optimized bert pretraining approach[J]. arXiv preprint arXiv:1907.11692, 2019.
[6]Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for providing answers to the questions and sharing more data points.
Based on the responses, the concerns related to novelty still persist. Specifically, word embeddings, attention matrix quadratic in sequence length and knowledge distillation are well studied and established practices for natural language processing in deep learning and I'm concerned that the current approach mostly combines these existing techniques with Spike Neural Networks.
Also, metrics on some large GLUE tasks such as QNLI and QQP seem quite inferior indicating limited applicability of the approach.
Therefore, I'll maintain the score as of now.
---
Reply to Comment 1.1.1:
Comment: We appreciate your insightful comments and the opportunity to provide further clarification:
(1) In previous research [1], only a simple TextCNN network was employed for single-sentence text classification experiments, which demonstrated the feasibility of using Spiking Neural Networks (SNNs) in Natural Language Processing (NLP) tasks. Our work, however, elevates the application of SNNs in NLP to a new level. Specifically, we have successfully implemented large-scale pretraining via knowledge distillation and achieved state-of-the-art (SOTA) results on a single-sentence classification dataset. This represents a significant advancement over the initial proof-of-concept provided in the earlier study. This work is the first Transformer-based SNN for language processing tasks, and we believe that this field will continue to evolve and advance. Although our current model's performance on the natural language inference (NLI) tasks is unsatisfactory, we believe that in the future we can expand our paradigm to capture sentence entailment relationships.
(2) While it may seem like a natural idea to use knowledge distillation after direct training fails, applying it to spiking neural networks still poses significant challenges. First, how do we align the spiking signals of the student model with the floating-point signals of the teacher model? We addressed this issue by introducing an external “MLP+LayerNorm” layer to convert the spiking signals. Second, training spiking neural networks typically requires specific techniques to stabilize and accelerate convergence, so traditional knowledge distillation methods may not adapt well to these techniques, resulting in training difficulties or suboptimal performance. We addressed this by employing many training tricks, some of which were not explicitly mentioned in the paper: dynamically adjusting the alignment signal weight ratios based on loss ratios, selectively ignoring representations from certain layers, and using longer warm-up periods. In practice, achieving a convergent spiking neural network language model is challenging because traditional knowledge distillation methods and SNN training methods are ineffective in these scenarios.
(3) As spiking neural networks operate on specialized neuromorphic hardware, energy consumption is expected to decrease further with advancements in neuromorphic hardware technology. Our model has achieved over 70% energy savings under the same parameter settings, while maintaining comparable performance. Furthermore, according to [2], networks running on neuromorphic hardware exhibit faster inference speeds.
We hope that these explanations address your comments effectively, and we look forward to hearing your insightful thoughts! Thank you!
Reference:
[1]Lv C, Xu J, Zheng X. Spiking Convolutional Neural Networks for Text Classification[C]. // The Eleventh International Conference on Learning Representations. 2022.
[2]Horowitz M. 1.1 computing's energy problem (and what we can do about it)[C]. // 2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC). IEEE, 2014: 10-14. | null | null | null | null | null | null |
Horospherical Decision Boundaries for Large Margin Classification in Hyperbolic Space | Accept (poster) | Summary: This work studies support vector machines for data with latent hierarchical relationships, which are represented in hyperbolic spaces. Similar to the linear SVM in Euclidean spaces, SVMs in hyperbolic spaces have appeared in the literature, but they are challenging due to issues such as non-convexity or lack of algorithmic convergence. In this work, the authors explored the analogy between horospheres and Euclidean hyperplanes, and proposed to use geodesic segments (the counterpart of Euclidean distances) to define the margin for the SVM. A nice theory shows the resulting problem is convex, so a stable algorithm is developed. Numerical studies demonstrate superior performance over some competitors.
Strengths: This paper is working on an integral question for classifying network data. It introduces an interesting idea of defining the SVM on horospheres and shows good performance on benchmark data. In general, the paper appears to be technically correct with sound theory for showing convexity.
Weaknesses: 1. The authors may comment on the computational speed of the proposed algorithm, as it may outperform Hyperboloid SVM due to its convexity.
2. From Table 2, it seems HoroSVM well outperforms the other classifiers when D = 2, and the advantage gets smaller when D is large. What is the performance of HoroSVM when D is larger? Furthermore, it would be useful to discuss how D is chosen in real-world scenarios.
3. The authors should also compare with the kernel SVM to see if the latent data structure can be implicitly handled by the kernel SVM.
4. Most datasets chosen for the study display a significant imbalance. It would be interesting to see if it is possible to incorporate cost-sensitive weights into these observations. Doing so could mirror the approach taken by weighted SVM to tackle the imbalance issue.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Section 4.1, the hyperparameter C is selected from {1, 5, 10}. It might be helpful to comment on how sensitive this parameter is. The performance of a linear SVM is typically highly sensitive to this parameter, and the paper could potentially mislead readers by implying that a non-rigorous selection from only three candidate values would suffice for optimal performance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 38iz
We thank the reviewer for the valuable comments and questions. Please find our responses below:
> 1. computational speed of the proposed algorithm
The complexity of our approach depends on the Riemannian gradient-based optimization technique that we chose to use. In [36], a comprehensive analysis of the complexity of first-order methods was presented. In our case, we use the Riemannian conjugate gradient method, whose algorithm and global convergence analyses are presented in [C2]. Hyperboloid SVM, where a non-convex optimization problem is solved using projected gradient descent, results in a potential local optimum.
Furthermore, we will include the running time on our machine (specs. given in the paper) for each method in Experiment 3 as a reference to the time complexity in the revision. The average training time on our machine for each method on a dataset with 200 samples is as follows: 6.57s (hyperboloid SVM), 3.98s (hyperbolic LR), 9.06s (HNN), and 3.73s (HoroSVM). Notably, our HoroSVM stands out as the fastest.
> 2. It seems HoroSVM well outperforms the other classifiers when D = 2, and the advantage gets smaller when D is large
It is well known that entangled data in low dimensions may become easier to separate by raising the dimension. As evident in the experiments, some of the competing methods yielded higher accuracy improvements with increasing dimensions compared to our HoroSVM. This is to be expected since HoroSVM performed the best in terms of classification accuracy for $D=2$, leaving little room for further accuracy gains as the dimension increases. The strength of HoroSVM lies in its ability to achieve higher classification accuracy even in low dimensions, where data might be highly entangled. We focus on low dimensions since, as the dimension increases (from 10 to 200), the performance gain of hyperbolic embeddings of graphs is marginal, as empirically evaluated in [23]. Our superior performance in low dimensions stems from the efficiency of hyperbolic space in representing hierarchical data. Theoretical analysis of our method's generalization error as a function of dimension is an interesting problem and will be the focus of our future work.
> 3. compare with the kernel SVM
We would like to point out that our comparison with linear SVM in Euclidean space is intended primarily to show that ignoring the geometric structure of the data space leads to poorer performance. We will state this reason explicitly in the revision. To perform a fair comparison with kernel SVM, one would have to develop a kernel version of SVM in hyperbolic space; we will explore this idea in future work.
> 4. Most datasets chosen for the study display a significant imbalance.
We have indeed encountered sensitivity while training our HoroSVM on highly imbalanced datasets, similar to the challenges faced by Euclidean SVM. In this work, we used a downsampling strategy that removes the imbalance at the cost of discarding some data; since the WordNet dataset contains a sufficient number of samples in the minority class, this does not significantly affect classifier performance.
To deal with more general imbalanced data, our HoroSVM can be naturally extended to a class/instance-weighted version by assigning a class/instance weight to the penalty term $C$ for each sample, just as it is done in Euclidean SVM.
The idea of extending our work to a cost-sensitive SVM, as explored in [C3] for Euclidean space, holds significant appeal and promise. We will investigate this direction in future research.
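The class/instance-weighted idea above can be sketched in the familiar Euclidean setting (illustrative only; the toy data, weights, and learning rate are our own choices, and this is plain subgradient descent rather than the HoroSVM solver):

```python
# Weighted soft-margin SVM sketch: each sample i gets its own penalty
# C_i, so a minority class can be up-weighted instead of downsampled.
# Objective: 0.5*||w||^2 + sum_i C_i * max(0, 1 - y_i*(w.x_i + b)).

def weighted_svm(X, y, C, lr=0.01, steps=2000):
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(steps):
        gw, gb = list(w), 0.0            # gradient of 0.5*||w||^2 is w
        for xi, yi, ci in zip(X, y, C):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:               # hinge active: subgradient -C_i*y_i*x_i
                for j in range(n):
                    gw[j] -= ci * yi * xi[j]
                gb -= ci * yi
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# Toy imbalanced data: 4 majority (-1) samples vs 1 minority (+1) sample,
# with the minority sample up-weighted by the inverse class frequency.
X = [[2.0, 2.0], [-1.0, -1.0], [-2.0, -1.0], [-1.0, -2.0], [-2.0, -2.0]]
y = [1, -1, -1, -1, -1]
C = [4.0, 1.0, 1.0, 1.0, 1.0]
w, b = weighted_svm(X, y, C)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
```

In the hyperbolic setting, the same per-sample weighting would multiply each sample's loss term in the geodesically convex objective, leaving the convexity argument unchanged.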
> 5. the hyperparameter C is selected from 1, 5, 10. It might be helpful to comment on how sensitive this parameter is.
The choice of hyperparameter $C$ is crucial in HoroSVM, as it is in linear SVM, since it balances misclassification and margin maximization. As is standard for Euclidean SVM, we employ grid search to select the hyperparameter $C$ from a set of candidate values. We acknowledge that this limited set of candidates might not fully explore the optimal range for $C$, and we will include this discussion in the revision. | Summary: This paper presented a novel large margin classifier, dubbed HoroSVM, whose decision boundaries are horospheres in hyperbolic space, and proved that the associated optimization problem is convex.
This paper presented several experiments depicting the competitive performance of the classifier in comparison to SOTA.
Strengths: 1. This paper presented a novel large margin classifier, dubbed HoroSVM, whose decision boundaries are horospheres in hyperbolic space. It is innovative compared to its predecessors.
2. This paper systematically and clearly explains the proofs, which are easy to follow.
Weaknesses: 1. The experiments in “Synthetic Data with Noisy Labels” use synthetic data, which is less convincing than applying random perturbations to a real-world dataset
2. The motivation for classification in hyperbolic space is not explained explicitly.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The method in this paper and the compared methods are basically based on SVM. Wouldn’t the results be better if neural networks were applied to classify in hyperbolic space?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 1ZfQ
We thank the reviewer for the valuable comments and questions. Please find our responses below:
> 1. random perturbations over real-world dataset
The noisy label experiment was conducted using synthetic data, which allows us full control over the level of noise injected into the labels. For real-world data, we conducted additional noisy label experiments on two binary-class network datasets used in Experiment 1 (karate and polblogs) with a 20% noise level to demonstrate the robustness of our classifier. Our method performs the best, as evidenced in the following table.
| Dataset | Karate | Karate-Corrupted | Polblogs | Corrupted-Polblogs |
|-----------------|--------|------------------|----------|--------------------|
| Euclidean SVM | 0.95 | 0.90 | 0.92 | 0.91 |
| Hyperboloid SVM | 0.95 | 0.76 | 0.92 | 0.87 |
| Poincare SVM | 0.78 | 0.70 | 0.92 | 0.77 |
| HoroSVM | 0.98 | 0.91 | 0.93 | 0.92 |
> 2. The motivation for classification in hyperbolic space is not explained explicitly.
Hyperbolic spaces offer compact representations for hierarchical data, which provide a provable advantage over Euclidean spaces in terms of lower distortion. Classification naturally emerges as one of the standard downstream tasks for data represented in hyperbolic space. A naive application of Euclidean methods to hyperbolic data (by regarding the data points to be in Cartesian coordinates) is inadequate, as it neglects the intrinsic geometry of the data. We therefore aim to design a classification algorithm in hyperbolic space that respects the underlying hyperbolic geometry and achieves superior performance.
> 3. The method in this paper and the compared methods are basically based on SVM.
Hyperbolic neural networks may yield higher classification accuracy by providing a holistic solution for classification tasks, encompassing the extraction of rich hyperbolic features from complex network structures and the subsequent application of hyperbolic logistic regression (LR) [16] for final predictions. Our primary focus within this study is not on competing at the feature extraction level – that is, not evaluating which method extracts superior features – but rather on identifying the method that excels as a 'linear' large-margin classifier in hyperbolic space. Moreover, HNNs are not large-margin classifiers and hence do not possess good generalization abilities. Large margin classifiers have a well-established theory and provide performance guarantees. We have provided a framework for such a theory in the hyperbolic space by providing a formulation for the optimization that is geodesically convex and is guaranteed a globally optimal solution similar to the Euclidean SVMs. Thus our comparison is specifically focused on the class of SVMs.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. From the results in the table, there seems no distinct performance gap between HoroSVM and Euclidean SVM, and the paper should also compare the HoroSVM with Hyperbolic neural networks.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer 1ZfQ #2
We thank the reviewer for the comment. Please find our further clarification below.
We acknowledge the lack of a significant gap between Euclidean SVM and HoroSVM in this noisy label real data example. The reason is that some network data (Polblogs and Karate in Table 1), after embedding in the hyperbolic space, are well separated, and using a Euclidean SVM on the ambient Euclidean coordinates of these embedded data points yields reasonably high classification accuracy, as is also observed with hyperboloid SVM [12]. However, note that we already showed in this submission that for other real data experiments (see Tables 1&2), Euclidean SVM performance is poor compared to HoroSVM.
As for the comparison to HNNs, we would like to emphasize that even in the published HNN work [16], classification of data embedded in hyperbolic space was achieved using hyperbolic logistic regression [16, Table 2 & Sec 4 Paragraph "MLR (multiclass logistic regression) classification experiments."] and not in an end-to-end fashion, i.e., learn features and classify. Since this is the SOTA in HNN, we compared our work to hyperbolic LR [16] in the WordNet real data classification (Table 2).
Despite the above reasoning, at the reviewer's request, we used a simple end-to-end HNN model (the same as the one used in producing the results in Figure 4) and tested it on the noisy label real data presented earlier in this rebuttal. The results are included in the last row of the table below. As evident from the last row, HNN performs poorly on the karate dataset as well as on Corrupted-Polblogs. We believe that comparing HoroSVM to HNN will in general involve developing a novel architecture distinct from the aforementioned simple HNN architecture, and this task is outside the scope of our work.
| Dataset | Karate | Karate-Corrupted | Polblogs | Corrupted-Polblogs |
|-----------------|--------|------------------|----------|--------------------|
| Euclidean SVM | 0.95 | 0.90 | 0.92 | 0.91 |
| Hyperboloid SVM | 0.95 | 0.76 | 0.92 | 0.87 |
| Poincare SVM | 0.78 | 0.70 | 0.92 | 0.77 |
| HoroSVM | 0.98 | 0.91 | 0.93 | 0.92 |
| HNN | 0.91 | 0.80 | 0.92 | 0.87 |
Furthermore, we have made comparisons to HNN in Figure 4 on synthetic noisy data, where noise levels can be controlled. We demonstrated the superior performance of HoroSVM over HNN, hyperbolic LR, and hyperboloid SVM.
***
Reference:
[12] Cho et al. Large-margin classification in hyperbolic space. AISTATS 2019.
[16] Ganea et al. Hyperbolic neural networks. NeurIPS 2018. | Summary: The paper proposes a large margin classifier in hyperbolic space, Poincare ball models. To this end, horospherical decision boundaries (which are based on the Buseman function and are different level sets of the Busmann function at the ideal point) are used for the large-margin classifier. They also show that the formed classifier is convex and optimization can be performed using Riemannian gradient descent. They perform experiments on four datasets and compare the results with Euclidean Hyperboloid, Poincare, and Horo SVM.
Strengths: The paper proposes a large margin classifier based on horospheres. The research question is valid and the paper is well motivated and well written. The paper also provides proofs for its theoretical claims. The figures aid understanding; for example, Figure 2 is helpful for comparing the horocycle vs. geodesic decision boundaries.
The experiments also show the efficacy of the proposed approach, especially in the 2-dimensional embedding space. Synthetic data with noisy labels is also an interesting experiment, showing the robustness of the proposed approach, and of the hyperbolic variants generally, to noisy data.
The performance analysis in the experiments section is beneficial as well.
Weaknesses: There is one main question about the comparison with [33]. The paper is generally well written; there are a few questions, which I will add to the questions section.
I would suggest the authors do a proofreading pass. The paper refers to the horosphere decision boundary in several different ways, such as horospheres decision boundary, horocycle decision boundary, and horospherical decision boundary. Using fixed terminology throughout the paper would improve its comprehensibility.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Why does not the paper compare the method with [33] in the experiments?
Lines 176-184 are not very clear. How are the positive and negative $\Pi$ subsets of the same one? Shouldn't they be on opposite sides of the Poincare ball model? Why do the authors claim that the positive samples are clustered near the boundaries?
What is the reference point in Poincare SVM? and why is it sensitive to the reference point?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses future work shortly in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer yt6v
We thank the reviewer for the valuable comments and questions. Please find our responses below:
> 1. comparison with [33]
The reasons for the absence of a comparison with Weber et al. [33] in our experiments are:
(1). Weber et al. [33, Sec 6] focus on theoretically understanding the benefits of hyperbolic spaces for classification, using experiments to illustrate the results derived for linear models and separable data. Extensions to non-separable data and non-linear models are not addressed in [33].
(2). For datasets that are separable in hyperbolic space, the approach in Weber et al. [33, Sec 6 & Fig 3] is aligned with the approach of hyperboloid SVM [12] under specific constraints namely, when the adversarial budget $\alpha = 0$. Further elaboration on this point is provided below.
In [33], authors employ the same parameterization of the hyperbolic geodesic decision plane for classification in hyperbolic space as utilized in hyperboloid SVM [12]. It is shown theoretically in [33] that a hyperbolic perceptron employing such geodesic decision boundaries is guaranteed to converge on separable data using a gradient descent method. Subsequently, [33, Sec 3.2] demonstrates that by employing margin losses, such as the logistic or hinge loss, in the optimization process, a large-margin classifier can be effectively learned through gradient descent. It is noteworthy that this corresponds to hyperboloid SVM [12] when the hinge loss is applied.
Moreover, [33] establishes a method to inject adversarial examples into the gradient-based loss minimization process, leading to an algorithm that efficiently learns a large-margin classifier. In this approach, the magnitude of the perturbation added to the original example is bounded by an adversarial budget $\alpha$. When $\alpha=0$, the method in [33] aligns with the one presented in [12].
For real-world datasets, the assumption of data separability is scarcely fulfilled, so the applicability of [33] becomes limited; we therefore compare our method with [12] in the experiments on real data. Additionally, we will explore the adoption of an adversarial setting for our method in future work.
> 2. proofreading
We highly appreciate your comments and will align the terminology accordingly in the revision.
> 3. Lines 176- 184 are not very clear.
In Euclidean space, if $n$ is the normal vector of a hyperplane, $-n$ is also the normal vector of this hyperplane. In hyperbolic space, however, let $\Pi_{\omega}$ be the set of horospheres tangent at $\omega$, then $\Pi_{-\omega}$ is the set of horospheres tangent at $-\omega$, the antipodes of $\omega$ on $\partial\mathbb{B}^n$. Note that $\Pi_{\omega} \neq \Pi_{-\omega}$. For a given $\omega$, $\Pi^+$ is the collection of horospheres that are tangent at $\omega$ and do not contain the origin (spheres with radius $<$ 0.5), while $\Pi^-$ is the collection of horospheres that are tangent at $\omega$ and contain the origin (spheres with radius $>$ 0.5).
As the volume grows exponentially in hyperbolic space, there is more space/volume as we approach the boundary, and hence more data can be accommodated near the boundary. Consequently, for a tree embedded in hyperbolic space, the leaf nodes within a subtree appear as a cluster near the boundary, and the clusters (representing different subtrees) are well separated from each other. In the subtree classification task, we can therefore properly assign the positive labels to a subtree of interest.
> 4. reference point in Poincare SVM
In Poincare SVM [11], the reference point $p$ serves as the anchor point to which all data are lifted into the tangent space at $p$. The classification is performed in that tangent space. However, the data distortion varies across different tangent spaces, making the performance of Poincare SVM sensitive to the choice of the reference point.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for providing the answers; I will keep my current score. I would recommend the authors include the explanation of the differences with [33] in the paper as well. | Summary: Based on recent successes with hyperbolic embeddings of data with a (latent) hierarchical structure, this paper proposes a new type of SVM in hyperbolic space. Their SVM, named HoroSVM, uses horospheres as decision boundaries and the authors derive a way to compute the distance of a point to such a horosphere. This makes the method different from other recently proposed hyperbolic SVMs, which use geodesic hyperplanes as decision boundaries. The authors argue that these horospheres are a more suitable generalization of hyperplanes than their geodesic-based counterparts and that, therefore, HoroSVM is likely to outperform the other methods.
After introducing their method, the authors show that their loss function is geodesically convex with respect to the learnable parameters. As a result, they state that a global optimum to the optimization problem, that arises when using HoroSVM, can be found using gradient-based optimization methods. They further state that, while their method has this nice property, the other methods to which they compare do not. Therefore, HoroSVM should again yield better results.
With a series of three experiments, the authors attempt to empirically show the superiority of their method over geodesic hyperplane methods and Euclidean SVM. The first experiment aims to show the superior performance on the classification of hyperbolic embeddings of network data. The second experiment shows that HoroSVM is better than its competitors at predicting the subtree to which nodes belong within WordNet. The third and final experiment shows that their method is more robust than their competitors to noisy labels.
Lastly, based on their theoretical analysis and empirical results, the authors conclude that their method is more performant and robust than its competitors, while also alluding to future work with hyperbolic kernel SVMs.
Strengths: The paper contains several, mostly theoretical strengths in my opinion:
1. The paper derives a way to compute distances from points on the Poincaré ball to horospheres. While directly relevant to their HoroSVM method, this formulation is likely useful to any other method that implements these horospheres on the Poincaré ball, making it a nice contribution.
2. The authors prove the geodesic convexity of their newly proposed loss function, resulting in a very strong statement about the possibility of finding a global optimum to their optimization problem. If it is indeed the case that the competitive hyperbolic methods do not have this property, then this is a very clear theoretical argument for the superiority of their method w.r.t. these other hyperbolic methods.
3. In my opinion the paper introduces an interesting theoretical analysis of their method, which could by itself be useful as a tool for studying other methods in hyperbolic space or even on other Riemannian manifolds.
4. The experiments show the superiority of their method with respect to their competitors both w.r.t. classification performance and noisy label robustness.
Weaknesses: There are a few concerns that I have with this paper, which mostly have to do with the motivation for using horocycles over geodesic hyperplanes and with the setup of the experiments. Firstly, my concerns with the motivation for the use of horocycles are:
1. There are multiple geometric definitions of hyperplanes possible and it is not clear which one is chosen here. However, it is stated that horocycles are the hyperbolic equivalent of Euclidean hyperplanes and that, therefore, horocycles are a natural choice for decision boundaries. This does not provide a clear motivation for choosing horocycles over geodesic hyperplanes. What is the geometric motivation for this choice and why should it lead to better results in practice?
2. Figure 2 is supposed to provide a clear motivation for why horocycles are better as decision planes than geodesic hyperplanes, but the geodesic hyperplane example (on the right) seems poorly chosen to me. In fact, it seems very easy to find a hyperplane by hand that would lead to significantly better results than the hyperplane in the figure. What is going on here? Is this really a problem with geodesic hyperplanes or is it a problem with the optimization process. If the latter is the case, then this is still not a direct argument for using horocycles over geodesic hyperplanes.
My concerns regarding the experiments (mostly experiment 1) are:
1. In the experiment from Subsection 4.1, it is unclear how the hyperbolic embeddings of the network data were generated. As the method for generating these embeddings is likely to affect the outcome of the experiment, it is difficult to judge its validity without a description hereof.
2. Moreover, after the 5-fold cross-validation, the differences between the results obtained with the different methods are quite small for at least two of the datasets, especially compared to the standard deviation. This makes the experiment seem a bit weak as a motivation for using HoroSVM.
3. Also, given that solving the optimization problem for HoroSVM leads to a global optimum, why do you not simply choose the optimal C? And in that case, would the standard deviation not simply be 0?
4. What happens when you use a Euclidean embedding of the data and then Euclidean SVM? Even though the Euclidean embeddings would be distorted, it seems plausible to me that Euclidean SVM could obtain better results in such a setting. The current comparison does not seem completely fair to me.
5. In the second experiment the embeddings are obtained through the application of hyperbolic entailment cones. However, there are some issues with this method and there is a method from the paper "Representation Tradeoffs for Hyperbolic Embeddings" by de Sa et al. which boasts greater performance than hyperbolic entailment cones. How do the results change if this method had been used to generate the embeddings?
6. Lastly, in Table 2 the focus is put on the case of low dimension D = 2, where HoroSVM is significantly better. However, for this low dimension the performance of every method is rather low and for higher dimensions the difference in performance seems to fade. Is there a reason to still focus on low dimensions? What happens if the dimension is chosen even greater? Does the advantage of HoroSVM disappear completely?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Alongside the questions posed in the weaknesses above, I have a few additional questions:
1. Why are there both a mu and a b parameter in equation (2)? You only need 1 parameter here. Is it for notational simplicity later on? If so, a small note might help to clarify this for the reader. If not, removing this over-parameterization would also make the different versions of pi with subscripts a bit easier to follow.
2. Theorem 3.7 tells us that, for a single sample, the loss function is geodesically convex w.r.t. the learnable parameters. Is this result strong enough to guarantee convergence to a global optimum in case of a collection of samples? Based on the discussion in the introduction I presume it is, but I think it would be nice to mention this for the reader.
3. Why is the Poincaré inner product named an inner product? This name seems confusing to me as it suggests that it actually has the properties of an inner product, which it clearly cannot have.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors point out that kernel SVMs are usually preferred over linear SVMs and indicate that they will work on this in the future. I think that the current work is still an important step in this direction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer cEx8
We thank the reviewer for the valuable comments and questions. Please find our responses below:
> 1. geometric motivation for horospheres
Our motivation behind developing a classifier based on horosphere decision boundaries was to create a large-margin classifier in Hyperbolic space that mirrors the concept of the large-margin hyperplane-based classifier (linear SVM) in Euclidean space. In Euclidean space, hyperplanes can be conceptualized as infinite-radius spheres, with parallel ones sharing the same direction.
Horospheres stand out as the optimal choice for hyperbolic hyperplanes where a horosphere can be viewed as the limit of spheres as their radii approach infinity [C1]. Horospheres with the same direction are parallel to each other.
Note that earlier works, such as [12,33], have defined the concept of hyperplanes in hyperbolic space using the intersection of a hyperplane in Minkowski space and the hyperboloid model. This extrinsic approach to defining the analog of Euclidean hyperplanes in hyperbolic space doesn't maintain the essential characteristic of parallel hyperplanes having the same direction.
> 2. Figure 2 ...
We used the hyperboloid SVM implementation provided by [12]. The choice of the geodesic hyperplane separator could be influenced by the optimization of the nonconvex loss function in the hyperboloid SVM. In contrast, our HoroSVM guarantees an optimal result.
In the specific case in Fig.2, different choices of geodesic (by hand) might lead to fewer misclassifications, but such a choice may not be the minimizer of the distance-based loss function used in hyperboloid SVM. In our work, the choice of horospheres is quite natural as explained in A1.
> 3. how the hyperbolic embeddings of the network data were generated.
> 5. ...choose the optimal C
We apologize for the lack of clarity in presenting details in Experiment 1. In this experiment, we followed the setup in [12] and evaluated our model on the network data embedded in hyperbolic space using the approach described in [8]. We tested 5 different embeddings of each network and report the mean and standard deviation across different embeddings for each dataset.
> 4. differences are quite small
The comparable performance among all methods in Table 1 for the mentioned two cases could be attributed to the fact that these datasets were well-separated. Therefore, each method is expected to perform equally well in these cases.
> 6. use a Euclidean embedding
The performances of Euclidean SVM on the Euclidean embedding of the data in experiment 1 are presented in [12], where Euclidean SVM performs poorly compared to hyperboloid SVM, even with high-dimensional Euclidean embeddings (d=25), e.g., for Karate, AUPR 0.5 (Euclidean) vs. 0.86 (hyperboloid SVM).
> 7. embeddings in Experiment 2
To present a fair comparison with the same embedding scheme, we utilize the hyperbolic entailment cone method [15] used in hyperboloid SVM [12] and hyperbolic LR [16]. We would be happy to explore the performance of all the methods using the suggested embedding scheme by de Sa et al [24] in our future work. We have to admit that we could not find any direct comparison between the two embedding methods in the suggested reference [24] as well as in any follow-up work in this domain.
> 8. focus on low dimension
Please refer to our response to Reviewer 38iz’s Q2.
> 9. parameter in Eq 2
> 11. Poincare inner product
We apologize for the use of the potentially misleading term "inner product." A more appropriate presentation would be: given an ideal point $\omega$, the function defined by $\langle \omega,x \rangle_{B}: x \mapsto -b_{\omega}(x)$ is constant on each horosphere tangent at $\omega$. Hence any horosphere can be parameterized by an ideal point $\omega$ and an offset value $b$ of the level set. As for Eq 2, $\mu$ is included for simplicity in the analysis later on (Eq 3, 12). We will add a note as suggested in the revision.
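The level-set view above can be checked numerically. The sketch below (ours, not the paper's code) uses the standard closed form of the Busemann function on the Poincaré ball, $b_\omega(x) = \log\big(\lVert \omega - x\rVert^2 / (1 - \lVert x \rVert^2)\big)$, normalized so that $b_\omega(0)=0$; the quantity $\langle \omega, x\rangle_B = -b_\omega(x)$ should then be constant on any horosphere tangent at $\omega$:

```python
import math

# A horosphere tangent at the ideal point omega = (1, 0) with Euclidean
# radius r is the Euclidean sphere of center (1 - r)*omega, internally
# tangent to the unit circle at omega. The Busemann function should take
# one constant value on all of its points (except the tangent point).

def busemann(omega, x):
    num = sum((oi - xi) ** 2 for oi, xi in zip(omega, x))
    den = 1.0 - sum(xi * xi for xi in x)
    return math.log(num / den)

omega = (1.0, 0.0)
r = 0.3
center = (0.7, 0.0)
values = []
for k in range(1, 12):
    theta = 0.5 + k * 0.25               # angles away from the tangent point
    x = (center[0] + r * math.cos(theta), center[1] + r * math.sin(theta))
    values.append(busemann(omega, x))

spread = max(values) - min(values)       # ~0: b is constant on the horosphere
```

This constancy is exactly what makes $(\omega, b)$ a hyperbolic analogue of the (normal, offset) parameterization of a Euclidean hyperplane.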
> 10. in case of a collection of samples?
Yes, the result holds for a collection of data. We will revise lines 224-227 as follows for clarification.
For a collection of training samples $S = \lbrace (x_i, y_i)\rbrace_{i=1}^N$, let $A_i = \lbrace \nu \in \mathbb{S}^{n-1} | y_i \cdot \frac{x_i^T \nu} {\lVert x_i \rVert} > 0 \rbrace $. If the data are separable by a horosphere, it follows that $\cap_{i=1}^N A_i$ is non-empty and convex. Then the loss function given S is geodesically convex on $\mathbb{R}^{+} \times \cap_{i=1}^N A_i \times\mathbb{R}^{+}$ and the global optimum can be obtained using any gradient-based optimization.
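The membership condition above is easy to check numerically; the sketch below (illustrative, with made-up toy data) tests whether a candidate unit direction $\nu$ lies in every $A_i$, i.e., whether $y_i \, x_i^T \nu / \lVert x_i \rVert > 0$ for all samples:

```python
import math

# Check whether a candidate direction nu lies in the intersection of the
# sets A_i = {nu : y_i * <x_i, nu> / ||x_i|| > 0} for all samples (x_i, y_i).

def in_all_Ai(nu, X, y):
    for xi, yi in zip(X, y):
        norm = math.sqrt(sum(v * v for v in xi))
        if yi * sum(a * b for a, b in zip(xi, nu)) / norm <= 0:
            return False
    return True

X = [[0.6, 0.1], [0.2, 0.5], [-0.4, -0.3], [-0.1, -0.6]]
y = [1, 1, -1, -1]
nu = [1 / math.sqrt(2), 1 / math.sqrt(2)]     # candidate unit direction

ok = in_all_Ai(nu, X, y)                      # consistent labels: nu in every A_i
bad = in_all_Ai(nu, X, [1, -1, -1, -1])       # flipped label: intersection misses nu
```

When such a $\nu$ exists for all samples, $\cap_i A_i$ is a non-empty convex subset of the sphere, which is the separability condition under which the geodesic convexity statement applies.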
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing most of my concerns and questions. However, I still have a few questions regarding the theoretical motivation for horocycles over geodesic hyperplanes.
>
> 1. geometric motivation for horospheres
If I understand it correctly, then the authors state that "horospheres stand out as the optimal choice for hyperbolic hyperplanes" due to the fact that they "maintain the essential characteristic of parallel hyperplanes having the same direction". What I am unsure of is why this property is essential. Another way to geometrically define hyperplanes is by taking the set of straight lines through some point which are orthogonal to a normal vector at that point, leading to the geodesic definition of hyperplanes. Why is this geometric definition or property inferior to the one leading to horocycles?
>
> 2. Figure 2 ...
If I understand the authors correctly, then they state that the somewhat strange choice of decision boundary in Figure 2 may be due to the (non-convexity of the) loss function. Is this a problem inherent to geodesic hyperplanes? If so, how/why? If not, then this is not an argument for horocycles over geodesic hyperplanes, but an argument for your loss function over the loss function of [12].
---
Reply to Comment 1.1.1:
Title: Response to Reviewer cEx8 #2
Comment: We thank the reviewer for the response. Please find our responses to the follow-up questions below.
> 1. geometric motivation for horospheres
>
> (Follow-up 1) ...why this property is essential. ... Why is this geometric definition or property inferior to the one leading to horocycles?
>
> (Follow-up 2)...(non-convexity of the) loss function. Is this a problem inherent to geodesic hyperplanes?...
In [12, 33], the hyperbolic hyperplane is defined as the intersection of the hyperboloid model and a codimension 1 subspace in Minkowski space, the ambient space of the hyperboloid model. Similarly in [11, 16], alternative definitions using the concept of Riemannian log and exponential maps also lead to geodesic hyperplanes.
We choose to use horospheres as decision boundaries in hyperbolic space since they have some nice properties that were already explored in several recently published works, such as HoroPCA (Chami et al., ICML 2021), "Fully-Connected Network..." (Sonoda et al., ICML 2022) and HyLa (Yu and De Sa, ICLR 2023). The following two geometric properties motivated our choice of horospheres as decision boundaries:
(1). The important fact about parallels in hyperbolic space is that they converge onto an ideal point lying on the boundary of the Poincaré disk. This concept is similar to parallels in Euclidean space meeting at an ideal point (point at infinity).
(2). Further, the horospheres not only satisfy the aforementioned property but also maintain a constant hyperbolic distance between themselves. This latter property is not possessed by parallel (non-intersecting) geodesic hyperplanes in hyperbolic space. This constant-distance property facilitates the maximization of a margin that uses the concept of a "gutter" (as in the Euclidean SVM [3, Sec 7.1]) which is parallel to the decision boundary (the horosphere).
In addition to the above two geometric properties motivating the choice of horospheres, we also have an algebraic reason: our choice of horospheres as decision boundaries leads to a geodesically convex loss function, as proved in Theorem 3.7 of our submitted paper. In contrast, using geodesic hyperplanes leads to a non-convex loss function (see Eq. 5 in [12]), primarily due to the use of the Minkowski inner product, which is an indefinite bilinear form.
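As an illustrative summary drawing on the horosphere literature cited here (HoroPCA, HyLa) rather than a quotation from the paper: horospheres can be written as level sets of Busemann functions, which makes both properties (1) and (2) concrete.

```latex
% On the Poincaré ball $\mathbb{B}^n$, the Busemann function associated
% with an ideal point $\nu \in \mathbb{S}^{n-1}$ is
B_\nu(x) \;=\; \log \frac{\lVert \nu - x \rVert^{2}}{1 - \lVert x \rVert^{2}}\,,
% and the horospheres "centred" at $\nu$ are exactly its level sets
\mathcal{H}_{\nu, c} \;=\; \bigl\{\, x \in \mathbb{B}^n \;:\; B_\nu(x) = c \,\bigr\}.
% Two such level sets share the same ideal direction $\nu$ (property (1))
% and remain at constant hyperbolic distance $|c - c'|$ from each other
% (property (2)), which is the "gutter"/margin property invoked above.
```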
Reference:
***
[3] Christopher M. Bishop and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning, volume 4. Springer, 2006. | Rebuttal 1:
Rebuttal: # General response to all reviewers
We would like to thank all reviewers for their valuable comments. We are particularly encouraged that they consider the proposed method innovative (1ZfQ) and beneficial for future research (cEx8), that the theoretical result is valid (cEx8, yt6v, 1ZfQ, 38iz), that our experiments demonstrate superior performance (cEx8, yt6v, 1ZfQ, 38iz), and that our paper is well-written (yt6v) and easy to follow (1ZfQ).
We have addressed the specific questions raised by each reviewer through separate responses. Any inaccuracies in presentation or misuse of terminology will be thoroughly clarified in the revision. We value all the reviewers' suggestions for improving the clarity and organization of our work. Modifications according to those reviews will be reflected in the revision as well. We appreciate the insightful discussion about the future direction of this research and will incorporate this feedback appropriately in the conclusion section of the revision.
We appreciate the time and effort that the reviewers spent in assessing our work. We hope that our responses have addressed all reviewers' questions and concerns. Please let us know if there are further questions.
The references cited in the rebuttal use the same indexing as in the submitted paper.
***
References:
[8] Chamberlain et al. Neural embeddings of graphs in hyperbolic space. arXiv:1705.10359, 2017.
[11] Chien et al. Highly scalable and provably accurate classification in Poincaré balls. ICDM 2021.
[12] Cho et al. Large-margin classification in hyperbolic space. AISTATS 2019.
[15] Ganea et al. Hyperbolic entailment cones for learning hierarchical embeddings. ICML 2018.
[16] Ganea et al. Hyperbolic neural networks. NeurIPS 2018.
[23] Nickel and Kiela. Poincaré embeddings for learning hierarchical representations. NeurIPS 2017.
[24] Sala et al. Representation tradeoffs for hyperbolic embeddings. ICML, 2018.
[25] Rik Sarkar. Low distortion delaunay embedding of trees in hyperbolic plane. In International Symposium on Graph Drawing, 2011.
[33] Weber et al. Robust large-margin learning in hyperbolic space. NeurIPS 2020.
[36] Zhang and Sra. First-order methods for geodesically convex optimization. COLT 2016.
***
Additional references in rebuttal:
[C1] Izumiya S. Horospherical geometry in the hyperbolic space. Noncommutativity and Singularities: Proceedings of French–Japanese symposia held at IHÉS in 2006. Mathematical Society of Japan, 2009, 55: 31-50.
[C2] Sato H. Riemannian conjugate gradient methods: General framework and specific algorithms with convergence analyses. SIAM Journal on Optimization, 2022, 32(4): 2690-2717.
[C3] Iranmehr A, Masnadi-Shirazi H, Vasconcelos N. Cost-sensitive support vector machines. Neurocomputing, 2019, 343: 50-64. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fast Asymptotically Optimal Algorithms for Non-Parametric Stochastic Bandits | Accept (poster) | Summary: This paper considers the classical regret minimization problem in the stochastic multi-armed bandit setting with arms having distributions with support bounded from above by a known constant B, and sometimes also lower bounded by b. Typically, all the known asymptotically optimal algorithms for the regret-minimization problem in the bandit setup depend on solving an optimization problem involving KL divergences. This is known to be computationally expensive in practice. The main object of study of this paper is K-inf. In this paper, the authors develop approximations for the empirical version of K-inf and study the performance of algorithms using these approximations, instead of exactly computing K-inf at each step.
The first of these approximations simply uses the convexity of K-inf in the second argument, while the second one derives from exploiting a connection between K-inf and online no-regret learning.
While using the first approximation in IMED and MED still leads to asymp. optimal algorithms, the second approximation requires assumptions on the no-regret algorithm used, and finding a no-regret algo. satisfying those assumptions remains open.
Numerical evidence supporting the theory is also provided.
Strengths: I enjoyed reading the paper. It is generally well-written. While the theory of asymptotically optimal algorithms for bandits has lately been developed in great generality, these optimal algorithms can be computationally demanding. These optimal algorithms all mostly depend on the empirical K-inf. While the dual of this problem is a convex optimization problem, it is well known that its computation cost increases with the number of samples (Cappé et al., 2013, Honda et al., 2015, Agrawal et al., 2021).
The paper studies an important practical problem: addressing the computational aspect of these optimal algorithms while trying to avoid compromising their optimality.
It is interesting to see the use of ideas from online portfolio selection algorithms in the bandit setting. As pointed out by the authors, while this connection was previously observed in developing concentration inequalities for K-inf by Agrawal et al. 2021, exploiting it to get fast algorithms is an interesting step in solving the computational aspect of K-inf, which, in my opinion, can be useful in other bandit settings as well.
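To make the online-learning connection concrete, here is an illustrative sketch (our own reconstruction of the general idea, not the paper's OMED/OIMED; the helper name and step sizes are our choices): a no-regret learner is run on the dual variable of K-inf, and the running average of the per-round dual payoffs tracks K-inf up to the learner's regret divided by the number of rounds.

```python
import numpy as np

def online_kinf_estimate(stream, mu, B, eta=1.0):
    """Running estimate of K_inf(F, mu) for rewards bounded above by B,
    built by projected gradient ascent on the dual variable
    lam in [0, 1/(B - mu)].  Each round collects log(1 - lam * (x - mu));
    the average of these terms tracks K_inf up to regret / t."""
    lam, lam_max, total = 0.0, 1.0 / (B - mu), 0.0
    for t, x in enumerate(stream, start=1):
        total += np.log(1.0 - lam * (x - mu))
        # gradient (in lam) of the per-round dual objective log(1 - lam*(x - mu))
        grad = -(x - mu) / (1.0 - lam * (x - mu))
        # ascent step with 1/sqrt(t) schedule, projected onto the feasible set
        lam = float(np.clip(lam + (eta / np.sqrt(t)) * grad, 0.0, lam_max * (1 - 1e-9)))
    return total / t
```

For a point mass at 0.2 with mu = 0.5 and B = 1, the estimate approaches log(1.6), the value of K-inf in that case; when the stream's mean already exceeds mu, the dual variable stays at 0 and the estimate is 0.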
Weaknesses: 1. The assumption on the no-regret algorithm in the second approach is a bit strange.
2. The paper can be a bit hard to read for someone new in the community.
- Before introducing the fast MED algorithms in Section 2, it would be good to introduce the basic idea of MED algorithms (both IMED and MED) or how K-inf appears in them (or point the reader to the relevant appendix where it is discussed, if space is a concern).
- A one-line justification for the re-formulation of K-inf below line 126.
- Details of why K-inf computations only happen O(log T) times, in line 112 of the paper. Though the reader is referred to Section 3 for discussion, I couldn't find a discussion on this there either.
3. In the introduction, the authors mention that KL-UCB requires several computations of K-inf per step, which I believe is not necessary - see, for example, Agrawal et al., 2021 (Appendix D), where the index is expressed as a single 2D optimization problem. A similar approach can be used for the KL-UCB algorithm in the setting considered here.
4. Minor comments:
- Line 142: is --> are
- Line 143: This is, for example, ...
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the ideas be extended beyond bounded distribution class? For example, to the moment-bounded class considered in Agrawal et al., 2021 or even the centered-moment bound class considered in Baudry et al., 2023? What would the challenges be in extending?
2. Could you clarify how you obtained the inequality after the data-processing inequality on Page 30, line 703?
3. In Figure 3, since we are looking at performance of OIMED and FIMED, it will be good to include IMED as well.
4. Also see weakness above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and your precise questions, as well as your helpful suggestions.
We first answer the points detailed in the Weaknesses section of your review.
* (1) In our bandit algorithm, a sub-linear upper bound on the portfolio regret avoids over-exploration while a sub-linear lower bound avoids under-exploration. The requirement for a lower bound is indeed unusual in the literature on portfolio selection, making this question an open problem for future work.
* (2) We understand your point, we will follow your suggestions to make some parts of the paper easier for non-experts. We will point to Appendix A.2 where IMED (Algorithm 2) and MED (Algorithm 3) are detailed.
* Still (2): KL-inf computations only happen when an (empirically) sub-optimal arm is pulled, so the upper bound on the number of pulls in Theorem 1 directly translates into an $\mathcal{O}(\log(T))$ upper bound on the number of KL-inf computations. We will clarify this in the revision.
* (3) It is true that for our claim in the introduction we assume a naive implementation of KL-UCB using a linesearch on $\mu$, which may be optimized e.g. using the method of [4]. However, that method makes the cost of KL-UCB similar to that of MED or IMED, not significantly better. We will change our description l. 57 to account for that.
* (4) Thank you for noticing these typos.
We now answer the questions asked in the dedicated section of your review.
* (1) Extending our approach for the non-parametric families of distributions considered in [3] and [4] is indeed a natural direction for future works. Several challenges arise, making this extension non-trivial. First, the $\mathcal{K}_{\text{inf}}$ is not necessarily convex in its second argument in these settings (we exhibited convex then concave behavior in preliminary experiments for the centered family of [3]). As a consequence, the analysis of the pre-convergence term of FMED and FIMED would not be as simple as in the bounded case, or the algorithms may require modifications. Then, for the adaptation of OIMED and OMED the feasible set of the parameters $ (\lambda_1, \lambda_2)$ of the dual problem is of the form $\mathcal{S}= \{ (\lambda_1, \lambda_2) | \forall x \in \mathbb{R}, \; 1-\lambda_1(x-\mu) - \lambda_2 (B-h(|x|)) \geq 0 \}$, which is more difficult to map to the simplex than $\lambda \in \left[0, \frac{1}{B-\mu}\right]$ in the bounded case.
* (2) p. 30, l. 703: we skipped the following steps, and will add them in the revision to improve clarity. Denote by $G_\star$ the distribution satisfying $$K_{\text{inf}}(F_k, \mu^\star) = \text{KL}(F_k, G_\star),$$
and by $G_\star^{X_M}$ its discretized counterpart. Then, data processing inequality states that
$$K_{\text{inf}}(F_k, \mu^\star) = \text{KL}(F_k, G_\star) \geq \text{KL}(F_k^{X_M}, G_\star^{X_M}),$$
and we obtain l. 703 using that $\text{KL}(F_k^{X_M}, G_\star^{X_M})$ is itself larger than the KL-inf between $F_k^{X_M}$ and $\mu^\star$, by definition of this function.
* (3) We will include IMED in Figure 3 in the revision. As in Figure 2, its curve is superimposed on that of FIMED. In all our experiments the difference between the two algorithms is negligible while FIMED is much faster, showing the strength of this approach.
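The bounded-support dual mentioned in the responses above (a one-dimensional concave maximization over $\lambda \in [0, \frac{1}{B-\mu}]$, going back to Honda and Takemura's work) can be sketched numerically. This is an illustration with a helper name of our own, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def empirical_kinf(samples, mu, B):
    """Empirical K_inf(F_n, mu) for distributions supported on [b, B], via the
    1-D dual: max over lam in [0, 1/(B - mu)] of (1/n) * sum log(1 - lam*(x_i - mu))."""
    x = np.asarray(samples, dtype=float)
    hi = (1.0 / (B - mu)) * (1.0 - 1e-12)  # stay strictly inside the feasible set
    res = minimize_scalar(lambda lam: -np.mean(np.log(1.0 - lam * (x - mu))),
                          bounds=(0.0, hi), method="bounded")
    return max(0.0, -res.fun)  # guard against tiny negative values at lam ~ 0
```

For a point mass at m < mu the closed form is log((B - m)/(B - mu)), which the sketch reproduces; when the empirical mean already exceeds mu, the optimum is at lam = 0 and K-inf is 0. The cost per call grows with the number of samples, which is exactly the bottleneck the fast variants are designed to avoid.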
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal by authors
Comment: I thank the authors for their responses and acknowledge reading the entire thread of rebuttals. I am satisfied with the authors' responses and retain the score. | Summary: In this paper they provide fast optimal algorithms for (non-parametric) stochastic bandits.
Specifically, they consider the MED family of algorithms that require the computation of $\mathcal{K}_{inf}$.
Based on this family of algorithms they construct new variants that require the exact computation of $\mathcal{K}_{inf}$ for one arm and an approximation of $\mathcal{K}_{inf}$ for the other arms. Although these variants (FMED, FIMED) achieve computational speedups, they store all the rewards in memory. To address this issue, they use portfolio algorithms.
Strengths: - This work addresses the well-known issue of optimal bandit algorithms being impractical due to high computational complexity.
I think this is an important step towards practical optimal bandit algorithms.
- Novel (algorithmic) use of portfolio algorithms to estimate $\mathcal{K}_{inf}$.
- Well written
- Experiments prove the applicability of the proposed algorithms.
Weaknesses: The regret assumption for the portfolio algorithms should perhaps be mentioned upfront, i.e., in Section 2 (instead of at the end of Section 3).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is kl in line 78? Please clarify.
- How coupled is your approach with the knowledge of $b$ and $B$, the lower and upper bounds on the distribution support?
In case we don't know these bounds exactly, does under- or over-estimating these parameters lead to poor performance? Specifically, how is performance affected?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging evaluation of our work and for your helpful questions and suggestions.
* **Portfolio regret assumption** We agree that this crucial assumption should be mentioned sooner in the paper, we will follow your suggestion and introduce it already in Section 2.
* **Questions -- $\text{kl}$:** It denotes the Kullback-Leibler divergence between Bernoulli distributions, $$\forall (p, q) \in [0,1]\times (0,1),\; \text{kl}(p,q)=p\log(p/q) + (1-p) \log((1-p)/(1-q)).$$
We indeed forgot to define it; thank you for noticing.
* **Questions -- knowledge of $[b,B]$:** The strength of our approach is that our algorithms do not require knowledge of $b$, while approaches relying e.g. on re-scaling are very sensitive to it. It just needs to be finite for our analysis, and $B-b$ multiplies some of the second-order terms of our regret bounds (which would be the case with any other standard algorithm like UCB). On the other hand, the knowledge of $B$ is crucial, but this is true for any bandit algorithm working with bounded distributions. Over-estimation can lead to sub-optimality, but preserves the logarithmic regret. In fact, if the bound used by the algorithm is $B+\gamma$ for some $\gamma>0$ then it would be optimal if the true model was ``the distributions are supported on $[b, B+\gamma]$''. This is a broader family of distributions than the distributions supported on $[0,B]$, which can lead to sub-optimal (but logarithmic) regret. On the other hand, under-estimation may lead to linear regret. Hence, if the practitioner does not know $B$ it is always preferable to over-estimate it.
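The Bernoulli $\text{kl}$ defined in the response above is easy to compute directly; a minimal sketch (hypothetical helper name, using the standard convention $0 \log 0 = 0$):

```python
import math

def bernoulli_kl(p, q):
    """kl(p, q) = p*log(p/q) + (1-p)*log((1-p)/(1-q)),
    with the convention that a term with weight 0 contributes 0."""
    def term(x, y):
        # contribution x * log(x / y), set to 0 when x == 0
        return 0.0 if x == 0.0 else x * math.log(x / y)
    return term(p, q) + term(1.0 - p, 1.0 - q)
```

For example, kl(0, 0.5) = log 2, and kl(p, q) = 0 exactly when p = q.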
---
Rebuttal Comment 1.1:
Comment: Thanks for your answers. | Summary: This paper considers the classic multi-armed bandit problem. The focus is on developing asymptotically optimal algorithms with efficient computational and memory complexity. The results for the proposed algorithms are reported in Table 2. FMED and FIMED compute the KL divergence for the arm that is pulled while using first-order Taylor expansions for the other arms. In addition, an online portfolio selection algorithm is used to estimate the KL divergences, which further improves the computational and memory performance.
Strengths: The paper contributes to the bandit literature by improving the computational and memory requirements of asymptotically optimal algorithms with non-parametric distributions (where it is assumed that an upper bound on the rewards is known).
Weaknesses: Although the results are interesting, they rely on known results from Honda and Takemura [2010] on approximating the KL divergence and on results on online portfolio selection. This limits the significance of the contribution of the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could the authors provide more details on the analytical challenges beyond using the results from Honda and Takemura [2010] and online portfolio selection? This seems important, as this is mainly a theoretical paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper is mainly a theoretical paper, targeting the computational and memory requirements of existing algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comment; in the following we hope to address your concern about the technical contribution of our paper. Though the results from Honda \& Takemura (2010) and the literature on portfolio selection are the starting point of our work, we believe that we solved several significant technical challenges to obtain our results. Hence, we respectfully disagree with the opinion that the technical aspect of the paper may limit the significance of our contribution, and would be happy to discuss more precise points during the discussion phase.
**Outline of our main technical contributions:** For the analysis of FMED and FIMED we had to upper bound new terms involving the variation of the best empirical arms (Var-Best and Transition-Best in the proofs). We believe that this part (l. 479-496) is non-trivial. Furthermore, we point out that our new way to upper bound the term PRE-CV$_{\texttt{IMED}}$ is of independent interest, since it largely simplifies the original proof of Honda \& Takemura (2015).
Then, for the analysis of OMED and OIMED we use existing results on portfolio selection only through the assumption on the portfolio regret. This provides only a small part of the result, and a significant challenge remains in upper bounding the regret due to the portfolio bias. Solving this challenge led us to propose the current form of OMED and OIMED as duel-based algorithms, and their analysis is quite non-standard. Thus, the proof of Theorem 2 (Appendix C) presented a significant technical challenge. | Summary: This paper studies non-parametric stochastic bandits. In particular, the authors propose algorithms named Fast (Indexed) Minimum Empirical Divergence (FMED, FIMED) and Online (Indexed) Minimum Empirical Divergence (OMED, OIMED). These algorithms are designed based on Minimum Empirical Divergence (MED). Regret guarantees comparable to those of MED are established for FMED, FIMED, OMED, and OIMED. In addition, these new algorithms have much better computational complexities. Numerical experiments have been conducted to support these claims.
Strengths: This problem is well-motivated and the paper is clearly written and easy to follow. The paper also presents a complete story from motivation to algorithmic design to theoretical guarantees and empirical justifications.
Weaknesses: I have listed a couple questions in the “Questions” part of this review.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It would be great if the authors could elaborate on the result established by Theorem 1. For example, the bound is dependent on $\epsilon$—can $o_{\epsilon}$ be enumerated and this bound be optimized over $\epsilon$ to get a more interpretable bound?
2. In addition, it would be helpful to include a more detailed comparison with Honda and Takemura 2015 on both the proof techniques and the regret bound and computational complexity of the algorithms.
3. Section 4 provides nice empirical justifications on the superior performance and computational complexity of the proposed algorithms. Is it possible to replicate the experiments conducted in Honda and Takemura and compare those algorithms there?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your questions, that we hope to address in the following.
* **Second-order terms** In the revision we will detail the scaling of the $o_{\epsilon}(.)$ term in Theorem 1, and include the following short discussion. In our proof, all its components are explicit, so the scaling in $\epsilon$ can be recovered: we obtain $\epsilon^{-6}$, that allows to get a sub-linear minimax bound of $\mathcal{O}(T^{5/6})$ if e.g. $\epsilon = \Delta_k/2$ is chosen, though not the optimal rate of $\mathcal{O}(\sqrt{T})$. We can make several remarks on this result. First, the same scaling is obtained for the vanilla MED (l. 517) with the proof techniques presented in [3]. Our fast implementations don't deteriorate that bound. This result is likely to be not tight, but improving it may require a much more involved analysis. A recent paper [2] established that MED is minimax optimal for Bernoulli distributions, but the analysis is very technical. Finally, to the best of our knowledge it remains open to prove that there exists an algorithm that is both minimax and asymptotically (instance) optimal in bounded bandits (i.e. with the true $\mathcal{K}_{\text{inf}}$ in the instance-dependent regret, not the $2\Delta^2$ or $\text{kl}$ approximation as for KL-MS [2]).
* **Comparison with Honda \& Takemura (2015)** Comparing the steps of the analysis would be intricate since they follow a quite different path. Our analysis is more inspired by the recent works [1] and [3]. We can state that the regret bound of Honda \& Takemura contains a $\mathcal{O}(\Delta^{-10})$ term (proof of their Corollary 4), so our result is tighter for small $\Delta$. However their bound holds for a more general case than ours since semi-bounded supports are allowed ($b=-\infty$ with our notation).
Regarding the experimental aspect, in our revision we will replicate the experiments of Honda \& Takemura and put the results in our Appendix F, already containing additional experiments. | Rebuttal 1:
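To make the $\epsilon$-scaling remark above concrete, here is a back-of-envelope reconstruction of the $T^{5/6}$ claim (the constants and the exact form of the bound are our assumptions, not taken from the paper):

```latex
% With o_\epsilon(\cdot) \sim \epsilon^{-6} and the choice \epsilon = \Delta_k / 2,
% the contribution of a sub-optimal arm k to the regret is at most
\Delta_k \min\!\left( T, \; \frac{C \log T}{\mathcal{K}_{\text{inf}}} + C' \Delta_k^{-6} \right)
\;\lesssim\; \min\!\left( \Delta_k T, \; C'' \Delta_k^{-5} \right)
\quad (\text{up to } \log T \text{ factors}),
% and the two branches balance at \Delta_k \asymp T^{-1/6},
% giving a worst-case regret of order T^{5/6}.
```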
Rebuttal: We thank the reviewers for their insightful comments on the paper, and appreciate their overall positive feedback.
Following the suggestions of all reviewers, our revision will mainly serve the purpose of providing additional intuitions to some technical parts of the paper. We also thank the reviewers for pointing some typos and specific points that required clarifications.
We provide a detailed response under each individual review, and would be delighted to provide further insights if needed during the discussion phase.
In our responses we refer to the following references:
[1] J. Bian and K. Jun. Maillard sampling: Boltzmann exploration done optimally. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
[2] H. Qin, K. Jun, C. Zhang. Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards, 2023.
[3] D. Baudry, K. Suzuki, and J. Honda. A general recipe for the analysis of randomized multi-armed bandit algorithms, 2023.
[4] S. Agrawal, S. Juneja, and W. M. Koolen. Regret minimization in heavy-tailed bandits. In Conference on Learning Theory (COLT), 2021. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies asymptotically optimal algorithms for regret minimization in the multi-armed bandit setting with bounded reward distributions. The main contribution is an efficient approximation of the KL constraint of the lower bound, leading to faster algorithms with (asymptotic) regret guarantees. The proposed approach extends MED / IMED by optimizing a lower bound on the KL, and using online learning algorithms to estimate the KL constraint. The claims are corroborated with numerical experiments on real-world data for crop management.
Strengths: Asymptotically instance-optimal bandit algorithms are well-studied, however existing approaches are expensive to compute because any optimal approach needs to (approximately) solve a linear program with KL constraints. Developing more efficient and practical algorithms is an import problem and this paper makes significant progress towards this direction. Both the relaxation of the KL and the use of online algorithms to compute K_inf are interesting novel ideas.
The paper is generally well-written, though fairly technical, and could use some more guidance/intuition for the reader in a few places (e.g. how is the loss of the online algorithm connected to K_inf? how is the bias of the portfolio algorithm controlled?)
Related work is adequately discussed. Note that the use of online algorithms for approximating/minimizing KL objectives in the bandit setting also appears in related contexts, e.g. [1-3].
[1] Foster, Dylan, and Alexander Rakhlin. "Beyond ucb: Optimal and efficient contextual bandits with regression oracles." In International Conference on Machine Learning, pp. 3199-3210. PMLR, 2020.
[2] Foster, Dylan J., Sham M. Kakade, Jian Qian, and Alexander Rakhlin. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021).
[3] Kirschner, Johannes, Tor Lattimore, Claire Vernade, and Csaba Szepesvári. "Asymptotically optimal information-directed sampling." In Conference on Learning Theory, pp. 2777-2821. PMLR, 2021.
Weaknesses: The approach appears to be tailored to bounded reward distributions, whereas in general, asymptotically optimal algorithms are known for a much more general class of settings and reward distributions. It is unclear if the proposed techniques can be extended beyond the current setting. It would be great if the authors could comment on this point.
As a minor comment, while I appreciate the use of real-world data, the simulation of the crop-management tasks is run over 10000 rounds. Is this realistic in this type of applications? Do the algorithms perform well over much smaller horizons?
Can anything be said about finite-time or worst-case performance of the algorithms?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The main result (Theorem 1) uses small-o-notation to suppress the lower-order terms. Are these terms controlled in the analysis? Could you provide a more detailed discussion of the finite-time performance of the algorithm, both from a theoretical and empirical point of view?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: * The online algorithm used to approximate the KL is assumed to have a bound on the absolute value of the regret. This is a rather unusual requirement and strongly limits the choice of online learning algorithms, as the authors discuss at the end of Section 3.
* The approach requires an upper bound on the best reward, mu_max < B. It looks like this can easily be satisfied by replacing B by B + eps for small eps > 0. Does the bound depend on the gap B - mu_max? Is knowing B/mu_max a strong assumption?
I also suggest adding a more detailed explanation/clarification for the following points:
* eq below line 126: why do we get this equality? (How) Is N_k(t) defined?
* line 164-168: The dueling procedure is hard to understand, i.e. what is a "greedy duel"?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and for proposing several ways to further improve the clarity of our paper. In the following we answer to the points raised in your review, in order. Please feel free to ask for any further clarification during the discussion phase.
* **Remarks in the Strengths section:** We will move Equation (13) to Section 2.2 in order to provide a clear summary of the link between $n \mathcal{K}_{\text{inf}}(F_n, \mu)$ and the quantities of the portfolio selection algorithm in this section.
Also, thank you for pointing us to additional related works. We will add them to our literature review.
* **Extension beyond bounded distributions:** Our approach is indeed tailored for bounded distributions, but we believe that it can be extended to other settings. Please note that in several usual settings MED/IMED are in fact easy to run and memoryless, so there is no incentive to modify their vanilla implementation: parametric families of distributions [3] (single parameter exponential families, Gaussian distributions), or Sub-Gaussian distributions (Maillard Sampling, [1]). However, as pointed out by reviewer zxKa, our approach may be interesting for some other non-parametric models presented in [3] (in which MED and TS approaches for these settings are analyzed) and [4], with families of distributions characterized by a moment condition of the form $\mathbb{E}[h(|X|)] \leq B$ for some known convex function $h$ and constant $B$. In such setting, $\mathcal{K}_{\text{inf}}$ can be expressed as the solution of an optimization problem, very similarly to the bounded case. There is hope that we can adapt our algorithms to these settings, but this is not straightforward (see our response to reviewer zxKa). This is a direction that we would like to consider in a future work.
* **Real-world data:** We chose the crop yield dataset because it was available and provided realistic examples of bounded distributions for which optimal algorithms are largely superior to sub-optimal ones using $\text{kl}$ or $2\Delta^2$ as a proxy for $\mathcal{K}_{\text{inf}}$. We agree that obtaining $10^4$ points is unrealistic in that application and that our considerations of computational cost could have been better illustrated in another setting.
* **Finite-time/worst-case performance/$o$ in Theorem 1:** In the revision we will detail the scaling of the $o_{\epsilon}(.)$ term in Theorem 1, and include the following short discussion. In our proof, all its components are explicit, so the scaling in $\epsilon$ can be recovered: we obtain $\epsilon^{-6}$, which yields a sub-linear minimax bound of $\mathcal{O}(T^{5/6})$ if e.g. $\epsilon = \Delta_k/2$ is chosen, though not the optimal rate of $\mathcal{O}(\sqrt{T})$. We can make several remarks on this result. First, the same scaling is obtained for vanilla MED (l. 517) with the proof techniques presented in [3], so our fast implementations do not degrade that bound. This result is likely not tight, but improving it may require a much more involved analysis. A recent paper [2] established that MED is minimax optimal for Bernoulli distributions, but the analysis is very technical. Finally, to the best of our knowledge it remains open to prove that there exists an algorithm that is both minimax optimal and asymptotically (instance) optimal in bounded bandits (i.e. with the true $\mathcal{K}_{\text{inf}}$ in the instance-dependent regret, not the $2\Delta^2$ or $\text{kl}$ approximation as for KL-MS [2]).
* **Upper bound on the best mean:** The upper bound $\mu_{\text{max}}< B$ is necessary so that the bias of the portfolio algorithm does not become infinite; some of the second-order terms of the regret scale as $\frac{1}{B-\mu_{\text{max}}}$. We first highlight that all existing optimal algorithms need to know the upper bound $B$, so this assumption is standard. On the other hand, deciding whether knowing $\mu_{\text{max}}<B$ is a strong assumption boils down to determining whether our bandit problem may contain distributions that are highly concentrated around $B$. In our agriculture example this would be very unrealistic, and it is reasonable to expect that experts can provide well-separated estimates of $B$ ($\approx$ highest ever recorded yield) and $\mu_{\text{max}}$ (how good the yield can be on average, knowing that there will inevitably be poor seasons). In the absence of such expert knowledge, the solution of artificially increasing the upper bound outside of the support is indeed valid.
* **Points to clarify:** $N_k(t)$ is defined on l. 16 as the number of pulls of arm $k$, and l. 126 follows directly from the definition of $\mathcal{K}_{\text{inf}}$ as an expectation over the empirical distribution (Eq. (4)). We use ``greedy duels'' as a convenient term for duels where the algorithm chooses (without sampling) the arm with the maximum empirical average (i.e., the output of the greedy algorithm). These duels are necessary to guarantee a large enough sample size for the leader when updates of the online estimate of $\mathcal{K}_{\text{inf}}$ are performed.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarifying comments. | null | null | null | null | null | null |
Brain encoding models based on multimodal transformers can transfer across language and vision | Accept (poster) | Summary: There is a rapidly growing literature analyzing how deep learning models' representations have predictive power over fMRI brain measurements. These papers typically train a regressor (usually a linear regressor) to predict the fMRI measurements from the neural models' representations. Current encoding models are typically trained and tested on fMRI brain activity for each modality (i.e., language, visual, or auditory) in isolation. They then evaluate predictive power by measuring the correlation between predictions and actual measurements.
This paper contributes to that literature by using representations from a multimodal Transformer (BridgeTower) that extracts aligned representations of concepts in language and vision, before using them in the above pipeline. Since language and vision rely on similar concept representations, the authors investigated how well the multimodal Transformer representations transfer across fMRI responses to stories and movies. By comparing the correlations between actual fMRI responses and those predicted from multimodal versus unimodal representations, the authors conclude that multimodal transformers learn more aligned representations of concepts in language and vision. Finally, the authors perform a cross-modal experiment testing how well language encoding models can predict movie-fMRI responses from narrative story features (story → movie) and how well vision encoding models can predict story-fMRI responses from movie features (movie → story).
Strengths: The paper contains the following key contributions:
* The novelty of this work: using a multimodal Transformer as the encoder architecture to extract the aligned concept representations for narrative stories and movies to model fMRI responses to naturalistic stories and movies, respectively.
* Encoding models trained on brain responses to one modality could accurately predict brain responses to the other modality.
* Cross-modality performance was higher for features extracted from multimodal transformers than for linearly aligned features extracted from unimodal transformers.
Originality:
* The idea that concepts in language and vision are aligned in the brain, and the demonstration of that alignment using multimodal transformer representations.
* Mapping story features -> movie fMRI and movie features -> story fMRI is interesting.
Clarity: The paper is written well. The information provided in the submission is sufficient to reproduce the results.
Significance: The idea of using a multimodal Transformer encoder model to extract aligned language-visual concept representations to predict fMRI responses is exciting. The paper has novel cognitive insights about language and visual ROIs; clear experimental evaluation, and adequate related work.
Quality: The paper supports its claims with enough details. The paper is well-written and easy to follow.
Weaknesses: * Since the narrative stories used in the story experiment are completely different from the short movie clips, what is the percentage overlap of concepts between the two modalities?
* This problem would be more interesting if the authors could use the same information for both listening and watching (i.e., the same subject watching a movie and listening to the same story), similar to the following work, in which the same subject read and listened to the same narrative story: Deniz F., Nunez-Elizalde A.O., Huth A.G., Gallant J.L., The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality. Journal of Neuroscience
* It could be interesting if the authors could use captions for the videos and compare the performance of story-movie and captions-movie. Similarly, stories-captions and captions-stories.
* Which regions have a higher estimated noise ceiling in listening to and watching a movie on the same subject?
* What is the estimated noise ceiling when using the subject's story voxel to predict the subject movie voxels?
* The authors missed some recent works that studied multimodal Transformers for brain encoding.
Authors missed the following papers:
Interpreting Multimodal Video Transformers Using Brain Recordings, Dota et al. 2023, https://openreview.net/pdf?id=p-vL3rmYoqh
Visio-Linguistic Brain Encoding, Oota et al. 2022, https://aclanthology.org/2022.coling-1.11.pdf
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Did the authors fine-tune the BridgeTower multimodal Transformer on the current dataset?
* Is there any effect on encoding model performance when adding noise to one modality's features and predicting the other modality's fMRI responses?
* What if we initialize the model with random weights to perform neural encoding?
* It could be interesting if the authors could test encoding performance for both single-stream and dual-stream multimodal Transformer models.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed several limitations and made future directions for the research community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and making these suggestions. We have addressed your concerns in order below:
>Since the narrative stories used in the story experiment are completely different from short movie clips…
Thank you for raising this point. We will describe the concepts contained in the story and movie stimuli in the Appendix of the final paper.
To approximate the overlap in concepts, we clustered GloVe vectors into 100 semantic categories. We then created 100-dimensional story and movie category vectors. The story category vector indicates how frequently each of the 100 categories occurs in the stories. The movie category vector indicates how frequently each of the 100 categories occurs in the movies, based on WordNet labels assigned to each movie frame [1]. The linear correlation between the story and movie category vectors was 0.5, indicating a reasonable degree of overlap.
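The overlap estimate described above can be sketched as follows. This is a minimal illustration with synthetic stand-ins: the random vectors replace real GloVe embeddings, and the word-id streams replace the actual story transcripts and WordNet frame labels, so all names and values here are hypothetical rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for pretrained GloVe vectors (vocab_size x dim); the real
# analysis would load actual GloVe embeddings.
vocab_size, dim, n_categories = 1000, 50, 100
glove = rng.normal(size=(vocab_size, dim))

# Cluster the word vectors into semantic categories.
categories = KMeans(n_clusters=n_categories, n_init=10,
                    random_state=0).fit_predict(glove)

def category_vector(word_ids, categories, n_categories):
    """Frequency of each semantic category among the words in a stimulus."""
    counts = np.bincount(categories[word_ids], minlength=n_categories)
    return counts / counts.sum()

# Hypothetical word-id streams standing in for the story transcripts
# and the WordNet labels assigned to each movie frame.
story_words = rng.integers(0, vocab_size, size=5000)
movie_words = rng.integers(0, vocab_size, size=5000)

story_vec = category_vector(story_words, categories, n_categories)
movie_vec = category_vector(movie_words, categories, n_categories)

# Linear (Pearson) correlation between the two category vectors.
overlap = np.corrcoef(story_vec, movie_vec)[0, 1]
print(f"category overlap r = {overlap:.2f}")
```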
We plotted the category vectors in Figure R2, which will be included in the Appendix of the final paper. Categories like location proper nouns (e.g. “Brooklyn”, “Ithaca”) occurred more in the stories. Categories like nature words (e.g. “bloom”, “soil”) occurred more in the movies. To plot the results on a log scale, we filtered out categories which did not occur in at least one of the stimulus modalities. However, such categories typically occurred infrequently in the other modality as well.
>This problem is more interesting if authors can use the same information for both listening and watching a movie…
We agree that using stimuli that are more closely matched in terms of semantics and timescales is an important direction for future work (line 217). We will emphasize this future direction in the discussion of the final paper.
>It could be interesting if authors could use captions for the videos and compare the performance of story-movie and captions-movie…
Since the movies were constructed by concatenating a sequence of 10-20 s video clips, there are unfortunately no high-quality captions readily available for this comparison. However, we agree that this is an interesting direction for future work. Previous studies have found that encoding models trained on caption features could be used to predict brain responses to images, albeit not as accurately as encoding models trained on image features (see [5]).
>Which regions have a higher estimated noise ceiling in listening to and watching a movie…
We evaluated our encoding models on single-trial responses, rather than responses to multiple repeats of the same stimulus. While evaluating on single-trial responses to a large number of stimuli provides high semantic coverage, there is no way to estimate a noise ceiling using single-trial responses.
The dataset that we used [3] also includes responses to multiple repeats of a test stimulus. However, the dataset only provides the averaged responses across repeats, whereas noise ceiling estimation requires having the responses to each individual repeat.
>What is the estimated noise ceiling when using the subject's story voxel to predict the subject movie voxels?
The noise ceiling is a function of the test data rather than the encoding model, so the $source→target$ noise ceiling will be the same as the $target→target$ noise ceiling. As we expect cross-modality performance to be lower than within-modality performance, $r_{target→target}$ provides an approximate lower bound on the noise ceiling for $r_{source→target}$ (Figure 3).
>The authors missed some recent works that studied multimodal Transformers for brain encoding…
Thank you for linking to these papers. We agree that these studies are highly relevant, and we will reference them in the final paper.
>Did the authors fine-tune the BridgeTower multimodal Transformer on the current dataset?
No, we did not. We used BridgeTower models that were pretrained on separate datasets (please refer to Section 2.1 for details).
>Is there any effect on encoding model performance when adding noise to one modality's features and predicting the other modality's fMRI responses?
To test this, we added Gaussian random noise to the stimulus features. For $story→movie$ transfer, prediction performance was robust to noise in the $story$ features. For $movie→story$ transfer, prediction performance slightly increased when noise was added to the movie features, which may result from the noise affecting negative as well as positive correlations. These results are shown in Figure R3.
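A minimal sketch of this kind of noise-robustness check, using a single synthetic voxel and a fixed linear encoding model in place of the actual BridgeTower-based models (all data, scales, and names here are stand-ins, not the real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_feat = 200, 32

# Hypothetical stimulus features and a fixed linear encoding model
# (stand-ins; the real features would come from BridgeTower layers).
features = rng.normal(size=(n_time, n_feat))
weights = rng.normal(size=n_feat)
response = features @ weights + rng.normal(scale=1.0, size=n_time)

def voxel_r(noise_scale):
    """Correlation between the actual response and the prediction
    computed from noise-corrupted features."""
    noisy = features + rng.normal(scale=noise_scale, size=features.shape)
    pred = noisy @ weights
    return np.corrcoef(pred, response)[0, 1]

# Prediction performance as a function of the feature noise level.
for scale in (0.0, 0.5, 1.0):
    print(f"noise scale {scale}: r = {voxel_r(scale):.2f}")
```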
>What if we initialize the model with random weights to perform neural encoding?
Previous studies have found that within-modality performance for randomly initialized neural networks is above chance, but below that of trained neural networks [13]. After fitting alignment matrices, we expect cross-modality performance to be above chance, but below that of trained unimodal neural networks (Section 5.4) and trained multimodal neural networks. We are currently waiting on results of this experiment, which we will include in the final paper. That said, we believe that the trained unimodal neural networks provide the most direct control for evaluating the effects of multimodal training.
>It could be interesting if the authors could test encoding performance for both single-stream and dual-stream multimodal Transformer models?
To test this, we performed many of the same analyses using KD-VLP [26] which is a single-stream multimodal transformer. These results are shown in Figure R4. We found very similar patterns of transfer performance across cortex. Across voxels and subjects, KD-VLP performance ($r_{story→movie} = 0.17$, $r_{movie→story} = 0.08$) was slightly worse than BridgeTower performance ($r_{story→movie} = 0.23$, $r_{movie→story} = 0.10$). However, these differences could be due to training data and training tasks, as well as model architectures. We will include this analysis in the Appendix of the final paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Dear authors,
Thanks for the rebuttal.
I appreciate the time and effort, especially in addressing all the questions. However, I still retain some reservations regarding the following inquiry:
"What is the estimated noise ceiling when using the subject's story voxel to predict the subject movie voxels?"
In light of Schrimpf et al.'s 2020 paper, where they managed to estimate the noise ceiling even with single-trial stimuli, I find it intriguing if the authors could elaborate more on their subsequent answer and potentially draw comparisons to the work conducted by Schrimpf et al. in 2020.
Schrimpf et al.'s 2020, The neural architecture of language: Integrative modeling converges on predictive processing, PNAS-2020
Your response stated: "We evaluated our encoding models on single-trial responses, rather than responses to multiple repeats of the same stimulus. While evaluating on single-trial responses to a large number of stimuli provides high semantic coverage, there is no way to estimate a noise ceiling using single-trial responses."
---
Reply to Comment 1.1.1:
Comment: Thanks for the response! The Schrimpf et al. paper uses data from other subjects to estimate noise ceilings for a held-out subject. As we understand it, this inter-subject approach essentially treats the responses from the other subjects as repeats of the responses from the held-out subject.
In our study, we computed model performance for each individual voxel. To estimate the Schrimpf et al. noise ceiling for a voxel in a held-out subject, we would need to find an analog of that voxel in every other subject. Since the subjects’ brains differ in anatomical and functional organization, this requires aligning voxels across subjects using anatomical or functional approaches.
Anatomical approaches are imprecise due to the variability of the semantic system (Huth et al. 2016). Concretely, we could project each subject’s responses into some shared space, and estimate the noise ceiling for the held-out subject’s voxel at coordinate (x, y, z) using other subjects’ voxels at coordinate (x, y, z). However, these voxels may not share any semantic tuning because fine-grained functional properties are not precisely coupled to anatomy. Computing a noise ceiling for a target voxel using unrelated voxels can lead to a low noise ceiling that inflates the results of our study.
Functional approaches typically rely on regions of interest (ROIs) defined using an independent localizer task. However, as mentioned in our response to Reviewer C9Qg, we do not believe that there is an effective multimodal localizer that would select the voxels that we are interested in evaluating. Functional approaches for voxel-level alignment are less common, and we believe that using them to derive a noise ceiling is beyond the scope of this paper.
We hope that this clarifies our comments on noise ceiling estimation: while it is possible to estimate a noise ceiling on single-trial data by treating data from other subjects as repeat trials, we believe that aligning data across subjects underestimates the noise ceiling for our analyses and diminishes the usefulness of such an estimate. | Summary: This paper aims to fit fMRI data using transformer models. Of note, the paper aims to extrapolate from models that fit fMRI responses to visual stimuli to be able to account for fMRI responses to what may presumably be language stimuli. The results throughout the paper are extremely weak but are presented as if the models can provide insights into the fMRI signals.
Strengths: The idea of comparing representations to visual stimuli and auditory stimuli is certainly very interesting.
The possibility of elucidating “conceptual” representations that transcend any one input modality is also quite interesting.
Weaknesses: It would be great to start by demonstrating that the fMRI data actually relates to the visual and story stimuli. This should be shown by rigorous description of the fMRI data, with actual units, error bars, and statistics. There is nothing in the current paper that suggests that there is any explainable variance in the fMRI data.
Even though the paper does not show that the fMRI data contain any information, assuming that there is some degree of correlation between the fMRI data and the stimuli, and after all the undisclosed maneuvering, the results indicate that the models simply cannot capture the fMRI data. Take Fig. 1b. The uncorrected correlation coefficient hovers around 0. After more maneuvering and correction, the authors manage to get to 0.01. In other words, there is no correlation between the model outputs and the fMRI data.
Figure 3 is more challenging to interpret because the authors stop reporting r values and report normalized r values. This is a poor practice. For example, it could be that r_{movie→movie}=0.03 and r_{story→movie}=0.01, and the ratio of 0.33 might confuse the reader into thinking that there is something there. But the bottom line is that all of the r values are negligibly small.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: This paper would need an enormous amount of work to reach a level where it provides a minimal scientific description of findings: an open discussion of all the steps involved, documentation of whether there is any reproducibility in the data, evidence that the fMRI signals relate to language, auditory, or visual stimuli, and finally, potentially, a role for neural network models. The current results leave it open whether the fMRI data relate to the stimuli and compellingly demonstrate that the neural networks cannot capture the fMRI data.
Other points.
Figure 4 includes enigmatic color scales that go from “-” to “+”. Color scales should have numbers.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors failed to describe all the multiple limitations and to adequately interpret their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and making these suggestions. We have addressed your concerns in order below:
>It would be great to start by demonstrating that the fMRI data actually relates to the visual and story stimuli…
We analyzed a publicly released dataset that has been used in previous studies relating fMRI responses to naturalistic stories and movies [1, 2, 3]. Furthermore, a large body of literature in the past two decades has demonstrated that fMRI responses to naturalistic stories and movies similar or identical to those used here contain meaningful information [2, 16, 17, 18], so a complete description of the fMRI dataset is out of the scope of this paper. In the final revision, we will emphasize that the fMRI data has been rigorously studied and modeled in prior peer-reviewed work.
The fMRI response for a cortical voxel is given by the time-course of the blood-oxygen level dependent (BOLD) signal. The information in the fMRI response is contained not in the absolute value of the BOLD signal, but in its time-course. The objective of an encoding model is to predict the response time-course from stimulus features [19]. As described in Section 4.1, encoding models are evaluated by predicting brain responses to novel stimuli and computing the linear correlation between the predicted and the actual response time-courses. If we can successfully predict responses in a voxel from stimulus features, then our model has successfully explained some variance in the fMRI data.
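For readers unfamiliar with this framework, the evaluation procedure described above can be sketched with a simple linear encoding model. Everything below is a synthetic stand-in (random data replaces BridgeTower features and BOLD time-courses, and the plain ridge fit is a simplification of the actual models), intended only to illustrate the prediction-and-correlation logic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 400, 100, 50, 20

# Synthetic stand-ins for stimulus features and voxel response time-courses.
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))

# Fit a ridge-regularized linear encoding model (closed form).
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Evaluate on held-out stimuli: per-voxel Pearson r between the
# predicted and actual response time-courses.
Y_pred = X_test @ W_hat
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print(f"mean voxel r = {r.mean():.2f}")
```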
We report prediction performance in Figures 2, 3, and 5. Figure 2a shows that many voxels have cross-modality correlations greater than 0.1. Figure 3b shows that cross-modality correlations can reach 0.5 for $story→movie$ and 0.2 for $movie→story$, while within-modality correlations can reach 0.6 for $movie→movie$ and 0.4 for $story→story$. This demonstrates that there are many cortical regions where the encoding models explain a substantial fraction of the data variance.
>Even though the paper does not show that the fMRI data contain any information…
Figure 2b quantifies encoding model performance by taking the average correlation across all cortical voxels. As described in our reply to Reviewer C9Qg, averaging across cortex is an unbiased way to compare different encoding models. However, averaging across cortex does yield conservatively low correlation values, since there are many voxels (e.g. those in motor cortex) which are not involved in story or movie perception. Figure 2a shows that many regions involved in story or movie perception have correlations greater than 0.1, and Figure 3b shows that these correlations can reach up to 0.5 for $story→movie$ transfer, and up to 0.2 for $movie→story$ transfer. In the final paper, we will emphasize that the average correlation across cortex reported in Figure 2b is a conservatively low metric.
>Figure 3 is more challenging to interpret because the authors stop reporting r values and report normalized r values…
Please note that Figure 3a only shows $r_{source→target}/r_{target→target}$ for voxels with statistically significant (see Appendix A.1) values of $r_{target→target}$, so the ratios will not be inflated by negligibly small denominators. Furthermore, $r_{source→target}$ is directly compared to $r_{target→target}$ in the Figure 3b histograms, which show that many regions have high cross-modality and within-modality performance. Finally, Figure 3 is not meant to be a normalization of the results in Figure 2, but a way to compare cross-modality performance to within-modality performance. Many previous studies have demonstrated that language and vision encoding models achieve good within-modality performance (line 20), so within-modality performance provides a meaningful ceiling for cross-modality performance [4].
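The normalization just described can be sketched as follows. The significance threshold below is a simple stand-in for the permutation test of Appendix A.1, and all correlation values are synthetic, so this only illustrates why small denominators cannot inflate the ratios:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

# Hypothetical per-voxel correlations (stand-ins for real encoding results).
r_target = rng.uniform(-0.1, 0.6, size=n_voxels)            # within-modality
r_source = r_target * rng.uniform(0.0, 1.0, size=n_voxels)  # cross-modality

# Keep only voxels with significant within-modality performance; a plain
# threshold stands in for the permutation test described in Appendix A.1.
significant = r_target > 0.1

# Ratio r_{source->target} / r_{target->target}, undefined elsewhere.
ratio = np.full(n_voxels, np.nan)
ratio[significant] = r_source[significant] / r_target[significant]

print(f"{significant.sum()} significant voxels, "
      f"median ratio = {np.nanmedian(ratio):.2f}")
```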
In the final paper, we will emphasize that previous language and vision encoding models achieve good within-modality performance (line 20).
>This paper would need an enormous amount of work to bring it to a level where it has a minimal scientific description of findings…
Our study builds upon a commonly used framework for modeling brain responses to naturalistic stimuli using features extracted from neural networks [20]. There is little debate that fMRI signals relate to language and visual stimuli, as demonstrated by studies that read information about the stimulus out of fMRI data [10, 17, 18, 21, 22], even using the same dataset as that used here ([23]: “Decoding the semantic content…”). There is also little debate that neural network encoding models can successfully predict brain responses to a single stimulus modality after being trained on responses to that stimulus modality [11, 12, 13, 24, 25]. In the final paper, we will emphasize how our study builds on previous studies. We will also describe the fMRI dataset and the encoding model framework in more detail in the Appendices.
The goal of our paper was to compare cross-modality performance to within-modality performance. We believe that our results convincingly demonstrate this, and that our analysis framework and the conclusions drawn from our results are comparable to previous fMRI modeling papers published in NeurIPS [11, 12].
>Other points. Figure 4 includes enigmatic color scales that go from “-“ to “+”. Color scales should have numbers.
Figure 4 shows the projection of encoding model weights onto principal components. We used “-” and “+” labels instead of numbers because the numerical scale of this projection is not meaningful.
---
Rebuttal Comment 1.1:
Title: Continued overinterpretation of fMRI signals
Comment: The authors rightly point out that there are many papers published with fMRI. There are also many papers published about extrasensory perception. This does not make the work true.
"The fMRI response for a cortical voxel is given by the time-course of the blood-oxygen level dependent (BOLD) signal." This is simply not true. Blood-oxygen level dependent refers to particular model that people often impose in this type of data. The authors are encouraged to report REAL data with REAL units.
It has been quite common now to use neural network activations to fit signals from voxels. Unfortunately, it remains highly unclear whether this has led to any conceptual advancement. For example, it could be that an image makes people happy, that happiness leads to slightly more head movements, and this leads to some artifacts that can be correlated with the image. One can replace happiness with a lot of other variables, and head movements with a lot of other artifacts. Unfortunately, such data fitting does not help much, especially when dealing with signals that remain highly unclear and uncontrolled like those in fMRI.
These concerns are especially notable when the fitting leads to such low performance as in the previous case. There are lots of indirect variables that could lead to the kind of small correlations reported here. It would be great to compare results with head movements, eye movements, attentional controls, and finally rigorous neurophysiological measurements.
What does it mean from the point of view of neuroscience to "average across cortex"??? The voxels themselves are already a massive averaging in space and time with little meaning. This is an average of averages.
In contrast to what the authors state, there is extensive debate about whether fMRI responses relate to language or vision, except within the fMRI community of course. It would be very useful to verify any of the claims with real neurophysiological data. Citing fMRI data to justify fMRI data is circular. What is needed is rigorous independent validation.
---
Reply to Comment 1.1.1:
Comment: These concerns are directed at the field of fMRI research, rather than the specifics of our current work. They are also contrary to the general consensus in the field of neuroscience. However, we appreciate that many readers may be unfamiliar with fMRI and neuroscience methods generally, so we will provide some brief context here.
First, many groups of researchers have independently established the correspondence between fMRI and the physiological processes in the brain, as well as between fMRI and other neurophysiological measures. This is a large body of literature spanning research across physics, biology, and neuroscience. Here are some representative examples:
- [1] and [2] both provide thorough literature reviews of the reasons and evidence showing why fMRI signals depend on blood oxygen metabolism. Note that as early as 1992, three independent groups were able to relate fMRI signals in human brains to task stimuli [3,4,5].
- Many additional studies (such as [6,7,8,9,10,11,12]) have validated fMRI responses by establishing a direct link between fMRI signals and the local neuronal activities measured by microelectrode recordings. This was shown in different animals (monkeys, cats, rats) and humans.
- Other studies (such as [13,14,15]) compared fMRI signals in human brains to ECoG and/or EEG data, and again found reliable links between these measures.
- Research has also shown that human cortical activities are reliable both within and across subjects in response to naturalistic stimuli, such as those used in our study (see [16] for a review). Again, this was demonstrated by ECoG and EEG data, in addition to fMRI data. The 3 measures are correlated [15].
Second, studies that related representations in deep neural networks and human fMRI signals have also been replicated using other measures. For example:
- [17] found converging results using fMRI and MEG.
- [18] found converging results using fMRI and ECoG.
In conclusion, there is rigorous evidence across decades of peer-reviewed literature showing independent validation of fMRI data, using various methods other than fMRI. This has led the overwhelming majority of neuroscientists to agree that the BOLD signal, as measured in the dataset we report, is a meaningful measure of brain activity that corresponds to internal representations and experiences of the model organism. We also note that these analyses are routinely corrected for physiological confounds such as head movements, breathing, etc. This is also the case for the dataset we used.
[1] Logothetis. 2003. The underpinnings of the BOLD functional magnetic resonance imaging signal. Journal of Neuroscience
[2] Kim & Ogawa. 2012. Biophysical and physiological origins of blood oxygenation level-dependent fMRI signals. Journal of Cerebral Blood Flow & Metabolism
[3] Bandettini, … 1992. Time course EPI of human brain function during task activation. Magnetic Resonance in Medicine
[4] Kwong, … 1992. Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation. Proceedings of the National Academy of Sciences (PNAS)
[5] Ogawa, … 1992. Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. PNAS
[6] Logothetis, … 2001. Neurophysiological investigation of the basis of the fMRI signal. Nature
[7] Niessing, … 2005. Hemodynamic signals correlate tightly with synchronized gamma oscillations. Science
[8] Mukamel, … 2005. Coupling between neuronal firing, field potentials, and fMRI in human auditory cortex. Science
[9] Smith, … 2002. Cerebral energetics and spiking frequency: the neurophysiological basis of fMRI. PNAS
[10] Logothetis. 2002. The neural basis of the blood-oxygen-level-dependent functional magnetic resonance imaging signal. Philosophical Transactions of the Royal Society of London
[11] Shmuel, … 2006. Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nature Neuroscience
[12] Nir, … 2007. Coupling between neuronal firing rate, gamma LFP, and BOLD fMRI is related to interneuronal correlations. Current Biology
[13] Debener, … 2005. Trial-by-trial coupling of concurrent electroencephalogram and functional magnetic resonance imaging identifies the dynamics of performance monitoring. Journal of Neuroscience
[14] Hermes, … 2012. Neurophysiologic correlates of fMRI in human motor cortex. Human brain mapping
[15] Haufe, … 2018. Elucidating relations between fMRI, ECoG, and EEG through a common natural stimulus. NeuroImage
[16] Hasson, … 2010. Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences
[17] Cichy, … 2016. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific Reports
[18] Schrimpf, ... 2021. The neural architecture of language: Integrative modeling converges on predictive processing. PNAS | Summary: This work seeks to identify multi-modal processing in the brain. The proposed technique can be summarized as follows: a language encoding model is trained to predict neural story responses from story representations, and a vision language encoding model is used to predict movie responses from movie representations. Then, the language encoding model is used to try to predict movie responses from movie representations. The quality of predictions is used to measure the amount of cross-modal information being processed in a given brain region. An analogous procedure is carried out for the vision encoding model.
Strengths: - As far as I can tell, the technique is novel. There are other works that look for multi-modality in the brain by trying to map multi-modal model activation to neural activation, but this is the first work to look for shared conceptual representation in the brain directly, by training two separate vision and language encoders and comparing the prediction performances after swapping the modalities.
- This method seems mostly sound, with some remaining areas that could still be clarified (see weaknesses)
Weaknesses: - The experiment described in section 5.1 and figure 2 is difficult to interpret on its own. The ability to predict visual response from a model trained on language stimuli is not enough to conclude that an area is multimodal. There is some uncertainty due to the fact that we don't know the quality of the alignment between feature spaces (discussed in section 2.2). In the limit, if the alignment between features is perfect, then the representation of an image of a cat can be perfectly aligned to the representation of the word "cat", and from there it becomes unclear whether a good $r_{movie\rightarrow story}$ is due to a good alignment or true multi-modality in the brain.
- Fortunately, this problem is partially addressed by normalizing the scores by within-modality performance, as is done in 5.2, but even then, a poor alignment could result in poor correlations across all regions. For both cases, it would be helpful to see the quality of the alignment, perhaps in the appendix.
- I had trouble understanding the methods discussed in section 5.3 "Encoding model principal components" (see questions below).
- My biggest concern with this work is significance to the NeurIPS community. The most interesting results, the identification of cross-modal areas in the brain, seem to be neuroscience related. The most relevant ML takeaways from this work are already well known, namely the benefits of multi-modal training.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - line 129: is it possible to be more specific about what the movie stimuli were like? Did the movies have dialogue? Music?
- Figure 2: The white lines denoting the ROIs are hard to see. How exactly are these ROIs obtained?
- Figure 2: It would also be helpful to label the visual area and the auditory areas on the inset 3D brains as well.
- Figure 3: For $r_{story\rightarrow movie}/r_{movie\rightarrow movie}$, is the upper bound of this quantity 1 by default? Or did it just so happen that none of the cross-modal scores were better than the within-modal scores
- Line 215-223 discuss reasons why the $r_{movie\rightarrow story}$ were lower than $r_{story \rightarrow movie}$. Is there a way to rule out the possibility that the alignment of feature spaces (discussed in 2.2) is simply worse in one direction?
- Line 229: By what metric are the "top" voxels selected?
- Line 231: How can stimulus features be projected onto each PC? Don't the stimulus features and the encoding weights occupy completely different spaces?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations and negative societal impact are not discussed, but are probably not relevant here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and making these suggestions. We have addressed your concerns in order below:
>The experiment described in section 5.1 and figure 2 is difficult to interpret on its own…
We respectfully disagree with this assessment. Take the suggested example where the BridgeTower representation for cats in vision is identical to the BridgeTower representation for the word "cat" in language, and we are evaluating if a voxel that responds to cats in vision is multimodal (i.e. if it responds similarly to the word “cat” in language). During training, the encoding model learns that the voxel has a response of $β_{cat}$ when cats appear in movies. During $movie→story$ transfer, the encoding model predicts a response of $β_{cat}$ when the word “cat” occurs in stories. Transfer performance will only be high if the voxel has an actual response of $β_{cat}$ to the word “cat” in language.
In other words, a region with good transfer performance must respond similarly to concepts in language and vision, which is how we operationalize multimodality in the brain (line 22). In the final paper, we will emphasize how our transfer metric $r_{source→target}$ (line 150) relates to our definition of multimodality (line 22).
>Fortunately, this problem is partially addressed by normalizing the scores by within-modality performance…
Thank you for raising this point. We will include our measure of alignment quality in the final paper.
We estimated alignment matrices using the Flickr30k dataset (line 103). To test alignment quality, we estimated alignment matrices on 80% of the image-caption pairs and evaluated them on the remaining 20%. We scored alignment quality by correlating the predicted and actual values for each feature across test pairs, and averaging the correlations across features. Finally, we normalized alignment scores by reporting the number of standard deviations above the mean of a null distribution, which we obtained by randomly shuffling the test captions and images.
These results are shown in Figure R1. The $caption→image$ matrices had a normalized score of 221.30 σ, while the $image→caption$ matrices had a normalized score of 197.28 σ. Thus in both directions the alignment was roughly 200 standard deviations better than random.
>I had trouble understanding the methods discussed in section 5.3…
The methods used in Section 5.3 were based on previous studies that used PCA to characterize semantic tuning in the brain [1, 2, 4, 5]. We will clarify these methods in Appendix A.2 of the final paper.
>My biggest concern with this work is significance to the NeurIPS community…
Many papers recently published in NeurIPS [11, 12, 15] have shown that understanding the relationships between human and artificial information processing systems can benefit both fields of study. We believe that our results relating multimodal transformers and multimodal representations in the brain will be of interest to neuroscientists, computer scientists, and the growing number of researchers who work in the intersection of the fields.
>line 129: is it possible to be more specific about what the movie stimuli were like…
The movie stimuli (https://gin.g-node.org/gallantlab/shortclips/src/master/stimuli) originally had dialogue and music, but the clips were presented silently to the subjects. We will clarify this in the final paper.
>Figure 2: The white lines denoting the ROIs are hard to see…
ROIs were identified using functional localizers described in [1]. The relevant ROIs for this paper are auditory cortex (AC; identified using segments of music, speech, and natural sounds) and primary visual cortex (V1; identified using rotating wedges and expanding / contracting rings). In the final paper we will make the ROI lines more visible and include details about ROI localization in the Appendix.
>Figure 2: It would also be helpful to label the visual area and the auditory areas on the inset 3D brains as well.
We will label the 3D brains in the final paper.
>Figure 3: For r_story→movie/r_movie→movie, is the upper bound of this quantity 1 by default…
$r_{s→m}/r_{m→m}$ does not have a strict upper bound at 1. For instance, $r_{s→m}$ may be higher than $r_{m→m}$ if the story and movie datasets differ in semantic coverage (line 217) or signal-to-noise ratio. Nonetheless, $r_{s→m}/r_{m→m}$ rarely exceeded 1 in voxels that were well-predicted by the vision encoding model (Figure 3).
>Line 215-223 discuss reasons why the r_movie→story were lower than r_story→movie…
As mentioned above, we tested the quality of the alignment matrices by holding out test pairs from the Flickr30k dataset. Alignment was much higher than chance level and fairly symmetric, so it is unlikely that the asymmetry observed in Figure 3 was due to differences in alignment quality.
Previous studies have observed a similar asymmetry [15], and we agree that characterizing the sources of this asymmetry is an important direction for future work.
>Line 229: By what metric are the "top" voxels selected?
We selected voxels using a cross-validation procedure (line 143). In each iteration, 20% of the timepoints were removed from the training dataset and reserved for validation. Models were estimated on the remaining 80% of the timepoints and used to predict responses for the reserved timepoints. The 10,000 voxels with the highest linear correlations were used for PCA. We will clarify this in the final paper.
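The voxel-selection procedure described above (hold out 20% of timepoints, fit on the rest, rank voxels by validation correlation) can be sketched as follows. This is a minimal, synthetic version of one cross-validation iteration: all sizes, the plain least-squares fit, and the variable names are assumptions, not the study's actual ridge-regression pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins (hypothetical sizes): stimulus features
# (timepoints x features) and voxel responses (timepoints x voxels).
# Only the first k voxels carry stimulus-driven signal.
T, F, V, k = 500, 20, 200, 50
X = rng.normal(size=(T, F))
true_W = np.zeros((F, V))
true_W[:, :k] = rng.normal(size=(F, k))
Y = X @ true_W + rng.normal(size=(T, V))

# Reserve 20% of the timepoints for validation; fit on the other 80%.
n_train = int(0.8 * T)
B, *_ = np.linalg.lstsq(X[:n_train], Y[:n_train], rcond=None)
pred = X[n_train:] @ B

# Score each voxel by the linear correlation between predicted and
# actual held-out responses, then keep the top-k voxels for PCA.
scores = np.array([np.corrcoef(pred[:, v], Y[n_train:, v])[0, 1]
                   for v in range(V)])
top_voxels = np.argsort(-scores)[:k]
print(np.sum(top_voxels < k))  # how many true signal voxels were kept
```

With well-predicted voxels clearly separated from noise voxels, the top-k selection recovers essentially all of the signal-carrying voxels.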
>Line 231: How can stimulus features be projected onto each PC…
Encoding model weights predict how a voxel responds to stimulus features. For instance, a voxel that responds to animal words will have weights that are aligned with the BridgeTower features of animal words. The only difference is that encoding model weights have 4 delays for each feature (line 137). We remove this temporal information from the weights before PCA by averaging across the four delays for each feature [2]. We will clarify this in the final paper.
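The delay-averaging and projection steps described above can be sketched in a few lines. This is a hedged illustration with random weights: the layout assumption (delays as the outer axis of the weight matrix) and all sizes are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical encoding weights with 4 delayed copies of each feature,
# shape (voxels, delays * features), assuming delays are the outer axis.
n_voxels, n_delays, n_feats = 100, 4, 30
W = rng.normal(size=(n_voxels, n_delays * n_feats))

# Average over the 4 delays so each voxel has one weight per feature.
W_avg = W.reshape(n_voxels, n_delays, n_feats).mean(axis=1)

# PCA on the delay-averaged weights: center, then SVD.
W_c = W_avg - W_avg.mean(axis=0)
_, _, Vt = np.linalg.svd(W_c, full_matrices=False)
pcs = Vt[:3]  # top 3 PCs, expressed in feature space

# Because the PCs live in the same feature space as the stimuli,
# stimulus features can be projected directly onto them.
stim = rng.normal(size=(10, n_feats))  # 10 stimulus feature vectors
proj = stim @ pcs.T
print(proj.shape)  # (10, 3)
```

The key point the rebuttal makes is visible in the shapes: after averaging out the delays, weights and stimulus features share one feature space, so the projection is a plain matrix product.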
---
Rebuttal Comment 1.1:
Title: Response to authors rebuttal
Comment: I thank the authors for their thorough response.
> Re: experiment described in section 5.1 and figure 2
I can at least believe that good transfer performance can only be due to multimodality. But I still believe that bad performance could also be due to poor (but out of the authors' control) alignment between vision and language feature spaces. When it comes to identifying multimodal ROIs, I think this could lead to some false negatives. But I don't think this is too great a weakness.
Otherwise, I have been convinced by this work's relevance to the NeurIPS community, and will increase the score to 7. |
Strengths: Investigating the multimodal processing of brain activity is an interesting problem. This paper presented a novel idea to develop brain encoding models from representations of different stimulus modality learned from a multimodal transformer and fMRI responses. It provides a new approach for analyzing the alignment of language and visual representations in the brain.
Weaknesses: The authors used a publicly available dataset containing fMRI responses from only five subjects, which is quite limited. Some of the discoveries, including brain regions for representation alignment, semantic dimensions, and the differences in cross-modality process, need to be tested or validated with experiment data.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Besides the score defined from correlation, what are some other possible measurements? What are the inter-subject differences?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations were not adequately addressed in the paper. One possible discussion could be about the data limitation, and future work should include more subjects, maybe from different cultural backgrounds speaking different languages.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and making these suggestions. We have addressed your concerns in order below:
>The authors used a publicly available dataset containing fMRI responses from only five subjects, which is quite limited. Some of the discoveries, including brain regions for representation alignment, semantic dimensions, and the differences in cross-modality process, need to be tested or validated with experiment data.
Although five subjects is low relative to some other human neuroimaging studies, this number is comparable to previous studies that model brain responses to naturalistic language [2, 6] and vision [1, 7, 8]. In many experiments involving naturalistic stimuli, a small number of subjects is scanned, but a large amount of brain data is collected for each subject. This allows for the replication of the effects in each individual subject (Appendix C), which can be more powerful than looking for average effects across a group [9]. However, we agree that having a larger and more diverse dataset is preferable, and we will discuss this limitation in the final paper.
All of the results of the study were based on data from fMRI experiments, so we would appreciate it if the reviewer could further clarify what it means to “test or validate them with experiment data”.
>Besides the score defined from correlation, what are some other possible measurements [for encoding model performance]?
Some studies quantify encoding model prediction performance using a classification task, in which encoding model predictions are used to identify which of two stimulus segments occurred at a given time [6, 10, 11]. However, linear correlation is the metric most commonly used to quantify encoding model prediction performance [2, 12, 13, 14]. We will reference the studies that use this metric in the final paper.
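The classification-based alternative mentioned above (a two-alternative identification task) can be sketched in a few lines. This is a hedged illustration with synthetic data, not the cited studies' exact procedure: `pred_true` / `pred_foil` stand in for encoding-model predictions of the correct and foil stimulus segments, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def two_afc_accuracy(pred_true, pred_foil, actual):
    # Per trial, pick the candidate prediction that correlates better
    # with the measured response; return the fraction of correct picks.
    correct = 0
    for p_t, p_f, a in zip(pred_true, pred_foil, actual):
        r_t = np.corrcoef(p_t, a)[0, 1]
        r_f = np.corrcoef(p_f, a)[0, 1]
        correct += bool(r_t > r_f)
    return correct / len(actual)

# Synthetic data (hypothetical sizes): measured responses are noisy
# versions of the true predictions; foil predictions are unrelated.
n_trials, n_voxels = 200, 50
pred_true = rng.normal(size=(n_trials, n_voxels))
pred_foil = rng.normal(size=(n_trials, n_voxels))
actual = pred_true + 0.8 * rng.normal(size=(n_trials, n_voxels))

acc = two_afc_accuracy(pred_true, pred_foil, actual)
print(acc)  # well above the 0.5 chance level for a working model
```

Unlike a raw correlation, this metric is bounded in [0, 1] with chance at 0.5, which is one reason some studies prefer it for reporting.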
>What are the inter-subject differences [for encoding model performance]?
Our flatmaps suggest that the effects that we observe are consistent across subjects (Appendix C). In Figures 2 and 4, we summarize variability across subjects using error bars. We will note how our results are similar and different across subjects in the final paper.
>The limitations were not adequately addressed in the paper. One possible discussion could be about the data limitation, and future work should include more subjects, maybe from different cultural backgrounds speaking different languages.
Thank you for raising this point. In the final paper, we will add a third discussion paragraph summarizing limitations, including the stimulus limitations discussed in Section 5.2 (line 217) and the dataset limitations discussed above.
---
Rebuttal Comment 1.1:
Title: Thanks for rebuttal
Comment: Dear authors,
Thanks for your rebuttal!
I appreciate the effort in clarifying and addressing my concerns and adding a discussion about limitations. Regarding the first point, “test or validate them with experiment data”: I mean that many discoveries made in this paper were derived from the limited fMRI dataset of five subjects. It would be better if the authors could provide more references or experimental results (if any) to support these discoveries.
Overall, I see the novelty and contribution of this work, and would still recommend "Accept". | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to carefully read our paper, and for providing detailed feedback. We are excited to see that many reviewers agree that our approach is novel and yields interesting insights, and we sincerely appreciate the concerns and suggestions raised by all reviewers. In response, we have addressed these points in the sections below for each individual reviewer, and will incorporate the corresponding changes in the final paper. We hope these changes will improve the clarity and robustness of our research.
We have attached a list of figures, which we reference in the rebuttals as R1, R2, R3, and R4.
We reference the following papers in the rebuttals:
[1] Huth, A. G., Nishimoto, S., Vu, A. T. & Gallant, J. L. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron 76, 1210–1224 (2012).
[2] Huth, A. G., de Heer, W. A., Griffiths, T. L., Theunissen, F. E. & Gallant, J. L. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532, 453–458 (2016).
[3] Popham, S. F. et al. Visual and linguistic semantic representations are aligned at the border of human visual cortex. Nat. Neurosci. 24, 1628–1636 (2021).
[4] Deniz, F., Nunez-Elizalde, A. O., Huth, A. G. & Gallant, J. L. The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality. J. Neurosci. 39, 7722–7736 (2019).
[5] Wang, A. Y., Kay, K., Naselaris, T., Tarr, M. J. & Wehbe, L. Incorporating natural language into vision models improves prediction and understanding of higher visual cortex. bioRxiv 2022.09.27.508760 (2022) doi:10.1101/2022.09.27.508760.
[6] Wehbe, L. et al. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PLoS One 9, e112575 (2014).
[7] Shen, G., Horikawa, T., Majima, K. & Kamitani, Y. Deep image reconstruction from human brain activity. PLoS Comput. Biol. 15, e1006633 (2019).
[8] Allen, E. J. et al. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat. Neurosci. 25, 116–126 (2022).
[9] Braga, R. M. & Buckner, R. L. Parallel Interdigitated Distributed Networks within the Individual Estimated by Intrinsic Functional Connectivity. Neuron 95, 457–471.e5 (2017).
[10] Mitchell, T. M. et al. Predicting human brain activity associated with the meanings of nouns. Science 320, 1191–1195 (2008).
[11] Toneva, M. & Wehbe, L. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). in Advances in Neural Information Processing Systems (2019).
[12] Jain, S. & Huth, A. Incorporating Context into Language Encoding Models for fMRI. in Advances in Neural Information Processing Systems (2018).
[13] Schrimpf, M. et al. The neural architecture of language: Integrative modeling converges on predictive processing. Proc. Natl. Acad. Sci. U. S. A. 118, e2105646118 (2021).
[14] Caucheteux, C. & King, J.-R. Brains and algorithms partially converge in natural language processing. Commun. Biol. 5, 134 (2022).
[15] Lin, S., Sprague, T., & Singh, A. K. Mind reader: Reconstructing complex images from brain activities. in Advances in Neural Information Processing Systems (2022).
[16] Hasson, U., Malach, R. & Heeger, D. J. Reliability of cortical activity during natural stimulation. Trends Cogn. Sci. 14, 40–48 (2010).
[17] Nishimoto, S. et al. Reconstructing visual experiences from brain activity evoked by natural movies. Curr. Biol. 21, 1641–1646 (2011).
[18] Tang, J., LeBel, A., Jain, S. & Huth, A. G. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat. Neurosci. 26, 858–866 (2023).
[19] Naselaris, T., Kay, K. N., Nishimoto, S. & Gallant, J. L. Encoding and decoding in fMRI. Neuroimage 56, 400–410 (2011).
[20] Jain, S., Vo, V. A., Wehbe, L. & Huth, A. G. Computational language modeling and the promise of in silico experimentation. Neurobiol. Lang. (Camb.) doi:10.1162/nol_a_00101/114613.
[21] Haxby, J. V. et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430 (2001).
[22] Kay, K. N., Naselaris, T., Prenger, R. J. & Gallant, J. L. Identifying natural images from human brain activity. Nature 452, 352–355 (2008).
[23] Huth, A. G. et al. Decoding the Semantic Content of Natural Movies from Human Brain Activity. Front. Syst. Neurosci. 10, 81 (2016).
[24] Yamins, D. L. K. et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. U. S. A. 111, 8619–8624 (2014).
[25] Güçlü, U. & van Gerven, M. A. J. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience 35, 10005–10014 (2015).
[26] Liu, Y. et al. KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation. arXiv [cs.CV] (2021).
Pdf: /pdf/cdedde487af082d7389e3142287fd76366bfd38f.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper studies whether we can use multi-modal transformers to predict brain activity. Specifically, it uses neural nets' encodings of language to predict the brain's activity while watching videos, and neural nets' encodings of images to predict the brain's activity while listening to stories. The finding is that these prediction models are statistically significant, so multi-modal transformers may resemble the underlying mechanisms of brains.
Strengths:
Understanding how neural nets are related to real brains has been a central question in deep learning, so this paper makes an important addition to the literature. While prior works already used similar frameworks (using transformer encodings to predict MRI data/images), this work still seems quite impressive because it needs to work around quite a few kinks, such as aligning the visual and text encodings and carefully designing the statistical experiments to confirm the models' significance, because the r2/r values are fairly small.
Weaknesses: I feel the paper sets a very high bar for the audience, i.e., it requires one to have sufficient knowledge of transformers, MRI, and statistical analysis (multiple hypothesis testing). I don't know what to suggest to fix the problem, as the paper is already very carefully written. Maybe Appendices A and B could give more background material on MRI and the statistical tools used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. How can one make sense of the very weak correlation results? Like, there is some statistical evidence that transformers resemble the brain, but transformers are only < 1% like brains?
2. Do those t(4) mean t-stat/z-score? Do we need any kind of serial adjustment when one calculates these scores? Because I imagine the input and/or response could potentially be autocorrelated. I guess some bootstrapping machinery is designed to address/alleviate this issue?
3. Why the correlation numbers in Fig 3 & 5 are much higher than those in Fig. 2? I guess I completely missed a fraction of the paper…
4. Does it still make sense to do a PCA analysis when a model's r2 is already that low? Some t-statistics seem to drop to 2-ish (so should we interpret them as significant or not?).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and making these suggestions. We have addressed your concerns in order below:
>I feel the paper sets a very high bar for the audience, i.e., it requires one to have sufficient knowledge of transformers, MRI, and statistical analysis (multiple hypothesis testing). I don't know what to suggest to fix the problem, as the paper is already very carefully written. Maybe Appendices A and B could give more background material on MRI and the statistical tools used.
We used an fMRI dataset and an encoding model framework that have been described in several previous studies [1, 2, 3]. As a result, we focused our methods sections on explaining the aspects of our study that differ from previous studies. In the final paper, we will emphasize how our study builds on previous studies. We will also describe the fMRI dataset and the encoding model framework in more detail in Appendices A and B.
>How can one make sense of the very weak correlation results? Like there are some statistical evidence that transformers resemble the brain but transformers are only < 1% like brains?
Figure 2b quantifies encoding model performance by taking the average correlation across all cortical voxels. Many fMRI studies select a subpopulation of voxels with an independent localizer task that isolates the cognitive effect of interest. However, we do not believe there is an effective multimodal localizer that would select all voxels that would represent the semantic dimensions of interest. Averaging across cortex is therefore an unbiased way to compare the different encoding models. However, averaging across cortex does yield conservatively low correlation values, since there are many voxels (e.g. those in motor cortex) which are not involved in story or movie perception. Figure 2a shows that many regions involved in story or movie perception have correlations greater than 0.1, and Figure 3b shows that these correlations can reach 0.5 for $story→movie$ transfer and 0.2 for $movie→story$ transfer. In the final paper, we will emphasize that Figure 2b reports average correlation across cortex, which is a conservative but unbiased metric.
We will also note that we evaluated encoding models on single-trial fMRI responses, whereas many previous studies evaluated encoding models on fMRI responses averaged across multiple repeats of the same stimulus. Evaluating on single-trial responses to a large number of stimuli provides high semantic coverage. However, averaging across repeats will increase the signal-to-noise ratio of the data and lead to higher correlations than those reported in our study.
Thank you for bringing up this point, and we hope that clarifying the correlation results will strengthen our paper.
>Do those t(4) mean t-stat/z-score? Do we need any kind of serial adjustments when one calculate these scores because I imagine input and/or response could potentially be correlated. I guess some bootstrapping machinaries are designed to address/alleviate this issue?
Yes, $t$(4) indicates a t-test with 4 degrees of freedom. These tests were conducted across 5 individuals, so there is no autocorrelation that needs to be accounted for.
In analyses where we computed significance within each voxel (e.g. to obtain the voxel masks used in Figure 3) we used a blockwise permutation test to account for the autocorrelation in the responses (Appendix A.1). We will describe this process in more detail (line 158) in the final paper.
>Why the correlation numbers in Fig 3 & 5 are much higher than those in Fig. 2? I guess I completely missed a fraction of the paper…
As mentioned above, Figure 2b reports average correlation across cortex, while the histograms in Figures 3b and 5b report correlations for individual cortical locations. Further, the histograms in Figures 3b and 5b are restricted to voxels with statistically significant $target→target$ scores. In the final paper, we will emphasize that Figure 2b reports average correlation across cortex.
>Do it still make sense to do PCA analysis when a model’s r2 is already that low? Some t-stat seems to drop to 2-ish (so shall we interpret them as significant or not?).
We performed PCA using the top 10,000 voxels for each subject, which have an average correlation of 0.045. This approach is consistent with what has been done in previous studies [1, 2, 4, 5]. Furthermore, we found principal components that are consistent with those found in these previous studies.
All of the reported t-tests were statistically significant after correcting for multiple comparisons. The different t-tests correspond to different analyses (the first test compares multimodal transfer performance before and after correcting for negative correlations, whereas the second test compares multimodal transfer performance to unimodal transfer performance), so the t-statistics are not comparable across tests. | null | null | null | null | null | null |
Class-Distribution-Aware Pseudo-Labeling for Semi-Supervised Multi-Label Learning | Accept (poster) | Summary: The paper proposes a class-distribution-aware method to deal with the semi-supervised multi-label learning (SSMLL) problem. The main motivation is that conventional pseudo-labeling methods cannot be applied to SSMLL scenarios, since each instance may be assigned more than one ground-truth label. To solve this challenge, the paper proposes a class-aware pseudo-labeling strategy, which avoids estimating the number of true labels for each instance. To capture the class distribution of unlabeled data, the authors propose a regularized learning framework that determines the pseudo-labels by exploiting the estimated class distribution of labeled data.
Strengths: The motivation of this paper is clear. Although pseudo-labeling is a commonly used strategy in the single-label case, it cannot be directly applied to the multi-label case due to the unknown label count.
The proposed method is intuitive and effective. Authors propose the class-aware pseudo-labeling strategy to cleverly avoid the estimation of the label count for each instance. Consequently, the problem has been transformed to the task of estimating the class distribution of unlabeled data, which can be easily solved by the proposed class-distribution-aware thresholding method.
Theoretical and empirical studies comprehensively verify the effectiveness of the proposed method. Specifically, the paper studies the correctness of the estimated class distributions theoretically and conducts comparisons with state-of-the-art methods.
Weaknesses: It seems that only one threshold can determine the positive and negative pseudo-labels for each class. It is suggested to explain the reasons why two thresholds are used in the paper.
It seems that the paper did not discuss how to set the parameter of ASL.
There are some language mistakes. The authors are encouraged to proofread the paper carefully.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The paper considers the estimation of the class distribution of unlabeled data. Is there any other factor that can affect the pseudo-labeling performance?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We are glad that you considered our work “well-structured, easy to follow”, and we are happy to answer all your questions.
**Q1:** It seems that only one threshold can determine the positive and negative pseudo-labels for each class. It is suggested to explain the reasons why two thresholds are used in the paper.
**A1:** By utilizing two thresholds, our goal is to identify the positive and negative pseudo-labels with high confidence, which guarantees the reliability of the pseudo-labels. By training on reliable pseudo-labels, the model achieves favorable performance.
**Q2:** It seems that the paper did not discuss how to set the parameter of ASL.
**A2:** Thank you for your suggestions. Since the ASL loss function is not our contribution, we followed the settings in the original paper to use the default values for all parameters.
**Q3:** There are some language mistakes. The paper is suggested to proofread the paper carefully.
**A3:** Thank you for your suggestions. We will carefully check the paper and correct the writing mistakes in the revised version.
**Q4:** Is there any other factor that can affect the pseudo-labeling performance?
**A4:** In general, the performance of pseudo-labeling depends mainly on two factors, i.e., the quality of the model predictions and the correctness of the estimated class distribution. We will consider improving the quality of the model predictions in the future version. | Summary: This work studies pseudo-labeling for multi-label semi-supervised learning. Differing from the traditional instance-aware pseudo-labeling methods, they propose to assign pseudo-labels to unlabeled data in a class-aware manner to capture the true class distribution of the unlabeled data. This work proposes CAT strategy to estimate the class distribution. It proves it is a desirable estimation by performing an analysis of the correctness of the estimation and providing the generalization error bound for CAP.
Strengths: This paper is well-structured and easy to follow. The methods are clearly presented in Section 3. They also provide a theoretical analysis in Section 4 of the proposed method. Solid experiments are conducted to show the effectiveness of the proposed methods. Multi-label classification is an important task, and pseudo-labeling for this scenario will be helpful to this community.
Weaknesses: * The key idea relies on the observation that the class proportions of positive and negative labels in labeled examples can tightly approximate the true class distribution. I am concerned whether the assumption relying on this observation will still hold if the unlabeled data have a totally different distribution. Therefore, it would be helpful to show some examples where there is a distribution mismatch.
* The technical novelty seems not that clear. It would be great if the authors could re-state and present the novelty of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Adsh and FreeMatch are designed for traditional SSL, how do the authors implement these two algorithms for the SSMLL setting?
2. This method can be used widely. I would reconsider my rating if the authors are willing to make the codebase public.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: They do not claim/discuss any limitation in their main manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We are glad that you considered our work “well-structured, solid experiments, helpful to the community”, and we are happy to answer all your questions.
**Q1:** The key idea relies on the observation that the class proportions of positive and negative labels in labeled examples can tightly approximate the true class distribution.
**A1:** Many semi-supervised learning (SSL) methods have explicitly or implicitly adopted this assumption. For example, the early work [1] presented expectation regularization, which encourages the marginal distribution of model predictions on unlabeled data to match the marginal distribution of the ground-truth labels estimated from the training examples. Many recent works [2-4] have applied this idea in deep semi-supervised learning. Compared with SSL, which has achieved great advances, semi-supervised multi-label learning (SSMLL) (in the context of deep learning) is still in its nascent period of development. Therefore, our paper focuses on the standard setting where labeled and unlabeled examples follow the same distribution.
According to your suggestion, to show the influence of the label shift on the final performance of our method, following the works [5-6], we draw $P(y)$ from a Dirichlet distribution with concentration $\alpha$. We define the degree of label shift $\Delta=\sum_{k=1}^q|\hat\gamma_k-\gamma_k^*|$ between the labeled and unlabeled data. By using different values of $\alpha$, the degree of label shift $\Delta$ could be varied. From Table 1, we can see that the performance of CAP degrades slightly as the degree of label shift increases significantly. For example, when $p=0.1$ and $\alpha=1$ (the degree of label shift has increased by ten times), the performance of CAP is still better than the comparing methods in Table 1 in the paper.
We will add a more detailed discussion of this point in the revised version.
Table 1. mAP of CAP under different degrees of label shift. * denotes the result without introducing the label shift.
| | $\alpha=1$ | $\alpha=10$ | $\alpha=100$ | * |
| :-------: | :---: | :---: | :---: | :-------: |
| $\Delta$ ($p$ = 0.1) | 1.345 | 0.904 | 0.940 | 0.136 |
| mAP ($p$ = 0.1) | 66.17 | 66.86 | 66.74 | **67.36** |
| $\Delta$ ($p$ = 0.2) | 1.428 | 0.998 | 1.019 | 0.098 |
| mAP ($p$ = 0.2) | 68.86 | 69.38 | 69.31 | **70.41** |
Recently, several methods have been developed specifically to solve the label shift problem [5, 6], where the class marginal distributions of the labeled and unlabeled data are different. This is really an interesting future direction of our work. To solve the label shift problem in the SSMLL scenario, a straightforward strategy is to estimate the class distribution of the unlabeled data using the well-established method [5, 6], and then apply CAP to obtain pseudo-labels.
[1] Simple, Robust, Scalable Semi-Supervised Learning via Expectation Regularization. ICML’07
[2] ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring. ICLR’20
[3] CoMatch: Semi-supervised Learning with Contrastive Graph Regularization. CVPR’20
[4] AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. ICLR’22
[5] Detecting and Correcting for Label Shift with Black Box Predictors. ICML’18
[6] LTF: A Label Transformation Framework for Correcting Target Shift. ICML’20
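The simulation protocol in A1 (drawing $P(y)$ from a Dirichlet and measuring the shift degree $\Delta$) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper names and the exact Dirichlet parametrization are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_shift_degree(gamma_hat, gamma_star):
    """Degree of label shift: Delta = sum_k |gamma_hat_k - gamma_star_k|."""
    return float(np.abs(np.asarray(gamma_hat) - np.asarray(gamma_star)).sum())

def shifted_class_prior(gamma_star, alpha):
    """Draw a shifted class prior P(y) from a Dirichlet centered on gamma_star.

    The concentration alpha scales the Dirichlet parameters, so smaller
    alpha gives noisier draws and hence a larger expected Delta.
    """
    gamma_star = np.asarray(gamma_star, dtype=float)
    return rng.dirichlet(alpha * gamma_star * gamma_star.size)
```

Varying `alpha` (1, 10, 100) then yields the different degrees of shift reported in the table above.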
**Q2:** The technical novelty seems not that clear. It would be great if the authors could re-state and present the novelty of the proposed approach.
**A2:** Thank you for your suggestion. In SSMLL scenarios, conventional pseudo-labeling methods can hardly work when handling instances associated with multiple labels and an unknown label count. Our paper proposes a Class-Aware Pseudo-labeling (CAP) method to assign pseudo-labels in a class-aware manner, which is free of estimating the label count for each instance. This allows us to transform a hard problem, i.e., estimating the label count for each unlabeled instance, into a much easier problem, i.e., estimating the class distribution of unlabeled data. We further propose a class-distribution-aware thresholding strategy to reliably separate positive and negative pseudo-labels based on the estimated class proportions of labeled data, which tightly approximate the true class proportions. We will re-organize the Introduction section based on the above discussion to highlight our technical novelty.
**Q3:** Adsh and FreeMatch are designed for traditional SSL, how do the authors implement these two algorithms for the SSMLL setting?
**A3:** In our experiments, we compared our method with SSMLL methods and MLML (Multi-label Learning with Missing Labels) methods. To further validate the effectiveness of the proposed method, we additionally compared our method with SSL methods. As mentioned in lines 283-284, to achieve better performance, we made several modifications to the SSL methods: 1) use the ASL loss (also used in our method); 2) apply the same data augmentations as our method, including RandAugment and Cutout; 3) change the training strategy to make it more suitable for the multi-label scenario, specifically, employing the AdamW optimizer and a one-cycle policy scheduler. Furthermore, we also tuned the hyperparameters of these methods to achieve better performance.
**Q4:** This method can be used widely. I would reconsider my rating if the authors are willing to make the codebase public.
**A4:** We have submitted the source code of our method as supplementary material. We promise to make the code publicly available as soon as the paper is accepted.
**Q5:** About the limitation of the paper.
**A5:** Due to character limit, please refer to the beginning of the rebuttal. | Summary: This paper proposes a Class-Aware Pseudo-labeling (CAP) method to solve semi-supervised multi-label learning (SSMLL) problem by controlling the assignment of positive and negative pseudo-labels for each class through a class-distribution-aware thresholding (CAT) strategy.
Strengths: 1. This paper proposes a pseudo-labeling strategy for semi-supervised multi-label learning in a class-aware manner.
2. This paper is easy to read and well organized.
3. The experimental results and the proof of Theorems in this paper seem to be solid.
Weaknesses: 1. Lack of some references. This paper is devoted to solving the SSMLL problem. Therefore, the authors should review and compare some SSMLL methods and more SSL methods. For example,
[1] Rizve et al., In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning, ICLR, 2021.
[2] Xu et al., Dash: Semi-supervised learning with dynamic thresholding, ICML, 2021.
[3] Zhang et al., Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling, NeurIPS, 2021.
2. What is the motivation for designing the operation $\tau(\alpha_{k})=\exp(-\alpha_{k})$? This does not seem to be mentioned in this paper.
3. The Class-Distribution-Aware Thresholding is not clear enough. Also, it is suggested that the authors give a complete methodological illustration of CAP and CAT for each class to facilitate the readers' understanding.
4. The random seed in this paper is set to 1 for all experiments. Following the SSL setting, it is necessary to report the mean value with different random seeds as the final performance.
5. The authors should check if the title of the figure corresponds to the content. For example, Figure 2 is titled OF1 score while the results are CF1 scores. In addition, it is suggested to give a number for each formula.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of this paper should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your great effort in reviewing our paper. We are glad that you considered our work “easy to read, well-organized, solid”, and we are happy to answer all your questions.
**Q1:** Lack of some references. This paper is devoted to solving the SSMLL problem. Therefore, the authors should review and compare some SSMLL methods and more SSL methods.
**A1:** In our paper, we have compared the proposed method with DRML, the only SSMLL method we found, two recent SSL methods, Adsh (2022) and FreeMatch (2023), as well as multiple MLML (Multi-label Learning with Missing Labels) methods. We will add more SSL competitors in the future version. We will also improve the Related Works section according to your suggestions and add the mentioned relevant works to the revised version as follows.
The relevant works [1-3] will be added to the third paragraph of the Related Works section:
To improve the reliability of pseudo-labels, an uncertainty-aware pseudo-labeling method proposed in [1] selected reliable pseudo-labels based on the prediction uncertainty.
Unlike FixMatch that selects unlabeled examples with a fixed threshold, Dash [2] selected unlabeled examples with a dynamic threshold, with the goal of achieving better pseudo-labeling performance.
FlexMatch [3] was proposed to select unlabeled examples for every class according to the current learning status of the model.
[1] In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning, ICLR, 2021.
[2] Dash: Semi-supervised learning with dynamic thresholding, ICML, 2021.
[3] Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling, NeurIPS, 2021.
**Q2:** What is the motivation for designing the operation $\tau(\alpha_k)=\exp(-\alpha_k)$?
**A2:** To solve Eq. (7), according to [1], we obtain the closed-form solution $\hat{y}_k=1$ if $-\log(f_k(x_i))\leq\alpha_k$, that is, $\hat{y}_k=1$ if $f_k\geq\exp(-\alpha_k)$. For notational simplicity, we denote $\tau(\alpha_k)=\exp(-\alpha_k)$. We will include the above discussion in the revised version.
[1] Self-Paced Learning for Latent Variable Models. NeurIPS 2010.
**Q3**: It is not enough clear about the Class-Distribution-Aware Thresholding. Also, it is suggested that the author gives a complete methodological illustration about CAP and CAT for each class to facilitate the readers' understanding.
**A3:** Thank you for your suggestion. As illustrated in Figure 1(a), Class-aware Pseudo-labeling (CAP) is a pseudo-labeling framework that differs significantly from conventional pseudo-labeling (Instance-Aware Pseudo-labeling, IAP). IAP performs pseudo-labeling for every instance (row), while CAP performs pseudo-labeling for every class (column). This implies that IAP needs to know in advance the label count for every unlabeled instance, while CAP only needs to know the class proportions of unlabeled data. For IAP, Figure 1(a) lists three existing pseudo-labeling methods (Top-1, Top-k, instance-aware thresholding). Unfortunately, these methods cannot capture the true label count precisely. For our CAP, we propose the class-distribution-aware thresholding, which aims to use the estimated class proportions of labeled data to approximate the true class proportions of unlabeled data. Figure 1(b) shows that even with a small number of labeled data (the labeled proportion $p=0.05$), the estimated and true class proportions highly agree with each other. We will add more notes and re-organize the caption of Figure 1 to make the illustration clearer in the revised version.
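To make the CAP-vs-IAP contrast above concrete, here is a minimal NumPy sketch of class-aware thresholding. It is an illustrative reconstruction, not the authors' implementation: the function name, the `-1` marker for unassigned entries, and the truncation-based per-class counts are all our assumptions.

```python
import numpy as np

def cap_pseudo_labels(probs, pos_props, neg_props):
    """Class-aware pseudo-labeling, column by column.

    probs:     (n, q) predicted probabilities on unlabeled data
    pos_props: (q,) estimated proportion of positive labels per class
    neg_props: (q,) estimated proportion of negative labels per class

    For each class k, the top pos_props[k] fraction of instances is
    pseudo-labeled positive and the bottom neg_props[k] fraction
    negative; everything in between is left unassigned (-1).
    """
    n, q = probs.shape
    labels = -np.ones((n, q), dtype=int)          # -1 = unassigned
    for k in range(q):
        order = np.argsort(-probs[:, k])          # indices by descending score
        n_pos = int(pos_props[k] * n)             # class-aware counts
        n_neg = int(neg_props[k] * n)
        labels[order[:n_pos], k] = 1
        if n_neg > 0:
            labels[order[n - n_neg:], k] = 0
    return labels
```

Each column is thresholded independently with its own estimated proportion, so no per-instance label count is ever needed, which is the key difference from instance-aware schemes.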
**Q4:** The random seed in this paper is set to 1 for all experiments.
**A4:** For the sake of easy reproducibility, following most works in multi-label scenarios, e.g., [1-3], we report experimental results with a fixed random seed. Such an experimental setting has been adopted in most of the multi-label learning literature. To make a fair comparison, the random seed was set to 1 for every comparing method.
[1] Multi-Label Image Recognition with Graph Convolutional Networks. CVPR'19.
[2] Multi-Label Learning from Single Positive Labels. CVPR'21.
[3] Asymmetric Loss for Multi-Label Classification. ICCV'21.
**Q5:** The authors should check if the title of the figure corresponds to the content. Figure 2 is titled OF1 score while the results are CF1 scores. In addition, it is suggested to give a number for each formula.
**A5:** Thank you for your suggestions. The title was mistakenly written as OF1. We will correct the title and number the formulas in the revised version.
**Q6:** The limitations of this paper should be discussed.
**A6:** In general, the performance of pseudo-labeling depends mainly on two factors, i.e., the quality of the model predictions and the correctness of the estimated class distribution. Our work focuses on the latter. It is a promising future direction to boost the pseudo-labeling performance by improving the quality of the model predictions. A straightforward method is to use dual networks, which can prevent errors from accumulating on a single network, thus achieving better generalization performance. We will include a more detailed discussion in the revised version.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed responses
Comment: After the above responses, two major concerns remain:
1. This paper aims at semi-supervised multi-label learning, and all experimental results are reported over a single run. However, I am concerned about the robustness of the proposed method. Following the SSL setting, it is necessary to perform experiments with different random seeds, which have a great effect on randomly dividing the training data into labeled and unlabeled sets. Besides, UPS [1] is the first to explore the semi-supervised multi-label learning problem. Unfortunately, there is no discussion of or comparison with it.
[1] In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning, ICLR, 2021.
2. According to the solution in Section 3.3, the $\tau(\alpha_k)$ and $\tau(\beta_k)$ are calculated by the proportions of positive and negative labels in labeled data, respectively. However, does this strategy still work for cross-domain unlabeled data in the open world?
---
Reply to Comment 1.1.1:
Title: Thanks for your further response!
Comment: We hope that our response could address your concerns, and we are willing to provide further clarification if necessary.
**About repeated experiments and the comparison with UPS**
Table 1 reports the comparison results between CAP and the comparing methods. Due to the time limit, we selected two strong competitors from the comparing methods in the original paper. According to your suggestion, we also include the SSL method UPS in the comparison. From the table, it can be observed that the performance of our method is significantly better than that of the comparing methods, especially when the number of labeled data is small. We will include these results and the discussion of UPS in the revised version.
Table 1. Mean and standard deviation of mAP over 5 runs (random seed = 1, 2, 3, 4, 5).
| | VOC ($p$=0.05) | VOC ($p$=0.1) | VOC ($p$=0.15) | VOC ($p$=0.2) | COCO ($p$=0.05) | COCO ($p$=0.1) | COCO ($p$=0.15) | COCO ($p$=0.2) |
| ---- | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| CAP | 77.15 ± 0.58 | 82.54 ± 0.20 | 83.95 ± 0.24 | 85.04 ± 0.32 | 63.11 ± 0.35 | 67.96 ± 0.32 | 69.92 ± 0.41 | 71.23 ± 0.42 |
| IAT | 74.78 ± 0.82 | 81.11 ± 0.46 | 82.91 ± 0.30 | 84.38 ± 0.44 | 60.82 ± 0.26 | 66.13 ± 0.30 | 68.50 ± 0.32 | 69.94 ± 0.35 |
| PLC | 75.61 ± 0.67 | 81.58 ± 0.62 | 83.27 ± 0.47 | 84.48 ± 0.57 | 60.20 ± 0.18 | 65.60 ± 0.29 | 68.32 ± 0.35 | 69.85 ± 0.35 |
| UPS | 76.25 ± 1.41 | 81.23 ± 0.95 | 82.92 ± 0.67 | 83.71 ± 0.61 | 59.16 ± 0.31 | 64.54 ± 0.23 | 66.87 ± 0.22 | 68.29 ± 0.18 |
**About the label shift problem**
Many semi-supervised learning (SSL) methods have explicitly or implicitly adopted the assumption that the labeled and unlabeled examples follow the same distribution. For example, the early work [1] presented expectation regularization, which encourages the marginal distribution of model predictions on unlabeled data to match the marginal distribution of the ground-truth labels estimated from the training examples. Many recent works [2-4] have applied this idea in deep semi-supervised learning. Compared with SSL, which has achieved great advances, semi-supervised multi-label learning (SSMLL) (in the context of deep learning) is still in its nascent period of development. Therefore, our paper focuses on the standard setting where labeled and unlabeled examples follow the same distribution.
To show the influence of the label shift on the final performance of our method, following the works [5-6], we draw $P(y)$ from a Dirichlet distribution with concentration $\alpha$. We define the degree of label shift $\Delta=\sum_{k=1}^q|\hat\gamma_k-\gamma_k^*|$ between the labeled and unlabeled data. By using different values of $\alpha$, the degree of label shift $\Delta$ could be varied. From Table 1, we can see that the performance of CAP degrades slightly as the degree of label shift increases significantly. For example, when $p=0.1$ and $\alpha=1$ (the degree of label shift has increased by ten times), the performance of CAP is still better than the comparing methods in Table 1 in the paper.
We will add a more detailed discussion of the assumption in the revised version.
Table 1. mAP of CAP under different degrees of label shift on COCO. * denotes the result without introducing the label shift.
| | $\alpha=1$ | $\alpha=10$ | $\alpha=100$ | * |
| :-------: | :---: | :---: | :---: | :-------: |
| $\Delta$ ($p$ = 0.1) | 1.345 | 0.904 | 0.940 | 0.136 |
| mAP ($p$ = 0.1) | 66.17 | 66.86 | 66.74 | **67.36** |
| $\Delta$ ($p$ = 0.2) | 1.428 | 0.998 | 1.019 | 0.098 |
| mAP ($p$ = 0.2) | 68.86 | 69.38 | 69.31 | **70.41** |
Recently, several methods have been developed specifically to solve the label shift problem [5, 6], where the class marginal distributions of the labeled and unlabeled data are different. This is really an interesting future direction of our work. To solve the label shift problem in the SSMLL scenario, a straightforward strategy is to estimate the class distribution of the unlabeled data using the well-established method [5, 6], and then apply CAP to obtain pseudo-labels.
[1] Simple, Robust, Scalable Semi-Supervised Learning via Expectation Regularization. ICML’07
[2] ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring. ICLR’20
[3] CoMatch: Semi-supervised Learning with Contrastive Graph Regularization. CVPR’20
[4] AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. ICLR’22
[5] Detecting and Correcting for Label Shift with Black Box Predictors. ICML’18
[6] LTF: A Label Transformation Framework for Correcting Target Shift. ICML’20 | Summary: This paper introduces a novel method called Class-Aware Pseudo-Labeling (CAP) to address the challenges of semi-supervised multi-label learning (SSMLL). Traditional pseudo-labeling methods struggle with instances associated with multiple labels and an unknown label count, often leading to the introduction of false positive labels or the omission of true positive ones. The CAP method overcomes these issues by performing pseudo-labeling in a class-aware manner. It introduces a regularized learning framework that incorporates class-aware thresholds, effectively controlling the assignment of positive and negative pseudo-labels for each class. The paper highlights that even with a small proportion of labeled examples, the estimated class distribution can serve as a reliable approximation. This observation led to the development of a class-distribution-aware thresholding strategy to align the pseudo-label distribution with the true distribution. The paper provides theoretical verification of the correctness of the estimated class distribution and offers a generalization error bound for the proposed method. Extensive experiments on multiple benchmark datasets confirm the effectiveness of the CAP method in addressing the challenges of SSMLL problems.
Strengths: 1. The Class-Aware Pseudo-Labeling (CAP) method presents a significant innovation in the field of semi-supervised multi-label learning. Its unique approach of performing pseudo-labeling in a class-aware manner addresses the limitations of traditional pseudo-labeling methods, which often struggle with instances associated with multiple labels and an unknown label count.
2. The authors provide a theoretical verification of the correctness of the estimated class distribution, which is a crucial aspect of the Class-Aware Pseudo-Labeling (CAP) method. This theoretical grounding not only validates the method's approach but also enhances its credibility and reliability. Furthermore, the authors offer a generalization error bound for the proposed method. This is an important contribution as it quantifies the expected performance of the CAP method and provides a measure of its robustness.
3. The performance of the Class-Aware Pseudo-Labeling (CAP) method across all datasets and settings in the experiments is a strength of this paper. The fact that the proposed method consistently achieves optimal results demonstrates its effectiveness and robustness.
Weaknesses: The paper does not provide a clear explanation for the choice of using an exponential transformation in line 167 on the threshold value for each class in the Class-Aware Pseudo-Labeling (CAP) method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In line 167 of the paper, the paper mention the use of an exponential transformation on the threshold value for each class in the Class-Aware Pseudo-Labeling (CAP) method. Why not just use a number as the threshold? Could the authors provide more details on the rationale behind this choice?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your appreciation of our paper. We are glad that you considered our work “novel, a significant innovation, an important contribution”, and we are happy to answer all your questions.
**Q1:** The paper does not provide a clear explanation for the choice of using an exponential transformation in line 167 on the threshold value for each class in the Class-Aware Pseudo-Labeling (CAP) method.
**A1:** Considering the BCE loss ($\ell_1(f_k)=-\log(f_k)$ and $\ell_0(f_k)=-\log(1-f_k)$), to solve Eq. (7), according to [1], we obtain the closed-form solution $\hat{y}_k=1$ if $-\log(f_k(x_i))\leq\alpha_k$; $(1-\hat{y}_k)=1$ if $-\log(1-f_k(x_i))\leq\beta_k$. With simple computations, we have $\hat{y}_k=1$ if $f_k\geq\exp(-\alpha_k)$; $\hat{y}_k=0$ if $f_k\leq 1-\exp(-\beta_k)$. For notational simplicity, we denote $\tau(\alpha_k)=\exp(-\alpha_k)$ and $\tau(\beta_k)=1-\exp(-\beta_k)$. In the submitted paper, we mistakenly wrote $\tau(\beta_k)=\exp(-\beta_k)$. We will correct this writing error in the revised version.
[1] Self-Paced Learning for Latent Variable Models. NeurIPS 2010.
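For completeness, the algebra in A1 can be displayed as follows (a restatement of the steps above, assuming the BCE losses $\ell_1$ and $\ell_0$ given there):

```latex
\begin{aligned}
\hat{y}_k = 1 &\iff -\log f_k(x_i) \le \alpha_k
              \iff f_k(x_i) \ge e^{-\alpha_k} =: \tau(\alpha_k),\\
\hat{y}_k = 0 &\iff -\log\bigl(1 - f_k(x_i)\bigr) \le \beta_k
              \iff f_k(x_i) \le 1 - e^{-\beta_k} =: \tau(\beta_k).
\end{aligned}
```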
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Your clarifications have addressed my concerns. | Rebuttal 1:
Rebuttal: **About the limitation of the paper** (to reviewers xWdd, tMT4)
In general, the performance of pseudo-labeling depends mainly on two factors, i.e., the quality of the model predictions and the correctness of the estimated class distribution. Our work focuses on the latter. It is a promising future direction to boost the pseudo-labeling performance by improving the quality of the model predictions. A straightforward method is to use dual networks, which can prevent errors from accumulating on a single network, thus achieving better generalization performance. We will include a more detailed discussion in the revised version. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Robust Representation Learning via Asymmetric Negative Contrasting and Reverse Attention | Reject | Summary: This paper empirically shows previous supervised adversarial training methods have two shortcomings: (1) The features of the natural examples and those from other classes are not distinguishable and (2) the features of the natural and adversarial examples are not aligned. To mitigate these two issues, the authors propose a regularization to push away the features from other classes and freeze the natural examples and a reverse attention mechanism that increases the weight of the target class. The empirical results seem to validate the effectiveness of the proposed method.
Strengths: 1. The paper is well-written and can be easily followed.
2. The proposed method is well-motivated. The empirical study on the feature spaces is interesting and inspiring.
3. The authors conducted experiments on comprehensive datasets to support their claims.
Weaknesses: 1. Regarding the experiments, the authors only provide the results on ResNet18 and PreActResNet18, which is limited. I suggest the authors provide the results on the WideResNet and make a comparison with the current state-of-the-art performance listed in the RobustBench (https://robustbench.github.io/) to validate the effectiveness of the proposed method.
2. The authors do not provide the error bar to validate the significance of the results.
3. It is odd that the accuracies under PGD, FGSM, and C&W are higher than the natural accuracy achieved by ANCRA, as shown in Table 1. It would be better to provide some explanation for these abnormal results. It suggests the defence may suffer from obfuscated gradients. I suggest that the authors report results under adaptive attacks that use the auxiliary probability $p$ (including different $p$ during the testing phase) and under AutoAttack.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to the comments in Weaknesses and provide the responses.
Minor comments: A typo “negative pair (PP)” => “negative pair (NP)” in Line 64?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Though the authors claim to have discussed limitations, I did not find such a discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your approval of the strengths and your constructive suggestions. The answers to your questions are as follows:
**W1) More models needed and comparison experiments with RobustBench.**
**A1)** We have followed your advice and compared with the current state-of-the-art performance listed in RobustBench on ResNet18. The results are shown below. Compared with the methods without synthetic or extra data (i.e., [2] and [3]), our method has a robust accuracy higher than theirs by up to 7.0%. Our method even outperforms the method with synthetic data ([1]) in robustness. Though the clean accuracy of [1] exceeds ours by 2.2%-5.6%, the best robust performance indicates the effectiveness of our method. Limited by time, we cannot run more experiments on WideResNet. Experimental results on ResNet18 and PreActResNet18 have shown the superiority of our method in robustness. And our insights about robust feature learning are more important than the raw performance gains.
| Defense | [1] | [2] | [3] | PGD-AT-ANCRA | TRADES-ANCRA | MART-ANCRA |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Nat | 87.35 | 85.71 | 80.24 | 85.10 | 81.70 | 84.88 |
| AutoAttack | 58.50 | 52.48 | 51.06 | 59.15 | 59.70 | 59.62 |
[1] Sehwag, Vikash, et al. "Robust learning meets generative models: Can proxy distributions improve adversarial robustness?." arXiv preprint arXiv:2104.09425 (2021).
[2] Addepalli, Sravanti, and Samyak Jain. "Efficient and effective augmentation strategy for adversarial training." Advances in Neural Information Processing Systems 35 (2022): 1488-1501.
[3] Addepalli, Sravanti, et al. "Scaling adversarial training to large perturbation bounds." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
**W2) The error bar to validate the significance of the results.**
**A2)** We are sorry for this omission. We ran each experiment at least three times, and the variation of accuracy across all experiments is less than 1.7%, which is small compared with the magnitude of the improvement.
**W3) The explanation for robust accuracy higher than natural accuracy, and the issue of obfuscated gradients.**
**A3)** Reverse attention in our method pulls the features of natural and adversarial examples close to each other. As a result, it is normal that robust accuracy nearly equals clean accuracy under this alignment effect. Besides, our generation strategy based on targeted attacks crafts adversarial negative examples, which contain prior knowledge about adversarial noise. Thus, robustness against general white-box attacks benefits from this prior knowledge and can even exceed natural accuracy.
To address your concern about obfuscated gradients, we have tested the defense against EOT. EOT is a powerful adaptive attack designed to reveal whether a defense relies on obfuscated gradients. We set its step number to 100 and kept other parameters the same as for the white-box PGD we used. The robust accuracy against EOT is almost the same as the robustness against PGD-40 (83.13% $\approx$ 82.96%).
What's more, we also report the robustness of all the auxiliary probability vectors against normal white-box attacks and the adaptive attack in the table below. The robust accuracies of the different probability vectors are nearly identical. These two results indicate our method does not suffer from obfuscated gradients.
| | Nat | PGD | Adaptive PGD |
| ---- | ---- | ---- | ---- |
| Auxiliary probability vector $p^0$ | 81.81 | 83.52 | 62.25 |
| Auxiliary probability vector $p^1$ | 81.81 | 83.49 | 62.23 |
| Final probability vector $p'$ | 81.81 | 83.47 | 62.24 |
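For clarity, the EOT evaluation above can be sketched roughly as follows: EOT averages loss gradients over the model's internal randomness before each PGD step, so the attack is not misled by noisy gradients. The toy randomized linear model, mask probability, and step sizes below are illustrative assumptions, not the settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def eot_pgd_step(x, y, w, eps, alpha, n_samples=100, p_keep=0.5):
    """One PGD step with Expectation over Transformation (EOT).

    Toy randomized model: logit = (m * w) . x with a random binary mask m
    (a stand-in for any stochastic component of a defense). EOT averages
    the loss gradient over many mask draws before taking the step.
    """
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        m = rng.random(w.shape) < p_keep     # fresh random mask draw
        # loss = -y * logit, so d(loss)/dx = -y * (m * w)
        grad += -y * (m * w)
    grad /= n_samples
    # PGD: signed-gradient ascent on the loss, projected to the eps-ball
    x_adv = x + alpha * np.sign(grad)
    return np.clip(x_adv, x - eps, x + eps)

x = rng.normal(size=8)
w = rng.normal(size=8)
x_adv = eot_pgd_step(x, y=1.0, w=w, eps=0.03, alpha=0.01)
assert np.all(np.abs(x_adv - x) <= 0.03 + 1e-12)  # stays in the eps-ball
```

In our actual evaluation the averaged gradient comes from backpropagation through the full network rather than an analytic toy gradient.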
**Q1) Typos.**
**A4)** We are sorry for the typos and other writing errors in our paper. We will follow your and the other reviewers' suggestions to fix all of them, and will do our best to improve this in the revised version.
**L) The lack of limitations.**
**A5)** We will discuss limitations in the revised version. Details are as follows. Though reverse attention contributes to both generalization and robustness, its accuracy depends heavily on the intermediate predicted classes. This may cause degraded performance when faced with powerful attacks. We will study this further and hope to improve its robustness in future work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 2AVw,
We hope that our responses address your concerns and earn an increase in your rating. If you have any new questions, we will respond promptly. Thanks again for your time.
---
Rebuttal Comment 1.2:
Title: Need further clarification
Comment: Thanks for your response. It seems that adaptive PGD can yield a significantly higher attack success rate (according to A3). The authors should provide a comparison between the performance of the proposed method under adaptive AutoAttack (combining AutoAttack with the auxiliary probability) and the SOTA performance under AutoAttack. In addition, I still believe the results evaluated on ResNet-18 are limited. Conventionally, papers on improving adversarial robustness report the performance of WideResNet-28-10 or WideResNet-34-10 to validate the effectiveness of the proposed method. It is meaningful to validate whether the proposed method is effective on large models.
---
Reply to Comment 1.2.1:
Comment: We have conducted experiments on WideResNet-34-10 and WideResNet-28-10. As shown in the Table below, our method has made great enhancements in robust accuracy. The accuracies of our method against adaptive AutoAttack are even higher than those of baselines against AutoAttack (e.g., 51.99%>50.79%). This indicates its effectiveness on large models.
| Model | Method | NAT | PGD | Adaptive PGD | AutoAttack | Adaptive AutoAttack |
|------------------|--------------|-------|-------|--------------|------------|---------------------|
| WideResNet-34-10 | TRADES | 82.04 | 56.47 | \ | 50.79 | \ |
| WideResNet-34-10 | TRADES-ANCRA | 83.19 | 79.31 | 56.76 | 66.28 | 51.99 |
| WideResNet-28-10 | TRADES | 82.47 | 57.08 | \ | 51.11 | \ |
| WideResNet-28-10 | TRADES-ANCRA | 82.11 | 78.52 | 57.07 | 66.08 | 51.14 | | Summary: This paper focuses on robust feature learning by combining two approaches: Adversarial Contrastive Learning and Robust Feature Selection. Specifically, it defines two characteristics for features: exclusion and alignment. The authors aim to enforce exclusion through Asymmetric Negative Contrast (ANC), which ideally should separate different classes. On the other hand, they aim to achieve alignment through Reverse Attention (RA), which should enhance the model's robustness.
Strengths: The proposed method sounds interesting, especially the Reverse Attention approach. However, I have some concerns and feedback that will be provided in the later section of the review.
Weaknesses: Writing Style:
Overall, the quality of the writing style could be significantly improved. Reading the paper is not smooth, and understanding it requires reading back and forth several times. For instance:
1. Generally, the flow of the paper is not engaging, which requires the reader to go back and forth in order to understand it. And, there are some sentences that are vague and difficult to understand.
2. Caption: Overall, the caption writing is not good; it contains long sentences that are difficult to follow. Labels for each plot have not been provided. Specifically, in Figure 1, regarding the distance between natural examples and OEs, it would be preferable to show distances between each pair instead of comparing class 0 with others, as class 0 may share common features with some other classes. It would be more informative to have the following plots: 0-1, 0-2, ..., 0-9.
3. Typos: "perturbation -> perturbations" in line 28 | "which does harm to robust classification -> which harms robust classification" in line 35 | "... negative pair (PP) -> (NP)" in line 64 | "import feature -> important feature ..." in line 118 | "a example -> an example" in line 147 | ...
Content:
1. There are certain terms that are vague. For instance, what does "well-trained DNN" mean in line 20?
2. The definition of an Adversarial Example is not entirely correct. The perturbation itself may be visible, but when it is added to a natural image, both the adversarial example and the original image may not be easily distinguishable by human eyes. Furthermore, it is important to note that it is not the adversarial example itself to which perturbations are added. Rather, the perturbations are added to natural examples so that they cannot be correctly classified.
3. Description of Figure 1 in lines 45-47: It is not a Gaussian distribution; it is a bell-shaped plot, but not a Gaussian distribution. The main condition for a Gaussian distribution is that the area under it should be 1. And indeed, those numbers show a significant distance between classes: a cosine similarity below 0.5 should be large enough to indicate a difference between the representations of one class and another. Likewise, a cosine similarity between 0.9 and 0.99 indicates that the representations of NEs and AEs are already very close to each other! And comparing the AE plots with the OE plots, we see that AEs have representations much closer to their original examples than OEs do. Furthermore, if the representations of NEs and OEs were not significantly distant, models should confuse them, leading to lower clean accuracy. However, that is not the case.
4. The definition of OEs is not clear from the beginning of the paper. The reader should read the whole paper to figure out what they are, especially since different strategies are considered to select or generate them.
5. Similarly, the term 'partial parameters' is vague, and the reader doesn't understand what it refers to until the later sections of the paper, which confuses the reader.
Approach and Methodology:
1. It seems the main weakness is that ANC does not make significant contributions to the clean accuracy and robustness of the model, as indicated by the results in Table 3 of the ablation studies. Specifically, a model with only RA performs just as well as ANCRA. Despite this, a large portion of the paper is dedicated to introducing and explaining ANC, while the most important component, RA, is not studied well enough and lacks supporting experiments and theoretical insights.
2. The training process of a model with ANC is not clear enough. It would have been better to depict it through a diagram. Furthermore, the last paragraph in the introduction section is abstract and confusing, making it difficult for readers to understand the exact contributions.
Experiments:
1. The proposed method in the main table (Table 1) is evaluated without considering "p" in the attack, which creates a false sense of security. In Table 2, attacks with "p" demonstrate a significant decrease in the model's performance.
2. Even though "Error Bars" is marked in the checklist, I don't see standard deviation values in the reported tables.
3. Similarly, although "Reproducibility" is marked in the checklist, the code necessary to reproduce the results is not available.
4. I conducted an experiment with TRADES, and the accuracy I obtained differs from what is presented in Table 1. The clean accuracy is 80.92%, and the PGD accuracy is 50.10%.
5. The results of the Tiny-ImageNet experiments are inconsistent with those of other datasets, and the performance of the model does not seem promising. Additionally, they have not been compared with baselines, which is essential.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: Questions to ask for clarification based on the claims in the paper:
1. Why does AT omit learning robust features? Is it possible to theoretically demonstrate why AT is unable to learn robust features? AT does not have poor performance in terms of adversarial robustness. In fact, in the presence of enough data, AT achieves state-of-the-art accuracy.
Major questions:
1. The code is not available to reproduce the results. Could you please provide runnable code?
2. Section 3.2.1 lacks clarity, and the justification for the ANC is not sufficiently sound. Could you please provide further elaboration on this? (Especially since the implementation is not provided for verification, again having a diagram could be helpful)
3. Is it possible to provide empirical or theoretical evidence for the statement in section 3.3? "Each element z_i in z represents the activation level of the feature channel that may be helpful for classification, with larger values representing more feature information extracted from that channel" I am asking because only the value of z_i may not be important, and the combination of weights of the linear layer corresponding to that specific feature is the crucial factor.
4. Could you please compare the running time of your proposed approach with the baselines? I am curious about how much overhead the additional training steps add.
5. As described in section 3.3, during the test phase, a sub-vector of the linear weights corresponding to the predicted class h(x) is chosen when the label is not available. However, if the linear layer is already predicting the wrong class, strengthening the prediction by doing so may not improve the model's robustness. Therefore, it raises the question of how this part actually contributes to robustness during testing. Does the robust performance change if we only output the predicted class without doing any reverse attention?
Minor questions:
1. What does it mean to freeze natural examples (in line 65)?
2. What does "selected carefully" mean in line 68?
3. What are the problems mentioned in line 158 when it states, 'Although we have a generic pattern of AT loss with a negative contrast, there are several specific details to address'?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 2 fair
Limitations: 1. It appears that the approach does not work properly with large datasets like ImageNet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive suggestions. The answers to your questions are as follows:
**Q1) Why does AT omit learning robust features? Is it possible to theoretically demonstrate why AT is unable to learn robust features? AT has good performance in adversarial robustness with enough data.**
**A1)**
1. **First**, as we explained in lines 46-52, AT does not optimize feature distances well enough. As shown in Fig. 1 on Page 2, the feature distances between natural examples (NEs), adversarial examples (AEs) and examples of other classes (OEs) are not good enough. And we get similar results when fixing NEs as class 1, 3, 7 on CIFAR10.
**Second**, if AT had learned robust features, NEs should be aligned with AEs, which would guarantee that the predictions for NEs and AEs are highly consistent. But as shown in Tab. 1 on Page 8, there is a big gap between the clean and robust accuracy of previous AT methods. So, by contradiction, the assumption is wrong.
**Third**, as shown in Tab. 1 and Tab. 2 on Page 9, our method makes great progress in robustness compared with previous methods, which shows that learning robust features helps AT gain robustness. In other words, it shows that AT omits robust feature learning.
2. Possibly. For now, we have demonstrated it empirically.
3. As you say, AT needs much data to gain good adversarial robustness. But obtaining more data incurs additional costs, such as training generative models or manually collecting data. Besides, more data means more training cost, and these extra costs may be hard to pay. What's more, **it is meaningful to make models learn more from data rather than make models learn more data.**
**Major Q1) The code for reproducing the results.**
**A2)** We have provided Area Chairs with our code.
**Major Q2) The justification for the Asymmetric negative contrast based on probabilities (ANC) in Section 3.2.1.**
**A3)** In Section 3.2.1, we notice that negative contrast may cause the problem of class confusion. Attracted to its adversarial example and pushed away from the negative example, a natural example can easily be moved toward the wrong class space. To alleviate this problem, we have two ideas. First, we stop the back-propagation gradient of natural examples when optimizing, keeping their features in the right class. That is the "asymmetric negative contrast". Second, we design the negative contrast based on predicted probabilities. Since it only reduces confidence in the wrong classes, it does not push the natural example toward any specific class, so it works even in the scenario of class confusion.
We have also added a diagram for ANC to help readers understand each step of the implementation, shown in Fig. 1 in the PDF, with a detailed caption describing the calculation process.
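As a minimal toy illustration of the asymmetry (the feature dimensions, analytic cosine gradient, and learning rate below are assumptions for the sketch, not our actual training code): the cosine similarity between a frozen natural feature and a negative feature is minimized, so only the negative feature moves.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def push_away_step(z_nat, z_neg, lr=0.5):
    """One asymmetric negative-contrast step on toy feature vectors.

    z_nat is treated as a constant (the stop-gradient / "freezing"), so
    only z_neg receives a gradient: we minimize cos(z_nat, z_neg) with
    respect to z_neg alone, pushing the negative example away while the
    natural example stays put.
    """
    na, nb = np.linalg.norm(z_nat), np.linalg.norm(z_neg)
    c = cosine(z_nat, z_neg)
    # analytic gradient of cos(z_nat, z_neg) with respect to z_neg only
    grad = z_nat / (na * nb) - c * z_neg / nb**2
    return z_neg - lr * grad            # descent step on the similarity

rng = np.random.default_rng(1)
z_nat = rng.normal(size=16)
z_neg = z_nat + 0.1 * rng.normal(size=16)    # initially very similar
before = cosine(z_nat, z_neg)
z_neg_new = push_away_step(z_nat, z_neg)
after = cosine(z_nat, z_neg_new)
assert after < before                        # pushed away; z_nat untouched
```

In the actual method, the stop-gradient is applied to network features during training rather than via an analytic gradient on fixed vectors.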
**Major Q3) More evidence for reverse attention (RA) in Section 3.3 and whether the combination is the crucial factor.**
**A4)** We have given an empirical explanation of why RA works in lines 236-257. A similar idea of using linear-layer parameters to weight features has been discussed in [1, 34]. Moreover, the good performance of RA alone, shown in Tab. 3 on Page 9, also indicates its effectiveness.
Our statement about $z$ describes the meaning of the variables involved in reverse attention. By weighting the feature with the parameters corresponding to the target class, we obtain better-aligned features. So, as you suggest, it is the combination with the linear-layer weights, rather than the feature values alone, that matters.
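As an informal sketch of this weighting idea (the random toy classifier and dimensions below are assumptions for illustration, not the actual network):

```python
import numpy as np

rng = np.random.default_rng(2)
num_classes, feat_dim = 10, 32
W = rng.normal(size=(num_classes, feat_dim))  # linear classifier weights
z = rng.normal(size=feat_dim)                 # penultimate feature vector

def reverse_attention(z, W, target=None):
    """Re-weight the feature by the classifier row of the target class.

    During training `target` is the label class; at test time it falls
    back to the intermediate predicted class, as described above.
    """
    if target is None:
        target = int(np.argmax(W @ z))        # intermediate prediction
    z_weighted = z * W[target]                # element-wise re-weighting
    return z_weighted, int(np.argmax(W @ z_weighted))

z_w, final_pred = reverse_attention(z, W)
assert z_w.shape == z.shape and 0 <= final_pred < num_classes
```

Because features of the same class are re-weighted by the same classifier row, while features of different classes are weighted differently, the re-weighting pulls same-class (natural and adversarial) features together.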
**Major Q4) Running time.**
**A5)** In our experiments, our TRADES-ANCRA only costs more time than TRADES by 3.1 hours (3.1=9.3-6.2) in 120 epochs. Considering the significant gain in clean and robust accuracy resulting from the proposed method, the cost is relatively worthwhile.
**Major Q5) How RA actually contributes to robustness during testing? Does the robust performance change without reverse attention?**
**A6)** 1. RA helps defense models learn to extract robust features during training via alignment. Because robust features generalize well, this still works during testing. As shown in Tab. 1 in the PDF, the final predicted classes and the intermediate predicted classes are highly consistent. Because RA has no effect on the predicted classes of $p^0$, this indicates that the way RA improves robustness at test time is by guiding models to extract robust features during training.
2. Yes. Because the predicted classes of $p^0$, $p^1$ and $p'$ in Tab. 1 in the PDF are highly consistent, RA and the original block cooperate to keep a good representation and have a counteracting effect. If we remove RA, it leads to an imbalance in the final layer, which may cause a decrease in robustness. The poor performance of the final probability vector without RA, $p''$, supports this view.
**Minor Q1) What does it mean to freeze natural examples (in line 65)?**
**A7)** It means we stop the back-propagation gradient of the feature of natural examples and only move other examples in the feature space. We explain it in lines 175-177.
**Minor Q2) What does "selected carefully" mean in line 68?**
**A8)** It means in adversarial contrastive learning, negative examples are often chosen from the dataset by a specific selection strategy. We point it out in lines 197-198.
**Minor Q3) What are the problems mentioned in line 158?**
**A9)** 1. How can we design proper formulae to calculate the negative contrast? 2. How can we obtain suitable examples as negative examples? We have explained them in lines 159-160.
**L) The effectiveness on large datasets.**
**A10)** We have extended our experiments on Tiny-ImageNet. As Tab. 2 in the PDF shows, our method makes great progress in robustness (31.44% > 14.78%) compared with all the baselines, and a good improvement in clean accuracy (43.83% > 38.61%), indicating its effectiveness on large datasets.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 1pqm,
We hope that our responses address your concerns and earn an increase in your rating. If you have any new questions, we will respond promptly. Thanks again for your time.
---
Rebuttal Comment 1.2:
Title: Requires Further Improvement and Clarification
Comment: Dear authors,
I want to acknowledge that I have read your response and appreciate your effort. However, I still believe that the contribution of the ANC component in your proposed method is quite limited, as indicated in Table 1, even though a significant portion of the paper is dedicated to this approach.
While you have made attempts to address the questions I raised earlier, I strongly recommend that you carefully review the weaknesses highlighted in my reviews to further enhance your work.
Best,
---
Reply to Comment 1.2.1:
Comment: **Writing Style.2: concerns about shared features by class 0 with other classes**
In Figure 1 and Figure 2, we show the overall distances between natural examples and OEs rather than distances between pairs of two specific classes, because we want the overall distribution of feature distances to support our claims. To address your concern about features shared by class 0 with other classes, we fixed class 1, class 3, and class 7 as the natural samples and repeated the experiments. The results are consistent with the previous ones, indicating that our conclusions are sufficiently credible.
**Content.1, 2: "well-trained DNN" and definition of an Adversarial Example**
We follow your advice to revise sentences in L20-23 as follows.
Given a naturally trained DNN and a natural example, an adversarial example can be generated by adding small perturbations to the natural example. Adversarial examples can always fool models to make incorrect output. At the same time, it is difficult to distinguish adversarial examples from natural examples by human eyes.
**Content.3: the description of Figure 1**
First, we will correct "Gaussian distribution" to "bell-shaped distribution" in lines 45-47 and line 292.
Second, we object to the view that the feature distance has been optimized well enough.
1. If the distances between natural and adversarial samples were already sufficiently small, then natural and robust accuracy should be quite close. But in fact, the natural accuracy is much higher than the robust accuracy (e.g., 80.90% > 44.35% in Table 1). Though a cosine similarity between 0.9 and 0.99 looks like a very small distance between natural and adversarial examples, their representations still need to be optimized to be more similar in order to reach similar accuracies.
2. If the distances between natural samples and negative samples were already sufficiently large, then the distances between classes would be large enough for classification. But in fact, the robust accuracy is clearly not high enough (e.g., 44.35% in Table 1).
3. High natural accuracy does not mean that the class spaces are distant enough; it only shows that the boundaries of different classes are divisible. In adversarial training, our targets are both natural accuracy and robust accuracy. The distances should be enough not only to distinguish natural samples of different classes, but also to keep the adversarial samples located at the class boundaries separable. So it is necessary to maintain a large margin to correctly classify adversarial examples. Exclusion is proposed to achieve this goal.
**Content.4: the definition of examples of other classes (OEs)**
For the definition, we define OEs in L43-44 and L148-149. An OE is an example of another class (with a different label from that of the natural example). In this paper, we choose OEs as negative examples, following adversarial contrastive learning. During training, OEs can be obtained either by selection strategies or by our generation strategy. To make this clearer, we have added Table 2 in the supplementary materials, and we will add the detailed definitions to the Notations.
**Content5: the meaning of 'partial parameters'**
'Partial parameters' means the weights of the linear layer that are used to calculate the probability of the target class. The target class is the label class during training and the predicted class during testing. We will add this description to Section 3.3 to help readers understand it.
---
Reply to Comment 1.2.2:
Comment: **Approach and Methodology.1(1): the Asymmetric negative contrast based on probabilities (ANC) does not make significant contributions**
Firstly, ANC clearly improves natural and robust accuracy over the baseline (e.g., by 1.8% in natural accuracy and 5.7% in robust accuracy against PGD). Figure 3 also shows that ANC promotes exclusion and keeps large margins between different classes. So it is not objective to conclude that ANC makes no significant contribution to clean accuracy and robustness.
Secondly, unlike adversarial examples, which are located at class boundaries, natural examples generally sit in the center of the class space, so classifying natural examples is easier than classifying adversarial examples. As a result, natural classification benefits from ANC more than adversarial classification does. On the other hand, RA helps alignment and pulls adversarial examples close to natural ones, which brings the accuracy on adversarial examples close to the corresponding natural accuracy, so RA naturally contributes more to robustness. It is not surprising that ANC alone does better on clean accuracy and RA alone does better on robust accuracy.
Thirdly, compared with ANC alone and RA alone in Table 3, ANCRA gains 1% in natural accuracy and a small increase in robustness against all attacks except AutoAttack, against which its robustness drops by 1.3%. Though there is always a trade-off between natural accuracy and robustness, many researchers are dedicated to maintaining good natural accuracy in AT, even at the expense of robust accuracy, so natural accuracy is also an important indicator for adversarial training. Overall, the combination of the two techniques handles the trade-off well and obtains a good improvement in clean accuracy without sacrificing much robustness.
Fourthly, the motivation and design of ANC are enlightening and important contributions. ANC improves performance, which indicates that our ideas about exclusion for robust features are correct. We believe that not only the technique itself but also our motivation and ideas can bring new insights to the community.
**Approach and Methodology.1(2): reverse attention (RA) is not studied well enough and lacks supporting experiments and theoretical insights**
We have shown our motivation for RA in L68-72 on Page 2 and L213-220 on Page 6. We have shown its principle and theoretical insights in L235-257 on Pages 6-7. Reviewer PqGt thinks it is intuitive and has been properly justified in the manuscript. And experiment results in Fig. 3 on Page 7, Tab. 1 on Page 8, and Tab 2 on Page 9 have proven its effectiveness.
**Approach and Methodology.2: the training process of ANC and the description of the last paragraph in the introduction**
For the training process of ANC, we have added a diagram to help readers understand each step of the implementation, shown in Fig. 1 in the PDF, and we use a detailed caption to describe the calculation process in the diagram.
For the description, we guess you mean that PP and NP may make it hard for readers to understand the contribution. So we have revised it as follows:
To address the issue, we propose an AT framework to concentrate on robust representation with the guidance of the two characteristics. Specifically, we suggest two strategies to meet the characteristics, respectively. For exclusion, we propose an asymmetric negative contrast based on predicted probabilities, which freezes natural examples and pushes away OEs by reducing the confidence of the predicted class when predicted classes of natural examples and OEs are consistent. In particular, we find OEs generated by the targeted attack are more beneficial for correct classification than those selected carefully. For alignment, we use the reverse attention to weight feature by parameters of the linear classifier corresponding to target classes, which contains the importance of feature to target classes during classification. Because the feature of the same class gets the same weighting and feature of different classes is weighted disparately, natural examples and AEs become close to each other in the feature space. Empirical evaluations show that AT methods combined with our framework can greatly enhance robustness, which means the neglect of learning robust feature is one of the main reasons for the poor robust performance of AT. In a word, we propose a generic AT framework with the Asymmetric Negative Contrast and Reverse Attention (ANCRA), to learn robust representation and advance robustness. Our main contributions are summarized as follows:
---
Reply to Comment 1.2.3:
Comment:
**Experiments.1: concerns about the false sense of security without ‘p’**
Attack with "p" is the adaptive attack for our method, which is harder to defend against than general white-box attacks, so it is normal to see lowered robustness when we test with the adaptive attack using "p". This happens frequently in other work as well. Notice that the performance of our method against the adaptive attack still surpasses that of other methods against normal white-box attacks (e.g., 61.68% > 48.88%), which shows our method does achieve real gains in robustness.
**Experiment.2, 3: error bars and reproducibility**
We have given all the implementation details in L266-273 on Page 7 and in the supplementary materials. We ran each experiment at least three times, and the variation of accuracy is generally less than 1.7%, which is small compared with the improvement. And we have provided the Area Chairs with an anonymized link to our code.
**Experiment.4: reproducibility for TRADES**
From our experience, your code may differ from ours in learning rate, weight decay and attack step size. We set the learning rate to 0.01, the weight decay to 0.0002 and the attack step size to 0.007, as written in lines 267, 268 and 272. The code is also available. Besides, vanilla TRADES maximizes the KL loss to generate adversarial examples for training, but we maximize the CE loss to generate adversarial examples, as in PGD. With these changes, the three baselines (PGD-AT, TRADES, MART) use the same method to generate adversarial examples for training.
**Experiment.5: the results on Tiny-ImageNet**
We have conducted additional experiments to provide three baselines. As shown in Tab. 2 in the PDF, our method clearly improves robustness over all baselines, indicating its effectiveness on larger datasets.
---
Reply to Comment 1.2.4:
Comment: We hope that our responses address your concerns about the weaknesses and lead you to raise your rating. If you have any new questions, we will respond promptly. Thank you for your suggestions; we will improve our paper and experiments. | Summary: In this paper, the authors address the notions of exclusion and alignment in representation learning for robust adversarial training (AT). They propose a generic framework for AT that includes asymmetric negative contrast and reverse attention in order to obtain robust representations. In addition, they propose to weight features by the parameters of the linear classifier as the reverse attention, to obtain class-aware features and pull close the features of the same class. The authors show empirical evaluations on three benchmark datasets, demonstrating improved robustness under AT as well as increased performance in comparison to state-of-the-art methods.
Strengths: Originality. The proposed AT framework that concentrates on robust representation with the guidance of the two characteristics like exclusion and alignment appears to be novel. Generating and using negative samples by targeted attack to assist learning also appears novel.
Quality. The quality is good. The paper is nicely motivated. The idea of using two characteristics for robust representation seems reasonable: (1) exclusion: pushing the feature representations of natural examples away from the feature representations of examples from other classes; and (2) alignment: pulling the feature representations of natural examples and of their corresponding adversarial examples close to each other. The approach can be used in a plug-and-play manner with a number of defence methods, while the empirical validation shows advantages under the setting of white-box and adaptive attacks. In Table 2, we see consistent improvement for the attacks that use p, but there are no comparison results for the case where p is not used.
Clarity. The paper organisation, the presentation and the writing are good. I find the paper easy to follow for the most part. It might be beneficial for some parts of the paper to be explained more simply so that the main idea is more obvious. Fig. 2 might be improved to more concretely pinpoint the cases of the class confusion issue. In Section 3.2.1, the passage within lines 162-169 is a bit difficult to read on a first pass regarding the cases covered by the class confusion issue; I would suggest an additional small figure to illustrate the cases intuitively and more obviously. In Section 3.2.1, the passage within lines 170-172 is not clear (it is not always evident where sentences end and begin). Section 3.2.2 could benefit from further polishing. A table or small figure describing what negative samples or pairs (i.e., OEs), natural negative examples/pairs, adversarial negative examples/pairs, positive examples/pairs, etc. are could be helpful. The text in lines 234-257 could also benefit from sharpening, since some explanations are not that smooth and straightforward, at least to my understanding. The results in Table 4 are interesting, but the table could be reorganised and presented more clearly so that we can better grasp the actual advantage.
Significance. Adversarial training is an important topic for privacy and security applications. This paper proposes an approach towards more reliable defence against adversarial attacks. The approach also seems to provide improvements on top of other approaches such as TRADES. Generating and using negative samples by targeted attack to assist learning seems valuable. Adversarial training with reverse attention at the feature level seems interesting.
Weaknesses: Weaknesses
- In the empirical evaluation, some tables could be made more obvious (please see the comment on clarity for Table 4).
- Black-box attacks are not considered, and no comparison under black-box attacks is provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the author comment about the use of the proposed approach under black box attacks?
Does the reverse attention work only for the TRADES approach?
I wonder whether the asymmetric negative contrast based on probabilities is somewhat artificially enforced; could the authors elaborate on that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations
- By generating negative samples via targeted attack, we use a prior about the possible attack, so the defence of the approach may be biased towards the negative samples generated by the used targeted attack (or similar attacks).
- In the case of black box attacks when we do not have a prior about the attack, I'm not sure how much the proposed approach could help.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your approval of the strengths and your constructive suggestions. The answers to your questions are as follows:
**Clarity and W1) Some parts of the paper need a clear explanation to easily show our ideas.**
**A1)** Thanks for your advice about clarity; we will add a dotted box to Fig. 2 on Page 5 to directly point out a practical instance of **class confusion**. Additionally, we will improve the text in lines 162-169, lines 170-173, and the other places you and other reviewers have mentioned. Due to the word limit, we can only show part of them below.
**Improved text in lines 170-173 on Page 4**: To alleviate the problem of class confusion, we should reasonably control the repulsion of the negative contrast. We propose an asymmetric form of the negative contrast, $Sim^{\alpha}(x, x_o)$, to decouple the repulsive force into two parts: a one-sided push from the natural example to the OE and a one-sided push from the OE to the natural example, given by:
For the **definition in Section 3.2.2**, we have defined adversarial examples (**AEs**) and examples of other classes (**OEs**) in lines 43-44 on Page 2 and lines 148-149 on Page 4. A natural example is a natural example used to train and test in the current batch; the AE is its corresponding adversarial example; an OE is an example of another class (with a different label from that of the natural example). In this paper, we choose AEs as **positive examples** and OEs as **negative examples**, following adversarial contrastive learning. We do not define **negative samples** explicitly because they are a common concept in contrastive learning, meaning natural examples with different labels from the anchors. **Natural negative examples** are negative examples selected in the batch or dataset, and **adversarial negative examples** are made from natural negative examples by our targeted-attack strategy; these two concepts match their literal meaning. We define **negative pairs** and **positive pairs** in lines 63-64 on Page 2 and lines 135-137 on Page 4: negative pairs are two-element sets of natural examples and their negative examples (i.e., OEs), and positive pairs are two-element sets of natural examples and their positive examples (i.e., AEs). To make this clearer, we have made Tab. 2 in the supplementary materials, and we will add some detailed definitions to Notations on Page 4 in the revised version.
For **Tab. 4** on Page 9, we plan to add a time comparison table to highlight our strengths. As shown in the table below, our strategy costs less time than the average of these selection strategies (9.8 hours) but achieves the best performance. Besides, Soft-LS and Hard-LS pick up samples with specific predicted classes from other classes, which carries the risk of not finding suitable samples in a single batch. Our strategy, in contrast, only needs a random sample with a label different from the natural one, and it is hardly possible for all the samples in a batch to share the same label. That is one of our strengths.
| Defense | Random | Soft-LS | Hard-LS | Targeted attack |
| ---- | ---- | ---- | ---- | ---- |
| Total time(hours) | 6.9(±0.2) | 11.4(±0.1) | 11.3(±0.3) | 9.3(±0.4) |
**Q1, W2, L2) The effectiveness of our method against black box attacks.**
**A2)** Our method can defend against black-box attacks because we need labeled data only during training, not testing. We have conducted experiments against transfer-based black-box attacks: we craft adversarial examples with PGD-100 on a source model and then attack the target models. The results are shown in the table below. Notice that all the source and target models are ResNet-18, so transfer attacks are relatively easy. Our method gains the best robustness among all the methods, indicating its effectiveness in the black-box scenario.
| Target \ Source | PGD-AT | TRADES | MART |
| ---- | ---- | ---- | ---- |
| PGD-AT | 44.73 | 58.25 | 59.65 |
| TRADES | 58.91 | 48.53 | 60.21 |
| MART | 58.66 | 58.46 | 49.26 |
| TRADES-ANCRA | 62.03 | 60.43 | 62.23 |
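The transfer protocol above (craft adversarial examples on a source model, then score the target model) can be sketched as follows; a single FGSM step on toy linear models stands in for the PGD-100 used in the rebuttal, and all names here are ours.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, x, y, eps):
    """One-step sign attack (stand-in for PGD-100, for brevity)."""
    p = softmax(W @ x)
    p[y] -= 1.0                        # grad of CE wrt the logits
    return x + eps * np.sign(W.T @ p)  # chain rule through the linear layer

def transfer_accuracy(W_src, W_tgt, X, Y, eps=0.1):
    """Craft AEs on the source model, then measure the target's accuracy."""
    hits = 0
    for x, y in zip(X, Y):
        x_adv = fgsm(W_src, x, y, eps)
        hits += int(np.argmax(W_tgt @ x_adv) == y)
    return hits / len(X)

rng = np.random.default_rng(0)
W_src = rng.normal(size=(3, 5))
W_tgt = rng.normal(size=(3, 5))
X = rng.normal(size=(20, 5))
Y = np.array([int(np.argmax(W_tgt @ x)) for x in X])  # target's own labels
print(transfer_accuracy(W_src, W_tgt, X, Y))
```

The diagonal of the rebuttal's table corresponds to `W_src is W_tgt` (a white-box transfer onto itself), which is why those entries are lowest.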
**Q2) Does the reverse attention work only for the TRADES approach?**
**A4)** No. As we have highlighted in line 81 on Page 3 and line 263 on Page 7, it can be combined in a plug-and-play manner with other defense methods. And it does not need any extra components. As shown in the experiments, it can be successfully combined with PGD-AT, TRADES and MART.
**Q3) The reasonability of asymmetric negative contrast based on probabilities (ANC).**
**A5)** From the view of logic and motivation, we notice that adversarial training lacks robust feature learning. Thus, we propose two characteristics to guide it, exclusion and alignment, with one technique for each; ANC is designed to promote exclusion. Therefore, it is logical and fits our motivations.
From the view of the technique itself, experiments have shown that it does help exclusion and improves both generalization and robustness, which achieves our goal.
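One minimal way to picture the probability-based repulsion is sketched below. This is not the paper's exact $Sim^{\alpha}$ equation; the class-consistency check and the confidence-reduction term are our simplified reading of the description.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def anc_loss(p_nat, p_oe):
    """One-sided push: when the natural example and the OE share a predicted
    class, penalize the OE's confidence in that class; the natural example
    is treated as frozen, so no force acts on it."""
    c_nat = int(np.argmax(p_nat))
    c_oe = int(np.argmax(p_oe))
    if c_nat != c_oe:
        return 0.0                  # no class confusion, no repulsion needed
    return float(p_oe[c_nat])       # minimizing this lowers the OE's confidence

p_nat  = softmax(np.array([2.0, 0.5, 0.1]))
p_same = softmax(np.array([1.5, 0.2, 0.0]))   # OE predicted as the same class
p_diff = softmax(np.array([0.0, 2.0, 0.1]))   # OE predicted as another class
print(anc_loss(p_nat, p_same), anc_loss(p_nat, p_diff))
```

The asymmetry lies in which side receives a gradient: only the OE branch is optimized, so the natural example's representation is not dragged around by the repulsion, mitigating class confusion.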
**L1, L2) The prior learned from targeted attacks.**
**A3)** During training, we craft negative examples via the targeted attack to teach models prior knowledge of adversarial noise gradually. This prior knowledge transfers easily from the training set to the test set, because we assume the training and test sets share the same data distribution. We are not sure whether it introduces a bias towards targeted attacks, but experimental results show that it improves robustness against various attacks. It remains effective at inference time regardless of the attack type.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer K7R1,
We hope that our responses address your concerns. If you have any new questions, we will respond promptly. Thanks again for your time.
---
Rebuttal Comment 1.2:
Title: Response to the Rebuttal
Comment: Thanks to the authors for their feedback; it addresses some of the concerns that were raised. I also read the other reviews and the discussion there. I am of the opinion to keep my initial score. | Summary: This paper presents two characteristics of robust features, exclusion and alignment, and proposes a novel adversarial training method with asymmetric negative contrast and reverse attention. For exclusion, it introduces an asymmetric negative contrast loss and generates adversarial negative examples by targeted attacks to push away examples of other classes in the feature space; for alignment, it introduces reverse attention that weights features based on the parameters of the linear classifier to obtain class-aware features. Experimental results on three datasets show that the proposed method significantly improves the robustness of the models.
Strengths: 1. This paper introduces two characteristics that robust features should have: exclusion and alignment, and proposes a new AT framework that enables models to learn robust features effectively. It can be used in a plug-and-play manner and works well with existing AT methods.
2. Specifically, asymmetric negative contrast loss for exclusion of robust features, a technique to create hard negative samples through targeted attacks, and reverse attention using class information for alignment are proposed, which induce the characteristics of robust features. To my knowledge, these techniques are novel in AT.
3. The proposed method records the SoTA performance in experiments on CIFAR-10, -100, and Tiny-ImageNet. The performance improvement shown in Table 1 is impressive.
4. Overall, the paper is well written to help the readers understand the presented concepts. In detail, figures 1 and 3 show the statistical differences between NEs and OEs and NEs and AEs, making it easier for the reader to understand the difference in the distribution of features and the effectiveness of the proposed AT. Figure 2 also helps to illustrate the problem of class confusion.
Weaknesses: 1. The practical implementation details for Reverse Attention in Section 3.3 are unclear. Looking at line 228 of the paper, it says that the proposed method uses p and p' together to train the model, but it is ambiguous in what way the proposed method uses them together (e.g., does it use p+p' or alternate between p and p'). Releasing the source code in the future will answer a lot of questions, but it would be nice if it could also be clarified in the paper.
2. The experimental results for "adaptive attack" in Section 4.3 raise the question of whether the experimental results in Section 4.2 are an unfair comparison. Therefore, it is necessary to clarify what the role of p is and whether this is the role of the key in the gray-box setting.
3. Table 1 in the supplementary material, the results on Tiny-ImageNet, does not have baselines, making it difficult to see performance gains.
4. Making hard negative examples via targeted attacks requires additional computation, but according to Table 4, the resulting performance gain is small. This paper also needs a clear and specific description of the negative sample generation (such as whether PGD-N was used). In the supplementary material, it is described as if they used the AEs that are predicted as the classes of NEs in the current batch as OEs, which is difficult to apply in cases with a large number of classes like CIFAR-100 unless the batch size is large enough.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (Typos)
In line 331, Trdeas-ANCRA -> Trades-ANCRA
In Table 2 in Supplementary material, Condation -> Condition
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: In this paper, possible limitations are not discussed separately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your approval of the strengths and your constructive suggestions. The answers to your questions are as follows:
**W1) The unclear computing details for Reverse Attention (RA).**
**A1)** Due to the page limit, we focused on the principle and function of our method in the paper, so the practical implementation details of RA were omitted. We will provide a description and diagram showing how RA works in the supplementary materials of the revised version. The diagram is shown in Fig. 2 in the PDF, and the description is as follows: first, we obtain two auxiliary probability vectors ($p^0$ and $p^1$) in the two blocks of the final layer of the model. Second, we calculate the auxiliary loss $l$ and obtain the predicted classes $h(x)$ to choose the corresponding linear parameters $\Omega^j$ for testing. After multiplying the feature by $\Omega^j$, we obtain the final output $p'$ to calculate the loss $l'$. Finally, we add $l$ and $l'$ with a specific weight to obtain the total loss and then backpropagate.
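Based on that description, the inference path might be sketched roughly as below; the function and variable names are ours rather than the authors' code, and the loss-weighting step is omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reverse_attention_forward(z, W):
    """Sketch: predict a class from the raw feature, re-weight the feature
    element-wise by that class's classifier row (Omega^j), then classify
    the re-weighted feature."""
    p_aux = softmax(W @ z)          # auxiliary probability vector
    j = int(np.argmax(p_aux))       # intermediate predicted class h(x)
    z_ra = z * W[j]                 # element-wise re-weighting by Omega^j
    p_final = softmax(W @ z_ra)     # final output p'
    return p_aux, j, p_final

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))       # toy linear classifier: 10 classes, 64-dim features
z = rng.normal(size=64)
p_aux, j, p_final = reverse_attention_forward(z, W)
print(j, p_final.sum())
```

Note that a wrong intermediate class `j` selects the wrong row of `W`, which is exactly the dependence-on-intermediate-predictions limitation the authors acknowledge.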
**W2) Concern about the fairness of the comparison experiments in Section 4.2.**
**A2)** To demonstrate fairness, we first describe the detailed implementation of the experiments. The experiments in Section 4.2 are conducted on four robust models trained with four defense methods. First, we sample clean data of class 0 from the test set and generate adversarial examples via untargeted PGD-10. Second, we pick clean examples from other classes and turn them into negative examples via our targeted PGD-10 generation strategy. We then measure the corresponding distance indicators. The procedure is completely consistent across the four trained models, so there is no unfairness in it.
We are not so sure whether your concern about the unfair comparison derives from the method of attacks. We choose PGD as a representative of common white box attacks, to gain results of feature distributions that can better show how this defense performs against general attacks rather than the adaptive attack. Because the adaptive attack with auxiliary probability vector $p$ is specially designed for our method, the defense results against the adaptive attack cannot represent the performance of the model in general defense scenarios. The generalization of defense results is what we want. Therefore, it's fair to choose white box PGD rather than adaptive PGD to attack our method.
**W3) The lack of baselines on Tiny-ImageNet.**
**A3)** We have conducted additional experiments to provide three baselines. As shown in the table below, our method clearly improves robustness over all baselines, indicating its effectiveness on larger datasets.
| Defense | PGD-AT | TRADES | MART | PGD-AT-ANCRA | TRADES-ANCRA | MART-ANCRA |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Nat | 41.31(±1.2) | 37.27(±0.5) | 38.61(±0.9) | 43.02(±1.7) | 38.94(±0.6) | 43.83(±0.9) |
| PGD | 10.28(±0.7) | 16.30(±0.8) | 14.78(±0.5) | 29.79(±0.7) | 31.24(±1.4) | 31.44(±0.4) |
**W4) The strengths and description of our strategy of negative sample generation.**
**A4)** Though its performance gain is not significant, it has four strengths.
1. The strategy of negative samples via targeted attack gains the best performance over other selection strategies, especially in the last epoch.
2. As shown in the table below, our strategy costs about 2 hours less than Soft-LS and Hard-LS and about 2 hours more than Random. It thus spends less time than the average of these selection strategies while achieving the best performance, which is a favorable trade-off.
3. And as you have mentioned, Soft-LS and Hard-LS pick up samples with specific predicted classes from other classes, which suffer from the risk of not finding suitable samples in a single batch. While our strategy only needs a random sample with a label different from the natural one.
4. Though it learns a prior about adversarial noise only from targeted attacks, the prior still works when defending against other kinds of attack.
| Defense | Random | Soft-LS | Hard-LS | Targeted attack |
| ---- | ---- | ---- | ---- | ---- |
| Total time(hours) | 6.9(±0.2) | 11.4(±0.1) | 11.3(±0.3) | 9.3(±0.4) |
For the description of negative sample generation, see lines 197-201 on Page 5 and Tab. 2 in the supplementary materials. First, we take a batch of natural examples, whose labels serve as the target classes. Second, for each natural example, we randomly choose an example from this batch with a label different from the target class, called the natural negative example for ease of understanding. Third, we attack these natural negative examples from their original classes toward the target classes by targeted PGD-10, as written in L201 on Page 5.
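A rough sketch of that three-step generation, on a toy linear model; the helper names and hyperparameters are ours, while the paper runs targeted PGD-10 on the real network.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_pgd(W, x, target, eps=0.3, alpha=0.05, steps=10):
    """Targeted PGD: descend the CE loss of the target class, pushing the
    example toward being classified as `target`."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        p[target] -= 1.0
        g = W.T @ p                            # grad of CE(., target) wrt input
        x_adv = x_adv - alpha * np.sign(g)     # descent step toward the target
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # stay in the L-inf ball
    return x_adv

def make_negative(W, batch_x, batch_y, i, rng):
    """Pick a random in-batch sample whose label differs from anchor i's,
    then attack it toward the anchor's class (illustrative helper)."""
    target = batch_y[i]
    others = [k for k in range(len(batch_y)) if batch_y[k] != target]
    k = others[rng.integers(len(others))]
    return targeted_pgd(W, batch_x[k], target)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
batch_x = rng.normal(size=(4, 5))
batch_y = np.array([0, 1, 2, 0])
neg = make_negative(W, batch_x, batch_y, 0, rng)
print(neg.shape)
```

Because any sample with a different label is a valid source, the selection cannot fail unless the whole batch shares one label, which is the point the authors make against Soft-LS and Hard-LS.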
**Q) Typos.**
**A5)** We are sorry for typos and other writing errors in our paper. We will follow your and other reviewers' suggestions to fix all the typos as well as other writing errors. We will do our best to improve this issue in the revised version.
**L) The missing limitations in our paper.**
**A6)** We will discuss limitations in the revised version. Details are as follows: though reverse attention contributes to both generalization and robustness, its accuracy depends heavily on the intermediate predicted classes, which may cause degraded performance under powerful attacks. We will study this further and hope to improve its robustness in the future.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer BjXk,
We hope that our responses address your concerns and lead you to raise your rating. If you have any new questions, we will respond promptly. Thanks again for your time.
---
Rebuttal Comment 1.2:
Comment: I would like to thank the authors for their effort in preparing the detailed and kind rebuttal.
The computational details explained by the authors in their rebuttal are very helpful for my understanding, and the experimental results of the baselines on Tiny-ImageNet demonstrate the performance improvement of the proposed method.
Regarding W2, the concern I raised was that it might be unfair to compare primarily on the basis of robustness against adversarial attacks from attackers who don't know 'p'. Since many adversarial defense strategies have been bypassed by white-box adaptive attacks, I believe that the proposed method should also be ‘primarily’ evaluated by its robustness against white-box adaptive attacks. In the future, it would be nice to include adaptive AutoAttack results when p is known.
Regarding W4, as we need to see and evaluate the performance of both the best model and the last epoch model, the performance improvement from the strategy of generating negative examples through targeted attacks seems quite limited. The differences in experimental settings also make it difficult to compare the robustness of SoTA models in RobustBench with the robustness of the experimental results in this paper.
Although more experimental results are needed to show a clear performance improvement, I think the motivation behind the proposed methods is novel, so I would maintain or slightly downgrade my initial rating.
---
Reply to Comment 1.2.1:
Comment: About **W2**, we have conducted new experiments on WideResNet-28-10 and WideResNet-34-10. As shown in the Table below, our method has made great enhancements in robust accuracy. The accuracies of our method against adaptive AutoAttack are even higher than those of baselines against AutoAttack (**51.99**%>**50.79**% and **51.85**%>**51.11**%). This indicates its effectiveness against adaptive attacks.
| Model | Method | NAT | AutoAttack | Adaptive AutoAttack |
|------------------|--------------|--------------|------------|---------------------|
| WideResNet-34-10 | TRADES | 82.04 | 50.79 | \ |
| WideResNet-34-10 | TRADES-ANCRA | 83.19 | 66.28 | 51.99 |
| WideResNet-28-10 | TRADES | 82.47 | 51.11 | \ |
| WideResNet-28-10 | TRADES-ANCRA | 83.61 | 66.08 | 51.85 |
About **W4**, as mentioned in the global response, our generation strategy via targeted attacks has four strengths in different aspects. Though it does not show a large improvement in the current settings, its idea may contribute to other work such as adversarial contrastive learning. We will also provide more experiments to show its effectiveness in the revised version, such as fair comparison experiments with SOTA methods in RobustBench.
Thank you for your recognition and suggestions. We will improve our paper and experiments in the revised version. Thanks for your time again! | Rebuttal 1:
Rebuttal: Dear **ALL** reviewers,
We are very grateful for your time and constructive suggestions. Here, we first summarize the **strengths** acknowledged by multiple reviewers.
We are encouraged by the approval of Reviewer PqGt, Reviewer BjXk, Reviewer K7R1 and Reviewer 2AVw for our **inspiring motivations**: they find our motivations and insights for learning robust features novel and inspiring. Besides, all of our reviewers agree that we have proposed **interesting techniques**: Reviewer PqGt, Reviewer BjXk and Reviewer K7R1 praise their novelty, Reviewer PqGt finds them intuitive and properly justified in the manuscript, and Reviewer K7R1 finds them valuable. What's more, the **impressive improvements in experiments** are affirmed by Reviewer PqGt, Reviewer BjXk, Reviewer K7R1 and Reviewer 2AVw: we reach the best robustness on different datasets and models, while also improving clean accuracy.
Here, we answer the **common concerns** of several reviewers and then state the limitations of our work.
**Question 1: Whether the asymmetric negative contrast based on probabilities (ANC) is necessary and effective in our method? (Reviewer PqGt, Reviewer K7R1, Reviewer 1pqm)**
**A1**: Yes. From the view of logic and motivations, ANC is necessary for exclusion and robust feature learning. And from the view of empirical evaluations, our method benefits from ANC in terms of the trade-off between generalization and robustness. As shown in Tab. 3 on Page 9, ANCRA is the only method to have natural accuracy over 81.0%, and it still keeps similar robustness as individual reverse attention.
**Question 2: What are the strengths of our generation strategy of negative examples via the targeted attack? ( Reviewer PqGt, Reviewer BjXk, Reviewer K7R1)**
**A2**: **First**, our strategy gains the best performance over the selection strategies, especially in the last epoch. **Second**, as shown in Tab. 3 in the PDF, our strategy costs less time than the average of the selection strategies while achieving the best performance. **Third**, Soft-LS and Hard-LS need to pick up samples with specific predicted labels from other classes, which carries the risk of not finding suitable samples in a single batch, whereas our strategy only needs a random sample with a label different from the natural one. **Fourth**, though it learns a prior about adversarial noise only from targeted attacks, the prior still works when defending against other kinds of attacks.
**Question 3: What will happen if the wrong predicted class $h(x)$ leads to the wrong weighting in reverse attention (RA)? And how does RA work in testing? (Reviewer PqGt, Reviewer BjXk, Reviewer 1pqm)**
**A3**: 1. The wrong predicted class always causes misclassification, but this situation does not significantly affect performance. As shown in Tab. 1 in the PDF, the final predicted class and predicted class $h(x)$ remain highly consistent. However, as shown in Tab. 2 on Page 9 and Tab. 1 on Page 8, our defense against adaptive attacks keeps the best performance compared with all the approaches against white box attacks, indicating that the wrong prediction does not significantly affect performance in practical testing.
2. RA works by teaching models to **extract robust features** during training and by having a **counteracting effect with the feature block**. Since RA has no direct effect on the high accuracy of the predicted classes $p^0$ in Tab. 1 in the PDF, this indicates that RA guides models to extract robust features during training. Because robust features generalize well, RA remains effective during testing.
Besides, as shown by the high consistency of $p^0$, $p^1$ and $p'$ in Tab. 1 in the PDF, RA and the original block cooperate to maintain good representations in different blocks and have a counteracting effect. The poor performance of the final probability vector without RA, $p''$, supports this view.
**Limitations:**
**A4**: We will add limitations in the revised version. Details are as follows: though reverse attention contributes to both generalization and robustness, its accuracy depends heavily on the intermediate predicted classes, which may cause degraded performance under powerful attacks. We will study this further and hope to improve its robustness in the future.
Moreover, we will follow suggestions to fix **typos** and writing errors in the revised version. We will improve vague sentences and add diagrams for individual techniques (e.g., Fig. 1 and Fig. 2 in the PDF) for easy understanding. Our **code** has been submitted to Area Chairs.
Pdf: /pdf/e9f5f51b4cf0c50b7f8108b1bac9e52895f82642.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work aims to improve adversarial training (AT) techniques from the perspective of learning robust representations. Specifically, the authors highlight two characteristics of robust features. Exclusion: the similarity between features of samples of one class and features of samples of other classes should be very low, so that the model can differentiate between features of different classes for better classification. The second attribute is Alignment: the gap between features of adversaries and clean samples of the same class must be very small, which increases the model's robustness against perturbed samples.
To effectively satisfy these conditions, this work proposes two techniques: (1) to enforce exclusion, an asymmetric negative contrast loss is proposed which minimizes the clean sample's similarity with negative samples of other classes, crafted by an adversarial attack; (2) to satisfy alignment, a reverse attention strategy is proposed which aligns the features of training examples belonging to the same class.
The proposed method is compatible with existing adversarial training techniques. Extensive experiments are conducted where the proposed method improves the natural accuracy as well as the robust accuracy when combined with existing AT algorithms.
Strengths: Strengths:
1) The idea of improving adversarial robustness of model by explicitly learning robust representations seems interesting. Although the traditional AT and contrastive learning based AT implicitly learns robust features, this method attempts to achieve the same in more explicit manner.
2) The proposed techniques of utilizing asymmetric negative contrast loss and reverse attention to achieve exclusion and alignment during adversarial training are intuitive and are properly justified in the manuscript.
3) The method provides impressive results as compared to previous state-of-the-art approaches.
Weaknesses: Weaknesses:
1) There are concerns regarding the proposed reverse attention strategy. During the testing, the true labels are not known and reverse attention uses the predicted label h(x) to calculate z'. In case the model provides wrong predicted class label, the corresponding z' will be also then multiplied with wrong classifier vector. This will further lead to degraded performance. The authors have not tried to address this scenario.
2) From the works of [1] and [34], how is the proposed reverse attention different? Unfortunately the authors have not provided any comparisons or contrast.
3) In the ablation studies provided in Table 3, combining ANC and RT only marginally improves results compared to the individual ANC and RT results. It looks like there is some sort of competition between the two proposed techniques.
4) Similarly, the use of negative samples via proposed targeted attack in Table 4 shows marginal improvements overall.
5) The paper is very difficult to understand, especially for the readers who are new to the technique of adversarial training. The presentation can be significantly improved.
Minor weaknesses:
typo at line 281 scenaios -> scenarios
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have not discussed any limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your approval of the strengths and your constructive suggestions. The answers to your questions are as follows:
**W1) The wrong predicted class leads to the wrong weighted feature and degraded performance.**
**A1)** A wrong predicted class always causes misclassification, but it does not significantly affect overall performance. First, the accuracies of the predicted labels in the different blocks of reverse attention are shown in the table below. They show that the final predictions and the intermediate predicted labels remain highly consistent. The dependence on intermediate predicted labels is a limitation of our method, and we will add it to the revised version. In detail: though reverse attention contributes to both generalization and robustness, its accuracy depends heavily on the intermediate predicted classes, which may cause degraded performance when faced with powerful attacks. We will study this further and hope to improve its robustness in future work.
| | Nat | PGD | Adaptive PGD |
| ---- | ---- | ---- | ---- |
| Predicted labels (first block) | 81.81 | 83.52 | 62.25 |
| Predicted labels (second block) | 81.81 | 83.49 | 62.23 |
| Final predicted labels | 81.81 | 83.47 | 62.24 |
Second, as shown in Tab. 2 on Page 9, our defense against the adaptive attack still achieves the best performance compared with all the approaches in Tab. 1 on Page 8 (e.g., 61.68% > 48.88% against PGD), indicating that this issue does not significantly affect performance. This is because reverse attention not only helps defend models by weighting features in the final layer, but also teaches models to extract robust features throughout the network. So it still improves robustness despite this problem.
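For clarity, a schematic sketch of the final-layer weighting (simplified, illustrative NumPy; `reverse_attention_sketch` is a toy helper, not our exact implementation). It shows why a wrong intermediate prediction selects the wrong classifier row:

```python
import numpy as np

def reverse_attention_sketch(z, W):
    """Toy sketch of reverse attention at the final layer.
    z: (d,) feature vector; W: (num_classes, d) classifier weights.
    The feature is re-weighted by the weight row of the intermediate
    predicted class, so a wrong prediction picks the wrong row of W."""
    logits = W @ z
    pred = int(np.argmax(logits))   # intermediate predicted label
    attention = np.abs(W[pred])     # weights of the predicted class
    return z * attention, pred
```

In this simplified form, the only class-dependent quantity is which row of W is used, which is exactly the dependence on intermediate predictions discussed above.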
**W2) Difference of reverse attention compared with methods in [1] and [34].**
**A2)** [1] proposes an additional linear layer to learn which feature channels are important for classification, and uses its parameters to weight the features. Our reverse attention not only achieves this goal without any extra components, but also helps models learn robust features throughout the network, and can be explained in terms of alignment of robust representations. [34] is similar to [1] and contributes a method for weighting features properly, which is orthogonal to our contributions. We briefly introduce [1] and [34] in lines 114-119 of the Related Work on Page 3, where we point out that they rely on extra model components and do not explain the underlying reason.
[1] Bai, Yang, et al. "Improving adversarial robustness via channel-wise activation suppressing." arXiv preprint arXiv:2103.08307 (2021).
[34] Yan, Hanshu, et al. "Cifs: Improving adversarial robustness of cnns via channel-wise importance-based feature selection." International Conference on Machine Learning. PMLR, 2021.
**W3) The combination of the Asymmetric Negative Contrast (ANC) and Reverse Attention (RA).**
**A3)** The combination of ANC and RA achieves a good trade-off between generalization and robustness. Although there is a trade-off between natural accuracy and robustness, many researchers are dedicated to maintaining good natural accuracy in adversarial training, even at the expense of robust accuracy, which indicates that natural accuracy is an important metric for adversarial training. As shown in Tab. 3 on Page 9, ANC contributes more to natural accuracy than RA, while RA does better at boosting robustness. Compared with ANC and RA individually, ANCRA gains about 1% in natural accuracy and a marginal increase in robustness against all attacks except AutoAttack, where robustness drops by 1.3%. The combination thus obtains an excellent improvement in clean accuracy (81.70%) without sacrificing much robustness, which we consider a good result.
**W4) The effectiveness of our strategy of negative samples generated by targeted attack.**
**A4)** Here are four strengths.
1. Our strategy gains the best performance over other selection strategies, especially in the last epoch.
2. As shown in the table below, our strategy costs about 2 hours less than Soft-LS and Hard-LS and about 2 hours more than Random. It takes less time than the average of these selection strategies while achieving the best performance.
3. Soft-LS and Hard-LS pick samples with specific predicted labels from other classes, which risks not finding suitable samples within a single batch. In experiments on CIFAR-100, they fail to find proper samples in 40%-70% of selections. In contrast, our strategy only needs a random sample with a label different from the natural one, and it is hardly possible for all the samples in a batch to share the same label.
4. Though it learns a prior about adversarial noise only from targeted attacks, the prior still works when defending against other kinds of attack.
| Defense | Random | Soft-LS | Hard-LS | Targeted attack |
| ---- | ---- | ---- | ---- | ---- |
| Total time(hours) | 6.9(±0.2) | 11.4(±0.1) | 11.3(±0.3) | 9.3(±0.4) |
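For clarity, a simplified sketch of the selection step described in point 3 (illustrative NumPy only, not our training code; `random_negative_indices` is a toy helper):

```python
import numpy as np

def random_negative_indices(labels, rng):
    """For each sample in a batch, pick a random in-batch index whose
    label differs from its own. This is the only requirement of our
    strategy; it fails only if the whole batch shares a single label."""
    labels = np.asarray(labels)
    neg = np.empty(labels.shape[0], dtype=int)
    for i, y in enumerate(labels):
        candidates = np.flatnonzero(labels != y)
        neg[i] = rng.choice(candidates)
    return neg
```

Unlike Soft-LS/Hard-LS, no specific predicted label is required, so a valid negative almost always exists in the batch.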
**W5) The paper is difficult to understand for readers new to adversarial training.**
**A5)** Subject to the page limit, we focused on the principle and function of our method in the paper; some details and prior knowledge may be difficult for readers new to adversarial training. We have added diagrams for the individual techniques to help readers understand each step of the implementation, shown in Fig. 1 and Fig. 2 in the PDF, with detailed captions describing the calculation process.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer PqGt,
We hope that our responses address your concerns and merit an increase in your rating. If you have any new questions, we will respond promptly. Thanks again for your time. | null | null | null | null | null | null |
The s-value: evaluating stability with respect to distributional shifts | Accept (poster) | Summary: The paper presents a metric (s-value) that quantifies the uncertainty of statistical estimators in terms of their distributional instability. In addition, the techniques proposed can quantify the effect of directional shifts and the authors also discuss how the s-value can be used to improve estimation accuracy under shifted distributions.
Strengths: The development of methods that quantify the effect of distributions shifts is very relevant since such shifts are very common in practice. The overall approach in the paper is of interest and the results in the paper provide a promising initial step towards novel measures of uncertainty that account for distributional shifts.
Weaknesses: The interpretation of the s-value in (1) for scalar parameters is somehow clear since it measures the smallest divergence needed to change the sign of the parameter (assuming the parameter continuously depends on the distribution). However, the usefulness of its generalization to the multidimensional case in (11) needs further clarification. In the scalar case, the divergence needed to achieve a zero value reflects the divergence needed to have a change of sign. In the multidimensional case, it is not clear why measuring the divergence needed to achieve a zero vector valued \eta quantifies the instability.
The experimental results are not correctly described. It is hard to grasp the main takeaways of such results and the relationship with the theoretical results provided. In general, the paper presentation is rather poor and multiple results appear in the appendices. It is clear that the paper can be significantly improved.
The contribution of the proposed two-stage approach for transfer learning is hard to quantify. In general, the authors should compare their methods with existing techniques so that the paper's contributions can be better assessed.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: The definition in (2) is unclear in page 2. The authors should mention there that the set \mathcal{P} in (2) denotes joint distributions of Z and E, since in that page those distributions correspond to r.v. Z only.
In line 248 the authors mention "Our findings show that the average treatment effect is unstable with respect to changes in the marginal distribution of ‘age’, ‘education’, and ‘re75’. We also find that s-values conditional on ‘age’, ‘education’, ‘black’, ‘hispanic’, and ‘re75’ are non-zero, indicating that the average treatment effect can change its sign with a shift in the marginal distribution of these covariates (sX > 0.85)." Are those results shown in Figure 2?
The authors should improve the quality of figures that currently is rather poor.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors should improve the description of the limitations of the methods proposed as described in the "weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful suggestions and interest in our work. We hope that the additional evidence and our explanations will address your concerns.
> The interpretation of the s-value in (1) for scalar parameters is somehow clear since it measures the smallest divergence needed to change the sign of the parameter (assuming the parameter continuously depends on the distribution). However, the usefulness of its generalization to the multidimensional case in (11) needs further clarification. In the scalar case, the divergence needed to achieve a zero value reflects the divergence needed to have a change of sign. In the multidimensional case, it is not clear why measuring the divergence needed to achieve a zero vector valued $\eta$ quantifies the instability.
We agree that (11) does not have many direct applications. The s-value in (11) is just an intermediate result. In practice, the stability of scalar components of multidimensional parameters is of paramount importance (Section 5.1, Section 5.2, Section D.1). For example, in causal inference, we are often interested in the average treatment effect (a scalar component) in the presence of multiple nuisance parameters (confounders). Such an example is presented in Section 5.1.
> The experimental results are not correctly described. It is hard to grasp the main takeaways of such results and the relationship with the theoretical results provided. In general, the paper presentation is rather poor and multiple results appear in the appendices. It is clear that the paper can be significantly improved.
We are not sure what is meant by “not correctly described”. In the final version, we will include tables of s-values (see response to reviewer oQ9C) and improve the quality of the figures. We believe that this will improve readability. If this does not address your concern, we would appreciate additional input.
Regarding the Appendix: We decided to explain the conceptual ideas in the main paper and move the more technical results to the appendix. The goal is to convey the main ideas and numerical evidence without losing "tempo". We believe this is a stylistic choice and hope that the reviewer agrees with our reasoning.
> The contribution of the proposed two-stage approach for transfer learning is hard to quantify. In general, the authors should compare their methods with existing techniques so that the paper's contributions can be better assessed.
Unfortunately, for the s-value itself, there exists no direct competitor (see rebuttal summary above).
Based on your feedback we have evaluated a popular feature selection method, the Lasso. For the wine quality data, it selects all covariates while for the NSW data, it selects no covariates. The reason for this failure is that the Lasso selects based on feature importance in a prediction problem instead of stability in an estimation problem.
The naive approach is to conduct semi-parametrically efficient transfer learning based on data on all covariates. This is an oracle procedure that provides an upper bound on other procedures. We show that our approach is competitive with this data-hungry oracle method while requiring much less data.
> The definition in (2) is unclear in page 2. The authors should mention there that the set $\mathcal{P}$ in (2) denotes joint distributions of Z and E, since in that page those distributions correspond to r.v. Z only.
We will address this and clarify that the set $\mathcal{P}$ in (2) denotes joint distributions of Z and E in the final version.
> In line 248 the authors mention "Our findings show that the average treatment effect is unstable with respect to changes in the marginal distribution of ‘age’, ‘education’, and ‘re75’. We also find that s-values conditional on ‘age’, ‘education’, ‘black’, ‘hispanic’, and ‘re75’ are non-zero, indicating that the average treatment effect can change its sign with a shift in the marginal distribution of these covariates ($s_X > 0.85$)." Are those results shown in Figure 2?
Yes, the s-values can be obtained from Figure 2 by checking at which x-values the lines cross zero, but we agree that this is more complicated than necessary. Following your feedback, we have decided to include a table with the s-values in the main paper (the tables can be found in the response to reviewer oQ9C). Thank you for your input!
> The authors should improve the quality of figures that currently is rather poor.
We realize that some characters are only partially visible in Figures 2, 4 and 5. We will address this in the final version. We will also fix some color issues. If you have any other feedback, please let us know.
> The authors should improve the description of the limitations of the methods proposed as described in the "weaknesses" section.
We will expand the discussion of the weaknesses; in particular, we will describe more directly situations that might require other types of distributional stability values. For example, ongoing work covers shifts in $Y|X$. We believe there is room for a variety of new stability values that cover different types of distributional shifts. If you have additional feedback, please let us know.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. However, I still think the paper's contribution is not significant enough for this conference. For instance, it seems the proposed methods are only useful in the scalar case.
---
Reply to Comment 1.1.1:
Title: Misunderstanding
Comment: Thank you for engaging with our review. We believe that there has been a misunderstanding.
Our procedure can be helpful in the multi-parameter case. For example, ANOVA can be used to test hypotheses about multiple parameters. ANOVA requires being specific about the null hypothesis (for example that the group means in ANOVA are all equal to zero). For such a hypothesis, one can also compute s-values as in equation (11) with eta=0. For more details, please see Appendix Section D.
Thus, the proposed method applies to and can be useful in multi-dimensional situations as well, but scientific applications are usually formulated as inferences on scalar parameters in the presence of (potentially high-dimensional) nuisance parameters, which is the main focus of the paper. The focus on scalar parameters in the presence of nuisance parameters is a stylistic decision that reflects scientific practice.
To the best of our knowledge, we are the first to study the stability of parameters under various types of distributional shifts. | Summary: This work defines a novel statistical measure of the “stability” of a parameter in a distribution with respect to changes in that distribution. From a high level, it is defined as the minimum KL distance to a perturbation that flips the sign of the parameter. A method is given to calculate the s-value for mean parameters and several experiments are performed to show its utility.
Strengths: The method extends to the directional case. I can see the application of this to average treatment effect to be useful in industry in online experimentation platforms. Strong arguments for the utility of s-value for doing robust transfer learning.
Weaknesses: One weakness is that the calculation of the s-value uses a theorem that is only applicable to mean value parameters. The difficulty is that the definition uses an optimization problem over a set of functions rather than real-valued variables.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Could you extend to more general cases by learning a flexible neural density approximator to P_0, and solving (3) by (constrained) variational inference?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Even though it’s addressed in the appendix, it seems non-practical to calculate the s-value for parameters other than the mean or functions of the mean.
The paper restricts discussion to distributions that are absolutely continuous with respect to the training distribution, although this is not much of a limitation as it covers most practical cases of interest.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in evaluating our manuscript. We hope that we can address your concerns with this rebuttal.
> One weakness is that the calculation of the s-value uses a theorem that is only applicable to mean value parameters.
Please note that we have many results that cover more general cases (e.g., Example 6 in the Appendix covers generalized linear models). We provide an extensive theory (Appendix D, E, and F) to cover a variety of real-world cases.
> The difficulty is that the definition uses an optimization problem over a set of functions rather than real-valued variables.
In statistical and machine learning problems, it is common to initially state a problem as an optimization over an infinite-dimensional space. For example, non-linear prediction can be stated as $\arg\min_f E[(Y - f(X))^2]$. In observational causal inference, under ignorability the parameter is a functional of an infinite-dimensional nuisance parameter. Thus, we believe that our approach is well within mainstream ML.
From a practical side, our theoretical results allow us to re-cast the problem as finite-dimensional optimizations (sometimes even one-dimensional convex problems, see lines 127 and 146).
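To make this reduction concrete, here is a simplified sketch for a mean parameter (illustrative only; `min_kl_to_flip_mean` is a toy helper, and the exponential-tilting form follows from the Donsker-Varadhan duality mentioned above, not from our exact code). The smallest KL divergence needed to shift an empirical mean to zero reduces to minimizing the log moment generating function over a single tilting parameter:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def min_kl_to_flip_mean(x):
    """Smallest KL(Q || P0) over exponential tilts Q proportional to
    exp(lam * x) * P0 such that the tilted mean equals zero. By convex
    duality this equals -min_lam log E_P0[exp(lam * X)], i.e. the
    large-deviations rate function evaluated at 0."""
    x = np.asarray(x, dtype=float)
    log_mgf = lambda lam: np.log(np.mean(np.exp(lam * x)))
    res = minimize_scalar(log_mgf)  # one-dimensional convex problem
    return -res.fun

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=1.0, size=100_000)
kl = min_kl_to_flip_mean(sample)
# For N(mu, sigma^2) the rate at 0 is mu^2 / (2 sigma^2), i.e. 0.5 here
print(kl)
```

The s-value in the paper is defined via such a minimal divergence; this sketch only computes the divergence itself.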
> Could you extend to more general cases by learning a flexible neural density approximator to $P_0$, and solving (3) by (constrained) variational inference?
In general, such a direct approach is possible and might lead to interesting new procedures. However, our proposed procedure has two advantages that may not be straightforward to implement with the reviewer’s approach. First, neural density estimation does not consider the particular functional form of the estimand. As Donsker-Varadhan shows, taking into account the functional form can reduce the infinite-dimensional problem to a finite-dimensional one. Secondly, density estimation usually has extremely slow convergence rates and (in our experience) might lead to unreliable estimates unless penalized very carefully.
> Even though it’s addressed in the appendix, it seems non-practical to calculate the s-value for parameters other than the mean or functions of the mean.
We respectfully disagree with this statement. From an optimization perspective, the question is not whether the parameter is a function of the mean; it is whether the parameter is a linear functional in the distribution space. Parameters of linear regression are non-convex functionals when viewed as functionals on the distribution space. Thus, the empirical example in Section 5.2 already captures this challenge. More general cases (such as empirical risk minimization) are treated in extensive detail in the appendix. | Summary: This paper proposes a measure to quantify the instability of a statistical parameter under distribution shifts, which calculates the minimal KL divergence to flip the sign of the estimated parameter. The authors demonstrate its usage in helping to collect target samples in transfer learning. The idea is clear and the theoretical analysis is solid, while the experiments seem a little inadequate.
Strengths: 1. The idea makes sense and the paper is easy to follow.
2. The proposed measure is novel, and the theoretical analysis is solid.
3. The proposed measure could help to collect data in transfer learning, and the two-stage method is simple but seems efficient.
Weaknesses: Generally, I love this paper, but there are some drawbacks that stop me from giving a higher score:
1. The experiments seem a little inadequate. I view this paper as technical work, but there are almost no baselines to compare in experiments. There are some recent papers sharing similar ideas (efficiently collecting data in transfer learning), and I think the authors should compare or at least mention them. For example:
* Data Shapley: Equitable Valuation of Data for Machine Learning. ICML 2019
* Shapley values for feature selection: The good, the bad, and the axioms
* Algorithms to estimate Shapley value feature attributions
* Towards Efficient Data Valuation Based on the Shapley Value
Also, maybe the authors could compare with some feature selection methods in experiments.
2. There are some typesetting problems in Figure 4 on page 9. Some contents are covered by figures.
3. The authors measure instability via its effect on estimation. I wonder whether it would be better to directly measure model performance, since model performance already takes the estimated parameters into consideration. For example, this paper: "Minimax Optimal Estimation of Stability Under Distribution Shift", Hongseok Namkoong, Yuanzhe Ma, Peter Glynn. I hope that the authors could discuss these two perspectives more.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: How could the proposed measure be used in neural networks or in large-scale problems?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We are excited to hear that you “love this paper”!
> The experiments seem a little inadequate. I view this paper as technical work, but there are almost no baselines to compare in experiments. There are some recent papers sharing similar ideas (efficiently collecting data in transfer learning), and I think the authors should compare or at least mention them. For example: (...)
Thank you for these references, they help us clarify differences. We will include a short discussion in the final version. Here is our view:
For Shapley values, the goal is usually to evaluate the value of data (sets) or features in the context of prediction. In contrast, we are interested in parameter changes under distribution shift. As an example, if you want to predict whether someone has a headache you might want to take into account the situation an individual is in; but understanding whether the causal effect of taking an aspirin generalizes to a new situation is a different question.
Regarding baselines: We are unaware of direct competitors to our approach. To the best of our knowledge, there exists no other method that measures the stability of a parameter with respect to various covariate shifts. Thus we compare it to an oracle procedure that gets to use additional data from the target distribution and conducts semi-parametrically efficient transfer using double-machine learning [1,2]. In our view, this is among the strongest oracles possible. Motivated by your question, we have tried feature selection based on the Lasso. As discussed in the rebuttal summary, feature selection based on the Lasso fails at our task of selecting the important variables for parameter transfer.
> Also, maybe the authors could compare with some feature selection methods in experiments.
Thank you for the suggestion. Based on it, we applied a popular feature selection method (the Lasso) to the experiment. For the NSW data set, the Lasso selects no covariates, and for the wine quality data set it selects all covariates. We attribute this failure to the fact that the Lasso quantifies feature importance in prediction problems, not sensitivity in parameter estimation problems.
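For illustration, a simplified version of such a Lasso check on simulated data (a scikit-learn sketch only; the data below are synthetic, not the NSW or wine-quality data sets):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 5))
# In this toy setup, only the first covariate drives the outcome
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
# The Lasso selects by predictive importance -- a different criterion
# from parameter stability under distribution shift.
print(selected)
```

The point of the comparison is the criterion, not the tuning: even a well-tuned Lasso answers "which covariates predict the outcome", not "which covariate shifts destabilize the parameter".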
> There are some typesetting problems in Figure 4 on page 9. Some contents are covered by figures.
Thank you. We will fix this in the final version.
> The authors measure the instability via the effects on estimation. I wonder whether it is a better way to directly measure the model performance since model performance has already taken into consideration of the estimated parameters. For example, this paper: “Minimax Optimal Estimation of Stability Under Distribution Shift”
Thank you for the thoughtful comment. We love the mentioned paper, since it employs a similar philosophy, albeit for prediction problems instead of parameter estimation problems. Let’s consider the example of fitting a model for causal inference. One parameter might correspond to the treatment, while the other parameters might correspond to nuisance parameters (such as the effect of age, education, socioeconomic status,...). Model performance would now measure the stability of the model with respect to shifts (including changes in age and education) and would be misleading if someone is just wondering whether the treatment would still work in a new situation.
In short, evaluating model stability is useful for predictions under shifts, while parameter stability is useful for parameter estimation under shifts.
### References
[1] Jin, Ying, and Dominik Rothenhäusler. "Tailored inference for finite populations: conditional validity and transfer across distributions." Biometrika (2023)
[2] Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.
[3] Wager, S., & Athey, S. (2018). Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523), 1228-1242.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed explanation. I have updated my score, trusting that the following will be reflected in your revised manuscript (or camera-ready version).
* A discussion on relevant works measuring stability, e.g. “Minimax Optimal Estimation of Stability Under Distribution Shift”.
* A discussion on the choice of KL divergence, especially its potential shortcomings. | Summary: For a given data distribution P0 and family of distributions around it, mathcalP, the authors propose a measure of stability in estimating
a parameter theta(P) when there are distribution shifts within mathcalP. The idea is that you have data P0, but might really be interested
in estimating theta for some P' != P0 that you might get when deploying your model. That is, from data P0, know how bad you will
be at estimating theta(P'). Sensitivity to changes depend on the parameter of interest, and the idea here is to quantify how much of a shift it
would take (within P, from P0) to change the sign of theta(P0). The resulting quantity s(theta, P0), which should maybe be caled s(theta, P0, mathcalP) is called the s-value, which takes values in [0,1] for 1 meaning "small shifts will change the sign of theta" to 0 meaning "the sign is stable".
The authors study not only marginal shifts, but also shifts that arise when some given conditionals are known to be fixed. Finally, they describe a two-stage procedure for determining whether it would be useful to collect test-set samples for certain subsets of features (those for which the s-value is most unstable).
Strengths: - distribution shift is discussed in ML for predictive tasks but less in statistical inference / parameter estimation settings. It's great that the authors are bringing together the two areas, and providing tools for reasoning about inferences in the real world where we do expect shifts
- Moreover, we don't always care just about absolute shifts, but instead qualitative things, like parameter sign changes, which the authors focus on
- The authors discuss the conditional case where shift is only limited to some features, which helps us study more specific instances of the problem as well as gain statistical efficiency when we do have some knowledge
Weaknesses: The weaknesses are not in the mathematical method or the experiments, but rather in the discussion and contextualization of this work.
- I think the iterative two-stage procedure for selecting features to collect test data for is a pretty cool use case, but there is little discussion of when this would actually be feasible in the real world. Please discuss further when one should or should not be able to do this.
- The authors somewhat quickly dismiss influence-function (IF) based estimation in the Related Work, but my strong feeling is that a more thorough discussion of the relationship to this field is necessary. IF-based estimation is not just about robustness to outliers; it actually has a fairly deep connection to this work, since influence functions show up in the functional derivative of a parameter to be estimated with respect to an underlying distribution. Moreover, just as the authors of this work consider parameter sensitivity conditional on certain factors of the joint distribution, the IF literature likewise considers projections that describe parameter sensitivity to changes in distributions when some factors are fixed. I admit that the IF literature is somewhat dense, but I believe it would be useful to state some relationships, even at a high level, to the derivatives and von Mises ("distributional Taylor") expansions in e.g. Kennedy's review here: https://arxiv.org/pdf/2203.06469.pdf
---- just some writing comments below ---
- I assume figure 2 is about the data NSW given the legend mentioning demographics, but the figure did not explicitly mention this. The figure caption should mention the name of the dataset to avoid confusion.
- Figure 3 does not mention which data it is about in the caption, and one has to work backwards to the NSW experiment to find out which data the figure is about by searching for the text "Figure 3". Please mention the data in the figure caption.
- when you say "We employ Jin and Rothenhäusler [22]’s transfer procedure to estimate", it would be helpful to be able to read what the method is from a re-statement in your paper rather than having to open the citation, since it's a detail that is part of your experiments.
- in lines 59-61, "We discuss how our ... same to re-estimate the parameter under shifted distribution", it would be correct to change "under shifted distribution" to either "under a shifted distribution" or "under shifted distributions".
- The remark in 63-70 could be moved to the end of the work since it sort of interrupts the flow
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
- The two-stage procedure involves testing the stability of various subsets of features and seeing if there are some for which it would help to collect samples from the test set. Since it would usually not be possible to do this kind of search in the real world, what would be the practical take-away for how to make use of this in non-synthetic studies? Did I understand correctly that testing a given subset Xs requires collecting it from the test set? It's fine to study this ideal case in a research paper, but it seems like it necessitates more discussion.
- NSW experiment: your initial estimator has a std dev of 492 which is large for an effect size of 820. How was the std dev estimated? Is this the expected ATE std dev using this method on this data?
- There is a claim in the paper that "full transfer" doesn't provide much gain over "partial transfer". This probably needs some more explanation. Is the claim that using Pproj to estimate theta is as good as using Ptest? This must only be true under some assumptions. Even if these are mentioned implicitly when describing the data, it would be helpful for the authors to restate what might cause this phenomenon. It's fine to report this phenomenon if it does hold on your data, but please contextualize.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful and thorough review. We are glad that you find it “great that the authors are bringing together the two areas” [distribution shift in ML for predictive tasks and estimation problems in statistics]!
Thank you for your thoughtful comments on writing, we will address them in the final version of the manuscript.
> I think the iterative two-stage procedure for selecting features to collect test data is a pretty cool use case, but there is some lacking discussion about when this would actually be feasible in the real world. Please further discuss when one should or should not be able to do this.
In the following, we will discuss an example from the social sciences since applied researchers from political science have signaled interest in our procedures.
Researchers are often interested in a causal effect estimate for a new location. For some covariates such as age and education, partial data is available via surveys such as American National Election Study (ANES) or Cooperative Election Study (CES). Additional partial data can be cheaply obtained via Amazon Mechanical Turk. However, some covariates are hard to collect, since they require running a study in the new location. Our numerical results show that the proposed approach can help prioritize data collection. This may drastically reduce the cost compared to running full-scale replication studies.
We will add this discussion to the final manuscript.
> The authors somewhat quickly dismiss influence-function (IF) based estimation in the Related Work, but my strong feeling is that a more thorough discussion of the relationship to this field is necessary. IF-based estimation is not just about robustness to outliers, but actually has a fairly deep connection to this work where influence functions show up in the functional derivative of a parameter to be estimated with respect to an underlying distribution.
We agree that functional derivatives are closely related to our work and apologize if the “related work” section gave this impression. In fact, the theory in the Appendix is based on functional derivatives of the parameter. We will rephrase the corresponding sentences in the final version.
> The two-stage procedure involves testing the stability of various subsets of features and seeing if there are some for which it would help to collect samples from the test set. Since it would usually not be possible to do this kind of search in the real world, what would be the practical take-away for how to make use of this in non-synthetic studies? Did I understand correctly that testing a given subset Xs requires collecting it from the test set? It's fine to study this ideal case in a research paper, but it seems like it necessitates more discussion.
To compute the s-value it is *not* necessary to have data from the test set. Searching over several subsets can be done without having additional data. Only for conducting the transfer procedure, one needs partial data from the test set. For a more concrete example discussing a use-case, see above.
> NSW experiment: your initial estimator has a std dev of 492 which is large for an effect size of 820. How was the std dev estimated? Is this the expected ATE std dev using this method on this data?
We chose the NSW data set since it is the most widely used data set in observational causal inference. Standard deviations for classical procedures can be found in [1], and range between 500 and 1100. Our baseline procedure is based on SOTA cross-fitted doubly-robust machine learning and thus performs a bit better than classical approaches.
We estimated the standard deviation by the empirical standard deviation of the cross-fitted influence function, which is the standard implementation in the R-package GRF [2].
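To illustrate this recipe (standard error = empirical standard deviation of the influence-function scores divided by $\sqrt{n}$), here is a minimal numpy sketch for the doubly-robust (AIPW) ATE. It uses oracle nuisance functions on simulated data rather than cross-fitted machine learning, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated observational data with known nuisances (for illustration only)
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-X))           # true propensity score P(T=1 | X)
T = rng.binomial(1, e)
m0 = X                             # true outcome regression E[Y | T=0, X]
m1 = X + 2.0                       # true outcome regression E[Y | T=1, X]; true ATE = 2
Y = np.where(T == 1, m1, m0) + rng.normal(size=n)

# AIPW (doubly robust) influence-function scores
psi = m1 - m0 + T * (Y - m1) / e - (1 - T) * (Y - m0) / (1 - e)

ate_hat = psi.mean()
se_hat = psi.std(ddof=1) / np.sqrt(n)   # SE = empirical std of the IF / sqrt(n)
```

In practice the nuisances $e$, $m_0$, $m_1$ would be replaced by cross-fitted estimates, as in the GRF implementation mentioned above.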
> There is a claim in the paper that “full transfer” doesn’t provide much gain over “partial transfer”. This probably needs more explanation. Is the claim that using Pproj to estimate theta is as good as using Ptest? This must only be true under some assumptions. Even if these are mentioned implicitly when describing the data, it would be helpful for the authors to restate what might cause this phenomenon. It’s fine to report this phenomenon if it does hold on your data, but please contextualize.
Thanks for helping us clarify this. Our intuition is that in these cases most of the shift is in the observable covariates $X$ (while $Y|X$ is roughly invariant). The s-value captures which types of $X$-shifts the parameter is sensitive to. Full transfer does not improve performance, since it additionally updates the distribution along directions that the parameter is not sensitive to.
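This intuition — when only the distribution of $X$ shifts and $Y|X$ is invariant, reweighting along $X$ recovers the shifted parameter — can be illustrated with a toy simulation (a sketch with made-up numbers, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Train and test differ only in the distribution of X; Y | X is invariant
p_train, p_test = 0.3, 0.7
X = rng.binomial(1, p_train, size=n)
Y = 2.0 * X + rng.normal(size=n)           # E[Y | X] = 2X in both distributions

# Density-ratio weights w(x) = p_test(x) / p_train(x) for binary X
w = np.where(X == 1, p_test / p_train, (1 - p_test) / (1 - p_train))

theta_naive = Y.mean()                     # targets E_train[Y] = 2 * 0.3 = 0.6
theta_shifted = np.sum(w * Y) / np.sum(w)  # targets E_test[Y] = 2 * 0.7 = 1.4
```

Here `theta_shifted` is a self-normalized importance-sampling estimate of the test-distribution mean using only the covariate shift, which is why updating the distribution along further directions (full transfer) adds no gain when $Y|X$ is invariant.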
### References
[1] Dehejia, R. H., & Wahba, S. (1999). Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs. Journal of the American Statistical Association, 94(448), 1053-1062.
[2] Athey, S., Tibshirani, J., & Wager, S. (2019). Generalized random forests. The Annals of Statistics, 47(2), 1148-1178.
Rebuttal: # Rebuttal Summary
We thank you for your constructive feedback. We appreciate that you find the approach “novel and interesting” (qNys), that you “appreciate the quality of this paper” (oQ9C), that the “theoretical analysis is solid” (msmy), that it is “a promising initial step towards novel measures of uncertainty” (buuF) and that you see “strong arguments for the utility of s-value for doing robust transfer learning” (ktSh).
In the following, we will address two points that the majority of the reviewers have brought up as weaknesses: [1] practicality and feasibility, and [2] empirical evaluation.
## Practicality and feasibility
Several reviewers have asked for better contextualization, in particular, whether partial test data is available in practice, and how “impactful” stability values can be in practice.
Recall that the proposed workflow is to (i) evaluate the instability of the parameter with respect to shifts in different covariates; and (ii) update the parameter using data from the covariates flagged in the first step. Our procedure focuses on the first step. The second step can be conducted using existing procedures. In the following, we will discuss an example from the social sciences since applied researchers from political science have signaled interest in our procedures.
Researchers are often interested in a causal effect estimate for a new location. For some covariates such as age and education, partial data is available via surveys such as American National Election Study (ANES) or Cooperative Election Study (CES). Additional partial data can be cheaply obtained via Amazon Mechanical Turk. However, some covariates are hard to collect, since they require running a study in the new location. Our numerical results show that the proposed approach can help prioritize data collection. This may drastically reduce the cost compared to running full-scale replication studies.
As a side note, there is exciting empirical work by Egami and Devaux [1] who build a library of reference stability values based on survey data sets. They recommend thresholds for s-values based on empirical investigations of how data sets change between settings.
## Empirical evaluation
Overall, the reviewers have (i) asked about additional baselines, (ii) asked for a larger range of alpha in plots, and (iii) identified typesetting problems in some of the figures.
(i) Baselines
To the best of our knowledge, there exists no other method that measures the stability of a parameter with respect to various covariate shifts. Thus, there is no direct competing method. This is why we have compared our procedures to an oracle procedure that has access to additional data.
As suggested by the reviewers, we have evaluated a baseline based on feature importance. For the wine quality data set, Lasso selects all features, and for the NSW data set, Lasso selects no covariates. Thus, for NSW the performance of Lasso is equal to “Training”, while for “wine quality”, the performance of Lasso is equal to “Full Transfer”.
Why does Lasso feature selection fail? It solves a different problem; it captures the *feature importance in a prediction problem*, whereas our stability value captures the *sensitivity in a parameter estimation problem*.
(ii) A larger range of alpha
In the pdf attached to this summary, you find a plot corresponding to a larger range of alphas. Our conclusions remain unchanged.
(iii) Typesetting problems in some figures
The reviewers have criticized that some characters were only partially visible in Figures 2, 4 and 5. We will fix this in the final version of the manuscript. In addition, we will fix some color issues.
## References
[1] Devaux, M., & Egami, N. (2022). Quantifying robustness to external validity bias. Available at SSRN 4213753.
Pdf: /pdf/848afbcf11aa1f81ea9f2d23dd8193039e5ac388.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work introduces the s-value, a novel metric to measure the stability of statistical parameters. It is defined as the exponential of minus the largest KL-divergence for which the statistical parameter is 0. The smaller the s-value the more stable the parameter is. When the statistical parameter is the mean of a random variable, the s-value is shown to have a simple expression. The paper also introduces the less conservative directional s-value, which constrains the type of distribution shift possible.
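In symbols, the verbal definition in this summary can be read as follows (our paraphrase; the paper's exact formulation may differ):

```latex
s(\theta, P) \;=\; \exp\!\Big( -\inf\big\{\, D_{\mathrm{KL}}(Q \,\|\, P) \;:\; \theta(Q) = 0 \,\big\} \Big),
```

so that a parameter which can only be driven to zero by a large distributional shift (in KL divergence) receives a small s-value, matching the statement that smaller s-values indicate more stable parameters.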
Strengths: Many methods exist to find the worst-case distribution shift within some radius as a way to obtain a robust estimator. This paper instead finds the largest radius up to which a parameter is robust, and then uses this value to derive a measure of stability. I find this approach novel and I can imagine this paper having some impact in its area.
The idea behind the paper and the derivations are sound, I also appreciate the exhaustive appendix.
Weaknesses: I found the experiments hard to interpret, e.g. in Figures 4 and 5, what is $\beta$? Maybe you could improve the legend for those figures to make them easier to interpret. In Figure 4, why are the scales so different ([-2,2] vs [-200,100])? It might have been more valuable to dive into a single example and describe it more in depth. Considering the main focus of this paper, I would also suggest showing the s-values for different parameters, e.g. in a table.
Given the venue, I was expecting that the estimation of the s-values for parameters defined via risk minimization would be in the main paper.
Overall, I appreciate the quality of this paper, but I would have liked to see more clearly how impactful the s-values can be in concrete cases. Achieving this might simply require improving the experiment section to address it to a broader audience.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How useful would s-values be for non-convex models? In the appendix, it is mentioned that a small s-value cannot be necessarily interpreted as a proof of stability in that case. What results would you expect to obtain if you were to run your parameter transfer experiment using a non-linear model?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I think limitations could be better addressed by e.g. adding a paragraph in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review. Also, we are glad to hear that you “appreciate the quality of this paper”.
> I found the experiments hard to interpret, e.g. in Figure 4 and 5, what is beta? Maybe you could improve the legend for those figures to make them easier to interpret. In Figure 4, why are the scales so different. It might have been more valuable to dive into a single example and describe it in more depth. Considering the main focus on the paper, I would also suggest showing the s-values for different parameters, e.g. in a table.
Thank you for the thoughtful feedback. We will improve the descriptions for the final version. In Figure 4, the scales are very different since the plots correspond to different covariates. Following your suggestion, we will show the s-values in a table.
### S-values for NSW data set
|Feature| Age | Education| Black | Hispanic | Married |Nodegree|Re75 |
| -----------| ----------- | ----------- |----------- | ----------- | ----------- |----------- |----------- |
|Directional s-value| 0.97 | 0.91|0.52 | 0.54|0 |0 |0.96 |
### S-values for wine quality data set (parameter "pH")
| Feature | fixed.acidity | volatile.acidity | citric.acid | residual.sugar | chlorides | free.sulfur.dioxide | total.sulfur.dioxide | density | pH | sulphates | alcohol |
| ------- | ------------- | ---------------- | ----------- | -------------- | --------- | ------------------- | ------------------- | ------- | --- | --------- | ------- |
| Directional s-value | 0.86 | 0.81 | 0.65 | 0.83 | 0.94 | 0.55 | 0.8 | 0.83 | 0.97 | 0.97 | 0.88 |
### S-values for wine quality data set (parameter "density")
| Feature | fixed.acidity | volatile.acidity | citric.acid | residual.sugar | chlorides | free.sulfur.dioxide | total.sulfur.dioxide | density | pH | sulphates | alcohol |
| -------------------- | ------------- | ---------------- | ----------- | -------------- | --------- | ------------------- | ------------------- | ------- | ---- | --------- | ------- |
|Directional s-value | 0.80 | 0.94 | 0.83 | 0.81 | 0.78 | 0.81 | 0.93 | 0.84 | 0.79 | 0.9 | 0.98 |
> How useful would s-values be for non-convex models? In the appendix, it is mentioned that a small s-value cannot be necessarily interpreted as a proof of stability in that case. What results would you expect to obtain if you were to run your parameter transfer experiment using a non-linear model?
To summarize, we believe that the s-value is useful for cases that are most important for empirical applications, despite computational difficulties for non-convex models. Let’s discuss this in more detail.
We expect s-values to be accurate for small distributional changes since in that case the distribution change is well-approximated by a Taylor expansion. For large distributional changes, the s-value may exhibit high variance and may not be interpreted as proof of stability. However, in this case, the weights in transfer learning procedures will get extreme, leading to large confidence intervals. From a practical perspective, transfer learning in such settings is challenging if not impossible.
> Given the venue, I was expecting that the estimation of s-values for parameters defined via risk minimization would be in the main paper.
From a conceptual perspective, there is no new step from what Section 3 covers in the main paper. Our goal with the main paper is to lay out the ideas and concepts as clearly as possible, without losing “tempo”. We believe that this is a question of style, and hope that the reviewer appreciates our reasoning.
> Overall, I appreciate the quality of this paper, but I would have liked to see more clearly how impactful the s-values can be in concrete cases. Achieving this might simply require improving the experiment section to address it to a broader audience.
As written above, we will update the description of the figures in the final version.
Regarding “impact”: The NSW “LaLonde” data set [1] is one of the most widely used data sets in observational causal inference over the last few decades. On this data set, our two-stage procedure performs as well as a “potentially unobtainable” oracle procedure and improves more than 50% over a naive baseline. Thus, we believe that a large audience will be able to appreciate the results of our analysis. However, we would be happy to receive further input and address them in the final version.
### References
[1] Dehejia, R. H., & Wahba, S. (1999). Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs. Journal of the American Statistical Association, 94(448), 1053-1062.
---
Rebuttal Comment 1.1:
Comment: I thank you for your clarifications. I have decided to keep my score but raise my confidence.
Strengths: 1. I found the proposal novel and interesting.
2. Quantifying the stability of statistical findings under distributional shifts is an important question and this work takes a first step towards answering this question.
3. The paper is well-organized and easy to follow.
Weaknesses: 1. The treatment for the consistency of $\hat s_E(\mu, P_n)$ seems to be weak. In particular,
(1) Estimating the conditional expectation $E[Z | E]$ is a challenging problem, especially when the dimension of $E$ is large as in Example 4. The uniform convergence assumption (Assumption 1 in Appendix C) seems to be too strong.
(2) The rate of convergence of $\hat f_n(E)$ can be very slow when the dimension is large. It is of interest to know how the rate of convergence of $\hat s_E(\mu, P_n)$ depends on the one of $\hat f_n(E)$.
2. The practical usefulness of this metric is questionable. In the experiments the authors only computed a few s-values without giving empirical evidence supporting the reasonableness of these numbers. For example, is a statistical parameter with a large s-value really more sensitive to distribution shifts than one with a small s-value? It would be good to perform at least simulation studies to investigate this question.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you prove the consistency of $\hat s_E(\mu, P_n)$ under a weaker assumption on $\hat f_n(E)$?
2. How does the rate of convergence of $\hat s_E(\mu, P_n)$ depend on that of $\hat f_n(E)$?
3. Can you provide empirical results supporting the practical usefulness of s-values? For example, is a statistical parameter with a large s-value really more sensitive to distribution shifts than one with a small s-value?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful remarks.
> Can you prove the consistency of $\hat s_E (\mu, P_n) $ under a weaker assumption on $\hat f_n(E)$?
Yes, such a result can be obtained under $L_p$ convergence and a boundedness assumption, with small modifications to the current proof. We are happy to relax this in the final version of the manuscript if desired.
> How does the rate of convergence of $\hat s_E(\mu,P_n)$ depend on that of $\hat f_n(E)$?
If $\hat f_n$ has slow convergence (slower than $1/\sqrt{n}$), then the naive estimation of $\hat s_E$ also has a slow convergence rate. This can be improved, as explained below.
In Lemma C.4 in the Appendix, we derive a debiasing procedure that shows that even if the convergence rate of $\hat f_n$ is slow (e.g. $n^{-1/4}$), one can still obtain a fast $1/\sqrt{n}$ rate of convergence for $\hat s_E$. The proof is based on de-biasing techniques from the semiparametric literature that have recently attracted considerable attention under the name of “double machine learning” [1].
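For context, a generic one-step debiased estimator of this kind has the form (a standard semiparametric sketch, not the paper's exact construction):

```latex
\hat{s}^{\,\mathrm{db}}_E \;=\; s_E(\mu, \hat{P}_n) \;+\; \frac{1}{n}\sum_{i=1}^{n} \hat{\phi}(Z_i; \hat{P}_n),
```

where $\hat\phi$ estimates the influence function of the s-value functional. Adding the empirical mean of the influence function cancels the first-order bias of the plug-in, leaving a product-of-nuisance-errors remainder; two $n^{-1/4}$ nuisance rates multiply to $n^{-1/2}$, which is why the fast rate survives.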
> Can you provide empirical results supporting the practical usefulness of s-values? For example, is a statistical parameter with a large s-value really more sensitive to distribution shift than one with a small value?
Checking whether s-values are “reasonable” is best done with a small toy example where it is clear what the correct answer should be. In the Appendix, we give such a toy example (Figure 6 and Table 1) based on the famous “Anscombe quartet”. We realize that it might be helpful to move this example to the main paper for the final version.
Regarding practical usefulness, the applied community has signaled interest in the proposed procedures. For example, Egami and Devaux [2] estimate the KL divergence between multiple surveys (such as ANES or CES) and use this to derive appropriate thresholds for s-values. Other ongoing applied work focuses on shifts in $Y|X$ (“concept drift”).
### References
[1] Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1), C1-C68.
[2] Devaux, M., & Egami, N. (2022). Quantifying robustness to external validity bias. Available at SSRN 4213753.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your responses. I have no more questions and will maintain my rating. | Summary: This paper proposes method to quantify instability of a statistical parameter with respect to pertubrations around the KL divergence ball. This has implications to detect where statistical conclusions no longer hold when there is a distribution shift. The authors show this metric across both overall and directional shifted. The paper provides a two-step transfer learning strategy over two datasets to demonstrate the effectiveness of using stability as a measure of where to collect extra data for transfer learning.
Strengths: The paper was well written, with its objectives and motivations clear. Theorems and mathematical notation were easy to follow. The examples in Section 3.2 illustrated well how this metric is concretely used for different distributions. It is evident that s-values can be useful in determining when to re-train models or re-estimate statistical queries, depending on the stability of a particular parameter. Detecting under which variables shifts occur is an important open problem that the authors provide clear insight into, as well as a procedure to use s-values for improved transfer learning.
Weaknesses: However, while this is an interesting and intuitive idea, it does not seem to provide significant improvements over a transfer learning approach with all covariates. In Figures 3 and 5, transfer learning outperforms the naive approach, but this is an expected result. What is the reason for a partial transfer if a full transfer works better than or just as well? I would also be interested to see an experiment where a greater amount of data can be collected to improve the transfer learning (aka a higher $\alpha$ value for the second experiment).
It would also be helpful to provide recommendations for practitioners on what it means when a parameter is stable (is there a general threshold of $s$ at which transfer learning over a parameter works well?). Is the $s_X > 0.85$ threshold shown in the paper recommended?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: There are some uncertainties about the experimental methodology. Why were these particular datasets chosen? Something with an average treatment effect seems like it would be useful to see in medical datasets for a particular intervention. I would be interested to see if there would be significant improvements over standard transfer learning with all covariates if this procedure was done with datasets with more unstable covariate variables. It would be helpful to see a broader set of experiments, or a more thorough analysis of which covariates are helpful in this case or not.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback and comments.
> What is the reason for a partial transfer if a full transfer works better than or just as well?
Cost! Full transfer relies on having data on all covariates. Some of the data might be very expensive to collect. Partial data is often available in the form of surveys (e.g. ANES or CES) or can be collected via Amazon Mechanical Turk [1]. Our numerical results show that s-values help decide which covariates one should obtain for transfer learning. In our view, these are exciting results that may help reduce the cost of planning and running replication studies.
> I would also be interested to see an experiment where a greater amount of data can be collected to improve the transfer learning (aka a higher alpha value for the second experiment)
Sure! We have updated the figures and included them in the pdf attached to "Rebuttal Summary". Our conclusions remain unchanged.
> It would also be helpful to provide recommendations for practitioners on what it means when a parameter is stable (is there a general threshold of s at which transfer learning over a parameter works well?). Is the $s_X > .85$ threshold shown in the paper recommended?
For the data sets we have worked with, the threshold .85 has worked well. However, in the end, this is an empirical question and depends on the subject of study. There is some exciting parallel work by Egami and Devaux [1] who build a library of reference values based on survey data sets. Their empirically validated thresholds are similar to ours.
> Why were these particular datasets chosen?
The NSW data set (often referred to as the “LaLonde data”) is the most commonly studied data set in causal inference. The wine quality data set was chosen since it has appeared in recent work studying distribution shifts [2]. We have chosen these data sets to ensure high familiarity of the target audience with the data sets.
> Would be interested to see if there would be significant improvements over standard transfer learning with all covariates if this procedure was done with datasets with more unstable covariate variables. It would be helpful to see a broader set of experiments or a more thorough analysis of which covariates are helpful in this case or not.
Thank you for the suggestion. We suspect this might be the case since our procedure ignores small shifts in unimportant variables, stabilizing the transfer learning procedure. We consider this an exciting direction for future research. For the current paper, we focus on the foundations of the method, which include theoretical guarantees and showing that it can perform as well as an oracle procedure in practice.
### References
[1] Devaux, M., & Egami, N. (2022). Quantifying robustness to external validity bias. Available at SSRN 4213753.
[2] Podkopaev, A., & Ramdas, A. (2021). Distribution-free uncertainty quantification for classification under label shift. In Uncertainty in Artificial Intelligence (pp. 844-853). PMLR. | null | null |
Learning to Tokenize for Generative Retrieval | Accept (poster) | Summary: As one of the mainstream paradigms for document retrieval, generative approaches have enjoyed a steady growth of interest thanks to the recent thriving of large language models. Document tokenization is a crucial step in generative retrieval, which is rule-based in most existing methods, usually generalizing poorly to unlabeled documents. To this end, the paper presents a novel document tokenization learning framework GENRET that learns to tokenize a document into semantic docids in a discrete auto-encoding scheme. GENRET consists of a shared sequence-to-sequence-based document tokenization model, a generative retrieval model, and a reconstruction model, optimized in an end-to-end fashion using a well-designed progressive training scheme to stabilize the training process. Experimental results show that GENRET significantly improves over the baseline methods, especially in generalizing to unseen documents.
Strengths: 1. The authors have identified a significant problem in existing generative retrieval approaches. Conventional practice typically treats document tokenization as a fixed pre-processing step; however, this is not optimal as ad-hoc document tokenization methods often fail to capture the complete semantics of a document and generalize poorly to unlabeled documents. Therefore, the paper proposes for the first time to take document tokenization as a learnable module and introduces a novel framework to represent documents as discrete docids that effectively capture the semantic information of the document.
2. The auto-encoding scheme is not new but reasonably integrated into the proposed framework. The progressive training scheme effectively addresses the challenge of learning docids in an autoregressive fashion and stabilizes the training process. The diverse clustering techniques successfully facilitate the diversity of generated docids.
3. Extensive experiments on various benchmark datasets have demonstrated the superiority of the proposed method against established document retrieval approaches.
Weaknesses: 1. I think the presentation of the method description (Sec 3) could be further improved, given that some details are not clearly explained. For instance, according to Eq. 4, the reconstruction model is non-parametric, and the stop gradient operator is applied to $\mathbf{d}_t$ and $\mathbf{d}^*_t$, so which part of the parameters is the reconstruction loss aimed at optimizing? I guess it's probably the parameters of the codebook and the encoder-decoder. However, the authors mentioned that "the gradients to document representation $\mathbf{d}_T$ are defined as $\frac{ \partial \mathcal{L}_r}{ \partial \mathbf{d}_T} \coloneqq \frac{ \partial \mathcal{L}_r}{ \partial \mathbf{z}_T}$ " (in Line 173), but $\mathbf{z}_T$ is actually taken from the codebook based on Eq. 5, so is the reconstruction loss $\mathcal{L}_{rec}$ meant to optimize the codebook only? Moreover, does the generative retrieval model $P$ share the same codebook with the document tokenization model $Q$? What about the encoder and decoder? In addition, this section also does not cover the inference phase after training completes. Finally, I do not understand why the docid re-assignment technique can promote the diversity of generated docids.
2. All experimental evidence shows that the proposed approach is effective on purely quantitative measures. However, I do not see any qualitative experiments presented, e.g., by visualization analysis to demonstrate that the codebook embeddings are uniformly scattered in the document representation space or the diversity of generated docids is better than baselines, etc.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: **Q1:** Please take a look at **Weakness1** for suggestions to improve the presentation.
**Q2:** In Eq. 8, is the final loss directly the sum of reconstruction loss, commitment loss, and retrieval loss? Should we set different weighting coefficients for each loss to balance the effects between them?
**Q3:** For the implementation details (Sec 4.3), the authors say that "the length of the docid $M$ is dependent on the number of documents present. For datasets containing a larger number of candidate documents, a larger value of $M$ is set to ensure that all documents are assigned unique document ids". So what $M$ is set to for each of the three datasets used in this paper? Is it possible to suggest how $M$ should be taken for a given number of documents from the perspective of theoretical analysis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: As mentioned in the conclusion, the limitations of this work are two-fold. One is the lack of validation of the proposed method on large-scale data, and the other arises from the insufficient generalization of the model to unoptimized document collections in different types or domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and valuable comments!
**Presentation of method section**
- Thanks for your suggestions, we will add more explanations to the final paper to improve presentation.
- Firstly, the reconstruction loss optimizes both the encoder-decoder and the codebook. Concretely, $z_T$ is the codebook embedding that is closest to $d_T$ in terms of inner-product distance. This computation involves a non-differentiable operation, argmax(·). To optimize $d_T$ using $L_{rec}$, we employ straight-through gradient estimation [37], which allows the gradient at $z_T$ to be transmitted to $d_T$.
- Secondly, the parameters of both the encoder-decoder and codebook are shared between the tokenization model and the retrieval model.
- Thirdly, at the inference stage, the query is entered into the encoder, and it is then processed by decoder+codebook in an autoregressive manner to output docid tokens.
- Lastly, an intuitive explanation of the re-assignment technique is that when multiple documents receive the same ID, those with lower relative confidence are re-assigned, directing them towards less popular IDs. This process, combined with the commitment loss, results in an increase in the prediction probability of these less frequented IDs and an overall improvement in the diversity and uniformity of the assigned IDs.
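To make the straight-through trick discussed in this rebuttal concrete, here is a minimal plain-Python sketch (an illustrative reading of the standard estimator from [37], not the authors' code): the forward output equals the nearest codebook entry $z$, while in an autograd framework the `(z - d)` term would be wrapped in a stop-gradient, so gradients flow to $d$ unchanged.

```python
def quantize(d, codebook):
    # Pick the codebook entry with the largest inner product with d
    # (a non-differentiable argmax, as described in the rebuttal).
    scores = [sum(di * ci for di, ci in zip(d, c)) for c in codebook]
    idx = max(range(len(codebook)), key=scores.__getitem__)
    return idx, codebook[idx]

def straight_through(d, z):
    # Forward: the output numerically equals z. In an autograd framework this
    # is written d + stop_gradient(z - d), so the backward pass copies the
    # gradient at the output straight onto d.
    return [di + (zi - di) for di, zi in zip(d, z)]

codebook = [[1.0, 0.0], [0.0, 1.0]]
d = [0.9, 0.2]                  # encoder output (hypothetical values)
idx, z = quantize(d, codebook)  # nearest code is index 0
out = straight_through(d, z)    # numerically equals z
```

The key point is that `quantize` is opaque to gradients, while `straight_through` makes the output look like the identity function of `d` to the backward pass.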
**Qualitative analysis**
- Thanks for your suggestions. We visualize the codebook embeddings and document embeddings in the newly uploaded PDF. We see that the codebook embeddings do scatter uniformly in the document representation space.
**Loss weighting**
- Thanks for the question. During training, we apply a factor of 0.1 to the reconstruction loss to balance its scale. We will add more clarification in our final paper.
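For concreteness, the weighted combination described in this reply could look like the following sketch (an illustrative reading; only the 0.1 reconstruction factor comes from the rebuttal above, and the function name is hypothetical):

```python
def total_loss(l_retrieval, l_commit, l_rec, rec_weight=0.1):
    # The 0.1 factor on the reconstruction term balances its scale against
    # the retrieval and commitment terms, per the rebuttal above.
    return l_retrieval + l_commit + rec_weight * l_rec
```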
**Length of docid**
- Thanks for the question. We list the final choices of the hyperparameter M as follows, and we will add more details to our final paper.
| Dataset | NQ320K | MSMARCO320K | BEIR-Arg | BEIR-Covid | BEIR-NFC | BEIR-SciFact | BEIR-SciDocs | BEIR-FiQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $M$ | 3 | 4 | 2 | 4 | 2 | 2 | 3 | 3 |
- A theoretical analysis suggests that, given a certain number of documents $D$, the value of $M$ is related to the diversity of ID assignments (e.g., the diversity metric defined on line 285). If the assignment is uniform, then $M \propto \log D$. Conversely, if the ID distribution is starkly imbalanced, then $M$ may scale as $M \propto D$. In practice, predicting the diversity of assignments is challenging. We observe that the diversity of assignments decreases as the docid length increases. Consequently, in our study the value of $M$ is determined empirically; some results are provided in the table above.
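The uniform-assignment case ($M \propto \log D$) can be checked with a small sketch (an illustrative calculation; the function name and the assumption of perfectly uniform assignment are ours, not the paper's):

```python
import math

def min_docid_length(num_docs, codebook_size):
    # Under a perfectly uniform ID assignment, codebook_size ** M distinct
    # ids must cover num_docs documents, so M = ceil(log_K(D)).
    return max(1, math.ceil(math.log(num_docs) / math.log(codebook_size)))

# With K = 512 codes, 320K documents need M = 3, since 512**2 = 262,144 < 320,000
m = min_docid_length(320_000, 512)
```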
**Limitation**
- As we have discussed in the conclusion, scaling the model to larger data typically requires a longer docid length, more model capacity (parameters), and more computational resources. In addition, we would like to add that: (i) our evaluation scale aligns with previous studies such as DSI and NCI; (ii) our proposed method outperforms existing generative retrieval systems on the unseen subset; and finally, application to larger datasets may pose additional problems, e.g., an increase in computational resources, which falls outside the scope of this study but could provide an interesting direction for future research.
We hope our response can address your concerns.
[37] Oord, Aäron van den et al. Neural Discrete Representation Learning.
---
Rebuttal Comment 1.1:
Title: No further problems for me
Comment: I have read the other reviews and the rebuttal. And I thank the authors for adequately addressing my concerns. For $\mathcal{L}_{rec}$, I now better understand how the encoder-decoder and codebook are optimized. The added qualitative results also seem to support the validity of the proposed method well. I'm willing to raise my **score** to "6" if the authors promise to include these results and analysis in the final submission.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback and for considering a higher score. We promise to include these new results and analysis in our final paper.
Best regards. | Summary: Current document retrieval systems map documents and queries to docids. These docids are assigned randomly, by clustering, or by using text/attribute information. The authors propose a learned document tokenization scheme where the semantics of the documents are encoded into learned docids. These docids are used in end-to-end document retrieval methods. The proposed method consists of a retrieval model, a tokenization model, and a reconstruction model. All three components are jointly trained. The authors show slight improvements on many different datasets. In particular, they show that their method generalizes much better to unseen data.
Strengths: Recently introduced end-to-end retrieval models are becoming more popular because they result in better performance. Learning better tokenization schemes is indeed a very relevant and important problem. The authors investigate a reasonable approach of using an auto-encoder to obtain doc ids. The 14% increase in performance on unseen data for the NQ dataset does suggest that the proposed method leads to learning better docids.
Weaknesses: The main weaknesses are as follows:
* It is not clear why the autoencoder and the retrieval model are jointly trained. Rajput et. al. [1] also propose using an auto-encoder to learn docids. They use a RQ-VAE to learn docids that are conducive for auto-regressive generation, where each additional token in the docid can be thought of as the residual. In their method the auto-encoder and the retrieval model can be separately learned.
* The evaluation using an unseen dataset is only done with the NQ dataset.
* Most retrieval methods usually use a variant of contrastive loss or a cross-entropy loss. It is not clear why the authors use both, and there is no ablation for the retrieval loss.
[1] Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan H Keshavan, Trung Vu, Lukasz
Heldt, Lichan Hong, Yi Tay, Vinh Q Tran, Jonah Samost, et al. Recommender systems with
generative retrieval. arXiv preprint arXiv:2305.05065, 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Here are some questions I have:
* How is the 'GENRET w/o learning' model trained? The paper mentions that the proposed learning scheme is not used for 'GENRET w/o learning.' Does this mean the progressive training scheme is not used? I'm trying to understand why GENRET performs so much better on the unseen test set.
* Can you compare GENRET with [1]?
* Have you inspected the learned docids? Do you see similar documents being assigned similar docids? Do you see meaningful clusters arise when you group by docids?
* How important is the commitment loss?
* How much better is GENRET when compared to other baselines on an unseen test set from other datasets? Is the improvement as significant as it is with the NQ dataset? The NQ test set is quite small and dividing it into seen and unseen further reduces the number of samples and the generalization might be more convincing if shown on larger datasets or on more datasets.
[1] Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan H Keshavan, Trung Vu, Lukasz
Heldt, Lichan Hong, Yi Tay, Vinh Q Tran, Jonah Samost, et al. Recommender systems with
generative retrieval. arXiv preprint arXiv:2305.05065, 2023.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors to discuss limitations in terms of scale of dataset and generalization to different types/domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review! We will address each of your points in turn.
**Compared with Rajput et al.**
- Thanks for your insightful comment. First, we believe it is beneficial to jointly model the tokenization and retrieval tasks in text retrieval. From the perspective of task characterization, both the tokenization and retrieval tasks aim to convert textual semantics into docids within a shared discrete space.
Thus, sharing parameters can potentially align the model's representations of the two tasks and improve their respective capabilities. From an empirical perspective, the results of the w/o learning variant, which only uses the assigned docids to train a separate retrieval model, reveal the merits of joint modeling. In contrast, consider generative sequential recommenders, which predict upcoming item IDs based on an input interaction history (a sequence of IDs). The differences between the sequential recommendation task and content tokenization could be greater than those in text retrieval, potentially leading to differences in modeling. We expect to do a more in-depth analysis in future work.
- Second, our proposed method also factors in the subsequent token as a residual of the previous one, assimilating this knowledge through the objective in Equation 4.
- Third, we would like to note that the paper from Rajput et al. is concurrent work that appeared on arXiv very close to the NeurIPS submission deadline. As a result, a comparative analysis has not yet been pursued in our submission.
- Finally, we believe our approach is appropriate and effective for text retrieval tasks, and we will incorporate a more comprehensive discussion on Rajput et al.'s insightful study in our final version.
**Unseen test set**
- Thank you for your question. We did not split the test set because 84% of the queries in the MS MARCO test set are unseen, and all the queries in the BEIR test sets are unseen [1]. Therefore, the results on MS MARCO and BEIR already demonstrate the model's ability to retrieve for unseen queries. We will add clarification on this point to our paper.
**Why both contrastive loss and cross-entropy loss**
- Since the proposed model jointly learns two objectives, decomposing/quantizing document semantic information into discrete docids and generating relevant docids for a given query, both the contrastive learning loss (for optimizing text representations) and the cross-entropy loss (for optimizing generation) are employed. Due to the limited timeframe of the rebuttal, we would like to conduct a more in-depth study of the strengths and weaknesses of the joint model versus the pipeline model in future work. We thank you for these insightful comments.
**W/o learning**
- This variant directly uses the final docids assigned by the proposed system and trains a separate DSI model following the practice of DSI-QG. The model is trained without progressive training or the contrastive loss. This variant demonstrates that the generative retrieval model jointly trained with the auto-encoding objective can represent documents more sensibly based on semantics, which could enhance performance on less-optimized documents. In contrast, the parameters obtained via the cross-entropy loss on the docid generation task might be less effective at conveying the semantic information of documents.
**Qualitative analysis of docid**
- Thanks for your suggestions. In our newly uploaded PDF, we add the following qualitative analyses:
- Figure 1 shows a case study of document content and the corresponding docids generated by GenRet. We can see that documents with more similar docids have more relevant content. For example, we find that documents with docids starting with 338-173 are related to Email, while documents with docids starting with 338 are related to information exchange methods.
- Figure 2 visualizes the codebook embeddings and document embeddings. We see that the codebook embeddings do scatter uniformly in the document representation space, and meaningful clusters arise when documents are grouped by docids.
- Figure 3 illustrates the word cloud of documents grouped by GenRet-produced docid prefixes. We can see that different positions of the docid represent different levels of information, and the semantics within each cluster are related.
**Commitment loss**
- The commitment loss can be divided into two parts: the loss for the previous docid tokens $z_{<t}$, and the loss for the docid token at the current training step, $z_t$. The former is critical for the model not to forget previous tokens during progressive training; the latter is important for the model to converge on the assignment of docids.
- We have included detailed results of the ablation experiments in a newly uploaded PDF. Without commitment loss for previous tokens, the model's prediction accuracy for docid drops to less than 30% within 5 training epochs. Without commitment loss for the current token, the model's docid variance across epochs stays higher than 5%.
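As a rough sketch of the two-part commitment term described in this reply (an illustrative reading in plain Python, assuming a VQ-VAE-style squared-error form; the paper's exact formulation may differ):

```python
def commitment_loss(d_tokens, z_tokens, t):
    """d_tokens[i]: document representation at step i; z_tokens[i]: its (frozen,
    stop-gradient) codebook entry. The term splits into a part over previously
    assigned tokens (< t) and a part for the current training step t."""
    sq = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    prev = sum(sq(d_tokens[i], z_tokens[i]) for i in range(t))  # anti-forgetting
    curr = sq(d_tokens[t], z_tokens[t])                         # convergence
    return prev + curr
```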
We hope our response can address your concerns.
[1] Thakur, Nandan et al. BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models.
---
Rebuttal Comment 1.1:
Comment: As the authors say that they are willing to include comparisons to Rajput et al. [1], I've raised my score. I would also recommend adding the qualitative analysis of docids.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback and for considering a higher score. We promise to include comparisons to Rajput et al. [1] in our final paper.
Best regards. | Summary: GENRET, the proposed model learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. Authors develop a progressive training scheme to capture the autoregressive nature of docids and diverse clustering techniques to stabilize the training process.
Strengths: - Very strong results on NQ, MS MACRO
- Beats dense retrieval.
- Nice ablation experiments.
Weaknesses: - Very handwavy in lines 173-174. Explanation on how argmax is bypassed is unintuitive
- Doc id reassignment looks to depend upon the documents in the batch. Was any specific batching done?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - GenRet w/o learning part in lines 297 unclear.
- The paper is very dense in math. Authors may consider writing out some more textual explanations of what is happening.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations were properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We appreciate your feedback and will address each of your points in turn.
**Line 173-174**
- Thanks for your comments; we will add more explanation and clarification to this part in our final paper. Specifically, we employ straight-through gradient estimation following *Oord et al. 2017 [37]*, which copies the gradient at $z_T$ directly to $d_T$, bypassing the non-differentiable argmax. We will add more explanations to this section and revise the presentation as you suggested.
**Batching**
- Thanks for your insightful question! We pre-gather documents that share the same docid prefix into a batch. The reassignment strategy is then applied within the batch, where we aim to assign documents IDs that are as diverse as possible. We will clarify this matter further in the final paper.
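One possible reading of this batch-level re-assignment, as an illustrative greedy sketch (the authors' exact procedure may differ; function name and scoring shape are ours): each candidate ID goes to its most confident document, and losing documents fall back to their next-best free ID, i.e., towards less popular IDs.

```python
def reassign_docids(scores):
    """scores[i][j]: confidence of document i for candidate id j.
    Greedily gives each id to its most confident document; documents that lose
    a collision are pushed to their best remaining free id. Assumes at least
    as many candidate ids as documents in the batch."""
    n_docs, n_ids = len(scores), len(scores[0])
    triples = sorted(
        ((scores[i][j], i, j) for i in range(n_docs) for j in range(n_ids)),
        reverse=True,
    )
    assigned, used = {}, set()
    for s, i, j in triples:
        if i not in assigned and j not in used:
            assigned[i] = j
            used.add(j)
    return [assigned[i] for i in range(n_docs)]

# Both documents prefer id 0; doc 0 keeps it, doc 1 is pushed to the free id 1.
result = reassign_docids([[0.9, 0.1], [0.8, 0.2]])
```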
**W/o learning**
- This variant directly uses the final docids assigned by the proposed system and trains a separate DSI model following the practice of DSI-QG. This variant demonstrates that the generative retrieval model jointly trained with the auto-encoding objective can represent documents more sensibly based on semantics, which could enhance performance on less-optimized documents. In contrast, the parameters obtained via the cross-entropy loss on the docid generation task are less effective at conveying the semantic information of documents. We will add more explanations of this variant in our final paper.
**Presentation suggestion**
- Thanks for your valuable comment. We will add more explanations in the final paper to increase readability.
[37] Oord, Aäron van den et al. Neural Discrete Representation Learning.
---
Rebuttal Comment 1.1:
Comment: - Seen
No Score change
---
Reply to Comment 1.1.1:
Comment: Thanks for your time and valuable comment.
Best regards. | Summary: This paper introduces GenRet, an auto-regressive retriever that focuses on finding the right clusters (or document ID or document tokenization) approach. Compared to previous generative retriever approaches, GenRet uses three different losses, progressive training, and clustering techniques. Overall, the paper is quite interesting and has the potential to improve the performance of information retrieval systems.
Strengths: Interesting and important problem to work on. The document representation/tokenization for generative retrievers is one of the key problems here. I am glad that there is solid research here.
Weaknesses: Prior generative retrievers use a standard encoder-decoder with a pre-computed document ID. However, GenRet requires several modifications to work properly. While this is not necessarily a problem, I believe that some explanation and ablation studies are needed. For example, the retrieval loss makes GenRet a generalization of dual encoders. If the code-book size is the same as the number of documents in the candidate pool, and the length of the doc ID is 1, then GenRet will reduce to something very close to dual encoders. Therefore, I would like to see more analysis on how to decompose the doc ID and how much it affects generation. This type of study cannot be done in prior approaches because the doc IDs are fixed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you provide more insights in what way the generative doc id is better than prior pre-computed doc id? Did you find the doc id semantically meaningful?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The BEIR results are still behind non-generative retrievers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and insightful comment. We would like to address your questions in turn.
**Compare to dual-encoder**
- Thanks for your insightful comment. We further compared models with different values of M and K; the results are in the following table. We find that the performance of the different configurations is close on NQ320K. When the length of the docid is 1, the model is similar to a dual encoder, with the difference that the document embedding is not obtained by encoding with a document encoder but is instead trained as a codebook parameter.
| Metric | Model 1 | Model 2 | Model 3 |
| --- | --- | --- | --- |
| K | 109,739 | 2,048 | 512 |
| M | 1 | 2 | 3 |
| R@1 | 68.5 | 68.8 | 68.1 |
- Employing a longer docid reduces the number of model parameters but necessitates more training steps. Utilizing a docid of length 1 significantly increases the number of model parameters, rendering the model unsuitable for large-scale datasets. As advised, we will incorporate more comprehensive analyses into our final paper.
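To make this parameter trade-off concrete, a back-of-the-envelope sketch (an illustration; the embedding width of 768 and a single shared codebook are assumptions, not values from the paper): with a docid of length 1, the codebook must hold one entry per document, dwarfing the longer-docid configurations from the table above.

```python
dim = 768  # assumed embedding width (T5-base-style); not reported in the paper
for K, M in [(109_739, 1), (2_048, 2), (512, 3)]:
    codebook_params = K * dim  # a single shared codebook of K entries is assumed
    print(f"M={M}, K={K:>7}: {codebook_params:>12,} codebook parameters")
```

Under these assumptions, the M=1 configuration needs roughly 84M codebook parameters versus about 0.4M for M=3.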
**Qualitative analysis**
- Thanks for your suggestions. In our newly uploaded PDF, we add the following qualitative analyses:
- Figure 1 lists document content and the corresponding docids generated by GenRet on the NQ320K dataset, and shows that documents with more similar docids have more relevant content. For example, we find that documents with docids starting with 338-173 are related to Email, while documents with docids starting with 338 are related to information exchange methods. Therefore, we find the generated docids semantically meaningful.
- Figure 2 visualizes the codebook embeddings and document embeddings. We see that the codebook embeddings do scatter uniformly in the document representation space.
- Figure 3 illustrates the word cloud of documents grouped by GenRet-produced docid prefixes. We can see that different positions of the docid represent different levels of information, and the semantics within each cluster are related. | Rebuttal 1:
Rebuttal: To all.
We appreciate all the reviewers for the constructive comments. We have included four figures in our newly updated PDF:
- Figure 1 presents a case study of document content matched with the relevant docid generated by GenRet. It suggests that documents with similar docids share closely related content. For instance, documents beginning with the docid 338-173 pertain to Email, while those starting with 338 concern various information exchange methods.
- Figure 2 shows a t-SNE visualization of both the codebook embeddings and document embeddings on NQ320K. The codebook embeddings appear to distribute uniformly within the document representation space, producing meaningful clusters when documents are categorized by docids.
- Figure 3 shows a word cloud composed of documents grouped by GenRet-produced docid prefixes. It is evident that documents in the same group are semantically related.
- Figure 4 shows the results of an ablation experiment on the commitment loss. The results suggest that the commitment loss is beneficial for preventing the model from forgetting previous tokens and aids convergence.
We will add these results to our final paper.
Best regards.
Pdf: /pdf/2d6e9cfcfc5f7f02b7564f807f875279fd531028.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper works on an emerging research direction, generative retrieval, where retrieval is cast as generating document ids. The paper proposes to learn a seq2seq model that generates docids from the document. The challenge lies in how to propagate the retrieval loss (from another seq2seq model: query -> relevant docids) to the docid generation model. The paper proposes several techniques, including a commitment loss, iterative optimization, and document reconstruction.
Experiments were done on NQ, MS MARCO and BEIR. Results show superior performance of the proposed method compared to prior generative retrieval approaches as well as standard dense retriever baselines.
Strengths: - It is not straightforward to learn a latent docid from query-document relevance labels. The authors propose several techniques to enable this.
- Authors adopt diverse baselines
- The paper is clearly written
Weaknesses: My main concern is around the MS MARCO results, which seem very different from other published results. E.g., in the ANCE paper, ANCE's MRR is 0.33 [1]; docT5query's MRR is 27.2 [2]. The MRR numbers reported in this paper are ~50-60, and docT5query is better than ANCE according to this paper, which is not true based on prior research. This makes me question the soundness of this set of experiments.
For BEIR experiments, it is unclear how the six datasets are selected from all BEIR tasks.
[1] Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. Xiong et al.
[2] https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Please explain why the MS MARCO results are off from results reported by prior work, and why dense retrieval (ANCE) is worse than docT5query on this set up.
- How was the six BEIR dataset picked?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Would be nice to discuss how this method scales up to larger corpus, and whether / how it supports index updating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and insightful comment. We would like to address your questions in turn.
**Two reasons for different numbers for ANCE/docT5query**
- First, please note that we are focusing on the document retrieval task in our paper, whereas the results you referenced are from the passage retrieval task.
- Second, our setting is consistent with existing generative retrieval studies [1,2], where we retain the top-1 document for each query within our document collection. In contrast, the papers you cited utilize the MS MARCO leaderboard passage collection, which maintains the top-10 Bing Search passages for each query [3].
**Higher results of docT5query**
- We attribute the differences in results to the different retrieval tasks and to the size and construction methods of the document/passage collections. To provide more implementation details: for ANCE we use the [ance-firstp](https://huggingface.co/sentence-transformers/msmarco-roberta-base-ance-firstp) checkpoint; for docT5query we use [doct5query](https://huggingface.co/castorini/doc2query-t5-base-msmarco).
**BEIR data selection**
- Our selection of the six datasets from BEIR is based on corpus size. In particular, the document count of each selected dataset is around or below 320K, intending to test the method's effectiveness on moderate-sized corpora following previous studies [1,2].
**Scaling corpus and updating index**
- Thanks for the questions! As we have discussed in the conclusion, scaling the proposed model to larger corpora typically requires a longer docid length, more model capacity (parameters), and more computational resources. In addition, expanding to a larger corpus may confront several known challenges [4], such as knowledge forgetting and training efficiency, which need to be addressed in our future research.
- We anticipate that the proposed semantic docid generation method will indeed be conducive to large corpora. This is because a more meaningful docid results in a more efficient compression of the document's semantics, thereby reducing the burden on the model of memorizing the sequences of these IDs. In comparison, when the docid is arbitrary, e.g., a naive string, the model needs to memorize these ID sequences without the aid of meaningful associations, leading to additional overhead. We observe relevant evidence in the rate of training loss decline; e.g., the loss declines faster when using a meaningful docid.
- As for updating the index, we presume that expanding the docid space could provide a feasible solution, e.g., adding new docid bits for new documents to differentiate them from existing ones and continuing to train the model via progressive training. A mixture-of-experts (MoE) approach might also be feasible, e.g., indexing documents in separate experts and merging the results of multiple experts through aggregation techniques. We would like to discuss this further and explore it more in our final version.
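To make the hierarchical semantic docid idea concrete, here is our own minimal, hypothetical numpy sketch in the style of [1] (not the paper's implementation; the clustering routine, branching factor `k`, and leaf size `max_leaf` are illustrative assumptions): documents are recursively clustered, and each docid is the tuple of cluster indices along the path from root to leaf, so similar documents share prefixes and new documents can be appended by extending paths.

```python
import numpy as np

def semantic_docids(embs, k=2, max_leaf=2, prefix=(), rng=None):
    """Assign hierarchical semantic docids by recursively clustering document
    embeddings: a docid is the tuple of cluster indices on the path from the
    root to a leaf, so semantically similar documents share docid prefixes."""
    rng = np.random.default_rng(0) if rng is None else rng
    ids = list(embs.keys())
    n = len(ids)
    if n <= max_leaf:  # leaf: number the remaining documents directly
        return {doc: prefix + (j,) for j, doc in enumerate(ids)}
    x = np.stack([embs[doc] for doc in ids]).astype(float)
    centers = x[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(10):  # a few k-means steps suffice for a sketch
        assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = x[assign == c].mean(axis=0)
    if (assign == assign[0]).all():  # degenerate split: enumerate directly
        return {doc: prefix + (j,) for j, doc in enumerate(ids)}
    out = {}
    for c in range(k):
        sub = {ids[i]: x[i] for i in range(n) if assign[i] == c}
        if sub:
            out.update(semantic_docids(sub, k, max_leaf, prefix + (c,), rng))
    return out
```

Under this sketch, adding a new document amounts to routing it down the existing tree and, when a leaf overflows, extending that leaf's docids with new trailing bits, which matches the progressive-training idea above.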
We hope our response can address your concerns.
[1] Transformer Memory as a Differentiable Search Index.
[2] Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer
[3] MS MARCO: A Human Generated MAchine Reading COmprehension Dataset.
[4] How Does Generative Retrieval Scale to Millions of Passages?
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response! It addressed many of my concerns, and I am willing to raise my rating. Please explain in the paper why you focus on document retrieval instead of passage retrieval.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback and for considering a higher score.
A document in our context typically corresponds to a webpage, while a passage is a piece of text within a document. Our study focuses on building docids for documents. Using document retrieval tasks allows us to make fair comparisons with existing methods that utilize document URLs, titles, etc., and it also aligns with the settings of NQ320K and BEIR. Passage retrieval, as a different task, could lead to different evaluation results.
Thanks for your comments and we will incorporate an explanation in our final paper. We are also open to exploring other IR tasks, e.g., passage retrieval, in our future work.
Best Regards. | null | null | null | null | null | null |
Simple and Asymmetric Graph Contrastive Learning without Augmentations | Accept (poster) | Summary: This paper proposes an asymmetric contrastive learning framework for the homophilic and heterophilic graphs, which does not rely on graph augmentations and homophily assumptions. The theoretical analysis and empirical results further support the effectiveness of the proposed method.
Strengths: 1. This paper is easy to understand and the motivation is clear and reasonable.
2. The proposed method is simple yet effective. The authors further demonstrate the effectiveness of the proposed method from the information theory and downstream tasks perspectives. Such results give a theoretical understanding of this work.
3. Extensive experimental results including comparison experiments, ablation study, visualization, and case study empirically verify the effectiveness of the proposed method. Moreover, the proposed method shows superior performance on both the homophilic and heterophilic graph datasets.
Weaknesses: 1. This paper somewhat over-claims, and some important references are missing. In lines 45-47, the authors claim that this work makes the first attempt to design contrastive learning objectives relying on neither explicit nor implicit homophily assumptions. Actually, several works have already done this. For example,
[1] Wang H, Zhang J, Zhu Q, et al. Augmentation-free graph contrastive learning. arXiv preprint arXiv:2204.04874, 2022.
2. The proposed objective function seems like a variant of the GCL-with-representation-smoothing scheme and the GCL-with-augmented-views scheme. Actually, the difference between them and the proposed method is the definition of the positive pairs.
3. In this work, the authors claim that the proposed method does not rely on the homophily assumption and conduct a theoretical analysis to further support the proposed method. Do these theorems hold without any assumptions?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer gZUY, we thank you for your valuable suggestions and positive feedback. **We are happy to hear that you found our paper to be well-written and strong in both empirical and theoretical aspects.** The following is our point-to-point response to your comments:
**(C1). This paper somewhat over-claims and the important reference [1] is missing.**
**(R1).** We appreciate your insightful comments! **We believe there are some important misunderstandings:**
- We would like to clarify that **we did not miss the reference [1]. We cited this reference as [48] and also empirically compared their method in our experiments. It is essential to note that [1] and [48] refer to the same model and same experimental results from the same authors, with the titles being different.**
- In addition, the model in [1] (i.e., [48]) actually cannot outperform many recent GCL methods on many homophilic datasets (Cora, CiteSeer, PubMed, and Photo), as reported in their paper. Conversely, our GraphACL achieves state-of-the-art results on those homophilic datasets. Thus, our claim that we are the first to attempt answering the question, "What kind of insights and contrastive learning objectives should we look for, in order to ensure good node representations on **both homophilic and heterophilic graphs**", **is indeed valid and not an over-claim**. We appreciate your comments and will ensure that these points are clearly stated in the revision.
**(C2). The proposed objective function seems like a variant of the GCL-with-representation-smoothing scheme and the GCL-with-augmented-views scheme. Actually, the difference between them and the proposed method is the definition of the positive pairs.**
**(R2).** Thank you for your comments. While GraphACL involves positive pairs, our contributions and differences compared to other contrastive schemas go beyond that. Here, we will provide a detailed clarification:
- *Comparisons with GCL methods with representation smoothing:* Different from this scheme which is based on the homophily assumption, GraphACL is asymmetric contrastive learning induced by a simple asymmetric predictor. **We theoretically and empirically show that this simple asymmetric predictor can jointly capture one-hop heterophilic patterns and two-hop monophily, which are important for heterophilic graphs.**
- *Comparisons with GCL methods with augmented views:* This contrastive learning scheme relies on graph augmentations and is built on the idea that augmentation preserves the semantic nature of samples, i.e., augmented samples share the semantic labels of the originals. However, our GraphACL is not based on augmentations but on a simple asymmetric predictor: GraphACL directly analyzes neighborhood distributions in the original graph. **Thus, the GraphACL design is inherently different from GCL methods with augmented views.** Moreover, we theoretically show that GraphACL also implicitly aligns two-hop neighbors and enjoys good downstream performance on both homophilic and heterophilic graphs, which is theoretically and intuitively unclear for GCL with augmented views.
Given the above, our studied problems and insights are fundamentally different from these two schemes. Our theoretical analyses are novel and have not been proposed by these two GCL schemes. We believe that our work offers unique insights, and our technical contributions remain significant, when compared to the current GCL schemes.
**(C3). In this work, the authors claim that the proposed method does not rely on the homophily assumption and conduct a theoretical analysis to further support the method. Do these theorems hold without any assumptions?**
**(R3).** Thank you for your insightful questions! Actually, our Theorems 1, 2, and 4 do not rely on any assumptions. Only Theorem 3 makes some lightweight, reasonable, and widely used assumptions, as mentioned in our paper. Not surprisingly, some assumptions are necessary for generalization analysis in unsupervised node representation learning [2, 3, 4]. Specifically, our Theorem 3 (downstream error on learned representations) assumes that the downstream classifier is a mean classifier. This mean-classifier assumption is very mild and has been utilized by many works [2, 3, 4]. Another lightweight assumption mentioned in our paper is balanced class distributions. This assumption is also mild and widely used in the literature [5]. Theorem 3 can easily be extended to unbalanced settings in the future by considering a label-shift term, as shown in the domain adaptation literature [6]. Moreover, GraphACL consistently outperforms the baselines even when the classes are not well balanced, indicating that GraphACL is robust to violations of the class-balance assumption.
**In light of these responses, we sincerely hope our posted responses have addressed your comments and clarified any misunderstandings. We believe your comments can be easily addressed in the final version and genuinely hope that you could consider increasing your score. If you have any notable points of concern that remain unaddressed, please do share them with us, and we will promptly address them. Thank you for your efforts!**
[1] Augmentation-free graph contrastive learning. arXiv preprint arXiv:2204.04874, 2022
**[48] (the cited number in our submission)** Can Single-Pass Contrastive Learning Work for Both Homophilic and Heterophilic Graph? arXiv preprint arXiv:2211.10890, 2022
[2] A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019
[3] Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning. NeurIPS 2021
[4] On the Surrogate Gap between Contrastive and Supervised Losses. ICML 2022
[5] Debiased Contrastive Learning. NeurIPS 2020
[6] Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift. NeurIPS 2020
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for your efforts, I will maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments!
Comment: Thank you very much for your positive comments! We sincerely appreciate your helpful feedback and are grateful for your approval.
Best wishes,
Authors | Summary: This work first points out that existing GCL can fail to generalize to heterophilic graphs, then develops a new framework called GraphACL based on an encoder capturing one-hop neighbourhood context and two-hop monophily. Experiments validate the effectiveness of the proposed method.
Strengths: The paper is well organized. Each part has a clear goal and supports the authors' claim well.
The proposed method, Graph Asymmetric Contrastive Learning, is not only explained in detail but also justified by solid theoretical analysis. The authors adequately answered how and why it works.
In the empirical study, the choice of baselines has good coverage. The selection of datasets respects the theme of each subsection, and the experiment setup is reasonable. The experiments are well explained, making it easy to reproduce them.
Weaknesses: There are no significant weaknesses in this paper. The application scenario of the proposed method may be a little bit limited, but this is completely acceptable.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does GraphACL have the potential to be applied to graph tasks other than the node classification shown in the paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed their work's limitations and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer cxpZ, **thank you for the great summarization of our contributions on both theoretical and empirical analysis, and we appreciate your very positive and encouraging comments.** Please see our responses below:
**(C1). There are no significant weaknesses in this paper. The application scenario of the proposed method may be a little bit limited, but this is completely acceptable.**
**(R1).** Thanks for your positive comments! **We would like to further clarify that the application scenario of our method is actually wide due to the following two important reasons:**
- (1) Heterophilic graphs are important in various real-world domains, making them worth studying and essential to understand. Many real-world graphs demonstrate heterophilic properties. For example, in online transaction networks, fraudsters tend to connect with customers rather than other fraudsters [1]. In molecular networks, protein structures often consist of different types of amino acids linked together [2, 3]. Recent work [3] also provides a comprehensive review of the many GNNs designed for heterophilic graphs. High-quality datasets covering various heterophilic real-world applications have been made available through recent work [4]. For instance, malicious node detection, an important application of graph machine learning, is known to be heterophilic in many settings [4]. **Thus, studying heterophilic graphs is a significant research problem with benefits for many applications involving them.**
- (2) Learning effective representations for both homophilic and heterophilic graphs is also crucial for various real-world applications. In many scenarios, **collecting labeled data can be expensive and impractical, especially when domain knowledge is required, as in medicine and chemistry [5, 6].** Our model, as a simple yet effective representation learning method, shows great potential in various high-level applications, including molecular property prediction [5,6], molecular graph generation, and drug-drug interaction prediction [7].
**(C2). Does GraphACL have the potential to be applied to graph tasks other than node classification shown in the paper?**
**(R2).** **Yes, actually, we have already conducted experiments on graph classification tasks. The reviewer can find the results in Table 8 in our Appendix D.5.** From the table, we can observe that our GraphACL performs well on the graph classification task and achieves better (or competitive) performance compared to baselines, which further strengthens our contribution.
**We appreciate the efforts from the reviewer and also sincerely hope our posted responses can address your questions. We also believe your comments can also be easily addressed in the revision. As noticed by the reviewer, our work is simple yet effective, and provides extensive theoretical and experimental analysis. In light of these responses, we sincerely hope you could consider increasing your score. Please feel free to let us know if there are any remaining questions. Thank you for your efforts!**
[1] NetProbe: A Fast and Scalable System for Fraud Detection in Online Auction Networks. WWW 2007
[2] Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs. NeurIPS 2020
[3] Graph Neural Networks for Graphs with Heterophily: A Survey. ArXiv preprint
[4] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. NeurIPS 2021
[5] Self-supervised Graph Transformer on Large-scale Molecular Data. NeurIPS 2020
[6] Motif-based Graph Self-supervised Learning for Molecular Property Prediction. NeurIPS 2021
[7] A Systematic Survey of Chemical Pre-trained Models. In Arxiv.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, and I appreciate the authors' effort in this. I will maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Thank you for your update!
Comment: Thank you very much for your positive comments! We sincerely appreciate your helpful feedback and are grateful for your approval.
Best wishes,
Authors | Summary: This paper presents GraphACL, which aims to tackle limitations of other graph contrastive learning works which have implicit or explicit homophily assumptions, and suffer in learning effective representations for heterophilic tasks. The approach is designed to leverage the principle of monophily, and the authors evaluate their work on several homophilic and heterophilic datasets, achieving at-par performance on the former and improved performance on the latter, on downstream node classification tasks.
Strengths: - The approach presented here is intuitive once the monophilic principle is understood, and the implementation seems straightforward.
- Results suggest the approach is strongly effective (e.g. Table 1), and the improvement on heterophilic datasets without any material performance loss / delta on homophilic datasets is compelling.
- The paper is well-written, and the figures help clarify the principles motivating the design.
Weaknesses: - Lines 69-80 could greatly benefit from a toy example with nodes illustrated; it is confusing to understand the principles of monophily and the design of GraphACL without such an example early in the paper.
- "we are faced with the challenge of simultaneously capturing the one-hop neighborhood context and monophily in the contrastive objective" (line 74) -- it's not clear why this is a particular challenge; is there something about capturing these two together that is difficult to reconcile in an objective? If so, I didn't understand this from the text.
- Lines 40 and 97 rely quite a bit on the (in paper) reference [22] to reference issues with contrasting view-based GCL approaches on heterophilic graphs; it would be helpful to include some more formal claims or concrete examples in this work to make it more self-contained, since the work strongly seems to draw motivation from [22]'s findings. Without this, it is difficult to fully appreciate the supposed homophilic biases in those GCL approaches.
- The section in Related Work from line 137 onwards seems to categorize BGRL as a contrastive method. BGRL is not conventionally contrastive because it does not utilize negative examples. There are other such non-contrastive methods (several mentioned in [1]) used in graph SSL which should probably be mentioned in this paper's related work. In fact, I believe the authors' intent to pursue a graph SSL approach that uses induced asymmetry instead of augmentations is actually the objective of many "non-contrastive" methods. I would encourage them to look into the literature and evaluate whether they indeed think their method is contrastive or not, given its lack of need for negative samples.
- The words "context" and "preference" are used quite often throughout the paper (I think sometimes to mean the same thing). It would be helpful to formalize these words and just use the same word frequently, as it can be confusing to disambiguate which representation is which.
- Can the authors clarify if/why the loss in Eqn 4 is required? Methods like SimSiam, BYOL, BGRL etc. which use this kind of predictor structure and stop-gradient show they avoid collapse through the asymmetric predictor and the EMA procedure on target weight update. Does GraphACL not naturally avoid collapse for the same reasons? Does it require this uniformity regularization?
- Figure 3 could go earlier to help introduce monophily and the intuition behind what existing GCL methods prioritize and what this method prioritizes.
- There are a few very strong self-supervised GNN approaches missing from baselines, e.g. [2] or the approach mentioned in [1].
- There are opportunities to better evaluate the quality of self-supervised representations using link-level tasks (e.g. link prediction), or several of the tasks mentioned in [2], which may offer improved understanding beyond node classification and node clustering (referenced in Sec 6.1)
- Evaluation on some larger datasets with greater data diversity and lower homogeneity would have been more compelling, e.g. ogbn-arxiv or ogbn-products. Flickr might also be a useful moderate-size heterophilic dataset to try out.
- The node clustering results should be included in the main paper in my opinion -- the evaluation on just 1 downstream SSL task is fairly "light" and could use greater support to evidence that GraphACL is a generally strong SSL method, rather than "SSL for node classification" method. This is one of the biggest gripes in this paper -- the results demonstrating that GraphACL is a very effective SSL method are sparse, with only 1 demonstrated task in the main (evaluated) paper in terms of node classification. Other graph SSL work typically considers several downstream tasks and evaluates across them.
[1] Link Prediction with Non-Contrastive Learning (Shiao et al, ICLR'23)
[2] Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization (Ju et al, ICLR'23)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see "Weaknesses" for general comments / concerns which would be helpful during rebuttal.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: These are addressed in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer ycWA, **we appreciate your perception that our model is implementation-wise simple, strongly effective, and intuitive. We thank your insightful comments and give our responses below:**
**(C1). Lines 69-80 could greatly benefit from a toy example**
**(R1).** Thanks for your great suggestion! As mentioned in your later comments, we will move the toy example in Figure 3 to Lines 69-80.
**(C2). It's not clear why this is a particular challenge (line 74)**
**(R2).** Thanks for your insightful question! We apologize for the confusion and will state the following clearly in the revision. Reconciling the goals of capturing both the one-hop neighborhood (heterophilic) context and monophily (two-hop similarity) in one contrastive objective is challenging. For instance, although homophily-driven objectives can indirectly induce monophily by directly encouraging one-hop similarity, they neglect the heterophilic context, where one-hop connected nodes are not similar. Thus, GraphACL focuses on the challenge of simultaneously and theoretically capturing the one-hop neighborhood (heterophilic) context and monophily in one simple contrastive objective.
**(C3). …issues with contrasting view-based GCL approaches on heterophilic graphs... include some concrete examples to make it more self-contained…**
**(R3).** Thanks for your suggestion! Please see our detailed responses in **Global Response 2**.
**(C4). ...whether they indeed think BGRL and [1] are contrastive or not, given the lack of need for negative samples...**
**(R4).** Thanks for your insightful comments! BGRL and [1] should definitely be viewed as non-contrastive due to the lack of negative samples. Our intention in Lines 142-143 was to say that BGRL is a graph self-supervised learning method based on augmentations; we will state this clearly in the main text. Moreover, as mentioned in our paper (Section 4.1), even though they also use an asymmetric projection head, our motivation, objective, and theoretical insights differ significantly from BGRL, which is based on the invariant-augmentation assumption.
**(C5). It would be helpful to formalize "context" and "preference" and just use the same word frequently**
**(R5)**. Thanks for your suggestion! Please see our detailed responses in **Global Response 1**. We agree and will utilize the same "preference" word.
**(C6). Does GraphACL not naturally avoid collapse compared to SimSiam, BYOL, and BGRL for the same reasons? Does it require this uniformity regularization?**
**(R6).** Thanks for your very insightful questions! Uniformity is important and required for GraphACL, especially on homophilic graphs. As shown in our ablation studies, GraphACL w/o the uniformity loss suffers a performance drop rather than a completely collapsed solution, and it still serves as a strong baseline. Moreover, the ablation (w/o both the asymmetric predictor and the uniformity loss) also serves as a valid baseline and does not lead to a completely collapsed solution. This is contrary to SimSiam, BYOL, and BGRL, which focus on predicting a sample's representation in one augmented view from the same sample in another view. One possible reason is that GraphACL is not based on augmentation but instead tries to predict the representations of neighbors via an asymmetric predictor, so the observed graph structure serves as a strong prior and inductive bias. Since neighbors in a graph, even a homophilic one, do not always share the same class or similar features, GraphACL is unlikely to completely collapse.
We leave the theoretical and deeper understanding of this phenomenon for future work, as it may extend beyond the scope of a single paper. Nonetheless, as demonstrated in our experiments, adding a uniformity loss remains crucial for enhancing representation diversity and increasing inter-class variation, ultimately leading to good generalization. We will explicitly state this in the main text.
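As an illustration of the two terms discussed above, here is our own minimal numpy sketch (an illustrative paraphrase, not the exact GraphACL objective; the single-matrix predictor, cosine alignment form, and temperature are assumptions): an asymmetric predictor maps each node's representation toward its one-hop neighbors' representations, and a separate uniformity term rewards spread on the hypersphere.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Project rows onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def neighbor_alignment_loss(z, w, adj):
    """Alignment term: an asymmetric predictor (here a single matrix `w`)
    maps each node representation to a prediction of its one-hop neighbors'
    representations; we maximize cosine similarity along edges."""
    p = l2_normalize(z @ w)   # predictor side (would receive gradients)
    t = l2_normalize(z)       # target side (stop-gradient in practice)
    sim = p @ t.T             # (n, n) pairwise cosine similarities
    return -(adj * sim).sum() / adj.sum()

def uniformity_loss(z, temperature=2.0):
    """Uniformity term: penalizes representations that clump together by
    rewarding large pairwise distances on the hypersphere."""
    t = l2_normalize(z)
    sq = ((t[:, None, :] - t[None, :, :]) ** 2).sum(-1)
    mask = ~np.eye(len(z), dtype=bool)
    return np.log(np.exp(-temperature * sq[mask]).mean())
```

In this sketch, fully collapsed representations achieve zero uniformity loss (its maximum), while spread-out representations drive it negative, which is the sense in which the uniformity term discourages (but, as the ablations show, is not the only thing preventing) collapse.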
**(C7). Fig. 3 could go earlier to help introduce monophily**
**(R7).** Thank you! We agree and will move Fig. 3 earlier, into the introduction.
**(C8). A few very strong self-supervised GNN approaches [1,2] missing from baselines**
**(R8).** Thanks for raising the great works [1,2]! Following your suggestions, we compare GraphACL with [1,2]. Please see details and results in **Global Response 3.**
**(C9). Beyond node classification and node clustering and using link-level tasks or the tasks in [2]**
**(R9)**. Thanks for your suggestions! **We want to kindly remind the reviewer that we have already evaluated not only node classification (Table 1) and node clustering (Table 5), but also graph classification (Table 8).** Nevertheless, we agree that including more tasks would be better. Thus, we include two more tasks: link prediction and partition prediction. For simplicity, we follow the same settings as [2], as it provides splits for heterophilous graphs. The results, given in Table 2 of the attached PDF in the Global Response, show that GraphACL still achieves better (or competitive) performance compared to elaborate methods.
**(C10). Evaluation on some larger datasets, e.g. ogbn-arxiv or ogbn-products**
**(R10).** Thanks for your great suggestions! **We believe there are misunderstandings. We have tested ogbn-arxiv, denoted as Arxiv, in Table 1 in our paper.** We also want to kindly remind the reviewer that we have tested GraphACL on 15 diverse datasets, as listed in Table 4, including various areas, sizes, and homophily ratios.
**(C11). The node clustering results should be included in the main paper**
**(R11).** Thank you! We will move them from the appendix to the main paper.
**We sincerely hope that our responses can address your comments and hope you will consider increasing your score. Thank you so much for your efforts!**
[1] Link Prediction with Non-Contrastive Learning. ICLR 2023
[2] Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization. ICLR 2023
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thanks for your responses and comprehensive rebuttal. In particular, I appreciate the experiments and references on low-frequency claims about existing GCL methods, which helps strengthen this work's motivation considerably (I'd make sure you add this in the new version, as it helps this work a lot).
R6: Thanks for the clarification. I missed this experiment and indeed it helps understand that the loss helps but is not required to avoid collapse. You may want to reference [1] from my response -- this work also introduces some auxiliary augmentations (instead of a uniformity loss term), but for the same effect of helping methods which have such asymmetric predictor structure avoid collapse. It may help the discussion around why the uniformity loss works in the paper and how it helps the reference GraphACL method achieve better performance.
R9: Thanks for adding this. It strengthens the work and helps support the argument of generality of representations.
In light of the above updates, I will raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks for your updates and updating the score!
Comment: Thank you very much for reviewing our paper and reading our rebuttal. We sincerely appreciate your recognition of our clarifications and the increase in your score! In the new version, we will include the experiments and references related to the low-frequency claims about existing GCL methods, as well as cite the work [1] from your response to further illustrate the uniformity loss and its relation to our work.
Best wishes,
Authors | Summary: This paper proposes a simple and effective contrastive learning framework named GraphACL for both homophilic and heterophilic graphs. In particular, GraphACL can capture both one-hop local neighborhood context and two-hop monophily similarities in a single objective. The authors theoretically analyze the learning process of GraphACL and show that it explicitly maximizes the mutual information between representations and one-hop neighborhood patterns. The experiments and theoretical analysis show the effectiveness of the proposed approach.
Strengths: -This paper is well written and well structured. The authors first analyze the common limitations of existing GCL frameworks, then propose and validate their ideas on many datasets, and finally address these limitations through the proposed methods and theorems, making the paper easy to understand.
-The experiments and theories are sufficient to demonstrate the effectiveness.
-This paper is an early investigation of GCL on heterophilic graphs, which provides a new direction for the subsequent development of GCL.
Weaknesses: -Graph contrastive learning with graph augmentations, mentioned in paragraph 3 of the introduction, is well known. For the first contrastive scheme mentioned in paragraph 2, the authors should give more details.
-The authors should present and explain more concretely the high-frequency and low-frequency information mentioned in the introduction, and give a specific instance in which high-frequency information is more beneficial for heterophilic graphs.
-The nodes mentioned in the description of the monophily property of graphs in lines 190 to 192 could be mapped to subgraph (d) in Figure 3, which would make the paper easier to read.
-The experimental setup is inconsistent. In line 296 the authors describe experiments over five random seeds, while in Section 6.2 this changes to ten runs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.The 184 lines of "node identity and node preference in real-world graphs" is not very understandable. Is this a metaphor of your own?
2. In general GCL methods (such as GraphCL), the representations are usually mapped into the contrastive space via a projection head before the contrastive loss is calculated. Is this operation similar to the one you mention in this paper?
Some typos:
-In Eq.9, $\mathcal{L}_{A C L}(q)$ -> $ \mathcal{L}_{A}(q) $
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 1K5Q, **we appreciate your great summarization and recognition of our contributions and your positive comments on our work: "well written," "early investigation," and "sufficient experiments and theories."** Please find our responses to your comments below:
**(C1). For the first contrastive scheme mentioned in paragraph 2, the authors should give more details.**
**(R1).** Thank you for your valuable suggestion! The first contrastive scheme differs from augmentation-based methods. Instead, it leverages the graph's rich structure to generate contrastive signals [1,2,3,4]. This scheme operates on the homophily assumption, aiming to ensure that connected nodes exhibit similar representations in the latent space. Typically, this scheme utilizes contrastive losses similar to shallow embedding algorithms, as depicted in Table 1 of the related work [4]. We will give and emphasize these details in the revision.
**(C2). The authors should make a more specific presentation and explanation of the high-frequency and low-frequency information mentioned in the introduction and give a specific instance that high-frequency information is more beneficial for heterophilic graphs.**
**(R2).** We greatly appreciate the reviewer's insightful suggestion, which we believe will improve our work. **Please see our detailed responses in Global Response 2**, where we provide a more specific presentation and explanation of the high-frequency and low-frequency information.
**(C3). Some nodes mentioned in lines 190 to 192 can be corresponded to the subgraph d in Figure 3.**
**(R3).** Thanks for your suggestion! We will map the mentioned nodes to subgraph (d) in Figure 3.
**(C4). In the 296 line the authors introduce experiments on five random seeds, while in section 6.2 it is changed to ten runs.**
**(R4).** **We apologize for the confusion; we believe there are some misunderstandings.** The reason we conducted ten runs is that the standard and public split for heterophilic graphs follows a fixed 10-fold split. For homophilic graphs, on the other hand, the standard and public split is a fixed one-fold split, which is why we used five random seeds in those experiments. However, to ensure consistency and robustness, we also ran ten random seeds for the homophilic graphs using our published code. The results in the table below with ten random seeds show minimal to no differences compared to the five-random-seed results. We apologize for the confusion and will state this more clearly in the revision.
| Dataset | Cora | Citeseer | Pubmed | Computer | Photo| Arxiv-year |
| --- | --- | --- | --- | --- | --- | --- |
| 10 random seeds | 84.20±0.27 | 73.62±0.20 | 82.01±0.13 | 89.83±0.22 | 93.31±0.18 |71.75±0.31 |
| 5 random seeds | 84.20±0.31 | 73.63±0.22 | 82.02±0.15 | 89.80±0.25 | 93.31±0.19 |71.72±0.26 |
**(C5). The "node identity and node preference in real-world graphs" is not very understandable. Is this a metaphor of your own?**
**(R5).** Thanks for your insightful questions! Regarding these concepts, please see our clarifications and responses in **Global Response 1.**
**(C6). Is the projection head in GraphCL similar to the one you mentioned in this paper.**
**(R6).** Thanks for your insightful question! The projection head in GraphCL is **inherently different** from our work:
- **Motivation and Method**. The projection head in GraphCL is symmetric, relies on graph augmentations, and is built on the idea that augmentations preserve the semantic nature of samples, i.e., augmented nodes have semantic labels consistent with the original nodes. In contrast, our predictor is asymmetric and does not rely on augmentations. Instead, GraphACL directly predicts the neighborhood distribution in the original graph via an asymmetric predictor. Thus, our motivation and model are inherently different from GraphCL.
- **Theories**. As also noticed by the reviewer, we provide a theoretical analysis showing the connection between our asymmetric predictor and one-hop neighbor context and two-hop monophily, and we prove that representations learned by GraphACL provably enjoy good downstream performance. However, the projection head in GraphCL lacks theoretical and intuitive justification of its ability to capture structural information in graphs, especially for heterophilic graphs. This significant difference further sets our work apart from GraphCL.
- **Experiments**. We conducted extensive experiments and analysis on both homophilic and heterophilic graphs. While GraphCL only works on homophilic graphs, contrastive learning on heterophilic graphs has received little attention to date. Thus, we believe that demonstrating the effectiveness of GraphACL on heterophilic graphs further distinguishes it from GraphCL.
**(C7). Some typos $\mathcal{L}_{ACL} (q) $ in Eq.9.**
**(R7).** Thanks for pointing out the typo! We have corrected it.
**We sincerely hope that our responses can address your comments. Moreover, as noticed by the reviewer, our work presents some interesting findings, a simple yet effective framework, and some theoretical contributions. The reviewer's suggestions can be easily and effectively addressed, and we genuinely hope that the reviewer can consider increasing the score. Please feel free to let us know if there are any remaining comments. Thank you for your efforts!**
[1] Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016
[2] Contrastive laplacian eigenmaps. NeurIPS 2021
[3] Localized contrastive learning on graphs. arXiv preprint arXiv:2212.04604, 2022
[4] Representation Learning on Graphs: Methods and Applications. arXiv preprint arXiv:1709.05584
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. I am satisfied with the responses that address my concerns. I raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments!
Comment: Thank you so much for your efforts! We sincerely appreciate the reviewer for checking our responses and the increase in your score!
Best wishes,
Authors | Rebuttal 1:
Rebuttal: **We sincerely thank all the reviewers for their insightful comments and helpful suggestions. Overall, the reviewers praised our work's originality, soundness, and clarity. We deeply appreciate the numerous positive comments on our work, such as describing it as "simple, effective, and intuitive," "well-written," and the "solid theoretical and empirical analysis".**
We provide this **Global Response** to address similar comments or misunderstandings from reviewers. Additionally, we have **attached a one-page PDF** containing additional experimental results suggested by some reviewers.
**(Global C1) The concept of "node identity" and "node preference" (context) in real-world graphs**
**(Global R1)** Thanks for your comments! We would like to clarify this and will add the following responses in the revision.
The concepts of "node identity" and "node preference" in real-world graphs were mainly introduced as social concepts by [1,2]. Specifically, monophily in graphs implies the existence of nodes with strong preferences for specific nodes that differ from their own identity. For example, users of a certain gender may not always prefer neighbors of the same gender, leading to a preference that deviates from their identity.
In our work, **we incorporate these social concepts into a practical contrastive learning framework.** We achieve this by converting the central node's identity representation to its preference (context) representation using a simple asymmetric predictor. Subsequently, the preference (context) representation of the central node is used to predict identity representations of its neighbors. **Our theoretical and empirical analysis demonstrate GraphACL effectively captures these social concepts, leading to a good performance on both homophilic and heterophilic graphs.**
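To make the asymmetric-predictor idea above concrete, here is a minimal, illustrative NumPy sketch (the tanh predictor, dimensions, and all names are our own assumptions, not the paper's actual architecture): the predictor maps a central node's identity representation to a preference (context) representation, which is then scored against a neighbor's identity representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictor(z, W):
    """Asymmetric predictor: maps a node's identity representation
    to its preference (context) representation."""
    return np.tanh(z @ W)

def acl_pair_loss(z_v, z_u, W):
    """Negative cosine similarity between the central node's predicted
    context and a neighbor's identity representation."""
    p = predictor(z_v, W)
    return -float(p @ z_u / (np.linalg.norm(p) * np.linalg.norm(z_u) + 1e-8))

# Toy example: one central node v with two neighbors.
d = 8
W = rng.normal(size=(d, d))          # predictor weights (hypothetical)
z_v = rng.normal(size=d)             # central node's identity representation
neighbors = [rng.normal(size=d) for _ in range(2)]
loss = np.mean([acl_pair_loss(z_v, z_u, W) for z_u in neighbors])
```

Minimizing this pairwise loss over all edges pushes the central node's predicted context toward its neighbors' identities, without requiring the neighbors themselves to be similar to the central node.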
**(Global C2) More explanation of the high-frequency and low-frequency information for augmentation-based GCL approaches on heterophilic graphs**
**(Global R2)** Thanks for your great comments! We want to clarify the following points and will add the discussion below in the final version.
- **(1)** From a spectral perspective of graphs, capturing low-frequency signals involves smoothing features across the graph, ensuring similarity among node representations. This is crucial for homophilic graphs. For instance, studies [3,4,5] have shown that using different signals is an effective approach for dealing with diverse graphs: low-frequency signals for homophilic graphs and high-frequency signals for heterophilic graphs.
- **(2)** GCL with augmentation operates on the principle that augmentation should retain crucial information, encouraging the model to learn invariant representations by disregarding perturbations of unimportant information. Thus, preserving important frequency signals in graph augmentation becomes essential for GCL. Previous work [6] (i.e., [22] in our paper) has theoretically shown that representations learned by GCL with augmentations essentially capture the invariant low-frequency information. **Consequently, current GCL with augmentation implicitly relies on the homophily assumption and cannot perform well on heterophilic graphs with diverse neighbors [4]. This has also been supported empirically in previous research [4], as well as in our own experimental results on both real-world and synthetic datasets.**
- **(3) To further support the above motivation and make our work more self-contained, we evaluate the eigenvalue (spectrum) change of the augmented graph views under different methods: GCA [9], BGRL [10], and AD-GCL [11].** The results are shown in Figure 1 in our attached one-page PDF. They show that the variations in the low-frequency components are smaller than those in the high-frequency components, for both homophilic and heterophilic graphs. This observation confirms that mainstream GCL methods with augmentations often maintain the invariance of low-frequency signals while perturbing high-frequency signals. Moreover, these methods exhibit lower effectiveness on heterophilic graphs compared to GraphACL, particularly when the perturbation of high-frequency signals is more significant. This highlights the importance of preserving high-frequency signals in augmentation-based GCL methods for heterophilic graphs.
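As a self-contained illustration of the kind of spectrum-change evaluation described in (3), the following NumPy sketch (the toy 4-cycle graph and the single-edge-drop "augmentation" are our own assumptions, not the actual experiment) computes how each eigenvalue of the normalized graph Laplacian shifts under an augmentation:

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Toy graph: a 4-cycle, and an "augmented" view with one edge dropped.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_aug = A.copy()
A_aug[0, 1] = A_aug[1, 0] = 0.0

ev = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))
ev_aug = np.sort(np.linalg.eigvalsh(normalized_laplacian(A_aug)))
shift = np.abs(ev - ev_aug)  # per-eigenvalue change under the augmentation
```

Comparing `shift` for the small (low-frequency) versus large (high-frequency) eigenvalues is the kind of measurement used to check which part of the spectrum an augmentation perturbs most.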
**(Global C3) There are a few approaches missing from baselines, e.g. [12] or [13]**
**(Global R3)** Thanks for raising great works T-BGRL [12] and PARETOGNN [13]! Following your suggestions, we compare GraphACL with [12,13]. We follow the same data splits as our paper and present the results in Table 1 in our attached one-page PDF. We will include this comparison in our final revision. From the results, we can observe that GraphACL performs better than [12, 13], especially on heterophilic graphs. **These additional results, combined with the comparison against 10 baselines on 15 datasets using our source code in our submission, can strongly validate the effectiveness of GraphACL.**
[1] Monophily in Social Networks Introduces Similarity among Friends-of-friends. Nature Human Behaviour. 2018.
[2] Decoupled Smoothing on Graphs. WWW 2019.
[3] Beyond Low-frequency Information in Graph Convolutional Networks. AAAI 2021.
[4] Adaptive Universal Generalized PageRank Graph Neural Network. ICLR 2021.
[5] Revisiting Heterophily For Graph Neural Networks. NeurIPS 2022.
[6] Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum. NeurIPS 2022.
[7] Decoupled Self-supervised Learning for Graphs. NeurIPS 2022.
[8] Deep Graph Infomax. ICLR 2019.
[9] Graph Contrastive Learning with Adaptive Augmentation. WWW 2021.
[10] Large-Scale Representation Learning on Graphs via Bootstrapping. ICLR 2022.
[11] Adversarial Graph Augmentation to Improve Graph Contrastive Learning. NeurIPS 2021.
[12] Link Prediction with Non-Contrastive Learning. ICLR 2023.
[13] Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization. ICLR 2023
Pdf: /pdf/a0fb25c997c84d78282e922aa22069f8e9dbb383.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents GraphACL, a contrastive learning method for graph representation learning. It aims to address the limitations of current methods that only consider the homophily property of graphs or rely heavily on graph augmentation methods. The authors propose an asymmetric predictor approach where the one-hop local neighborhood context and the two-hop monophily similarity are captured.
Strengths: - The premise of the need for a new contrastive learning method that does not overly rely on graph augmentation methods or the homophily property of graphs is reasonably argued and valid. Evidence from other studies or more data to support this argument could strengthen it further.
Weaknesses: - The explanation of how GraphACL works and how it captures both the one-hop local neighborhood context and the two-hop monophily similarity might be difficult to comprehend for readers lacking technical background. The use of concrete examples or analogies could strengthen this argument.
- The argument strength could be improved through more detailed explanation of the theoretical analysis of GraphACL, giving more justifications or linking it to established theoretical or empirical studies in the field of contrastive learning.
- The paper's argument that GraphACL is superior to current state-of-the-art graph contrastive learning methods is backed by empirical evidence on both homophilic and heterophilic graphs. However, some recently proposed methods are missing (i.e., [1] and [2])
[1] Spectral Augmentation for Self-Supervised Learning on Graphs
[2] Spectral Feature Augmentation for Graph Contrastive Learning and Beyond
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Overall, the paper presents an interesting and valid argument. However, to enhance its strength, the authors could provide more empirical data, concrete examples, and clearer explanations of the concepts and methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer fMCY, **we appreciate your positive feedback on our paper's soundness, novel insights, and contribution. Please find our detailed responses below:**
**(C1). The explanation of how GraphACL works and how it captures both the one-hop neighborhood context and the two-hop monophily similarity might be difficult to comprehend for readers lacking technical background. The use of concrete examples or analogies could strengthen this argument.**
**(R1).** We thank the reviewer for your good suggestion, which is helpful for the further improvement of our work! We will give a concrete example to help introduce monophily and one-hop local neighborhood context in the introduction part. More specifically, as mentioned by the reviewer ycWA, we will move our Figure 3 to the beginning of the introduction part, which can serve as a good concrete toy example of various design motivations (homophily, one-hop neighborhood context, and two-hop monophily) of GraphACL.
**(C2). The argument strength could be improved through explanation of the theoretical analysis of GraphACL.**
**(R2).** Thank you for your suggestion. We would like to give the following explanation of our theoretical analysis. We will highlight the following discussions in the revision.
- (1) Although there are many theoretical works [3,4,5,6,7] trying to theoretically understand how contrastive learning works, these works mainly focus on contrastive learning in the image IID setting, the theoretical analysis on the non-IID node representation learning (each instance is a node in one large graph and instances are inter-connected resulting in a non-IID nature) is still quite limited. Our theoretical analysis shows that our simple GraphACL can simultaneously capture one-hop neighborhood context and two-hop monophily. We also theoretically and empirically show that the learned representations by GraphACL can achieve better downstream performance and the connection between GraphACL and graph eigenvalues. **Thus, our theoretical analysis is significantly different from those works and is complementary to established theoretical studies in contrastive learning.**
- (2) In addition, many established theoretical or empirical studies on image contrastive learning [3,4,5,6,7], and graph contrastive learning [1,2,8,9,10] are based on augmentations and hope that augmentations can preserve the invariant semantic nature of samples, i.e., the augmented samples have invariant semantic labels with the original ones. **However, our theoretical analysis is not based on augmentations but directly considers analyzing neighborhood distribution. Thus, GraphACL provides new perspectives on graph contrastive learning that are quite distinctive from current theoretical and empirical works.**
**(C3). The paper's argument that GraphACL is a superior tool when compared to current state-of-the-art graph contrastive learning methods is backed by empirical evidence on both homophilic and heterophilic graphs. However, some recent proposed method are missing (i.e., [1] and [2]).**
**(R3).** We thank the reviewer for sharing these two works [1,2]! **We kindly want to remind the reviewer that we have conducted an extensive evaluation of 10 methods on 15 datasets, including many state-of-the-art contrastive learning methods. Thus, we believe our experiments can strongly corroborate the effectiveness and scalability of GraphACL. Nevertheless, we completely agree with the reviewer that including an empirical comparison with SPAN [1] and SFA [2] would be beneficial.** Since the code of SFA [2] is not publicly available, we decided to compare GraphACL with COSTA [11], which also employs feature augmentation like SFA [2] and provides publicly available code. We ran additional experiments on SPAN [1] and COSTA [11]. For all methods and datasets, we used the same public and standard splits as in our paper, and we will include the following results in the revision:
| Dataset | Citeseer | (Ogbn)-Arxiv | Squirrel | Chameleon | Crocodile| Arxiv-year |
| --- | --- | --- | --- | --- | --- | --- |
| COSTA | 72.31±0.27 | 71.00±0.40 | 48.36±0.25 | 61.82±0.24 | 59.70±0.33 | 42.21±0.28 |
| SPAN | 72.01±0.58 | 70.94±0.24 | 49.47±0.39 | 62.55±0.36 | 61.53±0.52 | 43.95±0.41 |
| GraphACL | **73.63±0.22** | **71.72±0.26** | **54.05±0.13** | **69.12±0.24** | **66.17±0.24** |**47.21±0.39** |
The results are encouraging and further strengthen our contribution! The results show that our GraphACL outperforms these two competitive baselines [1,11] based on augmentations, especially on heterophilic graphs.
**In light of these responses, we sincerely hope our rebuttal has addressed your comments, and believe that your comments do not affect our key contributions and can be easily addressed in the revision. We also genuinely hope you will reconsider increasing your score. If you have any other comments, please do share them with us, and we will address them further. Thank you for your efforts!**
[1] Spectral Augmentation for Self-Supervised Learning on Graphs. ICLR 2023
[2] Spectral Feature Augmentation for Graph Contrastive Learning and Beyond. AAAI 2023
[3] A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019
[4] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. ICML 2020
[5] Understanding Self-Supervised Learning Dynamics without Contrastive Pairs. ICML 2021
[6] Provable Guarantees for Self-supervised Deep Learning with Spectral Contrastive Loss. NeurIPS 2021
[7] Generalization Analysis for Contrastive Representation Learning. ICML 2023
[8] Graph Contrastive Learning with Augmentations. NeurIPS 2020
[9] Graph Contrastive Learning with Adaptive Augmentation. WWW 2021
[10] Large-Scale Representation Learning on Graphs via Bootstrapping. ICLR 2021
[11] COSTA: Covariance-Preserving Feature Augmentation for Graph Contrastive Learning. KDD 2022
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the authors' rebuttal. It would be good to see the new results included in the revision. I raise my score from 5 to 6
---
Reply to Comment 1.1.1:
Title: Thank you for your comments and updating the score!
Comment: Thank you very much for carefully reading our response and increasing your score! We are glad our response has addressed your comments. We genuinely appreciate your support and will include the new results in the main paper.
Best wishes,
Authors | null | null | null | null | null | null |
Provably Safe Reinforcement Learning with Step-wise Violation Constraints | Accept (poster) | Summary: This paper studies safe RL with step-wise violation constraints, different from the popular CMDP with
an additive expectation cost constraint. The step-wise violation constraint is more suitable for safety-critical
systems. The authors propose an algorithm that provides violation and regret bounds. They then further
develop an algorithm to learn a near-optimal safe policy and show its effectiveness in the experiments.
Strengths: 1) This paper studies an important problem. The paper is well-written and easy to follow.
The RL with step-wise violation constraint, as a formulation, is novel and more general than the popular CMDP with an additive expectation constraint.
2) The proposed approach looks correct and sound to me although I didn't check every proof.
3) The authors provide theoretical analyses of violation and regret bound.
4) The proposed safe RL algorithm achieves better performance than the existing baselines.
Weaknesses: 1. This paper may be missing some references for safe RL with step-wise violations, such as
a) Wang, Y., Zhan, S. S., Jiao, R., Wang, Z., Jin, W., Yang, Z., ... & Zhu, Q. (2022).
Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments.
arXiv preprint arXiv:2209.15090.
The authors have to at least discuss this paper in this work as the timepoint-level "hard chance constraint" looks very similar to the step-wise violations proposed by the authors.
2. In the MDP model, it is not clear to me whether the paper addresses safe RL with continuous/discrete, deterministic/stochastic
systems or environments. The transition set \Delta_h(s, a) could be an infinite set when considering a continuous state space.
3. Essentially, my understanding of this paper is that it deals with discrete action and state spaces, and because of this,
the RL agent can infer which states and/or future states are unsafe to visit by building the A_safe set via dynamic programming,
which is the core of the proposed algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am using a Windows laptop and I cannot open the full_paper file (format) in the supplementary folder, so I cannot see the limitations of the approach.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I would like to authors to clarify the limitations of their formulation and approach in the response.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper! We really appreciate your positive comments about our work. Please find our responses to your comments below. We will be happy to answer any further questions you may have.
**1. Discuss with [3].**
[3] considers the same step-wise constraint as ours. They solve step-wise constraints via learning a generative-model-based soft barrier function. To be more specific, they use a soft barrier function to encode the constraint of the surrogate model provided by the generative model. They learn the generative model as a discrete-time SDE.
There are two major differences between [3] and our work: (a) They can only solve the continuous dynamics (stated in Assumption 3.1 in [3]), while we consider discontinuous tabular MDPs. (b) They focus on the empirical side and do not obtain any theoretical guarantees, while we focus on the theoretical side and provide rigorous regret analysis and lower bounds.
**2. Continuous space**
We agree that it is interesting and challenging to extend our framework to a continuous state space, since $\Delta(s,a)$ is infinite there. [2] avoids this problem by assuming $\Delta(s,a)$ is known. An important future direction is to study whether partial information about $\Delta(s,a)$ is sufficient, instead of the entire $\Delta(s,a)$. Moreover, we want to emphasize that previous works [1,2] cannot solve the safety problem on tabular MDPs: they assume the linear feature set is star-convex (Assumption 5 in [1] and Assumption 2 in [2]), which does not hold in tabular MDPs since the feature set in a tabular MDP consists of one-hot vectors.
We emphasize that our problem is challenging and novel. In fact, [3] states that discontinuous MDP still remains challenging, even if the setting of continuous state space is solved. Thus, our results for tabular and discontinuous MDPs are significant.
**3. Cannot open the full paper file**
The "full$\\_$paper" file can be opened by adding a ".pdf" suffix. Thanks for pointing out this issue. We will fix it in our revision.
**Here we describe the limitation of our paper.**
First, Algorithm 1 will suffer a non-zero step-wise violation in the beginning. This non-zero violation is unavoidable, because the agent has no information about unsafe states to begin with. Second, in some real-world environments, the number of states and actions can be large and even infinite. Thus, the complexity of our algorithm can be high. To overcome this challenge, one interesting direction is to extend our work to the function approximation setting, such as linear MDP [4].
**References:**
[1]. Amani S, Thrampoulidis C, Yang L. Safe reinforcement learning with linear function approximation, ICML, 2021.
[2]. Shi M, Liang Y, Shroff N. A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints, arXiv preprint, 2023.
[3]. Wang Y, Zhan S S, Jiao R, et al. Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments, ICML 2023.
[4]. Jin C, Yang Z, Wang Z, et al. Provably efficient reinforcement learning with linear function approximation, COLT 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttals. In your future revision, please make sure to clarify that this paper considers a tabular MDP in the problem formulation. | Summary: This paper formulates and studies a strict step-wise violation constraint reinforcement learning problem, where a non-negative state-dependent cost is cumulated at each step. They show lower bounds for regret and safety violations. A model-based algorithm that matches the lower bound is provided, while another O(1) gap-dependent violation is provided too. They also study the reward-free problem and provide an algorithm that has a sample complexity with a leading term matching the SOTA of a no-constraint case.
Strengths: The step-wise violation constraint problem is meaningful. This paper is technically sound. The proofs are correct as far as I can tell. The empirical result looks good.
Weaknesses: 1. The idea of the unsafe set can only apply to state-dependent safety problems. I doubt this technique can apply to a (state, action)-dependent safety problem. Additionally, it is weird that the reward is (state, action)-dependent, but the cost is state-dependent only.
2. It is weird that the reward-free can give cost feedback.
3. Definition 6.1 is confusing. I guess from your proof that \pi in (4) means all feasible policies defined by \Delta and U?
4. line 545, '<' should be '>'?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Part of the limitations are shown. The authors don't discuss the limitations of state-dependent cost as well as the weird reward-free setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have.
**1. Can the cost be (state, action)-dependent?**
The reason why we choose the state-dependent cost is to follow the setting in previous works [1,2,3] for step-wise constraints.
Extending our framework to a (state, action)-dependent cost is not difficult. Intuitively, we can define a state $s$ as safe when $\max_{a \in \mathcal{A}}c(s,a) \le \tau$. Then, in each state $s$, we can also use uniform exploration to choose the action and estimate $c(s,a)$. This only leads to an extra $A$ factor in the regret and violation.
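A small Python sketch of this extension (purely illustrative; the dictionary-based cost table and the convention of calling a state safe when every action's cost stays within the threshold $\tau$ are our own assumptions):

```python
# With a (state, action)-dependent cost, one natural convention is to call a
# state safe when the estimated cost of every action stays within tau.
def safe_states(cost, tau):
    """cost: dict mapping state -> {action: estimated cost}."""
    return {s for s, ca in cost.items() if max(ca.values()) <= tau}

cost = {
    "s0": {"a0": 0.1, "a1": 0.3},
    "s1": {"a0": 0.2, "a1": 0.9},  # a1 exceeds tau=0.5, so s1 is not safe
}
print(safe_states(cost, tau=0.5))  # -> {'s0'}
```

Estimating `cost[s][a]` for every action (e.g., by uniform exploration within each state) is what introduces the extra factor of the action-set size $A$ in the bounds.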
**2. It is weird that the reward-free can give cost feedback.**
The motivation of RFE is that, since the reward functions are often engineered and changed in practice, we need an algorithm to explore the environment and work well for different rewards [4]. In many real-world scenarios such as robot control, the cost function depends on the environment, which does not change frequently. Hence, it is natural that we can receive the cost feedback without the reward signal. This setting is also considered in previous works for safe RFE, e.g., [5].
**3. Definition 6.1 is confusing**
Thanks for pointing out this confusion. $\pi$ in Eq.(4) denotes any feasible policies in $\Pi$ with respect to true $\Delta(s,a)$ and $\mathcal{U}$. Note that $\Pi^K$ is feasible with respect to $\hat{\Delta}^K(s,a)$ and $\mathcal{U}^k_H$, thus $\Pi^K\subseteq \Pi$ by optimism. We will improve the explanation in our revision.
**4. Typo**
The equation under Line 545 should use $\ge$. We will fix this typo in our revision.
**References:**
[1]. Wachi, A., Sui, Y., Yue, Y., and Ono, M. Safe exploration and optimization of constrained MDPs using Gaussian processes. AAAI 2018.
[2]. Wachi A, Wei Y, Sui Y. Safe policy optimization with local generalized linear function approximations[J]. Advances in Neural Information Processing Systems, 2021
[3]. Wang Y, Zhan S S, Jiao R, et al. Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments, ICML 2023.
[4]. Jin C, Krishnamurthy A, Simchowitz M, et al. Reward-free exploration for reinforcement learning, ICML 2020.
[5]. Huang R, Yang J, Liang Y. Safe exploration incurs nearly no additional sample complexity for reward-free rl, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttals. I still have some concerns after reading the rebuttal and I will keep my rating.
1. If the framework is extended to a (state, action)-dependent cost as defined by the authors, then there seems to be no hope of handling the large-action case even with function approximation, since the definition of unsafe states always requires maximizing over all actions.
2. I am still not fully convinced by the reasons why cost feedback can be given in the reward-free setting. In some cases, the reward function can also depend on the environment and not change frequently.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ZGMh
Comment: Thank you for the response!
**1. If the framework is extended to (state, action)-dependent cost as defined by authors, then it seems there is no hope of handling the large action case even if function approximation is used, since the definition of unsafe states always requires maximizing over all actions.**
If we extend to a (state, action)-dependent cost, the agent indeed needs to explore all actions to receive safety feedback for each action. One possible avenue to avoid this exploration is to posit that the cost function exhibits a favorable structure (e.g., $c(s,a)= \theta^T \varphi(s,a)$ for some $\theta \in \mathbb{R}^d$ and known features $\varphi(s,a) \in \mathbb{R}^d$, as in existing works [1,8]), and then estimate the cost for all $(s,a)$ by exploiting the structure without exploring all actions. This is an interesting future direction.
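As a toy illustration of this avenue (the feature map $\varphi(s,a)=(1, s, a, sa)$ and all numbers below are made up, not the construction of [1,8]): with a linear cost $c(s,a)=\theta^\top\varphi(s,a)$, a least-squares estimate of $\theta$ from a handful of explored pairs already predicts costs for pairs that were never tried:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 4                        # feature dimension (assumed known)
theta = rng.normal(size=d)   # hypothetical true cost parameter

def phi(s, a):
    # made-up feature map, for illustration only
    return np.array([1.0, s, a, s * a], dtype=float)

# Noisy cost feedback on a small set of explored (s, a) pairs.
explored = [(s, a) for s in range(3) for a in range(2)]
X = np.stack([phi(s, a) for s, a in explored])
y = X @ theta + rng.normal(0.0, 0.001, size=len(explored))

# Least-squares estimate of theta from those pairs alone.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the cost of a pair that was never explored.
pred = phi(7, 5) @ theta_hat
true = phi(7, 5) @ theta
```

The point is that the structure lets the agent generalize the cost estimate across all $(s,a)$ pairs instead of paying the extra factor of $A$ in exploration.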
**2. I am still not fully convinced by the reasons that the cost feedback can be given in the reward-free setting. In some cases, the reward function can also depend on the environment, which does not change frequently.**
The RFE setting is motivated by the fact that in practice, the reward function can be difficult to specify and change frequently by manual crafting [2,3], or there can be multiple rewards of interest in the same environment [4].
In contrast, the cost function can depend heavily on the environment, e.g., on barrier placements [5] or the posture of the robot [6]. Hence, it is often fixed and observable. Previous safe RFE work [7] also receives cost feedback during the exploration phase. Moreover, from a theoretical perspective, access to information about the cost function is necessary: without it, ensuring the safety constraints during the exploration phase is impossible.
We thank the reviewer again for the detailed and valuable comments. Please let us know if you have any further questions; we will be happy to answer them. If you find our response satisfactory, we wonder if you could kindly consider raising the score of our work?
Thank you very much!
**References:**
[1]. Amani S, Thrampoulidis C, Yang L. Safe reinforcement learning with linear function approximation, ICML, 2021.
[2]. Leike J, Krueger D, Everitt T, et al. Scalable agent alignment via reward modeling: a research direction, preprint 2018.
[3]. Fu J, Singh A, Ghosh D, et al. Variational inverse control with events: A general framework for data-driven reward definition, NIPS 2018.
[4]. Wu J, Braverman V, Yang L. Accommodating picky customers: Regret bound and exploration complexity for multi-objective reinforcement learning, NIPS 2021.
[5]. Thananjeyan B, Balakrishna A, Nair S, et al. Recovery rl: Safe reinforcement learning with learned recovery zones, IEEE Robotics and Automation Letters, 2021.
[6]. Thomas G, Luo Y, Ma T. Safe reinforcement learning by imagining the near future, NIPS 2021.
[7]. Huang R, Yang J, Liang Y. Safe exploration incurs nearly no additional sample complexity for reward-free rl, ICLR 2023.
[8]. Wachi et al. Safe policy optimization with local generalized linear function approximations. 2021. | Summary: The authors propose a new formulation for the safe RL problem Safe-RL-SW, whose violation constraints are step-wise, different from the existing work. A model-based general algorithmic framework SUCBVI is proposed and the theoretical guarantees on the upper bound and lower bound of its regret are provided. The authors also propose a similar reward-free safe RL problem with step-wise violation constraints (Safe-RFE-SW) along with an algorithm SRF-UCRL, the theoretical upper bound on whose regret is provided. It is claimed to be the first result on step-wise violation constraints in the RFE setting. Experiments are provided in support of the theoretical results.
Strengths: The authors propose new problem formulations for safe RL as well as algorithms under these formulations. Theoretical results are provided for the proposed algorithms. Overall, the new problem formulations seem original and the results seem significant.
Weaknesses: How the results in this work compare with the existing work is somewhat unclear, despite some discussion in the Related Work Section. In Section 2, the authors mention two closely-related work Amani et al. (2021) and Shi et al. (2023) that also consider step-wise violation and state that more detailed comparisons can be found in Appendix B, but I did not find any comparison with these two works in Appendix B. I strongly suggest that the authors compare with all related works and clearly state how this work differentiates from them in the revision. For example, do all CMDP works define violation with $C'(K)$ in Line 136? Can you compare with the CMDP works that do not use expected violation like $C'(K)$, if there is any? As for Amani et al. (2021) and Shi et al. (2023), please explain whether there is any difference in your formulations and how your results compare with theirs, if your setting is comparable with theirs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I hope the authors can provide clarification for a few questions:
1. What is the role of $n$ in Theorem 5.2? Why considers $M_1, \cdots, M_n$ in this theorem instead of just focusing on the one MDP that establishes the lower bound?
2. Do all CMDP works define violation with $C'(K)$ in Line 136? Is there any CMDP work that do not use $C'(K)$ and have a violation definition that is more comparable to the step-wise violation in this work?
3. In the plot of "Step-wise Violation in Safe-RL-SW" of Figure 1, the cumulative step-wise violation curve for SUCBVI seems to grow much more slowly than $\sqrt{T}$. However, shouldn't we expect something $\Omega(\sqrt{ST})$, especially given your lower bounds in Section 5?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is currently discussed in Appendix A. I encourage the authors to move part of it in the conclusion so that it is more conspicuous.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper. We will be happy to answer any further questions you may have.
**1. Comparison with existing works**
We provide a brief summary of existing papers with instantaneous constraints and list the assumptions.
**(1). Gaussian Process (GP) Structure**
[1,2] propose a GP-based algorithm, which assumes that (a) the transition is deterministic and known, and (b) the reward and cost functions are modeled by GPs. By using this particular structure, they can infer the safety cost by estimating the parameters.
**(2). Unbounded exploration time**
[3] assumes the reward and cost functions have a generalized linear structure. Their algorithm explores in a safe space until a time $t^*$ at which the agent has explored sufficiently. However, no upper bound on the exploration time $t^*$ is given in their paper. In the tabular MDP setting, $t^*$ can be infinite since the linear feature vectors are one-hot vectors.
**(3). Safe action, convex feature set and known transition set**
[4] considers reward and cost functions with a linear structure. It makes two assumptions: (a) there exists a safe action in each state, which prevents the agent from going to a potentially unsafe state; (b) the feature set is a star convex set, which lets them change actions continuously. However, this makes their approach infeasible in tabular MDPs: the feature set in tabular MDPs consists of one-hot vectors and is not a star convex set. Hence, [4] cannot solve our problem. [6] considers safe RL in linear mixture MDPs. It also relies on assumption (b), making it infeasible in tabular MDPs. Moreover, although [6] does not require assumption (a), it assumes the transition set $\Delta(s,a)$ is known, which our paper does not need. Thus our problem is more challenging, since our algorithm must estimate $\Delta(s,a)$ adaptively.
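To see concretely why one-hot features break star convexity (a toy check, not [4]'s formal argument): a star convex set must contain the entire segment from some center point to each of its members, but the segment between two one-hot vectors leaves the finite set $\{e_1,\dots,e_d\}$ immediately:

```python
import numpy as np

d = 3
one_hot = [np.eye(d)[i] for i in range(d)]   # tabular feature set {e_1, ..., e_d}

def in_feature_set(x):
    # membership test for the finite one-hot feature set
    return any(np.allclose(x, e) for e in one_hot)

# Take any member as the star center; the midpoint of the segment from
# center e_1 to member e_2 is (0.5, 0.5, 0), which is not a one-hot
# vector, so the segment is not contained in the set.
center, member = one_hot[0], one_hot[1]
midpoint = 0.5 * center + 0.5 * member
```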
**(4). Safety feedback oracle**
[5] considers safe RL problems with binary safety feedback. They also assume there is a safe action in each state. Moreover, they can receive safety feedback for $(s,a)$ by calling an oracle without taking action $a$ at state $s$.
There are also other related works that do not provide regret or sample complexity analyses [7,9], so we do not make a technical comparison with them. In contrast to all of the above, our paper does not require any of these assumptions.
**2. Other CMDP formulation**
Most CMDP works focus on the **expected violation**, i.e., $C'(K) = \sum_{k=1}^K (E[\sum_{h=1}^H c(s_h^k,a_h^k)]-\mu)$
to define their constraints. [10,11] consider the **clipped expected violation**
$C''(K)=\sum_{k=1}^K (E[\sum_{h=1}^H c(s_h^k,a_h^k)]-\mu)_+$, which is a slightly stricter constraint than the expected violation.
Different from previous works, we consider the step-wise violation $$C(K)=\sum_{k=1}^K \sum_{h=1}^H (c(s_h^k)-\tau)_+,$$
which enforces safety at each step. The clipped expected violation $C''(K)$ in [10,11] cannot guarantee safety at each step with certainty: in the definition of $C''(K)$, the per-step excess $c(s_h^k,a_h^k)-\mu/H$ can be positive or negative, and the positive and negative terms can cancel out so that $E[\sum_{h=1}^H c(s_h^k,a_h^k)]\le \mu$ holds even though individual steps are unsafe.
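A tiny numerical illustration of this cancellation (toy numbers, with the episode budget taken as $\mu = \tau H$ for comparability): the episode below has zero clipped episode-level violation even though two of its steps clearly violate the step-wise threshold:

```python
import numpy as np

costs = np.array([0.9, 0.1, 0.9, 0.1])  # hypothetical per-step costs c(s_h)
tau = 0.5                               # step-wise threshold
H = len(costs)
mu = tau * H                            # episode budget, taken as tau*H here

# Episode-level clipped violation: over- and under-threshold steps cancel.
episode_violation = max(costs.sum() - mu, 0.0)

# Step-wise violation: each step above tau counts, with no cancellation.
stepwise_violation = np.maximum(costs - tau, 0.0).sum()
```

Here the 0.9-cost steps are masked by the 0.1-cost steps at the episode level, while the step-wise notion still charges 0.4 for each of them.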
**3. Comparison for [4,6].**
As mentioned in Reply 1, [4,6] do not work for tabular MDPs. Moreover, [4] requires the assumption that there exists a safe action in each state, while [6] needs a known $\Delta(s,a)$. Our work requires neither assumption, so their results and ours cannot be directly compared.
**4. Why consider n MDPs instead of focusing on one MDP for the lower bound?**
If we focus on a single MDP $M$, it is impossible to derive a lower bound for $M$: if an algorithm happens to guess the optimal policy $\pi^*$ of $M$ correctly and plays $\pi^*$ all the time, it achieves zero regret and violation on $M$. Therefore, to construct the lower bound, we must consider multiple MDPs whose optimal policies differ, so that no algorithm can play the optimal policy in all of them; one can then show that the algorithm performs poorly on at least one of them. This construction idea has been used in many prior works on different problems, e.g., [8,12].
**5. The cumulative violation curve grows more slowly than $\sqrt{T}$.**
Theorem 4.2 shows that if the safety gap satisfies $C_\text{gap}>0$, we obtain a bounded $\tilde{O}(S/C_\text{gap}+ S^2AH^2)$ violation. In our experiment the safety gap is large, and our curve matches this result.
However, the lower bound in Theorem 5.1 is built from a worst-case instance whose safety gap is $C_{\text{gap}}= O(\sqrt{S/T})$. This gap is small and shrinks with the number of episodes $T$, which is not the regime of our experiment.
**References:**
[1]. Turchetta et al. Safe exploration in finite Markov decision processes with gaussian processes. 2016.
[2]. Wachi et al. Safe exploration and optimization of constrained MDPs using Gaussian processes. 2018.
[3]. Wachi et al. Safe policy optimization with local generalized linear function approximations. 2021.
[4]. Amani et al. Safe reinforcement learning with linear function approximation, 2021.
[5]. Bennett et al. Provable Safe Reinforcement Learning with Binary Feedback, 2023.
[6]. Shi et al. A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints, 2023.
[7]. Wang et al. Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments, 2023.
[8]. Simchowitz et al. Non-asymptotic gap-dependent regret bounds for tabular mdps, 2019.
[9]. Thomas et al. Safe reinforcement learning by imagining the near future, 2021.
[10]. Efroni et al. Exploration and exploitation in CMDP, 2020.
[11]. Simao et al. AlwaysSafe: Reinforcement learning without safety constraint violations during training, 2021.
[12] Domingues et al. Episodic reinforcement learning in finite mdps: Minimax lower bounds revisited, ALT 2021.
---
Rebuttal Comment 1.1:
Comment: I appreciate the rebuttal from the authors. When you revise the paper, please make sure to discuss the comparison with other works and any implications of the proposed problem formulation sufficiently in the main text. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising the score! We will discuss the comparison with other works and the implications of the formulation sufficiently in our revision. | Summary: This paper considers online RL problem for an MDP with stage-wise constraints. The stage-wise constraints basically specify the set of unsafe states which must be avoided at all times. While there is a lot of recent work on CMDPs, this formulation, which is actually more relevant is less studied. The authors propose a variant of UCBVI algorithm, and show that it achieves O(\sqrt{SATH^3}) regret which is order optimal except in H. It also achieves sublinear constraint violation, which is independent of T when it is gap-dependent. They also present a reward-free exploration algorithms with sublinear regret and constraint violation.
Strengths: This seems one of the few results on Online RL for MDP with stage-wise constraints. Attempt at numerical evaluation is commendable.
Weaknesses: Unfortunately, the results are weaker than what has been achieved recently for CMDP problems: some of those formulations consider only clipped violation functions and still achieve bounded constraint violation. The results for reward-free learning are actually much weaker, as they only achieve sublinear constraint violation. Furthermore, novelty in the algorithm is limited, based as it is on UCBVI, a well-trodden path. Recent work on online learning for CMDPs has several interesting algorithmic ideas which the authors could potentially use to propose algorithms with sharper results.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you use some of the recent work of Lei Ying and Dileep Kalathil to design algorithms that achieve even sharper results. I believe it is possible without too much extra work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have.
**1. Unfortunately, the results are weaker than what has been achieved recently for CMDP problems: some of the formulations only consider clipped violation functions and still achieve bounded constraint violation.**
To the best of our knowledge, [1,5] are the most related papers that consider the clipped violation $C''(K)=\sum_{k=1}^K (\mathbb{E}[\sum_{h=1}^H c(s_h^k,a_h^k)]-\mu)_+$ for a safety threshold $\mu$. However, our results are not weaker than theirs. In particular, our step-wise constraint induces a stricter requirement as it requires the agent to stay safe **at each step with certainty** rather than in expectation over each episode.
We will be happy to explain the differences if the reviewer has concerns about any other references.
**2. The results for reward-free learning are actually much weaker as it only achieves sublinear constraint violation.**
First of all, note that similar to the regret minimization setting, we can derive a constant constraint violation if the safety gap $\mathcal{C}_\mathrm{gap}>0$.
To our best knowledge, there is only one work [4] considering safety problems in RFE. They consider RFE in the CMDP setting with expected constraints, and achieve zero violation during exploration. Compared to them, we consider a stricter step-wise constraint that requires the agent to stay safe **at each step with certainty**, and provide bounded violation guarantees.
**3. Novelty in the algorithm is limited, based as it is on UCBVI, a well-trodden path.**
Although our algorithm is motivated by UCBVI, our algorithm design and analysis are novel. Compared to unconstrained RL, in each episode we use a novel dynamic program to estimate the set of potentially unsafe states and plan ahead, avoiding states from which reaching an unsafe state later becomes inevitable. In addition, since the set of potentially unsafe states changes across episodes, the regret analysis must further account for the uncertainty introduced by inaccurately estimated unsafe states.
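For intuition only, here is a hedged sketch of such an unsafe-set computation (a generic fixed-point iteration over made-up costs and reachability sets; not the paper's exact dynamic program): a state is flagged if its own cost exceeds $\tau$, and the set then grows with states from which every action may reach an already-flagged state, so that safety cannot be guaranteed from them:

```python
tau = 0.5
cost = [0.1, 0.2, 0.9, 0.3, 0.1]   # hypothetical state costs
# succ[s][a]: set of states that taking action a in state s may reach
succ = {
    0: [{1}, {4}],
    1: [{2}, {2}],   # every action from state 1 may only reach state 2
    2: [{2}, {2}],
    3: [{0}, {4}],
    4: [{4}, {3}],
}

# States whose own cost already violates the threshold.
unsafe = {s for s in succ if cost[s] > tau}

# Fixed point: also flag states where every action may reach a flagged
# state, since a future violation cannot be ruled out from them.
changed = True
while changed:
    changed = False
    for s in succ:
        if s not in unsafe and all(nxt & unsafe for nxt in succ[s]):
            unsafe.add(s)
            changed = True
```

In this toy instance, state 2 is flagged by its cost and state 1 is then flagged by the fixed point, while state 0 keeps a safe action and stays unflagged.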
Moreover, we consider safety problems in the RFE setting, which is an important and challenging problem in RL. Compared to unconstrained RFE problems, the agent needs to both stay safe during the exploration phase and output a nearly safe policy afterward. Previous work for RFE in CMDP [4] cannot be directly applied to solve this problem, since they only consider expected violations during the exploration phase, and thus cannot guarantee safety for each step with certainty. We tackle this challenge by proposing a new uncertainty function defined in Eqs. (8) and (9) to upper bound both the violation during exploration and the expected violation for the output policy.
**4. Recent work of Lei Ying and Dileep Kalathil**
We find two related works by these two authors. [2] designs model-free algorithms for non-stationary CMDPs, where the main framework builds upon the Triple-Q algorithm. [3] considers doubly optimistic and pessimistic exploration to achieve zero violation when there is a safe baseline policy. These two papers are interesting but they both focus on the expected violation $C'(K)=\sum_{k=1}^K (\mathbb{E}[\sum_{h=1}^H c(s_h^k,a_h^k)]-\mu)$. Thus, their results cannot solve our step-wise constraint problems. In addition, we extend our algorithmic framework to solve the safety problem in RFE for the exploration phase and the final output policy, which is not considered in their papers.
[1]. Efroni Y, Mannor S, Pirotta M. Exploration-exploitation in constrained mdps. preprint, 2020.
[2]. Wei H, Ghosh A, Shroff N, et al. Provably Efficient Model-Free Algorithms for Non-stationary CMDPs, AISTATS 2023.
[3]. Bura A, HasanzadeZonuzy A, Kalathil D, et al. DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning, NIPS 2022.
[4]. Huang R, Yang J, Liang Y. Safe exploration incurs nearly no additional sample complexity for reward-free rl, preprint 2022.
[5]. Simao et al. AlwaysSafe: Reinforcement learning without safety constraint violations during training, 2021.
---
Rebuttal Comment 1.1:
Comment: I know the difference between stage-wise and time-averaged constraints. I understand [3], etc. handle constraint violation in expectation. Since your requirement is stage-wise, it is effectively tantamount to constraining the policy space at each stage so that the constraint is satisfied. I think it should be possible to guarantee zero or bounded constraint violation in your setting as well. I will keep my score. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their responses! The experiment curve with variance is attached in the PDF. We only plot the first few episodes to show the variance of each algorithm more clearly.
Pdf: /pdf/ec30f75fc76df3e99d709cd8a33344032b258d99.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies an episodic constrained reinforcement learning problem with step-wise constraints on states. The authors first extend the classical UCB-VI to step-wise constraints and prove the sub-linear optimality gap and step-wise constraint violation. A lower bound is also provided to show optimal dependence on the state space size and the number of episodes. Second, the authors present a reward-free algorithm that takes nearly optimal sample complexity and subliner step-wise constraint violation during exploration. Some numerical experiments are provided to show the effectiveness of the proposed algorithms.
Strengths: **originality**
- The problem formulation follows the existing tabular constrained MDP with instantaneous constraints. The safety constraint used in this work is an inequality of a function of state. The learning objective is to minimize regret and step-wise violation. The instantaneous constraint is also studied in other papers, e.g., Safe Policy Optimization with Local Generalized Linear Function Approximations and Provable Safe Reinforcement Learning with Binary Feedback.
- The only problem assumption is the existence of an initial state that is not in the unsafe set. This assumption seems to be the weakest in the literature. Since a feasible policy has to satisfy the constraint at every step, and the transition dynamics cannot guarantee the feasibility of the next state no matter which actions are taken, it might not be intuitive that feasibility has nothing to do with the transition dynamics.
- In the first algorithm, the authors extend the existing UCBVI by incorporating the estimation of safe and unsafe sets, and prove sublinear regret and step-wise violation. It is useful if the authors could discuss how this work differs from previous applications of UCBVI to constrained MDPs, e.g., (Amani et al., 2021), which studies linear function approximation and offers zero violation.
- In the second algorithm, the authors generalize the existing reward-free algorithm to step-wise constraints, and prove nearly-optimal sample complexity.
**quality**
- Most statements are clarified by remarks or proofs.
- It is useful if the authors can make a more technical comparison with existing works since this paper is more theoretical.
- In experiments, it might not be very fair to compare with other methods that study value-based constraints.
- The weaknesses of the proposed method are discussed little. It would be useful to discuss whether the proposed methods handle function approximation and provide zero constraint violation.
**clarity**
- The main results are delivered clearly, and the paper is organized well.
**significance**
- It is important to develop RL algorithms that can learn step-wise constraints. The authors have extended two known unconstrained RL algorithms to tabular constrained MDPs with step-wise constraints. However, step-wise constraints have been tackled in several other works, which warrant detailed comparison to show the significance of this paper.
- The provided theoretical analysis and guarantees are somewhat expected. The new challenges of extending the existing unconstrained analysis are discussed little.
- The provided experiments do not seem to reflect the performance of the proposed algorithms. There is no variance in the learning curves. Since all algorithms converge immediately, exploration does not seem to be an issue for convergence, which fails to demonstrate the adaptivity of online RL algorithms.
Weaknesses: - It is useful to make a more detailed comparison with existing works on constrained MDP with instantaneous constraints, especially problem assumptions. Discussing some related applications is also useful.
- The technical contributions can be better explained by comparing the constrained and unconstrained ones. It is useful to highlight the main technical challenges.
- It is useful to discuss extending to large problems using function approximation and providing better violation guarantees.
- For experiments, it is important to make a fair comparison with existing works on constrained MDPs with instantaneous constraints. It is also important to evaluate the algorithms correctly by showing convergence and exploration randomness.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Some questions are stated in Strengths and Weaknesses.
Here are some other questions.
- How is the confidence bonus in Algorithm 1 constructed?
- Is there a lower bound for Algorithm 2 that shows the hardness of step-wise constraints?
- To see how the proposed algorithms deal with step-wise constraints, can the authors plot the instantaneous constraint violation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper! We will be happy to answer any further questions you may have.
**1. Comparison with existing works and applications**
We provide a brief summary of existing papers with instantaneous constraints and list their assumptions.
**(1). Gaussian Process (GP) Structure**
[1,2] propose a GP-based algorithm, which assumes that (a) the transition is known, and (b) the reward and cost functions are modeled by GPs. Based on the GP structure, they can infer the safety cost by estimating the parameters.
**(2). Unbounded exploration time**
[3] assumes the reward and cost have a generalized linear structure. Their algorithm explores in a safe space until a time $t^*$ at which the agent has explored sufficiently. However, there is no upper bound on the exploration time $t^*$ in [3]. For tabular MDPs, $t^*$ can be infinite since the linear feature vectors are one-hot vectors.
**(3). Safe action, convex feature set and known transition set**
[4] considers reward and cost functions with a linear structure. It makes two assumptions: (a) there exists a safe action in each state, which prevents the agent from going to a potentially unsafe state; (b) the feature set is a star convex set, which lets them change actions continuously. However, this makes their approach infeasible in tabular MDPs: the feature set in tabular MDPs consists of one-hot vectors and is not a star convex set. Hence, [4] cannot solve our problem. [6] considers safe RL in linear mixture MDPs. It also relies on assumption (b), making it infeasible in tabular MDPs. Moreover, although it does not require assumption (a), it assumes the transition set $\Delta(s,a)$ is known, which is not needed in our paper. Thus our problem is more challenging since we need to estimate $\Delta(s,a)$ adaptively.
**(4). Safety feedback oracle**
[5] considers safe RL with binary safety feedback. They also assume that there is a safe action in each state. Moreover, they can receive safety feedback for $(s,a)$ by calling an oracle without taking action $a$ in state $s$.
There are other works that do not provide regret analyses [7,8]. Compared to the above works, our paper does not require any of their assumptions.
**Applications**
For robotic control in complex environments, it is crucial to prevent the robot from hitting walls or falling into pools at all times. Our algorithm can be applied to train the robot and ensure that it stays safe at each step.
**2. Technical comparison with existing works and challenges/novelty**
(i) All previous works require strong assumptions, (a) to help the agent infer the safety cost without visiting an unsafe state or taking an unsafe action, and (b) to prevent the agent from getting stuck in a state. We do not require these assumptions. Instead, we estimate the cost and the transition set $\Delta(s,a)$, and adaptively update the estimated set of potentially unsafe states via a novel dynamic program. We also account for the uncertainty of inaccurately estimated unsafe states in the regret analysis.
(ii) We consider the safe RFE setting. Compared to unconstrained RFE, the safety constraints make it harder for the agent to explore the environment. We propose a new uncertainty function to bound the step-wise violation during exploration and the expected violation for the output policy (Line 261).
(iii) We establish lower bounds to demonstrate that Theorem 4.2 is minimax-optimal. To the best of our knowledge, this is the first lower bound that shows the tradeoff between regret and violation for safe RL.
**3. Function approximation and zero constraint violation**
For function approximation, the large state space makes the estimation of $\Delta(s,a)$ infeasible. [6] overcomes this difficulty by assuming $\Delta(s,a)$ is known. One direction is to study whether partial information about $\Delta(s,a)$ suffices. As for zero constraint violation, it is infeasible without additional assumptions, since we can only receive safety feedback for a state by visiting it; hence entering some unsafe states is unavoidable.
**4. Experiments for instantaneous constraints and variance curves**
Since all works for instantaneous constraints either require strong assumptions or are infeasible in discontinuous tasks, these works and ours cannot be directly compared.
As for the variance curves, see Figure 1 in the global response. We plot the first 2000 episodes, where the variance is evident and most of the algorithms have converged (except Triple-Q, which requires a large $T$ in its theoretical results).
**5. The confidence bonus in Algorithm 1**
We provide the definitions of confidence bonuses in Theorem 4.2 (Line 202).
**6. Lower bound for Algorithm 2**
A lower bound for Algorithm 2 (RFE) remains open. This is a very interesting direction, and we plan to study it in future work.
**7. Plot the instantaneous constraint violation**
We provide the curve of instantaneous violation in the second figure of Figure 1 in our paper.
References:
[1] Turchetta et al. Safe exploration in finite Markov decision processes with Gaussian processes, 2016.
[2] Wachi et al. Safe exploration and optimization of constrained MDPs using Gaussian processes, 2018.
[3] Wachi et al. Safe policy optimization with local generalized linear function approximations, 2021.
[4] Amani et al. Safe reinforcement learning with linear function approximation, 2021.
[5] Bennett et al. Provable Safe Reinforcement Learning with Binary Feedback, 2023.
[6] Shi et al. A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints, 2023.
[7] Wang et al. Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments, 2023.
[8] Thomas et al. Safe reinforcement learning by imagining the near future, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification and additional experiments. By viewing this, I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising the score! | null | null | null | null | null | null |
Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention | Accept (poster) | Summary: This paper extends FlashAttention to support structured sparse attention, enabling attention to leverage sparsity for further acceleration on top of FlashAttention. In this way, the prior QK-sparse and Hash-sparse attention methods can be further accelerated during both training and evaluation.
Strengths: - The proposed method further accelerates QK-sparse attention and Hash-sparse attention, which helps to expand the future applications of sparse attention methods.
- The proposed method can accelerate both model training and evaluation. Since current LLM training is extremely time-consuming, this may help the efficient training of LLMs.
Weaknesses: - Experiments mainly compare with the FlashAttention, which is a dense baseline. But the sparse methods compared in the experiments are very naive implementations without any performance optimization, making it difficult to show the benefits that flashAttention brings to sparse computation. It is better to compare with the methods using efficient sparse acceleration libraries.
- This work is heavily engineering-oriented: it implements existing methods under the FlashAttention framework.
- The application of this method will be limited, only suitable for two calculation modes, QK-sparse attention and Hash-sparse attention.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are the block sizes in Algorithm 2 and 3 related to the sparse patterns or sparsities?
- QK-sparse attention and Hash-sparse attention adopt per-head sparse index and per-head hash, respectively, which leads to load unbalance during calculation. How much does this affect the GPU occupancy and utilization?
- Can you add the theoretical speedup of the two sparse attention methods? I just want to see how much potential acceleration there was.
- Can you show the time ratio of sorting and re-ordering costs with different nb and different sequence lengths in Figure 3? Can sorting and re-ordering be fused with attention?
- typo: line 145, $j \in [0, j_{stop}]$
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author discussed the limitation of the method and the societal impacts. Other limitations of my concern can refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your feedback! We hope the following answers your questions and may bring you to raise your score.
> The application of this method will be limited, only suitable for two calculation modes, QK-sparse attention and Hash-sparse attention.
We would like to emphasize the generality and relevance of our patterns. The Hash-sparse pattern is conceptually similar to the LSH-attention implemented in the Reformer [1]. We improve upon the LSH-attention by providing an implementation that (i) guarantees $100\%$ coverage of keys and queries falling in the same bucket, and (ii) is significantly faster. Moreover, the relevance of our QK-sparse pattern is supported by several works showing how a large number of tokens or heads can be dropped in transformer architectures [2,3,4,5,6] without detrimental effects on downstream tasks. While all of those works focus on more efficient inference, we show how our more general QK-sparse pattern can be used to speed up the training of language models.
> Are the block sizes in Algorithm 2 and 3 related to the sparse patterns or sparsities?
The block sizes are hyperparameters used to efficiently parallelize the computation across the GPU cores. These parameters can have an impact on the runtime but are not tied to the sparse patterns or sparsities. In all of our experiments, we set the block sizes to $128$.
> QK-sparse attention and Hash-sparse attention adopt per-head sparse index and per-head hash, respectively, which leads to load unbalance during calculation. How much does this affect the GPU occupancy and utilization?
Our QK-sparse pattern relies on building smaller "compact" tensors containing the keys and queries which have not been dropped. In the worst case, one head can have many queries or keys dropped, while another head might not drop any. The batching would then force us to retain the entire sequence, and the compact tensors would be of the same shape as the original $\mathbf{Q},\mathbf{K},\mathbf{V}$ tensors. The runtime would increase, as we now have to iterate over all the blocks of keys for one of the heads. As a small experiment, we consider randomly dropping $50\%$ of the queries and keys; for some set of hyperparameters we get a runtime of $14$ms. When we run the same operation but with a single head not being dropped at all, the runtime increases to $48$ms, which is close to the runtime with nothing being dropped. Due to the structure of FlashAttention, which loops over blocks of keys in parallel for each block of queries, the runtime is tied to the longest sequence of keys to process. Anyone implementing a smarter QK-dropping mechanism on top of our QK-sparse pattern should ensure that queries and keys are dropped relatively uniformly across heads.
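The imbalance effect can be sketched with a back-of-the-envelope model (our own simplification) in which the batched runtime is proportional to the longest per-head kept key sequence:

```python
import numpy as np

T, n_heads = 4096, 12

# Number of keys each head retains after dropping (balanced case: 50%).
balanced = np.full(n_heads, T // 2)

# Unbalanced case: one head keeps everything, the others keep 50%.
unbalanced = balanced.copy()
unbalanced[0] = T

# FlashAttention-style kernels iterate over key blocks in lockstep across
# heads, so the batched runtime is governed by the longest kept sequence.
def relative_runtime(kept_per_head, full_len):
    return kept_per_head.max() / full_len

print(relative_runtime(balanced, T))    # 0.5 -> roughly a 2x speedup
print(relative_runtime(unbalanced, T))  # 1.0 -> no speedup at all
```

This mirrors the 14ms vs 48ms observation: a single undropped head pushes the effective sequence length back to the full one.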
Concerning the Hash-sparse pattern, having all the keys and queries falling in the same bucket would equate to the normal FlashAttention. Therefore unbalanced buckets would increase the runtime as for the QK-sparse pattern. Interestingly, in our experiments on language modeling, we do not face imbalance problems and even notice a speedup after around $1$k iterations, see Fig.11.a in App.C.
We thank you for your comment and will add a section on imbalance in the next revision.
> Can you add the theoretical speedup of the two sparse attention methods? I just want to see how much potential acceleration there was.
For the Hash-sparse pattern, assuming perfectly balanced $nb$ buckets and a sequence length of $T$, then we expect $\mathcal{O} (\frac{T^2}{nb})$ complexity as opposed to $\mathcal{O} (T^2)$. For the QK-sparse pattern, given a balanced sparsity of $s \in [0,1]$ we expect $\mathcal{O} (T^2(1-s)^2)$ complexity for the attention mechanism.
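To make these scalings concrete, a small illustration of the implied speedups over the dense baseline (assuming the ideal balance stated above):

```python
# Implied attention-FLOP speedups over the dense O(T^2) baseline,
# under the ideal-balance assumptions stated above.
def hash_sparse_speedup(nb):
    # O(T^2 / nb) vs O(T^2): nb perfectly balanced buckets.
    return nb

def qk_sparse_speedup(s):
    # O(T^2 (1 - s)^2) vs O(T^2): fraction s of queries/keys dropped.
    return 1.0 / (1.0 - s) ** 2

print(hash_sparse_speedup(16))  # 16x fewer attention FLOPs
print(qk_sparse_speedup(0.5))   # 4x fewer attention FLOPs
```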
> Can you show the time ratio of sorting and re-ordering costs with different nb and different sequence lengths in Figure 3? Can sorting and re-ordering be fused with attention?
In Fig.2 of the figures provided with this rebuttal, we show runtimes for the two kernels when the pre- and post-processing steps are assumed to be free. We observe that fusing those steps with attention could bring a significant speedup, especially for the QK-sparse pattern on smaller sequences. We believe those figures could be further improved by better tuning of GPU-related hyperparameters such as the block sizes and the number of warps and stages used.
References:
[1] Kitaev, N., Kaiser, L., and Levskaya, A.: Reformer: The efficient transformer
[2] Michel, P., Levy, O., and Neubig, G.: Are sixteen heads really better than one?
[3] Voita, E., Talbot, D., Moiseev, F., Sennrich, R., and Titov, I.: Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned.
[4] Behnke, M. and Heafield, K.: Losing heads in the lottery: Pruning transformer attention in neural machine translation.
[5] Goyal, S., Choudhury, A. R., Raje, S., Chakaravarthy, V. T., Sabharwal, Y., and Verma, A.: Power-bert: Accelerating BERT inference via progressive word-vector elimination.
[6] Wang, H., Zhang, Z., and Han S.: Spatten: Efficient sparse attention architecture with cascade token and head pruning.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback. My concerns are mostly well addressed, so I keep my rating. | Summary: This paper extends the Triton implementation of FlashAttention to support two forms of sparse attention: key/query dropping and hashing-based attention. Source code for the kernels is made available.
Strengths: The paper gives clear descriptions of the released kernels and provides comprehensive validation experiments of the kernels.
Weaknesses: The main concern is that both types of sparse attention are very restricted. Although this is a good effort that benefits the research community, I am not sure how significant the kernels are.
The K/Q dropping sparse attention seems not useful in training large language models. It seems very rare that we would mask certain tokens completely and not let them contribute to attention at all.
The hashing-based sparse attention seems restrictive. We essentially need to separate tokens into groups and only allow attention within each group. The only useful scenario that I can think of is when a training sample is a concatenation of multiple unrelated pieces of text.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read our work. We hope those clarifications are addressing your concerns and may justify raising your score.
> The K/Q dropping sparse attention seems not useful in training large language models. It seems very rare that we would mask certain tokens completely and not let them contribute to attention at all.
We believe there might be a misunderstanding. We do not mask entire tokens, but single heads. Fig.1 and Fig.2 illustrate the sparsity patterns for a single head. This means that if each token has $12$ heads for keys and $12$ for queries, then our QK-sparse method allows us to efficiently drop e.g. $5$ heads for the query and $8$ for the key. Not all heads need to be dropped. Moreover, each layer can drop heads independently from other layers. Thus, we are not completely masking out tokens from the attention. There is a strong body of work suggesting it is possible to drop entire heads [1,2,3] or tokens [4,5] without losing too much accuracy on downstream tasks. Very recently, [6] use an elaborate query/key-dropping mechanism which can reduce the inference time without sacrificing perplexity. Our contribution is to show how these types of sparsity can be efficiently implemented to speed up the training, which to the best of our knowledge has never been done before.
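A rough NumPy sketch of the per-head compaction idea (our own simplified illustration, not the actual Triton kernel, with the batch dimension omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
H, T, d = 3, 8, 4
Q = rng.standard_normal((H, T, d))    # per-head queries (batch omitted)

# Each head independently decides which query positions to drop.
keep = rng.random((H, T)) > 0.5

# Gather the kept positions per head into a "compact" tensor, padded to
# the longest kept length (a batched kernel iterates up to that length,
# which is why uneven dropping across heads erodes the speedup).
kept_idx = [np.flatnonzero(keep[h]) for h in range(H)]
max_len = max(len(idx) for idx in kept_idx)

compact = np.zeros((H, max_len, d))
for h, idx in enumerate(kept_idx):
    compact[h, :len(idx)] = Q[h, idx]

print(compact.shape)
```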
> The hashing-based sparse attention seems restrictive. We essentially need to separate tokens into groups and only allow attention within each group. The only useful scenario that I can think of is when a training sample is a concatenation of multiple unrelated pieces of text.
This is similar to the point made above: we hash single heads, not entire tokens. Therefore, heads $1$ and $5$ of a token can end up in bucket $8$ while head $3$ ends up in bucket $2$. Moreover, we perform this hashing independently for each layer, so each token is not confined to a single group and has many chances of fetching various bits of information across the entire sequence. This pattern is conceptually similar to the LSH-attention implemented in the Reformer [7] but solves some of its serious limitations while being significantly faster.
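A minimal sketch of per-head bucketing, using one round of generic hyperplane hashing as an illustrative stand-in (the exact hashing scheme may differ from the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, nb = 16, 8, 4               # sequence length, head dim, buckets
q = rng.standard_normal((T, d))   # queries of a single head at one layer

# One round of hyperplane LSH: log2(nb) random projections produce a
# bucket id per token; nearby vectors tend to share a bucket.
planes = rng.standard_normal((d, int(np.log2(nb))))
bits = (q @ planes > 0).astype(int)
buckets = bits @ (1 << np.arange(bits.shape[1]))

# A stable sort by bucket id groups same-bucket tokens into contiguous
# blocks that a kernel can attend within, while preserving the original
# (causal) order inside each bucket.
order = np.argsort(buckets, kind="stable")
print(buckets[order])
```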
References:
[1] Michel, P., Levy, O., and Neubig, G.: Are sixteen heads really better than one?
[2] Voita, E., Talbot, D., Moiseev, F., Sennrich, R., and Titov, I.: Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned.
[3] Behnke, M. and Heafield, K.: Losing heads in the lottery: Pruning transformer attention in neural machine translation.
[4] Goyal, S., Choudhury, A. R., Raje, S., Chakaravarthy, V. T., Sabharwal, Y., and Verma, A.: Power-bert: Accelerating BERT inference via progressive word-vector elimination.
[5] Wang, H., Zhang, Z., and Han S.: Spatten: Efficient sparse attention architecture with cascade token and head pruning.
[6] Anagnostidis S., Pavllo D., Biggio L., Noci L., Lucchi A., Hofmann T.: Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers
[7] Kitaev, N., Kaiser, L., and Levskaya, A.: Reformer: The efficient transformer
---
Rebuttal Comment 1.1:
Title: after reading the rebuttal
Comment: Thanks for the responses, particularly for the clarification that the sparsity patterns may vary per attention head. However, attention-head pruning is not token dependent and does not require the proposed feature; dropping tokens is not head dependent, can be done in preprocessing and does not require the proposed feature either.
---
Reply to Comment 1.1.1:
Title: Response to reviewer Z3Z8
Comment: We thank you for your answer.
> attention-head pruning is not token dependent and does not require the proposed feature; dropping tokens is not head dependent, can be done in preprocessing and does not require the proposed feature either.
Concerning our QK-sparse pattern, our contribution is to enable the dynamic dropping of heads for keys and queries. As your comment suggests, this was not done before, and prior works indeed often had to rely on dropping entire heads (i.e. all the keys/queries/values for all the tokens for a given head) or entire tokens (i.e. all the keys/queries/values for a token). Implementation limitations constrained those choices. Our QK-sparse kernel enables the softer dropping of keys and queries. We cited works [1,2,3,4,5,6] as we believe that the feasibility of dropping entire heads/tokens is a good indication that our QK-sparse pattern is sound. Moreover, the very recent work of [6] does experiment with fine-grained dropping similar to ours and leverages this *dynamic* pattern to speed up inference. We believe this later work unambiguously supports the usefulness of our QK-sparse kernel.
> dropping tokens is not head dependent, can be done in preprocessing
All of the works we cited dropping tokens [4,5,6] are dropping them dynamically and cannot be implemented as a preprocessing step.
Let us know if you have further questions. We believe we have addressed all the questions raised concerning both of our kernels and hope you will consider raising your score. | Summary: This paper proposes an improved version of FlashAttention to support irregular block sparsity due to queries/keys-dropping or hashing. The proposed method modifies the mechanism used in FlashAttention to arbitrary indexing of queries/keys, which can be viewed as combining both FlashAttention with either QK-dropping or hash-sparse methods.
Strengths: - The proposed method, being simple and effective, improves over prior methods (FlashAttention, Reformer, and QK-sparse).
Weaknesses: - The paper is a bit low in novelty. The SCFA kernel design is based on a combination of prior methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can you provide more details on the SCFA kernel design? Also, can you comment on the reproducibility of the kernel design?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
> Can you provide more details in the SCFA kernel design?
Our kernel design relies on the FlashAttention algorithm as a starting point. On top of this, we (i) reshape input tensors to bring an interesting structure of the attention matrix, and (ii) exploit this structure by computing, for each block of queries, which tiles of keys should be considered.
We develop separate kernels for each type of sparsity (QK-sparse and Hash-sparse). Additionally, we provide a common Pytorch interface that abstracts away a lot of the complexity. The pseudo-code for the kernel implementations is provided in the Appendix.
We will add more details on the kernel design in the next revision.
> Also, can you comment on the reproducibility of the kernel design?
In the supplementary material (App.B) we provide a link to an anonymized repository containing code and instructions to reproduce all of our results. In order to reproduce our results, one simply needs a GPU supporting the bfloat16 data type. We used one A100 40GB for all of our runtime experiments.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I remain supportive on this submission. | Summary: The work proposes a new method to speed up and improve the causal self-attention of transformer-based language models for long sequences. The method uses a kernel called SCFA that can handle any sparsity pattern and causal mask in the attention matrix. The method also introduces two dynamic schemes to sparsify the attention: one based on hashing and one based on key/query dropping. The method achieves significant runtime gains and better performance than existing methods.
Strengths: 1: The work strives to speed up and improve the causal self-attention of transformer-based language models for long sequences, which is a relevant topic. The work conducts a thorough review of related work. And based on related work, it proposes two solutions based on hashing and dropping.
2: The work provides a clear explanation of the two proposed solutions. Besides explaining the solutions, it also discusses the overhead and edge cases. In the appendix, it shares the algorithm and implementation in detail.
3: The work explains the experiment in great detail, and also shows a significant speed improvement by applying the proposed solution.
Weaknesses: 1: One more experiment with a larger model would be great.
2: One more experiment on another task, such as text classification, would be great.
3: Agreed that, as the paper states, a better dropping scheme is beyond the scope of this work. But it might improve the work further, e.g. by achieving a higher dropping rate at the same speed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: When you run the comparison, the proposed solution runs on 2 or 3 A100s, while the Reformer always runs on a single GPU. Is this correct? If yes, why?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your review and your appreciation of our work!
> When you run the comparison, the proposed solution runs on 2 or 3 A100s, while the Reformer always runs on a single GPU. Is this correct? If yes, why?
Given our limited resources, we decided to use our multi-GPU infrastructure to experiment with language modeling tasks, which typically rely on training large models. | Rebuttal 1:
Rebuttal: We thank all reviewers for taking the time to review our work. We improved and optimized our Hash-sparse kernel and new results are shown in the figures provided with this rebuttal. Compared to our previous version, still without sacrificing perplexity, we further increase the training speed of a transformer language model from $1.8\times$ to $2.0\times$ and from $2.3\times$ to $3.3\times$ for sequences of respectively $8k$ and $16k$ tokens.
Pdf: /pdf/9562eb27bf615a476f1e107e810ae82011f316aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a method for fast causal attention by combining dynamic sparse attention (query & key dropping or from hashing) with FlashAttention, called SCFA (Sparse Causal FlashAttention). The paper shows that with the right preprocessing, attention with query & key dropping (QK-sparse) can also be done efficient in the style of FlashAttention, by only loading blocks of keys and values necessary for the computation. Similarly, with hashing-based sparsity (where only attention entries of queries and keys in the same hash buckets are computed), by careful manipulation of the indices, one can also load only the blocks of keys and values necessary, avoiding unnecessary memory reads/writes. These ideas are validated empirically, showing significant speedup compared to naive sparse attention, as well as speedup against FlashAttention for long sequence lengths. Training models with SCFA leads to similar quality but faster wall-clock time.
Strengths: 1. Impressive wall-clock speedup, which could have significant impact on how sparse attention can be used in practice. One of the main issues with approximate attention (e.g., sparse attention) is that they don't necessarily bring wallclock speedup. With models being trained at much longer sequence lengths (e.g. 32k-100k), approximate attention might be a necessity, and this paper takes a step towards making this easier for practitioners.
2. The hashing-based sparsity proposed in the paper seems to work better than the hashing method from Reformer, with higher coverage. This leads to higher model quality, as validated by the experiments.
Weaknesses: 1. The motivation for QK-sparse attention could have been explained better. I'm personally not as familiar with this method. Why would one want to drop query & key?
2. The technical challenges could have been highlighted better. E.g. explaining why it is hard to efficient implement attention when the sparsity pattern is dynamic, would help readers understand the paper better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is the focus on causal attention? Do these ideas apply to non-causal attention (e.g., in BERT, ViT)?
2. For hashing-based sparsity, if there are multiple hashing rounds (e.g. as in Reformer), the same key & query could end up in the same buckets in multiple rounds. How does one avoid over-counting in this case? Can that be done efficiently?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not necessary
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable time spent reviewing our paper!
> The motivation for QK-sparse attention could have been explained better. I'm personally not as familiar with this method. Why would one want to drop query \& key?
There is a large body of work [1,2,3,4,5] suggesting it is possible to drop heads or tokens without degrading performance on downstream tasks. However, those works only focus on speeding up inference and usually add costly mechanisms during training. Our contribution is to propose a sparse pattern that is more general than those works, yet allows speed gains during training. We will add a paragraph to better justify the QK-sparse pattern in the next revision.
> The technical challenges could have been highlighted better. E.g. explaining why it is hard to efficient implement attention when the sparsity pattern is dynamic, would help readers understand the paper better.
Thanks for raising this point. We will better highlight the difficulties of efficiently implementing dynamic sparse patterns in the next revision of our paper.
> Why is the focus on causal attention? Do these ideas apply to non-causal attention (e.g., in BERT, ViT)?
You are right that the proposed methods can be extended outside of autoregressive models simply by removing the causal masking. We preferred to focus on causal attention as it's where a lot of the heavy, large model pre-training is currently done, and where efficiency gains seem to be most beneficial and relevant, especially on long sequences.
> For hashing-based sparsity, if there are multiple hashing rounds (e.g. as in Reformer), the same key \& query could end up in the same buckets in multiple rounds. How does one avoid over-counting in this case? Can that be done efficiently?
For this work, we focused on a single round of hashing. We already planned to extend this to multi-round hashing, and we believe that we can avoid the double-counting issue efficiently in our kernel, with the same mechanism that is used in the Reformer implementation [2].
References:
[1] Michel, P., Levy, O., and Neubig, G.: Are sixteen heads really better than one?
[2] Voita, E., Talbot, D., Moiseev, F., Sennrich, R., and Titov, I.: Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned.
[3] Behnke, M. and Heafield, K.: Losing heads in the lottery: Pruning transformer attention in neural machine translation.
[4] Goyal, S., Choudhury, A. R., Raje, S., Chakaravarthy, V. T., Sabharwal, Y., and Verma, A.: Power-bert: Accelerating BERT inference via progressive word-vector elimination.
[5] Wang, H., Zhang, Z., and Han S.: Spatten: Efficient sparse attention architecture with cascade token and head pruning.
[6] Kitaev, N., Kaiser, L., and Levskaya, A.: Reformer: The efficient transformer
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the explanation. I remain supportive of the paper. | null | null | null | null | null | null |
Attacks on Online Learners: a Teacher-Student Analysis | Accept (poster) | Summary: This paper sheds light on the vulnerabilities of online learners to adversarial attacks and provides insights into the manipulation of learning dynamics. The findings highlight the need for robust defenses against such attacks and contribute to the growing body of research on data poisoning attacks in machine learning.
Strengths: 1. The paper provides a theoretical analysis of online data poisoning in the teacher-student setup, which is a popular framework for studying machine learning models in a controlled way.
2. The paper compares different attack strategies and finds that properly calibrated greedy attacks can be as effective as attacks with full knowledge of the incoming data stream.
3. The authors empirically study online data poisoning on real datasets, such as MNIST and CIFAR10, using various architectures like LeNet, ResNet, and VGG.
Weaknesses: 1. This paper primarily focuses on linear regression models and does not extensively explore the impact of adversarial attacks on other types of machine learning models.
2. This paper does not provide specific examples of real-world scenarios where adversarial attacks on online learners can have significant consequences.
3. This paper lacks an extensive discussion of potential defense mechanisms or countermeasures to mitigate the impact of these attacks.
4. This paper does not thoroughly compare its findings with existing literature on attacks on online learners.
5. The practicality of the studied threat model should be discussed in more detail.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How do different attack strategies compare in terms of efficacy?
2. Are attacks subject to time constraints and do they involve transient dynamics?
3. Does the analysis take into account temporal correlations within the data stream?
4. Does the model's robustness improve or decrease with over-parametrization?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper is missing a discussion of limitations and negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive criticism and the relevant questions. Please see below for our response.
**Weaknesses**
This paper primarily focuses on linear regression models and does not extensively explore the impact of adversarial attacks on other types of machine learning models.
* The problem of online adversarial attacks has not been addressed theoretically before, so we decided to start with the simplest possible setting: linear models. This allowed us to obtain exact and explicit results, which can help develop intuitions about the problem. Note that various deep learning phenomena have recently been investigated by means of linear models (feed-forward neural networks [1], self-supervised learning [2,3], pre-training [4], implicit bias [5,6]).
We agree with the reviewer that it is crucial to verify whether the insights obtained theoretically carry over to more realistic settings. For this reason, we performed extensive experiments with deep networks (LeNet, VGG, ResNet) on MNIST and CIFAR10. We found that key insights from our theoretical analysis, e.g. the steady-state behavior of the learner under attack, are replicated in these settings.
Additionally, we have now expanded our empirical evaluations using Adam (see Fig. A, B in the attached document), thus exploring even more realistic use cases.
Examples of real-world scenarios.
* Several practical use cases motivate our analysis. In federated learning, for example, malicious nodes in the federation can modify the labels before sending the model updates to the central server for aggregation. Label attacks can also occur in online learning settings where models are fine-tuned on the fly using user-generated data, such as email spam filters, product recommendation engines, chatbots, and fake review detectors. The online setting makes adversarial attacks extremely dangerous, since the attackers can monitor the effect of previous poisoning on the learner and make adaptive decisions on how to poison next.
Defense mechanisms.
* Our contribution aims to establish the fundamental behavior of online learners under attack. This problem has not been addressed before theoretically, and the analysed baseline already exhibits rich and non-trivial phenomena. Therefore, a comprehensive analysis of defence strategies would extend beyond the scope of the current paper. Nevertheless, we note that our analysis of attacks that only perturb a fraction $\rho$ of the data (see answer to reviewer 7DCo) is suitable for describing a defence strategy combining the poisoned data with a trusted source of clean data. We found that the attack strength decreases by a factor $\rho$ in this case.
Compare findings with existing literature.
* We did an extensive search for studies addressing the problem of online label poisoning but could not find any relevant paper for benchmarking our methods. We will be happy to consider any additional reference in case the Reviewer had specific suggestions.
The practicality of the studied threat model should be discussed in more detail.
* We consider targeted attacks that aim for the ML model to learn a “nefarious” target function $\phi^*$. The attacker perturbs the data labels to do so, so our setting can be considered a generalisation of label-flipping attacks. We consider attackers with different adversarial capabilities: RL agents have no knowledge of the learning algorithm used to train the ML model but can observe the model architecture, while greedy attacks assume full knowledge of the learning rule and model parameters.
**Questions**
How do different attack strategies compare in terms of efficacy?
* We compared the efficacy of different attack strategies in terms of the average running costs achieved at steady state (see Eq. (16) in the paper). The results are shown in Fig. 3 (paper, main text). Our evaluations indicate that clairvoyant (CV) attacks are the most efficient. This is expected as CV attacks have the advantage of seeing the future data stream upfront. Remarkably, greedy attacks almost match the clairvoyant efficiency in all settings that we considered. RL attacks perform well too when the agent can observe the entire state of the system, and become less effective otherwise (Fig. C, attached document).
Transient dynamics and temporal correlations.
* These are indeed interesting scenarios to be explored. However, they require a significantly different setup compared to our analysis, where a key assumption is that the attacker tries to minimise the expected cost at steady state. Nevertheless, we shall note that our optimal control formulation can be re-cast in a finite-horizon scenario, which would be suitable to study time-constrained attacks and data exhibiting correlations in time. We have a comment in the concluding section of the paper that includes the above considerations.
Does robustness improve or decrease with over-parametrization?
* Our real-data experiments suggest that the robustness of ML models may decrease for highly parametrised architectures compared to simple ones (see the results obtained for a logistic regression and LeNet on the same dataset, Fig. 4-A in the paper). A thorough analysis of the role of over-parametrization represents yet another interesting research direction that could be addressed in a dedicated study.
**Limitations**
* In the concluding section, we present several limitations of our analysis, including the assumption of stationary data, the infinite-horizon setup, and the unexplored role of model complexity. In the final version of the paper, we will expand this section to include the limitations associated with the type of attacks (targeting the labels), and the threat model (white/black box attacks).
References
[1] Saxe et al. ICLR ‘13;
[2] Tian, Chen, Ganguli, ICML ‘21;
[3] Jing, Vincent, LeCun, Tian, ICLR ‘22;
[4] Wu, Zou, Braverman, Gu, Kakade, NeurIPS ‘22;
[5] Moroshko, Woodworth, Gunasekar, Lee, Srebro, Soudry, NeurIPS ‘21;
[6] Li et al. NeurIPS ‘22. | Summary: Data poisoning attacks have been extensively studied in the offline setting, while poisoning attacks in the online setting have received little attention. In the online setting, the attacker is forced to craft attacks exploiting the data stream, taking into account the state of the model and the possible future data stream to divert the model. This work aims to study theoretically and empirically the learning dynamics of an online learner subject to a poisoning attack in the white-box setting, i.e., the attacker knows the architecture and the current parameters of the target model. The problem of finding the best attack policy is formalized as a stochastic optimal control problem and is theoretically studied in the teacher-student setup. From that, the authors were able to obtain results about the steady state of an attacked linear regression model, i.e., the model weights, the optimal action and the distance of the target model depending on the cost of the attack, supposing both infinite and finite size batches. Furthermore, the work shows analytically that a greedy strategy can perform quite optimally. The experimental evaluation confirms the theoretical results on synthetic data and explores the dynamics of online data poisoning attacks on real data and non-linear predictors, showing that the theoretical findings almost hold in more complex scenarios.
Strengths: This paper makes a valuable contribution to the literature on data poisoning attacks, where there has been limited work in online settings. In particular:
- The idea of analyzing the attacker’s control problem from the perspective of the teacher-student framework represents the strong main novelty of the paper, since the formulation of the attack problem as an optimal control problem has been discussed before. The analysis allows the authors to derive interesting results about the dynamics of the learning process of the model and the steady state of a linear regressor that have not previously been discussed in the literature. It is interesting to see that above a certain attack strength the accuracy of the attacked model may drop dramatically and perturbations above the threshold may not be sufficient to realize the attack.
- The set of considered attack strategies sufficiently covers the possible options available to the attacker. The idea of showing that a greedy attack can be as effective as an attack with full knowledge is valuable.
- The theoretical results are confirmed empirically, point by point. Moreover, the experimental results on non-linear architectures and real data suggest that the theoretical results may also hold in more complex scenarios, which is an interesting finding.
- The derivation of the theoretical results is explained in the appendix and the well-organized code to reproduce the experimental results is available (even though more instructions to reproduce the results may be added, like the Python version used by the authors to run the experiments).
Weaknesses: - The theoretical analysis is limited to consider linear regression as the student model, the standard normal distribution as input distribution and an infinite time horizon, so the theoretical results are a bit limited. However, it is recognized by the authors that more general results are missing because deriving them in a more general scenario is hard. Furthermore, the authors tried to derive some intuitions about the results in a more complex scenario through the experimental evaluation. Covering the theoretical analysis in a more complex setting and with a finite time horizon is an interesting future work to explore.
- It is not clearly motivated why the attacker can only modify the labels instead of the entire inputs and why the attacker should be able to apply the same perturbation to the entire batch (which turns out to be a very strong assumption). Data poisoning attackers should assume little control over the data to be considered practical [P1, P2].
- The presentation of the theoretical results is a bit confusing, since it is interleaved with the presentation of other material.
- The presentation of the attack strategies and their implementation lack details that cannot be found in the appendix. See the “Questions” section for more details.
- From the paper, it is not evident what the take-home message is and what suggestions are provided to improve robustness in stream pipelines or make attackers more stealthy. The paper presents numerous useful analyses and extensive experimental settings but lacks a "Take-home message/Discussion" section that summarizes the overall findings and suggestions. While the authors have reported some of these in the conclusion section, I believe they can be further expanded with higher priority/attention.
[P1] Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. In IEEE Symposium on Security and Privacy (2022).
[P2] Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. ACM Computing Surveys (2022).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Major questions:
- Why are the clairvoyant and RL attacks infeasible to deploy for complex models? Is it due to the time required for the calibration? This is not very clear from the text.
- In Eq. 1, please clarify why it should be important to model an attacker that “under/overshoots”. Why can the attacker obtain a perturbed label that resides outside the interval between the clean and target labels?
- I think that more details about the RL attack are needed. How is the problem of finding the optimal sequence of actions mapped to the RL problem? Which are the reasons for choosing to utilize TD3 (and a deep learning policy in general) instead of other RL approaches?
- What is the utility of $\tilde{\gamma}$ in the objective function minimized by the greedy attack? Is it used to address the fact that the attacker should not simply mount the most effective attack at the subsequent time step, but has to take into account the entire future stream of instances? How does the calibration of $\tilde{\gamma}$ work? I invite the authors to add more details about this.
- I suggest that the authors add a "Take-home-messages/Discussion” section that summarizes the theoretical and empirical results and highlights how the results can be useful for designing a poisoning attack or defending against it.
- The authors should clarify their threat model in the paper and where it may apply.
Minor questions:
- Are the results of Figures 1-B and 1-C obtained from the experimental evaluation? If yes, the setting used for obtaining these results should be indicated in the text. I may have missed it, so I kindly ask the authors to point me to the lines containing this information if it is included in the paper.
- Should $g^{\mathrm{nef}}$ be in the place of one of the $g^{\mathrm{per}}$ terms in Eq. 16?
Finally, I kindly ask the authors to fix the links to the equations that appear in the text, since they do not currently work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have briefly discussed some limitations of their work in the conclusion section, and they have clearly stated that extending their analysis to non-linear architectures and different probability distributions of the inputs is challenging. However, I think that a discussion about the limitations of the considered threat model is missing and I invite the authors to cover this point, by discussing, for example, why the attacker should only modify the labels of the instances. Moreover, I ask the authors to include a “take-home-messages/discussion” section to summarize and clarify the obtained results and their implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive criticism and the relevant questions. Please see below for our response.
**Weaknesses**
Attacker only modifies the labels instead of the entire inputs.
* The case of online poisoning of the labels has not been addressed before, despite its relevance in several practical applications (federated learning, learning from user-generated data, etc), so our aim is to provide a first contribution to fill this gap. Note that attacks on the labels have a reduced computational cost compared to high-dimensional adversarial perturbations of the inputs, which makes label poisoning more convenient in a streaming scenario.
Perturbation of the entire batch.
* While the online setting requires the attacker to keep perturbing the data over time (as otherwise the perturbation may be forgotten by the learner), it is true that the attacker may be able to perturb only a fraction of samples in the batch. Our analytical derivation can be generalised to accommodate this scenario. In the linear TSA problem, we find that greedy attacks lead to an average steady-state distance given by ${(C P / (\rho P + 2) + 1)}^{-1}$, with $\rho$ the fraction of corrupted samples, which for large $P$ reduces to $(C/\rho + 1)^{-1}$. This result indicates that the attack strength effectively decreases by a factor $\rho$. Our experimental evaluations shown in Fig. E (attached file) support this finding. This result suggests that a simple defence mechanism consisting of mixing poisoned data with clean examples from a trusted source (in case one is available) could improve the model robustness.
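The large-batch limit quoted above can be checked numerically. The sketch below (function names are ours, for illustration only, not from the paper's code) evaluates the finite-$P$ expression against its limit:

```python
def steady_state_distance(C, P, rho):
    """Finite-P expression quoted above: (C*P / (rho*P + 2) + 1)^(-1),
    for a greedy attack that poisons a fraction rho of each size-P batch."""
    return 1.0 / (C * P / (rho * P + 2) + 1.0)


def steady_state_distance_large_P(C, rho):
    """Large-P limit of the expression above: (C/rho + 1)^(-1)."""
    return 1.0 / (C / rho + 1.0)
```

Since the limit depends on $C$ and $\rho$ only through the ratio $C/\rho$, this is consistent with the statement that the attack strength effectively decreases by a factor $\rho$.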
Presentation is a bit confusing.
* We will highlight more our theoretical results in a separate, dedicated section in the final version of the paper.
Implementation lacks details that cannot be found in the appendix.
* We thank the reviewer for pointing this out. We will add an Appendix in the final version of the paper to include more details about our implementation and our experiments.
**Questions**
Why Clairvoyant and RL attacks are infeasible?
* The clairvoyant setting is not feasible due to the high non-convexity of the problem. For complex architectures, finding a solution to the equations takes too long to be practical. In the RL approach, the agent observes the state of the system, given by the input data and the learner's weights, and produces an action according to its policy function. For big architectures, the state space is extremely large, and tuning the RL policy function becomes prohibitively expensive.
* Following the Reviewer’s question, we considered using a TD3 agent that only observes the last-layer weights of the network, significantly reducing the dimension of the state space. The results are shown in Fig. C,D (attached document). The agent performs better than constant attacks, though it does not match the greedy attacks. We will add these evaluations to the final version of the paper.
Clarify why it should be important to model an attacker that “under/overshoots”.
* There is in principle no reason why $a$ should be constrained within a bounded region, as the attacker’s optimization problem includes the perturbation cost proportional to $a^2$. It is up to the attacker, then, to find the best possible sequence of actions, accounting both for the perturbation and the nefarious costs.
More details about the RL attack are needed.
* RL calibrates a policy function to maximise the expected total reward of the agent by interacting with the environment. In our case, the environment is characterised by the current learner’s weights and input data, with rewards given by the negative value of the running cost. A policy that maximises the expected rewards also minimises the running cost of the attacks, finding a solution to the attacker’s optimal control problem. We employed TD3 because it is a powerful agent that requires little tuning of the hyperparameters and can handle continuous action spaces.
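As an illustration of this mapping, a minimal toy environment might look as follows. The state (learner's weights plus current input), the action (a label perturbation), and the reward (negative running cost) follow the description above, but the linear dynamics and the specific cost terms are our assumptions for the sketch, not the paper's equations:

```python
import numpy as np


class LabelPoisoningEnv:
    """Toy sketch of the environment an RL attacker interacts with.

    Illustrative only: a linear teacher produces clean labels, the attacker's
    action a interpolates the label towards a nefarious target, and the online
    learner takes one SGD step on the poisoned label at each interaction.
    """

    def __init__(self, dim=5, lr=0.1, C=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.rng, self.lr, self.C = rng, lr, C
        self.w_teacher = rng.standard_normal(dim)  # generates clean labels
        self.w_target = rng.standard_normal(dim)   # nefarious target function
        self.w = np.zeros(dim)                     # online learner's weights
        self.x = rng.standard_normal(dim)          # current input

    def step(self, a):
        """Apply action a, update the learner, return (state, reward)."""
        y_clean = self.w_teacher @ self.x
        y_target = self.w_target @ self.x
        y_poisoned = y_clean + a * (y_target - y_clean)
        # one SGD step on the squared loss with the poisoned label
        err = self.w @ self.x - y_poisoned
        self.w -= self.lr * err * self.x
        # running cost: perturbation cost plus distance of learner from target
        cost = self.C * a ** 2 + float(np.sum((self.w - self.w_target) ** 2))
        # draw the next input and expose the full state to the agent
        self.x = self.rng.standard_normal(self.w.shape[0])
        state = np.concatenate([self.w, self.x])
        return state, -cost
```

A policy maximising the expected sum of these rewards minimises the running cost of the attack; any agent for continuous action spaces (such as TD3) can be trained against `step`.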
More details about $\tilde{\gamma}$.
* Calibrating $\tilde{\gamma}$ helps the greedy attacker overcome the limitation of a strategy that only looks one step ahead, accounting for the long-run effects of its actions. To calibrate $\tilde{\gamma}$, the attacker uses past observations to simulate data streams, and evaluates the total cost of the simulated trajectories associated with different values of $\tilde{\gamma}$. Averaging over many data streams, the attacker finds the value of $\tilde{\gamma}$ that minimises the (simulated) total expected cost.
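This calibration can be sketched in a toy one-dimensional setting, where the one-step greedy objective is quadratic in the action and so has a closed-form minimiser. All dynamics, costs, and constants here are illustrative stand-ins, not the paper's equations:

```python
import numpy as np


def greedy_action(w, x, w_teacher, w_star, eta, C, gamma):
    """One-step greedy action. The learner's next weight is linear in a,
    w_next(a) = b + m*a, so minimising C*a^2 + gamma*(w_next(a) - w_star)^2
    over a has the closed-form solution returned below."""
    b = w + eta * x * x * (w_teacher - w)
    m = eta * x * x * (w_star - w_teacher)
    return -gamma * m * (b - w_star) / (C + gamma * m * m)


def simulated_cost(gamma, T=2000, C=0.5, eta=0.1, seed=0):
    """Average running cost of a greedy attack with lookahead weight gamma
    on one simulated stream (toy stand-in for the attacker's simulation)."""
    rng = np.random.default_rng(seed)
    w_teacher, w_star, w = 1.0, -1.0, 1.0
    total = 0.0
    for x in rng.standard_normal(T):
        a = greedy_action(w, x, w_teacher, w_star, eta, C, gamma)
        y = w_teacher * x + a * (w_star - w_teacher) * x  # poisoned label
        w += eta * x * (y - w * x)                        # learner's SGD step
        total += C * a * a + (w - w_star) ** 2            # running cost
    return total / T


# the attacker picks the lookahead weight with the lowest simulated cost
grid = [0.5, 1.0, 2.0, 4.0, 8.0]
best = min(grid, key=simulated_cost)
```

With `gamma = 0` the attacker never acts and pays the full nefarious cost at every step; any positive lookahead weight in this toy setting does strictly better.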
The authors should clarify their threat model.
* We consider targeted attacks that aim for the ML model to learn a “nefarious” target function $\phi^*$. The attacker perturbs the data labels to do so, so our setting can be considered a generalisation of label-flipping attacks. We consider attackers with different adversarial capabilities. In particular, RL agents have no knowledge of the learning rule and observe part or all of the learner's architecture, while greedy attacks assume full knowledge of the learning rule and model parameters.
Missing details of results of Figures 1-B and 1-C.
* For those experiments, we considered streams of $10^4$ batches of size $P=1$, learning rate $\eta=0.1$, and actions $a \in [0, 1]$. We will add these details to the paper.
$g^{\mathrm{nef}}$ in the place of one of the $g^{\mathrm{per}}$ in Eq. 16?
* Yes. Typo fixed.
**Limitations**
We thank the Reviewer for the insightful suggestions. We will expand the concluding section to include the limitations associated with the type of attacks (targeting the labels) and the threat model (white/black box attacks). We will also include a take-home-message section summarising our results and clarifying their implications, incorporating the Reviewers' feedback and any valuable insights arising from the discussion period.
We hope that our answers have clarified the reviewer’s concerns and will provide grounds for a more favorable assessment of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clear response to my concerns.
Similarly to other reviewers, I initially held reservations regarding the considered threat model and the current scope of the proposed work. However, despite these reservations, I have chosen not to lower my original score. This decision is primarily influenced by the significant and noteworthy experimental findings that the authors have presented in their rebuttal.
In conclusion, the paper will require several clarifications in its final iteration. These clarifications should encompass aspects such as the delineation of the threat model, underlying assumptions and limitations, and the rationale behind the chosen attack strategy. As pointed out by all reviewers, these elements are pivotal to enhancing the clarity and rigor of the paper. Furthermore, a more seamless integration of the outcomes presented in the main body of the paper with those provided in the appendix is advised for achieving a coherent narrative.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and the positive evaluation of our supplementary findings in the rebuttal. We are pleased that our response effectively addressed the reviewer's concerns. Overall, we believe that our submission has significantly benefited from the reviewers' feedback. In the final version of the manuscript, we will incorporate the clarifications mentioned by the reviewer, as we discussed in our rebuttal. | Summary: This paper analyzes the robustness of online learners when the labels of the received data are manipulated. In particular, it analyzes a student-teacher online learning problem where the attacker poisons the labels provided by the teacher before feeding the student the labeled batch. The setup is analyzed both theoretically and experimentally where with the increase of the attack budget, the performance of the learner degrades significantly.
Strengths: The main strengths of this work are:
(1) The paper is well-written and the figures are polished.
(2) The setup is clearly explained and the theoretical analysis on synthetic data support the results.
Weaknesses: There are few weaknesses that I hope to be addressed before getting this paper accepted:
(1) The paper is missing a strong practical motivational example from real world scenarios when attackers have access to manipulate the *labels* before feeding the labeled batches to the learner.
(2) While the proposed problem setup studies an online learning scheme, the attacker is allowed an unbounded computational budget to manipulate the labels. However, in online learning, the environment can reveal new batches *irrespective* of how efficient the attacker is in manipulating the previously revealed batch. Note that this assumption (giving unlimited computation to the attacker) is not realistic, and it calls into question the effectiveness of the proposed attack.
(3) Experimental evaluation are conducted on very small scale problems and datasets. I would expect evaluating the results on online learning datasets such as CLOC [A] and Firehose [B].
(4) The analyzed attackers should be evaluated when a defense mechanism is presented. For instance, would a learner upon convergence still update all model parameters on each received data? What happens if the learner stored labeled examples from the stream before the poisoning phase starts and update on mixed batches?
[A] Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data, ICCV 2021.
[B] Drinking from a Firehose: Continual Learning with Web-scale Natural Language, 2020.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In addition to the points raised in the weaknesses part, I have the following questions:
- To guarantee that the attack is imperceptible (in case of perturbing the input), the attacker has to produce the perturbations under budget constraints (e.g. $\|\delta\|_\infty \leq \frac{8}{255}$). How is this constraint translated to the proposed setup?
- The perturbed label in Equation (1) could produce a real number for integer valued labels. Couldn't this be a simple check for the learner to reject such samples from training on them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not addressed the limitations of this work such as the practicality of the proposed setup, the limited experimental setup (problem and dataset scale) along with how expensive is it to conduct such attacks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive criticism and the relevant questions. Please see below for our response.
**Weaknesses**
Examples of real world scenarios where attackers have access to manipulate the labels before feeding the labeled batches to the learner.
* **There are several practical use cases where attackers have access to labels before the learner, motivating our analysis**. (i) In federated learning, data are collected independently by each node in the federation, and malicious nodes could modify the labels before providing the model updates to the central server for aggregation. Even if data are distributed from the central server to the nodes, when collecting the model updates the central server would have no access to the labels used locally to generate the updates. (ii) Label attacks can also occur in online learning settings where models are fine-tuned on the fly using user-generated data, such as email spam filters, product recommendation engines, chatbots, and fake review detectors. We will add these examples to the introductory section of the manuscript. Note that, in streaming scenarios, the attacker can monitor the effect of previous poisoning on the learner and make adaptive decisions on how to poison next. This adaptability, intrinsic to the online setting, makes attacks on online learners potentially extremely dangerous.
...the attacker is allowed for an unbounded computational budget to manipulate the labels.
* We agree with the reviewer that an unbounded computational budget for attacking the labels is unrealistic. Our proposed attack strategies have a small computational cost for deployment, however, as they only require one forward-and-backward pass on the learner architecture for the greedy attacks, and one forward pass on the TD3 agent for RL-based attacks. Moreover, we are considering attacks that target data labels, which have a reduced computational cost compared to high-dimensional adversarial perturbations of the inputs. These attacks might still be infeasible when data arrives with high frequency, so we will add a comment in the paper to reflect this limitation. To address this problem, we have extended our theoretical analysis to the case where the attacker poisons only a fraction $\rho$ of each data batch (see answer to reviewer 7DCo). The result indicates that the attack strength effectively decreases by a factor $\rho$. Our experiments support this finding (see Fig. E in the attached file).
... evaluating the results on online learning datasets such as CLOC [A] and Firehose [B].
* We agree with the reviewer that it would be interesting to explore attacks when clean data exhibits a natural covariate shift, as is the case in the CLOC and Firehose data sets, and we will explore this direction in future work. For this paper, our focus is on stationary distributions, and our theoretical analysis is based on this assumption. The goal of our experiments with deep architectures (LeNet, VGG, ResNet) on realistic datasets like MNIST and CIFAR10 was simply to verify our theoretical predictions in cases where data has non-trivial correlations and the optimisation problem is non-convex, which is standard practice in papers with a theory focus at venues like NeurIPS.
What happens if the learner stored labeled examples from the stream before the poisoning phase starts and update on mixed batches?
* We agree that it is important to design defence mechanisms against online adversarial attacks. However, the purpose of this paper was to establish the baseline behaviour of online learners when such attacks are present. This, to the best of our knowledge, was not known and it does already exhibit rich and interesting behaviours. Therefore, a thorough analysis of defence strategies would go beyond the scope of the present paper.
A simple strategy that mixes (new) poisoned data with clean samples makes the problem equivalent to the case where only a fraction $\rho$ of the data is perturbed by the attacker. Thus, our result mentioned above applies: the attack strength decreases by a factor $\rho$. Note that, to implement this defence mechanism, simply storing data from before the attacks might be challenging, since the victim cannot easily tell when the attacks start. It would instead be feasible in the context of federated learning.
How is this constraint translated to the proposed setup?
* The budget constraint is intrinsic in the setup of the optimal control problem as the attack strength (1/C), which effectively sets the amplitude of the attacks by affecting the perturbation cost. A large attack strength implies large perturbations. The question then is how small C must be to achieve a certain goal (for example making the accuracy low). Similarly, given a value of C, how close do different attack strategies get to a desired target? Our analysis addresses these questions.
...a real number for integer valued labels. Couldn't this be a simple check for the learner to reject such samples from training on them?
* A simple way to address this limitation is to discretise the actions. For example, greedy attacks could use the action $a^G_{\mu}$ on average, replacing a fraction $\rho=a^{\mathrm{G}}_{\mu}$ of the labels in each batch with $y^*$, and leaving the remaining clean labels unperturbed.
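A minimal sketch of this discretisation (names are illustrative): replace a fraction $a$ of the labels in the batch with the target label and leave the rest clean, so that the average perturbation matches the continuous action.

```python
import numpy as np


def discretise_action(labels, y_target, a, rng):
    """Replace a fraction a of the batch labels with the target label,
    leaving the remaining labels unperturbed. On average, the perturbation
    applied to the batch matches the continuous action a."""
    labels = np.asarray(labels, dtype=float).copy()
    P = labels.shape[0]
    k = int(round(a * P))                       # number of labels to replace
    idx = rng.choice(P, size=k, replace=False)  # which labels to replace
    labels[idx] = y_target
    return labels
```

Every emitted label is then a valid (e.g. integer-valued) label, so a feasibility check on individual labels cannot reject the poisoned samples.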
**Limitations**
In the final version of the paper, we will expand the concluding section to include the limitations associated with the practicality of the proposed attacks (high frequency streams represent a challenge), and with the threat model (label attacks). We will also include the case of distributions with covariate shift and designing defence strategies as important directions for future work.
We hope our answers have clarified the reviewer’s concerns and will provide grounds for a more favorable assessment of the paper; we would be happy to respond to further concerns during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Dear Authors
I would like to thank you for the efforts put into writing the rebuttal. However, several concerns of mine are still unresolved.
While I understand that the nature of this work is to establish baseline behaviors of online learners, it is still required to show the importance of this problem when the learner is somewhat smart. That is, one should consider *at least* simple defenses and show that such approaches fail in defending against the proposed attack.
Second, since the problem setup tackles online learning, I am not sure about the validity of the results when considering stationary and small scale distributions such as CIFAR10 and MNIST. Hence, one should include distribution-changing datasets that are designed specifically for online learning (such as the one mentioned in the review) and study the problem there instead of small scale datasets.
Third, regarding the unbounded computational budget: while I appreciate this extension, one should study the effect of making $\rho$ a function of how expensive the developed attack is. Further, it is important to conduct the empirical experiments under this setting as well.
That being said, I think that this work is not ready to be accepted and hence I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for the response. We are happy that our rebuttal addressed, at least partially, the Reviewer's concerns. We would kindly request the Reviewer to consider the following points before reaching their final decision:
* **Simple defense mechanisms**. The design and effectiveness of defense strategies greatly rely on the specific attack scenario. For instance, in the context of federated learning, anomaly detection filters might be employed on weight updates by the central server. Conversely, in centralized learning, data may undergo scrutiny, such as feasible-set projection, to ensure label integrity (an example is checking that labels are integers, as discussed above). While we recognize the importance of addressing these instances, the study of defense mechanisms falls outside the scope of our current work. Our primary focus is on offering a broader understanding of label attacks on online learners, serving as a foundational framework for future explorations.
* **Non-stationary datasets**. We appreciate the Reviewer's suggestion. We would like to point out that the experiments we carried out are very well aligned with the related literature [1, 2, 3, 4]. We will carefully consider non-stationary distributions as a possible direction for future work, as we acknowledge the relevance of the proposed analysis.
* **$\rho$ experiments**. We would like to point out that, in our rebuttal, we did run empirical experiments using MNIST (see the attached file). Due to time constraints during the rebuttal phase, we restricted our analysis to this dataset and a simple logistic regression model. We are currently running the $\rho$ experiments for all empirical scenarios outlined in our draft, and we will include the results in the final version of the manuscript.
Once again, we thank you for your valuable feedback.
[1] Pang, T. et al. Accumulative Poisoning Attacks on Real-time Data (NeurIPS, 2021).
[2] Lee, S. et al. Continual Learning in the Teacher-Student Setup: Impact of Task Similarity (ICML, 2021).
[3] Zhang, X. et al. Online Data Poisoning Attacks (L4DC, 2020).
[4] Wang, Y. and Chaudhuri, K. Data Poisoning Attacks against Online Learning (arXiv, 2018). | Summary: The authors conduct theoretical analysis and empirical evaluation of poisoning attacks in the online learning setting, where the attacker can intervene in the labels of sequentially provided data. As a result of the theoretical analysis, the authors show that the strength of the attack becomes discontinuously stronger as the batch size asymptotically approaches infinity. It is also shown that even a greedy attack can be as strong as that in the clairvoyance setting under an appropriate condition.
Strengths: It introduces a new problem setup for poisoning, and although the analysis is performed in a very simple setting, it leads to interesting results.
It is interesting that a properly designed greedy attack is as effective as an attack in the clairvoyant setting.
Theoretical analysis utilizing optimal control theory is not very common, and it is useful to introduce such methods.
Weaknesses: There is little discussion of threat models and attacker models when viewed as a security issue. There is no disagreement that poisoning in an online setting is a serious threat, but the paper does not discuss where the significance of considering this type of dirty label attack lies and whether it could be a significant risk. In particular, I wonder if there is any point in dealing with a setting where the batch size is infinite in an online setting.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In equation 4, is a parameter to adjust the balance between g^per and g^nef necessary?
In equation 6, when d_mu(C,P)=1, it cannot be said that the student's prediction and the target's prediction coincide since they merely coincide in their expectations.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The subject is an attack on a learning system and it can affect society in a malicious way, but it is obvious that the research interest is in the theoretical aspect of the behavior of the system under attack.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive feedback. Please see below for our response.
**Weaknesses**
There is little discussion of threat models and attacker models when viewed as a security issue. There is no disagreement that poisoning in an online setting is a serious threat, but the paper does not discuss where the significance of considering this type of dirty label attack lies and whether it could be a significant risk.
* There are several practical use cases that motivate our analysis. In federated learning, where data are collected independently by each node in the federation, malicious nodes could modify the labels before providing the model updates to the central server for aggregation. Label attacks can also occur in online learning settings where models are fine-tuned on the fly using user-generated data, such as email spam filters, product recommendation engines, chatbots, and fake review detectors. Note that, in streaming scenarios, the attacker can monitor the effect of previous poisoning on the learner and make adaptive decisions on how to poison next. This adaptability, inherent to the online setting, makes attacks on online learners potentially extremely dangerous. We will expand the introductory section of the manuscript to include these examples.
In particular, I wonder if there is any point in dealing with a setting where the batch size is infinite in an online setting.
* The Reviewer raises a good point: in practical contexts, it is unlikely that streaming data would arrive in batches of infinite size. We considered the case of infinitely large batches ($P\to\infty$) as an approximation for cases where $P$ is sufficiently large to make batch averages effectively equal to averages over the data distribution, a limit that has recently received much theoretical interest [1-3]. This reduces the optimal control problem to a deterministic one, which we could solve exactly. We would like to point out that **our experiments approach the theoretical limit of $P\to\infty$ very rapidly**. See for example Fig. 2-B in the paper: for $P=10$, the steady state accuracy of the model already gets very close to the $P\to\infty$ prediction. Moreover, in practical applications, online learning may involve an accumulation phase where multiple batches are collected before being used to update the ML model, a context where our analysis may apply.
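The speed of this convergence can be illustrated with a quick numerical check of how fast batch averages concentrate around the population average as the batch size $P$ grows. This is a generic concentration sketch using standard normal samples, not the paper's learner or data:

```python
import random
import statistics

random.seed(0)

def mean_abs_deviation(P, n_batches=1000):
    """Average |batch mean - population mean| over many batches of size P
    drawn from a standard normal distribution (population mean = 0)."""
    devs = []
    for _ in range(n_batches):
        batch = [random.gauss(0.0, 1.0) for _ in range(P)]
        devs.append(abs(statistics.fmean(batch)))
    return statistics.fmean(devs)

# The deviation shrinks like 1/sqrt(P), so even modest batch sizes sit
# close to the P -> infinity (deterministic) limit.
for P in (1, 10, 100, 1000):
    print(f"P={P:>5}: mean |batch avg - population avg| = {mean_abs_deviation(P):.4f}")
```

The output decreases roughly by a factor of 1/√10 per row, consistent with the rebuttal's observation that $P=10$ already behaves close to the $P\to\infty$ prediction.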
**Questions**
In equation 4, is a parameter to adjust the balance between g^per and g^nef necessary?
* This parameter is necessary to ensure that the two costs have the same scale, so that we can analyse the effect that the attack strength has on the steady state of the learner in a standardised fashion across architectures. In more detail, the nefarious cost, $g^{\mathrm{nef}}_{\mu}$, represents the distance between the learner and the target at training step $\mu$. This quantity can vary significantly depending on the setup of the experiment (the learner architecture, the target of the attacker, etc.), thus affecting the amplitude of the perturbations. This makes it hard to compare results from different setups. In order to mitigate this effect, the perturbation cost, $g^{\mathrm{per}}$, is weighted by a factor that has the same magnitude as the average nefarious cost. We hope this clarifies the role of $\mathcal{E}(\phi^*)$, and we are happy to answer any further questions the Reviewer may have.
In equation 6, when $d_{\mu}(C,P)=1$, it cannot be said that the student's prediction and the target's prediction coincide since they merely coincide in their expectations.
* We thank the reviewer for pointing this out. We will make the suggested correction to the sentence.
We hope that our answers have clarified the reviewer’s concerns and will provide grounds for a more favourable assessment of the paper; we would of course be happy to respond to further concerns during the discussion period.
References
[1] Ba et al. NeurIPS ‘22;
[2] Damian et al. COLT ‘22;
[3] Dandi et al. arXiv:2305.18270.
---
Rebuttal Comment 1.1:
Comment: Dear Authors
Thank you for your responses to the reviewer's questions and comments. After reading the responses, some of my questions are resolved. As the other reviewers pointed out, this work requires several clarifications reflecting the reviewers' comments before publication; nevertheless, I would like to keep my current score, since I believe this manuscript contains an interesting approach to attacks on online learning.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for the positive evaluation of our submission. We are glad the Reviewer finds our work important for online adversarial attacks, and that we resolved part of the Reviewer's questions. Although the official discussion phase has concluded, we remain available to address any additional questions the Reviewer might have.
We will of course incorporate in the final version of the manuscript all the clarifications that we provided in our rebuttal.
Thank you again for your valuable feedback. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their constructive criticism. We are thrilled to see that the reviewers broadly agree this is an important, underexplored problem, and that our theoretical and empirical analysis offers some novel, non-trivial insights. We certainly agree that the scenario we study analytically is simplified, but we would like to point out that obtaining analytical results on learning dynamics is non-trivial even with simplified models, and the insights we derive are replicated in our analysis of deep networks (VGG, ResNet) on realistic data sets (MNIST, CIFAR10).
Concretely, the reviewers pose several interesting questions (detailed inline response below); while several such questions would necessitate a significantly broader scope than afforded by this paper, we have implemented a number of suggestions, namely:
* We have analysed empirically the dynamics of attacks when the learning algorithm employed is Adam, instead of vanilla SGD, observing again a very good agreement with our theoretical insights, see Fig. A, B (suggested by reviewer rUtu).
* We extended the reinforcement learning analysis to more complex models, under the proviso of using only the last-layer weights to characterise the state of the student (unavoidable due to the very large state space resulting from using the full weight set of a DNN), with results in line with what we observed on simpler models, see Fig. C, D (addressing the questions of reviewers rUtu and 7DCo).
* We have examined analytically a simple defense mechanism/alternative scenario where only a fraction $\rho$ of labels are perturbed in each batch (as requested by reviewers 61gD and 7DCo). This can still be analysed theoretically in simple cases and reveals that the attack strength effectively decreases by a factor $\rho$.
* We will significantly expand our discussion of practical use-cases where our type of attacks can arise, namely the fine-tuning of systems learning from user-generated data and federated learning (suggested by reviewers cwDs, 61gD, and MYtD).
We hope that these extra analyses, and the detailed replies we give below, will be useful in informing a constructive discussion period, solidifying the positive outlook on the general work, and highlighting the necessity of some of the restrictive assumptions we make.
Pdf: /pdf/6a5946be3f7ae25353dc88e324b02820e7101fa7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper provides an analysis of the steady state of linear learners trained via SGD in the online setting, where they found a phase transition in terms of attack strength. Some experiments are also conducted to demonstrate the insights from the theory.
Strengths: The formulation of attacking a learning model in the online setting is clean and intuitive. The theory is also illuminating. The insight that model performance will sharply decrease if the attack strength is higher than a threshold is an interesting observation. In addition, the theory is supported by experiments.
Weaknesses: Overall, my feeling is that the paper is not strong, either empirically or theoretically. On the theory side, my main concern is that the problem might be oversimplified, so it is unclear how much of the insight carries over to more realistic settings. On the empirical side, there are quite a few simplifications in the setting analyzed, e.g., a linear model and SGD-only training. While I agree theory should start simple, it would be more convincing to see experiments in more realistic settings, i.e., to test whether the insights from the simple model hold in more practical settings; even if they do not hold, it would still be interesting to see the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If the training algorithm is not SGD, instead it is, say Adam, how would this change the analysis result conceptually? In addition, in experiment would it change anything empirically?
2. In Fig. 3, reinforcement learning and clairvoyant methods are not compared for neural nets due to their complications. I can understand clairvoyant might be infeasible due to high nonconvexity of the problem, but why RL methods are not feasible? It seems like the online adversarial attack could be a reinforcement learning problem by nature.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitation are sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive criticism and for the enthusiastic description of the strengths of our paper.
**Weaknesses**
Overall, my feeling is that the paper is not strong, either empirically or theoretically. On the theory side, my main concern is that the problem might be oversimplified, so it is unclear how much of the insight carries over to more realistic settings. On the empirical side, there are quite a few simplifications in the setting analyzed, e.g., a linear model and SGD-only training.
* To the best of our knowledge, this is the first paper to study online poisoning from the theoretical point of view. We therefore decided to start with the simplest possible setting, linear models, which have been used to study various deep learning phenomena recently (feed-forward neural networks [1], self-supervised learning [2,3], pre-training [4], implicit bias[5-6] etc.) This framework allows us to obtain exact and explicit results, which are easy to interpret and help us develop our intuition about the problem.
While I agree theory should start simple, it would be more convincing to see experiments in more realistic settings, i.e., to test whether the insights from the simple model hold in more practical settings; even if they do not hold, it would still be interesting to see the results.
* We agree with the reviewer that the insights obtained theoretically need to be verified experimentally in more realistic settings. We point the reviewer to the experiments with deep networks (LeNet, VGG, ResNet) on MNIST and CIFAR10 (Sec. 4), which confirm that the key insights from our theoretical analysis carry over to these settings, in particular the existence of a critical threshold of attack strength beyond which a catastrophic collapse in accuracy happens.
**Questions**
If the training algorithm is not SGD, instead it is, say Adam, how would this change the analysis result conceptually? In addition, in experiment would it change anything empirically?
* Following the reviewer’s suggestion, we have run simulations for the case of logistic regression classifying MNIST data using Adam. The result is shown in the attached document - see Fig. A and B. The steady state reached by the learner is very similar to what we observe using SGD, thus providing further validation for our results. Due to the limited time allowed for the rebuttal, we could obtain the results for the logistic regression case only, and we will cover all real-data experiments considered in the draft for the final version of the paper. From a theoretical point of view, Adam has -- to the best of our knowledge -- defied an analysis even in the vanilla case so far (without attacks), with most recent work on learning dynamics focusing on vanilla SGD [6-9]
In Fig. 3, reinforcement learning and clairvoyant methods are not compared for neural nets due to their complications. I can understand clairvoyant might be infeasible due to high nonconvexity of the problem, but why RL methods are not feasible? It seems like the online adversarial attack could be a reinforcement learning problem by nature.
* The clairvoyant setting is not doable due to the high non-convexity of the problem, as the reviewer points out. Running the clairvoyant non-linear solver becomes infeasible for complex architectures: finding a solution to the equations takes a long time, and since the problem is non-convex, there are no guarantees the sequence of actions converges to the optimal one.
* In the RL approach, the state space is given by the input data and the learner's weights. The latter makes an RL approach prohibitively expensive for large architectures if all the weights are trained. Following the reviewer's question, we considered an approach where the whole network is trained, but our TD3 agent only attacks the network using knowledge of the read-out weights and the inputs. The results are shown in the attached document, see Fig. C and D. The agent is able to perform better than constant attacks, though, in terms of running cost, it does not get as close to the greedy attacks as it does when it has full sight of the learner's parameters (for a comparison, see the paper Fig. 3-A,B bottom panels, and the supplementary material Fig. 5-A,B bottom panels). We will add these evaluations to the final version of the paper, along with a note clarifying why exactly clairvoyant and RL-based approaches become impractical for complex models.
We hope that our answers have clarified the reviewer’s concerns and will provide grounds for a more favourable assessment of the paper; we would of course be happy to respond to further concerns during the discussion period.
References
[1] Saxe et al. ICLR ‘13;
[2] Tian, Chen, Ganguli, ICML ‘21;
[3] Jing, Vincent, LeCun, Tian, ICLR ‘22;
[4] Wu, Zou, Braverman, Gu, Kakade, NeurIPS ‘22;
[5] Moroshko, Woodworth, Gunasekar, Lee, Srebro, Soudry, NeurIPS ‘21; Li et al. NeurIPS ‘22;
[6] Chizat & Bach NeurIPS ‘18;
[7] Goldt et al. NeurIPS ‘19;
[8] Ben Arous et al. JMLR ‘21;
[9] Veiga et al. NeurIPS ‘22. | null | null | null | null | null | null |
Asynchrony-Robust Collaborative Perception via Bird's Eye View Flow | Accept (poster) | Summary: This article proposes a robust detection algorithm to address the issue of detection errors caused by asynchronous information transmission in multi-agent collaborative perception tasks. For late fusion, a robust prediction algorithm is designed using a BEV flow map generation algorithm, which provides a method to align perception information from different time frames. Additionally, the authors claim to have introduced the first asynchronous collaborative perception dataset and tested the algorithm on this dataset as well as the DAIR-V2X dataset.
Strengths: This paper addresses an important research problem in multi-agent collaborative perception tasks and proposes a structurally concise and effective 3D detection framework. By utilizing BEV flow, the framework significantly reduces the bandwidth consumption in information transmission among different agents. To overcome the challenge of low matching accuracy in information fusion during asynchronous collaborative perception, the paper introduces a BEV flow map that can align neighboring frames of vehicles. By predicting the BEV flow, the framework aligns detection results with time intervals. Additionally, the paper presents a benchmark dataset specifically designed for asynchronous collaborative perception tasks, which contributes to the advancement of research in this field.
Weaknesses: The writing in this paper can be further improved to enhance readers' understanding of the formulas and methods described. In the main experiments, such as the benchmark comparison in the quantitative evaluation, it would be helpful to include a table that presents the comparative results for easier reference.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The experiments in this paper are not comprehensive enough, as they only validate the proposed methods on the dataset introduced in this paper, but lack testing on datasets such as V2XSet[1].
2. The research motivation in this paper focuses on the time intervals between timestamps, but it seems insufficient to be a major concern. Even if the time intervals for information transmission between agents are inconsistent, the system can still rely on prediction algorithms combined with several consecutive data transmissions. In other words, does the algorithm's performance significantly differ when the time intervals are the same versus different?
3. In the section "Adjacent frames' ROI matching," the author uses the Euclidean distance between the same ROI in two consecutive data transmissions from the same agent as the association criterion. However, when there is high vehicle density in the environment, this can significantly impact the association algorithm. How is this issue addressed?
4. Also in the section "Adjacent frames' ROI matching," how is the "feasible angle range" defined in line 213?
[1]. Xu, et al. V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer. ECCV 2022.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed algorithm in this paper seems to be affected by complex vehicle scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Enhancing Clarity of Formulas, Methods, and Experiments
Thank you for your feedback. We will revise the corresponding expressions in the final version for better understanding. We have provided data tables in the appendix corresponding to the main text, which can be consulted for the exact numbers.
---
## Q1: Dataset Validation and Comparison
In the main text, the proposed method is validated on two datasets: our new dataset **IRV2V** and **DAIR-V2X**. Note that DAIR-V2X is the first real-world collaborative perception dataset, collected from real road scenes with one vehicle and one roadside unit, between which the communication takes place. The dataset encompasses a wide range of conditions, including clear, rainy, and foggy days, as well as daytime, nighttime, urban roads, and highways, and it is extensively employed in collaborative perception research. The experiments conducted on DAIR-V2X are mainly shown in Figure 4(b), Table 4 (Appendix), and Figure 8 (Appendix). Figure 4(b) in the main text and Table 4 in the appendix show that CoBEVFlow outperforms the best methods by 5.73% and 2.04% in terms of AP@0.50 and AP@0.70, respectively, under a 300ms interval expectation. The red line represents our method, CoBEVFlow, and it can be observed that our method consistently outperforms others. In Appendix Figure 8, we present the visualization of detection results on the DAIR-V2X dataset with an expected interval of 300ms. The red boxes indicate the predicted results, while the green boxes represent the ground truth. In the presence of temporal asynchrony, our method's predictions closely match the ground truth, resulting in fewer false positives. These results demonstrate that CoBEVFlow continues to achieve superior performance in real-world scenarios.
The V2XSet dataset mentioned by the reviewer is a simulation dataset that does not explicitly consider latency issues. We show the **additional experimental results** on V2XSet in **Table 1** of the global response pdf. CoBEVFlow outperforms the best SOTA methods by 8.74% and 17.04% in terms of AP@0.50 and AP@0.70, respectively, when the expected time interval is 300ms. The findings obtained from experiments on the V2XSet dataset align with the conclusions drawn from experiments on the other two datasets in this paper. CoBEVFlow demonstrates its ability to alleviate the impact of temporal asynchrony and remain robust in its presence.
---
## Q2: Impact of Time Intervals
Different time intervals lead to significantly different performance for methods that rely on temporal information and assume known, fixed time intervals. Under the assumption of regular sampling, the temporal variation between consecutive frames is fixed, and the model does not need to consider or extract the specific time interval between consecutive timestamps. When facing inputs with randomly varying time intervals, the model may mistakenly perceive them as having a fixed interval, leading to an incorrect analysis of the temporal change speed and incorrect compensation for specific delays. Syncnet, discussed in the paper, is such a method. To validate this point, we trained Syncnet (which is an LSTM-based method) with a fixed time interval of 100ms, while using different time intervals during the inference process. We compensate based on two frames of historical information, fixing the delay of the first frame at 500ms, and vary the time interval between the second and the first frame at 500, 600, 700, 800, and 900ms. In the table, $\Delta t$ represents the difference between the two time intervals. The experimental comparison is shown in Table 2 in the global response PDF. Syncnet cannot cope well with varying time intervals; its performance is significantly affected. However, our method remains highly robust in the face of time irregularities.
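The failure mode of assuming a fixed interval can be seen with a toy constant-velocity extrapolation. This is a generic 1-D sketch of interval mismatch, not SyncNet's actual compensation module; all numbers are illustrative:

```python
def extrapolate(pos, velocity, dt):
    """Constant-velocity forward prediction of a 1-D position (metres)."""
    return pos + velocity * dt

pos, velocity = 10.0, 5.0  # last observed position (m) and speed (m/s)
true_dt = 0.8              # actual, irregular interval since the last message (s)
assumed_dt = 0.1           # fixed interval a regular-sampling model assumes (s)

correct = extrapolate(pos, velocity, true_dt)   # prediction aligned with reality
naive = extrapolate(pos, velocity, assumed_dt)  # under-compensated prediction
misalignment = correct - naive                  # residual spatial offset (m)
```

With these numbers, the naive prediction is 3.5 m behind the true position, an offset large enough to break box matching in BEV, which is why handling irregular, continuous timestamps matters.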
---
## Q3: Addressing Association Algorithm Impact in Dense Environments
For such situations, we can leverage object tracking methods: during matching, not only the distance but also the similarity of the features covered within the ROI range is considered. However, such situations are very rare in all our datasets. Our current matching method still produces very accurate matching results even when vehicles are relatively dense, so there is no urgent need for such an extension.
---
## Q4: "Feasible Angle Range" in ROI Matching
The "feasible angle range" is actually the range of angles to which a vehicle may move during regular driving. It is generally a certain angular range with the vehicle's front or rear as the centerline. In our experiments, we set it uniformly to plus or minus 45 degrees.
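As an illustration, a distance-plus-feasible-angle association between adjacent frames could be sketched as follows. The ROI representation `(x, y, heading)`, the 5 m distance gate, and the greedy nearest-neighbour pass are our illustrative assumptions; only the ±45° range is taken from the reply above:

```python
import math

MAX_DIST = 5.0                     # assumed gate: metres an object may move between frames
FEASIBLE_HALF_ANGLE = math.pi / 4  # +/- 45 degrees around the heading, per the reply

def is_feasible(prev, curr):
    """Accept a candidate match only if `curr` is close enough to `prev` and
    the displacement lies within the feasible angular range around `prev`'s
    heading (allowing forward or backward motion)."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    dist = math.hypot(dx, dy)
    if dist > MAX_DIST:
        return False
    if dist < 1e-6:                # stationary object: trivially feasible
        return True
    move_angle = math.atan2(dy, dx)
    # Angular difference to the heading, wrapped into [0, pi].
    diff = abs((move_angle - prev[2] + math.pi) % (2 * math.pi) - math.pi)
    # Measure against the heading axis in either direction (front or rear).
    return min(diff, abs(math.pi - diff)) <= FEASIBLE_HALF_ANGLE

def match_rois(prev_rois, curr_rois):
    """Greedily pair each previous ROI with its nearest feasible current ROI."""
    matches, used = [], set()
    for i, p in enumerate(prev_rois):
        best, best_d = None, float("inf")
        for j, c in enumerate(curr_rois):
            if j in used or not is_feasible(p, c):
                continue
            d = math.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```

For example, a car at `(0, 0)` heading along the x-axis matches a candidate at `(2, 0.5)`, but not one at `(0, 3)`, whose displacement is perpendicular to the heading and thus outside the feasible angle range.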
---
Rebuttal Comment 1.1:
Comment: The author addressed my doubts, so I will raise the score of the paper from 6 to 7. | Summary: The paper proposes CoBEVFlow, a new system for collaborative 3D perception that can handle temporal asynchrony among multiple agents. CoBEVFlow compensates for motion to align asynchronous collaboration messages and has two advantages: it can handle irregular time stamps without discretization and only transports original perceptual features. The authors validate CoBEVFlow's efficacy with a newly proposed synthetic dataset IRV2V and an existing real-world dataset (DAIR-V2X), showing that CoBEVFlow outperforms other methods and is robust in extremely asynchronous settings. Overall, the paper presents a novel approach to collaborative perception that improves collaboration among multiple agents.
Strengths: - The paper contributes to the formulation of the asynchrony collaborative perception task and proposes a novel asynchrony-robust collaborative perception framework. Compared with the existing methods to solve the delay, the proposed benchmark is more challenging and closer to real-world scenarios.
- CoBEVFlow has two advantages over other methods. Firstly, it can handle asynchronous collaboration messages sent at irregular, continuous time stamps without discretization. Secondly, it only transports the original perceptual features, avoiding additional noise and generating new perceptual features.
- Comprehensive experiments conducted on both IRV2V and a real-world dataset, DAIR-V2X, show that CoBEVFlow consistently outperforms other baselines and is robust in extremely asynchronous settings.
- The writing in this paper is good, and the figures and charts are clear.
Weaknesses: - The training process requires three stages, which is complex and time-consuming and limits practical application scenarios.
- The collaboration performance is constrained by the performance of the ROI generator.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Time complexity analysis of the proposed ROI matching method and the Hungarian matching method.
- Based on the information provided, it is unclear why the performance of Where2comm+SyncNet increases with a decrease in communication volume in Figure 5. Further investigation is needed to understand the reasons behind this phenomenon.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, they have addressed them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Training Process Complexity and Practical Application
The training process does require three stages, but it is not time-consuming and does not limit practical application scenarios for two reasons.
1. First, each of the three modules is lightweight. The training of the individual detection module and the finetuning of the collaborative components are common to all collaborative perception tasks. The stage of training the prediction module is both lightweight and fast: the entire prediction training stage takes only a matter of minutes and entails no significant additional time consumption at inference time.
2. Additionally, the inference process is not time-consuming: the time required for inference is unaffected by the multi-stage nature of training. To support this point, we examined the inference-time overhead. Compared to the Where2comm approach, which does not account for temporal asynchrony, our method introduces only a marginal 3% increase in inference time with proper implementation optimization. Furthermore, compared to SyncNet, another method that performs delay compensation, our approach reduces time consumption by 33%.
---
## W2: Performance Dependency on ROI Generator
That's a good question. We agree with the reviewer's insightful comment. But we want to emphasize two points:
1. The ROI generator generally provides reliable ROIs. The performance of the ROI generator is strongly correlated with the performance of single-object detection. There are many mature and powerful single-object detection methods available now. In most cases, we can easily find an excellent single-object detection method and adopt it as the ROI generator with minor modifications. According to our experiments, the ROI generator can easily achieve commendable results even with the most common 3D detectors, such as PointPillar, which we use to generate the experimental results tables in the manuscript. We will add this information to the revised version.
2. When the ROI generator fails, our collaboration performance is still better than baselines. In extreme cases where the ROI generator performs poorly, it indicates that it is difficult to find a sufficiently good single-object detection method. In this case, all collaborative perception methods would struggle to perform well. However, even if only a portion of the ROIs are accurate and effective, our method can still compensate for them, thereby achieving stronger robustness to time asynchrony compared to other methods.
---
## Q1: Time Complexity Analysis of ROI Matching Methods
The theoretical time complexity of our matching scheme is **$O(n)$**, while that of the Hungarian algorithm is generally **$O(n^3)$**, so our scheme has a significant advantage in theoretical time complexity. Given that about 50 ROIs are typically encountered during detection, our method takes only 12ms to complete the matching, while the Hungarian algorithm takes about 16ms. In practice, the time spent on matching is a very small part of the overall inference process and can essentially be ignored.
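As a purely illustrative sketch of why a non-assignment matcher can be cheap, the following greedy nearest-neighbor matcher pairs ROIs by BEV-center distance. This is *not* the paper's actual matching scheme (which is not specified in this response); note the naive double loop below is $O(n^2)$, and a spatial hash would be needed to bring it close to $O(n)$:

```python
import math

def greedy_match(rois_a, rois_b, max_dist=2.0):
    """Pair each ROI center in rois_a with its nearest unmatched
    ROI center in rois_b, if one lies within max_dist.
    Illustrative only -- not CoBEVFlow's actual matching scheme."""
    used, matches = set(), []
    for i, (xa, ya) in enumerate(rois_a):
        best_j, best_d = None, max_dist
        for j, (xb, yb) in enumerate(rois_b):
            if j in used:
                continue
            d = math.hypot(xa - xb, ya - yb)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j))
    return matches
```

Unlike the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`), this does not minimize a global cost, but for well-separated ROIs the two usually agree.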
---
## Q2: Investigating SyncNet's Performance Increase with Decreased Communication Volume
The performance of Where2comm+SyncNet may increase as communication volume decreases because SyncNet applies a Conv-LSTM-based compensation strategy that struggles to accurately perform motion compensation when features have lower quality. As communication volume increases, features with lower confidence are more likely to be transmitted than when the communication volume is lower; that is, the overall quality of the transmitted features decreases as the communication volume increases. Therefore, the motion compensation capability of SyncNet tends to degrade in such situations, resulting in worse performance. To support this point, Figure 1 in the global response PDF visualizes the detection results of CoBEVFlow and SyncNet under different bandwidths. We can observe that as the bandwidth increases, the number of false positives in SyncNet's detection results noticeably grows, showing misalignment with the ground truth. | Summary: This work points out that there is a time delay among agents and that the delay period is not fixed. Thus, when these agents share their environment perception information in BEV, there is an uneven spatial mismatch. To address this problem, this work constructs a benchmark and proposes a strategy to align the BEV features of various agents by predicting a BEV flow map.
Strengths: 1. This work finds a new problem and develops a corresponding benchmark. This studied problem is interesting and meaningful.
2. The proposed strategy for tackling the asynchronous perception problem is easy to understand and makes sense.
Weaknesses: 1. The proposed algorithm predicts the BEV flow estimation rather than movement speed, so the communication delay among agents should be known. How to know that in practical applications.
2. In the developed benchmark, there is a random time turbulence. How to tackle this turbulence?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: not.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Communication Delay in Practical Applications
In practical applications, it is straightforward to know the communication delay because different vehicles can easily acquire a unified world timestamp.
1. A unified world timestamp is easily obtainable for agents. Several existing technologies can achieve this capability, such as GPS systems, network time servers, and online time services.
2. When vehicles send information, they also transmit the corresponding world timestamp. Thus, the information received by the ego agent comes with its specific timestamp, and given the globally synchronized timestamps, the exact time intervals and delays can be easily computed.
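As a minimal sketch of point 2 (the class and function names are hypothetical, not from the paper), the delay computation reduces to a subtraction of globally synchronized timestamps:

```python
from dataclasses import dataclass

@dataclass
class CollabMessage:
    sent_at: float   # unified world timestamp (s) attached by the sender
    payload: object  # e.g. encoded BEV features

def communication_delay(msg: CollabMessage, ego_now: float) -> float:
    """Delay experienced by a received collaboration message, assuming
    all agents share a synchronized clock (GPS time, NTP, etc.)."""
    return ego_now - msg.sent_at
```

For example, a message stamped at 12.30 s and received at 12.45 s has a 150 ms delay.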
---
## W2: Tackle Random Time Turbulence
Here we assume that "tackle this turbulence", as mentioned by the reviewer, means the process of mitigating the effect of temporal asynchrony. Here are our responses:
1. Regarding temporal asynchrony in real-world scenarios: The 'turbulence' is a crucial factor of the temporal asynchrony in the data. Correspondingly, in real scenarios, this turbulence emulates a variety of potential unfavorable factors, including but not limited to situations where the sensors or associated hardware are unable to handle their tasks, resulting in data collection not proceeding entirely at the predetermined frequency; disruptions like illumination, electromagnetic waves, and unstable hardware and software drivers that could interfere with the precise moments when the sensor collects information. These adverse factors all contribute to turbulence at the time of data collection. The method we propose in our paper can address asynchrony caused by any factor.
2. About the datasets: IRV2V is a synthetic dataset created by adjusting the sampling times to simulate temporal asynchrony. The sampling times are continuous and more complex, which better reflects real-world applications. The other dataset we used, DAIR-V2X, is a real dataset sampled at 10Hz. We simulate temporal asynchrony in situations like communication delays by selecting non-aligned perception information.
3. About the baseline methods: State-of-the-art collaborative perception methods like late fusion, DiscoNet, Where2comm, etc., do not consider the impact of temporal asynchrony. In our testing, we directly use asynchronous collaboration information as inputs to these models. SyncNet addresses temporal delays but assumes fixed time intervals between input information. In our experiments, we directly use irregularly spaced perception information as input, which proves to be challenging for these methods to handle effectively.
In summary, CoBEVFlow addresses the challenges of temporal asynchrony in real-world scenarios, and our experiments (Figure 4) show that it outperforms other methods that do not explicitly consider such 'turbulence'. Please let us know if you have any further questions. | Summary: The paper introduces CoBEVFlow, a new asynchrony-robust collaborative 3D perception system designed to enhance multi-agent perception abilities. The method compensates for motion to align asynchronous collaboration messages from different agents, aiding in dealing with real-world issues such as communication delays. The authors developed BEVflow to reassign asynchronous perceptual features to appropriate positions and thus mitigate the impact of asynchrony.
To evaluate its effectiveness, a new dataset IRregular Vehicle-to-Vehicle (IRV2V) was created that simulates various real-world scenarios. Extensive experiments on both this new dataset and an existing dataset DAIR-V2X showed that CoBEVFlow consistently outperforms other baseline methods (evaluated on Vehicle detection).
Strengths: 1. The challenge of temporal asynchrony in multi-agent collaborative perception is quite new and the proposed CoBEVFlow firstly addresses it by reassigning asynchronous perceptual features (at RoI level) to ensure better alignment.
2. The creation of the IRV2V dataset to simulate real-world scenarios provides an important resource for future research.
3. Detailed evaluations of two datasets provide good results (Fig. 4) for the superiority of CoBEVFlow over existing methods.
4. With smaller bandwidth, Fig. 5 shows that CoBEVFlow actually has the potential to greatly improve performance over single-object detection.
Weaknesses: 1. The main concern is the limited evaluation. There might be limitations in applying CoBEVFlow to other types of multi-agent systems as it was mainly tested on vehicle-to-vehicle communication scenarios. Is there any reported performance of Pedestrian or Cyclist?
2. A background paragraph (Sec. 3) comparing the collaborative and original settings (perhaps a visual comparison rather than a tedious textual explanation) would help new readers understand better. Even though I work on 3D detection, I could not quickly grasp the key difference.
3. While CoBEVFlow shows potential in simulated scenarios, its performance in broader, real-world situations remains untested.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Will the dataset be released for future research?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is a discussion about limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Limited Evaluation and Applicability
1. Besides vehicle-to-vehicle collaboration, we also report experiments on the vehicle-to-infrastructure dataset DAIR-V2X in the main text. This dataset includes information from both the vehicle side and the roadside units, enabling communication between vehicles and infrastructure. Figure 4(b) in the main text and Table 4 in the appendix show that our approach, CoBEVFlow, outperforms the best methods by 5.73% and 2.04% in terms of AP@0.50 and AP@0.70, respectively, under a 300ms interval expectation, demonstrating commendable performance on a real-world vehicle-infrastructure collaborative dataset. Visualizations of the detection results can be found in Figure 8 in the Appendix. All these results suggest that our method can effectively handle arbitrary multi-agent systems and exhibits superior efficacy in real-world settings.
In addition to the two datasets mentioned in the main text, based on suggestions from other reviewers, we also conducted supplementary experiments on the **V2XSet** dataset. V2XSet is a simulated vehicle-to-infrastructure dataset that does not consider asynchrony issues. The experimental setup was consistent with the other datasets; the specific results can be seen in Table 1 of the global response PDF. CoBEVFlow outperforms the best SOTA methods by 8.74% and 17.04% for AP@0.50 and AP@0.70, respectively, when the expected time interval is 300ms. The findings from the V2XSet experiments align with the conclusions drawn from the other two datasets in this paper.
2. Considering more object categories in the evaluation of multi-agent perception systems is a valuable suggestion. However, the two datasets reported in the main text (the IRV2V dataset and the collaborative portion of the DAIR-V2X dataset), as well as the additional V2XSet dataset provided in the rebuttal, do not include labels for pedestrians or cyclists, so we cannot present corresponding detection results. Moreover, as far as we know, no existing collaborative detection dataset currently has annotations for these two categories.
---
## W2: Comparing Collaborative and Original Settings
We appreciate the reviewer's suggestion!
1. In the future version, we will provide a more detailed description of collaborative sensing.
2. The difference between the original and collaborative settings is as follows: Under the original setting, the ego agent can only use the information obtained from its own sensors for detection. However, in a collaborative setting, **different agents can transmit and share information, meaning the ego agent can use information from other agents for detection in addition to its own information**, thereby expanding and enhancing its field of perception. By leveraging complementary perceptual information through communication, agents overcome the inherent limitations of single-agent perception, such as occlusion and long-range issues. In Figure 4 of the main text, the gray dashed lines represent individual perception, while the solid lines represent collaborative perception methods. When the expected time interval is zero, the performance of collaborative perception methods is better than that of single perception. However, as the degree of temporal asynchrony increases, the performance of methods like late fusion, V2VNet, V2X-ViT, DiscoNet, and Where2comm is degraded by temporal asynchrony and can even fall below the level of single perception. This is precisely the problem we aim to address. As shown by the red line in Figure 4, in the case of temporal asynchrony, our objective is to maintain the performance of collaborative perception at as high a level as possible.
---
## W3: Real-World Performance of CoBEVFlow
In the main text, the proposed method is validated on two datasets: our new dataset IRV2V and DAIR-V2X. We believe the experimental results on DAIR-V2X reveal the potential of CoBEVFlow in real-world scenarios. DAIR-V2X is the first real-world collaborative perception dataset, collected from real road scenes. In this dataset, there is one vehicle and one roadside unit, and communication takes place between the two. The dataset encompasses a wide range of conditions, including clear, rainy, and foggy days, daytime and nighttime, and urban roads and highways, and it is extensively employed in collaborative perception research. Figure 4(b) in the main text and Table 4 in the appendix show that our approach, CoBEVFlow, outperforms the best methods by 5.73% and 2.04% in terms of AP@0.50 and AP@0.70, respectively, under a 300ms interval expectation. In Figure 4(b), the red line represents our method, CoBEVFlow, which consistently outperforms the others. In Appendix Figure 8, we present visualizations of detection results on the DAIR-V2X dataset with an expected interval of 300ms; the red boxes indicate the predicted results, and the green boxes the ground truth. In the presence of temporal asynchrony, our method's predictions closely match the ground truth, resulting in fewer false positives. These results demonstrate that CoBEVFlow achieves superior performance in real-world scenarios.
---
## Q1: Dataset Availability for Future Research
**Yes**, the dataset and the code both will be fully accessible to support research in the field.
---
Rebuttal Comment 1.1:
Title: Thanks for author's rebuttal
Comment: The rebuttal has addressed my concerns. I raised the rating to weak accept. | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback!
In this work, we propose CoBEVFlow, an asynchrony-robust collaborative 3D perception system based on bird’s eye view flow, to address the issues caused by temporal asynchrony among agents.
In the main text, we conducted experiments on two datasets: IRV2V and DAIR-V2X[1]. IRV2V, which we created, is the first synthetic collaborative perception dataset with various temporal asynchronies simulating different real-world scenarios; it contains 2 to 5 vehicles as collaborative agents in each frame. **DAIR-V2X is a real-world dataset** consisting of a roadside unit and a vehicle as collaborative agents in each frame. Our experimental results on both datasets are depicted in Figure 4, with the corresponding numerical outcomes detailed in Appendix Table 4 and Table 5. On the IRV2V dataset, CoBEVFlow demonstrates substantial performance, achieving 83.1% and 75.7% in AP@0.50 and AP@0.70, respectively, within the 300ms time interval, exceeding the leading baseline method by 23.3% and 35.3%. Similarly, on the DAIR-V2X dataset, CoBEVFlow achieves remarkable results of 73.8% and 59.9% in AP@0.50 and AP@0.70, respectively, within the 300ms time interval, outperforming the best baseline method by 5.73% and 2.04%.
Furthermore, during this rebuttal process, we extended our experiments to include the V2XSet[2] dataset. V2XSet is a simulated dataset where each scenario involves at most one roadside unit and 2 to 4 vehicles as collaborative objects. The outcomes of these experiments are summarized in Table 1 of the global response PDF. Under time delays of 300ms and 500ms, the AP@0.70 scores achieved by CoBEVFlow are 0.776 and 0.713 respectively. These values surpass the best baseline methods by **17.0%** and **10.6%** respectively, and are notably higher than the results of single-object detection(0.556). This once again underscores CoBEVFlow's ability to maintain high levels of collaborative perception performance in scenarios involving temporal asynchrony.
The attached PDF contains various tables and figures. Please review it.
[1] Haibao Yu, Yizhen Luo, Mao Shu, Yiyi Huo, Zebang Yang, Yifeng Shi, Zhenglong Guo, Hanyu Li, Xing Hu, Jirui Yuan, and Zaiqing Nie. DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 21329–21338. IEEE, 2022.
[2] Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, and Jiaqi Ma. V2X-ViT: Vehicle-to-everything cooperative perception with vision transformer. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX, pages 107–124. Springer, 2022.
Pdf: /pdf/268c9dc4c8cb21aa73f6e2ef5b5a77d461a0c378.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Segment Everything Everywhere All at Once | Accept (poster) | Summary: The paper proposes SEEM, a method that unifies several segmentation tasks. The network takes text and/or different kinds of visual prompts as inputs. The model then outputs the corresponding segmentation masks for the referred objects. In particular, SEEM trains a unified prompt embedding space. A set of learned queries then alternately attends to image features and the prompt embeddings. Finally, the output embeddings are matched to the prompts and decoded into masks.
Strengths: - Unifying several segmentation tasks.
- Impressive results on most tasks.
- Similar or better than SAM in many cases, despite less training data.
- Reasonably well written.
Weaknesses: 1. Some parts of the method lack motivation and details. In particular, the attention mask used in the prompt attention is not well motivated. It is also not well explained that (and how) the output embeddings are matched to the prompts.
2. The paper is heavily derived from X decoder. The magnitude of the contributions is not clear. The authors do not discuss the relation to X decoder. In fact it is hardly mentioned in the paper, yet the method is completely based on it. Seems like the main new addition here is the visual prompt encoder.
3. The results for VOS are quite poor. This indicates that the method is not as robust to object appearance variations, which is crucial in many tasks.
4. The authors should report training and inference time and memory requirements.
I am leaning positive for the paper. However, the authors need to address my concerns above.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Moreover:
Is the method robust to the case when the prompted text does not match any object in the image?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed in the main paper, but should be mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Lack motivation and details (e.g. self-attention mask, output embeddings).**
Sorry for the confusion; we explain the motivation for the self-attention mask in detail here.
* Inputs: text, user-input interactive scribbles, or none.
* SEEM Decoder Inputs: (1) Queries. (2) Prompts. (3) Image Features.
* SEEM Decoder Operation: (1) Cross-Attention between Queries and Image Features. (2) Self-Attention between Queries and prompts. (3) Compute outputs using Queries.
The self-attention mask shown in Fig. 3(b) models the interaction between queries and prompts. Since output masks and semantics are computed from the queries, *the self-attention between queries and prompts links the inputs to the queries, so that the queries know where to attend during the cross-attention with image features to produce the final results.* This is the prompt attention defined in L7 of Algorithm 1.
Next, we detail how the output embeddings are computed. Given the queries that have attended to the image features and prompts, we project them separately through the mask and class embedding layers (L112-123). The final mask and semantic class are then computed from the similarity with the 1/4-scale image features and the language embeddings.
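The three-step decoder operation above can be sketched as a toy, weight-free loop (single-head attention without learned projections; the names and simplifications are ours, not SEEM's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, kv):
    # Toy single-head attention with no learned W_q / W_k / W_v projections.
    return softmax(q @ kv.T / np.sqrt(q.shape[-1])) @ kv

def seem_decoder_step(queries, prompts, image_feats):
    """One decoder iteration as described above:
    (1) queries cross-attend to image features;
    (2) queries self-attend jointly with the prompts ("prompt
        attention"), so they learn where to look next;
    (3) outputs (masks / classes) would then be computed from the
        queries via similarity -- omitted in this toy sketch."""
    queries = attend(queries, image_feats)             # step (1)
    joint = np.concatenate([queries, prompts], axis=0)
    queries = attend(queries, joint)                   # step (2)
    return queries
```

Stacking several such steps (with real projections, multiple heads, and the attention mask of Fig. 3(b)) gives the general shape of the decoder loop.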
**Q2: Novelty in comparison with X-Decoder.**
Good question. We thank the reviewer for pointing out this closely related work, which we did not discuss sufficiently. In comparison with X-Decoder, SEEM differs as follows:
* Technique
(1) The model design is different. As marked in red font in Equation 1, SEEM has visual and memory prompts in addition to text prompts. Visual prompts can take various formats, such as clicks, scribbles, boxes, and referred images. Memory prompts are used to memorize the interaction history and support multi-round interactive segmentation.
(2) The training method is different. Unlike X-Decoder, which trains on all the masks of an image in one iteration, SEEM trains on one mask for multiple rounds in one iteration to enable multi-round interaction.
(3) Human Interactive Animation. We construct an interactive segmentation dataset that simulates human interactions such as scribbles and clicks, as shown in Appendix C1.
* Capability:
(1) Modality Composition. Our model is able to combine prompts of different modalities for prompting segmentation due to the joint representation space for text and visual prompts.
(2) Emerging property. Without training, our model shows the capability of image referred segmentation and VOS. Surprisingly, SEEM performs well even when source and target images for referring are in different styles.
**Q3: Performance on VOS in comparison with other methods.**
Thanks for the interesting question; VOS is indeed a very important application. However, the main contribution of SEEM lies in interactive segmentation, zero-shot prompt composition, and referring image segmentation, while VOS is an emerging property.
As a zero-shot method, we are not able to outperform previous approaches that are fully supervised on both the dataset and the task. However, we achieve comparable or better performance than Painter-L and SegGPT-L, which are trained on paired image data. Even without supervised cross-image referencing, SEEM outperforms Painter-L by a large margin (53.8 vs. 24.1 on the G metric on YT-VOS).
Comparing SEEM with SegGPT is an apples-to-oranges comparison; for example, on COCO panoptic segmentation, we outperform Painter and SegGPT by a large margin.
| | Pretrained | Segmentation Data | PQ | mAP | mIoU |
|---------------|---------------|--------------------------|------|------|------|
| Painter (L) | * | COCO+ADE+NYUv2 (0.16M) | 43.4 | - | - |
| SegGPT (L) | Painter (L) | COCO+ADE+VOC+... (0.25M) | 34.4 | - | - |
| X-Decoder (L) | * | COCO | 56.9 | 46.7 | 67.5 |
| SEEM (L) | X-Decoder (L) | COCO+LVIS (0.12M) | 57.5 | 47.7 | 67.6 |
In addition, when the input prompt is not complete enough to cover the whole object, our tracking performance is much better than SegGPT's, which indicates that our model can take more types of prompts and has better generalization ability. We also show visualization results in the rebuttal PDF.
**Q4: Training and evaluation time and memory cost.**
| | Resolution | #Param | FPS/img | Training Memory | Inference Memory |
|----------|------------|--------|---------|-----------------|------------------|
| SEEM (T) | 1024 | 101.1M | 9.0 | | 3124M |
| SEEM (B) | 1024 | 140.7M | 6.6 | | 3184M |
| SEEM (L) | 1024 | 415.3M | 3.7 | | 4612M |
We measured these numbers on one V100 GPU with a single image at resolution 1024 x 1024.
**Q5: Generalization capability on text not included in the image.**
Yes, our method is robust to the case when the prompted text does not match any object in the image. Our matching is implemented by computing the similarities between the prompt embedding and object features. We are able to use a threshold to filter those object features with very low similarities.
---
Rebuttal Comment 1.1:
Title: Clarification of Q4 and Q5
Comment: **Q4: Training and evaluation time and memory cost.**
| | Resolution | #Param | FPS/img | Training Memory | Inference Memory |
|----------|------------|--------|---------|-----------------|------------------|
| SEEM (T) | 1024 | 101.1M | 9.0 | 4804M | 3124M |
| SEEM (B) | 1024 | 140.7M | 6.6 | 5556M | 3184M |
| SEEM (L) | 1024 | 415.3M | 3.7 | 6588M | 4612M |
We compute the FPS on 1 V100 gpu with a single image with resolution 1024 x 1024. Both the training memory and inference memory are referring to batch size equal to 1. Specifically for model training, we fix the visual encoder and language encoder, only the SEEM-Decoder part is learnable.
**Q5: Generalization capability on text not included in the image.**
Yes, our method is robust to the case when the prompted text does not match any object in the image. Our matching is implemented by computing the similarities between the prompt embedding and object features. We are able to use a threshold to filter those object features with very low similarities. The task we are evaluating is referring segmentation.
To further demonstrate this aspect quantitatively, we ran our model SEEM-Large on the 5000 images of COCO val2017. For each image, we sample one positive and one negative text prompt and perform text-based referring segmentation. For each prompt, we calculate the scores over the $K$ object tokens as follows:
$v \leftarrow v / \lVert v \rVert \in \mathbb{R}^{K \times d}; \quad t \leftarrow t / \lVert t \rVert \in \mathbb{R}^{1 \times d}$
$\mathrm{score} = t \cdot v^\top / \mathrm{scale} \in \mathbb{R}^{1 \times K}$
$\mathrm{score}_{\max} = \max_{k} \mathrm{score}_{k}$
where $v$ is the $K$ object query embeddings, $t$ is the textual embedding, and $d$ the feature dimension. We use the maximal score $score_{max}$ as the matching score.
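A numerical sketch of this score computation (the function name and the default `scale` are illustrative; the temperature actually used by SEEM is not stated here):

```python
import numpy as np

def matching_score(v, t, scale=0.07):
    """Max cosine-similarity score between K object query embeddings
    v (K x d) and one text embedding t (d,), as in the equations above.
    scale=0.07 is a hypothetical temperature, not SEEM's actual value."""
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    t = t / np.linalg.norm(t)
    score = v @ t / scale   # (K,) one similarity per object query
    return score.max()      # the matching score used for thresholding
```

An image is treated as containing the referred object only if this score exceeds a threshold, which is how negative prompts are rejected.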
Based on the above calculations, we obtain 5000 matching scores for positive prompts and negative ones, respectively. Then we compute the mean and standard deviation as shown below:
| Text Prompt Type | Mean | Std |
| -------- | -------- | -------- |
| Positive | 17.88 | 1.43 |
| Negative | 7.36 | 3.00 |
Clearly, there is a huge margin between the matching scores for positive and negative text prompts, which justifies our above statement. | Summary: This paper introduces SEEM, an innovative and interactive model for segmentation. SEEM stands out with its versatility in handling various prompts, such as points, boxes, scribbles, masks, and texts. The model's design is elegantly simple yet highly effective. Additionally, SEEM incorporates memory prompts to preserve segmentation history, enhancing its interactivity and overall performance.
Strengths: 1. The paper is well-presented, showcasing clear and concise writing. The authors effectively communicate their ideas.
2. The proposed idea is both innovative and highly effective. The designed model, despite its simplicity, showcases remarkable capabilities when dealing with a wide range of prompts. Its versatility and adaptability make it applicable to more tasks.
3. The experimental results are solid. Through a combination of qualitative and quantitative analyses, the authors demonstrate the effectiveness of their idea.
Weaknesses: 1. Do the authors observe whether training on all types of prompts performing better than on one type or a subset types of prompts? or which types of prompt performs best and which one still lacks of improvement. It would be better to perform this ablation because it can give more insights to the society.
2. since this is a very general framework supporting many different types of prompts, and it can be more easily extended with more training data. I would also like to see its scalability with larger amount of supervised training data if there is sufficient computing resources.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Do the authors plan to train on larger amount of dataset? (such as the one trained on SAM or other rich datasets with the semantic labels). It would be very interesting to see how SEEM scale up with large amount of supervised or weakly supervised labels.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: not mentioned
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Ablation study on prompt type.**
Thanks for the valuable suggestion; ablating the prompt types is an intuitive way to study the effectiveness of each component. Here are the results:
| | Panoptic | Grounding | Interactive | | COCO | | | Ref-COCOg | | | VOC | |
|----------|----------|-----------|-------------|-------------|------|------|------|-----------|------|------|----------|----------|
| | Vanilla | Text | Visual | Multi-Round | PQ | mAP | mIoU | mIoU | cIoU | AP50 | 20-NoC50 | 20-NoC90 |
| SEEM (T) | Y | Y | Y | X | 51.2 | 39.8 | 61.8 | 63.8 | 58.7 | 72.6 | 1.82 | 5.51 |
| SEEM (T) | Y | Y | X | X | 51.0 | 39.9 | 61.1 | 64.2 | 58.8 | 72.4 | - | - |
| SEEM (T) | Y | X | X | X | 50.8 | 39.9 | 60.7 | - | - | - | - | - |
We gradually remove the visual prompt and then the text prompt from the SEEM decoder; for fast-training purposes, we trained with only a single round of interactive segmentation. From the table, we can clearly observe that, as capabilities are gradually added to the model, both the text and visual prompts improve panoptic segmentation, and adding interactive segmentation does not harm the referring segmentation task.
**Q2: Scaling up training data.**
Thank you for your insightful suggestion; scaling up SEEM is a promising direction. Although the short rebuttal time frame does not allow us to fully train on large-scale datasets such as Objects365 or SA-1B, we are able to take advantage of a checkpoint pretrained on SA-1B. By using the SA-1B-pretrained backbone, we outperform the numbers reported in the paper by a good margin, as shown below:
| | Backbone | Multi-Queries in Single Inference | 20-NoC@50 | 20-NoC@85 | 20-NoC90 | 1-IoU |
|----------|----------|------------------------------------|-----------|-----------|----------|-------|
| SAM (B) | SAM-B | X | / | 3.28 | 4.12 | / |
| SAM (L) | SAM-L | X | / | 2.60 | 3.12 | / |
| SAM (H) | SAM-H | X | / | 2.55 | 3.11 | / |
| SEEM (B) | Davit-d3 | X | / | 2.93 | 3.79 | / |
| SEEM (L) | Davit-d5 | X | / | 2.77 | 3.61 | / |
| SEEM (B) | Davit-d3 | Y | 1.37 | 2.71 | 3.43 | 87.2 |
| SEEM (L) | Davit-d5 | Y | 1.38 | 2.70 | 3.47 | 88.5 |
| SEEM (B) | SAM-B | Y | 1.29 | 2.53 | 3.21 | 86.2 |
| SEEM (L) | SAM-L | Y | 1.30 | 2.42 | 2.97 | 84.5 |
Here, "Multi-Queries in Single Inference" means that we can forward multiple interactive queries in a single forward pass. This new capability does not add any training overhead but improves inference efficiency and accuracy. Our approach now safely surpasses SAM by a good margin, even compared with their huge model.
In addition, as acknowledged by the reviewer, our model holds the potential for scaling up data due to its compatibility with a diverse range of prompts. Some large-scale datasets that can be harnessed for training encompass SA-1B (consisting of 11 million images, each annotated with masks) and Objects365 (comprising 1.7 million images, annotated with semantic information). To effectively leverage Objects365, which exclusively offers bounding box annotations for training, we can employ SAM to generate mask annotations based on the provided bounding box in Objects365. It is important to note that accommodating such voluminous data for training demands an extended period, coupled with the intricacies of harmonizing diverse data formats inherent in disparate datasets. Therefore, we leave this to future work.
---
Rebuttal 2:
Comment: Hi reviewer x17t, thanks again for your review! We hope the rebuttal material resolves your concerns.
We would really appreciate any additional comments, and please feel free to ask if anything in the rebuttal material is unclear : )
---
Rebuttal Comment 2.1:
Comment: Thanks for your effort during the rebuttal.
The experimental results are very satisfying and promising. I will slightly improve my rating after the rebuttal.
If the authors have time, it would be better to scale up the model with a larger dataset and share the results with the whole community, although it is not necessary to do this during the rebuttal due to the limited time.
---
Reply to Comment 2.1.1:
Comment: Thanks for your encouraging comment! We will try our best to scale up SEEM to a larger dataset; it would benefit both the community and our work itself : ) | Summary: [Task] In this work, the authors introduce SEEM, a promptable and interactive model designed for comprehensive image segmentation. SEEM aims to segment all objects in an image simultaneously, addressing various segmentation tasks. The key contribution of SEEM is its novel decoding mechanism, which allows for diverse prompting and behavior similar to large language models (LLMs).
[Method] SEEM is built with four key objectives: versatility, compositionality, interactivity, and semantic-awareness. The model incorporates a visual prompt that unifies different spatial queries such as points, boxes, scribbles, and masks, enabling generalization to different referring images. It also learns a joint visual-semantic space for dynamic prompt composition in different segmentation tasks. Additionally, SEEM incorporates learnable memory prompts in the decoder to retain segmentation history and utilizes mask-guided cross-attention from the decoder to image features. Moreover, a text encoder is employed to encode text queries and mask labels into the same semantic space, facilitating open-vocabulary segmentation.
[Experiments] Empirical studies validate the effectiveness of SEEM across diverse segmentation tasks. The model achieves competitive performance in interactive segmentation, semantic segmentation, referring segmentation, and video object segmentation across nine datasets, requiring only 1/100 of the supervision. SEEM demonstrates remarkable generalization capability to novel prompts or their combinations, positioning it as a versatile and universal image segmentation interface.
Strengths: [highlight the trend towards more flexible segmentation models] In this work, the authors address the need for a universal segmentation interface capable of handling different types of human prompts and addressing various segmentation tasks. They highlight the trend towards more flexible segmentation models, including open-vocabulary segmentation, referring segmentation, and interactive segmentation. Inspired by the success of Large Language Models (LLMs) as universal interaction interfaces for language tasks, the authors propose SEEM, a promptable and interactive model for segmenting everything everywhere all at once in an image.
[technical contribution] The proposed approach, SEEM, introduces a novel prompting scheme in the mask decoder with four key properties: versatility, compositionality, interactivity, and semantic-awareness. By encoding different types of prompts (points, masks, text, boxes, and referred regions) into a joint visual-semantic space, SEEM achieves strong compositionality and can handle any combination of input prompts. Memory prompts are introduced to retain previous segmentation information and enable interactivity. The model also provides open-set semantic labels for output segmentation.
[tasks and results] The model is trained on diverse segmentation tasks to learn how to handle different prompts, align visual and text prompts, and promote their synergy through cross-attention. The single pre-trained SEEM model achieves competitive performance across all segmentation tasks, leveraging the joint visual-semantic space for prompt combination and zero-shot adaptation. In addition to its strong generalization capability, SEEM is efficient for interactive segmentation compared to other methods. Overall, SEEM offers a segmentation interface with a single pre-trained model that can handle all types of prompts, segment every object with semantics, and cover every pixel in the image.
Weaknesses: [Model performance] The model performance of SEEM on certain benchmarks, including ADE20K for open-vocabulary panoptic segmentation and DAVIS for video instance segmentation, is often lower, and in some cases, significantly lower than the baselines X-Decoder, ODISE, and UNINEXT. It is worth noting that SEEM's framework follows a similar approach to X-Decoder.
While the concept of a universal and interactive segmentation interface is intriguing, it appears that the model performance on open-vocabulary benchmarks is compromised in SEEM. It would be beneficial for the authors to provide an explanation for this discrepancy.
[Unfair comparisons] In Table 2, there are some comparisons that seem unfair for two reasons. Firstly, the evaluation of SEEM on COCO, which provides instance-level annotations, aligns better with SEEM's training pipeline that includes instance-level annotations. However, this evaluation method puts SAM at a disadvantage since it is trained to respect hierarchical segmentation, including whole-part-subpart relationships. Secondly, SEEM is already trained on in-domain data from COCO's training set, while SAM is pretrained on out-of-domain data. This discrepancy in training data can impact the performance comparison between the two models. To ensure fair comparisons, it would be beneficial to evaluate both models using evaluation protocols that align with their respective training pipelines and take into account the domain of the training data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: [Handling dataset imbalance] Since SEEM is trained on multiple datasets, there might be an imbalance in the number of samples per dataset. I'm curious to know if adjusting the frequency of training samples from each dataset has any impact on the model's performance. Can the authors shed some light on whether they have explored techniques such as data augmentation or sample weighting to address this issue? Additionally, it would be interesting to understand if adjusting the training sample frequency has any effect on mitigating the impact of dataset imbalance and improving the overall model performance.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Performance on open-vocabulary segmentation is lower in comparison with X-Decoder, and performance on video instance segmentation performance on DAVIS is lower than UNINEXT.**
We agree with the reviewer that our open-vocabulary performance on ADE20K is *potentially* lower than X-Decoder and ODISE, and the VIS performance is lower than UNINEXT. While all these methods aim toward a unified architecture, their underlying goals and training data are very different.
| | Input Modality | Output Modality | Composition | Emerging Property | Segmentation Data | Ensemble |
|-----------|---------------------------------|-------------------|------------------|-----------------------------------------------------------|--------------------|----------|
| SEEM | Vision & Language & Human Input | Vision | Task & Embedding | Referring Image Segmentation, Video Instance Segmentation | COCO (0.12M) | N/A |
| X-Decoder | Vision & Language | Vision & Language | Task | N/A | COCO (0.12M) | N/A |
| ODISE | Vision | Vision | N/A | N/A | COCO (0.12M) | Y |
| UNINEXT | Vision & Language & Video | Vision | N/A | N/A | Image + Video (3M) | N/A |
(1) Open-vocabulary segmentation: We agree with the reviewer that our performance is potentially lower than X-Decoder and ODISE. However, X-Decoder is trained on 4M additional image-text pairs, and ODISE relies heavily on fixed ensemble modules (Implicit Captioner, Mask Generator, Diffusion UNet), with only the alignment tuned. Unlike X-Decoder and ODISE, SEEM is never trained with an explicit open-vocabulary segmentation objective.
One interesting point is that we never provide any quantitative open-vocabulary evaluation in either the main paper or the supplementary material. To resolve the confusion, here we provide the full evaluation table. We can clearly observe that although SEEM is never trained with an open-vocabulary segmentation objective, it retains reasonable open-vocabulary capability with the X-Decoder pretrained checkpoint. In addition, the model initialized with the SAM ViT-B backbone, which has no vision-language pretraining such as UniCL, still performs comparably with X-Decoder-Seg.
| | COCO | | | Others | ADE20K | | |
|--------------------|-------|------|---------|------------------|--------|------|------|
| | Class | Mask | Caption | Image-Text Pairs | PQ | mAP | mIoU |
| x-Decoder-Seg (B) | Y | Y | X | X | 15.3 | 8.3 | 19.5 |
| x-Decoder-Seg+ (B) | Y | Y | Y | X | 16.9 | 9.5 | 23.8 |
| X-Decoder (B) | Y | Y | Y | Y | 21.1 | 11.7 | 27.2 |
| SEEM (davit-B) | Y | Y | X | X | 16.7 | 11.6 | 23.5 |
| SEEM (samvit-B) | Y | Y | X | X | 13.3 | 9.5 | 18.9 |
(2) Video Object Segmentation: **UNINEXT is explicitly trained with video data, even on the VOS dataset**, whereas our approach is trained only on single images from the COCO dataset (4th column in Table 3). Compared with a model that is not trained on video data but uses paired images (Painter-L), SEEM nearly doubles the performance (34.6 vs. 62.8 J&F on DAVIS17, 24.1 vs. 53.8 G on VOS18), even though we never train on pairwise images. Even compared with a model tuned on the in-domain dataset (PerSAM-L with DAVIS data), SEEM surpasses the baseline by a margin (62.8 vs. 60.3 J&F on DAVIS). These results all show that our approach has strong zero-shot and task generalization capability.
**Q2: Unfair Comparison with SAM on in-domain dataset COCO**
In both Table 1 and Table 2, we compare SEEM and SAM on the VOC, OpenImages, and ADE20K datasets (all out-of-domain) in addition to COCO, on both interactive segmentation and first-click IoU (1-IoU). COCO is only one of the four datasets we evaluate on, and our model is trained on COCO only; it never sees the other datasets such as OpenImages and VOC.
**Q3: Training dataset imbalance and joint training protocol**
The question of joint dataset training is very interesting. However, as shown in Table 1 of the main paper (L188-189), we never train on multiple datasets in SEEM; we train only on the COCO dataset, which is 1/100 of the data used by SAM.
Regarding joint dataset training itself, X-Decoder and UNINEXT are trained on multiple datasets. X-Decoder is trained on COCO plus 4M image-text pairs, taking 32 COCO images and 1024 image-text pairs in each training iteration. That strategy is chosen per task: COCO is trained for 50 epochs in Mask2Former, and the image-text pair loss usually needs a batch size of 1024 to avoid a performance drop. In addition to the sampling frequency, balancing the losses of different tasks also plays an important role in joint training. Rather than training the model in a single pass, UNINEXT makes use of both object-level and pixel-level image data while also leveraging video data. Beyond balancing the training data within a single batch, it also trains on different data in different rounds: the model is trained on object detection data in the first round, segmentation data in the second round, and video segmentation data in the third round.
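To make the per-iteration mixing concrete, here is a minimal sketch of such a fixed-ratio batch sampler (the 32/1024 ratio follows the X-Decoder recipe described above; the function and its arguments are otherwise illustrative, not code from any of these papers):

```python
import random

def mixed_batches(coco, pairs, n_coco=32, n_pairs=1024, iters=1, seed=0):
    """Yield joint-training batches containing a fixed number of samples
    from each data source per iteration (illustrative sketch)."""
    rng = random.Random(seed)
    for _ in range(iters):
        # Sample without replacement from each source, then concatenate.
        yield rng.sample(coco, n_coco) + rng.sample(pairs, n_pairs)
```

In practice, one would also weight the per-task losses as discussed above, rather than relying on the sampling ratio alone.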
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive response addressing the questions raised during the rebuttal period. I would also like to extend my gratitude to the authors for clarifying some of the confusion I had regarding this paper. Having reviewed the rebuttal, I find that the majority of my questions have been effectively resolved. Consequently, *I am inclined to recommend the acceptance of this paper.*
While I acknowledge the significant interest inherent in many of the findings presented in this work, I still have reservations regarding the performance gap between X-Decoder and SEEM, particularly given that SEEM is built upon X-Decoder. Echoing questions raised by other reviewers, I propose that training the model on a large-scale dataset and reporting the results would be nice. I am intrigued to learn whether increased data for training could yield further improvements in model performance.
---
Reply to Comment 1.1.1:
Comment: Hi Reviewer R6GZ, thanks so much for your encouraging comments! We agree that while building upon X-Decoder, SEEM has its own advantages (composition capability, emerging properties, etc.) and disadvantages (overall open-vocabulary segmentation performance). However, we emphasize that the disadvantages come from the difference in task focus. We have done a pilot study on scaling up the segmentation dataset, which has shown the generalization ability of our approach (improved NoC, 1-IoU). Thanks again for the reviewer's reply : )
---
Reply to Comment 1.1.2:
Comment: Hi reviewer R6GZ,
Thanks so much for your review and positive comments!
We appreciate that you are "inclined to recommend the acceptance of this paper" during the discussion period. The current rating is still "Borderline accept"; it would be really appreciated if you could **make the final recommendation before the deadline on Aug 21st 1 pm EDT** : )
Thanks for your patience again!
---
Rebuttal 2:
Comment: Hi reviewer R6GZ, thanks again for your review! We hope the rebuttal material resolves your concerns.
We would really appreciate any additional comments, and please feel free to ask if anything in the rebuttal material is unclear : ) | Summary: This paper proposes a universal segmentation model, SEEM, for all segmentation tasks. A visual sampler module unifies different kinds of human inputs, which are encoded into a joint visual-semantic space together with image and text so that the SEEM model can learn semantic labels for masks. The proposed SEEM model achieves competitive performance across interactive segmentation, generic segmentation, referring segmentation, and video object segmentation. Besides, SEEM shows a strong ability to generalize to novel prompts or their combinations.
Strengths: 1. The proposed model has superior performance and has achieved competitive results across different types of segmentation tasks.
2. The proposed model can accept a variety of visual prompt input types and has strong generalization ability, making it applicable to more scenarios.
3. Different from the previous segmentation model, the proposed model can learn the semantic labels corresponding to the segmentation masks, thus enabling open vocabulary segmentation.
Weaknesses: 1. In this paper, when comparing the performance of the proposed model with previous models, only the scale of the training data is compared, without comparing parameter counts and computation. This may raise doubts about the fairness of the comparison.
2. The analysis of some results is insufficient. In the ablation study section, the performance changes caused by the change of variables are not consistent in different segmentation tasks. Only the changing trend of the results is given without analyzing the potential reasons.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Although SEEM can learn the semantic labels of masks, it is limited to the categories of COCO. Is there any way to expand the scope of semantic labels without introducing a large amount of extra data?
2. Can you provide a comparison with other universal segmentation models in terms of the number of model parameters and calculations? If this comparison is not applicable, what are the reasons?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not explicitly discussed in this paper. The scale of training data is limited, and it does not support partial segmentation as in SAM (Segmenting Anything Model).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Insufficient Analysis for ablation study.**
Thanks for the suggestion. We have analyzed the ablation study in Section 4.2 of the main paper. To make it more comprehensive, we further summarize and analyze the main findings in Table 4 below:
*Removing LVIS mask annotations (Row 2 vs. Row 6)* from the training data improves generic and referring segmentation performance but decreases interactive segmentation performance. This is because, although the LVIS segmentation masks are more accurate than COCO's (which improves interactive segmentation), they have a domain gap with the COCO annotations (which causes the decrease in generic and referring segmentation).
*Training from scratch (Row 4 vs. Row 7)* means that (1) the UniCL pretrained weights are used instead of the X-Decoder pretrained weights, and (2) the vision backbone and transformer encoder are unfrozen. This does not affect generic segmentation performance, but it decreases referring segmentation performance and improves interactive segmentation and VOS performance. This is because the X-Decoder weights provide better region-language alignment than the UniCL pretrained weights, so the drop on referring segmentation is expected. In addition, since the vision backbone and transformer encoder weights are tuned, the model performs better on the interactive segmentation task newly proposed in SEEM, as more weights can adapt to the new task.
*Increasing the training iterations (Rows 5-8)* for interactive segmentation does not influence performance on generic segmentation, referring segmentation, or VOS, while consistently improving the interactive segmentation results. This is because the first n-1 iterations only forward interactive segmentation without any gradient. Moreover, adding training iterations that mimic human interaction with images improves performance when evaluating multi-round IoU.
We will include the above detailed analysis in our revision.
**Q2: Generalize to open-vocabulary interactive segmentation.**
We are able to generalize to more semantic labels without further training as shown in the example images in Fig.3 of supplementary material (e.g. Yellow Corn, Leaf, Orange Juice, etc. are never trained in coco dataset). This behavior is acquired by two reasons:
(1) We leveraged language encoders that are pretrained with language-image contrastive loss (CLIP or UniCL), which renders more generalizable language embeddings for a wide range of visual concepts.
(2) We used the pre-trained X-Decoder as the initialization and only fine-tuned the decoder part so that it retains open-vocabulary capability.
In addition to the qualitative results, here we evaluate on the ADE20K dataset and compare with prior work X-Decoder. From the table, we can clearly observe that although SEEM is never trained with an open-vocabulary segmentation objective, it retains reasonable open-vocabulary capability with the X-Decoder pretrained checkpoint. In addition, the model initialized with SAM, which has no vision-language pretraining such as UniCL, still performs comparably with X-Decoder-Seg, which uses the same amount of annotations.
| | Backbone | COCO | | | Others | ADE20K | | |
|--------------------|-----------|-------|------|---------|------------------|--------|------|------|
| | Pretrain | Class | Mask | Caption | Image-Text Pairs | PQ | mAP | mIoU |
| x-Decoder-Seg (B) | UniCL | Y | Y | X | X | 15.3 | 8.3 | 19.5 |
| x-Decoder-Seg+ (B) | UniCL | Y | Y | Y | X | 16.9 | 9.5 | 23.8 |
| X-Decoder (B) | UniCL | Y | Y | Y | Y | 21.1 | 11.7 | 27.2 |
| SEEM (davit-d3) | X-Decoder | Y | Y | X | X | 16.7 | 11.6 | 23.5 |
| SEEM (samvit-B) | SAM | Y | Y | X | X | 13.3 | 9.5 | 18.9 |
We will incorporate this discussion in our revision.
**Q3: Comparisons of model parameters and calculations.**
Thanks for pointing this out. In this work, our goal is to develop a versatile and universal image segmentation interface, and thus we mainly focused on comparisons with previous work in terms of task breadth and performance. Regarding model parameters and computation, we did list the rough sizes of the backbones used by the different methods (tiny (T), base (B), and large (L)), but we could not find a well-established way in the prior art to analyze the computation of each generalist model, since the computation is task-dependent and each model is used for a different set of tasks. Moreover, we faced the difficulty that the weights and inference code are not necessarily released for each specific task. That being said, while we cannot provide a thorough analysis of computation for now, we will try our best to find a common-grounded way to study this aspect solidly.
---
Rebuttal 2:
Comment: Hi reviewer cQYZ, thanks again for your review! We hope the rebuttal material resolves your concerns.
We would really appreciate any additional comments, and please feel free to ask if anything in the rebuttal material is unclear : )
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for the response. My concerns have been addressed and I will keep my positive rating.
---
Reply to Comment 2.1.1:
Comment: Hi reviewer cQYZ,
Thanks so much for your review and positive comments! It would be really appreciated if you could **confirm your final recommendation rating before the final deadline on Aug 21st 1 pm EDT** : )
Thanks a lot! | Rebuttal 1:
Rebuttal: First of all, **we thank all reviewers for their valuable comments and suggestions!**
We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our contributions:
**Model.** A strong generalization ability that is able to apply to more scenarios (cQYZ). Enabling open vocabulary segmentation in addition to interactive segmentation (cQYZ). Highlight the trend towards more flexible segmentation models (R6GZ). The proposed method is technical novel (R6GZ). Innovative and highly effective (x17t). Unifying several segmentation tasks (gMmy).
**Experiments.** Superior performance across different types of segmentation tasks (cQYZ). The single pre-trained SEEM model achieves competitive performance across all segmentation tasks (R6GZ). The experimental results are solid (x17t). Impressive results on most tasks (gMmy).
**Writing.** The paper is well-presented, showcasing clear and concise writing (x17t). Reasonably well written (gMmy).
**Two Additional Contributions:**
* We add multi-instance visual prompt training and inference for interactive segmentation (refer to Fig. 1 in the rebuttal PDF). This enables querying multiple objects in a single forward pass.
* We use SAM ViT backbone as the pretrained checkpoint to "scale up" our training dataset.
| | Backbone | Multi-Queries in Single Inference | 20-NoC@50 | 20-NoC@85 | 20-NoC@90 | 1-IoU |
|----------|----------|------------------------------------|-----------|-----------|----------|-------|
| SAM (B) | SAM-B | X | / | 3.28 | 4.12 | / |
| SAM (L) | SAM-L | X | / | 2.60 | 3.12 | / |
| SAM (H) | SAM-H | X | / | 2.55 | 3.11 | / |
| SEEM (B) | Davit-d3 | X | / | 2.93 | 3.79 | / |
| SEEM (L) | Davit-d5 | X | / | 2.77 | 3.61 | / |
| SEEM (B) | Davit-d3 | Y | 1.37 | 2.71 | 3.43 | 87.2 |
| SEEM (L) | Davit-d5 | Y | 1.38 | 2.70 | 3.47 | 88.5 |
| SEEM (B) | SAM-B | Y | 1.29 | 2.53 | 3.21 | 86.2 |
| SEEM (L) | SAM-L | Y | 1.30 | 2.42 | 2.97 | 84.5 |
We found that both techniques improve performance by a good margin.
**New Experiments:**
* Open-Vocabulary Generic Segmentation on ADE20K dataset in comparison with X-Decoder baselines. (Reviewer cQYZ, R6GZ)
* "Scaling up" with SAM pretrained checkpoint. (Reviewer x17t)
* Enabling multiple interactive queries in a single forward pass.
* Ablation Study on gradually removing prompt type. (Reviewer x17t)
We carefully read all comments and attempted to address the concerns with comprehensive responses; we hope our responses answer the questions and resolve any concerns regarding our work. Again, we thank all reviewers for their efforts and constructive comments!
Pdf: /pdf/443ff40e106ac4bc54ad2001e72e6e7f69839da8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MEMTO: Memory-guided Transformer for Multivariate Time Series Anomaly Detection | Accept (poster) | Summary: This paper presents a reconstruction-based method for multivariate time series anomaly detection. It is a memory-guided Transformer containing a gated memory module. Because of the training instability of updating memory items incrementally when the items are initialized randomly, the authors propose a two-phase training paradigm to ensure stable training. The method calculates anomaly scores considering both the input and latent spaces. The method achieves state-of-the-art results on five benchmark datasets.
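The combined input/latent scoring idea described in the summary can be sketched as follows (a minimal numpy illustration of a memory-based reconstruction score, not the paper's exact formulation; `memory`, `reconstruct`, and the toy identity decoder used below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 10, 4                       # illustrative: M memory items in d dims
memory = rng.normal(size=(M, d))   # stand-in for learned "normal" patterns

def reconstruct(z):
    """Read from memory: softmax-weighted sum of items by similarity,
    so the output is pulled toward stored normal patterns."""
    sims = memory @ z
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return w @ memory

def anomaly_score(x, z, decode):
    """Combine input-space reconstruction error with the latent-space
    distance to the nearest memory item (one simple way to use both
    signals; the paper's exact criterion may differ)."""
    recon_err = np.sum((x - decode(reconstruct(z))) ** 2)
    latent_dist = np.min(np.sum((memory - z) ** 2, axis=1))
    return recon_err * latent_dist
```

A query matching a stored normal pattern gets a near-zero score, while off-manifold queries score higher: the memory read pulls reconstructions toward normal patterns, inflating the error for anomalies.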
Strengths: 1. The authors make a good explorations on leveraging memory module to multivariate time series anomaly detection task, and the proposed method can be effective solutions for the given task.
2. This paper contains enough experiment results and ablation study on five datasets to support its claims.
3. The proposed methodology is well presented, and most of the paper is well written.
Weaknesses: 1. The paper writing can still be improved. For example, the grammar error on line 88 "there exist video representation learning" -> "there exists video representation learning". Typo on line 234 "32.56p%". The citation in related work can be extended with author's name + 'et al.' rather than just a number.
2. The font size in the figures (in introduction and experiment) is pretty small.
3. I saw that you split the training data into 80% for training and 20% for validation. Do the other methods compared in the experiments share the same data split (ratio and files)? How many times did you repeat your experiments? Since the results you reproduced for the Anomaly Transformer are quite different from those in its own paper, is it fair to list all these methods for comparison with your model?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the points mentioned in Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful for your meticulous review and feedback on our paper. Your perceptive observations have steered us towards enhancing and refining our work.
$\textbf{Weaknesses}$
1. We appreciate your keen attention to detail and will proceed to correct the grammar error on line 88, the typo on line 234, and the citation format as you pointed out.
2. We agree with your observation regarding the small font size in our figures. We will ensure to increase it for better readability as per your suggestion.
3. In our paper, all the methods including MEMTO employed the same data division ratio of 80% for training and 20% for validation. The main experiment in Table 1 in our paper was conducted once, but we carried out additional trials to calculate statistical measures such as averages and standard deviations, yielding robust scores over multiple runs (see Table 1 of the PDF, in the section on global response). Moreover, we noticed a discrepancy between the performance of the Anomaly Transformer reported in its original paper and our findings. While the original paper stated that they used 80% of the data for training and 20% for validation, a review of the official source code revealed a different setting. They used the entirety of the training data (100%) for training, inclusive of the 20% initially set aside for validation. Since this deviates from the standard measurement procedure, we decided to adjust the data settings for the Anomaly Transformer to align with that of MEMTO, by excluding the validation data from training. This adjustment ensures a fair comparison of performances.
---
Rebuttal Comment 1.1:
Comment: My concerns have been addressed by authors.
---
Reply to Comment 1.1.1:
Title: Thank you for your comment
Comment: We are delighted that we could address all of your questions, and we deeply appreciate your active involvement. | Summary: The authors propose a memory-guided transformer with a reconstruction paradigm for multivariate time series anomaly detection. The time-series encoder, memory parameters of normal patterns, and projection heads are updated during training in a two-step fashion. At test time, input queries are projected onto the learned base of normal patterns with a learned sparse linear projection. As a result, reconstructed anomalies tend to look like normal samples, amplifying the reconstruction loss and enabling detection. Experiments are run on common multivariate time-series datasets, following the practices of previous works.
Strengths: * The proposed gated memory module seems simple, original, and efficient for detecting anomalies on multivariate time series based on transformer encodings and input sample reconstruction.
* The authors provide thorough ablation studies and some insights into the importance of each part of their algorithm.
* The manuscript is overall easy to follow and well-written. Code is provided.
## Suggestions
The typical memory size is quite small, which I see as a positive point of this method, so I suggest this could be emphasized in the main manuscript. I read the entire paper wondering what the memory size was and whether the memory-performance trade-off was good, only to be positively surprised in the appendix.
Weaknesses: ## Regarding novelty
1. Overall this work seems to be an incremental modification of [8], [24], and [40]. In particular, contribution #3 (line 77) does not seem very novel. I would like the authors to please discuss Equation (11) in this paper versus (6) in [40] in more detail.
2. How is the memory update stage different from other memory gates, such as GRU, MRU, etc.?
## Regarding state-of-the-art claim
1. Why not compare to MNAD [24], memAE [8], USAD [2] in Table 1? Why are A.T. numbers in Table 1 different from [40]? I would suggest adding error bars at least to these methods and yours to make the comparison fair. I know this was not done in these previous works, but claiming state-of-the-art without comparing confidence intervals in an empirical field is not good practice.
## Regarding methodology
4. It is not clear why one would not simply keep a memory in the input space and retrieve the closest samples, instead of the proposed mechanism in the latent space.
5. Why is it so sensitive to k-means initialization? Is it computed with enough iterations of the K-means algorithm?
6. How does Algorithm 1 outline the two-phase training paradigm (line 175)? The first and second phases seem to be omitted.
## Other less important questions
7. How are the thresholds computed in figure 3?
8. What is the computation overhead when compared to A.T.? MEMTO seems simpler computationally. Could the authors discuss this?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed potential limitations and shortcomings of their algorithm and no special negative societal impact exclusive of their work is evident.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your detailed evaluation of and insight into our paper. We will make sure to incorporate the clarifications you suggested and reflect your feedback in the revised paper.
$\textbf{Suggestion}$
As you suggested, we will emphasize in the main manuscript that our proposed MEMTO is robust to the number of memory items and performs well even with a small memory size (low-resource setting), in our revised version.
$\textbf{Regarding novelty}$
1. While drawing inspiration from [8], [24], and [40], as you mentioned, MEMTO stands out from prior works with the following distinctive features. Due to the limit on response length, we answer this question in the global response.
2. As you mentioned, both our proposed Gated memory module and other memory gates, such as the GRU and MRU, have the ability to control how much new information is injected. However, other memory gates such as the GRU and MRU are often inadequately compartmentalized to retain precise knowledge from the past, as the information gets compressed into dense vectors. This makes it impossible for them to stay compartmentalized and memorize diverse prototypical features of the normal patterns in the data. In contrast, the Gated memory update stage in our Gated memory module applies an attention-based operation (query-conditioned memory attention) while writing new information into each memory item. Since each memory item can retrieve information only from relevant queries, this mechanism enables each memory item to be compartmentalized and retain prototypical features of the normal patterns in the data.
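To make the mechanism concrete, here is a minimal numpy sketch of a query-conditioned gated update of this kind. The shapes, the sigmoid gate form, and all names (`gated_memory_update`, `W_gate`) are our illustrative assumptions rather than the paper's exact equations: each memory item attends over the queries, and an update gate controls how much retrieved information is written into it.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_memory_update(memory, queries, W_gate):
    """Update each memory item from its relevant queries (illustrative).

    memory:  (M, d) memory items
    queries: (N, d) encoder outputs for N timestamps
    W_gate:  (2*d, d) hypothetical gate projection
    """
    # query-conditioned attention: each memory item attends over all queries
    attn = softmax(memory @ queries.T, axis=1)        # (M, N)
    retrieved = attn @ queries                        # (M, d)
    # the update gate decides how much new information is injected per item
    gate = 1 / (1 + np.exp(-np.concatenate([memory, retrieved], axis=1) @ W_gate))
    return gate * retrieved + (1 - gate) * memory     # (M, d)

rng = np.random.default_rng(0)
M_items = gated_memory_update(rng.normal(size=(10, 8)),
                              rng.normal(size=(32, 8)),
                              rng.normal(size=(16, 8)))
print(M_items.shape)  # (10, 8)
```

Because the attention weights are conditioned on each memory item, an item only absorbs information from queries that resemble it, which is the compartmentalization property described above.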
$\textbf{Regarding state-of-the-art claim}$
3. The reason we did not add MNAD [24] and MemAE [8] as baselines in Table 1 of our paper is that they are anomaly detection models for the computer vision domain. The performance of MNAD and MemAE is shown in Table 3 of our paper as the result of an ablation study on the memory module. In that experiment, only the memory module is modified while all other experimental conditions, including the pipeline and architecture, are kept identical to those used in MEMTO. Therefore, they cannot be considered independent models for multivariate time series anomaly detection tasks, as the only difference lies in the memory module itself. Also, while USAD computes the anomaly score at the window level to determine whether each window is anomalous, we compute the anomaly score at the timestamp level to ascertain the anomalousness of each time point. Therefore, we only used models that compute the anomaly score at the timestamp level as our baselines. Moreover, we agree that adding error bars is a valid way to properly demonstrate the state-of-the-art results and provide a more comprehensive understanding of the experimental outcomes. We provide some additional experiments comparing with the latest state-of-the-art model, the Anomaly Transformer (A.T.), on five real-world benchmark datasets. The results can be seen in Table 1 of the PDF, in the section on global response.
$\textbf{Regarding methodology}$
4. The latent representations obtained from the transformer encoder for each timestamp consider the temporal dependency information between timestamps. On the other hand, the raw timestamp values in the input space do not inherently account for temporal dependencies with other timestamps, making them unsuitable for accurately learning the normal prototypical pattern of the time series. As you pointed out, not explicitly mentioning this in the main script could lead to ambiguity. We will clearly note this in our revised paper.
5. As mentioned in lines 243~247, we empirically observed that updating memory items incrementally can result in training instability if the items are initialized randomly. Initializing the memory module with K-means clustering enables injecting an inductive bias of normal prototypical patterns into the memory items. By providing the memory items with informative initial values (a rough prototypical feature of the normal pattern), the model starts the second phase of the two-phase training paradigm with a better understanding of the normal pattern, which can lead to more stable and efficient learning compared to simply using random initialization. When experimented with multiple random seeds (using the two-phase training paradigm), MEMTO consistently demonstrates stable and robust detection performance. This result indicates that the iteration count, a hyper-parameter of the K-means algorithm, is large enough for our model to achieve reliable performance.
6. Thank you for careful checking of our paper. As you rightly pointed out, we will correct it as ‘Algorithm 1 outlines the memory module initialization with K-means clustering’ in our revised version.
$\textbf{Other less important questions}$
7. In the case of threshold selection, you can refer to Appendix A Training details (line 440~442).
8. The computational analysis comparing MEMTO and the Anomaly Transformer is explained in the response to the first question (Regarding novelty). To quantitatively verify the computational efficiency of MEMTO, we conduct additional experiments measuring the inference time of MEMTO compared to the Anomaly Transformer on five real-world benchmark datasets. The results in Table 4 of the PDF, in the section on global response, show that although the training time for MEMTO is longer than that of the Anomaly Transformer, its inference time in the real-time detection process is shorter.
Since we agree that computation overhead is a crucial measurement for real-world applications, we will include this content in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the additional experiments. All my concerns were addressed, especially regarding the experimental and novelty parts. I raise my recommendation accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your comment
Comment: We are grateful to you for dedicating time to read our clarifications and for reevaluating your assessment of our work. | Summary: The paper uses a memory network to capture frequently occurring normal patterns in the dataset. During training, the memory is built in the latent space within an autoencoder framework. Along with the reconstruction error, the distance between a representation and its closest memory unit is used to compute the anomaly score. If a sample does not have a representation close to any memory item, it is flagged as an anomaly.
Strengths: The idea of keeping more than one memory item for common patterns from normal data is novel and makes more sense than existing methods of keeping a memory of anomalies. Experimental results were produced on a large number of datasets.
Weaknesses: The literature survey misses graph-based anomaly detection techniques for multivariate time series, such as Deng, Ailin, and Bryan Hooi. "Graph neural network-based anomaly detection in multivariate time series." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, pp. 4027-4035. 2021.
An ablation study in the main paper is required to bring out the significance of the memory gate and K-means clustering.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In most cases, the F1 score for the proposed method is close to the A.T. benchmark. Did you perform a statistical significance test?
Did you run your experiments with different random train-test splits? If so, what is the variance of the accuracy?
The F1 scores for only LSD and only ISD are quite low. Did you note down whether there are more FPs or FNs in both cases?
How is the required number of memory units decided?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The number of majority patterns in the normal data needs to be known beforehand, which is not possible in an unsupervised task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your careful reading of our paper, and your feedback. Your insightful feedback has guided us in expanding and improving our paper.
$\textbf{Weaknesses}$
$\textbf{Literature surveys missed graph based anomaly detection techniques for multivariate time series}$
We will also include graph-based anomaly detection methods for multivariate time series, such as papers (1), (2), and (3), in the literature review.
(1) Deng, Ailin, and Bryan Hooi. "Graph neural network-based anomaly detection in multivariate time series." In Proceedings of the AAAI conference on artificial intelligence, vol. 35, no. 5, pp. 4027-4035. 2021.
(2) Zhang, Weiqi, Chen Zhang, and Fugee Tsung. "Grelen: Multivariate time series anomaly detection from the perspective of graph relational learning." Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22. Vol. 7. 2022.
(3) Zhao, Hang, et al. "Multivariate time-series anomaly detection via graph attention network." 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020.
$\textbf{An ablation study in the main paper is required to bring out the significance of the memory gate and K-means clustering.}$
The key aspect of the Gated memory module is the update gate, which is the most significant point of differentiation compared to MNAD and MemAE. Therefore, to assess the capability of our proposed Gated memory module mechanism, we conduct an ablation study comparing MEMTO with other memory modules in Section 4.3 (line 232-242) and present the following results in Table 3. Moreover, as for ablation study on memory module initialization with K-means clustering, experimental results in Table 3 provide strong evidence of its effectiveness (see section 4.3 Ablation study line 243-249).
$\textbf{Questions}$
$\textbf{Most cases the F1 score for the proposed method is close to the A.T benchmark. Did you do a statistical significance test?}$
We had not performed statistical significance tests in the manuscript since it was not generally done in the previous works. However, as you requested, we conduct additional experiments with varying random seeds to verify the mean and standard deviation of the performance (see Table 1 of the PDF, in the section on global response). We also perform statistical tests comparing our results to the latest state-of-the-art model, Anomaly Transformer. We conduct a t-test to prove a significant performance difference between MEMTO and Anomaly Transformer. The results show a p-value less than 0.05 across all datasets, confirming a significant performance difference between the two models (see Table 1 of the PDF, in the section on global response).
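For concreteness, the significance test described above can be sketched as a Welch two-sample t statistic over per-seed scores. The arrays below are placeholder values, not the paper's actual F1 numbers, and the exact test used in the rebuttal may differ in form:

```python
import numpy as np
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / sqrt(va + vb)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# placeholder per-seed F1 scores (NOT the paper's numbers)
memto_f1 = [0.96, 0.95, 0.97, 0.96, 0.95]
at_f1    = [0.93, 0.92, 0.94, 0.93, 0.92]
t, df = welch_t(memto_f1, at_f1)
print(round(t, 2), round(df, 1))  # 5.67 8.0
```

The p-value then follows from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats.t.sf`).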
$\textbf{Did you run your experiments with different random train-test splits? If so, what is the variance of the accuracy?}$
We did not run the experiment for different random train-test splits. The reason is that the train dataset and test dataset from the benchmarks are pre-split and designated when downloaded from the archive, where the anomaly labels exist only in the test dataset.
$\textbf{The F1 score for only LSD and only ISD is quite low. Did you note down if there are more FP or FN in both cases?}$
In Table 2 in our paper, we provided the F1 score as the only performance metric. Nonetheless, in our experiments, we also recorded precision and recall values. The results we obtained can be seen in Table 2 of the PDF, in the section on global response. Both precision and recall values are lower in the "Only LSD" and "Only ISD" cases compared to the "LSD+ISD" case. This indicates that in both cases, we have more false positives (FP) and false negatives (FN) compared to the combined approach (LSD+ISD).
$\textbf{How is the required number of memory units decided?}$
In the Supplementary material, Appendix C.2 (line 478), there is an experiment conducted on the number of memory items. Through this experiment, we discovered that MEMTO is robust to the number of memory items and works effectively even with a small number of items. Considering computational efficiency, we chose to use only 10 memory items as the default value in our experiments.
$\textbf{Limitations}$
$\textbf{The number of majority patterns in the normal data needs to be known beforehand, which is not possible in an unsupervised task.}$
The number of memory items, identical to the number of prototypical patterns of normal data, is not a known prior value but a hyperparameter. We experimented by varying this hyperparameter in Appendix C.2 and found that MEMTO shows robust performance regardless of its value.
---
Rebuttal Comment 1.1:
Comment: I thank Author for answering all my questions.
---
Reply to Comment 1.1.1:
Title: Thank you for your comment
Comment: We are pleased that all of your questions have been resolved, and we are grateful for your engagement. | Summary: The authors of this paper focused on tackling the problem of over-generalization in reconstruction-based deep models and made a contribution to the field of multivariate time series anomaly detection. The main challenge they encountered was dealing with complex dependencies and inter-variable correlations within the data. To address this, they proposed a novel gated memory module approach. This module learns how much each memory item should be updated based on the input data, effectively capturing the prototypical features of normal patterns in the data. By doing so, it aims to overcome the inability of existing models to capture dynamic nonlinear temporal dependencies and complex inter-variable correlations.
Strengths: • The authors explained the importance of multivariate time series anomaly detection with the proper real-world examples
• Comprehensive comparison of their approach with traditional methods (OC-SVM, Deep-SVDD, LSTM-VAE, Anomaly Transformer etc.)
• Thorough explanation of the training hyper-parameters, dataset setting, algorithms and code
• Achieved state-of-the-art results on five real-world benchmark datasets
Weaknesses: • Lack of theoretical proof: two-phase training approach and bi-dimensional deviation-based criterion need a detailed theoretical proof
• Limited discussion of computational efficiency: how the gated memory module and two-phase training affect the training time
• I couldn’t find any explicit explanation of how LSD and ISD help the training. In the descriptions, LSD and ISD only appear in Table 2 (and Section 3.4). If LSD+ISD only help the anomaly score but do not help the training, this should be clearly justified.
• More quantitative experiments are needed: the improvement from the gated memory module seems limited (Table 3).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: • How does the proposed method compare to existing approaches in terms of computational efficiency? Were any considerations or optimizations implemented to ensure its feasibility in real-time or resource-constrained environments?
• What led the authors to choose K-means clustering for initializing memory items? Is K-means considered the most effective approach for this task, particularly in terms of computational efficiency?
• How does the model behavior change if sub-series are generated with the help of overlapping sliding window?
• Since K-means clustering creates prototypes based on the seen data items, how does the model handle unseen cases? Setting aside MEMTO's ability to control the degree of new information injected into existing memory items, what is the contribution of MEMTO compared to MNAD?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Please refer to the Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your dedicated review of our work. Your critical feedback has helped us to extend and refine the paper. We provide a detailed response to your comments.
$\textbf{Weakness}$
- $\textbf{Lack of theoretical proof}$: We agree that we lack theoretical proof, and we have mentioned this in the Conclusion (line 286). We plan to address this as a topic of our future research.
- $\textbf{Limited discussion of computational efficiency}$: As you suggested, we measured the training time with and without the Gated memory module and the two-phase training paradigm, respectively. The results are presented in Table 4 of the PDF, in the section on global response. Training MEMTO without the Gated memory module reduces both the number of parameters and the training time, but it leads to a performance decrease of 26.27%p. Also, while the two-phase training paradigm requires approximately 2.45 times more training time compared to its absence, the inference time remains the same. As shown in Table 3 in our paper, using the two-phase training paradigm improves the performance by 10.4%p, leading us to apply it at the cost of increased training time.
- $\textbf{Explanation of LSD and ISD}$: As you can see in Section 3.4, LSD and ISD are used in our proposed anomaly score computation scheme, which reflects how anomalous a given timestamp is. As is conventional in anomaly detection tasks, the anomaly score is only used for inference and is completely irrelevant to the training process. Table 2 in our paper represents an ablation study where we experimented with varying methods of calculating the anomaly score while performing inference on the test dataset with MEMTO.
- $\textbf{Quantitative experiment of Gated memory module}$: The 'Statistical analysis of LSD' experiment conducted in Section 4.4 is related to the Gated memory module (See line 251). To verify if the Gated memory module effectively captures the prototypical normal patterns of the data, we evaluated the distances from the memory items within the Gated memory module to queries of normal timestamps and to queries of abnormal timestamps, respectively.
$\textbf{Questions}$
- $\textbf{Computational efficiency of MEMTO / Considerations for real-world applications}$: We compare the computational efficiency between MEMTO and the previous state-of-the-art model, the Anomaly Transformer. The Anomaly Transformer computes and retains series-association and prior-association per encoder layer, then averages the association discrepancies across layers. Performing this process during inference is computationally demanding, while MEMTO, requiring only dot product operations between each query and a few memory items, is simpler and results in shorter inference time. We conducted an additional experiment to verify the effectiveness of MEMTO compared to the previous state-of-the-art model in terms of inference time. The results can be seen in Table 4 of the PDF, in the section on global response. Training MEMTO requires only four GTX 1080 GPUs, and the training time was about 2 minutes on the SWaT dataset, which can be considered low computational cost and short training time compared to recent deep learning models. To ensure fast and effective operation in real-world applications, we utilized only ten memory items within the Gated memory module and deployed just two MLP layers in the decoder, which keeps the number of parameters small.
- $\textbf{Reasons to use K-means clustering for initializing memory items}$: Since our objective in initializing memory items was to quickly acquire rough normal prototypical patterns rather than precise ones, we chose K-means clustering, which has relatively low computational cost. Given $n$ as the number of data points, the time complexity of K-means clustering is $O(n)$, whereas those of Hierarchical clustering and Expectation-Maximization clustering are both $O(n^3)$. Considering computational efficiency, we ultimately opted for K-means clustering.
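The initialization step above can be sketched in a few lines of numpy: run a few Lloyd iterations over (a sample of) the latent queries and take the resulting centroids as the initial memory items. The function name, defaults, and empty-cluster handling below are our illustrative assumptions; the actual implementation may use a library K-means:

```python
import numpy as np

def kmeans_init_memory(queries, k=10, iters=20, seed=0):
    """Initialize k memory items as K-means centroids of latent queries."""
    rng = np.random.default_rng(seed)
    # start from k distinct queries (fancy indexing copies the rows)
    centroids = queries[rng.choice(len(queries), size=k, replace=False)]
    for _ in range(iters):
        # assign each query to its nearest centroid (squared Euclidean)
        d = ((queries[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = queries[labels == j]
            if len(members):  # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids

rng = np.random.default_rng(1)
latent_queries = rng.normal(size=(200, 8))  # e.g. 10% sample of train queries
memory_init = kmeans_init_memory(latent_queries, k=10)
print(memory_init.shape)  # (10, 8)
```

Each Lloyd iteration is linear in the number of queries (for fixed `k` and dimension), which matches the $O(n)$ argument above.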
- $\textbf{Overlapping sliding window vs non-overlapping sliding window}$: We did not employ an overlapping sliding window when constructing sub-series due to computational inefficiency during the model's training and inference phase. Using an overlapping sliding window increases the size of the training dataset, which in turn augments the computational cost required to train the model. Also, when performing inference on the test dataset, the computational cost increases as the anomaly score for overlapping timestamps needs to be calculated multiple times and averaged.
We observe the following changes in performance when using an overlapping sliding window (see Table 3 of the PDF, in the section on global response).
- $\textbf{How does the model handle unseen cases during memory module initialization with K-means clustering?}$: For K-means clustering, we randomly sampled 10% of the training data under the assumption of iid (independent and identically distributed). Given this assumption, we believe the distribution of the remaining 90% of the training dataset, not used for clustering, is similar to that of the sampled 10%.
- $\textbf{The contribution of MEMTO compared to MNAD}$: We agree that the structure of the Gated memory module in MEMTO is similar to the memory module in MNAD, if we set aside the ability to control the degree of injecting new information into existing memory items. However, MEMTO simplifies the anomaly score computation process with its bi-dimensional deviation-based detection criterion, which consists of the multiplication of normalized LSD and ISD, where LSD functions as a weight amplifying the normal-abnormal gap in ISD. More detailed explanations regarding this question are elaborated in the global response.
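The bi-dimensional criterion described above can be illustrated with a short numpy sketch: LSD is the distance from each timestamp's query to its nearest memory item, ISD is the input-space reconstruction error, and a normalized LSD weights ISD. The softmax normalization and all names here are our illustrative assumptions; the paper's Eq. (11) may differ in its exact form:

```python
import numpy as np

def anomaly_score(queries, memory, x, x_hat):
    """Per-timestamp bi-dimensional deviation score (illustrative form).

    LSD: distance from each timestamp's query to its nearest memory item.
    ISD: per-timestamp reconstruction error in the input space.
    Normalized LSD weights ISD, amplifying the normal-abnormal gap.
    """
    d = ((queries[:, None, :] - memory[None, :, :]) ** 2).sum(-1)  # (T, M)
    lsd = d.min(axis=1)                                            # (T,)
    isd = ((x - x_hat) ** 2).sum(axis=-1)                          # (T,)
    w = np.exp(lsd - lsd.max())
    w /= w.sum()                                                   # softmax-normalized LSD
    return w * isd

rng = np.random.default_rng(2)
scores = anomaly_score(rng.normal(size=(16, 8)),   # queries (T, d_model)
                       rng.normal(size=(10, 8)),   # memory items (M, d_model)
                       rng.normal(size=(16, 4)),   # input window (T, d_in)
                       rng.normal(size=(16, 4)))   # reconstruction
print(scores.shape)  # (16,)
```

A timestamp far from every memory item (large LSD) and poorly reconstructed (large ISD) gets a multiplicatively amplified score, which is the intended normal/abnormal separation.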
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the additional information in the rebuttal. I have decided to adjust my score to "Borderline reject" as most of my concerns have been addressed. Nonetheless, I remain uncertain about the choice of using K-means clustering for initialization due to its computational efficiency. I am not yet convinced that this is the optimal approach, which is why I refrained from elevating my score to the acceptance level.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer pZZS
Comment: Dear Reviewer pZZS,
We would like to express our gratitude for taking the time to carefully read our rebuttal.
As mentioned in the rebuttal, we propose a two-stage training paradigm that uses a clustering method to set the initial value of memory items to the approximate normal prototypical pattern of the data. As you pointed out, there is no guarantee that K-means clustering is the optimal method. Our claim is not that 'K-means clustering is the optimal choice among various clustering methods,' but rather the effectiveness of a 'two-phase training paradigm that allows for the selection of an appropriate clustering algorithm depending on the type of dataset.' Table 3 in our manuscript shows that designating the initial value of memory items through this two-phase training paradigm is highly effective. According to the experimental results on five real-world datasets, MEMTO trained with the two-phase training paradigm outperforms MEMTO trained without it by an average of 8.4%p. Specifically, on the SMAP dataset, the difference is 26.54%p.
Rebuttal: We are very grateful to all the reviewers for your careful reading of our paper and helpful feedback. We will make sure to incorporate the parts that you suggested for clarity and reflect your feedback on the revised paper. We have compiled the results of additional experiments related to the reviewers' questions and comments in the attached PDF, and kindly ask you to review them. The following responses address common questions raised by several reviewers.
$\textbf{Comparing MEMTO with the Anomaly Transformer [40] in the aspect of anomaly criterion}$
Eq. (11) in our paper and Eq. (6) in [40] are the equations for calculating anomaly scores in the real-time detection process (inference). Since both belong to reconstruction-based models, they use the reconstruction error as the main criterion for distinguishing between normal and abnormal samples. However, the LSD in MEMTO and the Association Discrepancy in the Anomaly Transformer represent fundamentally different concepts. LSD, the distance between each time point’s query and its nearest memory item in latent space, employs the memory items’ nature of storing prototypical features of the normal patterns in the data. In contrast, Association Discrepancy, quantified by the distance between each time point’s prior-association and its series-association, utilizes the inherent normal-abnormal distinguishability of the association distribution. The Anomaly Transformer calculates and stores series-association and prior-association in memory for each encoder layer, and then averages the association discrepancies across multiple layers. Performing this process during inference is computationally demanding, while MEMTO, requiring only dot product operations between each query and a few memory items, is simpler, resulting in shorter inference time.
$\textbf{Comparing MEMTO with MemAE [8] and MNAD [24] in the aspect of overall architecture}$
Previous works [8] and [24] are both for the anomaly detection task in the computer vision domain. We proposed MEMTO, an algorithm that takes the characteristics of time series data into account and appropriately applies a memory module to the anomaly detection task in the time series domain for the first time. You can refer to Section 2 for a comparison with [8] and [24] in terms of the memory module mechanism. In particular, our proposed Gated memory module’s main distinction from [8] and [24] is written in lines 93~97. Also, to further alleviate the over-generalization problem in reconstruction-based models, we use a weak decoder, which requires a small number of parameters. While existing models, including both [8] and [24], typically use a symmetrical encoder-decoder architecture, we deviate from this norm by adopting an asymmetric model structure with a weak decoder. Leveraging a decoder with fewer parameters also makes MEMTO much more computationally efficient.
Pdf: /pdf/b733de2c348c6630a0b8973cca2583938998ae68.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Can Language Models Solve Graph Problems in Natural Language? | Accept (spotlight) | Summary: The applications of Large Language Models (LLMs) have been extended beyond natural language, covering more complex tasks that might have an implicit graph structure, such as planning for robots. This work introduces a benchmark named Natural Language Graph (NLGraph) to examine the explicit graph reasoning of advanced LLMs.
NLGraph contains eight basic graph reasoning tasks, varying from easy (connectivity, cycle, topological, ...) to hard (maximum flow, Hamilton path, ...). Based on NLGraph, this work has several findings:
* The current LLM (text-davinci-003) shows preliminary graph reasoning abilities. It achieves much better results than random guessing on easy tasks, for example, connectivity and cycle.
* Advanced prompting fails in more complex tasks.
* LLMs are not robust to spurious correlations.
They also propose two simple prompts to enhance the LLM on easy graph reasoning tasks.
Strengths: * This paper is well-motivated and finely written. As large language models (LLMs) grow increasingly proficient, we are in need of novel, challenging benchmarks to thoroughly assess their capabilities. The graph structure exemplifies data structures that are prevalent in real life.
* This benchmark is fairly comprehensive, encompassing the eight fundamental types of graph problems. It also forms subsets of varying difficulty levels by manipulating factors such as the number of nodes and density.
* The findings are both intriguing and illuminating. For instance, the revelation that current advanced prompt strategies cannot generalize to complex tasks could potentially motivate research into exploring innovative techniques.
Weaknesses: The evaluations are only conducted on text-davinci-003, therefore, some conclusions may be limited to this model.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What is the correspondence between, train, main, and test splits provided in the supplementary material and the standard and extended set in the text?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discuss limitations comprehensively in the supplementary material, partly as future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and positive feedback!
> The evaluations are only conducted on text-davinci-003, therefore, some conclusions may be limited to this model.
While we evaluate text-davinci-003 on the whole dataset, we also provide the results of code-davinci-002 on several tasks in the NLGraph benchmark, which show similar trends. Moreover, we give qualitative analysis results of GPT-3.5-turbo and GPT-4 in the appendix. As we will make the NLGraph benchmark publicly available, we hope future work will further explore other LMs on the NLGraph benchmark.
> What is the correspondence between the train, main, and test splits provided in the supplementary material and the standard and extended sets in the text?
Main splits are split into train splits and test splits. The supplementary material only contains the standard set in the text, which we use to run all the experiments. We will make the extended set publicly available after the anonymity period. | Summary: The paper curates a benchmark dataset called NLGraph that contains 8 types of graph reasoning problems. Given that various previous works have used LLMs to solve real-world tasks that implicitly require some form of simple graph reasoning, this dataset aims to isolate and analyze the ability of LLMs to solve graph reasoning problems. The paper tests various existing prompting strategies and also proposes two new strategies specifically useful for solving graph reasoning problems. The newly proposed strategies are evaluated on the proposed NLGraph dataset and are shown to improve the reasoning performance.
Strengths: ### originality
The paper is the first to curate a dataset aimed at isolating the graph reasoning abilities of LLMs in the abstract. This is complementary to all the works that use LLMs to solve tasks that implicitly require graph reasoning, and thus can provide valuable insights that can be useful in real applications.
### quality
1. The discussion about the limitations of the work (in the appendix) clearly states the scope of applicability of their finding.
2. The experiments are well-designed, especially the ones demonstrating the brittleness of LLMs.
### clarity:
The paper is well-organized, clearly written, and easy to follow.
### significance
The curated dataset and the experimental setup proposed in the paper can be a good starting point for further exploration of the graph reasoning abilities of LLMs, making this work useful for the community.
Weaknesses: ### quality:
1. The performance of the model depends heavily on how the prompts are constructed. While Figure 1 shows the instructions used in the prompts clearly, to ensure reproducibility, the complete prompts for more complex strategies like COT should be provided.
### Significance:
1. The motivation behind choosing the eight specific graph reasoning tasks is lacking. The ultimate goal of performing graph reasoning using LLMs is to use it to solve simple real-world tasks where graph reasoning is required. Given this motivation, the authors should provide additional justification for picking the reasoning tasks used in the paper.
2. For the findings to apply to real-world data, the proposed prompting strategies should be tested with graphs that have real-world semantics associated with nodes and edges. For example, representing nodes as numbers might allow the model to easily decouple the inherent meaning of the nodes from the meaning in the context of the given graph. This might not be an easy task if the nodes were a set of words, phrases, or actions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. Line 160 states how the easier tasks were further subdivided into easy, medium, and hard. How are the more algorithmically challenging tasks categorized (Line 162)? Does it make sense to use the number of valid solutions for categorizing difficulty?
2. How are the exemplars selected? How does the performance vary with the selection? Does it show high or low variance?
3. In the zero-shot case, when there is no way to ensure the proper output formatting, how do you perform automated evaluation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations discussed in the appendix are satisfactory. However, the authors should provide a reference to this discussion in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our novel contributions as well as raising valuable questions.
> The performance of the model depends heavily on how the prompts are constructed. While Figure 1 shows the instructions used in the prompts clearly, to ensure reproducibility, the complete prompts for more complex strategies like COT should be provided.
We will include all prompts in the appendix of the final version. Here we give a CoT example of the topological sort task:
In a directed graph with 5 nodes numbered from 0 to 4:
node 0 should be visited before node 4
node 0 should be visited before node 1
node 2 should be visited before node 1
node 3 should be visited before node 2
Q: Can all the nodes be visited? Give the solution.
A: Since no nodes are required to be visited before node 0 and node 3, node 0 and node 3 can be visited first.
Since visiting node 4 requires visiting node 0 first, after node 0 is visited, node 4 can be visited.
Since visiting node 2 requires visiting node 3 first, after node 3 is visited, node 2 can be visited.
Since visiting node 1 requires visiting node 0, node 2 first, after node 0 and node 2 are visited, node 1 can be visited. So the solution is: 0,3,4,2,1.
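Whether such a chain-of-thought answer constitutes a valid topological order can be checked mechanically against the precedence constraints; a minimal sketch (an illustration of ours, not part of the paper's evaluation pipeline):

```python
def is_valid_topological_order(order, constraints):
    """Check that the order visits each node exactly once and that every
    (a, b) constraint, meaning "a must be visited before b", is respected."""
    if len(set(order)) != len(order):
        return False
    position = {node: i for i, node in enumerate(order)}
    return all(a in position and b in position and position[a] < position[b]
               for a, b in constraints)
```

On the example above, the order `0,3,4,2,1` satisfies all four constraints, while an order such as `1,0,3,2,4` violates "node 0 before node 1".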
> The motivation behind choosing the eight specific graph reasoning tasks is lacking. The ultimate goal of performing graph reasoning using LLMs is to use it to solve simple real-world tasks where graph reasoning is required. Given this motivation, the authors should provide additional justification for picking the reasoning tasks used in the paper.
We discussed NLP tasks and graph tasks alignment in section A in the appendix. We will incorporate more of it in the final version given the additional page.
> For the findings to apply to real-world data, the proposed prompting strategies should be tested with graphs that have real-world semantics associated with nodes and edges. For example, representing nodes as numbers might allow the model to easily decouple the inherent meaning of the nodes from the meaning in the context of the given graph. This might not be an easy task if the nodes were a set of words, phrases, or actions.
Thanks for your suggestion. It is certainly a good point, though our work is an initial study on this topic. Nevertheless, we provide the results of changing “nodes” to “cities”, “edges” to “roads”, and “weights” to “distances” in the shortest path task in Table 9.
> Line 160 states how the easier tasks were further subdivided into easy, medium, and hard. How are the more algorithmically challenging tasks categorized (Line 162)? Does it make sense to use the number of valid solutions for categorizing difficulty?
More algorithmically challenging tasks are divided into easy and hard subsets.
> How are the exemplars selected? How does the performance vary with the selection? Does it show high or low variance?
We selected exemplars according to the empirical performances on each task. The specific process is similar to the experiments in Figure 7 in the appendix. The performance varies when the exemplars are selected from different difficulties. Specifically, the performance drops when the exemplar difficulty increases.
> In the zero-shot case, when there is no way to ensure the proper output formatting, how do you perform automated evaluation?
Though the zero-shot setting does not ensure that all outputs follow the same format, there are a limited number of patterns in the LLM responses, which we use for automated evaluation. For instance, in the cycle task, “there is no cycle”, “there is a cycle” and “which creates a cycle” appear across all outputs, while no output contains two or more of the three patterns, so we simply match these phrases in the output to obtain the results. We will publicly release the LLM responses and our automated evaluation code upon acceptance.
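The pattern-matching procedure described in this response can be sketched as follows (the phrase list is taken from the examples above; the actual released evaluation code may differ):

```python
def score_cycle_response(response, has_cycle):
    """Map a free-form zero-shot answer to a yes/no prediction by phrase
    matching, then compare with the gold label.  Returns None when no known
    pattern (or both) is found, so the case can be inspected manually."""
    text = response.lower()
    yes = ("there is a cycle" in text) or ("which creates a cycle" in text)
    no = "there is no cycle" in text
    if yes == no:  # zero or both patterns matched: ambiguous response
        return None
    return yes == has_cycle
```

For example, `score_cycle_response("Yes, there is a cycle in this graph.", True)` marks the prediction correct, while a response containing none of the phrases is flagged for manual review.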
> The limitations discussed in the appendix are satisfactory. However, the authors should provide a reference to this discussion in the main paper.
Thank you for your suggestion. We have added a reference to the limitation discussion in the revised paper.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thank you for answering my questions and for providing clarifications. I have no further questions. | Summary: This paper investigates whether large language models (LLMs) are able to solve graph algorithm problems in natural language. A benchmark NLGraph contains 29,370 problems, covering 8 graph reasoning tasks with varying complexity from simple tasks such as connectivity, cycle, and shortest path to more complex problems such as topological sort, maximum flow, bipartite graph matching, Hamilton path, and simulating graph neural networks. Various prompting techniques have been applied to evaluate the capabilities of LLMs for performing complex reasoning on graphs. Several valuable insights are provided based on the experimental results and analyses. Two prompting techniques Build-a-Graph and Algorithmic Prompting are further introduced for enhancing graph reasoning.
Strengths: 1. The proposed NLGraph benchmark is comprehensive, covering intuitively simple to more sophisticated tasks. Also, as the benchmark is synthetic, answers are unlikely to appear in the pretraining corpus of LLMs, making it a more robust benchmark for evaluating complex reasoning with LLMs. I think it would be a challenging and valuable testbed for evaluating the graph reasoning abilities of large language models.
2. Extensive experiments and analyses are conducted on the proposed benchmark. Specifically, results of the text-davinci-003 model with various prompting techniques (e.g. Chain-of-Thoughts, Least-to-Most, Self-Consistency) are reported. Such results and analyses help the research community to better understand the capabilities/behavior of LLMs for graph reasoning or even complex reasoning. The four key phenomena summarized in the paper are quite counter-intuitive and thought-provoking.
3. The paper is clear and well-organized.
Weaknesses: I only have several minor concerns/questions about this paper as follows:
1. Using a programming style of prompting techniques (e.g. PAL[1], PoT[2]) for solving these graph reasoning tasks is more intuitive. It could potentially address the problem of generating too many tokens of code-davinci-002.
2. Except for the results of text-davinci-003, only part of the results of code-davinci-002 are reported (i.e. cycle, shortest path, and hamilton path). For GPT-3.5-turbo and GPT-4, results on only 19 problems across the 8 tasks are provided. I am not sure whether the four key findings listed are universal across LLMs. It would be interesting to know the overall performance of other LLMs on the benchmark. It would shed light on this problem.
3. It seems to me that including the Graph Neural Networks (GNNs) task is not well-motivated. Why do we want the LLM to perform graph convolution on a two-dimension node embedding? We already have an efficient way to calculate it even for node embeddings that have hundreds or thousands of dimensions. If the LLM can use tools (e.g. calculator, python interpreter), why do we want it to perform large number calculations itself? Similarly, LLMs can write the code of different GNNs to perform message propagation.
References:
[1]PAL: Program-aided Language Models.
[2]Program of Thoughts Prompting.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful and constructive feedback!
> Using a programming style of prompting techniques (e.g. PAL[1], PoT[2]) for solving these graph reasoning tasks is more intuitive. It could potentially address the problem of generating too many tokens of code-davinci-002.
Yes, this is a very good point! Though we considered a programming style of prompting, our concern is that LLMs are trained on a large amount of text, including code for solving these graph problems, so they could easily recite the code to solve them; the evaluation results might then not truly reflect reasoning ability. Nevertheless, this could be good follow-up work based on the dataset we release.
> Except for the results of text-davinci-003, only part of the results of code-davinci-002 are reported (i.e. cycle, shortest path, and hamilton path). For GPT-3.5-turbo and GPT-4, results on only 19 problems across the 8 tasks are provided. I am not sure whether the four key findings listed are universal across LLMs. It would be interesting to know the overall performance of other LLMs on the benchmark. It would shed light on this problem.
It will certainly be interesting to see how more recent LLMs (e.g., GPT-4) perform on the NLGraph benchmark. However, due to monetary costs, and since we will make the NLGraph benchmark publicly available, we leave this for future work.
> It seems to me that including the Graph Neural Networks (GNNs) task is not well-motivated. Why do we want the LLM to perform graph convolution on a two-dimension node embedding? We already have an efficient way to calculate it even for node embeddings that have hundreds or thousands of dimensions. If the LLM can use tools (e.g. calculator, python interpreter), why do we want it to perform large number calculations itself? Similarly, LLMs can write the code of different GNNs to perform message propagation.
The main purpose of the GNN task is to evaluate LLMs' graph reasoning (propagation of information) abilities, not to have LLMs perform GNN operations in real use. As the GNN task requires LMs to consider relations in the graph and perform arithmetic operations simultaneously, we consider it a good task for evaluating graph reasoning abilities.
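To illustrate what the GNN task asks a model to simulate (a sketch of ours; the benchmark's exact operator and embedding dimension may differ), one round of the simplest message passing replaces each node's two-dimensional embedding with the sum of its neighbours' embeddings:

```python
def propagate(embeddings, edges):
    """One round of sum-aggregation over an undirected graph:
    each node's new embedding is the sum of its neighbours' embeddings."""
    n = len(embeddings)
    neighbours = {v: [] for v in range(n)}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    return [
        [sum(embeddings[w][d] for w in neighbours[v]) for d in range(2)]
        for v in range(n)
    ]
```

Performing this relational bookkeeping and arithmetic jointly, in natural language, is what makes the task a probe of graph reasoning rather than a practical GNN substitute.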
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I will keep my rating as accepted (8) for this paper. | Summary: This work tried to answer the question, "Are LLMs capable of mapping textual descriptions of graphs and structures to grounded conceptual spaces and solving graph algorithm problems explicitly with natural language?" The answer to this question has profound implications for large language model applications with implicit graphs and structures, the reasoning ability of LLMs in advanced and graph-based settings, and more. This paper proposes the Natural Language Graph (NLGraph) benchmark, a comprehensive testbed of graphs and structured reasoning designed for language models and in natural language. Extensive experiments on the NLGraph benchmark demonstrate that:
1. LLMs do possess preliminary graph reasoning abilities.
2. The benefit of advanced prompting methods diminishes with complex problems.
3. In-context learning from examples does not occur on complex graph reasoning problems.
4. LLMs are (un)surprisingly brittle to spurious correlations in problem settings.
To improve large language models as better graph reasoners, this work proposes two instruction-based prompting approaches to better elicit the graph reasoning abilities of large language models. Build-a-Graph prompting encourages LLMs to map the textual descriptions of graphs and structures to grounded conceptual spaces before tackling the specific problem through a one-sentence instruction, while Algorithmic prompting instructs LLMs to revisit the algorithmic steps for a given task before learning from in-context exemplars.
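Both techniques amount to prepending a short instruction to the graph description before the question; a schematic of how such a prompt could be assembled (the instruction strings here are paraphrases for illustration, not necessarily the paper's exact wording):

```python
def build_prompt(graph_description, question, strategy):
    # Hypothetical instruction strings, for illustration only.
    instructions = {
        "build-a-graph": "Let's construct a graph with the nodes and edges first.",
        "algorithmic": "We can solve this by revisiting the algorithm for the "
                       "task and applying its steps one by one.",
    }
    prefix = instructions.get(strategy, "")
    return f"{graph_description}\n{prefix}\nQ: {question}\nA:"
```

The instruction is a single added sentence, which is what makes both methods cheap relative to exemplar-heavy strategies such as chain-of-thought.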
Strengths: 1. The motivation of this work is to answer the question, "Are LLMs capable of mapping textual descriptions of graphs and structures to grounded conceptual spaces and solving graph algorithm problems explicitly with natural language?" is very interesting.
2. The extensive experiments are comprehensive. Experiments demonstrate that build-a-graph and algorithmic prompting successfully empower LLMs to better tackle graph reasoning problems, resulting in 3.07% to 16.85% performance gains across multiple tasks.
3. This paper provides the Natural Language Graph (NLGraph) benchmark, a comprehensive testbed of graphs and structured reasoning designed for language models and in natural language.
Weaknesses: This work is mainly on testing different prompt engineering strategies and in-context learning results. If we can include more LLM fine-tuning results, it will be of great help.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and positive feedback!
> If we can include more LLM fine-tuning results, it will be of great help.
It would be valuable and interesting to see LLM fine-tuning results, but it is too expensive to fine-tune LLMs such as GPT-3. As we will make the NLGraph benchmark publicly available, we leave it for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your reply! | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to the reviewers for providing us with valuable feedback. These constructive comments have been instrumental in improving the quality of our work. We are glad that our efforts have been well-received by the reviewers, and we are confident that their insightful feedback will further enhance the impact of our paper. With their guidance, we have outlined clear plans to incorporate their suggestions in the camera-ready version of the paper, and we are excited to present an even stronger final version. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper evaluates and studies the performance of state-of-the-art LLMs on graph reasoning-based tasks. For this, the authors construct a testbed of graph and structured reasoning tasks, comprising 29K problems on 8 graph reasoning tasks such as topological sorting and max flow. Each task has three subsets based on problem difficulty and the default metric is exact match accuracy. The comprehensive evaluation conducted by the authors shows that LLMs perform well on graph reasoning tasks but advanced-prompting algorithms such as CoT, L2M, Self-Consistency based COT, etc. do not yield performance improvements on complex tasks. The authors also show that the LLMs evaluated (mainly text-davinci-003, but also others from OpenAI) are brittle to spurious correlations in the problem settings. Finally, the authors propose two very simple, domain-specific prompting methods, named Build-a-Graph and Algorithmic Prompting, which improve upon zero-shot performance of text-davinci-003 on the tasks.
Strengths: 1. Comprehensive and Interesting: I really like this work. This is primarily an evaluation paper, and the authors do a great job of conducting a systematic and comprehensive study of LLM performance on graph reasoning tasks.
2. A Useful Resource for the Community: A number of reasoning problems could be reduced to implicit reasoning over graphs and the proposed benchmark, NLGraph will help the research community evaluate LLMs on such reasoning problems.
Weaknesses: 1. Lack of Systematic Comparisons between LLMs: Although I am empathetic to the plight of researchers conducting research on black-box models, only through an API, I feel the work would have been more valuable had the researchers also focused on a comparison between 3 generations of LLMs: GPT-3, GPT-3.5 and GPT-4. It would have illuminated a few things: the impact of instruction finetuning since the chat models are more aggressively instruction finetuned, the gains in performance across these models, and the impact of advanced prompting with scale/model generation. The qualitative analysis in Appendix E on 19 problems is insufficient in ascertaining any of the performance jumps.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. From Appendix E, it appears that GPT-4 is considerably more capable than GPT-3.5-Turbo. A number of other works have found GPT-4 to show considerably higher performance, thereby hitting the thresholds of practical utility. Can you comment on any of this behavior wrt the graph reasoning tasks?
2. I am surprised that zero-shot is the best performing method in Figure 2 (right, max-flow task). Similarly, I am finding it hard to understand why CoT underperforms zero-shot. How much time was spent iterating on CoT? Can you comment on whether this exact trend holds for GPT-4 as well?
3. To arrive at the Build-a-Graph and Algorithmic prompting methods, how many negative prompts did you iterate through? It is clear that domain specific contextual knowledge is being leveraged by LLMs to solve this task, but I am interested in the meta-question of how did you arrive these methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and helpful suggestions!
> Lack of Systematic Comparisons between LLMs
We believe it is interesting to see how more recent LMs (e.g., GPT-4) perform on the NLGraph benchmark. However, due to budget constraints, and since we will make the NLGraph benchmark publicly available, we leave this for future work.
> From Appendix E, it appears that GPT-4 is considerably more capable than GPT-3.5-Turbo. A number of other works have found GPT-4 to show considerably higher performance, thereby hitting the thresholds of practical utility. Can you comment on any of this behavior wrt the graph reasoning tasks?
According to the experiments we did, GPT-4 is indeed more capable than GPT-3.5-turbo. However, it still fails in some easy cases such as case 3 in Table 11, which humans can easily solve, and in many difficult cases, indicating it only possesses preliminary graph reasoning abilities.
> I am surprised that zero-shot is the best performing method in Figure 2 (right, max-flow task). Similarly, I am finding it hard to understand why CoT underperforms zero-shot. How much time was spent iterating on CoT? Can you comment on whether this exact trend holds for GPT-4 as well?
We have tried several versions of CoT on a small subset of the problems and finally chose the form that performed best.
Problems such as max-flow are rather difficult, even for humans. A human struggles to solve a max-flow task with about ten nodes, even when some exemplars are given. Due to this difficulty, LLMs fail to learn the correct way to generate intermediate steps or to learn from in-context exemplars. The extra text in CoT might be hard for the model to grasp and does not provide a positive impact. We believe the trend still holds for GPT-4, though experiments would be needed to confirm this.
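For context on why max-flow is so much heavier than the easy tasks: even the shortest textbook procedure, e.g. Edmonds-Karp (sketched below as our own illustration, unrelated to the paper's prompts), interleaves repeated BFS searches with bookkeeping over residual capacities:

```python
from collections import deque

def max_flow(n, capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    `capacity` maps directed edges (u, v) to their capacities."""
    residual = [[0] * n for _ in range(n)]
    for (u, v), c in capacity.items():
        residual[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow
        # Find the bottleneck along the path, then update residual capacities.
        bottleneck, v = float("inf"), sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

Tracking the residual graph across iterations is exactly the kind of multi-step state that is hard to convey through a handful of in-context exemplars.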
> To arrive at the Build-a-Graph and Algorithmic prompting methods, how many negative prompts did you iterate through? It is clear that domain specific contextual knowledge is being leveraged by LLMs to solve this task, but I am interested in the meta-question of how did you arrive these methods?
We first came up with a set of prompting methods that intuitively help solve graph reasoning problems, and then tested them. Build-a-Graph and Algorithmic prompting worked best. Other methods, such as asking LMs to answer some simple questions about graph properties before answering the true question, did not work well. We also tried variants of our prompting methods and provide the results in Table 7 in the appendix.
Time Series Kernels based on Nonlinear Vector AutoRegressive Delay Embeddings | Accept (poster) | Summary: The focus of this paper is a new type of kernel for reservoir computing (using nonlinear vector autoregressive delay embeddings) that is faster and more accurate than related kernels.
The three main contributions are:
* Introduction of a new kernel for time-series modeling
* State-of-the-art results on uni- and multivariate problems on real-world datasets (related work has used synthetic datasets).
* Theoretical connections to Takens' theorem and the field of state space reconstruction.
There are only a small number of (intuitive) hyperparameters for tuning the model. While this approach does not seem suitable for large datasets (where deep learning-based solutions reign), it appears to be a nice and efficient solution for smaller time-series problems.
Strengths: Overall the paper is well written. I don't have a deep understanding of reservoir computing but I think they authors do a nice job of connecting different technical areas. If I was working on problems in this area I would be strongly interested in building a deeper understanding of this paper.
The authors do a nice job of contextualizing the work, especially given that this is deviates from many recent deep learning oriented time-series papers.
The core idea behind the kernel is elegant and clearly has a large impact on efficiency of the approach.
I appreciate the discussions on asymptotic complexity and the qualitative notes and visualizations.
Weaknesses: It looks like the authors only use 122 UCR univariate datasets but my understanding is that the most recent UCR benchmark (since 2018) has been using 128 univariate datasets. Is there a good reason for this discrepancy? Similarly, the UEA corpus should have 30 datasets whereas only 22 are used in this paper.
Minor: it's challenging to interpret the graphs (especially Fig 2) when printed in black and white.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: At the end of section 2, there is discussion on the lack of stability of techniques for reservoir computing. Perhaps I'm being naive, but this seems obvious given that many of the parameters (e.g., W_in and A) are randomly initialized and not optimized over -- this means that parameters that are updated (e.g., W_out and c) need to compensate for potentially poor choices of W_in and A. I know this isn't something you introduced, but generally speaking why is this a reasonable idea?
The approach is described as "a simple concatenation of the input series with time-delayed copies and nonlinear functionals." Could this approach simply be implemented using convolutions (with dilations)? Would there be a more straightforward way of writing out this math in this case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper is more theoretical and does not touch on noteworthy societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank reviewer $\color{orange}{\textbf{XEWY}}$ for the thorough review and positive feedback.
We really appreciate the favorable comments on clarity of the writing and elegancy of the underlying idea, as well as positive remarks on how we bring together different research areas.
As for the raised concerns and questions, individual points are addressed below.
**W1. Is there a reason for benchmarking on a subset of datasets?**\
Among the univariate datasets, we have excluded 8 datasets with a high $N_{classes} / N_{train}$, which are then incompatible with our choice of stratified cross-validation for the SVM.
Results for those datasets can still be obtained by considering a shuffle split rather than a stratified one, which we report in Tab. 2 (attached pdf).
Among multivariate ones, we have excluded 6 high-dimensional datasets for which the application of the NVAR kernel is not appropriate, and 5 datasets for which SVM accuracy was $\ll 50 \%$ for all approaches, which indicates that kernel methods are, in general, not a suitable solution.
Thanks to the reviewer's comment, we now include 3 additional multivariate datasets that we did not originally consider.
Corresponding results are shown in Tab. 2.
Average accuracy for the additional univariate and multivariate datasets indicates superior performance of NVARk with respect to the baselines.
All results have been integrated into the experimental section of the manuscript.
**W2. Figures in black and white**\
We thank the reviewer for pointing this out, we have included a line where, for interpretation of colors, we refer the reader to the web version of the article.
**Q1. Why are unoptimized parameters in RC a reasonable idea?**\
One of the greatest advantages of keeping the parameters of the recurrent update unoptimized is to avoid exploding or shrinking the gradient when backpropagating through time.
This is complementary to the popular gating mechanisms (e.g. in LSTMs and GRUs) that keep the memory unaltered, but much more efficient.
Furthermore, despite their randomness, the reservoir dynamics have the potential to amplify the relevant features in the input data, actually reducing the burden on the linear readout.
However, if the choice of the hyperparameters (controlling such dynamics) is poor, an adverse effect is obtained.
**Q2. Could the approach be rewritten using dilated convolutions?**\
It is interesting to reason about this.
Convolutions can be cast either as products between the input and a sliding filter or as products between two delayed signals.
Regarding the first type, in our approach there is no weighting of the input dimensions until the final readout.
While this layer certainly involves some terms of the form $w_k x_{t-j}$, the non-linear terms are different.
As for these non-linear terms, they are products between the input and its lags.
It is then definitely possible to think of a rewriting in terms of causal convolutions of the input with itself, although the advantages are not fully clear to us.
For a full rewriting, one can think of merging both considerations.
However, at first sight, this seems far from straightforward.
We hope to have clarified all of the reviewer’s concerns and are happy to provide further details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the depth of thinking in all of your responses. Based on the responses here and the other reviews/comments I am increasing my score to 'accept.' | Summary: This paper presents a kernel that can be applied to time series - both
univariate and multivariate - that draws inspiration from reservoir
computing. Practically, it constructs an expanded time series by
sampling lags and then computing polynomial combinations of the
original and lagged data through time. Experiments show that the
kernel, when combined with an SVM, is more accurate and more efficient
than a number of relevant competitors.
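The expansion this summary describes (a series concatenated with lagged copies, plus polynomial combinations of the resulting dimensions) can be sketched in a few lines. This is an illustrative reconstruction based on the summary only, not the authors' released code; the function name `nvar_features` and the restriction to degree-2 products are assumptions.

```python
import numpy as np

def nvar_features(x, k=3, s=2):
    """Concatenate a series with k lagged copies (spacing s), then
    append all unique degree-2 products of the resulting dimensions."""
    T = len(x)
    t0 = k * s  # first index where all k lags are defined
    # linear part: x_t, x_{t-s}, ..., x_{t-k*s}
    linear = np.stack([x[t0 - j * s : T - j * s] for j in range(k + 1)], axis=1)
    # nonlinear part: unique pairwise products (including squares)
    i, j = np.triu_indices(linear.shape[1])
    nonlinear = linear[:, i] * linear[:, j]
    return np.concatenate([linear, nonlinear], axis=1)

x = np.sin(np.linspace(0, 6 * np.pi, 200))
Z = nvar_features(x, k=3, s=2)
print(Z.shape)  # (194, 14): 4 linear dims + 10 degree-2 products
```

With `k + 1 = 4` linear dimensions, the degree-2 part contributes $\binom{4+1}{2} = 10$ products, so each time step is mapped to a 14-dimensional state vector.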
Strengths: This paper pulls on an interesting thread of work in reservoir
computing (RC) that makes connections between RC and nonlinear vector
autoregressive models (NVARs). RC can be computationally expensive,
and the NVAR approach simply combines time series with lagged copies
and nonlinear functionals, like products. The paper further connects
the use of lags to Takens' theorem about reproducing noise-free
nonlinear dynamics through state space reconstruction using lagged
observations. The discussion about the various connections to this
work is interesting.
The paper is clear and relatively easy to read and understand. RC and
NVAR are summarized in a useful way without getting bogged down in
details, but with enough information to follow the presentation. The
figures are also very helpful in understanding the paper.
One potential weakness of the approach is the relatively large number
of hyperparameters, including the number and size of the lags, the
order of the polynomial regression, and the embedding dimension. The
paper, though, does a good job presenting and arguing for heuristic
choices for these parameters that seem to perform well in practice.
Finally, empirical results suggest that the proposed approach is as
accurate as the state of the art and more efficient.
Weaknesses: The primary contribution of the paper is a kernel that is accurate and
fast to compute. There are claims in the introduction where the
authors suggest that the kernel is more interpretable than competitors
though there is no discussion of that in the remainder of the paper.
The approach is conceptually simple, which is good - fit a model to an
augmented time series (lags and products) and use the model parameters
as a vector-based representation of the time series. The
hyperparameter choices - of which there are many - are made
heuristically. Some indication of the sensitivity of the approach to
variation in those choices would be good. That is, are the heuristics
a result of just those values working empirically; or is the method
highly sensitive to those choices?
The complexity analysis would benefit from a direct comparison.
That section claims that the proposed method does one expensive
computation (fitting a model) per instance, compared to other methods
that do an expensive computation per pair of instances. What is the
total complexity of the approaches? If competing approaches have a less
expensive per-pair cost, they may still be more efficient overall.
It's unclear what the kPCA plots add to the paper. What dataset is
represented in the figure? Why did you choose that dataset? I'm sure
there are other datasets for which the separation for rmESN is better
than for NVARk.
An ablation study in which the MTS were augmented with just lags or
with just products would help tease out which element is most
important. Takens says nothing about nonlinear features.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) How sensitive is the approach to hyperparameter choice, both in
terms of complexity and accuracy?
(2) Are both lags and products important for accuracy? Why or why not?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no such discussion in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank reviewer $\color{magenta}{\textbf{Nbgs}}$ for the extensive review and constructive feedback.
Among all, we really appreciate the kind words on the clarity of our paper and figures, as well as showing interest in how we connect our work to different research areas.
As for the raised concerns and questions, individual points are addressed below.
**W1. There is no discussion of interpretability in the remainder of the paper**\
For a discussion on our interpretability claims, we kindly refer the reviewer to our common response \#1.
**W2. How sensitive is the approach to hyperparameter choice?**\
In addressing this, we split hyperparameters into classes, as we believe a separate discussion is needed for $k$ and $s$.
- $\bar{d}_r$: as we show in Fig. 1 (top) (attached pdf) average performance for multivariate datasets are not considerably impacted in the range [75, 150]. As expected, we observe a significant drop of performance for high values (redundancy).
- $\lambda_{ridge}$: we show in Fig. 1 (bottom-left) little sensitivity in the region $[5, 50]$. Results are averaged across a pool of 20 randomly sampled univariate datasets. The employed OCReP method outperforms the best choice by $\sim 2\%$, as it adapts the value to each individual dataset.
- $\gamma_{rbf}$: in Fig. 1 (bottom-right), we range different multiplicative factors and show little sensitivity in a neighborhood of the unity.
- The sensitivity to $k$ and $s$ strongly depends on the dataset, although it generally tends to be high.
In response, we introduce our dataset-specific heuristic adapting ideas from a previous study [a], which we find works well in practice.
A reasonable idea of the average sensitivity to these parameters can be obtained by comparing the performance of NVARk, where $k$ and $s$ are chosen by the heuristic, to the performance of NVARk*, where $k$ and $s$ are optimized.
Unfortunately, understanding sensitivity by considering other settings is challenging, as the existing literature predominantly focuses on the noise-free case.
Furthermore, there is no strong evidence for the wider applicability of such approaches outside dynamical systems (see **Q1** from reviewer $\color{red}{\textbf{RJLc}}$).
**W3. What is the total complexity of the approaches?**\
As evidenced in Section 3.4, NVARk exhibits an overall complexity of $\mathcal{O}(NT\bar{d}^2_r)$.
SINK and GAK report per-pair costs of $\mathcal{O}(T \log(T))$ and $\mathcal{O}(\min(T_1, T_2))$ respectively, which do not counterbalance their $\mathcal{O}(N^2)$ pairwise nature.
A separate discussion is needed for TCK, which gets around its cubic complexity by putting an upper threshold on the length of the sampled time segments ($\mathcal{O}(T_{max}^3)$).
As for dimensionality $\bar{d}_r$, we treat it as a constant and do not discuss complexity in connection to this parameter.
**W4. It’s unclear what the kPCA plots add to the paper**\
Despite the underlying equivalence between NVAR and simple RC, better performance can be obtained by finding a better parameter setting, which is not trivial for RC.
In our approach, a better setting is easier to find with fewer and more interpretable parameters.
We mean to use the kPCA example (*CinCECGTorso* dataset) to visually highlight the possible extent of the differences between NVAR and rmESN, i.e. theoretically equivalent methods under different working points.
As a side note, it highlights that a kernel can be used for visualization.
**W5. Are both lags and products important for accuracy?**\
This is an interesting ablation study that has not been investigated in our work and previous works to the best of our knowledge.
We provide results for such additional experiments in Tab. 1 (attached pdf).
We separately consider the case of fitting directly on the input (no concatenation), sampling linear terms only, and sampling non-linear terms only.
As a general trend, we observe that using all terms leads to the best results, followed by *non-linear* and *linear*.
Note also that all configurations perform better than fitting directly on the input series.
In the univariate case, the *linear* variant tends to underperform, as the total number of concatenated dimensions is very small (equal to $k$).
For multivariate datasets, the two variants perform very similarly, as all lags of attributes may already include a considerable number of dimensions.
Both results are also in line with our interpretation using the generalized Takens' theorem.
Inspired by this study, we have also tested an additional variant in which we add all linear terms, and then fill remaining dimensions (up to $\bar{d}_r$) by sampling nonlinear ones.
Interestingly, this leads to even better results and a reduction in variance.
We believe this compensates for the imbalance between all possible linear and non-linear terms, while also reducing potential noise from uncorrelated dimensions.
We sincerely thank the reviewer for raising this question, which led to additional interesting results and the integration of this comprehensive analysis into the main experimental section.
**W6. There is no discussion of limitations**\
We kindly refer the reviewer to lines 330-331, where the main limitation is mentioned in the manuscript.
Please also find a further discussion in our common response \#2.
We hope to have clarified all of the reviewer’s concerns and are happy to provide further details.
**References:**\
[a] Paparrizos et al., State space reconstruction parameters in the analysis of chaotic time series - the role of the time window length, Physica D, 1996. | Summary: The authors introduce a new time series kernel based on Nonlinear Vector AutoRegressive (NVAR) processes, following recent literature on its equivalence to reservoir dynamics. The kernel operates on time delay embeddings and enables the computation of similarity between time series with different lengths. The proposal is paired with SVM models and evaluated in classification tasks for both univariate and multivariate cases. The obtained accuracy and speed results are promising when compared to other kernel methods and a reservoir computing alternative.
Strengths: The manuscript is well written and organized. The motivation is clear and the tackled problem (building kernels for time series data) is relevant to the community. The discussion of the related methods is comprehensive, although I have missed a nod to Gaussian process models (see 'Questions' section).
The main proposal seems to merge the contributions by Bianchi et al. (2020) and Bollt (2021), evaluating the NVAR equivalence to RC in the task of time series classification. The simple-to-understand heuristics that form NVARk are a relevant strength of the work.
The main proposal, NVARk, is explained in detail and the provided source code that implements Algorithm 1 is easy to follow. I also praise the provided supplementary material and the qualitative result for the kPCA visualization.
Weaknesses: The performed experiments are quite extensive, but the results presentation can be improved. For instance, Table 1 could also present the average rank of each method and be paired with boxplots of accuracy results across the 122 univariate datasets. Besides the per-dataset results, Table 2 should also include the metrics considered in Table 1 (rank, average score), for the sake of homogeneity.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I believe a more thorough discussion on how Takens' theorem holds with random lagged dimension dropping (Eq. (5)) is required. Perhaps some of the content of the supplementary material could be brought to the main text.
- In practice, is there any interplay between the choice of $k$ lagged steps and polynomial order $n$? In which scenarios would $n > 2$ be useful?
- How does the proposed method behave for higher-dimensional datasets? It would be interesting to include the dataset dimension in Table 2.
- Lines 315-316: "NVARk can, in fact, get around both issues with its non-recursive structure and the absence of a training phase." -> I suppose the ridge regression step and the paired SVM are not considered here, which should be made clearer.
- Since the tuned NVARk* usually presents even better results, it would be interesting to verify its trade-off by including it in the experiments of Section 4.3 (Execution time and scalability).
- It would be valuable to comment about the use of the introduced NVARk in the context of Gaussian processes models. Maybe a brief discussion on how it would compare with other GP approaches for sequences [1,2,3,4] could be included.
References
[1] Frigola-Alcade R. and Rasmussen, C. Integrated pre-processing for Bayesian nonlinear system identification with Gaussian processes. IEEE CDC, 2013.
[2] Mattos, C. et al. Recurrent Gaussian processes. ICLR, 2016.
[3] Al-Shedivat, M. et al. Learning scalable deep kernels with recurrent structure. JMLR, 2017.
[4] Toth, C. and Oberhauser H. Bayesian learning from sequential data using Gaussian processes with signature covariances. ICML, 2020.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The manuscript states sufficiently its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank reviewer $\color{green}{\textbf{8sG2}}$ for the extensive review and positive feedback.
We particularly appreciate the positive remarks on clarity of the presentation and soundness of the work, which we are happy to see reflected in the scores.
In addition, we value your comment on the relevancy of kernels for time series analysis.
As for the raised concerns and questions, individual points are addressed below.
**W1. Results presentation can be improved**\
We agree, our initial presentation choices were mostly dictated by the space constraint.
In the new version of the manuscript, we now provide the average rank metric, which reflects what is observed in AVERAGE SCORE:
AVERAGE RANK: NVARk = 2.1, SINK = 2.2, rmESN = 2.6, TCK = 2.8
We now also provide consistent metrics for univariate and multivariate data, i.e.
AVERAGE SCORE (multivariate): NVARK* = 0.890; NVARk = 0.877; rmESN = 0.857; TCK = 0.842\
AVERAGE RANK (multivariate): NVARk = 1.62; rmESN = 2.12; TCK = 2.25\
N FIRST RANKED (multivariate): NVARk = 10; rmESN = 2; TCK = 4
As for the boxplot, we argue that it might confuse the reader as we do not claim a statistically significant difference with respect to SOTA.
**Q1. More thorough discussion on how Takens' theorem holds with random lagged dimension dropping is required**\
We agree that the paper would benefit from a more thorough introduction to state space reconstruction for the unfamiliar reader.
Takens' theorem establishes an upper limit on the number of concatenated lags necessary to form a valid equivalent of the generating dynamical system.
In Takens' formulation, there is no importance bias towards any specific lag, and their number is the only relevant factor.
In essence, this rationale underpins our adoption of random lag dropping, as long as 'enough' concatenated dimensions remain.
In our next revised version, we will prioritize integrating the main text with material from Appendix A.
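The rationale above can be made concrete with a toy sketch: a delay embedding that keeps a random subset of the candidate lags, relying only on the *number* of retained dimensions, not on which specific lags survive. This is an illustrative reconstruction of the idea, not the paper's code; the function name `random_lag_embedding` and the parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_lag_embedding(x, max_lag=10, d=4):
    """Delay embedding that keeps a random subset of d lags out of
    1..max_lag -- no lag is privileged, only their count matters."""
    lags = np.sort(rng.choice(np.arange(1, max_lag + 1), size=d, replace=False))
    T = len(x)
    cols = [x[max_lag:]] + [x[max_lag - l : T - l] for l in lags]
    return np.stack(cols, axis=1), lags

x = np.cos(np.linspace(0, 8 * np.pi, 500))
Z, lags = random_lag_embedding(x)
print(Z.shape)  # (490, 5): current value + 4 randomly chosen lags
```

Different random draws yield different lag subsets, but each embedding has the same dimensionality, which is the quantity Takens' theorem bounds.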
**Q2. Is there any interplay between the choice of lagged steps and polynomial order?**\
There is, but we could not find any use for it.
The polynomial order ($n$) regulates the degree of interactions between dimensions.
With $n=2$, NVAR captures mutual correlations between attributes and auto-correlations across lags.
With $n=3$, three-way interactions are considered as well, such as two dimensions being correlated only if a third one exhibits certain patterns.
If these are important for the dataset at hand, then $n=3$ could, in principle, be considered.
Meanwhile, $k$ influences the model's memory span.
In other words, more lags would enable the model to explore these interactions further back in time.
However, in practice, relevant three-way interactions tend to be rare and our random sampling often leads to inadvertent spurious correlations (something that would appear even more often with a high $k$).
As a result, we have never found any advantage in using $n > 2$.
**Q3. How does the proposed method behave for higher-dimensional datasets?**\
We kindly refer the reviewer to our common response \#2.
**Q4. Clarify "absence of training phase"**\
The reviewer is right in saying that ridge regression and SVM are not considered in the mentioned statement.
At ll.315-316, we have substituted this terminology with a reminder of NVARk's linear complexity in the number of time series ($N$).
**Q5. It would be interesting to verify trade-off of optimized NVARk**\
The reported superior performance for NVARk* can be obtained with a median slowdown factor of $\sim25$ (parallelized grid search + 1 iteration with best parameters) with respect to the heuristic-based NVARk.
Most unfavorable cases are high-T datasets.
As for asymptotic behavior, this is trivially similar to NVARk scaled by the grid size.
NVARk* also places favorably with respect to the considered baselines.
We report here the median computation times:
NVARk = $0.5 s$, NVARk* = $13.7 s$, SINK = $24.2 s$, rmESN = $38.3 s$, TCK = $44.9 s$.
Interestingly, we also observed that NVARk* execution time is faster than 1 iteration of rmESN for 107/122 univariate datasets.
**Q6. It would be valuable to comment about the use in the context of GP**\
This is an interesting direction that can be further explored.
We find these two directions appealing.
As a first example, such MTS kernels can, in general, be used in multi-task GP, where they can be paired with a scalar kernel over time (like RBF) for simultaneous modeling of multiple time series:
$$ k(t_1, t_2, i, j) = RBF(t_1, t_2) \cdot K_{MTS}(i,j) $$
e.g. spatio-temporal data or medical data for different patients.
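For series observed on a common time grid, the product kernel above corresponds to a Kronecker product of the two Gram matrices. The sketch below is illustrative only: the small `K_mts` matrix is a hand-written placeholder standing in for whatever MTS kernel (e.g. NVARk) produces the series-level similarities.

```python
import numpy as np

def rbf_gram(t, lengthscale=1.0):
    """Temporal RBF Gram matrix RBF(t1, t2) over a grid of time points."""
    d2 = (t[:, None] - t[None, :]) ** 2
    return np.exp(-d2 / (2 * lengthscale ** 2))

# toy setup: 3 series observed on a common grid of 5 time points
t = np.linspace(0.0, 1.0, 5)
K_time = rbf_gram(t)                 # RBF(t1, t2)
K_mts = np.array([[1.0, 0.8, 0.2],   # placeholder K_MTS(i, j); in the
                  [0.8, 1.0, 0.3],   # setting discussed above this would
                  [0.2, 0.3, 1.0]])  # come from an MTS kernel such as NVARk
# product kernel k(t1, t2, i, j) = RBF(t1, t2) * K_MTS(i, j)
K_joint = np.kron(K_mts, K_time)     # Gram matrix over all (series, time) pairs
print(K_joint.shape)  # (15, 15)
```

Since the Kronecker product of two positive semi-definite matrices is positive semi-definite, `K_joint` is a valid GP covariance over all (series, time) pairs.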
Secondly, and more specifically to NVARk, we recognize a compelling similarity between building our feature map (linear terms and polynomials) and different orders of information in signature transforms.
As in a reference suggested by the reviewer, signatures are being employed as building blocks in deep kernels to build larger GP models, which might pave the way for interesting future directions.
We hope to have clarified all of the reviewer’s concerns and are happy to provide further details.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the answers and the additional results in the provided pdf. I suggest including the new figures and tables in the work's appendix. I believe the comments presented in the common response #2 is a valuable inclusion to the main text.
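The pipeline described in this summary (delay embedding, ridge next-step readout, model parameters as the representation, RBF kernel over representations) can be sketched end-to-end. This toy reconstruction makes simplifying assumptions: only linear lags are embedded (no polynomial products, no random subsampling), and `embed`, `representation`, and `series_kernel` are illustrative names, not the authors' code.

```python
import numpy as np

def embed(x, k=2, s=1):
    """Delay embedding: current value plus k lagged copies spaced s apart."""
    T, t0 = len(x), k * s
    return np.stack([x[t0 - j * s : T - j * s] for j in range(k + 1)], axis=1)

def representation(x, k=2, s=1, lam=1.0):
    """Fit a ridge readout predicting the next step from the embedded
    states; the readout weights serve as the series' representation."""
    Z = embed(x[:-1], k, s)      # states for times k*s .. T-2
    y = x[k * s + 1:]            # aligned next-step targets
    A = Z.T @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ y)

def series_kernel(x1, x2, gamma=1.0):
    """RBF kernel between the two representation vectors."""
    r1, r2 = representation(x1), representation(x2)
    return float(np.exp(-gamma * np.sum((r1 - r2) ** 2)))

t = np.linspace(0, 4 * np.pi, 300)
k_same = series_kernel(np.sin(t), np.sin(t))      # identical series -> 1.0
k_diff = series_kernel(np.sin(t), np.sin(3 * t))  # different dynamics -> < 1.0
```

Identical series map to identical readout weights and hence a kernel value of exactly 1, while series with different dynamics yield different weights and a smaller value; such a kernel can then be plugged into an SVM for classification.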
Due to the clear responses and the additional discussion provided by the authors' rebuttal to all the reviews, I will increase my original score. | Summary: This work proposes a feature extraction based on a non-linear vector autoregressive model (called NVAR kernel in the paper). The NVAR method constructs a deterministic feature matrix with the original input time series, lagged versions of the time series (parametrized by the lag and spacing parameters) and the products between lagged and unlagged dimensions, up to some polynomial order. The resulting feature map is then subsampled to obtain a random feature map. A linear readout layer performs next-step prediction on the embedding states and the resulting parameters form the feature map for a radial basis function kernel. The authors form a connection to state space reconstruction theory to justify their approach of subsampling by connecting it to state space reconstruction theory. The method is compared to other kernel choices on univariate (109) and multivariate (16) datasets for classification. The authors also demonstrate the scalability of their approach by measuring execution time and provide heuristics for hyperparameter choices.
Strengths: Originality
The underlying ideas in this paper have been proposed before in the time series space (lags and polynomial features; randomized feature maps), but the combination proposed here is novel for time series classification. The NVAR formalism in time series classification is novel as well (to the best of my knowledge).
Quality
The algorithm choices presented in this paper are motivated by referring to previous work and connecting to state space reconstruction theory. While I’m not an expert on this, I appreciate the effort the authors made to justify the randomized subsampling step. The paper evaluates the proposed approach against other kernel choice baselines from different classes (reservoir, model-based, Fourier-based) univariate and multivariate datasets. The authors also analyze the execution time and scalability. I appreciate the efforts from the authors to provide heuristics for hyperparameter choices of their approach and showing the gain of additional cross-validation. I think the paper misses the comparison to a critical baseline, which I will discuss in the Weaknesses section.
Clarity
Overall, the paper was well written, but I think the clarity could be improved by introducing some core concepts in the introduction for readers that are not familiar with reservoir computing. For example, the authors state in the introduction that reservoir computing has a “large set of hyperparameters” but do not mention what they mean. I think a bit more detail to introduce the reader would be helpful (although I would consider this largely a stylistic choice, as most of this detail comes later in the paper).
Significance
Time series classification is an important application of time series research. Especially in the healthcare domain, time series classification methods have to run on devices with limited compute (smartphones or other edge devices). As such, a classification method that requires only minimal compute is significant. However, since this method is fairly specific to time series classification, the broader impact is limited.
Weaknesses: I agree with the authors' rationale of selecting one representative baseline method per category (reservoir, model-based, etc.). However, I think the MINIROCKET method (Dempster et al. KDD 21: https://dl.acm.org/doi/10.1145/3447548.3467231) shares many properties of the proposed method (per-time-series computation is the most expensive step, suitability for downstream linear classification, low computational cost). Since this method provides high classification accuracy at low computational cost, I think it should be one of the baselines in the paper (it can also deal with uneven sequence lengths due to its final pooling operation). I would appreciate it if the authors considered comparing against this method. At the very least, I would expect this method to be discussed in the related work (even though it is not from the reservoir computing literature).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed in the paper (or I missed it). A short discussion of limitations would be appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply thank reviewer $\color{blue}{\textbf{hHQP}}$ for the extensive review and positive feedback.
We really appreciate the positive remarks on the general quality and clarity of our paper.
Similarly, we value the highlighting of a form of novelty and significance.
As for the raised concerns and questions, individual points are addressed below.
**W1. A bit more detail to introduce the reader to RC would be helpful**\
We agree that the paper would benefit from a more extensive introduction to RC and its hyperparameters, as their non-interpretability and large number partially motivate our approach.
Our decision not to delve into extensive discussions originated from the need to adhere to the space limitations.
As a middle ground, we now mention a few representative examples around l.120, i.e. spectral radius and input scaling (controlling non-linearity and the relative effect of the current input as opposed to the history), and leaking rate (controlling the speed of reservoir updates).
We also refer the reader to Appendix B for a comprehensive list.
**W2. Broader impact is limited**\
While demonstrating versatility across tasks is valuable, we respectfully disagree that assessing one task suggests limited impact.
Within the established literature, typical comparison of kernels uses classification only [a]-[c], but their application notably extends beyond that.
As in Mikalsen et al. [d], we also present a snapshot of kPCA.
Most importantly, we would like to emphasize that the numerical accuracy of TS classification constitutes only a part of our work.
Rather, we anticipate an impact on RC representation learning, for which we demonstrate superior performance with higher efficiency and interpretability.
Additionally, we demonstrate the successful extension of NVAR's applicability to noisy multivariate time series analysis, which we argue is of interest to both communities.
**W3. MINIROCKET should be one of the baselines**\
We thank the reviewer for bringing MINIROCKET to our attention, we have included it in the background section.
Our choice of representative for SOTA was based on the most extensive evaluation of univariate TS metrics to date [e], from where SINK comes out as the strongest trade-off between accuracy and efficiency.
We agree that MINIROCKET, as a representation learning method, can be used in conjunction with an RBF kernel, which would allow comparison.
In fact, this line of work certainly warrants further investigation.
However, we are unable to provide proper comparison due to the time constraints.
On one hand, one should ensure fairness of the hyperparameter setting, with no bias towards the UCR classification task (note that all heuristics for NVARk, except arguably $\bar{d}_r$, are task agnostic).
On the other, the NVARk processing is of a causal type, while our understanding is that MINIROCKET attends to both the past and future of each timestamp.
As such, we believe that a bidirectional variant of NVAR representation would be better for such comparison.
On a side note, the focus of our work is more on the expressive power of RC methods rather than TS classification.
**W4. Limitations are not discussed**\
We kindly refer the reviewer to ll.330-331, where the main limitation is mentioned in the manuscript.
Please also find a further discussion in our common response \#2.
We hope to have clarified all of the reviewer’s concerns and are happy to provide further details.
**References:**\
[a] Paparrizos et al., Grail: efficient time-series representation learning, VLDB Endowment, 2019.\
[b] Cuturi and Doucet, Autoregressive Kernels for Time Series, arXiv, 2011.\
[c] Baydogan et al., Time series representation and similarity based on local autopatterns, Data Mining and Knowledge Discovery, 2016.\
[d] Mikalsen et al., Time series cluster kernel for learning similarities between multivariate time series with missing data, Pattern Recognition, 2017.\
[e] Paparrizos et al., Debunking four long-standing misconceptions of timeseries distance measures, ACM SIGMOD, 2020.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response in the rebuttal.
> Broader Impact is Limited
Thank you for your response. I agree with the authors that limiting the evaluation to classification does not mean that the method cannot be applied to other tasks; however, it also does not demonstrate the effectiveness of the method for those tasks. Some recent papers on time series representation learning do demonstrate the effectiveness of their method on several tasks and therefore do suggest wider applicability (forecasting, anomaly detection, classification; see [1]-[2]). I agree that the method is not limited to classification, but the applicability to other tasks is also not demonstrated in this paper.
[1] Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yu Tong, & Bixiong Xu (2021). TS2Vec: Towards Universal Representation of Time Series. In AAAI Conference on Artificial Intelligence.
[2] Ling Yang, & linda Qiao (2022). Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion. In International Conference on Machine Learning.
Thank you for addressing my other points. Given the thorough responses to the other reviewers, I'm increasing my score. I would kindly ask the authors to reconsider the last sentence of the abstract, which is arguably correct but suggests to the reader that the proposed method is demonstrated on forecasting (which it is not in this paper; it refers to related work in this space).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their support.
We also welcome their suggestion and have modified the last sentence of the abstract with:
"This further advances the understanding of RC representation learning models and extends the typical use of the NVAR framework to kernel design and representation of real-world time series data." | Rebuttal 1:
Rebuttal: We would like to remark here our thanks to all reviewers for their extensive reviews.
We are happy to hear that reviewers acknowledged novelty ($\color{red}{\textbf{RJLc}}$, $\color{blue}{\textbf{hHQP}}$) and relevancy ($\color{blue}{\textbf{hHQP}}$, $\color{green}{\textbf{8sG2}}$) of our work, as well as praising its interdisciplinarity ($\color{red}{\textbf{RJLc}}$, $\color{magenta}{\textbf{Nbgs}}$, $\color{orange}{\textbf{XEWY}}$).
We are also grateful to all 5 reviewers for sharing their appreciation of the clarity of the writing.
In order to strengthen our paper, incentivized by the reviewers' feedback, we have made several minor modifications together with the following significant additions:
- we now compare the computational time of NVARk* with respect to NVARk and baseline methods, highlighting the performance trade-off ($\color{green}{\textbf{8sG2}}$);
- we now provide an additional study of the sensitivity of our approach to the different hyperparameters, either by providing additional experiments (Fig. 1) or extensive discussion ($\color{magenta}{\textbf{Nbgs}}$);
- we now present a novel ablation study (Tab. 1), examining the importance of the linear and non-linear components of NVARk ($\color{magenta}{\textbf{Nbgs}}$);
- we now extend our evaluation to 8 more univariate and 3 more multivariate datasets (Tab. 2) ($\color{orange}{\textbf{XEWY}}$).
Please find below responses which we hope help to clarify shared concerns across more reviewers.
Finally, please find attached a one-page pdf providing the additional experiments that are referenced across our responses.
We look forward to the reviewers' individual responses to our rebuttal.
**1. Discussion of the interpretability claims**\
Our claims refer to NVAR's ability to circumvent the inherent randomness of RC methods.
As in ll.184-186, NVARk representation vectors encapsulate the mutual and auto-mutual information within the dimensions of the input.
In contrast, representations from RC methods encompass hardly-interpretable reservoir dynamics.
Additionally, the key parameters of NVARk are easy to interpret: $s$ controls the spacing between lags and just has to be set high enough not to sample noise; $k$ controls the number of lags and extends memory to find the relevant patterns.
On the other hand, for instance, input scaling and spectral radius are much less interpretable.
These are two notable RC parameters controlling, among other aspects, the degree of nonlinearity injected into the reservoir.
How much of this is required by the task is not easy to judge and requires experienced insight into nonlinear dynamics or, most of the time, traditional hyperparameter tuning.
In our view, the absence of heuristics to choose these parameters also supports this.
We have now made this connection between our claims and the above considerations more explicit in Sec. 3.2 of our updated paper.
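To make the roles of $s$ and $k$ concrete, here is a minimal sketch (a hypothetical helper, not the paper's implementation) of building lagged feature vectors with $k$ lags spaced $s$ steps apart:

```python
import numpy as np

def lagged_features(x, k, s):
    """Stack k delayed copies of a 1-D series x, spaced s steps apart.

    Row t holds [x[t], x[t-s], ..., x[t-(k-1)*s]]; the first (k-1)*s
    rows are dropped because they lack a full lag history.
    """
    start = (k - 1) * s
    return np.column_stack(
        [x[start - i * s : len(x) - i * s] for i in range(k)]
    )

x = np.arange(10.0)               # toy series 0..9
F = lagged_features(x, k=3, s=2)  # first valid row is t=4: [4., 2., 0.]
```

Increasing $s$ widens the spacing between sampled lags (skipping over noise), while increasing $k$ extends the memory window to $(k-1)\,s$ steps.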
**2. Limitations and behavior with high-dimensional datasets**\
As the input dimensionality approaches $\bar{d}_r$ (our limit), there is little margin for adding more dimensions.
Here, concatenation of lags and non-linear terms would require substantially increasing $\bar{d}_r$.
This places excessive strain on the linear readout, which inherently possesses limited expressive capacity.
This component struggles to effectively model intricate relationships among numerous features and is increasingly overloaded with spurious correlations.
We then expect the effectiveness of NVARk to decrease.
To overcome this, one can consider replacing random sampling with strategies that prioritize the selection of meaningful terms.
In fact, we believe this is a promising future direction for our work.
Alternatively, it would be interesting to explore how different linear layers, e.g. Lasso, would perform in this regime.
We acknowledge the significance of this aspect in our work, which may not have received sufficient attention in the main text.
We now integrate the provided discussion into the Conclusions of the manuscript.
Pdf: /pdf/8c8c3f85900ae7a969cef1f8c5bf5c80dd5ec08b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In their paper, the authors propose a new method for deriving a kernel from time series data. They combine ideas from reservoir computing with nonlinear vector autoregressive models, based on recent theoretical work exploring their similarities.
The primary idea of the paper is to construct a kernel by modeling a time series' dynamics using a nonlinear vector autoregressive model first. The vectorized fit parameters are then used to constitute the kernel representation of the respective time series.
The authors investigate the efficacy of this kernel in SVM classification across a range of unimodal and multimodal time series. They demonstrate state-of-the-art performance when compared to three other approaches.
Strengths: Overall, the paper is well-written and easy to follow. The main ideas and contributions are clearly delineated. Further, code is provided with the submission. The method manages to significantly reduce the number of hyperparameters needed to model dynamics compared, for instance, to RC approaches. It also achieves state-of-the-art performance on a range of unimodal and multimodal time series classification tasks, compared to other kernel based methods. It presents an interesting contribution that as far as I can judge is novel, and leverages a recent development in a related field to come up with a new application.
Weaknesses: Comparison methods: If I understood correctly, no hyperparameters were tuned for the comparison methods in the experiment, while you optimized the delay-embedding parameters for your method. While I appreciate that a thorough hyperparameter tuning for some of the comparison methods can be computationally prohibitive and the smaller amount of hyperparameters is one of the strengths of your approach, it would make the comparisons fairer if some tuning took place, or if you explained in more detail why you didn't perform any tuning.
Line 217: "We propose the region 75 ≲ ¯dr ≲ 100 as a good, though non-optimal, setting" The cutoff around 75-100 is justified experimentally in the appendix. The phrase "non-optimal setting" is repeated in the appendix. What do you mean by "non-optimal" in this context? Why would you choose a non-optimal setting? While I understand that this cutoff is motivated by decreased performance due to redundancy, it seems highly dependent on the dataset, and I'm not sure I understand why this reduces performance for the MTS so drastically. Since this is one of the central components/hyperparameters of your algorithm, some more clarification would be useful here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Relationship to Takens' Theorem: The choice of methodology is explained with reference to Takens' delay embedding. However, the connection to Takens' theorem in some of the practical cases isn't entirely clear to me. In the original next-generation RC approach, chaotic attractors are considered, where Takens' Theorem applies, while in your setting, many of the time series used in the classification tasks are not derived from something that can be as straightforwardly modeled via an underlying attractor (e.g., Japanese Vowels). It might be interesting to investigate further for which types of time series your approach is particularly suited and outperforms other methods. It's also worth exploring whether this relates to the suitability of a dynamical systems description for the underlying dynamics.
l.56: The resulting model is more accurate, interpretable, and non-recurrent.
What does interpretable refer to in this context? The single k-PCA example does not make this point strong enough in my mind, and it is not apparent to me how the derived parameters are more interpretable than e.g. the parameters of an RC model.
Small typos:
226: implies
236: only the ridge
328: Computationally, it is exceptionally
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors explicitly mention that high-dimensional datasets are the main weakness of their approach. I don't foresee any direct negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank reviewer $\color{red}{\textbf{RJLc}}$ for the thorough review and positive feedback.
We really appreciate the kind words on the clarity of our paper, as well as the novelty and innovative utilization of recent developments from a related field.
Lastly, we are also grateful for spotting a few typos.
As for the raised concerns and questions, individual points are addressed below.
**W1. No hyperparameters were tuned for the comparison methods**\
This is only partially true.
We indeed found fine-tuning baseline hyperparameters to be mostly computationally unfeasible; moreover, considering NVARk's heuristic foundation, it might also not yield a fair comparison.
However, to the best of our knowledge, there are no rule-of-thumb guidelines facilitating the choice.
Our choice of baseline hyperparameters is therefore based on what is provided in the authors' original papers, with the belief that they suit their approaches well and are transferable, especially given that our task and benchmarking archives are similar (or a subset).
When applicable, we consistently apply the same choices for both NVARk and rmESN, i.e. OCReP optimization for $\lambda_{ridge}$ and heuristic for $\gamma_{rbf}$.
Also, note that our choice for the hidden dimensionality ($\bar{d}_r = 75$) is optimized for rmESN in the authors' paper (see Supplementary Material for [a]), and is not optimal for NVARk.
Finally, we would like to emphasize that performance for optimized NVARk (NVARk*) is reported separately.
**W2. Why would you choose a non-optimal setting for $\bar{d}_r$?**\
In line with the scope of our work, the main motivation for using $\bar{d}_r = 75$ (l.252) is to prioritize a fair comparison with rmESN.
Our term "non-optimal" refers to the lack of fine-tuning of $\bar{d}_r$.
The fact that this value lies within the range $70-100$, where we empirically observe an absence of feature redundancy, made us accept this value.
We have made this clearer at ll.213-217, with a recall at l.252.
As for the observed performance drop after this range, please find a more extensive discussion in our common response \#2.
**Q1. It’s worth exploring whether performance relates to the suitability of a dynamical systems description**\
We agree; finding good performance across diverse time series that do not obviously arise from dynamical systems is one of the most interesting outcomes of our work.
Although our paper would certainly benefit from inspecting such aspects, we argue that they would be better suited to dedicated future work, e.g., compelling follow-ups on time series representation learning and state space reconstruction as a possible underpinning.
We believe these results, as well as future directions, are of great interest to both dynamical systems and time series communities.
We thank the reviewer for highlighting this future direction and we are happy to include it in the Conclusions section of our paper.
**Q2. What does interpretable refer to in this context?**\
We kindly refer the reviewer to our common response \#1.
We hope to have clarified all of the reviewer’s concerns and are happy to provide further details.
**References:**\
[a] Bianchi et al., Reservoir Computing Approaches for Representation and Classification of Multivariate Time Series, IEEE Transactions on Neural Networks and Learning Systems, 2020.
---
Rebuttal 2:
Comment: I want to thank the authors for their detailed responses and updated results.
Overall, the responses and new results cleared up some questions, and there seems to be an overall consensus that this paper is a good contribution, so I vote for acceptance.
Some small further comment:
I agree with one of the referees that section 4.4. (kPCA visualization) relies on only a single example, and no more thorough statistical study of kPCA visualisations across datasets is provided.
The authors claim that "Despite projections being similar for most datasets, we observed a few critical differences" without really making these differences explicit, and it is not clear to me whether this benefit over rmESN really extends to other datasets, or how much this example relied on "cherry-picking".
That's why I would suggest that the authors either make the selection process for the example more explicit/consider other examples and list the "critical differences" explicitly, or move this section to the appendix as an example visualisation, and instead add some of the new results and explanations to the main text.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their support.
We also welcome their suggestion and have moved the kPCA visualization example to the appendix.
This additional space in the main text is used to discuss the new results and enhance the discussion along the lines of our common responses #1 and #2. | null | null | null | null | null | null |
Cross-modal Active Complementary Learning with Self-refining Correspondence | Accept (poster) | Summary: This paper tackles a new challenge in image-text matching, namely, noisy correspondence, which refers to the mismatched image-text pairs that can mislead the model to learn incorrect cross-modal associations during training, resulting in a suboptimal cross-modal model that computes inaccurate similarities for retrieval. To address this challenge, the authors propose a general cross-modal robust complementary learning framework CRCL that can enhance the robustness and performance of existing image-text matching methods. The authors provide theoretical and empirical evidence that their method is effective and achieves state-of-the-art performance under the same settings compared with the existing robust baselines, such as NCR, DECL, and BiCro.
Strengths: 1. This paper compares with a comprehensive and fair set of baselines, including the state-of-the-art robust method BiCro (CVPR’23). Moreover, the paper also includes CCR&CCS (WACV’23) and RCL (TPAMI’23) in the appendix, which is commendable.
2. This paper provides appropriate theoretical analysis in the appendix, which is satisfactory. The authors cleverly adapt the robust theory of the noisy label problem and views NC as an instance-level category noise label problem. The experimental results and the theoretical analysis are consistent and supportive of each other.
3. This paper shows promising performance on both synthetic NC and real NC datasets. Especially under high noise levels, CRCL demonstrates a strong robustness against NC.
4. From Figure 1, it is evident that SCC is effective in alleviating the accumulation of noise errors and improving the accuracy of rectified correspondence labels.
Weaknesses: 1. There is a typo in lines 48-49: CSRL should be CRCL.
2. This paper verifies the generality of CRCL on three standard methods, i.e., VSE$\infty$, SGR, and SAF. How about the training cost of these methods, e.g., compared to the existing robust frameworks BiCro, NCR, and DECL?
3. The paper seems to lack a discussion of related works in the main text, although I found it in the appendix. I understand that it may be due to the space limitation. However, I think CRCL is a significant improvement over NCR (NeurIPS’2021) for the problem of noisy correspondence. Therefore, for the readers who are not familiar with NCR or noisy correspondence, it would be more helpful to discuss related work in the main text.
4. Is it possible to add Plausible-Match R-Precision (PMRP)[1] as an evaluation metric, since it may be more suitable for many-to-many matching evaluation? Although I believe that Recall can reflect most of the retrieval performance, PMRP may be more appealing.
5. I think SCC is a feasible and effective technique. What is the intuition behind it? Can the authors provide more explanation?
[1] Chun S, Oh S J, De Rezende R S, et al. Probabilistic embeddings for cross-modal retrieval[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 8415-8424.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors discussed the limitations and Broader Impacts of CRCL in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and insightful suggestions. We will address your concerns and questions one by one as follows.
**Q1. There is a typo in lines 48-49: CSRL should be CRCL.**
Thank you for your detailed review. We will correct all typos in the next version.
**Q2. This paper verifies the generality of CRCL on three standard methods, i.e., VSE, SGR, and SAF. How about the training cost of these methods, e.g., compared to the existing robust frameworks BiCro, NCR, and DECL?**
Thanks for your valuable comment. We performed an efficiency analysis following DECL. For a fair comparison, all experiments are performed on a single GeForce RTX3090 24GB GPU and the same training environment. We report the results on Flickr30K with 20% NC as follows:
|Method|Video memory (GB)|Time/epoch|rSum
|-|-|-|-
|NCR|11.9 GB|45.2 min|486.8
|DECL-SAF|10.9 GB|13.5 min|479.0
|DECL-SGR|14.6 GB|18.3 min|482.8
|BiCro-SAF|8.1 GB|15.6 min|478.5
|BiCro-SGR|11.8 GB|20.0 min|478.9
|CRCL-VSE$\infty$|3.3 GB|1.3 min|481.1
|CRCL-SAF|7.3 GB|12.9 min|492.5
|CRCL-SGR|10.4 GB|15.2 min|496.6
From the experimental results, one can see that our method is more efficient. Especially, compared with the co-teaching architecture NCR and BiCro, our CRCL is more efficient and has a better performance.
**Q3. About the related works.**
Thank you for your valuable suggestion. Due to space constraints, we have to move the related works to the supplementary material. If possible, in the next version, we will try to include the related works in the main text.
**Q4. Is it possible to add Plausible-Match R-Precision (PMRP)[1] as an evaluation metric, since it may be more suitable for many-to-many matching evaluation? Although I believe that Recall can reflect most of the retrieval performance, PMRP may be more appealing.**
Thank you for your valuable suggestions. We report the results with the PMRP metric on the MS-COCO dataset with 40% noise, as follows (**The first four columns are the results on the MS-COCO 5-fold 1K test, and the last four columns are the results on MS-COCO 5K test.**):
||I2T|I2T|T2I|T2I|I2T|I2T|T2I|T2I
|-|-|-|-|-|-|-|-|-
|Methods|R@1|PMRP|R@1|PMRP|R@1|PMRP|R@1|PMRP
|PCME (0% noise) [1]|68.8|45.1|54.6|46.0|44.2|34.1|31.9|34.4
|DECL-SAF|75.8|46.4|60.3|47.8|54.1|35.4|38.8|36.0
|CRCL-SAF|76.4|46.4|62.1|47.9|54.8|35.5|40.2|36.1
|DECL-SGR|75.9|46.0|60.2|46.7|54.3|30.8|38.5|35.0
|CRCL-SGR|76.8|46.9|61.9|48.3|55.7|35.5|40.3|36.4
|DECL-SGRAF|77.1|47.0|61.5|48.2|55.8|36.0|40.1|36.4
|CRCL-SGRAF|78.2|47.3|63.3|48.7|57.6|36.0|41.7|36.8
From these results, our CRCL not only achieves better Recall@1 accuracies but also has higher PMRP scores for bidirectional retrieval.
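For reference, PMRP averages an R-Precision score over queries, with relevance defined by "plausible matches" (candidates sharing the query's class composition) rather than exact ground-truth pairs. A minimal sketch of the per-query R-Precision core, with toy ids standing in for retrieved items (not the benchmark's full protocol):

```python
def r_precision(ranked_ids, relevant_ids):
    """R-Precision: the fraction of the top-R retrieved items that are
    relevant, where R is the number of relevant items for the query."""
    R = len(relevant_ids)
    hits = sum(1 for item in ranked_ids[:R] if item in relevant_ids)
    return hits / R

# toy query: 3 plausible matches among 6 ranked candidates
ranked = ["b", "a", "e", "c", "d", "f"]
plausible = {"a", "c", "e"}
score = r_precision(ranked, plausible)  # 2 of the top-3 are plausible
```

Because R adapts to the size of each query's plausible-match set, the metric rewards retrieving all semantically valid candidates, which is why it suits many-to-many matching.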
**Q5. I think SCC is a feasible and effective technique. What is the intuition behind it? Can the authors provide more explanation?**
Thank you for your insightful suggestion. In fact, our design intuition for SCC comes from addressing the flaws in previous works. Due to the memorization effect of DNNs, cross-modal learning faces a self-sample-selection error accumulation problem, which degrades performance. To alleviate this error accumulation, previous robust frameworks (e.g., NCR and BiCro) utilize a co-teaching scheme to obtain accurate predictions. However, the co-teaching strategy increases the number of models, which greatly increases the training overhead. To address this problem, we present an efficient Self-refining Correspondence Correction (SCC) technique that relieves the error accumulation problem without any additional model. SCC leverages momentum correction to aggregate the historical predictions of a single cross-modal model, providing stable and accurate correspondence estimations while alleviating over-memorization of NC. Moreover, SCC chains multiple independent self-refining processes over the entire training life, which further alleviates error accumulation against noisy correspondences.
> [1] Probabilistic embeddings for cross-modal retrieval, CVPR'21.
---
Rebuttal Comment 1.1:
Comment: I have carefully read the authors’ rebuttal and the other reviewers’ comments. I think the authors have satisfactorily addressed all my concerns and improved the quality of their paper. Therefore, I maintain my positive score for this paper.
---
Reply to Comment 1.1.1:
Comment: We are very grateful for your feedback and positive recognition of our work. Your constructive comments have greatly enhanced the quality of our paper. We will revise our paper according to the reviews in the next version. Thank you again for your review and valuable time! | Summary: This paper tackles a latent challenge in image-text matching, which is the presence of noisy correspondences between images and texts. The paper introduces a general framework that combines a robust loss function and a correspondence correction technique to enhance the existing models’ ability to cope with noisy correspondences. The paper shows that the proposed CRCL framework outperforms the state-of-the-art methods. The paper also provides both experimental and theoretical evidence to support the effectiveness of the proposed algorithms.
Strengths: Significance: Noisy correspondences are a common issue in image-text data, which can arise from alignment errors or weak cross-modal information. Studying how to deal with noisy correspondences can enable more applications of image-text learning, where the alignments are challenging and require domain expertise. This work makes significant contributions both theoretically and empirically, and offers a new perspective for future research on noisy correspondences and related fields.
Clarity: This paper is well-written and clear, and the motivation and method are easy to follow. The technical proofs seem to be sound. However, there are some typos, undefined terms, and missing references in the paper. Please see Weaknesses and Questions below.
Originality: The techniques used in this paper seem to be novel in the image-text matching field, and there are some technical innovations.
Quality: I think the quality of this paper is high, and I did not find any major flaws. The authors conduct comprehensive experiments to demonstrate the effectiveness of the proposed method.
Weaknesses: - The related work section is incomplete. The authors should include and compare with some recent works on image-text matching, such as [1,2,3].
- Some typos: CRCL in line 48.
- Eq.1 is confusing. I suggest rewriting it as $1 - y_{ik}$, with probability $\bar{\eta}_{ik}, \forall k\neq j$.
- Eq.11 is not clear. The authors should explain the meaning and derivation of each term.
[1] Huang Y, Wang Y, Zeng Y, et al. MACK: multimodal aligned conceptual knowledge for unpaired image-text matching[J]. Advances in Neural Information Processing Systems, 2022, 35: 7892-7904.\
[2] Goel S, Bansal H, Bhatia S, et al. Cyclip: Cyclic contrastive language-image pretraining[J]. Advances in Neural Information Processing Systems, 2022, 35: 6704-6719.\
[3] Pan Z, Wu F, Zhang B. Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 19275-19284.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The authors should explain why their results are different from the previous works NCR and DECL. Are there any special settings or hyperparameters that affect the performance? The authors should also report the training efficiency of their method, such as the training time and the computational resources, and compare with the existing methods, such as DECL.
- The authors should justify their choice of using MS-COCO and Flickr30K datasets, which have one-to-many correspondences (1 to 5), while their problem formulation assumes one-to-one correspondences. How does this affect the validity and applicability of their method?
- The authors claim that ACL is a general framework that can be applied to any existing model. However, they do not provide any empirical evidence to support this claim under other tasks. For example, can ACL handle noisy labels? The authors should provide more experimental support for the generality of ACL.
- For Eq.11, it is not clear how the initial labels are obtained. Is it based on the first epoch of the first SR? The authors should clarify this point and explain how they initialize the labels.
- What does active loss mean? Is it the same as normalized loss [1]? The authors should define this term clearly.
[1] Ma X, Huang H, Wang Y, et al. Normalized loss functions for deep learning with noisy labels[C]//International conference on machine learning. PMLR, 2020: 6543-6553.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the potential limitations and implications of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and insightful suggestions. Attached is our point-by-point response.
**Q1. Missing related works [1,2,3].**
We appreciate your valuable feedback and the related works you mentioned. We will include a discussion and a comparison of these works in the next version. However, due to the time limitation, we are not able to conduct the experiments under our settings. The comparison experiments with [3] can be found in the response (Q.6) to Reviewer QLFm.
**Q2. Eq.11 is not clear.**
Eq.11 is a mathematical description of how the binary correspondence labels are updated across multiple SR pieces. Specifically, each SR piece uses the updated labels from the previous piece as its initial labels for training. Within a piece, the labels are not updated before the $e_f$-th epoch and are updated by Momentum Correction thereafter. The only exception is the first SR piece, which uses the matching probabilities predicted at the $e_f$-th epoch as the labels that seed the momentum update.
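As a minimal numeric sketch of this schedule (with a hypothetical momentum coefficient `m` and toy per-epoch predictions, not the actual training code):

```python
def momentum_correct(y_hat, p, m=0.9):
    """Exponential-moving-average update of a soft correspondence
    label y_hat toward the model's current matching probability p."""
    return m * y_hat + (1.0 - m) * p

def run_sr_piece(y_init, preds_per_epoch, e_f, m=0.9):
    """One self-refining (SR) piece: labels stay frozen for the first
    e_f epochs, then are refined by momentum correction each epoch."""
    y = y_init
    for epoch, p in enumerate(preds_per_epoch):
        if epoch >= e_f:
            y = momentum_correct(y, p, m)
    return y

# Two chained SR pieces: the second starts from the first's output,
# so a pair the model keeps scoring low (p = 0.2) is gradually
# down-weighted from its initial label of 1.0.
preds = [1.0, 0.2, 0.2, 0.2]
y1 = run_sr_piece(1.0, preds, e_f=1)  # label after the first piece
y2 = run_sr_piece(y1, preds, e_f=1)   # label after the second piece
```

Chaining independent pieces restarts the refinement from the previous piece's labels rather than from raw single-epoch predictions, which is what limits error accumulation in the description above.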
**Q3. (a) The explanation of why the results are different from the previous works NCR and DECL. (b) The training efficiency comparison between CRCL and DECL.**
We appreciate your valuable feedback and constructive suggestions. **(a)** This difference in results is mainly caused by the difference in noise settings, i.e., the generation method of noisy correspondences and the noise rates. To fairly evaluate our method, we use the same settings and reproduce some results, which may differ from the results in the original papers of baselines. Specifically, we use the generation method used in NCR [12], i.e., **randomly shuffling the captions of training images for a specific percentage**. For a unified comparison, we set four noise rates, i.e., 20%, 40%, 60%, and 80%. Some of the results in the paper come from original papers, e.g., the results of BiCro (0%, 20%, 40%, and 60%) and NCR (0%, 20%) on Flickr30K and MS-COCO, and the results on CC152K. The others are reproduced by us under the same setting. **(b)** Due to the space limitation, the reply to this question can be found in the response (Q.2) to Reviewer DZ4B.
**Q4. The authors should justify their choice of using MS-COCO and Flickr30K datasets, which have one-to-many correspondences (1 to 5), while their problem formulation assumes one-to-one correspondences. How does this affect the validity and applicability of their method?**
Thanks a lot for your valuable comment. Although there are one-to-many correspondences in MS-COCO and Flickr30K datasets, they do not affect the validity and applicability of our method. In practice, we can only train the models by randomly sampling small mini-batches due to the limitation of computing resources. Moreover, the input image-text pairs in each mini-batch basically have one-to-one correspondences due to the sparsity of random sampling, which actually does not violate the assumptions of our problem formulation. Besides, from the experimental results, our CRCL is applicable and effective on MS-COCO and Flickr30K datasets with state-of-the-art performance.
**Q5. (a) Further explanation about ACL is a general framework. (b) Can ACL handle noisy labels? (c) The authors should provide more experimental support for the generality of ACL.**
We appreciate your thoughtful comment. **(a)** In this paper, we mainly focus on image-text matching, and apply our ACL to three models to demonstrate its generality, i.e., SAF, SGR, and VSE$\infty$. In the rebuttal phase, we have further applied our ACL to a new task (i.e., noisy labels) to demonstrate its generality as shown in the following table. **(b)** Yes, ACL can handle noisy labels due to the similarity between noisy correspondence and noisy labels. Specifically, we can equate positive pairs to positive classes, such that binary correspondence labels and class labels are actually equivalent. **(c)** To verify this, we use the released code of [4] to conduct a preliminary experiment and compare the performance of ACL with some baselines. Specifically, to adapt to noisy labels, we directly replace the bidirectional matching probabilities used in ACL with unidirectional class probabilities output by the $softmax$ layer. Since ACL needs to have a corrected label $\hat{y}$ to recast $q$, we simply replace $\hat{y}$ with the positive class probabilities (possibly false positive class). The comparison baselines are CE, GCE, and NCE + MAE in [4]. The results ($mean±std$) are reported on CIFAR-10 over 3 random runs, as shown in the following table:
|Methods|0.2|0.4|0.6|0.8
|-|-|-|-|-
|CE|75.90±0.28|60.28±0.27|40.90±0.35|19.65±0.46
|GCE|87.27±0.21|83.33±0.39|72.00±0.37|29.08±0.80
|NCE+MAE|87.12±0.21|84.19±0.43|77.61±0.05|49.62±0.72
|Our ACL|88.64±0.11|85.31±0.14|79.06±0.55|54.26±0.89
From the results, our ACL shows superior performance and robustness in dealing with noisy labels, which demonstrates that ACL has the generality to handle noisy labels.
**Q6. For Eq.11, it is not clear how the initial labels are obtained.**
Thank you for your careful review. To clarify, the first SR is trained with the initial labels set as 1 and uses the predicted matching probabilities at the $e_f$-th epoch as the labels for subsequent momentum update, as described in Eq.11.
**Q7. What does active loss mean? Is it the same as normalized loss [4]? The authors should define this term clearly.**
Thank you for your valuable review. Yes, the active loss is inspired by the normalized loss [4]. We migrated this class-level definition (i.e., “active”) to instance-level image-text matching, i.e., a loss is defined as “active” if it only optimizes at positive pairs.
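Concretely, the normalization in [4] divides the loss at the labelled class by the loss summed over all classes; a minimal class-level sketch with cross-entropy (illustrative only, not our instance-level formulation):

```python
import math

def nce(probs, y):
    """Normalized cross-entropy in the sense of [4]: the CE at the
    labelled class y divided by the CE summed over all classes,
    which bounds the loss and makes it robust to label noise."""
    num = -math.log(probs[y])
    den = -sum(math.log(p) for p in probs)
    return num / den

# uniform predictions give exactly 1/K; confident correct
# predictions give a small loss, confident wrong ones a large loss
loss_uniform = nce([1 / 3, 1 / 3, 1 / 3], 0)
loss_correct = nce([0.7, 0.2, 0.1], 0)
loss_wrong = nce([0.1, 0.2, 0.7], 0)
```

In our instance-level analogue, the "classes" become candidate pairs, and the loss is "active" because only the positive pair contributes to the numerator.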
>[1] MACK: multimodal aligned conceptual knowledge for unpaired image-text matching, NeurIPS'22.\
[2] Cyclip: Cyclic contrastive language-image pretraining. NeurIPS'22.\
[3] Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network, CVPR'23.\
[4] Normalized loss functions for deep learning with noisy labels, ICML'20. | Summary: This paper presents a novel framework (CRCL) for cross-modal correspondence learning that can handle noisy image-text pairs. The key idea of CRCL is to use a complementary active loss (ACL) that balances between discriminative learning and robust learning. ACL leverages the rectified correspondence labels to adjust the loss function and assign active loss weights, which means that the potentially noisy pairs are more focused on robust learning. To obtain accurate rectified correspondence labels, the paper proposes a self-refining correspondence correction technique (SCC) that estimates the correlation between modalities and avoids error accumulation. The proposed framework is simple yet effective in mitigating the negative impact of noisy training pairs and achieving robust image-text matching. The paper also conducts extensive experiments on three benchmark datasets to demonstrate the effectiveness and rationality of the proposed method.
Strengths: 1. The CRCL framework seems to be a general and flexible approach that can be easily integrated with existing methods to enhance the robustness of image-text matching.
2. This paper is well-written and easy to follow. The authors provide clear explanations and motivations for their method.
3. The authors simplify the training process by discarding the co-teaching scheme used in NCR and BiCro, and instead focus on the loss function and the correction technique to make the method concise and clear.
4. The experimental results are impressive and convincing. It is also noteworthy that CRCL can improve the performance of the pre-trained model, e.g., CLIP, as shown in Table 3.
Weaknesses: 1. In Eq.3, there is a typo: $i_j$ should be $I_j$.
2. In Eq.11, some symbols are not clearly defined, e.g., $\hat{p}^{(j,t-1)}(*)$. Please explain their meanings and notations.
3. The NC problem can be viewed as a special case of the PMP problem. RCL[20] uses complementary contrastive learning (CCL) to deal with PMPs. Similarly, CRCL can also be regarded as a further extension and study of CCL. However, the paper does not provide a direct comparison between CRCL and RCL. It would be interesting to see how they differ in performance and insights.
4. In the supplementary material, there is a capitalization error: “For brevity” should be “for brevity”.
5. Why not include MSCN[A] as a baseline? MSCN seems to be from the same period as BiCro (CVPR’23) and also addresses the NC problem.
6. Some related works are missing: [B-C]. These papers also propose methods for image-text matching and could be relevant for comparison or discussion.
[A] Noisy Correspondence Learning With Meta Similarity Correction, CVPR, 2023.
[B] Learning Semantic Relationship Among Instances for Image-Text Matching, CVPR, 2023.
[C] Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network, CVPR, 2023.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This paper is well-written and easy to follow. It has all the essential elements of a good paper and I do not have any major objections to accept it. I appreciate the authors’ approach, which is more concise and theoretically sound than the existing robust image-text matching methods. However, there are some minor issues that need to be addressed, such as some typos and some recent related works that need to be further discussed. Noisy correspondence is an important research direction in the field of multimodality, as it is similar to noisy label learning. I hope the authors can not only improve the performance of their method, but also explore the potential impact of CRCL on more correspondence learning tasks.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should enrich further, as described in the Questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and insightful suggestions. We will address your concerns and questions one by one as follows.
**Q1. In Eq.3, there is a typo: $i_j$ should be $I_j$.**
Thank you for your careful review. We will correct it in the next version.
**Q2. In Eq.11, some symbols are not clearly defined, e.g., $\hat{p}^{(j,t-1)}(*)$. Please explain their meanings and notations.**
Thank you for your suggestion. $\hat{p}^{(j,t-1)}(I_i,T_i)$ denotes the average matching probability $(p_{ii}^\circ + p_{ii}^\diamond)/2$ of pair $(I_i,T_i)$ at the $(t-1)$-th epoch during the $j$-th SR training piece. We will supplement it in the next version.
**Q3. Comparison results with RCL.**
Thank you for your valuable suggestion. We followed the same experimental setup in RCL [20] to evaluate our method and compare them. Due to the time limitation, we only report partial results of the SGRAF variants on the Flickr30K dataset, as follows:
|Noise|Methods|I2T R@1|I2T R@5|I2T R@10|T2I R@1|T2I R@5|T2I R@10|rSum|$\uparrow$(%)
|-|-|-|-|-|-|-|-|-|-
|40%|RCL-SAF|68.8|89.8|95.0|51.0|76.7|84.8|466.1|
|40%|CRCL-SAF|71.9|91.9|95.3|54.1|79.6|86.8|479.6|+13.5%
|40%|RCL-SGR|71.3|91.1|95.3|51.4|78.0|85.2|472.3|
|40%|CRCL-SGR|73.7|93.0|96.2|53.9|80.5|88.1|485.4|+13.1%
|80%|RCL-SAF|45.0|72.8|80.8|30.7|56.5|67.3|353.1|
|80%|CRCL-SAF|49.7|76.6|86.0|34.8|62.5|72.8|382.4|+29.3%
|80%|RCL-SGR|47.1|70.5|79.4|30.3|56.1|66.3|349.7|
|80%|CRCL-SGR|49.3|77.4|85.9|34.0|62.8|72.3|381.7|+32.0%
From the above results, CRCL outperforms RCL on all metrics, especially on 80% noise. This demonstrates the effectiveness of our method in handling noisy correspondences (NC). RCL uses complementary contrastive learning (CCL) to provide robustness against NC, but as we discussed in the *Introduction* section, it does not explicitly mitigate the effect of easily separable noise to further improve performance. In contrast, our CRCL leverages SCC to provide more reasonable learning objectives for higher performance.
**Q4. In the supplementary material, there is a capitalization error: “For brevity” should be “for brevity”.**
Thank you for your careful review. We will correct it in the next version.
**Q5. Why not include MSCN[A] as a baseline? MSCN seems to be from the same period as BiCro (CVPR’23) and also addresses the NC problem.**
We appreciate your valuable comments and helpful suggestions. We agree that MSCN is a strong baseline and should be compared with our CRCL. However, due to the different noise settings, we cannot directly use the reported results of MSCN and need to rerun it under our settings. Unfortunately, training MSCN is very time- and resource-consuming. For instance, training a model of MSCN on MSCOCO requires 4$\times$V100 32GB cards for 20 days. Therefore, we were not able to finish all the experiments before the submission deadline. After the deadline, we did our best to complete the experiments and report them in the following table. **In the table, the first four columns are the results on Flickr30K, and the last four columns are the results on MS-COCO 1K test.**
|Noise|Methods|Flickr30K I2T R@1|$\uparrow$(%)|Flickr30K T2I R@1|$\uparrow$(%)|MS-COCO I2T R@1|$\uparrow$(%)|MS-COCO T2I R@1|$\uparrow$(%)
|-|-|-|-|-|-|-|-|-|-
|20%|MSCN|77.4||59.6||78.1||64.3|
|20%|CRCL|77.9|+0.5%|60.9|+1.3%|79.6|+1.5%|64.7|+0.4%
|40%|MSCN|74.4||57.2||74.8||60.3|
|40%|CRCL|77.8|+4.4%|60.0|+2.8%|78.2|+3.4%|63.3|+3.0%
|60%|MSCN|70.4||53.4||74.4||59.2|
|60%|CRCL|73.1|+2.7%|54.8|+1.4%|76.3|+1.9%|60.8|+1.6%
|80%|MSCN|1.0||0.4||66.8||52.7|
|80%|CRCL|62.3|+61.3%|46.0|+45.6%|72.7|+5.9%|57.5|+4.8%
As can be seen from the table, our CRCL consistently outperforms MSCN on both datasets and under all noise levels, which also demonstrates the effectiveness and robustness of our CRCL against NC. Please note that the above table only includes partial comparison results in terms of Recall@1 due to the space limitation, and more experimental results of MSCN will be provided in the next version.
**Q6. Some related works are missing: [B-C]. These papers also propose methods for image-text matching and could be relevant for comparison or discussion.**
We appreciate your feedback and the related works you mentioned. We will include a discussion of these works in the next version. Briefly, HREM [B] explicitly captures both fragment-level relations within a modality and instance-level relations across modalities, leading to better retrieval performance. Pan et al. [C] propose a Cross-modal Hard Aligning Network (CHAN) that comprehensively exploits the most relevant region-word pairs and eliminates all other alignments, achieving better retrieval accuracy and efficiency. Unlike them, our CRCL is a framework specifically designed to address NC. Even so, CRCL achieves competitive results on well-labeled datasets, as shown in the following table:
|Method|I2T R@1|I2T R@5|I2T R@10|T2I R@1|T2I R@5|T2I R@10|rSum
|-|-|-|-|-|-|-|-
|CHAN|79.7|96.7|98.7|63.8|90.4|95.8|525.0
|HREM|81.2|96.5|98.9|63.7|90.7|96.0|527.1
|CRCL|80.7|96.5|98.6|65.1|91.2|96.1|528.2
The table reports the results of I2T and T2I retrieval tasks on the MS-COCO 5 fold-1K test set.
>[A] Noisy Correspondence Learning With Meta Similarity Correction, CVPR'23.\
[B] Learning Semantic Relationship Among Instances for Image-Text Matching, CVPR'23.\
[C] Fine-Grained Image-Text Matching by Cross-Modal Hard Aligning Network, CVPR'23. | Summary: This manuscript focuses on image-text matching under the noisy correspondence setting. To achieve a noise robust multi-modal representation, the authors propose two components, including a Active Complementary Loss (ACL) and a Self-Refining Correspondence Correction (SRCC). In ACL, a complementary contrastive loss styled fomula is derived, in which a coefficient $q$ is set in seeking for a tighter bound between the divergence between risk of training with noisy correspondence and ideal setting. As for SRCC, the labeled matching score is relaxed by momentum updating to alleviate the noise. Finally, the authors conduct extensive experiments on image-text retrieval to show their performance.
Strengths: 1. The motivation of the manuscript is novel, which is from noise tolerance loss function designing and noisy label correction simultaneously.
2. The proposed Active Complementary Loss has a rigorous theoretical proof evidencing the tighter bound on the divergence between the noisy risk and the ideal risk.
3. The experiments are adequate and extensive. What's more, as Tab. 1 shows, the proposed method is very stable under different noise ratios, and the improvement is non-trivial under larger noise ratios.
Weaknesses: 1. The figures lack the necessary explanation. As a result, the purely textual description of the proposed method can be hard to grasp.
2. Sec. 2.2 is complex and hard to follow. Beyond the theoretical proof, how ACL works intuitively should be further discussed.
3. The organization of the two proposed components is poor. Actually, I didn't see a clear connection between Sec. 2.2 and Sec. 2.3. Although both address noisy correspondence, the authors fail to discuss the relationship between ACL and SRCC. Otherwise, I could treat this manuscript as a naive A+B technique.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and constructive suggestions. We will address your concerns and questions one by one as follows.
**Q1. The figures lack the necessary explanation; the purely textual description of the proposed method can be hard to grasp.**
Thank you for your constructive comments and suggestions. To make our method clearer and more understandable, we have added a ***pdf file*** to ***the global Author Rebuttal*** that contains an illustration of our framework (Figure 1). Here is a brief explanation of the illustration:
*This is an illustration of our CRCL framework. CRCL consists of two components: an Active Complementary Loss (ACL) and a Self-refining Correspondence Correction (SCC). ACL balances between active and complementary learning based on the matching probabilities computed by the cross-modal model. It uses the corrected labels from SCC to conduct reasonable direct cross-modal learning by focusing on the active loss of convincing data and also recasting the regulatory factor to provide robust tolerance for possible noisy data under risk minimization theory. SCC exploits multiple self-refining processes with momentum correction to enlarge the receptive field for correcting correspondences, thus obtaining accurate labels and alleviating error accumulation.*
**Q2. Sec. 2.2 is complex and hard to follow. Beyond the theoretical proof, how ACL works intuitively should be further discussed.**
Thank you for your valuable suggestion. The main idea behind our ACL is to balance between the underfitting property of complementary loss and the noise overfitting disadvantage of active loss, thus improving the robustness against noisy correspondence (NC). Previous works explore different robust losses to alleviate the adverse impact of NC, but they often face overfitting or underfitting problems, especially under high noise rates. For example, active losses can fit the data quickly but they will overfit the noise easily. On the other hand, some robust losses can resist the noise well but they will underfit the clean data. Therefore, we propose our ACL to achieve a balance between efficient learning and robustness. ACL introduces exponential normalization into complementary loss to control the risk difference between noisy and ideal clean data, thus ensuring robustness for noisy data. However, since complementary learning has a long-standing underfitting problem, we still need to pay more attention to positive/matched pairs for direct efficient learning. Therefore, we introduce a weighted active learning loss into ACL that focuses more on convincing pairs. We will give more intuitive explanations in the next version.
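To make the intuition above concrete, here is a toy loss that mimics this balance. It is NOT the paper's exact ACL; the GCE-style bounded term and the weighting by the corrected label $\hat{y}$ are our own illustrative choices:

```python
import math

def acl_sketch(p_pos, y_hat, q=0.7):
    """Toy illustration of balancing active and complementary-style terms.
    p_pos: predicted matching probability of the positive pair.
    y_hat: corrected soft correspondence label (high = convincing pair).
    q:     hypothetical regulatory factor trading fitting speed for robustness."""
    # Active term: fast-fitting cross entropy, weighted toward convincing pairs.
    active = -y_hat * math.log(max(p_pos, 1e-12))
    # Bounded, GCE-style term: noise-tolerant (never exceeds 1/q), weighted
    # toward pairs whose corrected label suggests they may be noisy.
    robust = (1.0 - p_pos ** q) / q
    return active + (1.0 - y_hat) * robust
```

In this sketch, a convincing pair ($\hat{y}\approx 1$) is driven mainly by the fast-fitting active term, while a suspicious pair ($\hat{y}\approx 0$) only incurs the bounded robust term, so a single mislabeled pair cannot dominate the gradient.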
**Q3. The organization of the two proposed components is poor. Actually, I didn't see a clear connection between Sec. 2.2 and Sec. 2.3. Although both address noisy correspondence, the authors fail to discuss the relationship between ACL and SCC. Otherwise, I could treat this manuscript as a naive A+B technique.**
Thank you for your detailed review. We would like to clarify that our proposed method is not a simple A+B technique. ACL and SCC are complementary and indispensable components of our framework, as can also be seen from the ablation experiments in Table 4, i.e., SCC without our ACL yields inferior performance. More specifically, ACL needs the corrected labels to provide an ideal weighted score for the active loss and an ideal regulatory factor for the robust complementary loss, which means that ACL relies on a highly reliable correspondence correction technique. SCC plays exactly this role and is essential for ACL. To obtain accurate weighted scores, SCC leverages Momentum Correction to aggregate historical predictions, providing stable and accurate correspondence estimations. Furthermore, SCC combines multiple independent self-refining processes during training to eliminate error accumulation caused by noisy correspondences. This elimination of error accumulation allows ACL to provide better learning objectives for the model, thereby improving performance. Thus, SCC and ACL are mutually beneficial within our framework.
Thank you for your time and feedback. We hope this helps you understand our work better. If you have any more questions or comments, please let us know. | Rebuttal 1:
Rebuttal: This is a global response. We add the illustration of our method in the attached pdf file.
Pdf: /pdf/14578ecc24bc5e6edc52cae27261189cd2f403ea.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper focuses on the problem of noise correspondence in image-text matching tasks. To address this issue, this paper proposes a generalized cross-modal robust complementary learning framework, which not only reduces the risk of erroneous supervision from the active complementary loss but also obtains stable and accurate correspondence correction through a self-refining correspondence correction. Extensive experiments show that the developed model could significantly improve the effectiveness and robustness compared with the state-of-the-art approaches. The topic of this paper is of great practical interest and the motivation is clear.
Strengths: a. This paper proposes a novel generalized cross-modal robust complementary learning framework to address the noisy correspondence problem in image-text matching tasks, which enhances the effectiveness of existing methods through robust loss and correction techniques.
b. This paper utilizes the active complementary loss that employs complementary pairs to conduct indirect cross-modal learning with exponential normalization to boost robustness against noisy correspondence. In addition, self-refining correspondence correction is proposed to obtain stable and accurate correspondence correction.
c. Extensive experiments are provided to prove the effectiveness and robustness of the proposed model.
Weaknesses: a. Essentially, the authors propose a new loss function to solve the noise correspondence problem in the image-text matching task. The proposed active complementary loss and self-refining correspondence correction both serve the final loss. So I think the novelty may be limited.
b. I noticed in the supplementary material that when the framework proposed in this paper is applied to the VSE model, there is a leap in performance improvement (e.g., with 60% noise on the Flickr30K dataset, bidirectional retrieval improves by 50.3% and 35.4% on R@1, respectively), while the improvement is limited when applied to the SGRAF model. Therefore, I think the authors should explain why the performance gap is so large. More extensive experiments should also be organized to demonstrate the robustness of the proposed framework.
c. To my knowledge, in previous studies the noise injected into the Flickr30K and MS-COCO datasets differed. So how was the injected noise generated by the authors?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the item "weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: In addition to the effectiveness of the experimental results, I suggest that the authors can apply the proposed framework in more methods to enhance its robustness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We have carefully looked into all comments. Attached is our point-by-point response.
**Q1. Regarding the concern of novelty .**
Thanks for your comment, but we respectfully disagree. Although some methods have been proposed to address noisy correspondence (NC), they still face outstanding issues that limit their performance: overfitting, underfitting, and inefficiency. To be specific, **(1)** previous works explore different robust losses to alleviate the adverse impact of NC; however, almost all of them face overfitting or underfitting problems, especially under high noise rates. Active losses fit quickly but easily overfit noise, while some robust losses are highly noise-tolerant but easily underfit clean data. How to balance efficient learning and robustness remains an open issue that has been scarcely touched so far. To address it, we present a dynamic balance strategy that makes active and robust losses promote each other for robust learning. **(2)** Due to the memorization effect of DNNs, cross-modal learning faces the self-sample-selection error accumulation problem, which degrades performance. To alleviate error accumulation, previous robust frameworks (e.g., NCR and BiCro) adopt the co-teaching manner to obtain accurate predictions. However, co-teaching increases the number of models and thus largely increases the training overhead. To address this problem, we present an efficient Self-refining Correspondence Correction (SCC) that tackles error accumulation without any additional model. The efficiency comparison is shown in the response (Q.2) to Reviewer DZ4B.
Furthermore, our method is a generalized framework that can endow existing image-text matching methods with robustness and stability under NC. We have extended several classical image-text matching methods, e.g., VSE$\infty$, SGR, SAF, and CLIP (see Q.2), all of which have shown superior performance and robustness. In summary, we think that this work is with sufficient novelty and technical contribution.
**Q2. (a) I think the author should explain why the performance gap is so large. (CRCL-VSE$\infty$/SGRAF vs. VSE$\infty$/SGRAF) (b) More extensive experiments should also be organized to demonstrate the robustness of the proposed framework.**
Thank you for your careful comment and valuable suggestion. **(a)** We think that the discrepancy in performance improvement is due to the differences in the network models. Different network models have different behaviors and sensitivities to noisy correspondences, which result in different absolute performance gains. For example, as shown in Table 2, the performance of VSE$\infty$ and the SGRAF models varies under different noise rates. Even for SGR and SAF, which have the same framework with a minor difference in a subnetwork, their performance differs significantly in Table 2. Therefore, the difference in network models affects how the methods perform under NC. This can also lead to a gap in absolute performance between different methods (e.g., VSE$\infty$, SAF, and SGR). Moreover, from the experimental results, we can see that our CRCL consistently improves the methods with different network models, which demonstrates the effectiveness and generalization of our method.
**(b)** As suggested, we applied CRCL on the classic pre-trained model CLIP(ViT/32) to further verify the robustness of our method, in addition to the CRCL-VSE$\infty$, CRCL-SAF, and CRCL-SGR models reported in the manuscript. We conducted a more extensive evaluation on the Flickr30K dataset and obtained the following partial results:
|Configurations|Methods|I2T R@1|I2T R@5|I2T R@10|T2I R@1|T2I R@5|T2I R@10|rSum|$\uparrow$(%)
|-|-|-|-|-|-|-|-|-|-
|Zero-shot|CLIP|78.6|95.4|97.7|59.8|85.1|90.9|507.5
|Finetune on 40% Noise|CLIP$_{best}$|81.9|95.3|98.1|66.1|88.7|93.6|523.7
|Finetune on 40% Noise|CLIP$_{last}$|64.7|86.8|91.1|45.7|68.5|76.2|433.0|-90.7%
|Finetune on 40% Noise|CRCL-CLIP$_{best}$|83.2|96.2|98.7|67.3|89.5|94.2|529.1
|Finetune on 40% Noise|CRCL-CLIP$_{last}$|83.2|96.2|98.6|67.4|89.5|94.2|529.1|-0.0%
|Finetune on 80% Noise|CLIP$_{best}$|71.0|90.0|95.0|54.5|79.7|87.1|477.3
|Finetune on 80% Noise|CLIP$_{last}$|28.8|51.4|63.0|17.5|33.9|42.2|236.8|-240.5%
|Finetune on 80% Noise|CRCL-CLIP$_{best}$|83.0|96.1|98.4|66.8|89.1|94.0|527.4
|Finetune on 80% Noise|CRCL-CLIP$_{last}$|83.1|96.0|98.68|66.7|89.1|94.0|527.5|+0.1%
Note that $best$ denotes testing the checkpoint with the best validation performance, and $last$ denotes testing the last checkpoint. Comparing the $best$ and $last$ rows reflects the degree of overfitting to NC via the performance degradation, and thus the robustness of our method. From the results, our CRCL not only improves the performance of CLIP on data with NC, but also alleviates the performance degradation caused by noise overfitting (compare the $best$ and $last$ rows), which demonstrates the strong robustness of CRCL. For further verification, please refer to Fig. 2 in the supplementary material, which records the variation in validation performance during training: except for the full version of CRCL, all other variants (e.g., the CCL loss, $\mathcal{L}_d$, and the TR loss) suffer from noise overfitting, which also shows that CRCL is more robust.
**Q3. Regarding the method to generate noisy correspondence.**
Thank you for your detailed comments. To generate noisy correspondence, we followed the standard protocol used in NCR [12], i.e., **randomly shuffling the captions of a specific percentage of the training images**. In this way, we ensure a fair comparison with previous works by using the same noise. For a more comprehensive evaluation, we reproduced the results of some baselines under four noise rates.
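A minimal sketch of this shuffling protocol (the function name and seeding are our own assumptions, not NCR's exact implementation): sample a fraction of image indices equal to the noise rate and randomly permute their captions among themselves.

```python
import random

def inject_noisy_correspondence(captions, noise_rate, seed=0):
    """Randomly shuffle the captions of a `noise_rate` fraction of training
    images, leaving the remaining image-caption pairs intact."""
    rng = random.Random(seed)
    n = len(captions)
    noisy_idx = rng.sample(range(n), int(n * noise_rate))  # images to corrupt
    permuted = noisy_idx[:]
    rng.shuffle(permuted)
    noisy_captions = list(captions)
    for src, dst in zip(noisy_idx, permuted):
        noisy_captions[dst] = captions[src]  # reassign caption to another image
    return noisy_captions

caps = [f"caption-{i}" for i in range(1000)]
noisy = inject_noisy_correspondence(caps, 0.4)
```

Because the captions are permuted rather than replaced, the marginal caption distribution is unchanged; only the image-caption correspondence is corrupted, which matches the NC setting studied here.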
A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning | Accept (spotlight) | Summary: For class-imbalanced learning, many studies modify the loss function to emphasize learning on minority classes via re-weighting or logit adjustment. These studies show high empirical classification performance, but the existing generalization analyses of such studies are not unified. In this paper, the authors propose a data-dependent contraction to capture how the modified losses affect each class. With the data-dependent contraction technique, a fine-grained generalization bound is established, and the authors analyze existing class-imbalanced learning studies in a unified manner by applying the bound to the VS loss. The authors propose two principled algorithms, TLA and ADRW, based on the theoretical insights, and verify the effectiveness of both through extensive experiments on benchmark datasets.
Strengths: Well written paper, it is easy to follow the paper.
Great theoretical analysis and insights are given in this paper.
Novel algorithms are proposed based on the analysis and the insights.
Extensive experiments were conducted.
Weaknesses: Code is not submitted. Thus I can not reproduce the experiment results in the paper.
Recently proposed class-imbalanced learning algorithms based on multiple experts, contrastive learning, and knowledge distillation are neither reviewed nor compared in Section 4.3. It seems that the proposed algorithms achieve lower classification performance than recently proposed algorithms such as PaCo, RIDE, and ACE.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I have no questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The proposed algorithms (TLA, ADRW) improve classification performance, but the improvement does not seem significant under some settings (specifically, between VS and VS+TLA+ADRW on CIFAR-10-LT (step) w/ SAM and CIFAR-100-LT w/ SAM).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1**: Code is not submitted. Thus I can not reproduce the experiment results in the paper.
**A1**: Thanks for your constructive concern! According to the policy
> If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment (make sure the code itself and all related files and file names are also completely anonymized).
we have sent the anonymized link to the AC. In this repository, we provide:
- the **checkpoints and the log files** that exactly correspond to the results reported in the response **A2**, which is stored in the folder `existing_results`
- the **code** that can reproduce the results, as well as the **bash script** stored in the file `bash_scripts/run.sh`. Since the proposed methods, i.e., ADRW and TLA, work only when $t > T_0$, we resume from the checkpoint in the folder `existing_results` with $t = T_0$, so that the randomness is eliminated. For example:
```bash
python cifar_train_sam.py --dataset cifar100 --imb_type exp --imb_factor 0.01 --loss_type VS --train_rule ADRW_T --gpu 1 --seed 0 --tro 0.25 --gamma 0.05 --tau 0.75 --rho 0.5 --wd 0.0005 --epochs 400 --t_reweight 320 --resume 'existing_results/better_wd+400epoch/320_ckpt.pth.tar' --exp_str Better_WD_400
```
---
> **Q2**: Recently proposed class-imbalanced learning algorithms based on multiple experts, contrastive learning, and knowledge distillation are neither reviewed nor compared in Section 4.3. It seems that the proposed algorithms achieve lower classification performance than recently proposed algorithms such as PaCo, RIDE, and ACE.
**A2**: Thanks very much for this constructive suggestion! Since this paper focuses on loss-oriented methods for imbalanced learning, we mainly reviewed prior works in this direction. Following your suggestion, we will add a more systematic review covering these orthogonal methods in the new version of the appendix.
We did not compare the mentioned methods since the experimental protocols are quite different. Take CIFAR-100-LT ($\rho=100$) as an example (in the table below, `Simple Aug` denotes random crop and random flip, and `wd` represents weight decay):
| | Ours | RIDE | ACE | PaCo |
|:---|:---|:---|:---|:---|
| Expert | Single | Multiple | Multiple | Single |
| Training Epoch | 200 | 200 | 400 | 400 |
| Data Augmentation | `Simple Aug` | `Simple Aug` + `Random Rotation` | `Simple Aug` + `Mixup` | `Simple Aug` + `RandAug` |
| Better wd | $\times$ | $\checkmark$ | $\checkmark$ | $\checkmark$ |
In fact, if we align these protocols, the proposed method can outperform these methods. Here are the results on CIFAR-100-LT ($\rho=100$), and we will provide more results in a future version.
| Method | Balanced Accuracy |
|:-|:-:|
| RIDE | 49.10 |
| ACE | 49.60 |
| PaCo | 52.00 |
| Ours | 46.40 |
| Ours + Better wd | 49.54 |
| Ours + Better wd + 400 epoch | 50.16 |
| Ours + Better wd + 400 epoch + RandAugment | 52.97 |
---
> **Q3**: The proposed algorithms (TLA, ADRW) improved classification performance, but the improvement seems not that significant under some settings (specifically between VS and VS+TLA+ADRW under CIFAR-10-LT, step w/sam and CIFAR-100-LT w/sam).
**A3**: Thanks for your careful reading! Imbalanced learning is a challenging task, and even a small performance improvement is non-trivial. In Sec. 4, we reported results averaged over 5 random seeds, and the proposed method outperforms the competitors consistently.
Besides, we notice that the protocol might limit the improvement. After we adopt the protocol `Better wd + 400 epoch + RandAugment`, as mentioned in the response **A2**, for both VS and Ours, the improvement will be more significant:
| Method | Balanced Accuracy |
|:-|:-:|
| VS | 51.83 |
| Ours | 52.97 |
---
Rebuttal Comment 1.1:
Title: I'll raise my score.
Comment: The authors clearly addressed my concern. Thus, I will raise my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for the timely and kind comment
Comment: Thanks very much for your timely and kind comment! Following your suggestion, we will add more empirical results in the future version. | Summary: In this paper, the authors study the problem of class imbalance. Specifically, they identify a gap between the generalization theory of re-weighting & logit adjustment techniques and the practice: the existing generalization bounds fail to account for the imbalance among classes. The authors propose to close this gap by introducing an imbalance-specific bound, and then analyze existing methods to gain more insights about the effects of certain design choices. Finally, they introduce a variation on existing loss functions that performs better than existing approaches.
After the rebuttal:
With the rebuttal, the authors addressed my minor comments about clarifications on certain details. This is a solid NeurIPS paper. I recommend accepting the paper.
Strengths: 1. The paper provides theoretical as well as practical results & insights.
2. The four insights are very valuable.
3. Strong improvements over the baselines.
4. Generally easy to follow text.
Weaknesses: I am generally happy with the paper. My only concern is that it contains many typos and grammatical errors. Moreover, I would find it easier if Fig 1, 2 and 3 analyzed the performances with respect to the imbalance ratio as commonly performed in the literature.
Minor comments:
- "[12] proposes an effective scheme" => Reference numbers should not be a part of a sentence. A correct way to write this is "Cao et al. [12] propose an effective scheme".
- The paper uses many many acronyms without introducing them.
- Lines 18-45: Already in these lines you should start with formulations (of ERM, margin theory etc.) since this paper is making theoretical contributions.
- Line 47: "loss function utilized in existing proof" => "loss function utilized in existing proofs"?
- Line 111: "Let ||.|| denotes" => "Let ||.|| denote".
- Line 114: "constants \mu" => "constant \mu".
- Line 207: "Do re-weigting and logit-adjustment fully compatible?" => "Are re-weigting and logit-adjustment fully compatible?"
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your nice suggestions, and we would like to make the following response.
> **Q1**: Typos and grammatical errors.
**A1**: Thanks for your careful reading! We will correct these typos and grammatical errors in the future version.
---
> **Q2**: The paper uses many many acronyms without introducing them.
**A2**: Thanks for this careful comment! We will check all the acronyms and provide their introduction in the future version:
- The Class-Balanced (CB) loss [11]: This method proposes a re-weighting term that depends on the effective number of each class, where the effective number is based on the idea of diminishing marginal returns for additional samples.
- The Label-Distribution-Aware Margin (LDAM) loss [12]: this loss function uses additive terms, which depend on the label distribution, to encourage the model to have the optimal trade-off between per-class margins.
- The Deferred Re-Weighting (DRW) scheme [12]: This scheme deploys the re-weighting loss with a small learning rate after a vanilla ERM training period. In other words, the re-weighting phase is deferred.
- The Logit-Adjustment (LA) loss [13]: this loss function uses additive terms to adjust the logits, such that the induced objective is Fisher consistent with the balanced accuracy.
- The Class-Dependent Temperatures (CDT) loss [14]: this method introduces multiplicative terms, also named temperature, to compensate for the effect of feature deviation between training and test data.
- The Vector-Scaling (VS) loss [16]: this loss function combines the advantages of both the additive terms of the LA loss and the multiplicative terms of the CDT loss.
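To make the CB re-weighting term above concrete, here is a minimal NumPy sketch of effective-number weighting in the spirit of [11]; the function name, the toy class counts, and the normalization to sum to $C$ are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Illustrative CB re-weighting term: w_y is proportional to 1 / E_y,
    where E_y = (1 - beta**N_y) / (1 - beta) is the effective number of
    samples of class y; weights are normalized to sum to C."""
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(counts)

# Long-tailed toy counts: extra samples in the head class bring diminishing
# marginal returns, so the tail class receives a much larger weight.
w = class_balanced_weights([5000, 500, 50], beta=0.999)
```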
---
> **Q3**: Lines 18-45: Already in these lines you should start with formulations (of ERM, margin theory etc.) since this paper is making theoretical contributions.
**A3**: Thanks for this constructive suggestion! In the future version, we will update the introduction to provide more formulations. For example, the naive Empirical Risk Minimization (ERM) can be denoted as
$$
\min_f \hat{\mathcal{R}}(f) := \frac{1}{N} \sum_{(\boldsymbol{x}, y) \in \mathcal{S}} L(f(\boldsymbol{x}), y),
$$
where $L: \mathbb{R}^C \times \mathcal{Y} \to \mathbb{R}\_{+}$ measures the performance of the model $f: \mathcal{X} \to \mathbb{R}^C$ on the data point $(\boldsymbol{x}, y)$ belonging to the training set $\mathcal{S}$, and $C$ denotes the number of classes. As another example, we will provide the union bound based on the margin theory [12]:
$$
\mathcal{R}\_\text{bal}(f) \precsim \frac{1}{C} \sum\_{y=1}^{C} \frac{1}{\text{margin}\_y^{\downarrow} \sqrt{N_y}},
$$
where $\text{margin}\_y^{\downarrow}$ represents the minimal margin of the class $y$, and $N_y$ denotes the number of samples in class $y$. In contrast, our bound can be formulated by:
$$
\mathcal{R}\_\text{bal}(f) \precsim \frac{1}{C \pi_C} \sum\_{y=1}^{C} \mu_y \sqrt{\pi_y},
$$
where $\pi_y := N_y / N$, and $\mu_y$ is the local Lipschitz constant of $f$ for the class $y$. Next, we will point out that our result is sharper and can provide a series of insights, which can help highlight our theoretical contributions.
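To illustrate why the balanced risk $\mathcal{R}_\text{bal}$ differs from the naive empirical risk $\hat{\mathcal{R}}$ under imbalance, here is a toy sketch with a 0-1 loss; the 9:1 split and the always-predict-the-majority classifier are hypothetical stand-ins, not from the paper.

```python
import numpy as np

# Toy imbalanced sample: 9 points of class 0 and 1 point of class 1.
y_true = np.array([0] * 9 + [1])
# A degenerate classifier that always predicts the majority class.
y_pred = np.zeros(10, dtype=int)

losses = (y_pred != y_true).astype(float)            # 0-1 loss per sample
plain_risk = losses.mean()                           # (1/N) * sum of losses
balanced_risk = np.mean(                             # average of the
    [losses[y_true == c].mean() for c in (0, 1)])    # per-class averages

# plain_risk is 0.1 (looks good), balanced_risk is 0.5 (chance level).
```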
---
> **Q4**: I would find it easier if Fig 1, 2 and 3 analyzed the performances with respect to the imbalance ratio as commonly performed in the literature.
**A4**: Thank you very much for this constructive comment! We will add the results with respect to the imbalance ratio $\rho=10$ in the future version of the appendix. According to the policy, we first attach these figures in the PDF file of the “global” response.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Thank you for the rebuttal. I will keep my original recommendation as Strong Accept. Well done.
---
Reply to Comment 1.1.1:
Title: Thanks for this kind comment
Comment: Thanks very much for this kind comment! Following your suggestion, we will incorporate our response into the future version. | Summary: This paper proposes a sharpened generalization bound for imbalanced learning by directly bounding the balanced empirical risk. The authors achieve this by generalizing Lipschitz continuity to local Lipschitz continuity with a group of constants, which, in the VS loss, is parameterized by a re-weighting term, a generalization term, and a logit adjustment term. By adjusting the above three terms with their proposed algorithms TLA and ADRW, the authors manage to balance class balance against model generalization.
Strengths: This paper unified re-weighting and logit adjustment, two common methods for solving the imbalance problem, which I think is a good contribution to this field. Moreover, the authors' entry point is novel, providing sophisticated proof for their theory, the paper is well-written, and the experiments also well demonstrate their claims.
Weaknesses: The authors only perform their experiment on the ResNet family for all the baselines and their proposed method. I hope the authors can provide the experiment result of their proposed method on different network structures in future work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no questions from the authors.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed their limitations in the form of future work in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your nice suggestions, and we would like to make the following response.
> **Q**: The authors only perform their experiment on the ResNet family for all the baselines and their proposed method. I hope the authors can provide the experiment result of their proposed method on different network structures in future work.
**A**: Thank you for this nice suggestion! As mentioned in Appendix E, we follow the protocols of the prior arts [12, 28, 37]. To further investigate the effect of the backbone, we conduct a preliminary experiment with DenseNet121 on CIFAR-100 LT ($\rho=100$). Due to the time limit, we only tune $\alpha_y, \beta_y, \Delta_y$ and fix the other parameters such as the learning rate, weight decay, and the number of training epochs. The empirical results averaged over three seeds are listed as follows, and they are consistent with those of ResNet. We will provide more results in future work.
| Dataset | CIFAR-100 |
|:------- |:---------:|
| Imbalance Type | LT ($\rho=100$) |
| CE | 42.8 $\pm$ 0.3 |
| CE + DRW | 44.1 $\pm$ 0.6 |
| CE + ADRW | 45.2 $\pm$ 0.4 |
| VS | 47.5 $\pm$ 0.5 |
| VS + DRW | 47.5 $\pm$ 0.3 |
| VS + TLA + ADRW | 48.0 $\pm$ 0.4 | | Summary: This paper provides a unified generalization analysis of the loss-modification approaches for imbalanced learning. It analyzes the gap between balanced accuracy and empirical loss (the loss may involve re-weighting and logit-adjustment approaches). It further provides empirical analysis that matches the theoretical insights. And a new method induced by the theoretical results with better performance is also proposed.
Strengths: 1. This paper directly establishes a theoretical connection between balanced accuracy and empirical loss, which is not attained in previous work.
2. In this paper, the existing methods are systematically reviewed, and according to the new theoretical results, the existing methods are well classified and discussed.
3. The charts and tables are beautifully formatted and laid out.
Weaknesses: 1. What we would like to bound is the balanced accuracy $\mathcal{R}_{bal}$, where the loss is measured by $M$. However, in the imbalanced learning setting, $L$ is not just a differential surrogate version of $M$, but instead is an adjusted version that puts more emphasis on the small classes. In such circumstances, what is the meaning of bounding $\mathcal{R}_{bal}^L$? Measuring under a class-balanced distribution while small classes are still emphasized in the loss? (By the way, please check the mapping of L in line 84.)
2. How is ‘data-dependent’ exhibited in the theoretical result? Is it just about the sample size in each class, i.e., label distribution? Is there anything more than label distribution?
3. In Eq.(14), what is By(f)? I checked the supplementary and I see it is a constant. What is the meaning of it? Is it the minimal margin on class y? I think these points should be made clear in the main body of the paper. And at the moment, I am also doubtful about the argument ‘balanced / imbalanced By(f)’ in the paragraph of line 192-200, which may involve the mechanisms in By(f).
4. Maybe I misunderstood it, I suppose in Figure 3, T0 marks the start of DRW, and the total epoch T is fixed. Then why the line ‘CE+None’ is not constant in Figure 3(b) and 3(c)?
5. In section 3.3, there is no mention of the setting of $\Delta_y$. So does the setting of $\beta_y$ in the $t<T_0$ stage. And in experiments, it would be better if provided more details on how these hyperparameters are tuned.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: SEE Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NAN.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments, and the response is as follows.
> **Q1**: The meaning of bounding $\mathcal{R}_{bal}^L$.
**A1**: In the `non-asymptotic` level (finite samples), bounding $\mathcal{R}\_{bal}^L$ can put more emphasis on minority classes, which is consistent with your understanding. But if we take an `asymptotic` view (infinite samples), it is beyond an adjusted version. Ideally, a small $\mathcal{R}\_{bal}^L$ should induce a small $\mathcal{R}\_{bal}$. To this end, a basic requirement is Fisher consistency. That is, optimizing $\mathcal{R}\_{bal}^L$ can recover the optimal solution to $\mathcal{R}\_{bal}$:
$$
\mathcal{R}\_{bal}^L(f) \to \min\_f \mathcal{R}\_{bal}^L(f) \Rightarrow \mathcal{R}\_{bal}(f) \to \min\_f \mathcal{R}\_{bal}(f).
$$
As mentioned in Sec.2.1, the generalized loss we consider has a consistent special case [13]:
$$
L\_F(\boldsymbol{s}, y) := \frac{\delta\_y}{\pi\_y} \log [1 + \sum\_{y' \neq y} \frac{\delta\_{y'}}{\delta\_{y}} e^{\boldsymbol{s}\_{y'} - \boldsymbol{s}\_{y}}],
$$
where $\delta_y > 0$ is an arbitrary constant. Hence, if we select such a $L$, bounding $\mathcal{R}\_{bal}^L$ also helps bound $\mathcal{R}_{bal}$. Of course, not all losses are consistent, and we need a systematic analysis in future work.
---
> **Q2**: Check the mapping of $L$.
**A2**: Thanks for your careful reading! The mapping of $L$ should be $\mathbb{R}^C \times \mathcal{Y} \to \mathbb{R}_+$. We will correct it in the future version.
---
> **Q3**: How is data-dependent exhibited?
**A3**: Compared with the union bound [12], the data dependence is exhibited in:
- The basic lemma, *i.e.*, Lemma 2 introduces $1 / \pi_C$ to the generalization bound. Since $\pi_C$ denotes the ratio of the most minor class, it can be regarded as a measure of imbalance degree. In other words, this lemma shows how the model performance depends on the imbalance degree of the data.
- Based on the proposed techniques, Theorem 1 and Proposition 3 reveal how existing loss-oriented methods improve generalization performance by exploiting data priors, which is also beyond the label distribution itself.
---
> **Q4**: The meaning of $B_y(f)$ and 'balanced/imbalanced $B_y(f)$'.
**A4**: We regret that our writing left the impression that $B_y(f)$ is not well formulated. As described in line 462, $B_y(f)$ is the minimal prediction on the ground-truth class $y$, i.e., $B_y(f) := \min\_{\mathbf{x} \in S_y} s_y$. As you understand, $B_y(f)$ is closely related to the minimal margin, which is defined as $\text{margin}\_y^\downarrow := \min_{\mathbf{x} \in S_y} (s_y - \max_{j \neq y} s_j)$. In other words, **a large $B_y(f)$ can implicitly increase the minimal margin of the class $y$**. Meanwhile, we have
$$
B_y(f) - \text{margin}\_y^\downarrow \le \max\_{\mathbf{x} \in S_y, j \neq y} s_j.
$$
Hence, **as we improve the model performance on class $y$, the RHS of the above inequality, i.e., the gap between $B_y(f)$ and $\text{margin}_y^\downarrow$ will decrease, and both the minimal margin and $B_y(f)$ will increase.**
Keeping this mechanism in mind, the argument 'balanced/imbalanced $B_y(f)$' becomes easier to understand. In fact, the terms `balanced/imbalanced` are indeed a little confusing, so we instead use `take into account` below. As shown in Fig.3a, a weighted loss can boost the model performance on minority classes but hinder further improvement on majority classes. As a result, majority/minority classes have relatively small/large $B_y(f)$, respectively (*i.e.*, the loss fails to take into account the $B_y(f)$ of majority classes). By contrast, DRW helps both majority and minority classes have a small $B_y(f)$ (*i.e.*, it takes into account the $B_y(f)$ of both majority and minority classes). This explains why DRW can bring better generalization performance, which is our main point.
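The inequality relating $B_y(f)$ and $\text{margin}_y^\downarrow$ above can be checked numerically on arbitrary scores; in the sketch below, random scores serve as a hypothetical stand-in for the outputs of a trained model on $S_y$ (all variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(1)
y = 0                                  # ground-truth class under study
scores = rng.normal(size=(20, 5))      # stand-in scores s(x) for x in S_y

B_y = scores[:, y].min()                               # minimal prediction on class y
others = np.delete(scores, y, axis=1)                  # scores of classes j != y
margin_y = (scores[:, y] - others.max(axis=1)).min()   # minimal margin of class y
rhs = others.max()                                     # max over x in S_y, j != y

# The stated inequality B_y(f) - margin_y <= rhs holds for any scores.
```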
---
> **Q5**: The line 'CE+None' is not constant in Figure 3.
**A5**: In DRW, both reweighting and small learning rates will be used when $t > T_0$ [12]. To provide a more informative comparison, we also decrease the learning rates of `CE+None` at the corresponding epoch $T_0$ as `CE+DRW`, making the line in Fig.3 not constant. We will clarify this issue in the future version.
---
> **Q6**: The setting of $\Delta_y$ and $\beta_y$.
**A6**: We regret that our writing left the impression that $\beta_y$ and $\Delta_y$ are not well defined in Sec.3.3. In fact, our analysis and the induced algorithm are both based on the generalized loss defined in Sec.2.1. Hence, all the $\beta_y$ and $\Delta_y$ mentioned in Sec.2.1 are reasonable options. In general, the loss family can be written as:
$$
L_\text{VS}(\boldsymbol{s}, y) = - \alpha_y \log \left( \frac{e^{\beta_y s_y + \Delta_y}}{\sum_{y'} e^{\beta_{y'} s_{y'} + \Delta_{y'}} } \right).
$$
And
- $\beta_y \in \{1, (N_y / N_1)^{\gamma}\}$, where $N_y$ denotes the number of samples in the class $y$, and $\gamma > 0$.
- $\Delta_y \in \{0, \tau \log \pi_y, - \frac{C}{N_y^{1 / 4}} \}$, where $\tau, C >0$. If $\Delta_y = - \frac{C}{N_y^{1 / 4}}$ for the ground-truth label $y$, $\Delta_{y'} = 0$ for $y' \neq y$ [12].
We will clarify this issue in the future version.
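For concreteness, here is a minimal NumPy sketch of the generalized VS-style loss family above for a single sample; the toy prior vector and the hyperparameter values ($\nu$, $\gamma$, $\tau$) are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def vs_loss(scores, y, alpha, beta, delta):
    """VS-style loss for one sample: multiplicative beta_y and additive
    Delta_y adjust the logits, while alpha_y re-weights the loss."""
    z = beta * scores + delta
    z = z - z.max()                      # log-sum-exp stabilization
    log_prob = z[y] - np.log(np.exp(z).sum())
    return -alpha[y] * log_prob

scores = np.array([2.0, 1.0, 0.5])       # raw scores s for one sample
pi = np.array([0.7, 0.2, 0.1])           # class priors pi_y
alpha = pi ** -0.25                      # alpha_y ∝ pi_y^{-nu},  nu = 0.25
beta = (pi / pi[0]) ** 0.1               # beta_y = (N_y/N_1)^gamma, gamma = 0.1
delta = 1.0 * np.log(pi)                 # Delta_y = tau * log(pi_y), tau = 1.0

# For the same raw scores, the tail class (y=2) incurs a much larger loss
# than the head class (y=0), pushing its margin up during training.
loss_head = vs_loss(scores, 0, alpha, beta, delta)
loss_tail = vs_loss(scores, 2, alpha, beta, delta)
```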
---
> **Q7**: Details of hyperparameter search.
**A7**: To validate our theoretical results, we first tune these parameters via grid search as suggested in [13, 16]. Specifically, $\alpha_y \propto \pi_y^{-\nu}$, and $\nu$ is searched in $\\{0.15, 0.25, 0.75, 1.0, 2.0, 3.0\\}$; $\Delta_y = \tau \log \pi_y$, and $\tau$ is searched in $\\{0.5, 0.75, 1.0, 1.25, 2.0\\}$; $\beta_y = (N_y / N_1)^\gamma$, and $\gamma$ is searched in $\\{0.05, 0.1, 0.15, 0.2, 0.25\\}$.
Benefiting from the theoretical validation in Sec.4.2, the complexity can be significantly decreased. For example, according to Fig.4, we will choose a small $\gamma$ when $\nu$ is large to avoid the incompatibility issue. We will update this detail in the future version.
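The grid search and its theory-guided pruning described above can be sketched as follows; the concrete pruning thresholds ($\nu \ge 1.0$, $\gamma > 0.1$) are hypothetical stand-ins for the rule read off Fig.4.

```python
import itertools

# Grids from the response: nu (for alpha_y), tau (for Delta_y),
# gamma (for beta_y).
nu_grid = [0.15, 0.25, 0.75, 1.0, 2.0, 3.0]
tau_grid = [0.5, 0.75, 1.0, 1.25, 2.0]
gamma_grid = [0.05, 0.1, 0.15, 0.2, 0.25]

full = list(itertools.product(nu_grid, tau_grid, gamma_grid))

# Theory-guided pruning: choose a small gamma when nu is large, to avoid
# the incompatibility issue (threshold values are illustrative only).
pruned = [(nu, tau, g) for nu, tau, g in full if not (nu >= 1.0 and g > 0.1)]
```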
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer and your clarifications. I will keep my score unchanged.
---
Reply to Comment 1.1.1:
Title: Thanks for your kind comment
Comment: Thanks for your kind comment! Following your suggestion, we will incorporate our response into the future version to clarify these issues. | Rebuttal 1:
Rebuttal: Dear reviewers,
First, we would like to express our sincere gratitude for your valuable comments. Following the valuable suggestions, we have carefully polished and improved the corresponding details. Now we present a brief summary of the response.
- **We clarify some important concepts** such as
- the meaning of bounding $\mathcal{R}_\text{bal}^L$, (**Q1** for Reviewer 2B2B)
- the meaning of 'data-dependent', (**Q3** for Reviewer 2B2B)
- the mechanisms of $B_y(f)$, (**Q4** for Reviewer 2B2B)
- the line of 'CE+None' in Fig.3, (**Q5** for Reviewer 2B2B)
- the setting of $\beta_y, \Delta_y$, (**Q6** for Reviewer 2B2B)
- the strategy for tuning the hyper-parameters, (**Q7** for Reviewer 2B2B)
- the meaning of acronyms. (**Q2** for Reviewer sr97)
- the formulation provided in the future version of the introduction. (**Q3** for Reviewer sr97)
- **We provide more empirical results**, including
- those with different backbones (DenseNet121), (**Q** for Reviewer D9Eg)
- those with an imbalance ratio $\rho=10$ (Fig.1-3), (**Q4** for Reviewer sr97)
- those with different protocols (weight decay, epochs, and data augmentation). (**Q2,Q3** for Reviewer 4AC3)
- **We uploaded the code to an anonymized repository and sent its link to AC**. (**Q1** for Reviewer 4AC3)
- **We checked the typos and grammatical errors**. (**Q2** for Reviewer 2B2B, **Q1** for Reviewer sr97)
Please refer to the respective response for more details, and we will update all these improvements in the future version.
Pdf: /pdf/e938a6e0cdaf49673d7539b83b74e437d7e8711c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unsupervised Graph Neural Architecture Search with Disentangled Self-Supervision | Accept (poster) | Summary: This paper proposes a method, DSGAS, to automatically design GNN architectures in an unsupervised manner. It discovers the optimal architectures by learning the latent graph factors.
Strengths: It is interesting and novel to automate the design of GNNs in an unsupervised manner, which holds significant potential.
Weaknesses: 1. The necessity of designing disentanglement is not clear. It is true that GNNs may prefer different graph factors. What is the difference between the proposed method and the following baseline: use one latent factor to design one architecture, then run multiple times and ensemble the searched architectures?
The motivation for disentangled architectures is not well justified. While the evaluations on K are provided, it would be more effective to achieve K=2 by ensembling two K=1 results.
2. The experiments are insufficient to justify the effectiveness of the proposed design method. Since it designs GNNs in an unsupervised manner, it would be better to conduct fair comparisons with existing unsupervised methods rather than methods designed in a supervised manner.
3. Apart from the comparisons of performance, the comparisons of search cost, the parameters of the searched architectures should be provided as well.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please check the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please check the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the comments and suggestions. We have carefully reviewed each point raised and make responses to the reviewer point by point as follows.
> The necessity of designing disentanglement is not clear. It is true that GNNs may prefer different graph factors. What is the difference between the proposed method and the baseline that uses one latent factor to design one architecture, then runs multiple times and ensembles the searched architectures? The motivation for disentangled architectures is not well justified. While the evaluations on K are provided, it would be more effective to achieve K=2 by ensembling two K=1 results.
Thank you for your comment. Following your suggestion, we compare a baseline that ensembles K searched architectures by averaging their outputs. The results on the dataset Computers are shown in the following table.
| K | 2 | 3 | 4 | 5 |
| :------- | :--------- | :--------- | :--------- | :--------- |
| Ours | 85.1+-0.4 | 86.7+-0.4 | 87.3+-0.4 | 86.5+-0.4 |
| Ensemble | 84.0+-0.3 | 84.2+-0.4 | 84.0+-0.5 | 83.2+-0.4 |
As shown in the table, our method yields a significant performance improvement over the ensemble baseline. The reason might be that the ensemble baseline cannot learn to capture various graph factors, and accordingly cannot search the optimal architectures with regard to different graph factors. In contrast, our method is trained end-to-end, learning to jointly discover the latent factors and search multiple factor-wise expert architectures, which achieves state-of-the-art performance on various datasets.
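For reference, the ensemble baseline in the table averages the class-probability outputs of the K independently searched architectures; a minimal sketch follows, with random logits as stand-ins for real model outputs (the shapes and K=3 are illustrative only).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Logits from K=3 independently searched architectures on a batch of
# 2 samples with 4 classes (random stand-ins for real model outputs).
rng = np.random.default_rng(0)
logits = [rng.normal(size=(2, 4)) for _ in range(3)]

# Ensemble prediction: average the per-model class probabilities.
probs = np.mean([softmax(l) for l in logits], axis=0)
pred = probs.argmax(axis=-1)
```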
> It would be better to conduct comparisons with existing unsupervised methods rather than methods designed under a supervised manner.
Thank you for your comment. To the best of our knowledge, the study of unsupervised graph neural architecture search remains unexplored in the literature, and our method is the first GNAS method designed under unsupervised settings. The existing GNAS methods are designed in a supervised manner, and for fair comparisons, we extend these methods as baselines by replacing the supervised loss with the self-supervised loss to suit the unsupervised NAS settings.
> Apart from the comparisons of performance, the comparisons of search cost, the parameters of the searched architectures should be provided as well.
Thank you for your suggestion. Following your suggestion, we provide the comparisons of GNAS methods in terms of performance, search cost, and architecture parameters in the following table. The time is tested with one NVIDIA 3090 GPU.
| Data | CS | | | Computers | | | Physics | | | Photo | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Metric | ACC(%) | Time(s) | #Params(K) | ACC(%) | Time(s) | #Params(K) | ACC(%) | Time(s) | #Params(K) | ACC(%) | Time(s) | #Params(K) |
| Random | 92.9+-0.3 | 1071 | 899 | 84.8+-0.4 | 3605 | 144 | 95.4+-0.1 | 2095 | 1096 | 91.1+-0.6 | 522 | 142 |
| DARTS | 92.8+-0.3 | 34 | 915 | 79.7+-0.5 | 79 | 144 | 95.2+-0.1 | 75 | 1116 | 91.5+-0.6 | 13 | 126 |
| GraphNAS | 91.6+-0.3 | 647 | 1011 | 69.0+-0.6 | 5295 | 372 | 94.5+-0.1 | 2268 | 1198 | 89.3+-0.7 | 435 | 238 |
| GASSO | 93.1+-0.3 | 34 | 2632 | 84.9+-0.4 | 69 | 370 | 95.7+-0.1 | 75 | 3236 | 92.0+-0.3 | 13 | 361 |
| Ours | 93.5+-0.2 | 49 | 1013 | 86.6+-0.4 | 201 | 259 | 95.7+-0.1 | 99 | 1250 | 93.3+-0.3 | 20 | 288 |
As shown in the table above, the training time of our method is on par with the state-of-the-art one-shot NAS methods (e.g., DARTS, GASSO), which is much more efficient than the multi-trial NAS methods (e.g., Random and GraphNAS). The numbers of parameters of the searched architectures are comparable across methods. While being competitive in efficiency, our method achieves significant performance improvements over the baselines.
---
Rebuttal Comment 1.1:
Comment: Your response addressed most of my concerns.
For W2, comparisons with unsupervised (or self-supervised) human-designed GNN methods are expected, rather than the unsupervised Graph NAS baselines.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. Following your suggestion, we provide the comparisons with representative self-supervised human-designed GNN methods. Specifically, for graph classification, we compare with DGK[1], Graph2Vec[2], InfoGraph[3], GraphCL[4], and JOAO[5] on PROTEINS dataset. For node classification, we compare with DGI[6], MVGRL[7], GRACE[8], GCA[9], and LaGraph[10] on CS dataset. The results are shown in the following tables.
Table 1.
| DGK[1] | Graph2Vec[2] | InfoGraph[3] | GraphCL[4] | JOAO[5] | Ours |
|-----------|-----------|-----------|-----------|-----------|-----------|
| 73.3+-0.8 | 73.3+-2.0 | 74.4+-0.3 | 74.4+-0.5 | 74.6+-0.4 | 76.0+-0.2 |
Table 2.
| DGI[6] | MVGRL[7] | GRACE[8] | GCA[9] | LaGraph[10] | Ours |
|-----------|-----------|-----------|-----------|-----------|-----------|
| 92.2+-0.6 | 92.1+-0.1 | 92.9+-0.0 | 93.1+-0.0 | 93.3+-0.2 | 93.5+-0.2 |
The results show that our method has significant performance improvements over the self-supervised human-designed GNN baselines. We will incorporate the results in the revision.
Once again, we thank you for your time and consideration. Should you have any further questions, we would be delighted to provide further responses.
[1] Yanardag, Pinar, and S. V. N. Vishwanathan. "Deep graph kernels." Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. 2015.
[2] Narayanan, Annamalai, et al. "graph2vec: Learning distributed representations of graphs." arXiv preprint arXiv:1707.05005 (2017).
[3] Sun, Fan-Yun, et al. "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization." International Conference on Learning Representations. 2019.
[4] You, Yuning, et al. "Graph contrastive learning with augmentations." Advances in neural information processing systems 33 (2020): 5812-5823.
[5] You, Yuning, et al. "Graph contrastive learning automated." International Conference on Machine Learning. PMLR, 2021.
[6] Veličković, Petar, et al. "Deep Graph Infomax." International Conference on Learning Representations. 2018.
[7] Hassani, Kaveh, and Amir Hosein Khasahmadi. "Contrastive multi-view representation learning on graphs." International conference on machine learning. PMLR, 2020.
[8] Zhu, Yanqiao, et al. "Deep graph contrastive representation learning." arXiv preprint arXiv:2006.04131 (2020).
[9] Zhu, Yanqiao, et al. "Graph contrastive learning with adaptive augmentation." Proceedings of the Web Conference 2021. 2021.
[10] Xie, Yaochen, Zhao Xu, and Shuiwang Ji. "Self-supervised representation learning via latent graph prediction." International Conference on Machine Learning. PMLR, 2022. | Summary: This paper addresses the problem of unsupervised graph neural architecture search, which has received limited attention in existing literature. The authors propose a novel approach called Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) to discover optimal architectures capturing latent graph factors in an unsupervised manner. The DSGAS model consists of a disentangled graph super-network that incorporates multiple architectures with factor-wise disentanglement, self-supervised training to estimate architecture performance under different factors, and a contrastive search method to discover architectures with factor-specific expertise. Extensive experiments on 11 real-world datasets demonstrate that the proposed model achieves state-of-the-art performance compared to several baseline methods.
Strengths: - The paper addresses an important and underexplored problem in the field of graph neural architecture search, i.e., scenarios where supervised labels are not available.
- The proposed DSGAS model introduces a novel approach to discovering optimal architectures by leveraging latent graph factors in a self-supervised fashion based on unlabeled graph data.
- The disentangled graph super-network and the self-supervised training with joint architecture-graph disentanglement are novel contributions that enhance the understanding and performance of the proposed model.
- The extensive experimental evaluation on 11 real-world datasets demonstrates the superiority of the DSGAS model over several baseline methods, showing its effectiveness in unsupervised graph neural architecture search.
Weaknesses: - While the paper presents a novel approach, further details regarding the implementation of the proposed DSGAS model in the main paper would be beneficial for the readers to understand the design clearly. For example, it is better to illustrate the details of the supernet construction in the main paper.
- Some typos should be corrected, e.g., heterogenous -> heterogeneous on line 323.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the valuable comments. We respond to the reviewer’s comments point by point as follows.
> While the paper presents a novel approach, further details regarding the implementation of the proposed DSGAS model in the main paper would be beneficial for the readers to understand the design clearly. For example, it is better to illustrate the details of the supernet construction in the main paper.
Thank you for your suggestion. We described the details of the super-network configuration in the appendix due to the page limit. Here, we briefly describe the super-network. The super-network consists of two parts, the operation pool and the directed acyclic graph (DAG). The operation pool includes several node aggregation operations (e.g., GCN, GIN, GAT, etc.), graph pooling operations (e.g., SortPool, AttentionPool, etc.), and layer merging operations (e.g., MaxMerge, ConcatMerge, etc.). The DAG determines how the operations are connected to calculate the graph representations for the subsequent classification tasks. We will add the illustrations in the revised main paper.
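The two-part structure described above (operation pool plus DAG) can be sketched as a tiny illustration. The operation names follow the examples given in this response, while the DAG layout and all variable names are hypothetical, not the paper's exact search space:

```python
# A minimal, illustrative encoding of the described super-network:
# an operation pool plus a DAG over computation nodes. The specific
# DAG topology here is an assumption, not the paper's design.
operation_pool = {
    "aggregate": ["GCN", "GIN", "GAT"],     # node aggregation choices
    "pool": ["SortPool", "AttentionPool"],  # graph pooling choices
    "merge": ["MaxMerge", "ConcatMerge"],   # layer merging choices
}

# Each DAG edge carries a mixed operation drawn from one category;
# searching an architecture means picking one choice per edge.
dag = [
    {"from": 0, "to": 1, "category": "aggregate"},
    {"from": 1, "to": 2, "category": "aggregate"},
    {"from": 2, "to": 3, "category": "pool"},
    {"from": 3, "to": 4, "category": "merge"},  # merges intermediate outputs
]

# Size of the discrete search space: product of choices over edges.
search_space_size = 1
for edge in dag:
    search_space_size *= len(operation_pool[edge["category"]])
```

Even this toy pool yields 3 × 3 × 2 × 2 = 36 candidate architectures, which is why the DAG-plus-pool encoding is searched rather than enumerated.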
> Some typos should be corrected, e.g., heterogenous -> heterogeneous on line 323.
Thank you for your suggestions. We will correct the typos in the revised main paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The rebuttal resolved my concern. In my opinion, the paper addresses an important and underexplored NAS problem, and the proposed disentangled self-supervision is novel. I would like to raise my recommendation from 7 to 8. | Summary: This paper gives a pioneering solution for graph neural architecture search with limited labels. The key idea is to train a super-network containing disentangled factor-wise architectures via self-supervised learning specially designed for graph neural architecture search. In this way, the paper addresses the key problem of finding optimal architectures capturing various graph factors in an unsupervised fashion. The extensive experiments show that the method achieves significant improvements over state-of-the-art GraphNAS methods under unsupervised settings. Detailed ablation studies also show the effectiveness of each proposed component.
Strengths: Unsupervised GraphNAS is an important and valuable problem that remains unexplored in the literature. This paper is a timely work introducing GraphNAS to scenarios where labels are scarce, and these scenarios are actually quite common in practice. The method design is novel; the contrastive search module is especially interesting in that it pushes together architectures with similar factor expertise and pulls apart dissimilar architectures via architecture augmentations to explore the search space. The extensive experiments verify its capability of automating architecture design for data from various fields.
Weaknesses:
1. Although it may fall outside the scope of this paper, I'm curious whether the model can be applied to areas like AI4Science where labels are quite scarce?
2. Pretraining techniques have shown promising applications in many scenarios such as NLP and CV, and they usually have a close relationship with unsupervised or self-supervised techniques. It would be better to discuss the relationship between this method and GraphNAS pretraining.
3. The framework diagram (Figure 1) is a bit small; in particular, the font size is too small to read clearly. It would be better to enlarge the diagram and increase the font size a bit.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. Although it may fall outside the scope of this paper, I'm curious whether the model can be applied to areas like AI4Science where labels are quite scarce?
2. Pretraining techniques have shown promising applications in many scenarios such as NLP and CV, and they usually have a close relationship with unsupervised or self-supervised techniques. It would be better to discuss the relationship between this method and GraphNAS pretraining.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see any concerns regarding the societal impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the valuable suggestions, which are helpful for the improvement of the paper. We respond to the reviewer’s comments point by point as follows.
> Although it may fall outside the scope of this paper, i'm curious about whether the model can be applied into areas like AI4Science where labels are quite scarce?
Thanks for your inspiring comment. We agree that some trending areas like AI for Science are close to the problem we focus on, i.e., graph neural architecture search with limited or even no labels. For instance, we notice that some recent works attempt to comprehend the relationship between metabolic pathways and molecular pathways for synthesizing new molecules by leveraging the power of graph neural networks. In these scenarios, labels, e.g., molecular properties or subtypes, are limited, posing great challenges for graph neural architecture search to automatically enhance GNN architectures. Since our method is specially designed for GNAS with limited or even no labels, these scenarios are natural and promising applications, which we leave to future work to explore.
> Pretraining techniques have shown promising applications in many scenarios like NLP and CV, and they usually have a close relationship with unsupervised or self-supervised techniques. It would better to discuss the relationship between this method and GraphNAS pretraining.
Thanks for your comment. To the best of our knowledge, GraphNAS pretraining remains unexplored; it could simultaneously involve multiple goals such as self-supervised learning, transfer learning, and multi-task learning. In this paper, we mainly focus on unsupervised graph neural architecture search, i.e., discovering graph architectures without labels. In the semi-supervised experiments, we also show some progress in pretraining the super-networks to alleviate label scarcity. For example, our model with the pretraining stage has an absolute improvement of 5% on one dataset compared with the ablated version without the pretraining stage. We leave exploring more aspects of GraphNAS pretraining to future work.
> The framework diagram (figure 1) is a bit small, especially that the font size which is too small to read clearly. It would be better to enlarge the diagram and increase the font size a bit.
Thank you for your suggestions. We will improve the diagram in the revised main paper. | Summary: This paper mainly focuses on the problem of graph neural architecture search without labels. The authors find that the key problem is to discover the latent graph factors that drive the formation of graph data as well as the underlying relations between the factors and the optimal neural architectures. To this end, the authors propose the disentangled super-network, and design a self-supervised training and searching paradigm to automate the architecture design. The experiment results are significant and the deeper analyses are convincing.
Strengths: - This paper mainly focuses on the problem of graph neural architecture search without labels, which is important yet underexplored in the literature.
- The proposed methods sound reasonable.
- The experiment results are significant and the deeper analyses are convincing.
Weaknesses: - Can the authors explain the details of the intuition for the proposed contrastive search?
- The experiments include both node classification and graph classification tasks. Do these tasks share the same search space?
- As the existing graph NAS methods require labels for training and searching, how do the authors modify them for comparison as baselines?
- What about the complexity of the method?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed comments and suggestions. We respond to the reviewer’s comments point by point as follows.
> Can the authors explain the details of the intuition for the proposed contrastive search?
Thank you for your suggestion. We provide the details of the intuition for the proposed contrastive search as follows. As architectures similar in operation choices and topologies have similar capabilities of capturing semantics for downstream tasks, slightly modifying the architecture will have a slight influence on its capability. Moreover, since different GNN architectures are experts in different downstream tasks, the architectures searched for different disentangled latent factors are expected to have dissimilar capabilities under different factors. For these two reasons, we propose the contrastive architecture search to capture discriminative features by pulling similar architectures together and pushing dissimilar architectures away in the latent space.
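The pull-together/push-apart intuition can be sketched with a standard InfoNCE-style contrastive loss over architecture embeddings. This is an illustration of the general principle, not the paper's exact objective; all names and dimensions are hypothetical:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: small when the anchor is close to its positive
    (a slightly modified architecture) and far from the negatives
    (architectures searched for other latent factors)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
arch = rng.normal(size=16)                       # embedding of one architecture
slight_edit = arch + 0.05 * rng.normal(size=16)  # small modification: similar capability
other_factors = [rng.normal(size=16) for _ in range(4)]

# Matching the stated intuition, the loss is lower for the slightly
# modified architecture than for an unrelated random one.
loss_similar = info_nce(arch, slight_edit, other_factors)
loss_random = info_nce(arch, rng.normal(size=16), other_factors)
```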
> The experiments include both node classification and graph classification tasks. Do these tasks share the same search space?
Thank you for your comment. In comparison with the search space for node classification tasks, the search space for graph classification tasks has two extra kinds of tailorable operations, i.e., graph pooling operations to capture global representations and layer merging operations to provide jumping knowledge. We provided the details of the search space in Appendix D.1.
> As the existing graph NAS methods are in need of labels for training and searching, how do the authors modify them to compare as baselines?
Thank you for your question. We first pre-train the models with a self-supervised loss and then evaluate them by finetuning an extra classifier. For the graph NAS baselines, the self-supervised loss substitutes for the supervised loss to train the model parameters as well as to select the architectures.
> What about the complexity of the method?
Thank you for your suggestion. We provide the complexity analysis as follows. Denote the node set and edge set of the graph as $V$ and $E$, the number of latent factors as $K$, the number of operation choices as $|\mathcal{O}|$, and the dimensionality of hidden representations as $d$. The time complexity of the disentangled super-network is $O(K|E|d + K|V|d^2)$, where the computation for each factor is fully parallelizable and amenable to GPU acceleration, and $K$ is usually a small constant. The time complexities of the self-supervised training and contrastive search modules are both $O(K^2d^2)$. As architectures under different factors share parameters, the number of learnable parameters is the same as in a classical graph super-network, i.e., $O(|\mathcal{O}|d^2)$. Therefore, the complexity of our method is comparable to classical GNAS methods. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The authors propose a novel Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which is able to discover the optimal architectures capturing various latent graph factors in a self-supervised fashion based on unlabeled graph data. Extensive experiments on 11 real-world datasets demonstrate that the proposed DSGAS model is able to achieve state-of-the-art performance against several baseline methods in an unsupervised manner.
Strengths: 1. I believe the problem of unsupervised graph neural architecture search is important, and this work is significant in that it may unlock more NAS applications in practice.
2. The writing is clear and well-organized in general.
3. The experiments and analyses are extensive.
Weaknesses: Please see my comments in the questions part.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What is the relationship between the supervised and unsupervised NAS paradigms in terms of the optimization problem, e.g., line 84 in the main paper?
2. In Figure 1, the super-network seems a simple DAG, why do the searched architectures in Figure 4 seem kind of complex in terms of the operation connections?
3. In Figure 1, it seems that the mixed operation has different colors for different $\alpha_i$, is it possible that some factor-wise architectures share the same operation?
4. In Figure 1, is it required that all the architectures should choose the same augmentations? In other words, is it possible that some architectures adopt operation choice perturbation, while others adopt weight perturbation in the same batch for contrastive learning?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed comments and insightful questions. We make responses to the reviewer’s comments as follows.
> What is the relationship between supervised and unsupervised nas paradigms in terms of the optimization problem, e.g., line 84 in the main paper?
Thank you for your comment. The problem of unsupervised GNAS can be formulated as optimizing an architecture generator that is able to discover powerful architectures by exploiting inherent graph properties without labels. Since labels are not available under unsupervised settings, the validation and training metrics in Line 84 cannot be calculated to measure architecture performance and search for architectures as in supervised NAS methods.
> In Figure 1, the super-network seems a simple DAG, why do the searched architectures in Figure 4 seem kind of complex in terms of the operation connections?
Thank you for your question. The proposed disentangled super-network can flexibly incorporate multiple architectures to be searched with regard to various graph factors. This design alleviates the entanglement of architectures by providing more flexible choices of paths in the super-network, which also results in complex yet competitive architectures.
> In Figure 1, it seems that the mixed operation has different colors for different $\alpha_i$, is it possible that some factor-wise architectures share the same operation?
Thank you for your question. Yes, some factor-wise architectures can share the same operation. During the search process, the architectures are encouraged to capture different graph factors but are not prohibited from sharing the same operation. This enables the factor-wise architectures to share some common knowledge in learning the graph properties.
> In Figure 1, is it required that all the architectures should choose the same augmentations?
Thank you for your question. No, the architectures can choose different augmentations. Like data augmentations in contrastive learning, all the architecture augmentations act as a transformation of an architecture into a similar architecture. In Figure 3(b), 'Compose' randomly chooses different architecture augmentations for each architecture, and it is shown to be effective on various datasets.
---
Rebuttal Comment 1.1:
Title: Thanks for your reply.
Comment: Thanks for your detailed response. All my concerns have been resolved. Besides, after reading the other reviews and your replies, I find the disentangled NAS mechanism pretty interesting and sound. I would like to raise my score.
A Hierarchical Spatial Transformer for Massive Point Samples in Continuous Space | Accept (poster) | Summary: The authors propose a hierarchical spatial transformer model for many irregular point samples in a continuous spatial domain. Compared with existing methods, the proposed method can model implicit spatial dependency across irregular samples and at multiple scales in continuous space. The proposed model uses a quad-tree hierarchy to conduct efficient attention operations by approximating distant points at a coarse resolution. The model also includes an uncertainty quantification module to capture the varying prediction confidence under different input sample densities. Evaluations on several datasets confirm that the proposed method outperforms multiple baselines in accuracy and effectiveness of uncertainty quantification.
Strengths: Overall, this is a solid paper with novel technical contributions and extensive experimental evaluations.
• The paper solves a significant problem of spatial representation learning for irregular samples in continuous space. The problem has many significant applications in environment sustainability. It is also important for learning surrogate models to speed up numerical simulations.
• The technical novelty of the proposed hierarchical attention architecture is strong. The model uses a quad-tree to learn latent representation of point samples in different subareas in a multi-scale hierarchy. The attention layers use quad-tree to make spatial approximation to overcome the computational bottleneck. The model also has an uncertainty quantification module.
• The proposed method has shown promising results on both real-world and synthetic datasets. The evaluation is solid with sensitivity analysis and computational experiments.
Weaknesses: Although the experimental evaluation is extensive, it would be helpful to add more detail on how the thresholds of uncertain/certain predictions are determined in the UQ metric. See Questions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: • How are the thresholds of uncertain/certain predictions determined in UQ metric?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No limitations on societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Question: how are the thresholds of uncertain/certain predictions determined in UQ metric?
Response: Thank you for your positive feedback. About how to choose thresholds of uncertainty in the UQ metric, we discussed them in the supplementary materials in section 7.1. Here we briefly describe how we choose the UQ threshold. We used a validation dataset for determining accurate/inaccurate and certain/uncertain thresholds. Specifically, the threshold for accuracy is determined by the average MAE loss on validated samples. Then the threshold for uncertainty is determined by the average uncertainty score for accurate and inaccurate prediction respectively. The detailed equation is provided in our supplementary materials section 7.1.
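The threshold rule described above can be sketched in a few lines of NumPy. This is a hedged reconstruction of the procedure as stated (average MAE for the accuracy threshold, then per-group average uncertainty scores); the function name and toy data are ours, not the paper's code:

```python
import numpy as np

def uq_thresholds(val_errors, val_uncertainty):
    # Accuracy threshold: average MAE over the validation samples.
    acc_thresh = val_errors.mean()
    accurate = val_errors <= acc_thresh
    # Uncertainty thresholds: mean uncertainty score of the accurate
    # and inaccurate groups, respectively.
    return acc_thresh, val_uncertainty[accurate].mean(), val_uncertainty[~accurate].mean()

errors = np.array([0.1, 0.2, 0.3, 0.8])      # toy validation MAEs
scores = np.array([0.05, 0.10, 0.15, 0.90])  # toy uncertainty scores
acc_t, unc_acc, unc_inacc = uq_thresholds(errors, scores)
```

On the toy data the accuracy threshold is 0.35, so the first three predictions count as accurate, giving group-average uncertainty thresholds of 0.10 and 0.90.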
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thanks for the responses. The authors have properly addressed my questions. I am fine with an acceptance. | Summary: This paper proposes a hierarchical transformer to model a large number of irregular point samples in continuous space. This is achieved by a quad-tree hierarchy which could learn the multi-scale spatial representation. So the long-range interactions are recorded.
The experiments are performed on three real-world datasets (two water quality datasets and one sea-surface temperature prediction dataset) and one simulation dataset.
Strengths: This submission tries to address the large-sample-count issue by using a quadtree. In my understanding, the quadtree is a classic technique for downsampling points so that the point count is reduced to an affordable level. The quadtree can also model long-range interactions without a distributional assumption.
The proposed method reduces the computational cost to $O(N\log N)$ or $O(NM)$.
The approach presentation is detailed and clear.
Weaknesses: The adopted quadtree technique seems to work well only in the sparse case. As a result, the proposed method is not a general solution for all kinds of large point-sample sets.
The experimental setting is not convincing in the reviewer's current understanding. In particular, the Red Tide and Turbidity datasets have a large number of data samples, but it seems there is only one 'epoch'. In other words, the point count is large but the total data size seems very small. And for Darcy flow and Sea Surface, it seems each image only has 400 point samples, which is not a large set in my understanding; the all-pair transformer could cover it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the author clarify how the adopted quadtree technique in the paper could solve uniformly distributed cases?
Linked to the weakness above: given how the Darcy flow and Sea Surface datasets are set up, could the authors explain why all point samples must be fed in a single pass? In other words, why not solve this task on all three datasets following a classic 'classification setting'?
What are the pros and cons compared with the all-pair transformer?
Could the authors provide the number of parameters for the all-pair transformer and the other baselines in Table 2?
Any guidance on choosing hyper-parameters for designing the quadtree, if needed?
As noted in the Weaknesses, my main concerns are the first two questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer:
**W1 & Q1: how the quadtree technique could solve uniformly distributed cases.**
Thank you for the comment. In fact, our HST model does not require samples to be sparsely distributed for an efficiency gain. For uniformly distributed point samples, the quadtree is a balanced (complete) tree. We can illustrate how our model works in this case with an example; please see Fig. 1 of the global rebuttal PDF (the topmost box). There are $8 \times 8$ point samples evenly distributed in 2D. Assume the minimum leaf node size is 4. The height of the quadtree is $h = \log_4(64/4) = 2$, as Fig. 1(b) shows. Consider point 1 as the query sample: its key set includes the 3 remaining points in the same leaf node (points 2-4), 3 internal nodes $(r_2^2, \ldots, r_4^2)$, and 3 internal nodes $(r_2^1, \ldots, r_4^1)$. Thus, there are only 9 keys, far fewer than the total number of points (64). In fact, our efficiency gain is better in the uniformly distributed case, as the depth of the tree is lower, which significantly reduces the time costs.
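The arithmetic in this example generalizes. Below is a small sketch (ours, under the stated assumption of a balanced quadtree over a uniformly distributed grid of points):

```python
import math

def num_keys(total_points, leaf_size):
    """Keys attended to by one query point in a balanced quadtree:
    the other points in its leaf, plus 3 sibling cells at each level."""
    height = round(math.log(total_points / leaf_size, 4))
    return (leaf_size - 1) + 3 * height

# The 8x8 example: 64 points, leaf size 4 -> height 2 -> 9 keys (vs. 63 all-pairs).
# The key count grows logarithmically: 4096 points, leaf size 4 -> 18 keys (vs. 4095).
```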
**W2 & Q2: the experiment with 400 point samples.**
This is a valid concern. First, we chose to randomly select 400 input points from each $64\times 64$ image in the Darcy flow and Sea Surface datasets because drawing much denser input samples (e.g., over 1,000) makes the problem trivial due to strong spatial autocorrelation; in that case, even a simple nearest-neighbor estimator can work well. More importantly, even a 400-point input is not as easy as it seems for the vanilla transformer model due to its high memory and time costs. The GPU memory cost of the vanilla all-pair transformer with 1K point samples exceeds the capability of an A100 GPU due to the large size of the intermediate tensors. This is confirmed in our experiment results (Figure 4 in the paper).
Second, we could run all the point samples in a single pass, but the GPU memory cost is very high; we provide a memory consumption analysis below. Third, even with 400 point samples, our model still improves computational efficiency, as Fig. 4 in our paper shows: the computation time for one epoch is reduced from 25 minutes to 5 minutes. Our model also achieves a slight accuracy improvement over the all-pair transformer, as Table 2 shows.
Memory cost estimation of vanilla transformer:
Assume $L = 1{,}000$ points, $H = 8$ attention heads, $N = 6$ blocks, head dimension $D = 64$, and batch size $B = 512$. The model stores the forward attention activations (including self-attention and dense layers) ($M_{a}$), the backward gradients ($M_{g}$), and the model parameters ($M_{m}$). The dominant factor is the attention layer. For each attention layer, given query and key tensors $Q, K \in \mathbb{R}^{L \times D}$, the attention matrix multiplication produces a tensor of shape $[L, L]$ for each attention head and each sample in the minibatch. Thus, the total memory cost for all heads in one minibatch in one attention layer is $M_{a} = 4BHL^{2}$ bytes (4 bytes per float32 value). Multiplying by the $N$ attention layers, the cost is $4NBHL^{2}$ bytes $= 4 \times 6 \times 512 \times 8 \times 1000^{2}$ bytes $\approx 98$ GB. This exceeds the A100 memory capacity (80 GB), as shown in our experiment results in Figure 4.
In contrast, our HST model significantly reduces the memory costs to only 24 GB. The reason is that we approximate point samples far away from the query points by coarse representation (in quadtree internal nodes) in the attention computation. The further away points are from the query points, the coarser cells we use in approximation. Due to limited space, we do not provide estimation here. The detailed derivation can be found in the rebuttal for reviewer 2.
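For reference, the attention-activation arithmetic above is easy to reproduce. A small sketch (ours): re-running the numbers gives roughly 98 decimal GB, i.e., about 92 GiB, either way well beyond an 80 GB A100:

```python
def attn_activation_bytes(layers, batch, heads, seq_len, bytes_per_float=4):
    # One [L, L] attention map per head, per sample, per layer (float32).
    return layers * batch * heads * seq_len ** 2 * bytes_per_float

b = attn_activation_bytes(layers=6, batch=512, heads=8, seq_len=1000)
gb = b / 1e9  # roughly 98 decimal GB, exceeding an 80 GB A100
```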
**Q3: pros and cons compared with all-pair transformer.**
This is a great question. We provide some analysis on the pros and cons of our model compared to the all-pair transformer.
The pros:
1. Able to capture multi-scale effect with the hierarchical quad-tree structure: Our model learns a hierarchical spatial representation within the quadtree. The multi-scale effect is important for many scientific problems. This is reflected by the improved prediction accuracy in experiments.
2. Our HST reduces the time and memory costs when compared with the all-pair transformer. Please see our analysis above.
3. We added an uncertainty quantification branch to estimate the model's confidence.
The con:
1. Our HST trades spatial granularity for efficiency when calculating attention between points: we approximate the set of key points by coarse cells in the quadtree based on their distance to the current query point.
**Q4: the number of parameters of baselines:**
Thanks for the great suggestion. We will add the number of parameters for the baselines and our model in Table 2. The numbers are as follows. GP: 2 (RBF kernel parameters), Deep GP: 0.24 million, Spatial GCN: 0.79 million, All-pair transformer: 3.81 million, HST model: 3.81 million, NodeFormer: 2.2 million, Galerkin Transformer: 17.6 million
Note that the large number of parameters in the Galerkin Transformer is because we used the default code setup without tuning. If we set the number of attention layers to match the other transformer models (six encoder-decoder layers), the number of Galerkin Transformer parameters would be 4.4 million.
**Q5. the guidance on choosing hyper-parameters for designing quadtree.**
We provide the quadtree hyperparameters and a sensitivity analysis in Section 7.1 of our supplementary material. We chose a minimum leaf node size of 20 in the experiments based on prior knowledge about the spatial dependency structure. The leaf node size can be determined based on spatial homogeneity and efficiency needs: roughly, the more spatially homogeneous the input point samples are and the more efficiency gain is needed, the bigger the leaf node cells should be.
Strengths: Quad-tree idea is nice, even though not original for irregular grids or pixels.
Weaknesses: The targeted problem in the paper is regression, but the model is framed as an encoder-decoder. The paper does not explain this seemingly unreasonable choice.
The $O(N\log N)$ complexity has a large big-O constant because of sparse matrix operations, which are known to be inefficient on GPUs.
There are not enough details on the models and experiments. For example, model parameters such as the number of layers, hidden size, and number of attention heads are not clearly specified. Training hyperparameters such as the learning rate and its decay schedule are not explicitly stated. This makes it hard to assess the quality of the experiments, let alone replicate them.
The datasets are all 2D with up to 1K sample points. These are well within the capacity of a vanilla Transformer on an A100, the GPU used in the paper. And the optimal leaf node size is 100, which results in a very shallow quad tree. So the evidence for the efficiency gain is not convincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the vocab for the decoder? The solution space for regression is not discrete.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer:
**W1 & Q1:the encoder-decoder module for regression and the vocab for the decoder.**
Response: The confusion may come from the naming. Although the encoder-decoder architecture was originally used in machine translation (with a discrete vocab), common encoder-decoder architectures, e.g., U-Net and the transformer, have been widely used in regression problems such as temperature and precipitation prediction [1-4]. In a general sense, an encoder learns a latent feature representation that encodes the structural dependency within the input data, and a decoder predicts the target variable from the encoded features. Nothing restricts the framework from being applied to regression. We hope this addresses the concern; please comment if there are any questions.
[1] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. NeurIPS 2021
[2] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI 2021
[3] Spatiotemporal Swin-Transformer Network for Short Time Weather Forecasting. CIKM 2021
[4] NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-Task Financial Forecasting. AAAI 2022
**W2: the O(NlogN) complexity has a large big-O constant**
Response: We agree that sparse matrix operations can be less efficient in GPU parallelization when compared to normal dense matrix operations. However, there are already significant efforts in GPU optimization of sparse matrix multiplication (e.g., the cuSPARSE library). In our HST model, the large big-O constant is not a dominant factor, as shown in the experiment results in Figure 4. For 400 points, our HST reduces the time cost from 25 mins to 5 mins. One note is that the time cost for the default all-pair transformer with a 1000-point input is not measurable because of an out-of-memory problem (please see our response below for this question). In contrast, our model can run 1000 points within twenty minutes for one epoch.
**W3: not enough details on models and experiments.**
Response: In fact, we had provided all the model hyperparameters and training details in our supplementary material, Section 7.1. The description is quoted as follows:
Our HST model used six spatial attention layers in both the encoder and the decoder. We used 8 heads, the latent representation embedding dimension of each head was 64, and the quadtree leaf node size threshold was 20 by default. The batch size was $512$. For the training process, we used the MSE loss with a decaying learning rate that was halved if the validation loss did not improve over five epochs (with an initial learning rate of $10^{-4}$ and a minimum rate of $10^{-7}$). We also used early stopping with a patience of 10 epochs and a maximum of 50 epochs. The optimizer was Adam with $\beta_1=0.9$ and $\beta_2 = 0.98$. The $L_2$ regularization weight was $10^{-4}$.
**W4: 1K samples within the capability of vanilla transformer**
Response:
This is a good question. First, we want to clarify that even 1K points will exceed the capacity of a vanilla (all-pair) Transformer on an A100 GPU. This is because the dominating GPU memory costs come from intermediate tensors in forward propagation. Please see below an example calculation of the GPU memory cost with 1K point samples for both the vanilla (all-pair) transformer and our HST transformer.
Memory cost estimation of vanilla transformer:
Assume $L = 1{,}000$ points, $H = 8$ attention heads, $N = 6$ blocks, the dimension of each head $D = 64$, and the batch size $B = 512$. The model stores the results of forward attention activations (including self-attention, dense layers, and skip connections) ($M_{a}$), backward gradients ($M_{g}$), and model parameters ($M_{m}$). The dominant factor is the attention layer. For each attention layer, given query and key tensors $Q, K \in \mathbb{R}^{L \times D}$, the attention matrix from the $QK$ multiplication is a tensor of shape $[L, L]$ for each attention head and each sample within the minibatch. Thus, the total memory cost for all heads in one minibatch in one attention layer is $M_{a} = BHL^{2} \times (\text{float32 bytes})$. Multiplying this by $N$ attention layers, the cost is $4NBHL^{2}\ \text{bytes} = 4 \times 6 \times 512 \times 8 \times 1000^{2}\ \text{bytes} \approx 94\ \text{GB}$. This exceeds the A100 memory capacity (80 GB), as shown in our results in Figure 4.
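The arithmetic above can be verified in a few lines (a check of the stated formula only; the function name is ours):

```python
def attn_activation_bytes(N, B, H, L, bytes_per_float=4):
    # One [L, L] attention matrix per head, per sample, per layer,
    # stored as float32 (4 bytes) in the forward pass.
    return N * B * H * L ** 2 * bytes_per_float

total = attn_activation_bytes(N=6, B=512, H=8, L=1000)
gb = total / 1e9
# About 98 GB decimal (~92 GiB) -- the quoted "~94 GB" up to the
# GB/GiB convention; either way it exceeds an 80 GB A100.
```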
In contrast, our HST model significantly reduces the memory costs. The reason is that we approximate point samples far away from the query points by coarse representation (in quadtree internal nodes) in the attention computation. The further away points are from the query points, the coarser cells we use.
HST memory cost estimation:
Assume the same setup ($L = 1K$ points, $H = 8$ attention heads, $N = 6$ blocks, the dimension of each head $D = 64$, and the batch size $B = 512$) and a quadtree of height $h = 10$ with minimum leaf node size $T = 100$. HST has the same number of query points. For each query point in a leaf node, the key set includes all the points in the same leaf node, as well as only a subset of internal quadtree nodes (the leaf's siblings, its parent's siblings, its grandparent's siblings, and so on up to the root node). This is illustrated by the red boxes in Fig. 1 in the global rebuttal. In this example, the key set is reduced from 64 to 9 nodes. In general, the key set is reduced from $L$ to $T+3h$. Thus, the cost of $N$ attention layers is reduced from $NHL^{2}$ to $NHL(T+3h)$. The sparse indicator matrix for selective key-set attention calculation in HST has a shape of $\mathbf{I} \in \mathbb{R}^{L \times 2L}$, and its non-zero element count is $L(T+3h)$, so its memory cost is $BNL(T+3h) \times \text{float32 bytes}$. Thus, the total memory cost is $2BNHL(T+3h) \times \text{float32 bytes} = 2 \times 512 \times 6 \times 8 \times 1000 \times 130 \times 4\ \text{bytes} \approx 24\ \text{GB}$. Similarly, the time costs of HST are also lower than those of the vanilla transformer due to fewer attention pairs. See our responses above.
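The same check for the HST estimate (assuming, as in the numeric evaluation above, that the $H$ attention heads enter the count; the function name is ours):

```python
def hst_activation_bytes(N, B, H, L, T, h, bytes_per_float=4):
    # Per query point the key set shrinks from L to T + 3h:
    # the T leaf members plus 3 sibling cells at each of the h levels.
    keyset = T + 3 * h
    # The factor 2 accounts for the attention activations plus the
    # sparse indicator matrix, following the estimate above.
    return 2 * N * B * H * L * keyset * bytes_per_float

gb = hst_activation_bytes(N=6, B=512, H=8, L=1000, T=100, h=10) / 1e9
# Roughly 26 GB decimal (~24 GiB), comfortably within an 80 GB A100.
```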
---
Rebuttal 2:
Comment: Thanks for the clarification. I have increased my ratings. | Summary: This paper proposes a hierarchical spatial transformer model for a large number of point samples in continuous space. The model is important for geoscience applications, such as water quality monitoring and air quality monitoring, and for operator learning for numerical models. The novel ideas include continuous position-encoding and hierarchical attention layers, which make a trade-off between efficiency and spatial resolution and capture interactions at multiple spatial scales. The proposed model also has uncertainty quantification that reflects the effect of sample spatial sparsity. The proposed method is compared against multiple baselines on different datasets with significant improvements. There are also computational experiments showing the efficiency of the method.
Strengths: 1. The idea of hierarchical attention with a quad-tree structure for massive samples in the continuous space is quite novel. The idea makes a trade-off between computational efficiency and spatial approximation of latent representation. The spatial approximation in a quad-tree structure is well-motivated by the spatial autocorrelation effect.
2. The proposed method has strong technical contributions. There is theoretical analysis of the time complexity of the proposed model and the gain in efficiency.
3. There are extensive experimental evaluations against multiple state of the art methods on both real-world and synthetic datasets. The results show the better accuracy and uncertainty quantification of the proposed method.
Weaknesses: 1. There are some minor presentation issues that can be fixed. For example, Table 2 has a typo in the title. It should be “two real-world datasets and a synthetic dataset”.
2. The application paragraph in the introduction can be more detailed to better highlight the impact of the model in operator learning for numerical simulation.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can you explain in more details how the proposed model can be used in numerical simulations (e.g., ocean current, multiphase flow) in the introduction?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No limitations discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer:
Thank you for your positive feedback and careful reading. We have done thorough proofreading and corrected the typos in the paper. For the second question, about how the proposed model can be used in numerical simulations, we use multiphase flow as an example. In this case, the input features consist of the 3D coordinates of point samples in the continuous space as well as the initial pressure and velocity of those samples; the target output is the acceleration at any point location. We can add this to the paper.
---
Rebuttal Comment 1.1:
Title: Rebuttal read
Comment: Thank you for providing the rebuttal. I believe it addressed my previous questions. The paper makes valid and practical contributions to the machine learning community. The proposed idea can also benefit a broad range of scientific domains. In my opinion the paper should be accepted. | Rebuttal 1:
Rebuttal: This PDF contains a figure illustrating evenly distributed input points. It shows how our model can reduce computational costs in this case by selecting a subset of keys in the attention calculation.
Pdf: /pdf/7d679834ead8c8bde2a4c6d339eba3c8a9cf7fe4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers | Accept (poster) | Summary: The paper generalizes ProxConnect with forward-backward quantizers and introduces ProxConnect++ that includes some binarization techniques as special cases. With the derived ProxConnect++, the paper proposes BNN++ to illustrate the effectiveness of ProxConnect++. Experiments show the advantages of BNN++ on image classification benchmarks with CNNs and vision transformers.
Strengths: The paper generalizes ProxConnect and presents the unified framework ProxConnect++, which includes some binarization techniques as special cases. With ProxConnect++, one may design new forward-backward quantizers. Extensive experiments on CNNs and vision transformers demonstrate the advantages of the proposed ProxConnect++.
Weaknesses: The paper claims that it develops a unified framework, ProxConnect++, which allows one to design various forward-backward quantizers. However, the forward and backward quantizers used in the paper are limited to existing ones. To show the advantages of ProxConnect++, it should design new forward and backward quantizers that outperform the existing ones. The paper only gives one example, BNN++, which is also inspired by existing forward and backward quantizers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper presents a unified framework ProxConnect++. It is claimed that ProxConnect++ can include many binarization techniques as special cases. I have two concerns for the unified framework.
First, it is desirable to present a unified framework, so one can analyze existing algorithms in a unified way. For a unified framework, the most important thing is to propose new, superior algorithms following the framework. However, the paper does not give more insight into developing new algorithms. Note that BNN++ is also inspired by existing forward and backward quantizers.
Second, the paper compares with some variants of ProxConnect++ and shows that BNN++ performs the best. It is better to include other state-of-the-art binarization techniques in the experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations of their work. There is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would first like to thank Reviewer fayS for the great review and questions. Below we address your concerns:
**Design new proximal quantizers:**
(1) Prior to our work, existing implementations mostly designed the backward quantizer in an ad hoc fashion (based on graphic approximations of the sign function), while Dockhorn et al. showed that any (continuous) monotonic function can be used as a backward quantizer. Our Theorem 1 allows practitioners to easily design compatible forward and backward quantizers.
(2) We will add the following corollary to facilitate the design of F-B quantizers:
> **If the forward quantizer is a continuously differentiable function (with bounded support), then one can simply choose the backward quantizer as the derivative of the forward quantizer.** This follows from Theorem 1 since $\mathsfit{P}(w) \equiv w$ is clearly proximal. Note that the BNN example does not follow from this Corollary (but still follows from the more general Theorem 1).
(3) Aside from the BNN++ example, we give more examples here on how to derive new F-B quantizers from existing implementations (visualization for existing algorithms and new ones are presented in Figure 1 of the global response):
- Bi-Real/R-BNN: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla F(\mathbf{w})$, where $F(\mathbf{w})$ is a piecewise polynomial function. We simply choose $\mathsf{F} = F(\mathbf{w})$ and arrive at our legitimate variant Poly+. Note that we gradually increase the coefficient of $F(\mathbf{w})$ so that full binarization is ensured at the end of the training phase.
- EDE in IR-Net: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla g(\mathbf{w})=kt(1-\text{tanh}^2(t\mathbf{w}))$, where $k$ and $t$ are control variables varying during the training process, such that $g(\mathbf{w})\approx \text{sign}$ at the end of training. Again, we choose $\mathsf{F} =g(\mathbf{w})$ and arrive at our new legitimate variant EDE+.
Empirically, we find both variants outperform their original form in Table 1 of the global response.
(4) With numerous examples, we have demonstrated that PC++ helps us understand and improve existing gradient approximation methods. We find that using PC++ to design new proximal quantizers based on existing ones is straightforward according to the corollary above. Furthermore, given any other forward quantizers (e.g., higher-order approximation of $\text{sign}$), we can easily derive their corresponding backward quantizers.
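As a concrete sketch of the recipe above (taking the backward quantizer as the derivative of a smooth forward quantizer, in the spirit of the tanh-based EDE example; illustrative code, not the authors' implementation):

```python
import math

def forward_quantizer(w, k=1.0, t=1.0):
    # Smooth surrogate of sign(w); approaches sign(w) as t grows.
    return k * math.tanh(t * w)

def backward_quantizer(w, k=1.0, t=1.0):
    # Derivative of the forward quantizer:
    # d/dw [k * tanh(t * w)] = k * t * (1 - tanh^2(t * w)).
    return k * t * (1.0 - math.tanh(t * w) ** 2)

# Finite-difference check that the backward quantizer is indeed
# the derivative of the forward one.
w, eps = 0.3, 1e-6
fd = (forward_quantizer(w + eps) - forward_quantizer(w - eps)) / (2 * eps)
assert abs(fd - backward_quantizer(w)) < 1e-6
```

During training, $t$ (and $k$) would be annealed so that the forward quantizer approaches the sign function by the end of the schedule.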
**Sota binarization techniques:**
(1) In Appendix E, we compare with another Sota binarization work on vision transformers called BiViT, where we observe BNN++ outperforms BiViT under the same experimental setting.
(2) We add additional results on comparisons with other SOTA BNN methods, including Bi-Real Net, Rotated BNN, IR-Net (EDE), and ReActNet. As stated in the previous paragraph, we find that using our PC++ framework we can improve existing methods with better F-B quantizer design. Furthermore, we find that BNN++ still outperforms the other algorithms across all tasks.
(3) In this paper we mainly study the design of forward-backward quantizers. To verify our theoretical framework on vision transformers, we already applied several empirical tricks to improve the performance of the baseline methods and our methods, including a layer-wise scaling factor, an additional layernorm function, knowledge distillation, and additional pretraining.
(4) Nevertheless, we are aware of several other techniques such as attention matching, channel re-scaling, shifted activation, state-aware gradient update, and so on, which may boost the performance of PC++ algorithms. We identify this as a limitation of our work and we leave it as future work to further improve our established baselines by combining and adapting some of these techniques.
---
Rebuttal Comment 1.1:
Title: Did our rebuttal address your concerns?
Comment: Dear reviewer fayS, as the discussion deadline is approaching, we are wondering if our response has addressed your concerns. We would be happy to answer any of your further questions. Thank you! | Summary: Thanks to the authors for submitting their work to NeurIPS 2023. After a nicely flowing introduction, lines 47-58 lay out the goals of the paper and its contributions in the context of recent advances, cf. Dockhorn et al. [13], PC (ProxConnect). In particular, the paper generalizes PC to forward-backward binarization, as opposed to PC using only forward binarization, labelling this more general approach PC++. As a practical result of the presented theory, the fully binarized (8-bit integer) BNN++ is shown to be competitive against existing approaches on most tasks, including experiments on several datasets and architectures (incl. CNN and ViT). It achieves a 30x reduction in memory and storage with a modest 5-10% accuracy drop compared to full precision training. Binarization results on vision transformers, rare up until now, are especially timely and promising given the recent deployment of transformers in deep learning. The architecture ablation experiments demonstrate the practicality of the hereby developed theory and increase the potential impact of the paper.
Besides Theorem 1 being rather a straightforward extension of the previous work of Dockhorn et al. [13], including the convergence guarantees applied to PC++ in the appendix, I value that the authors used this general result to present a unifying and pragmatic framework for the binarization of NNs in response to the increasing energy demands of training and fine-tuning recent large transformer models. I find the paper a timely and relevant contribution to the research community.
Strengths: + $\textbf{[Accessibility, Approach]}$ The goal of the paper is achieved in very accessible and neat form. Proving Theorem 1 (the main result) introduces a compelling and general framework for designing binarization algorithms for NNs.
+ ProxConnect (3) ++ >>> nice idea of formally adding a transformation T that "cancels out" (formally, in the Theorem 1 sense) in the backward step, so the original theory of PC, Dockhorn et al. [13], applies.
+ $\textbf{[Unifying framework]}$ An additional and very impactful merit of the paper is its unifying approach to the currently quite scattered and hard-to-access research on neural network binarization. Especially the natural inclusion of transformers is very timely, convenient, and impactful, addressed by only a few previous works, e.g. Y. He et al. [19].
+ $\textbf{[Demonstration of practical usefulness]}$ (Architecture) Ablation experiments present practical usefulness of hereby developed theory (Theorem 1) (line: 283-284) and increasing the impact of the paper on further research.
Weaknesses: - Is the extension of the previous work of Dockhorn et al. [13] novel enough? As noted above in the Summary and Strengths sections, formally adding a transformation $T$ that "cancels out" (formally, in the Theorem 1 sense) in the backward step, so that the original theory of PC, Dockhorn et al. [13], applies, is smart yet rather straightforward (as also demonstrated in the proof in the Appendix). Some may see this as rather a direct consequence of the previous work, which should be fully attributed to the respective authors :-). Despite this caveat, I believe the paper's focus on the application of this theory outweighs the limited theoretical novelty and yields a work with potentially high impact on further advances in the field. I leave it to the authors, without impacting my decision, whether or not to recognise the previous work of Dockhorn et al. [13] more and to consider renaming Theorem 1 to "Corollary" to refer directly to the previous work.
- The extent of the experiments in terms of datasets and models is well selected. Yet the number of runs needed to avoid statistical error, especially in Tables $3$ and $4$, is insufficient and should be higher. See the Questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would it be possible to increase the extent of the randomly seeded pool of experiments for Tables 3 and 4? The reported averages over 3 runs, together with the rather diminishing differences between columns, may not be significant. Because the results are used to draw performance claims, as on lines 250-251, "In particular, for end-to-end training, the best performing ProxConnect++ algorithms achieve ≈ 5% accuracy drop", the extended experiments are suggested for the camera-ready version, especially as this concerns ViT, treated rarely before and thus of high impact and interest to the community.
- Figure 2, p. 8, presents well why techniques more involved than post-training binarization (PTB) are needed. Would it be worth using it at the beginning of the paper as a motivating figure? Accompanied, perhaps, by a similar figure of memory and storage requirements.
- What does Table 2 show (accuracy?)? While it is commented on in the text, it is not clear from the table caption. I recommend factoring this in for the sake of higher clarity.
- Line 136: “proved” —> “proven”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Suggestion: For a camera ready version notes on limitation of the method are suggested to be brought back to main body of the paper from the Appendix G.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would first like to sincerely thank Reviewer5jjv for appreciating our contribution and providing valuable suggestions. Below we address your concerns:
**(1) Novelty on theory:**
We agree that PC++ is an extension of PC, and we have followed the reviewer's suggestion to label the convergence result in the appendix as a Corollary and credit it more directly to Dockhorn et al.
**(2) More runs to avoid statistical error**:
We have increased the randomly seeded pool of experiments from 3 to 5 for Table 3 in the main paper. We confirm that the updated table (denoted Table 2 in the global response) reports very similar results to the 3-run version, and our conclusion remains the same. Due to limited time, we could only finish Table 3 within the week, but we will update Table 4 as well in our final draft.
**(3) Figure 2**:
We agree that Figure 2 would serve as a nice motivation for this work. We have modified the original Figure 2 and provided an additional figure on memory footprint (Figure 2 in the global response) to illustrate:
> BAT is necessary for our context as it performs much better than PTB and reduces the same memory footprint during inference.
(4) Yes, Table 2 shows accuracy. We will add the caption to our final draft.
(5) We have changed "proved" to "proven" on line 136.
(6) Thank you for the suggestion. We will move the limitations back to the main paper in the final draft.
---
Rebuttal Comment 1.1:
Comment: Thanks to authors for response and adjustments. I believe they will be appreciated by readers. I have no further comments. Thank you.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Dear Reviewer 5jjv, we want to thank you again for your positive review and great suggestions that helped improve our paper. We will make sure that these adjustments are incorporated into our final draft! | Summary: This paper proposes a new framework for training neural networks with binary weights, which generalizes ProxConnect and takes it as a special case. In the new framework, forward and backward quantizers are defined. A consistency result of the two quanziters is derived (Theorem 1). Extensive experiments are conducted to evaluate the usefulness of the new framework.
Strengths: 1. The paper is very well-written and easy to the follow. I love the clarity of the paper.
2. The theoretical framework is sensible and the analysis is rigorous. I did not find a technical flaw.
3. The empirical evaluation is sufficient. The code is provided for reproduction.
Weaknesses: 1. The proposed framework seems a direct generalization of ProxConnect. Thus, its novelty is somewhat limited.
2. It is not straightforward for practitioners how to use the framework to design new proximal quantizers. Please provide practical guidelines and tips.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first thank Reviewer WcKF for the positive review and great questions, which we address below:
**Novelty on theory:**
- We agree with Reviewer WcKF (and also Reviewer 5jjv) that PC++ is an extension of PC, and we are more than happy to credit the theoretical convergence property of PC++ to Dockhorn et al.
- As pointed out by Reviewer 5jjv, our paper also contributes to designing the unifying framework of forward-backward quantizers, deriving a sufficient and necessary condition, and demonstrating the practical usefulness of our framework.
**Design new proximal quantizers:**
(1) Prior to our work, existing implementations mostly designed the backward quantizer in an ad hoc fashion (based on graphic approximations of the sign function), while Dockhorn et al. showed that any (continuous) monotonic function can be used as a backward quantizer. Our Theorem 1 allows practitioners to easily design compatible forward and backward quantizers.
(2) We will add the following corollary to facilitate the design of F-B quantizers:
> **If the forward quantizer is a continuously differentiable function (with bounded support), then one can simply choose the backward quantizer as the derivative of the forward quantizer.** This follows from Theorem 1 since $\mathsfit{P}(w) \equiv w$ is clearly proximal. Note that the BNN example does not follow from this Corollary (but still follows from the more general Theorem 1).
(3) Aside from the BNN++ example, we give more examples here on how to derive new F-B quantizers from existing implementations (visualization for existing algorithms and new ones are presented in Figure 1 of the global response):
- Bi-Real/R-BNN: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla F(\mathbf{w})$, where $F(\mathbf{w})$ is a piecewise polynomial function. We simply choose $\mathsf{F} = F(\mathbf{w})$ and arrive at our legitimate variant Poly+. Note that we gradually increase the coefficient of $F(\mathbf{w})$ so that full binarization is ensured at the end of the training phase.
- EDE in IR-Net: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla g(\mathbf{w})=kt(1-\text{tanh}^2(t\mathbf{w}))$, where $k$ and $t$ are control variables varying during the training process, such that $g(\mathbf{w})\approx \text{sign}$ at the end of training. Again, we choose $\mathsf{F} =g(\mathbf{w})$ and arrive at our new legitimate variant EDE+.
Empirically, we find both variants outperform their original form in Table 1 of the global response.
(4) With numerous examples, we have demonstrated that PC++ helps us understand and improve existing gradient approximation methods. We find that using PC++ to design new proximal quantizers based on existing ones is straightforward according to the corollary above. Furthermore, given any other forward quantizers (e.g., higher-order approximation of $\text{sign}$), we can easily derive their corresponding backward quantizers.
---
Rebuttal Comment 1.1:
Title: Did our rebuttal address your concerns?
Comment: Dear reviewer WcKF, as the discussion deadline is approaching, we are wondering if our response has addressed your concerns. We would be happy to answer any of your further questions. Thank you! | Summary: This paper studied binary neural networks which extend the existing theory of ProxConnect(PC) to ProxConnect++ and explored the fully binarized scenario, where the dot-product accumulators are also quantized to 8-bit integers. The authors also proposed BNN++ with non-linear forward and backward approximation to the sign function. Authors did experiments on CIFAR10 and ImageNet datasets.
Strengths: Binarizing weights and activations in neural networks is a challenging problem worth studying. This paper proposed an in-depth study of forward and backward functions for binary neural networks and proposed a new method that performs outstanding accuracy on CIFAR10 and ImageNet datasets.
Weaknesses: The authors only compared their method with several BNN methods, but many classic and famous BNN works are not even mentioned, such as XNOR-Net. And some papers studying backward approximation are missing; e.g., the second-order approximation to the sign function proposed in Bi-Real Net should be compared in Fig. 1. I would suggest the authors compare with, or at least mention, these binary neural network works to make the paper more comprehensive.
To name a few:
[1] Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, and Rongrong Ji. Recu: Reviving the dead weights in binary neural networks. (CVPR)
[2] Zhijun Tu, Xinghao Chen, Pengju Ren, and Yunhe Wang. Adabin: Improving binary neural networks with adaptive binary sets. (ECCV)
[3] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. (ECCV)
[4] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. (ECCV)
[5] Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, Fengwei Yu, and Jingkuan Song. Forward and backward information retention for accurate binary neural networks. (CVPR)
[6] Brais Martinez, Jing Yang, Adrian Bulat, and Georgios Tzimiropoulos. Training binary neural networks with real-to-binary convolutions. (ICLR)
[7] Zechun Liu, Zhiqiang Shen, Marios Savvides, and Kwang-Ting Cheng. Reactnet: Towards precise binary neural network with generalized activation functions. (ECCV)
[8] Chunlei Liu, Peng Chen, Bohan Zhuang, Chunhua Shen, Baochang Zhang, and Wenrui Ding. Sa-bnn: State-aware binary neural network. (AAAI)
[9] Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, and Chia-Wen Lin. Rotated binary neural network. (NeurIPS)
[10] Zechun Liu, Zhiqiang Shen, Shichao Li, Koen Helwegen, Dong Huang, and Kwang-Ting Cheng. How do adam and training strategies help bnns optimization. (ICML)
[11] Hyungjun Kim, Jihoon Park, Changhun Lee, and Jae-Joon Kim. Improving accuracy of binary neural networks using unbalanced activation distribution. (CVPR)
[12] Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. (NeurIPS)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors use the non-linear approximation of the sign function in the forward pass, will that yield other real-valued outputs besides the binary values?
Line 170 is confusing: “BNN++ is more desirable than BNN+ empirically.” Do the authors mean BNN++ is better than PC++?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Additional References**:
We would first like to thank Reviewer YJmg for providing the additional references, especially Bi-Real Net, R-BNN, IR-Net, and ReActNet, which are closely related to our work and expand our PC++ family. We agree that these are important papers that deserve proper discussion in our work. Here we extend our discussion, which we will add to our paper (we follow the Reviewer's citation order):
> Existing works propose different methods for improving the performance of BNN and they can be roughly categorized into three classes:
>
> (1) different *gradient approximation* that can be either improved or justified with PC++:
>
> Some existing implementations [3][5][9] set the forward quantizer as $\text{sign}$ and design the backward quantizer in an ad hoc fashion (based on graphic approximations of the sign function), e.g., [3] applies a piece-wise polynomial approximation; [9] improves [3] with a dynamic polynomial approximation; [5] designs a dynamic error decay estimator based on the $\text{tanh}$ function. These methods in their original forms do not belong to PC++, but they could be brought into the PC++ family by designing new forward quantizers using our Theorem 1:
>
> - Bi-Real/R-BNN: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla F(\mathbf{w})$, where $F(\mathbf{w})$ is a piecewise polynomial function. We simply choose $\mathsf{F} =F(\mathbf{w})$ and arrive at our legitimate variant **Poly+**. Note that we gradually increase the coefficient of $F(\mathbf{w})$ so that we ensure full binarization at the end of the training phase.
> - EDE in IR-Net: $\mathsf{F}(\mathbf{w})=\text{sign}$, $\mathsf{B}(\mathbf{w})=\nabla g(\mathbf{w})=kt(1-\text{tanh}^2(t\mathbf{w}))$, where $k$ and $t$ are control variables varying during the training process, such that $g(\mathbf{w})\approx \text{sign}$ at the end of training. Again, we choose $\mathsf{F} =g(\mathbf{w})$ and arrive at our new legitimate variant **EDE+**.
>
> Other implementations [2][7] apply shift transformation on both forward and backward quantizers, which belong to the PC++ family.
>
> We visualize the forward-backward quantizers of [3][5][7][9], and our new Poly+ and EDE+, in Figure 1 of the global response. Moreover, we perform experiments on vision transformers to examine the performance of these 6 additional quantizers (Table 1), and we observe that:
> - Our new proposed Poly+ and EDE+ always outperform the original algorithms and further confirm that our PC++ framework merits theoretical and empirical justifications.
>- BNN++ still outperforms other algorithms on all tasks.
>
>(2) architecture design that can be further integrated into our PC++ as future work: [1] designs a rectified clamp unit to address "dead weights"; [4] applies the absolute mean of weights and activations; [6] uses real-to-binary attention matching and data-driven channel re-scaling; [11] proposes a shifted activation function.
>
> (3) optimization refinement, which again could be integrated into our framework as future work: [8] proposes to rescale the backward gradient on binary activations in order to stabilize BNN training; [10] provides a weight decay scheme; and [12] proposes Bop, a new optimizer for BNNs.
>
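To make the quantizer families above concrete, here is a minimal NumPy sketch of the forward surrogates and their gradients (the function names are illustrative and the growing-coefficient schedules of Poly+/EDE+ are omitted; this is not the paper's code):

```python
import numpy as np

def approx_sign(w):
    """Piecewise-polynomial surrogate of sign used in Bi-Real Net [3]."""
    return np.where(w < -1, -1.0,
           np.where(w < 0, 2 * w + w**2,
           np.where(w < 1, 2 * w - w**2, 1.0)))

def approx_sign_grad(w):
    """Backward quantizer B(w): the exact gradient of the polynomial surrogate."""
    return np.where(np.abs(w) >= 1, 0.0,
           np.where(w < 0, 2 + 2 * w, 2 - 2 * w))

def ede(w, k=1.0, t=1.0):
    """tanh-based surrogate of EDE in IR-Net [5]; k, t vary during training
    so that ede(w) approaches sign(w) as t grows."""
    return k * np.tanh(t * w)

def ede_grad(w, k=1.0, t=1.0):
    return k * t * (1.0 - np.tanh(t * w) ** 2)
```

Poly+ and EDE+ replace the forward `sign` with these surrogates themselves, so that the backward pass is the exact gradient of the forward pass (as Theorem 1 requires), while annealing toward full binarization.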
**Questions**:
(1) **Full binarization:** In our experiments (lines 195-196), we linearly increase $\mu$ in BNN++ to achieve full binarization in the end. To avoid real-valued outputs, we also performed a sanity check to confirm the final weights are 100% binary.
(2) **Regarding the relationship between BNN++ and PC++**:
Here we clarify that BNN++ is an algorithm that belongs to the broader PC++ family:
- Firstly, we want to emphasize that PC++ is a family of algorithms where the forward/backward quantizers satisfy the condition in Theorem 1;
- Secondly, as we discussed in Example 2, BNN+ cannot be justified under the framework of PC++, thus does not belong to PC++.
- Thirdly, our new variant BNN++ is designed to be a legitimate PC++ algorithm and is empirically shown to perform better than BNN+.
- Similarly to BNN+, references [3][5][9] provided by the Reviewer use the same $\text{sign}$ forward quantizer with different backward quantizers, and thus cannot be justified by PC++. Using our Theorem 1, we design new algorithms Poly+ and EDE+ that belong to the PC++ family. These new variants are able to explore the loss landscape with gradients evaluated at more fine-grained weights, especially in the initial phase of training (as opposed to evaluating the gradient at fully quantized weights).
---
Rebuttal Comment 1.1:
Title: Did our rebuttal address your concerns?
Comment: Dear reviewer YJmg, as the discussion deadline is approaching, we are wondering if our response has addressed your concerns. We would be happy to answer any of your further questions. Thank you! | Rebuttal 1:
Rebuttal: We would like to thank all reviewers again for their extremely informative reviews that helped us improve the paper. Here we want to provide additional figures and tables (in the new one-page PDF file) that we will add to our final draft:
(1) **Figure 1:** According to Reviewer YJmg's suggestions, we have carefully reviewed more existing methods and summarized additional forward-backward quantizers here, which can be either *improved* or *justified* with our PC++ framework. Specifically, we find that:
- Bi-Real Net, R-BNN, and Error Decay Estimator (EDE) in their original forms do not belong to PC++, but they can be improved by modifying the forward quantizers (we will show the improvement in the next table). Thus, we propose the modified Poly+ and EDE+.
- ReActNet is a special case of PC++.
(2) **Table 1:** Following Figure 1, we perform experiments on vision transformers to examine the performance of the additional quantizers (and their modified variants), and we observe that:
- Our new proposed Poly+ and EDE+ always outperform the original algorithms and further confirm that our PC++ framework merits theoretical and empirical justifications.
- BNN++ still outperforms other algorithms on all tasks.
(3) **Figure 2:** Following the suggestions by Reviewer 5jjv, we modify the original Figure 2 and provide an additional figure on memory footprint to serve as a motivation for this paper:
> BAT is necessary in our context, as it performs much better than PTB while achieving the same memory-footprint reduction during inference.
(4) **Table 2:** As suggested by Reviewer 5jjv, we increase the randomly seeded pool of experiments from 3 to 5 for Table 3 in the main paper. We confirm that the updated table reports results very similar to the 3-run version, and our conclusion remains the same.
Pdf: /pdf/1fe97efa134302d7a83b6967d2689ac46d35809d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs | Accept (poster) | Summary: This paper studies the problem of policy search for constrained MDPs. The authors devise two Lagrangian-based methods, named the regularized policy gradient primal-dual (RPG-PD) method and the optimistic policy gradient primal-dual (OPG-PD) method. Their methods are single-time-scale and thus insensitive to hyperparameter changes. They also show that the proposed methods have the last-iterate convergence property and give convergence rates in the theoretical analysis.
Strengths: The paper is well written and easy to follow. The related works section is particularly commendable, providing a comprehensive overview of relevant works in this field. The authors are the first to give non-asymptotic rates for last-iterate convergence of single-timescale primal-dual methods. The theoretical analysis is technically sound. From my viewpoint, this paper can be a nice addition to the literature on constrained policy optimization.
Weaknesses: * The convergence of the proposed methods relies on the strong duality property of CMDPs, which may not be true for general parametrized policy classes (like NNs).
* The numerical experiments are only conducted on tabular cases. And I suggest the authors compare their results to more baselines, for example, primal methods like CRPO in [102].
* The paper seems a little bit too long to appear in a conference proceeding. I suggest the author remove some redundant explanations or examples to shorten the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The paper only considered constrained MDP with a single constraint as a simplified case. Can the results be generalized to the case of multiple constraints?
* Could the homotopic strategy (gradually shrink the regularization term, see [Li et al. 2022]) be applied to the RPG-PD method? Would that yield better theoretical guarantees?
[Li et al. 2022] Homotopic Policy Mirror Descent: Policy Convergence, Implicit Regularization, and Improved Sample Complexity
Yan Li, Guanghui Lan, Tuo Zhao arXiv:2201.09457
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper, and the valuable feedback. Please find our specific remarks as follows.
---
## Weaknesses
> - *The convergence of the proposed methods relies on the strong duality property of CMDPs, which may not be true for general parametrized policy classes (like NNs).*
**Response**: This is an important point. First, strong duality is a `structural property` of constrained MDPs when no function approximation is used, i.e., it can be `proved` under the strict feasibility/Slater condition; see e.g., [R1, R2]. Slater condition has been commonly used in the literature to develop convergence theory for constrained MDPs, see e.g., [R2, R3] and many follow-up works.
Second, in the function approximation setting we do not require strong duality in the `parametrized policy class`, and the function approximation error has captured the possible duality error caused by the inexpressiveness of the function class; see also [R3]. Interestingly, we would like to point out that for our regularized method: RPG-PD (see Equation (8)), the theory (see Theorem 4 and Corollary 5) only assumes the saddle-point property/strong duality for the regularized Lagrangian, without requiring the strong duality for the original un-regularized Lagrangian. Hence, even if the original strong duality fails, when the saddle point of the regularized Lagrangian is close to the solution of the original constrained MDP problem, our regularized method is still applicable.
Thanks for the comment, and we will remark this point in the final version.
[R1] *Constrained Markov Decision Processes. Routledge. 2021.*
[R2] *Safe policies for reinforcement learning via primal-dual methods. TAC 2022.*
[R3] *Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs. arXiv:2206.02346. 2022.*
> - *The numerical experiments are only conducted on tabular cases. And I suggest the authors compare their results to more baselines, for example, primal methods like CRPO in [102].*
**Response**: We have compared RPG-PD and OPG-PD with a primal method CRPO [R1] in Figure 4 in Appendix E.1. To control constraint satisfaction, CRPO switches between the gradient directions of reward/utility value functions depending on the amount of constraint violation. As a result, Figure 4 shows that CRPO reaches a slightly lower reward value than OPG-PD's, and the constraint satisfaction is relatively conservative and has mild oscillation behavior. Hence, we conjecture that the policy last-iterate convergence holds for CRPO, up to some error caused by the switching overhead.
Thanks for the suggestion, and we will check other baselines and add more experiments in the final version.
> - *The paper seems a little bit too long to appear in a conference proceeding. I suggest the author remove some redundant explanations or examples to shorten the paper.*
**Response**: Thanks for the suggestion, and we will shorten the final version according to your suggestion.
---
## Questions
> - *The paper only considered constrained MDP with a single constraint as a simplified case. Can the results be generalized to the case of multiple constraints?*
**Response**: We can generalize the constrained saddle-point formulation (see Equation (3)) for constrained MDPs with a finite number of constraints by introducing a vector form of Lagrangian multiplier. Thus, the dual updates of RPG-PD and OPG-PD are in vector form and the last-iterate convergence analysis carries over to this general case. We notice that we are not the first using such a simplification, as it has been used in several other studies, e.g., [R1, R2, R3].
Thanks for the question, and we will remark this point in the final version.
[R1] *Provably efficient model-free constrained rl with linear function approximation. NeurIPS 2022.*
[R2] *DOPE: Doubly optimistic and pessimistic exploration for safe reinforcement learning. NeurIPS 2022.*
[R3] *Natural policy gradient primal-dual method for constrained markov decision processes. NeurIPS 2020.*
> - *Could the homotopic strategy (gradually shrink the regularization term, see [Li et al. 2022]) be applied to the RPG-PD method? Would that yield better theoretical guarantees?*
**Response**: Thank you for providing this important reference [R1]. We believe that the homotopic strategy can be adopted in our regularized method, and hopefully with better convergence guarantee. We notice two challenges in applying the homotopic strategy: (i) an optimal policy is not necessarily induced by some deterministic optimal policies that usually do not exist for a constrained MDP; (ii) since instability of saddle-point gradient dynamics often results from large stepsize, monotonically increasing stepsizes could make primal-dual algorithms divergent. Hence, additional effort is needed to prove the homotopic strategy for solving constrained MDPs.
Thanks for the suggestion, and we will remark this reference, and include this important future direction in the final version.
[R1] *Homotopic policy mirror descent: Policy convergence, implicit regularization, and improved sample complexity. arXiv:2201.09457. 2022.*
---
We would like to thank the reviewer again for the helpful comments. Please feel free to let us know if there are any other concerns we can address that could improve your assessment of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, I choose to retain my positive evaluation of the manuscript. | Summary: This paper studied the policy gradient primal-dual approach for constraint MDP with last iterate convergence guarantee. A regularized policy gradient primal-dual method is first proposed where regularization on the policy and dual variables are introduced to add curvature to the minimax problem and with appropriately chosen regularization factor, sublinear last-iterate convergence is obtained. An optimistic poicy gradient primal-dual method is then proposed with linear convergence of the squared distance between the last iterate (policy, dual variable) pair and the saddle region under some additional assumptions. The key analysis technique is to bridge the policy update and update in occupancy measure, and CMDP is convex w.r.t. the occupancy measure.
Strengths: The last-iterate convergence of algorithms for CMDP is meaningful. The proposed methods, with counterparts in minimax optimization, are intuitive and explained very well. Multiple variations of CMDP have been studied, i.e., linear function approximation and zero constraint violation. I went over a majority of the proofs and have some questions; I will increase my score after my questions are addressed. The majority of the proofs are sound to me.
Weaknesses: I did not find perticular weaknesses of the current paper. The ideas are explained clearly and easy to follow. The limitations are also discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The first inequality in Page 29 between Line 1045 - Line 1046: could you justify the usage of Lemma 27, since \pi^*_\tau may not be the optimal w.r.t. to dual variable \lambda_t?
It seems that there is some order mismatch between the terms (i), (ii) in Eq. (20), where (i) is O(\sqrt{epsilon}) and (ii) is O(epsilon). Is it possible to further reduce the sample complexity by carefully selecting \tau?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper, and the valuable feedback. Please find our specific remarks as follows.
---
## Questions
> *The first inequality in Page 29 between Line 1045 - Line 1046: could you justify the usage of Lemma 27, since \pi^*_\tau may not be the optimal w.r.t. to dual variable \lambda_t?*
**Response**: Thank you for raising this important question. We apologize for a typo in Lemma 27: $x'$ on the left-hand side of the inequality should be $x$. We repeat Lemma 27 in a more convenient argmax-form here: if $x' \in argmax_{\bar{x}\in X} \langle \bar{x}, g \rangle - \frac{1}{\eta} \text{KL}(\bar{x},x)$, where $X$ is a probability simplex $\Delta(A)$ and $g$ is a bounded vector in $\mathbb{R}^{|A|}$, then for any $x^\star\in X$, $\langle x^\star - x, g\rangle \leq \frac{1}{\eta} (\text{KL}(x^\star,x)-\text{KL}(x^\star,x')) + \eta \sum_a x_a (g_a)^2$. Since the explicit form of $x'$ is the standard update of Hedge: $x_a'\propto x_a {\rm e}^{\eta g_a}$, its proof follows from the proof of Theorem 2 in the note [R1] by flipping the sign of the loss.
Application of Lemma 27 to the primal update of RPG-PD per state (see Equation (6a)) can be verified by taking $x' = \pi_{t+1}(\cdot \vert s)$, $\bar{x} = \pi(\cdot \vert s)$, $g = Q_{r+\lambda_t g +\tau \psi_t}^{\pi_t}(s,\cdot)$, $x = \pi_t(\cdot \vert s)$, and $x^\star = \pi_{\tau}^\star(\cdot \vert s)$. We notice that the left-hand side of the inequality in Lemma 27 does not require the optimality of $x^\star$. Then, the boundedness of the value function $g$ leads to the inequality between line 1045 and line 1046.
Thanks for the question, and we will remove this typo and explicitize the application of Lemma 27 in the final version.
[R1] Lecture 1, Introduction to Online Optimization/Learning, [Link](https://haipeng-luo.net/courses/CSCI659/2022_fall/lectures/lecture1.pdf)
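To illustrate the restated lemma, a small numerical sketch (names are illustrative; `hedge_step` is the closed-form Hedge update, and $\eta$ is kept small so that $\eta g_a \leq 1$, the regime in which the local-norm term is valid):

```python
import numpy as np

rng = np.random.default_rng(0)

def hedge_step(x, g, eta):
    """Closed-form maximizer of <x_bar, g> - (1/eta) KL(x_bar, x) over the simplex."""
    w = x * np.exp(eta * g)
    return w / w.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

eta = 0.1
for _ in range(100):
    x  = rng.dirichlet(np.ones(5))      # current iterate
    g  = rng.uniform(-1.0, 1.0, 5)      # bounded payoff vector
    xs = rng.dirichlet(np.ones(5))      # arbitrary comparator x*
    xp = hedge_step(x, g, eta)
    lhs = float((xs - x) @ g)
    rhs = (kl(xs, x) - kl(xs, xp)) / eta + eta * float(np.sum(x * g**2))
    assert lhs <= rhs + 1e-9            # the inequality of Lemma 27
```

Note that, as stated in the response, the comparator `xs` need not be optimal for the inequality to hold.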
> *It seems that there is some order mismatch of the terms (i), (ii) in Eq. (20), where (i) is O(\sqrt{epsilon}) and (ii) is O(epsilon). It is possible to further reduce the sample complexity by carefully selecting \tau?*
**Response**: Given a desired accuracy $\epsilon\in(0,1)$, adding two big O notations $O(\epsilon)$ and $O(\sqrt{\epsilon})$ leads to $O(\epsilon) + O(\sqrt{\epsilon}) \leq O(\sqrt{\epsilon})$ when $\epsilon\to 0$, which summarizes the way we combine the terms (i), (ii) in Equation (20).
We note that the regularization parameter $\tau$ has been carefully selected. The KL distance between the policy iterate and the optimal regularized policy is determined by a sum of ${\rm e}^{-\eta \tau t}$ and $\eta/\tau$, where $\eta$ is the stepsize and $\tau$ is the regularization parameter. To ensure $\epsilon$-optimality gap and $\epsilon$-constraint violation, the KL distance must be $O(\epsilon^2)$ and the regularization parameter needs to be $\Theta(\epsilon^2)$ (see Proof of Corollary 3). Hence, the stepsize has to be $\Theta(\epsilon^4)$ and the iteration complexity thus becomes $\Omega (1/\epsilon^6)$.
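In symbols, the parameter bookkeeping sketched above (consistent with the constants quoted in this response) reads:

```latex
\underbrace{\mathrm{KL}\big(\pi_\tau^\star,\pi_t\big)}_{\text{require } O(\epsilon^2)}
\;\lesssim\; e^{-\eta\tau t} + \frac{\eta}{\tau}
\quad\Longrightarrow\quad
\tau = \Theta(\epsilon^2),\;\;
\eta = \Theta(\tau\epsilon^2) = \Theta(\epsilon^4),\;\;
t \;\gtrsim\; \frac{\log(1/\epsilon^2)}{\eta\tau} \;=\; \tilde{\Omega}\!\left(\epsilon^{-6}\right).
```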
---
We would like to thank the reviewer again for the helpful comments. Please feel free to let us know if there are any other concerns we can address that could improve your assessment of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks again for providing very positive comments and constructive questions. It would be very much appreciated if you could review our response again. If you have further questions, please feel free to notify us. | Summary: This work shows the first non-asymptotic and policy last-iterate convergence for single-time-scale algorithms in the CMDP literature. In particular, it provides nearly dimension-free sublinear last-iterate policy convergence, sublinear last-iterate policy convergence with function approximation, and problem-dependent linear last-iterate policy convergence.
Strengths: 1. This work strengthened the prior works which only guarantee asymptotic last-iterate convergence or value-average or policy-mixture non-asymptotic convergence.
2. It sets up a new framework for analyzing policy-based primal-dual algorithms via the distance of primal-dual iterates to a saddle point.
Weaknesses: 1. It is better to give a more formal definition of the "single-time-scale" and "two-time-scale" at the beginning of the paper to help readers better understand the introduction.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Compared with the references [95, 24], if there is any other technical difficulty apart from replacing the convex inequality conditions in constrained convex optimization with the specialty property of CMDP, such as the performance difference lemma?
2. How is the last-iterate converge rate established in this paper compared with the last-iterate converge rates using two-time-scale methods? Are the convergence rates in this paper better? If not, why not, and if it is possible to reduce the gap?
3. For problem (4), why do you directly show that the strong duality holds? Lemma 1 can not be directly applied to the problem (4).
4. In Theorem 6, how strong is the assumption that optimal state visitation distribution is unique given that there may exist multiple optimal policies for CMDP.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: See questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper, and the valuable feedback. Please find our specific remarks as follows.
---
## Weaknesses
>1. *... a more formal definition of the "single-time-scale" and "two-time-scale" ... help readers better understand the introduction.*
**Response**: This is an important point. `Single-time-scale` refers to classical gradient-based methods that update multiple iterates concurrently, using constant stepsizes [R1, R2]. `Two-time-scale` comes from stochastic approximation [R3]: stepsizes for different iterates are relatively large/small (or the iterates change relatively fast/slow). We note that methods with two nested gradient loops are of the `two-time-scale` type, since the iterate that waits for the inner loop evolves on a slower timescale. We will make this notion explicit in the final version.
[R1] *Studies in linear and non-linear programming. Stanford University Press, 1958.*
[R2] *A modification of the Arrow-Hurwicz method for search of saddle points. USSR. 1980.*
[R3] *Stochastic approximation: a dynamical systems viewpoint. Springer. 2009.*
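As a toy illustration of the single-time-scale regime (and of why adding curvature, as RPG-PD's regularization does, stabilizes it), consider gradient descent-ascent on the bilinear saddle $f(x,y)=xy$; this is a sketch, not the paper's algorithm:

```python
def gda(tau=0.0, eta=0.05, steps=2000):
    """Single-time-scale GDA on f(x, y) = x*y + (tau/2) * (x**2 - y**2).

    Both iterates use the same constant stepsize eta -- the defining feature
    of a single-time-scale method; a two-time-scale scheme would instead use
    eta_x << eta_y, or nest a full inner loop between outer updates.
    """
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = y + tau * x                      # df/dx
        gy = x - tau * y                      # df/dy
        x, y = x - eta * gx, y + eta * gy     # simultaneous updates
    return abs(x) + abs(y)

# Plain GDA (tau = 0) spirals away from the saddle (0, 0), while a small
# regularizer restores last-iterate convergence.
print(gda(tau=0.0), gda(tau=0.2))
```

The first value grows with `steps` while the second shrinks toward zero, mirroring the oscillation-vs-convergence contrast discussed in the reviews.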
---
## Questions
>1. *Compared with the references [95, 24] ... any other technical difficulty apart from replacing the convex inequality conditions in constrained convex optimization with the specialty property of CMDP, such as the performance difference lemma?*
**Response**: Besides handling non-convexity, our first technical contribution, we see other two considerable technical difficulties we have addressed. First, our constrained saddle-point problem is not symmetric: one takes a stochastic policy that affects transition dynamics and the other selects an action in a continuous interval that changes payoff. Our last-iterate analysis works for the asymmetric saddle points, which departs from the symmetric setting [R1, R2]. Second, a saddle-point policy is not uniformly max-min optimal, i.e., being optimal across all states, since an optimal policy depends on the initial state distribution in a constrained MDP. Our duality analysis goes beyond the per-state analysis [R3, R4], which could be of independent interest for analyzing constrained Markov games.
Thanks for the question, and we will emphasize these points in the final version.
[R1] *Tight last-iterate convergence of the extragradient and the optimistic gradient descent-ascent algorithm for constrained monotone variational inequalities. arXiv:2204.09228. 2022.*
[R2] *On linear convergence of iterative methods for the variational inequality problem. JCAM. 1995.*
[R3] *Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive Markov games. COLT 2021.*
[R4] *Can We Find Nash Equilibria at a Linear Rate in Markov Games? ICLR 2023.*
>2. *... compared with the last-iterate converge rates using two-time-scale methods? ... better? ... reduce the gap?*
**Response**: The last-iterate rate of RPG-PD is sublinear in time, which is worse than the linear rates of two-time-scale methods [R1, R2]. We notice that RPG-PD converges to a regularized saddle point at a linear rate, up to a neighborhood that is dictated by the stepsize and regularization parameters. The slow rate results from proper parameters that set the neighborhood to be a desired accuracy. In our experiment, RPG-PD converges to the optimal saddle point sublinearly; see Figure 9 in Appendix E.3. So, we conjecture that it is impossible to improve the order of rate without new algorithmic design.
The last-iterate rate of OPG-PD is linear in time, which matches the linear rate of the two-time-scale method [R1] and improves the linear rate in [R2]. We note that our last-iterate convergence captures the stability of primal-dual iterates, while the last policy iterates in [R1, R2] come from NPG subroutines. We also notice that problem-dependent constants occur in all three linear rates, which we leave as future work of uncovering the optimal rate.
Please see more rate comparison in Table 1 in Appendix A. Thanks for the questions, and we will emphasize these points in the final version.
[R1] *A dual approach to constrained markov decision processes with entropy regularization. AISTATS 2022.*
[R2] *Algorithm for constrained Markov decision process with linear convergence. AISTATS 2023.*
>3. *... problem (4), why do you directly show that the strong duality holds? ...*
**Response**: Because the strong duality in Lemma 1 is for the un-regularized problem (see Equation (1)), it is not relevant to the regularized constrained saddle-point problem (see Equation (4)). So, it is crucial to check well-definedness of this regularized saddle-point problem, by showing the existence and uniqueness of a saddle point; see Appendix C.1 for proof.
>4. *... Theorem 6, how strong is the assumption that optimal state visitation distribution is unique ...*
**Response**: This is an important point. To measure the proximity of primal-dual iterates to a saddle point, we assume the uniqueness of the optimal state visitation distribution to define a distance metric supported by this distribution. Compared with the unique optimal policy, it is a mild assumption, since different optimal policies can share the same state visitation. This can be viewed from the linear program formulation of MDPs: occupancy measures $\{q^{\pi_k^\star}(s,a), k =1,2,\cdots\}$ associated with optimal policies $\{\pi_k^\star, k =1,2,\cdots\}$ share the same state visitation $q(s) = \sum_a q^{\pi_k^\star}(s,a)$ for all $k$. We notice that when we restrict the set of optimal policies $\Pi^\star$ with this property (any such optimal policy induces the same state visitation), our theory (see Theorem 6 and Corollary 7) still holds (or algorithms indeed converge to a `set` of optimal policies that share the same state visitation). Therefore, we believe that this uniqueness assumption is arguably mild. We leave further relaxing this assumption as our immediate future work.
---
Rebuttal Comment 1.1:
Comment: Thanks again for providing valuable comments and insightful questions. It would be very much appreciated if you could review our response again. If you have further questions, please feel free to notify us. | Summary: Two single-timescale algorithms (RPG-PD and OPG-PD) are proposed, and their finite-time convergence rates have been derived. The iteration complexity guarantees for RPG-PD are (nearly) dimension-free but are sublinear. Guarantees for OPG-PD are linear but depend on problem-dependent quantities.
Strengths: 1. Last-iterate convergence and constraint satisfaction are useful and generally more challenging. Unlike average and mixture performance measures, these do not hide possible oscillations in objective/constraint functions of immediate policy iterates, which can be undesirable for constrained dynamic systems.
2. Empirical studies showing improved performance over existing algorithms. Most notably - RPG-PD and OPG-PD suppress oscillation in utility values while achieving optimal reward values.
3. Very thorough literature survey.
Weaknesses: See Questions.
Minor typos
1. At several places, the references are to the results in the appendix, e.g., Theorem 18 below the statement of Thm. 4.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. I could not find any explanations/insights on why the previous approaches lead to oscillations while the newly proposed approaches do not. In particular, why does OPG-PD more or less have no oscillations? Also, in Figure 8, the oscillations are more pronounced for smaller parameter choices. Why does that happen?
2. It is unclear how the proposed ideas can be implemented in practice: they involve (a) computing exact expectations or Monte-carlo estimates from a priori unknown distributions and (b) optimizing certain objective functions over the set of all policy distributions, e.g., 6a, 9a, or 27a. These seem very challenging. Can the authors comment on it?
3. While the authors have mentioned the online version as part of future work, do they expect this version to also have the benefits of damped oscillations?
4. In practice, one often may require that the constraints not be violated with high probability. Would the proposed approach extend to cover that scenario as well?
5. What is `optimistic' about the OPG-PD method? Some insights would be helpful for the reader.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations of the proposed techniques have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper, and the valuable feedback. Please find our specific remarks as follows.
---
## Weaknesses
>1. *Minor typos: At several places, the references are to the results in the appendix, e.g., Theorem 18 below the statement of Thm. 4.*
**Response**: Below the statement of Theorem 4, `Theorem 18` should be `Theorem 4`. We will correct this typo and check all cross-references in the final version.
---
## Questions
>1. *... why does OPG-PD more or less have no oscillations? ... in Figure 8, the oscillations are more pronounced for smaller parameter choices. Why does that happen?*
**Response**: A key insight into the oscillation of Lagrangian-based primal-dual algorithms is from the game-theoretic view. Primal/dual iterates in a primal-dual method are two players: max-/min-players are primal/dual updates, respectively. Gradient-based learning dynamics are not necessarily contractive at the stationary points. The simplest case is the cyclic behavior, e.g., iterates cycle in a bilinear game: $\max_x \min_y x^\top y$ (see Figure 2 of [R1]) in which $(0,0)$ is not a contracting point of usual gradient descent-ascent methods.
A key reason why OPG-PD (see Equation (9)) exhibits almost no oscillation is that its gradient-based learning dynamics is `contracting` to the set of stationary points. In particular, our theory (see Theorem 6) shows that the distance from the primal-dual iterates to the set of optimal ones decreases to zero linearly. Due to this fast last-iterate convergence, any oscillation is damped exponentially fast, which is why OPG-PD has `more or less no oscillations'.
Figure 8 evaluates the sensitivity of our RPG-PD (see Equation (6)) using different regularizations $\tau \in \{ 0.1, 0.05, 0.01 \}$ and a fixed stepsize $\eta$. Decreasing $\tau$ from $0.1$ to $0.01$ causes the utility value function in Figure 8 (Right) to oscillate more and more. The reason is that RPG-PD with small regularization behaves like an un-regularized primal-dual method, and oscillation is inherent to such naive primal-dual methods [R2].
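The cycling-versus-contraction behavior described above can be illustrated numerically. The following toy sketch (our own illustration, not code from the paper; the stepsize and iteration count are arbitrary choices) runs plain gradient descent-ascent (GDA) and its optimistic variant on the bilinear game $\max_x \min_y xy$:

```python
import numpy as np

eta, T = 0.1, 500  # illustrative stepsize and horizon

def gda(x, y):
    # Simultaneous gradient ascent in x, descent in y for f(x, y) = x * y.
    for _ in range(T):
        x, y = x + eta * y, y - eta * x
    return x, y

def ogda(x, y):
    # Optimistic GDA: each step uses 2 * (current gradient) - (previous gradient),
    # i.e., a cheap prediction of the next gradient.
    gx_prev, gy_prev = y, x
    for _ in range(T):
        gx, gy = y, x
        x = x + eta * (2 * gx - gx_prev)
        y = y - eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

x1, y1 = gda(1.0, 1.0)
x2, y2 = ogda(1.0, 1.0)
print(np.hypot(x1, y1) > np.hypot(x2, y2))  # True: GDA spirals out, OGDA contracts
```

Plain GDA spirals away from the saddle point $(0,0)$, while the optimistic variant, whose dynamics are contracting, damps the oscillation exponentially, mirroring the behavior of OPG-PD.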
Thanks for the questions, and we will emphasize these points in the final version.
[R1] *ReLOAD: Reinforcement learning with optimistic ascent-descent for last-iterate convergence in constrained MDPs. ICML 2023.*
[R2] *Responsive safety in reinforcement learning by PID Lagrangian methods. ICML 2020.*
>2. *... unclear how the proposed ideas can be implemented in practice ... Can the authors comment on it?*
**Response**: In practice, our primal-dual algorithms, RPG-PD (see Equation (6)) and OPG-PD (see Equation (9)), can be implemented using policy simulators. Executing the primal-dual updates in RPG-PD and OPG-PD requires two things: (i) estimating the state and state-action value functions of the current policy; and (ii) projecting onto a probability simplex. Both can be implemented efficiently. For instance, a version of RPG-PD with linear function approximation is given in Algorithm 1 (see Appendix C.9), where the policy gradient direction is a linear function. Since the current policy comes from the last update, we simulate it to form unbiased estimates of the value functions in Algorithm 2 and Algorithm 3 (see Appendix C.9), and the best linearly approximated state-action value function serves as the policy gradient direction.
The policy gradient steps in Equation (6a), Equation (9a), and Equation (27a) can be evaluated explicitly: Equation (6a) and Equation (27a) perform the classical mirror descent step with KL divergence, which has a closed-form expression [R1], while Equation (9a) is the classical mirror descent step with Euclidean distance, i.e., projected gradient, where the projection onto the probability simplex can be computed with linear complexity [R2].
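As an illustration of item (ii), here is a minimal sketch (our own, not from the paper) of the standard sort-based Euclidean projection onto the probability simplex; this version is $O(n \log n)$, and the linear-complexity variant is the one cited in [R2]:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    # via the classic sort-and-threshold scheme.
    u = np.sort(v)[::-1]                        # sort descending
    css = np.cumsum(u) - 1.0                    # cumulative sums minus target mass
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]  # last index kept positive
    theta = css[rho] / (rho + 1.0)              # shift threshold
    return np.maximum(v - theta, 0.0)

p = project_simplex(np.array([0.5, 1.2, -0.3]))
print(p)  # nonnegative entries summing to 1
```

Points already inside the simplex are returned unchanged, so the projection is safe to apply after every policy gradient step.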
Thanks for the questions, and we will emphasize these points in the final version.
[R1] *Mirror descent. Large-Scale Optimization for Data Science, [Link](https://yuxinchen2020.github.io/ele522_optimization/lectures/mirror_descent.pdf)*
[R2] *Fast projection onto the simplex and the $l_1$ ball. MP 2016.*
>3. *... online version ... do they expect this version to also have the benefits of damped oscillations?*
**Response**: An open question raised by our work is: can we have a policy-based primal-dual algorithm with online exploration and prove its last-iterate convergence? Due to the last-iterate convergence, it would inherit the benefits of damped oscillations. We conjecture that the answer to this question is positive, considering last-iterate results for zero-sum games [R1, R2].
[R1] *Last-Iterate Convergence with Full and Noisy Feedback in Two-Player Zero-Sum Games. AISTATS 2023.*
[R2] *Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games. arXiv:2303.02738. 2023.*
>4. *... constraints not be violated with high probability ... extend to cover that scenario ...*
**Response**: Our constrained MDP imposes an inequality constraint on a value function. Such a constraint can approximate the feasible region of a high-probability constraint, e.g., obstacle avoidance [R1]. Hence, we can apply our algorithms to high-probability constraints. We will remark on this in the final version.
[R1] *Safe policies for reinforcement learning via primal-dual methods. IEEE TAC 2022.*
>5. *What is `optimistic' about the OPG-PD method? ...*
**Response**: The notion `optimistic` comes from optimization, e.g., [R1]. If we view Equation (9b) as the real policy gradient step that gives a policy $\hat\pi_{t+1}$, then Equation (9a) serves as a prediction step that gives an intermediate policy $\pi_t$. Rather than the policy gradient at $\hat\pi_t$, the real step uses the policy gradient at $\pi_t$ obtained from the prediction. Thus, OPG-PD is `optimistic` about the predicted policy $\pi_t$, as it leads the direction of the policy search. We will remark on this in the final version.
[R1] *Optimization, learning, and games with predictable sequences. NeurIPS 2013.*
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I choose to retain my positive score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Masked Two-channel Decoupling Framework for Incomplete Multi-view Weak Multi-label Learning | Accept (poster) | Summary: The core innovation of the method lies in decoupling the single-channel view-level representation, which is common in deep multi-view learning methods, into a shared representation and a view-proprietary representation with a cross-channel contrastive loss. The authors have conducted sufficient experiments to verify the effectiveness of the proposed method.
Strengths: 1. The definition of the problem in this paper is clear. The topic is novel, and as far as I know, multi-view multi-label classification methods for the complex situation of missing labels and data are underexplored.
2. The interpretation of extracting multi-view features from two perspectives is intuitive and fits with the fundamental assumption of multi-view/multi-modal learning problems that there are commonalities as well as differences between modalities.
3. The experimental results are sufficient. The authors not only provide the results in the missing case, but also provide the results of the proposed model on the complete datasets, which shows that the model has good learning ability for multi-view data.
Weaknesses: 1. In Eq.(1), the necessary explanation about the coefficient 2 in the numerator is lacking. The authors should also provide more explanation about the motivation of such design.
2. A clarification on the value of mask rate $\sigma$ is lacking.
3. The authors propose to build two groups of encoder-decoder for different views, but the computational complexity may increase with the number of views quickly.
4. There are some unclear expressions, for instance, in Section 2.2, the authors say “Different from conventional deep multi-view networks”. So what are the conventional deep multi-view networks? There is no corresponding reference here.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Is it efficient to address the missing problem by introducing additional prior information, that is matrix G and W in this paper?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitations and suggested further directions for improvement in the Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your recognition and we respond to your comments below:
W1: In Eq.(1), the necessary explanation about the coefficient 2 in the numerator is lacking. The authors should also provide more explanation about the motivation of such design.
A1: Thanks for your comments, we have added more explanation about the coefficient 2 in the numerator of Eq. (1):
In addition, considering the efficiency of matrix calculation, we set the coefficient of the contrastive item of shared features and private features to 2 in the numerator part, so that all paired similarities can be obtained by a single matrix multiplication, i.e., $[s_i^{(1)}; s_i^{(2)};\cdots; s_i^{(N)}; o_i^{(1)}; o_i^{(2)};\cdots; o_i^{(N)}]\times [s_i^{(1)}; s_i^{(2)};\cdots; s_i^{(N)}; o_i^{(1)}; o_i^{(2)};\cdots; o_i^{(N)}]^{T}=Q\in \mathbb{R}^{2N\times 2N}$. The matrix $Q$ can be divided into four sub-matrices of size $N\times N$. The sub-matrix in the upper left corner stores the similarity values of the denominator in Eq. (1), the sub-matrix in the lower right corner stores the values of the second item in the numerator, and the remaining two sub-matrices, which are transposes of each other, store the similarity values of the first item in the numerator. Therefore, we set the coefficient of the first term in the numerator to 2. At the same time, the diagonal elements of $Q$ are removed, so that the normalization factors of the numerator and denominator parts are $\frac{1}{3N^{2}-N}$ and $\frac{1}{N^{2}-N}$, respectively.
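To make the trick concrete, here is a hypothetical NumPy sketch (toy sizes; the random features stand in for the paper's actual encoder outputs): stacking the $N$ shared and $N$ view-proprietary features and taking one matrix product yields all $2N \times 2N$ paired similarities at once:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 3, 8                                # number of views and feature dim (toy)
S = rng.normal(size=(N, d))                # shared features s_i^(v), one row per view
O = rng.normal(size=(N, d))                # view-proprietary features o_i^(v)

F = np.vstack([S, O])                      # stack shared, then proprietary features
F = F / np.linalg.norm(F, axis=1, keepdims=True)  # L2-normalize: dot = cosine sim
Q = F @ F.T                                # one matmul -> all 2N x 2N similarities

# Four N x N sub-blocks: Q[:N, :N] shared-shared (denominator part),
# Q[N:, N:] proprietary-proprietary, and the two off-diagonal blocks
# (transposes of each other) hold the shared-proprietary similarities.
print(Q.shape)                             # (6, 6)
```

The diagonal of `Q` holds self-similarities and would be discarded, as in the rebuttal's normalization.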
W2: A clarification on the value of mask rate $\sigma$ is lacking.
A2: Thanks for your comments. As described in our paper, we empirically set $\sigma$ to 0.25 instead of the 0.75 used in MAE, considering that vector-based features are not as highly redundant as image data. At the same time, we give the parameter analysis of $\sigma$ in the supplementary material, and the experimental results show that the model achieves the best performance when $\sigma$ is set to 0.2-0.4.
W3: The authors propose to build two groups of encoder-decoder for different views, but the computational complexity may increase with the number of views quickly.
A3: Thank you for your comments. In fact, we design only 2 encoders for each view (and one decoder per view). Since our encoder-decoder structure is simple, containing only 4-5 linear layers, the increase in model parameters brought about by adding views is also linear.
W4: There are some unclear expressions, for instance, in Section 2.2, the authors say “Different from conventional deep multi-view networks”. So what are the conventional deep multi-view networks? There is no corresponding reference here.
A4: We are sorry that our expression confused you. We have added relevant references to the manuscript:
Different from conventional deep multi-view networks[1-3], we employ two groups of multi-layer…
[1] Deep Double Incomplete Multi-View Multi-Label Learning With Incomplete Labels and Missing Views, TNNLS
[2] Multi-level Feature Learning for Contrastive Multi-view Clustering, CVPR
[3] Deep Embedded Complementary and Interactive Information for Multi-View Classification, AAAI
Q1: Is it efficient to address the missing problem by introducing additional prior information, that is matrix G and W in this paper?
A1: Thanks for your question, the missing view indicator matrix $\mathbf{W}$ and missing label indicator matrix $\mathbf{G}$ we introduced in the paper can avoid the negative impact of missing information. And our experiments in the new supplementary material confirm that introducing $\mathbf{W}$ and $\mathbf{G}$ can effectively improve the model's ability to handle missing views and labels.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. The authors have addressed my concerns and I'd like to rise my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer XPPT
Comment: Thank you very much for your review comments and encouragement, we will further improve the quality of the manuscript. | Summary: This paper proposes a general framework for missing multi-view and missing multi-label classification. The main innovation is to decouple each view feature into two different shared and private features. The paper also applies random masks to the raw features to make the network learn from limited information, and shows the benefits of this approach with ablation experiments.
Strengths: - This paper is clear in terms of writing and presentation. It is easy for the reader to understand the author's motivation.
- The problem addressed by the authors is complex (doubly incomplete data) and poses a greater challenge than methods based on ideally complete data. However, judging from the results, the author solves this problem well.
- The method of using labels to constrain the feature extraction is novel, and the author's local masking of input features has also brought performance growth. I think there are simple and effective techniques that can be extended to other fields.
- The experimental results show that the MTD has significant advantages over the existing multi-view multi-label classification methods.
Weaknesses: - The one-HL metric does not seem to be of much significance because it does not effectively evaluate the performance of different methods according to the author's experimental results.
- It is inappropriate to appear both ‘Figure’ and ‘Fig.’ in the text.
- The author should clearly explain the meaning of Figure 3 in the caption.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Are these masks fixed during training? How exactly did the authors set up the mask?
- Why the results of iMVWL and NAIM3L are not consensus with original papers?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your recognition and we respond to your comments below:
W1: The one-HL metric does not seem to be of much significance because it does not effectively evaluate the performance of different methods according to the author's experimental results.
A1: Yes, the metric "1-HL" does not show strong discrimination in performance evaluation; however, as a classical multi-label classification metric, we still include it in the experimental results to enable a comprehensive comparison with existing works (iMvWLL, NAIM3L, and DICNet).
W2: It is inappropriate to appear both ‘Figure’ and ‘Fig.’ in the text.
A2: Thanks for your suggestion, we have checked the full text to ensure the consistency of similar expressions.
W3: The author should clearly explain the meaning of Figure 3 in the caption.
A3: Thanks for your comments, we have added a detailed explanation to the caption of Figure 3 in the manuscript:
A random sample's channel similarity heat maps across all channels on the Corel5k dataset with half of the views and labels missing. S_1-S_6 and O_1-O_6 denote shared features and view-proprietary features on six views, respectively. As training progresses, the similarities of features on the shared and proprietary channels show the expected trend: the shared features across views gradually converge, while the similarities of "shared-proprietary" and "proprietary-proprietary" feature pairs gradually decrease.
Response to some questions:
Q1: Are these masks fixed during training? How exactly did the authors set up the mask?
A1: Yes, these masks are fixed during training. In our model, we uniformly set the mask rate $\sigma=l/d_v$ to 0.25 for all datasets instead of a larger masking rate like MAE. The reason is that we notice the difference that the information density of vector data is much larger than that of image data. At the same time, we give the parameter analysis of $\sigma$ in the supplementary material, and the experimental results show that the model achieves the best performance when $\sigma$ is set to 0.2-0.4.
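For concreteness, the fragment masking just described might look as follows (a hedged sketch with toy sizes; the function and variable names are ours, not the paper's):

```python
import numpy as np

def make_fragment_mask(d_v, sigma, rng):
    # Zero out one contiguous fragment of length l = sigma * d_v of a
    # d_v-dimensional feature vector; the mask is drawn once, then fixed.
    l = int(round(sigma * d_v))
    start = rng.integers(0, d_v - l + 1)      # random fragment start
    mask = np.ones(d_v)
    mask[start:start + l] = 0.0               # mask a contiguous fragment
    return mask

rng = np.random.default_rng(0)
x = np.ones(20)                               # toy feature vector, d_v = 20
mask = make_fragment_mask(20, 0.25, rng)      # sigma = 0.25, as in the rebuttal
print(int((mask == 0).sum()))                 # 5 masked dims (25% of 20)
print((x * mask).sum())                       # 15.0: the unmasked mass survives
```

Masking a contiguous fragment, rather than scattered bits, preserves the continuity of the remaining feature segments, which the authors identify as important for vector data.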
Q2: Why the results of iMVWL and NAIM3L are not consensus with original papers?
A2: Since we added new metrics (1-OE and 1-Cov) to our experiment that were not provided in the original papers, we reproduced their methods on our incomplete data and our replicated results exceed those in the original papers on some datasets and metrics.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. The responses have solved all my concerns. I am satisfied with it and would like to keep my previous scores (weak accept).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer txHg
Comment: Your approval is very important to us and we will carefully revise our manuscript. | Summary: This paper proposes a new framework for incomplete multi-view weak multi-label classification (iMvWMLC), which is a challenging task that involves missing views and labels in the data. The framework consists of four main components: a two-channel decoupling mechanism (called MTD) that extracts shared and view-specific features from each view, a cross-channel contrastive loss that enhances the semantic separation of the two channels, a random fragment masking strategy that reduces the redundancy of the input features, and a label-guided graph regularization loss that preserves the geometric structure among samples. The paper evaluates the framework on five datasets and compares it with eight state-of-the-art methods, showing that it outperforms them on various metrics and is robust to different levels of incompleteness.
Strengths: (1) As far as I know, research on multi-view multi-label classification is still in its infancy, and this article focuses on this topic very well.
(2) To a certain extent, the logic and expression of this article are clear.
(3) I am interested in masking operations applied to vectorized feature data, most multimodal methods focus more on input real data such as text or images. I agree that the use of the mask mechanism on feature data is instructive.
Weaknesses: (1) The similarity matrix calculated based on weak tags does not take into account the interference caused by unknown tags. The paper should address the potential interference caused by unknown tags in the calculation of the similarity matrix.
(2) The results of the ablation experiment suggest that the so-called contrast loss does not seem to have improved much, at least on the Corel5k dataset. The paper should provide some explanation.
(3) The paper needs to improve the details, as there is an error in section 3.4 where "TMD" is incorrectly referenced. The authors could correct this and prevent similar errors.
(4) The reviewer has noticed that there are some recent works aiming at solving the multi-view classification problem. The paper could do a better job by citing the different relevant works, e.g., [1-3], and clarifying the differences between this work and them.
[1] Trusted Multi-View Classification, ICLR’21
[2] Partially View-aligned Representation Learning with Noise-robust Contrastive Loss, CVPR’21
[3] Dual Contrastive Prediction for Incomplete Multi-View Representation Learning, TPAMI'23
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) It would be interesting to know if the authors have experimented with other masking methods, such as random bit masking of features, and what the results of such experiments were.
(2) Availability of code is important for reproducibility and further research. There is no code in the supplement material. Do the authors have any plans to make the code public?
(3) The paper should provide a more detailed discussion on why not all comparison methods were adapted to this task.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: For missing multi-view data without a priori missing information, this method seems to be difficult to work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Your comments are greatly appreciated and we respond to these concerns as follows:
W1: The similarity matrix calculated based on weak tags does not take into account the interference caused by unknown tags. The paper should address the potential interference caused by unknown tags in the calculation of the similarity matrix.
A1: Thank you for your suggestion! In the process of calculating the similarity matrix, we only consider the sample similarity that can be obtained from the existing label information. However, in the calculation process of Eq. (4), unknown labels have an impact on the normalization scale, so we adjusted the calculation method of the similarity matrix to better handle sample labels with unknown tags. The specific adjustments are as follows:
$T_{i,j} = \frac{C_{ij}\cdot(y_{i}y_{j}^T)}{C_{ij}\cdot(y_{i}y_{j}^T)+\eta}$, where ${T}\in [0,1]^{n\times n}$ is the sample similarity graph, describing the similarity between any two samples. $C_{ij} = G_{i,:}G_{j,:}^T$ is the number of labels available for both samples $i$ and $j$. $y_{i}$ and $y_{j}$ denote the $i$-th and $j$-th rows of $Y$. And $\eta$ is a constant, empirically set to 100 for simplicity. Since $C_{ij}$ is much larger than $y_{i}y_{j}^T$ on most datasets when the number of categories is large, even if two samples share only one label, they are considered much more similar than other sample pairs.
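A toy NumPy sketch of the adjusted similarity (the matrices `Y` and `G` are invented examples; missing labels are filled with 0 in `Y`, and $\eta = 100$ as above):

```python
import numpy as np

# Observed label matrix Y (missing entries filled with 0) and the
# missing-label indicator G (1 = known, 0 = unknown): 3 samples, 3 labels.
Y = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
G = np.array([[1, 1, 1],
              [1, 1, 0],      # third label of sample 2 is unknown
              [1, 0, 1]], dtype=float)
eta = 100.0

overlap = Y @ Y.T                          # y_i y_j^T: shared positive labels
C = G @ G.T                                # C_ij: labels known for both samples
T = (C * overlap) / (C * overlap + eta)    # entries in [0, 1)
print(T[0, 1])                             # C_01 * overlap_01 / (... + eta) = 2/102
```

Unknown labels thus shrink both the overlap count and the effective normalization, instead of being treated as confident zeros.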
W2: The results of the ablation experiment suggest that the so-called contrast loss does not seem to have improved much, at least on the Corel5k dataset. The paper should provide some explanation.
A2: In the ablation experiment, the contrastive loss did not show a large performance improvement on the Corel5k dataset. We analyze that this may be because the cross-view shared information of many samples in the Corel5k dataset is not sufficient, i.e., view-specific information occupies a dominant position, making the aggregation of shared information difficult. Experiments on other datasets show that the contrastive loss improves all metrics (Table 1 and Table 2 in the new supplementary materials).
W3: The paper needs to improve the details, as there is an error in section 3.4 where "TMD" is incorrectly referenced. The authors could correct this and prevent similar errors.
A3: Thank you for your careful review. We have carefully checked the full text and corrected similar typos, and believe that the quality of the paper has been further improved.
W4: The reviewer has noticed that there are some recent works aiming at solving the multi-view classification problem. The paper could do a better job by citing the different relevant works, e.g., [1-3], and clarifying the differences between this work and them.
A4: Thanks! We have added to the manuscript an introduction to these multi-view classification works and the differences from these works in the paper:
The trusted multi-view classification proposed by Han et al. focuses on the multi-view single-label classification based on confidence learning, trying to learn the classification uncertainty of each view based on evidence theory [1]. However, Dempster-Shafer evidence theory requires the mutual exclusion of labels, which is difficult to satisfy in multi-label classification. Yang et al. proposed a multi-view contrastive learning method with noise-robust loss to solve the partial view alignment problem, which takes aligned data as positive pairs and unaligned data as negative pairs [2]. Our method focuses on missing views rather than multi-view unalignment issue, and relies on the shared-private feature assumption rather than the prior alignment information for the construction of positive and negative pairs. Lin et al. proposed dual contrastive prediction for incomplete multi-view representation learning, which performs contrastive learning at both the instance level and category level to ensure cross-view consistency and recover missing information [3]. However, the instance-level contrastive loss of this method only aggregates cross-view representations in the potential space from the perspective of consistency, ignoring the view-private information.
Q1: It would be interesting to know if the authors have experimented with other masking methods, such as random bit masking of features, and what the results of such experiments were.
A1: Thanks for your question! In fact, we also tried masking random bits of the vector features, but it did not work well; we believe this is because the continuity of key information in the features was badly broken.
Q2: Availability of code is important for reproducibility and further research. There is no code in the supplement material. Do the authors have any plans to make the code public?
A2: Yes, we will make the code public upon acceptance of the paper to help peers reproduce it.
Q3: The paper should provide a more detailed discussion on why not all comparison methods were adapted to this task.
A3: At present, research on multi-view multi-label classification is in its infancy, and few methods can handle both view and label incompleteness. In our comparison experiments, only iMvWLL [4], NAIM3L [5], and DICNet [6] meet our task setting. Therefore, following the existing methods [4-6], we introduce other methods that can handle incomplete multi-view or partial multi-label data to enrich our experiments, which also demonstrates the necessity of specially designing the network structure for missing views and unknown labels.
[4] Incomplete multi-view weak-label learning, IJCAI
[5] A concise yet effective model for non-aligned incomplete multi-view and missing multi-label learning, TPAMI
[6] DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification, AAAI
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. The authors have addressed my concerns and I will maintain my rating of acceptance for the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 8GYm
Comment: Thank you for your approbation and we will carefully integrate your suggestions into our manuscript. | Summary: This paper studies incomplete multi-view weak multi-label learning problem, which is important. The authors propose a masked two-channel decoupling framework based on deep neural networks. They develop cross-channel contrastive loss, a label- guided graph regularization loss, and random fragment masking strategy. The experiments validate the effectiveness.
Strengths: - The problem is interesting.
- The techniques in this work seem correct and reasonable.
Weaknesses: - This work just combines some existing widely-used techniques. Thus the work is a bit incremental, does not provide new insights to me.
- As stated in abstract, the core is to decouple view-level representation into shared representation and a view-proprietary representation. This idea has been widely used in existing works.
- The authors first fill the missing views with noise and the missing labels with 0, and then treat it as a normal complete-view learning problem. To me, this scheme seems not to grasp the essence of the multi-view weak multi-label learning problem.
- The aim and construction of contrastive loss in this work seem a bit confusing to me. It is suggested to explain it in more detail.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness part
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The author seems to lack a comprehensive discussion on the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing the manuscript! We respond to the comments below:
W1. This work just combines some existing widely-used techniques. Thus the work is a bit incremental, does not provide new insights to me.
A1. Indeed, we admit that the techniques or ideas used in our method have similar applications in other fields. However, we place more emphasis on the combination of these existing ideas with multi-view multi-label classification tasks, that is, answering the questions: “Why should they be used on the MvMLC task? How should they be used? And what are the benefits?”.
The following is the difference and significance of the three core technologies involved in this paper and the existing technologies:
1) View-specific representation decoupling framework and contrastive loss. Decoupling the features of each view or modality into a shared feature and a proprietary feature is not new in multi-view or multi-modal learning [1,2]. However, the crux of the problem is how to achieve this decoupled state, and the technical routes and emphases of existing methods differ. For instance, work [1] uses an adversarial loss to make it difficult for the model to discern the view origin of shared features, but the training cost of adversarial learning is high; our decoupling framework and corresponding contrastive loss avoid this problem. Work [2] only focuses on reducing the distance between shared feature pairs and increasing the distance between private feature pairs, ignoring the differences between private and shared features at both the intra-view and inter-view levels. Our method considers both: maximizing the distance between the shared and private features within each view-specific feature, and contrasting the shared and private feature sets across different views.
2) Label-guided graph regularization. Laplacian graph regularization is often used in traditional multi-view learning methods, where the Laplacian graph is usually constructed only from the original feature space and acts on the latent representations of the samples. In our MvMLC task, we combine it with the characteristics of the multi-label task, that is, we construct a label-based sample adjacency matrix to replace the adjacency matrix derived from the original data. The obvious advantage of this design is that the label information carries clear semantics, whereas the original data often contain noise and redundancy.
3) Masking random fragments of features. As another contribution we are proud of, masking random segments is our contribution to the multi-view learning community. As far as we know, the masking strategies currently popular in fields such as images have not been applied to vector data, especially in multi-view or multi-label learning. Moreover, unlike the well-known MAE technique, we do not need any additional special handling of the masked data segments, so the strategy can be easily transplanted to almost any multi-view learning network; it is a plug-and-play technique.
[1] Multi-View Multi-Label Learning with View-Specific Information Extraction. IJCAI,2019.
[2] CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion. CVPR,2022.
W2. As stated in abstract, the core is to decouple view-level representation into shared representation and a view-proprietary representation. This idea has been widely used in existing works.
A2. Thank you for the comment. As mentioned in A1, some methods apply the idea of decoupling in multi-view learning. However, this purpose cannot be achieved simply by using two types of encoders; the key is to design a suitable strategy so that the features extracted by the two types of encoders satisfy our assumption, i.e., they are truly shared and truly proprietary features. Our experiment in Figure 3 also confirms the effectiveness of our strategy.
W3. The authors first fill the missing views with noise, and missing label with 0, and then treat it as normal complete view learning problem. To me, this scheme seems not grasp the essence of multi-view weak multi-label learning problem.
A3. Indeed, we fill missing instances with noise and unknown labels with 0; however, this is just a data preprocessing step. Its purpose is to ensure that the multi-view data are dimensionally aligned, which is very important for batch-based training of deep neural networks. It should be noted that although we fill in the missing information beforehand, we do not treat the task as complete multi-view learning in subsequent processing. For example, in Eq. (1) we introduce the condition $\Upsilon$ to prevent the filled noise from taking part in the computation of the contrastive loss; in Eqs. (2) and (3) we introduce the missing-view index matrix $W$ in the multi-view fusion process to exclude the invalid features corresponding to missing instances; and in Eq. (9) we introduce a missing-label indicator matrix $G$ to avoid the negative effects of unknown labels.
On the other hand, we would like to note that there are many ways to deal with missing views and labels, including missing-view recovery, pseudo-label filling, etc.; the "skip" strategy adopted by our method, based on prior missing-ness information, is also used by many related methods [3-5].
[3] A Concise yet Effective Model for Non-Aligned Incomplete Multi-view and Missing Multi-label Learning. TPAMI,2022.
[4] Dicnet: Deep instance-level contrastive network for double incomplete multi-view multi-label classification. AAAI, 2023.
[5] Expand globally, shrink locally: Discriminant multi-label learning with missing labels, Pattern Recognition, 2021.
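To make the "skip" strategy above concrete, here is a simplified sketch. The indicator matrices $W$ and $G$ follow the rebuttal's notation, but the fusion and loss forms, shapes, and data are illustrative stand-ins, not the paper's exact Eqs. (2), (3), and (9):

```python
import numpy as np

rng = np.random.default_rng(0)
n, views, classes, d = 4, 2, 3, 5

# View features; missing instances would be noise-filled placeholders.
X = [rng.normal(size=(n, d)) for _ in range(views)]
W = np.array([[1, 1], [1, 0], [0, 1], [1, 1]])  # W[i, v] = 1 iff view v of sample i exists
Y = rng.integers(0, 2, size=(n, classes)).astype(float)      # labels, 0 where unknown
G = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 0, 1]])   # G[i, c] = 1 iff label c is known

# Missing-aware fusion: average only over the views that actually exist
fused = sum(W[:, v:v + 1] * X[v] for v in range(views)) / W.sum(axis=1, keepdims=True)

# Missing-aware multi-label BCE: unknown labels contribute nothing to the loss
logits = fused @ rng.normal(size=(d, classes))
p = 1.0 / (1.0 + np.exp(-logits))
bce = -(Y * np.log(p) + (1.0 - Y) * np.log(1.0 - p))
loss = (G * bce).sum() / G.sum()
```

The filled placeholders never reach the loss because the indicator matrices zero out their contributions before any averaging.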
Responses to W4 will be added to the comments section after the rebuttal phase begins, due to the word limit.
---
Rebuttal Comment 1.1:
Title: Response to weakness 4
Comment: W4. The aim and construction of contrastive loss in this work seem a bit confusing to me. It is suggested to explain it in more detail.
A4. Thanks for the comment, and we apologize for the confusion about the contrastive loss. Following your suggestion, we further explain the design of the contrastive loss in the manuscript as follows:
For any sample $i$, we pair the features on the $2N$ channels in pairs ($4N^2$ feature pairs in total) and classify these feature pairs into two categories, positive pairs and negative pairs, where positive pairs consist of shared features from different views, and the remaining "shared-private" and "private-private" feature pairs are negative pairs. Our goal is to minimize the distance between positive pairs (the denominator part) while maximizing the distance between negative pairs (the numerator part). Here we not only maximize the difference between private features decoupled from different views ($o_{i}^{(u)}$ and $o_{i}^{(v)}$, $u\neq v$), but also reduce the similarity between shared and private features at both the intra-view and inter-view levels ($s_{i}^{(u)}$ and $o_{i}^{(v)}$, including $u=v$). This fulfills the design objective of the dual-channel model: encouraging consistency between shared features from different views while maintaining a clear distinction between each view-proprietary feature and the shared or other view-proprietary features.
In addition, considering the efficiency of matrix computation, we set the coefficient of the contrastive term between shared and private features to 2 in the numerator, so that all paired similarities can be obtained by a single matrix multiplication, i.e., $[s_i^{(1)}; s_i^{(2)};\ldots; s_i^{(N)}; o_i^{(1)}; o_i^{(2)};\ldots; o_i^{(N)}]\times [s_i^{(1)}; s_i^{(2)};\ldots; s_i^{(N)}; o_i^{(1)}; o_i^{(2)};\ldots; o_i^{(N)}]^{T}=Q\in \mathbb{R}^{2N\times 2N}$. The matrix $Q$ can be divided into four $N\times N$ sub-matrices. The sub-matrix in the upper-left corner stores the similarity values of the denominator in Eq. (1), the sub-matrix in the lower-right corner stores the values of the second term in the numerator, and the remaining two sub-matrices, which are transposes of each other, store the similarity values of the first term in the numerator; this is why the coefficient of the first term in the numerator is set to 2. At the same time, the diagonal elements of $Q$ are removed, so that the numerator and denominator parts contain $3N^{2}-N$ and $N^{2}-N$ elements and are normalized by $\frac{1}{3N^{2}-N}$ and $\frac{1}{N^{2}-N}$, respectively.
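The construction of $Q$ described above can be sketched as follows (toy values for $N$ and the feature dimension; the feature vectors are random stand-ins):

```python
import numpy as np

N, d = 3, 8                              # N views, feature dimension d (toy values)
rng = np.random.default_rng(0)

# L2-normalized shared features s_i^(1..N) stacked above private features o_i^(1..N)
F = rng.normal(size=(2 * N, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)

Q = F @ F.T                              # all 4N^2 cosine similarities in one matmul

Q_ss = Q[:N, :N]   # shared-shared block: positive pairs (denominator of the loss)
Q_oo = Q[N:, N:]   # private-private block: negative pairs (second numerator term)
Q_so = Q[:N, N:]   # shared-private block: appears twice in Q (Q[N:, :N] = Q_so.T),
                   # which is why its coefficient in the numerator is 2
```

The diagonal of $Q$ (self-similarities, all equal to 1 after normalization) is what the text says is removed before averaging.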
About the limitations of the paper:
In the conclusion, we discuss the limitations of our method and directions for future improvement, such as incorporating label correlations into the multi-label classification model. At the same time, for missing views and partial multi-labels, in addition to introducing prior information, more sophisticated view completion and pseudo-label prediction could be performed to improve the discriminability of the model. | Rebuttal 1:
Rebuttal: Thanks to all reviewers, and this is our new supplementary material.
Pdf: /pdf/73ca79a00c8a697b85fdb95836518025bda75d9b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Efficient and Accurate Winograd Convolution via Full Quantization | Accept (poster) | Summary: This paper proposed to fully quantize the Winograd convolution post-training under the observation of disruption of consistency between different transformation procedures, the new proposed Factorized Scale Quantization is suitable in the Winograd domain. The experiments demonstrate significant improvements compared with previous post-training-quantization methods.
Strengths: 1. This paper is organized well, the exploration of quantization of different components of Winograd convolution clearly shows the primary cause of accuracy degeneration, and the following proposed method precisely targets this problem.
2. The proposed method is interesting, which utilizes two factor vectors to replace the per-pixel scales, and these two vectors can be merged into the transformation matrices.
3. Some experimental results are even higher than the baseline.
Weaknesses: The experiments only report the quantization bit-widths; there is no computation-cost or inference-time comparison, which is very important for this work. It is therefore difficult to know how much the proposed method improves efficiency beyond accuracy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This paper emphasizes the hardware-friendly deployment of this proposed quantization method, will the optimization procedure including solving $\alpha$, $\beta$, and $\tilde{O}$ cost much more additional resources?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper does not mention or discuss its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The experiments only report the quantization bit-widths; there is no computation-cost or inference-time comparison, which is very important for this work, so it is difficult to know how much the proposed method improves efficiency beyond accuracy.
**A1:** Thank you for your suggestion. Due to the time limitation, we could not implement Winograd convolution on GPUs or FPGAs, which would require low-level optimizations. Instead, we opt for BOPs (bit operations) as an alternative metric for measuring computation cost; this metric is widely used in fields such as neural architecture search (NAS), pruning, and quantization research [4,5,6].
BOPs is defined as $BOPs=b_1 \cdot b_2 \cdot MAC$, where $b_1$ and $b_2$ represent the bit-widths of the two operands, respectively. **Please note that the optimization procedures (13) and (26) happen before inference.** During inference, the computation cost of quantized Winograd convolution comprises three components: **element-wise multiplications** ($U\odot V$), **Winograd transformations** ($BXB^T$ and $AOA^T$), and **quantization overhead**.
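For concreteness, the BOPs metric defined above can be sketched as follows (the layer shape is made up for illustration):

```python
def bops(b1: int, b2: int, macs: int) -> int:
    """Bit operations for `macs` multiply-accumulates at operand bit-widths b1, b2."""
    return b1 * b2 * macs

# Hypothetical 3x3 conv layer: C_in = C_out = 64 on a 32x32 feature map
macs = 64 * 64 * 3 * 3 * 32 * 32
ratio = bops(32, 32, macs) / bops(8, 8, macs)   # FP32 vs. INT8
print(ratio)  # 16.0: savings are quadratic in the bit-widths
```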
For the element-wise multiplication, like [1, 3], our method uses per-pixel quantization, with a memory overhead of $(m+r-1)\times(m+r-1)$ to store the quantization scales and $N\times C_{out}\times (m+r-1)\times (m+r-1)+ C_{in}\times C_{out}\times (m+r-1)\times(m+r-1)$ flops to re-quantize $U$ and $V$.
For the Winograd transformations ($BXB^T$ and $AOA^T$), as introduced in Section 4.2.2, the benefit of our proposed FSQ is that we can move the scales $\alpha$ and $\beta$ into the transformation matrices. **We can therefore use a per-tensor matrix-multiplication implementation even when per-pixel quantization is utilized.** Consequently, quantizing $X$, $B^TX$, $O$, and $A^TO$ results in a constant memory overhead and a computation overhead of $2 \times N \times (C_{out} + C_{in}) \times (m + r - 1) \times (m + r - 1)$ flops. Because these quantization and re-quantization operations need only as many flops as the tensor size, the overhead is negligible compared to the Winograd transformations, which each involve two matrix multiplications.
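A minimal sketch of why factorized scales can be folded into the transformation matrices: per-pixel scaling by a rank-1 matrix $\alpha\beta^T$ is just row and column scaling. The tile size and factor values below are hypothetical; the paper obtains $\alpha$ and $\beta$ via its optimization procedure (26), not this way:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 6                                        # tile side for F(4,3): m + r - 1 = 6
alpha = np.abs(rng.normal(1.0, 0.2, t))      # hypothetical factor vectors
beta = np.abs(rng.normal(1.0, 0.2, t))
M = rng.normal(size=(t, t))                  # stand-in for a transformed tile

# Element-wise scaling by the rank-1 matrix alpha * beta^T equals
# diag(alpha) @ M @ diag(beta), so the two vectors can be absorbed into
# the left and right transformation matrices respectively.
scaled = np.outer(alpha, beta) * M
folded = np.diag(alpha) @ M @ np.diag(beta)
assert np.allclose(scaled, folded)
```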
The BOPs of different methods for quantizing F(6,3) ResNet-20 are shown in Table R5-1. [1] quantizes all the transformation matrices $A$, $B$, and $G$, but intermediate results such as $B^TX$ and $A^TO$ are not quantized, so [1] needs higher precision to hold these results and carry out the next operation. [2] does not quantize the Winograd transformations in order to maintain accuracy. Compared to them, our full quantization of Winograd achieves a lower computation cost.
**Table R5-1 BOPs**
| | Im2col(FP) | Winograd(FP) | BQW[2] | Winograd-AwareNet[1] | Ours(PAW) | Ours(PAW+FSQ) |
| ------------------ | ---------- | ------------ | ------- | -------------------- | --------- | ------------- |
| Low-precision BOPs | 0 | 0 | 464.56M | 1282.45M | 464.56M | 791.71M |
| Flops | 40.81M | 12.37M | 5.27M | 0.48M | 5.27M | 0.80M |
| Total BOPs | 10.44G | 3.16G | 1.81G | 1.41G | 1.81G | 0.99G |
**Q2:** This paper emphasizes the hardware-friendly deployment of this proposed quantization method, will the optimization procedure including solving $\alpha$, $\beta$, and $O$ cost much more additional resources?
**A2:** The overhead of the optimization procedure for solving $\alpha$, $\beta$, and $\tilde{O}$ is negligible, for two reasons: (1) the optimization procedures (13) and (26) happen before inference; (2) according to our experiments, optimization procedure (26) converges quickly. In most cases, it converges within a few to a few dozen iterations, which takes approximately several minutes per layer.
[1] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020
[2] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
[3] Andir et al., "Going Further With Winograd Convolutions:Tap-Wise Quantization for Efficient Inference on 4x4 Tiles", 2022
[4] Wang et al., "Differentiable Joint Pruning and Quantization for Hardware Efficiency", 2020
[5] Guo et al., "Single path oneshot neural architecture search with uniform sampling", 2020
[6] Liu et al., "Towards precise binary neural network with generalized activation functions", 2020 | Summary: This paper proposes PTQ-Aware Winograd (PAW), factorized-scale quantization, and an iterative optimization algorithm to solve the problem of quantization in the Winograd domain. These methods not only fully quantize the whole Winograd convolution, but also surpass existing Winograd quantization methods in terms of accuracy and hardware friendliness.
Strengths: 1. This is the first work to perform full-quantization for Winograd Convolution
2. Factorized-scale quantization makes the quantization for Winograd more hardware-friendly compared to the per-pixel quantization in previous work
3. This paper focuses on the properties of the Winograd domain that make quantization difficult and gives a full analysis
4. The method clearly outperforms existing methods, the theoretical derivation is complete, and the experiments are credible
Weaknesses: 1. In the explanation in Section 4.1:
- Is the motivation of this section the following: in the original Winograd method, $A$, $B$, and $G$ satisfy a strict mathematical relationship with each other, but the perturbation introduced by quantization destroys this relationship, and such a small perturbation causes a much greater overall loss. Therefore, the authors simulated the perturbation by artificially adding noise and tested the reconstruction loss?
- How was the range of the added perturbation determined, and was the actual quantization error measured and used as a reference? Have you tested how much the reconstruction loss decreases after optimization?
2. This work achieves an obvious improvement over previous work, but what are the challenges if we continue to expand the tile size? Why?
I would like to hear the authors’ feedback during rebuttal, and fixing these issues would further improve the quality of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** In the explanation in Section 4.1:
Is the motivation of this section: In the original Winograd method, A, B, and G satisfy some strict mathematical relationship with each other, but the perturbation brought by quantization destroys the strict relationship between them, and the small perturbation brought by such quantization will bring much greater overall loss. Therefore, the authors simulated the perturbation by artificially adding the perturbation and tested the Reconstruction Loss?
How was the range of perturbation added determined, and was the actual quantization error measured and used as a reference? Have you tested how much the Reconstruction Loss will decrease after optimization?
**A1:** (a) Your understanding of the motivation is correct, but your analysis differs slightly from ours.
In the original Winograd method, $A$, $B$, and $G$ satisfy strict mathematical relationships with each other to ensure that the Winograd transformations happen in the same Winograd domain. However, quantization noise destroys the relationship between them. For example, the weight transformation after quantization becomes $Q(GWG^T)$, which may mismatch the original transformations $AOA^T$ and $B^TXB$.
To demonstrate this, we change the transformation matrices $A$ and $B$ in different directions (by adding random noise) and test the reconstruction loss using $Q(GWG^T)$ and $GWG^T$, respectively. In Figure 1, we observe that although the original $A$ and $B$ produce the desired results with the floating-point $GWG^T$, they are no longer optimal once $GWG^T$ is replaced by $Q(GWG^T)$: about half of the random matrices $A$ and $B$ lead to smaller errors (the blue dots to the upper left of the yellow triangle). Thus, it is necessary to align these transformation procedures after quantization.
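As a self-contained reference for the transformations discussed above, here is a minimal floating-point sketch of F(2,3) Winograd convolution with the standard Lavin-Gray matrices (input and filter values are random, for illustration only):

```python
import numpy as np

# Standard F(2,3) matrices (Lavin & Gray)
Bt = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
At = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)

rng = np.random.default_rng(0)
d = rng.normal(size=(4, 4))          # one 4x4 input tile
w = rng.normal(size=(3, 3))          # one 3x3 filter

U = G @ w @ G.T                      # weight transform (the term quantized as Q(GWG^T))
V = Bt @ d @ Bt.T                    # input transform
Y = At @ (U * V) @ At.T              # 2x2 output tile

# Matches direct 'valid' cross-correlation exactly in floating point;
# quantizing U alone perturbs this exact algebraic relationship.
ref = np.array([[np.sum(d[i:i + 3, j:j + 3] * w) for j in range(2)]
                for i in range(2)])
assert np.allclose(Y, ref)
```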
(b) We tested the decrease in reconstruction loss before and after optimization. Typically, the reconstruction loss drops to about one third of its original value. Results for different layers of ResNet-18 are shown in Table R4-1.
**Table R4-1. Reconstruction loss**
| | layer1.0.conv2 | layer2.0.conv2 | layer3.0.conv2 | layer4.0.conv2 |
| ------ | -------------- | -------------- | -------------- | -------------- |
| Before | $7.21\times 10^{-2}$ | $1.56\times 10^{-1}$ | $4.71\times 10^{-1}$ | $2.65\times 10^{-1}$ |
| After | $1.73\times 10^{-2}$ | $6.17\times 10^{-2}$ | $1.27\times 10^{-1}$ | $6.92\times 10^{-2}$ |
**Q2:** This work achieves an obvious improvement over previous work, but what are the challenges if we continue to expand the tile size? Why?
**A2:** We do not extend our methods to tile sizes $\geq 8$ in the paper for two reasons:
On the one hand, the speedup ratio of Winograd convolution, $\frac{(mr)^2}{(m+r-1)^2}$, grows more slowly as the tile size becomes larger. As an illustration, the speedup ratios for $F(2,3)$, $F(4,3)$, $F(6,3)$, and $F(8,3)$ are 2.25, 4, 5.0625, and 5.76, respectively (Table R4-2). On the other hand, Winograd convolution suffers from numerical accuracy issues: the floating-point (FP) error increases exponentially with the tile size [1]. According to [1], the normalized L1 norm of the floating-point error for several deep convolutional neural networks is shown in Table R4-2. Therefore, considering the trade-off between accuracy and efficiency, most papers [2,3,4] focus on $F(4,3)$ and $F(6,3)$.
**Table R4-2 Numerical Error and Speedup**
| Tile size $m$ | 2 | 4 | 6 | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| ResNet-20 FP error | -- | $1.19 \times 10^{-3}$ | $3.23 \times 10^{-3}$ | $7.46 \times 10^{-3}$ | $7.88 \times 10^{-2}$ |
| SqueezeNet FP error | -- | $7.31 \times 10^{-5}$ | $1.32 \times 10^{-4}$ | $2.17 \times 10^{-4}$ | $1.25 \times 10^{-3}$ |
| Speedup | 2.25 | 4 | 5.0625 | 5.76 | 6.25 |
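The speedup row of Table R4-2 follows directly from the ratio $\frac{(mr)^2}{(m+r-1)^2}$:

```python
def winograd_speedup(m: int, r: int = 3) -> float:
    # Multiplications of direct conv per m*m output tile, (m*r)^2,
    # vs. the (m + r - 1)^2 element-wise multiplications of F(m, r).
    return (m * r) ** 2 / (m + r - 1) ** 2

print([round(winograd_speedup(m), 4) for m in (2, 4, 6, 8, 10)])
# [2.25, 4.0, 5.0625, 5.76, 6.25]
```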
[1] Alam et al., "Winograd Convolution for Deep Neural Networks : Efficient Point Selection", 2022
[2] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
[3] Andir et al., "Going Further With Winograd Convolutions:Tap-Wise Quantization for Efficient Inference on 4x4 Tiles", 2022
[4] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed rebuttal, which resolved my concerns. I believe it is a good work with constructive contributions to the field. | Summary: The paper proposes a PTQ-Aware Winograd (PAW) method to improve the performance of deep learning inference with quantized parameters and Winograd convolution. In particular, all steps of the Winograd operation are combined and optimized with a unified objective to reduce the domino effect of quantization in different parts of the Winograd sequence. Moreover, to mitigate the range difference of the Winograd output transformation, factorized-scale quantization is proposed, which balances the distribution of the output transformation using two factorized scaling parameters. This approach outperforms previous work on the CIFAR-10 dataset.
Strengths: The analysis of quantization for all parts of the Winograd operation is novel and interesting. The paper is well-written, and the proofs are explained well.
Weaknesses: 1- The overhead of the proposed quantization algorithm needs to be compared with QAT approaches, since both methods perform training during the quantization process.
2- The new approach has only been tested on the ResNet model. It is suggested that experimental results for other networks be added to the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Most of the proofs in this paper assume the parameters follow a Gaussian distribution. Previous studies have shown that the distribution of parameters is similar to Laplace distribution. Does the proof in the paper also hold valid with a near-Laplace distribution of parameters?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author does not discuss whether the new approach works for size 6 and size 8 of the Winograd operation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The overhead of proposed quantization algorithm needs to be compared with the QAT approach, since both methods perform training during the quantization process.
**A1:** Thank you for your suggestion. QAT methods require much more GPU resources and training data than PTQ. **When applying it to the Winograd algorithm, these drawbacks will be amplified.** Training large Winograd models can be slow and use a significant amount of memory. This is a direct consequence of implementing all the stages involved in the Winograd convolution and retaining the inputs to each operation in memory to backpropagate through the entire process. According to the open source code of [1], the Winograd-aware QAT method costs about 10 hours on ResNet18 using the ImageNet dataset with **4 GPUs** (NVIDIA RTX 3090) for **one training epoch**. In contrast, the whole optimization procedure of the proposed PTQ method can be accomplished in around 5 hours on **a single GPU**.
**The optimization procedures (13) and (26) happen before inference.** During inference, as introduced in Section 4.2.2, the benefit of our proposed FSQ is that we can move the scales $\alpha$ and $\beta$ into the transformation matrices, so we can use a per-tensor matrix-multiplication implementation even when per-pixel quantization is utilized. The overhead of our proposed algorithm only includes the re-quantization operation, which is negligible compared to the original Winograd transformations.
**Q2:** The new approach has only been tested on the ResNet model. It is suggested that experimental results for other networks be added to the paper.
**A2:** Thank you for your suggestion. In the paper, we show the results of ResNet models to compare our method with another PTQ work [2]. Here, we add experiments on VGG and SqueezeNet using the Cifar-10 dataset. The results align with our expectations. The detailed results are presented in Table R3-1 and Table R3-2.
**Table R3-1. Accuracy (\%) on VGG11 (92.02%).**
| Tile Size | BRECQ[4] | PAW | FSQ | FSQ+PAW |
| --------- | -------- | ----- | ----- | ------- |
| F4 | 89.13 | 91.56 | 86.59 | 91.55 |
| | 92.02 | 92.28 | 90.82 | 91.83 |
| F6 | 75.10 | 89.94 | 68.98 | 90.34 |
| | 91.27 | 91.88 | 88.44 | 91.63 |
**Table R3-2. Accuracy (\%) on SqueezeNet (92.62%).**
| Tile Size | BRECQ[4] | PAW | FSQ | FSQ+PAW |
| --------- | -------- | ----- | ----- | ------- |
| F4 | 89.69 | 91.98 | 88.66 | 91.78 |
| | 92.61 | 92.68 | 92.01 | 92.80 |
| F6 | 80.50 | 90.67 | 76.48 | 91.26 |
| | 92.37 | 92.61 | 90.54 | 92.42 |
**Q3:** Most of the proofs in this paper assume the parameters follow a Gaussian distribution. Previous studies have shown that the distribution of parameters is similar to Laplace distribution. Does the proof in the paper also hold valid with a near-Laplace distribution of parameters?
**A3:** **Yes, our proof is also valid for a near-Laplace distribution of parameters.** In fact, in Proposition 1 we only use the condition that $X$ and $W$ are zero-mean, independent, and identically distributed to compute the mean and standard deviation of $O$. Therefore, the assumption of a Gaussian distribution is not necessary, and the proof also applies to a zero-mean Laplace distribution (and indeed to other zero-mean distributions). We have modified the theorem in the revision.
**Q4:** The author does not discuss whether the new approach works for size 6 and size 8 of the Winograd operation.
**A4:** Thank you for your suggestion. **The experiments on tile size 6 are shown in Tables 4 and 5.** We do not extend our methods to tile sizes $\geq 8$ in the paper for two reasons: on the one hand, the speedup ratio of Winograd convolution, $\frac{(mr)^2}{(m+r-1)^2}$, grows more slowly as the tile size becomes larger; on the other hand, Winograd convolution suffers from numerical accuracy issues, as the floating-point (FP) error increases exponentially with the tile size [4]. According to [4], the normalized L1 norm of the floating-point error for several deep convolutional neural networks is shown in Table R3-3. Therefore, considering the trade-off between accuracy and efficiency, most papers [2,3] focus on $F(4,3)$ and $F(6,3)$.
**Table R3-3 Numerical Error and Speedup**
| Tile size $m$ | 2 | 4 | 6 | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| ResNet-20 FP error | -- | $1.19 \times 10^{-3}$ | $3.23 \times 10^{-3}$ | $7.46 \times 10^{-3}$ | $7.88 \times 10^{-2}$ |
| SqueezeNet FP error | -- | $7.31 \times 10^{-5}$ | $1.32 \times 10^{-4}$ | $2.17 \times 10^{-4}$ | $1.25 \times 10^{-3}$ |
| Speedup | 2.25 | 4 | 5.0625 | 5.76 | 6.25 |
[1] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020
[2] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
[3] Li et al., "BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction", 2021
[4] Alam et al., "Winograd Convolution for Deep Neural Networks : Efficient Point Selection", 2022
---
Rebuttal Comment 1.1:
Title: Rebuttal Response.
Comment: Thank the authors for the detailed response to my comments. The responses are valid and complete. I also believe this is a novel and interesting paper, and I am raising my score to 7. | Summary: This paper proposes a post-training quantization algorithm for Winograd convolution, which overcomes the inconsistency in domain transformation by adjusting the transformation matrices together (PTQ-Aware) via a unified optimization procedure, and achieves full quantization by a new factorized scale quantization (FSQ) method. Experiments on CIFAR-10 and ImageNet show the effectiveness of the proposed algorithm. It surpasses the previous state-of-the-art Winograd post-training quantization algorithm significantly in terms of accuracy.
Strengths: * The proposed Winograd post-training quantization algorithm is considerably more accurate than previous methods, and is hardware friendly.
* The paper is well presented. I am able to understand the whole paper as a reader whose research focus is not on model quantization.
Weaknesses: * The paper fails to mention or compare with some quantization-aware training (QAT) algorithms that achieve full quantization, e.g., "Searching for Winograd-aware Quantized Networks" [1] and "Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tiles" [2]. These QAT algorithms share quite similar ideas with this paper, and the accuracy of their final quantized models is much higher.
* [1] https://arxiv.org/abs/2002.10711 (2020)
* [2] https://arxiv.org/abs/2209.12982 (2022)
* If possible, some quantitive evaluation on how faster the proposed algorithm is (compared to previous algorithms) is desired.
* Two typos:
* line 188: v_i -> v_j
* line 239: long -> learn
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Could you briefly compare your proposed algorithm with some QAT methods that also achieve full-quantization (e.g. the two papers I listed in the Weakness section) and summarize the advantage of your algorithm?
* Could you provide some evaluation on the inference speed of the proposed algorithm compared with previous algorithms? If not possible, some simple analysis is also OK. I'd like to know whether there's some hidden computation overhead or complexity introduced by your algorithm.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I hope the authors can better differentiate their method with previous Winograd quantization algorithms and justify their method's advantages. If it simply adapts QAT full-quantization to the PTQ scenario, the novelty and contribution of the paper would be somewhat undermined.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Could you briefly compare your proposed algorithm with some QAT methods that also achieve full-quantization (e.g. the two papers I listed in the Weakness section) and summarize the advantage of your algorithm?
**A1**: Thank you for your suggestion. This response includes three parts: **the difference between QAT and PTQ**, **a comparison with QAT algorithms**, and a **results comparison**.
**In the quantization research community, QAT and PTQ are two distinct research areas.** Although QAT can achieve more promising performance, training the network requires substantial GPU resources and a full training dataset. It is therefore not always practical when the training dataset is unavailable (e.g., private or commercial training data) or when rapid deployment is required. Consequently, much effort has been spent in industry and the research community on improving PTQ performance.
**The drawbacks of QAT are amplified when it is applied to the Winograd algorithm.** Training large Winograd models can be slow and memory-intensive, a direct consequence of implementing all the stages involved in the Winograd convolution and retaining the inputs to each operation in memory to backpropagate through the entire process. According to the open-source code of [1], the Winograd-aware QAT method costs about 10 hours on ResNet-18 using the ImageNet dataset with **4 GPUs** (NVIDIA RTX 3090) for **one training epoch**. In contrast, **the whole optimization procedure** of the proposed PTQ method can be accomplished in around 5 hours on **a single GPU**.
"Searching for Winograd-aware Quantized Networks" [1] and "Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tiles" [2] are two classical papers on QAT for Winograd convolutions. **Compared to them, we have the following differences and advantages**:
- [1] quantizes all the transformation matrices $A$,$B$, and $G$, but the intermediate results such as $B^TX$ and $A^TO$ are not quantized. So [1] needs higher precision to hold these results and carry out the next operation. In addition, their methods, as mentioned above, require a full dataset and huge training resources, which are sometimes unavailable.
- [2] proposes tap-wise quantization for $U$ and $V$, which is used in many follow-up works. They do not re-quantize the high-precision intermediate result $O$, and they utilize shift-and-add operations to accelerate the computation of $A^TOA$.
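To make the roles of the transformation matrices $A$, $B$, $G$ and of the intermediate results $B^TX$ and $A^TO$ concrete, here is a minimal 1-D F(2,3) sketch using the standard Lavin–Gray matrices; the inputs are our own toy values, not taken from either paper:

```python
import numpy as np

# Standard 1-D F(2,3) Winograd matrices (Lavin & Gray, CVPR 2016)
Bt = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """F(2,3): two outputs of a 1-D convolution with a 3-tap filter."""
    V = Bt @ d      # input transform  (the "B^T X" discussed above)
    U = G @ g       # filter transform
    O = U * V       # element-wise multiplication in the Winograd domain
    return At @ O   # output transform (the "A^T O" discussed above)

d = np.array([1.0, 2.0, 3.0, 4.0])   # toy input tile
g = np.array([0.5, 1.0, -1.0])       # toy 3-tap filter
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```

Full quantization, in the sense used above, must quantize not only $U$ and $V$ but also the intermediate tensors such as $V$ and $O$.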
**Moreover, the strong effectiveness of retraining the whole network hides the difficulty of full quantization, especially of quantizing the output tensor $O$ in the Winograd domain.** According to our analysis in Section 4.2.1, $O$ suffers from large distribution differences among taps and calls for per-pixel quantization (called tap-wise quantization in [2]). However, since $O$ takes part in a matrix multiplication (Eq. 4) instead of an element-wise multiplication, per-pixel quantization leads to different scales along the summation dimension, which makes it infeasible on general hardware. Thus, we propose a more hardware-friendly method, FSQ, which factorizes the per-pixel scales into two vectors so that both scales can be moved into the transformation matrices. We also demonstrate **theoretically** that, under the assumption of independent and identically distributed weights and activations, our method is as optimal as per-pixel quantization in minimizing quantization errors.
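As a loose numerical illustration of the factorization idea (the scale estimator below is our own toy choice, not the paper's), assuming the per-tap scale matrix admits a rank-1 factorization $S_{ij} \approx \alpha_i \beta_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4x4 Winograd-domain output whose taps have very different ranges
O = rng.normal(size=(4, 4)) * np.outer([1., 4., 4., 1.], [1., 4., 4., 1.])

# Per-pixel quantization would keep one scale per tap, S[i, j].
# FSQ instead factorizes S into two vectors, S[i, j] ~= alpha[i] * beta[j],
# so alpha can be folded into the rows of A^T and beta into the columns of A
# in the output transform A^T O A.
alpha = np.sqrt(np.abs(O).max(axis=1))   # illustrative, not the paper's estimator
beta = np.sqrt(np.abs(O).max(axis=0))
S = np.outer(alpha, beta)

O_fake_quant = np.round(O / S) * S       # fake-quantize with factorized scales
# Rounding error per tap is bounded by half the (factorized) scale
assert np.all(np.abs(O_fake_quant - O) <= 0.5 * S + 1e-12)
```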
It is also worth noting that although our method only requires a few unlabeled calibration samples and far fewer computation resources, **we achieve performance comparable to previous QAT works.** The differences and the results for ResNet-20 on the CIFAR-10 dataset are shown in Table R2-1.
**Table R2-1 Comparison with Other Methods**
| | Quantization Type | Optimization of Transformation Matrices | Full Quantization | Quantization granularity of $O$ | Acc drop |
| ----------------- | ----------------- | --------------------------------------------------- | ------------------------------------------------------- | ------------------------------- | ----------- |
| BQW[3] | QAT/PTQ | Fixed | Not quantize Winograd transformation | Not quantized | -0.02/-1.42 |
| Winograd-Aware[1] | QAT | End-to-end training (multiple GPUs and full dataset) | Not re-quantize the intermediate results $A^TO$ and $B^TX$ | Per-tensor | -0.7 |
| Tap-wise quant[2] | QAT | Fixed | Not re-quantize the intermediate result $O$ | Not quantized | -0.6 |
| Ours | PTQ | Reconstruction loss (single GPU and unlabeled data) | Quantize all components | FSQ | -0.47 |
**Q2:** If possible, some quantitative evaluation of how much faster the proposed algorithm is (compared to previous algorithms) would be desirable.
**A2:** Because of the length limitation, we answer this question in the **global rebuttal**.
**Q3:** Two typos: line 188: $v_i$ -> $v_j$ line 239: long -> learn
**A3:** Thank you for your careful reading of the paper. These issues will be corrected in the updated manuscript.
[1] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020
[2] Andri et al., "Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tiles", 2022
[3] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the detailed rebuttal, which has solved most of my concerns. I am now leaning between "borderline accept" and "weak accept". But still, due to some concerns on the potential impact of the paper, I'd like to keep my current rating. | Rebuttal 1:
Rebuttal: Thank you reviewers for your helpful feedback and constructive advice. Based on the reviewers' questions, comments, and recommendations, we have made many revisions that may significantly improve the quality of the paper.
**Here, we explain the common concern of reviewers on the computation cost of our algorithm.**
Because of the time limitation, we could not implement Winograd convolution on GPUs or FPGAs, which would necessitate low-level optimizations. Instead, we opt for **BOPs** as an alternative metric for measuring computation cost. This metric is widely used in various fields such as neural architecture search (NAS), pruning, and quantization research [1,2,3].
BOPs is defined as $BOPs=b_1 \cdot b_2 \cdot MAC$, where $b_1$ and $b_2$ represent the bit-widths of the two operands, respectively. **Please note that the optimization procedures (13) and (26) happen before inference.** During inference, the computation cost of quantized Winograd convolution comprises three components: **element-wise multiplications** $U\odot V$, **Winograd transformations ($BXB^T$ and $AOA^T$)**, and **quantization overhead**.
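As a minimal sketch of this metric (the layer size below is hypothetical, for illustration only):

```python
def bops(b1, b2, macs):
    """BOPs = b1 * b2 * MACs: bit-operations of a layer whose two
    multiply-accumulate operands use b1 and b2 bits, respectively."""
    return b1 * b2 * macs

# Hypothetical layer with 1M MACs: 8-bit x 8-bit vs. full precision (32-bit)
assert bops(8, 8, 1_000_000) == 64_000_000
assert bops(32, 32, 1_000_000) // bops(8, 8, 1_000_000) == 16
```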
For the **element-wise multiplication**, like [1, 3], our method uses per-pixel quantization, with a memory overhead of $(m+r-1)\times(m+r-1)$ to store the quantization scales and $N \times C_{out}\times (m+r-1)\times (m+r-1)+ C_{in}\times C_{out}\times (m+r-1)\times(m+r-1)$ FLOPs to re-quantize $U$ and $V$.
For the **Winograd transformations ($BXB^T$ and $AOA^T$)**, as introduced in Section 4.2.2, the benefit of our proposed FSQ is that we can move the scales $\alpha$ and $\beta$ into the transformation matrices. **So we can facilitate a per-tensor matrix multiplication implementation even when per-pixel quantization is utilized.** Therefore, quantizing $X$, $B^TX$, $O$, and $A^TO$ results in a constant memory overhead and a computation overhead of $2 \times N \times (C_{in} + C_{out}) \times (m + r - 1) \times (m + r - 1)$. Because these quantization and re-quantization operations require a number of FLOPs proportional to the tensor size, the overhead is negligible compared to the Winograd transformations, which involve two matrix multiplications.
The BOPs of different methods to quantize F(6,3) ResNet-20 are shown in Table R1. Winograd-AwareNet [4] quantizes all the transformation matrices $A$, $B$, and $G$, but the intermediate results such as $B^TX$ and $A^TO$ are not quantized, so it needs higher precision to hold these results and carry out the next operation. BQW [5] does not quantize the Winograd transformations, to maintain accuracy. Compared to them, our full quantization of Winograd achieves a lower computation cost.
**Table R1. BOPs of different methods to quantize F(6,3) ResNet-20**
| | Im2col(FP) | Winograd(FP) | BQW[5] | Winograd-AwareNet[4] | Ours(PAW) | Ours(PAW+FSQ) |
| ------------------ | ---------- | ------------ | ------- | -------------------- | --------- | ------------- |
| Low-precision BOPs | 0 | 0 | 464.56M | 1282.45M | 464.56M | 791.71M |
| Flops | 40.81M | 12.37M | 5.27M | 0.48M | 5.27M | 0.80M |
| Total BOPs | 10.44G | 3.16G | 1.81G | 1.41G | 1.81G | 0.99G |
**Another common concern of the reviewers is the performance of our algorithm on other networks.** In the paper, we show results for ResNet-style models to compare our method with another PTQ work [5]. Here, we add experiments on VGG and SqueezeNet using the CIFAR-10 dataset. The results align with our expectations. The detailed results are presented in Table R2 and Table R3. We have also added these experiments as Sec. 5 in the supplementary materials.
**Table R2. Accuracy (\%) on VGG11 (92.02%).**
| Tile Size | BRECQ [6] | PAW | FSQ | FSQ+PAW |
| --------- | --------- | ----- | ----- | ------- |
| F4 | 89.13 | 91.56 | 86.59 | 91.55 |
| | 92.02 | 92.28 | 90.82 | 91.83 |
| F6 | 75.10 | 89.94 | 68.98 | 90.34 |
| | 91.27 | 91.88 | 88.44 | 91.63 |
**Table R3. Accuracy (\%) on SqueezeNet (92.62%).**
| Tile Size | BRECQ [6] | PAW | FSQ | FSQ+PAW |
| --------- | --------- | ----- | ----- | ------- |
| F4 | 89.69 | 91.98 | 88.66 | 91.78 |
| | 92.61 | 92.68 | 92.01 | 92.80 |
| F6 | 80.50 | 90.67 | 76.48 | 91.26 |
| | 92.37 | 92.61 | 90.54 | 92.42 |
[1] Wang et al., "Differentiable Joint Pruning and Quantization for Hardware Efficiency", 2020
[2] Guo et al., "Single path oneshot neural architecture search with uniform sampling", 2020
[3] Liu et al., "Towards precise binary neural network with generalized activation functions", 2020
[4] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020
[5] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
[6] Li et al., "BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction", 2021 | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a Post-training-quantization-aware Winograd (PAW) method to optimize all transformation procedures required by the Winograd algorithm to achieve post-training quantization of pre-trained ResNet models. A useful factorized scale quantization (FSQ) method is also proposed to balance the differences in the range of values in the Winograd domain. Obvious increases in terms of classification accuracy values on CIFAR-10 and ImageNet datasets are obtained with different ResNet-style model structures.
Strengths: The paper achieved good improvements by solving problems observed in practice in cases with large tile sizes. It is useful to know the problems the authors observed and their solutions.
Weaknesses: 1. The presentation of the paper is not satisfactory, and it is quite difficult to follow. A lot of polishing work needs to be done before the paper could reach a state that I could recommend accepting.
2. Some motivation for the work needs to be justified: why check for and solve the issue that arises when quantizing ResNet-style models with large (>=4) tile sizes? Some justification and performance comparisons against smaller tile sizes need to be provided.
Many minor issues are listed below:
1. Many issues in the use of English:
a. "Another approach to accelerate CNNs is faster implementations of convolution, such as FFT [11] and Winograd [10]. Among them," ...
b. "for the first time. And we further propose a " ...
and many others (too many to list)...
2. "surpasses the previous state-of-the-art method by 8.27% and 5.38% on ResNet-18 and ResNet-34, respectively." -> Better to be clear surpass in terms of what?
3. Different numbers of significant figures are used in Table 2.
4. Abbreviation std is not defined in the paper.
5. "does not require retraining the network end-to-end." -> It seems the authors may mean both updating the entire model and training from scratch by using the wording "end-to-end" here. Many ambiguities like this exist throughout the paper.
6. Would be more friendly to the readers to define tile size in the paper (I guess you mean kernel size or filter size here). Could also provide the definitions of F(4,3) and F(6,3) before using them.
7. Many issues in the references. For instance, conference references [7], [19], and [30] are in different formats. Some references, such as [33], are not cited in their official publication sources.
8. In table 5, perhaps better to use the wording "baseline" than "strong baseline". We believe all baseline systems authors try to quote are strong enough.
9. Since non-standard tile sizes are used in this paper, it may be better to explain more carefully about the model architectures rather than simply referring to them using the standard terms (ResNet-34 etc.).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. From Table 4, the cases in which PAW resulted in the most considerable improvements are 6-bit and 8-bit for F(6,3). Could you please explain in what type of hardware, 6-bit could be a more efficient setting than 8-bit?
2. From Table 5, F(4,3) results are often better than the F(6,3) results, which matches my understanding. In that sense, could you please explain why improving the performance with tile size 6 matters? Following that logic, could you please provide more evidence of the advantage of using a large tile size (>=4)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Only ResNet-style model architectures are tested, which is limited. The authors also did not compare the results and efficiency against those with the standard tile size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our article and providing helpful feedback. We have scrutinized the manuscript and made corresponding modifications, e.g., correcting some issues in tables and references. We have also polished the paper per your recommendations and explained some specific terms (e.g., tile size) more clearly. **We apologize for the confusion between tile size and filter size.** The final version would be more reader-friendly, even for those unfamiliar with the Winograd algorithm. **We also analyze the relationship between speedup ratio and tile sizes and formally demonstrate the necessity of adopting large tile sizes to accelerate CNNs.**
**Q1:** Some motivation for the work needs to be justified: why checking and solving the issue happened when quantizing ResNet-style models with large (>=4) tile sizes?
**A1:** We apologize for the confusion. The tile size $m$ in Winograd convolution differs from the filter size $r$ in standard convolution. **For $r \times r$ standard convolutions with filter size $r$, the algorithm transforms the convolution operations to the Winograd domain and generates $m \times m$ (spatial) outputs at a time, which is denoted as $F(m,r)$.** The parameter $m$ is called the tile size and is used to balance speedup and numerical precision.
Compared with standard convolution, the Winograd algorithm requires fewer multiplications. The speedup ratio depends on tile size $m$. For a $m \times m$ output patch, $F(m,r)$ Winograd convolution requires $C_{in}C_{out}(m+r-1)^2+2C_{in}(m+r-1)^2+2C_{out}(m+r-1)^2m$ multiplications. In contrast, standard convolution requires $C_{in}C_{out}(mr)^2$ multiplications. Usually, filter numbers $C_{out}$ and channel numbers $C_{in}$ are much larger than $m$ and $r$ in modern CNNs, and the speedup ratio is about $\frac{(mr)^2}{(m+r-1)^2}$. Therefore, **the larger the tile size $m$, the more significant the speedup becomes (Table R1-1).**
However, Winograd convolution suffers from numeric accuracy issues: the floating point (FP) error increases **exponentially** with the tile size [1] (Table R1-1). Therefore, considering the trade-off between accuracy and efficiency, most papers [5,7] focus on F(4,3) and F(6,3).
**Table R1-1 FP Error and Speedup**
| Tile size $m$ | 2 | 4 | 6 | 8 | 10 |
|-|:-:|:-:|:-:|:-:|:-:|
|ResNet-20 FP error| -- |$1.19×10^{−3}$|$3.23×10^{−3}$|$7.46×10^{−3}$|$7.88×10^{−2}$|
|SqueezeNet FP error| -- |$7.31×10^{-5}$|$1.32×10^{−4}$|$2.17×10^{−4}$|$1.25×10^{−3}$|
|Speedup|2.25|4|5.0625|5.76|6.25|
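The speedup row of Table R1-1 follows directly from the multiplication-count ratio derived above; a quick check (our own sketch):

```python
def winograd_speedup(m, r=3):
    # Multiplication ratio of direct convolution to F(m, r) Winograd per tile,
    # ignoring transform costs (valid when C_in and C_out are much larger than m and r)
    return (m * r) ** 2 / (m + r - 1) ** 2

# Reproduces the speedup row of Table R1-1 for r = 3
assert [winograd_speedup(m) for m in (2, 4, 6, 8, 10)] == [2.25, 4.0, 5.0625, 5.76, 6.25]
```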
**Q2:** Many issues in the use of English: a. "Another approach to accelerate CNNs is faster implementations of convolution, such as FFT [11] and Winograd [10]. Among them," ... b. "for the first time. And we further propose a "
**A2:** We have addressed the issues you highlighted as follows:
(a) "An alternative method for enhancing the speed of CNNs involves the development of faster convolution implementations..."
(b) "...for the first time. We further propose a..."
(c) "...our proposed method outperforms ... in terms of the **top-1 accuracy** ".
(d) "...the standard deviation (**std**) of $O$ distributed..."
**Q3:** "Post-training quantization is a more lightweight method that does not require retraining the network end-to-end" -> It seems the authors may mean both updating the entire model and training from scratch by using the wording "end-to-end" here.
**A3:** You are right. **QAT updates the entire model with end-to-end training, while PTQ only needs to optimize quantization parameters layer by layer.** Due to page constraints, we use the term "end-to-end" to convey this distinction in the related works section. Many other papers use the exact phrase to distinguish between PTQ and QAT, e.g., "A White Paper on Neural Network Quantization"[2]. In the final revision, we have modified the sentence as the bolded sentence.
**Q4:** Since non-standard tile sizes are used in this paper, it may be better to explain more carefully about the model architectures rather than simply referring to them using the standard terms (ResNet-34 etc.).
**A4:** As introduced in A2, the tile size is a parameter of the Winograd algorithm. The model architectures (e.g., filter size and input/output sizes of each layer) remain unchanged.
**Q5:** Could you please explain in what type of hardware, 6-bit could be a more efficient setting than 8-bit?
**A5:** Although current general-purpose GPUs may not fully support 6-bit operations, many domain-specific accelerators support precision-scalable computation. For example, Stripes [3] and BitFusion [4], both built on the bit-serial approach, can be tuned at run time to the desired precision mode (e.g., 6-bit). On this type of hardware, the advantages of 6-bit computation over 8-bit computation are prominent and can be categorized into two key aspects:
1. **Reduced Storage Access and Energy Consumption:**
2. **Enhanced Computation Latency in Bit-Serial Scenarios:** In bit-serial architectures, 6-bit computation exhibits lower latency than 8-bit computation, achieving an acceleration effect of 1.3x [3].
**Q6:** Only ResNet-style model architectures are tested, which is limited.
**A6:** Because of the length limitation, we answer this question in the **global rebuttal.**
[1] Alam et al., "Winograd Convolution for Deep Neural Networks : Efficient Point Selection", 2022
[2] Nagel et al., "A White Paper on Neural Network Quantization", 2021
[3] Judd et al., "Stripes: Bit-serial deep neural network computing", 2016
[4] Sharma et al., "Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Network", 2018
[5] Chikin et al., "Channel Balancing for Accurate Quantization of Winograd Convolutions", 2022
[6] Li et al., "BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction", 2021
[7] Fernández-Marqués et al., "Searching for Winograd-aware Quantized Networks", 2020 | null | null | null | null | null | null |
Topological RANSAC for instance verification and retrieval without fine-tuning | Accept (poster) | Summary: This work introduces a new approach to geometric verification for image retrieval and matching, which is based not on pixel-perfect robust estimation but on something called "topological common sense". The authors discuss the limitations of the commonly used Spatial Verification strategies and draw inspiration from how humans compare two images (topological relations vs. point-level accurate estimation of transformation matrices). Good improvements are shown on the ROxf/RPar datasets.
Strengths: - TP is a very interesting idea compared to SP.
- The biological inspiration for TP is good, and sound.
- The fact that "TP without SP (Ours)" outperforms "TP with SP (Ours)" is a good indication of the robustness of the proposed method.
- The high explainability experiments are interesting and seem technically accurate.
Weaknesses: - Why is it claimed that this method offers explainability? To me it seems like the SP methods also offer explainability. Can the authors explain?
- Why was retrieval only chosen as a proxy task? It seems like TP could be used in other tasks too (e.g. matching, loop closure) which would make this work much more general and thus much stronger.
- It seems like Hypergraph Propagation results are not ideal when using SP. I am wondering if there is any extra loss or consistency regularizer that could be added to SP to make it avoid this. In a sense, is this a fair comparison? Are all pairwise pose-estimation consistency losses taken into account in the three-image case? {e.g., L63 RANSAC: SP calculates a transformation matrix M between two views, but is that enforced somehow in the triplet?}
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My questions are above in the Weaknesses section, and these are the things I would like to see answered from the authors.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I did not find that the authors addressed limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JDZk,
Thank you for acknowledging the significance of our method and recognizing its merit in terms of its robustness and technical accuracy. We wish to address the concerns you've raised:
Explainability: We'd like to emphasize that we aren't suggesting our method, TP, offers superior explainability in comparison to SP, which is well-regarded for its strong explainability. Instead, our contention is that TP offers better explainability than other methods that augment SP, which often rely heavily on fine-tuning sets and compromise on clarity and transparency. If we were to identify a specific advantage TP has over SP in terms of explainability, it would be its user-friendly reasoning: the HRs generated by TP align more intuitively with human understanding, in contrast with SP's inlier counts, which, without the context provided by the homography matrix, can sometimes puzzle users trying to discern why certain matchings are selected or discarded.
Task Specificity: Our TP is designed to target rigid-body recognition challenges. The rationale for leveraging retrieval as the proxy task stems from the notorious complexity of datasets like Oxford and Paris. These datasets encompass hard positives and negatives which, while challenging, have ground truths verifiable by human annotators. As concurred by Reviewer WbBp and discussed in Section 5, the ability of humans to discern these tough cases underscores the importance of our target task. Conversely, other rigid-body benchmarks do not match the complexity of Oxford and Paris or guarantee human recognizability. For instance, the geo-localization benchmark MSLS uses images within a 25m radius as ground-truth indicators; regrettably, some pairs lack visual overlap, making human recognition based solely on visual cues infeasible.
To further emphasize this, we evaluated our approach on 100 queries from the challenging geo-localization benchmark MSLS val set. As depicted in Table 2 of the rebuttal PDF, our method shows a marked improvement over SP with SIFT features. Surprisingly, our TP even rivals the performance of the recent large-scale fine-tuned model R2Former (CVPR 23), which was fine-tuned on more than 1.44 million training images in MSLS. This experiment, we believe, reinforces our method's versatility, as we already showed in the submitted paper on page 7 and in Table 1 with GeM fine-tuned on the 4-million-image GLD training set.
Fairness in Comparison with SP on Hypergraph Propagation: Our commitment to fairness is evident from the consistent use of the code, weights, and pre-calculated SP results employed by the hypergraph propagation (HP) research. Given that our parameters mirror those tuned by the HP authors (which still achieve the best performance on the ROxford/RParis benchmarks), it is reasonable to think that the current setting for SP on HP is good and the comparison stands equitable. The sole modification was to change hop 3 to hop 2 to reduce computational overhead. We suspect your concern about fairness arises because the performance listed in our paper is lower than that reported in the HP paper. To address any persisting concerns, we conducted additional tests comparing SP and our TP with propagation set to hop 3 in Table 3 of the rebuttal PDF. We reproduced the performance reported in their paper, and our TP outperformed their results, further validating our method's efficacy.
We hope these clarifications address your concerns, and thank you once again for your constructive feedback.
Warm regards,
Authors | Summary: To address the limitations of SPatial verification (SP), the authors introduce topological consistency into the RANSAC process for image retrieval without fine-tuning. With the so-called homeomorphism regions, the proposed method achieves better results than some typical methods on four datasets.
Strengths: 1. This work introduces the topological consistency to the RANSAC process. I think it is interesting.
2. The proposed methods are largely based on existing works. Thus, most of the techniques are correct.
3. The performance of the proposed methods is better than some typical methods.
Weaknesses: Some key concerns should be addressed.
1. Technical contributions
In fact, methods combining RANSAC and topological models are not new in the current CV field, especially in image retrieval. I agree that the SP method has big problems without fine-tuning sets. However, the authors mainly combine the so-called topological model and RANSAC. Most of the other modules are not new. The definition of HR is common sense. There are no new insights. In my view, this is not a significant contribution, but only a typical combination method for better matching feature representations. Besides, as far as I know, other methods without fine-tuning can also perform better than the proposed method, such as BLIP, BEiT, etc. The proposed functions in Fig. 2 are also naive for HRs. Why should they be in that order? There is no full motivation. Thus, the technical contributions are rather limited, considering most of the used modules come from typical image-processing operations.
2. Insufficient experiments
First, throughout the experiments, I find that there are no full comparisons with other methods. In fact, the authors only compare with a limited set of methods, as shown in Tab. 1. As for the methods using SIFT, I suggest adding more comparisons in this aspect. For the pre-trained models, more large-scale models should be added, such as BLIP, BEiT, etc. Second, since the title mentions image retrieval, I think it would be better to perform experiments on different kinds of object-retrieval datasets, such as persons (Market1501, MSMT2017), cars (VehicleReID), food (Food100k), etc. Third, there is no full analysis of the key modules. For example, how about not using HRs, or just using the visually similar regions? The effect of the procedure or operation order in Fig. 2 is missing. As far as I know, there are many different matching techniques. How about applying the proposed methods in GLUE, SuperGLUE? Are the deep features important in the proposed framework? I think more experiments would make the proposed method more convincing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I think the authors have partly addressed the limitations. However, the speed and computation should be discussed for better future applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer pXos,
Thank you for your meticulous feedback. We understand and acknowledge your concerns and have accordingly undertaken additional experiments and provided clearer explanations.
# Main Contribution and Novelty
Our contribution lies in our innovative adaptation of RANSAC. We replace its geometry model with a topological one, rather than a mere fusion. The essence lies in its innate adaptability to any input pair, functioning autonomously, without the necessity of training data.
While notable methods such as SuperGlue and R2Former (CVPR 23) utilize topological insights, they do not intrinsically modify RANSAC's architecture. Our method, unlike them, excels without any foundational reliance on prior training data. Note that Fig. 2 portrays only a single loop iteration of RANSAC, whose number of iterations is variable and not predefined. Page 4 should have clarified this; however, we can provide further elaboration in the discussion stage if required.
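Schematically (this is a generic sketch under our own simplifying assumptions, not the paper's exact algorithm), the point is that RANSAC's hypothesize-and-verify loop is agnostic to the model class, so a topological model can take the slot a homography normally occupies:

```python
import random

def ransac(matches, propose, score, iters=100, seed=0):
    """Generic hypothesize-and-verify loop with a pluggable model.
    `propose` builds a hypothesis from a random sample (e.g. homeomorphism
    regions instead of a homography); `score` rates it against all matches
    (e.g. topological consistency instead of counting reprojection inliers)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iters):
        sample = rng.sample(matches, k=min(4, len(matches)))
        hypothesis = propose(sample)
        s = score(hypothesis, matches)
        if s > best_score:
            best, best_score = hypothesis, s
    return best, best_score

# Toy usage: the "hypothesis" is a sample mean, scored by closeness to the data mean
matches = list(range(20))
best, best_s = ransac(matches,
                      propose=lambda s: sum(s) / len(s),
                      score=lambda h, m: -abs(h - sum(m) / len(m)))
assert best is not None and best_s <= 0
```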
Please note that feedback from other reviewers supports our standpoint:
"Very interesting" - JDZk.
"Very different than traditional approaches and refreshing" - X5kj.
"Brings many insights and advantages and revitalized the classical algorithms" - WpBp.
# Scope and Motivation
Our work delves deep into hypothesis-testing processes that function independently of training; this direction is supported by Reviewer WpBp as "remarkable considering the recent trend of pursuing stronger features through fine-tuning. As discussed in Section 5, it is evident that as humans, we can easily determine if the given image pair is a mismatch without too much contextual information." The result that our training-free TP is on par with large-scale fine-tuned GeM shows the potential of this direction.
Addressing rigid-body instance retrieval, our findings largely hinge on the ROxford and RParis benchmarks. However, based on your insights, we've widened our experimental spectrum to include the geo-localization benchmark MSLS.
The term "image retrieval" in our title aligns with prevailing norms, as many landmark retrieval papers predominantly analyzing Oxford/Paris use such phrasing. Nonetheless, we're open to title revisions to ensure clarity.
# Additional Experiments
Responding to your suggestions, we've taken the following measures (all the result tables are in the rebuttal pdf file):
Table 1 additionally compared our methodology with more techniques such as SIFT-based FisherVector and SMK, and pre-trained alternatives like BLIP and the BEiT series. While BLIP and BEiT's performance might seem unexpectedly subpar, challenges intrinsic to instance retrieval benchmarks like Revisited Oxford/Paris, featuring extreme viewpoints, occlusions, and pattern similarities, could be contributing factors. We've rigorously vetted our BLIP and BEiT implementations, confirming their proficiency in identifying easy cases, but recognizing their limitations in discerning hard positives and negatives.
Table 2 assessed our approach's efficacy on the MSLS dataset, an intricate geo-localization benchmark. The results, especially when pitted against the best fine-tuned model R2Former (CVPR23), are telling of our method's prowess. Because we only consider rigid-body instance retrieval, the datasets you mentioned are not in the scope of this paper (we are open to changing the title to avoid misunderstanding). Nevertheless, the results on the well-known ROxford, RParis, and MSLS benchmarks show that our method generalizes across two representative rigid-body recognition tasks: landmark retrieval and geo-localization.
To further highlight the effectiveness of our proposed HRs, we compared them in Table 1 with regions established by acclaimed methods such as SuperPoint+SuperGlue, SuperPoint+SP, and R2D2, utilizing their pre-existing weights without fine-tuning on the landmarks in Oxford and Paris. Despite these methods exhibiting strength in tasks like pose estimation and SfM, they couldn't parallel our TP's efficiency. Finally, we would like to humbly mention that our initially selected baselines are strong and representative methods in the landmark retrieval field; initial feedback from peer reviewers on our experiments has been particularly encouraging:
“The ablation study is a good indication of robustness” - Reviewer JDZk.
“Very strong results, well-constructed experimental section, apt baselines, and thorough comparisons” - Reviewer X5kj.
“The benchmarking outcomes are truly exceptional” - Reviewer WpBp.
In conclusion, your feedback is invaluable, and we genuinely believe our refined explanations and bolstered experimental data should address any lingering concerns.
Warm regards,
Authors
---
Rebuttal Comment 1.1:
Title: Feedback for authors
Comment: I have read the rebuttal and the review comments. I think the authors have partly addressed my concerns. I would like to improve my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for revisiting our submission. We appreciate your feedback and the constructive comments you provided in the initial review. We're pleased to note that our rebuttal partly addressed your concerns and that you would like to improve your score. If there are specific areas still needing clarification, please highlight them. We aim to address any residual issues and appreciate your guidance in this matter. | Summary: This paper reexamines the classical instance recognition problem in computer vision, a problem that holds great significance in applications such as image retrieval. In recent years, data-driven approaches that rely on fine-tuning pre-trained deep models have drawn much attention. However, these methods not only face difficulties in obtaining the data for fine-tuning in real-world applications, but also lack explainability. The authors closely examined the vulnerabilities of the Spatial verification (SP) method, which remains the most performant method in the non-fine-tuning setting, and propose the new Topological verification (TP) model to replace SP in the RANSAC process. Inspired by the human vision system, TP-based RANSAC is a region-growing algorithm which seeks and maximizes the Homeomorphism region (HR). Starting from some seed patches called the Hypothesis Set, the *fovea* and *saccade* functions iteratively examine neighboring regions and expand the HR by enforcing several important conditions, including *local consistency*, *topological consistency* and *connectivity*. Evaluation on challenging benchmarks, such as ROxf/RPar and ROxf+1M/RPar+1M, demonstrates that the proposed method establishes new SOTA performance across all methods without the need for fine-tuning.
Strengths: After a decade with deep learning techniques dominating the CV/ML/AI world, it is refreshing and delightful to read this paper, which revitalizes the classical algorithms by introducing many insights and advancements. The authors delve into the SP model that has been widely used in the hypothesis testing of RANSAC process and identify its vulnerabilities that were long overlooked by the previous methods. This is remarkable considering the recent trend of pursuing stronger features through fine-tuning. As discussed in Section 5, it is evident that as humans, we can easily determine if the given image pair is a mismatch without too much contextual information. It is intriguing to observe how the topological rules introduced in this paper closely emulate the tactics that humans may leverage to solve this problem. The bio-inspired fovea and saccade functions effectively replicate such brain mechanisms in an algorithmic fashion. Compared to the SP model and fine-tuning methods, the topological model possesses better explainability. Moreover, it retains the characteristic of not necessitating fine-tuning while also being compatible with large-scale pre-trained models. The benchmarking results are exceptional. I particularly like the insightful discussions in Section 5. It is a very good read.
Weaknesses: If I have to point out one weakness of this paper, I would say that there are still some missing details (some are listed in the Question section of this review) in the main paper and supplemental material, though most of them are minor and do not cause difficulty in understanding the method. However, I still think that the paper could be made a little clearer for readers who do not particularly work on instance recognition and image retrieval problems. For example, I guess not every reader is familiar with the terms *hop1* and *hop2* shown in Table 2.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. It is not very clear to me how to obtain the initial hypothesis set? It perhaps has been standardized by previous RANSAC-based SP methods. I still think it appropriate to mention it in the paper for clarity.
2. How does saccade function work? I assume that it is similar to most advancing front techniques, as illustrated by part of Figure 2. I don't seem to find more details in the supplemental material as well.
3. How to perform patch location adjustment? I assume that it is accomplished by computing the minimal enclosing bounding box of the matched key points.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Nowadays, mainstream research seems to lean towards seeking larger and stronger pre-trained models to vectorize everything and simplify the retrieval problem to similarity search via simple operations like dot products. Focusing on rigid instance recognition, the topological model excels in the single-modality domain and has little applicability in cross-modality retrieval tasks as popularized by CLIP. On the other hand, inspired by the human vision system, this work could also be a great example showing that the pursuit of large models might be a detour when solving a cognitive problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer WpBp,
Thank you for your positive remarks regarding our work. We truly appreciate your support for the direction of classical RANSAC, especially at a time when it isn't the prevailing trend. We concur with your observation that our findings underscore the potential of RANSAC-based methods when combined with novel insights and improvements. Below, we address your queries:
Initial Set: Indeed, the initial set originates from the intermediate phase of the standardized RANSAC process, notably the nearest neighbor with ratio test → SP iteration based on Equation (1) in the manuscript. The initial set can derive either from the ratio test result or the final SP outcome. Our experiments, as depicted in Table 1 in the main body of the paper, compare these two approaches. Notably, we present 'TP with SP' (utilizing the final SP result) and 'TP without SP' (employing the basic ratio test result). The intent was to gauge the impact of SP filtering on the hypothesis set. Interestingly, the integration of SP unexpectedly reduced the final accuracy. A plausible reason might be the incorrect exclusion of numerous prospective positive match pairs by SP.
Saccade Function: Apologies for the oversight in elaboration. Given an initial matching pair (the hypothesis), the saccade function begins its traverse from this pair, covering the entire image region using Breadth-First Search (BFS). Simplistically, imagine it dividing the image into patches similar to the vision transformer. From the initial hypothesis matching patch pair, the saccade function employs BFS across all patches, selecting those that meet the three HR conditions. The actual trajectory is intriguing due to the patch location adjustment phase, which results in patches of varied sizes and irregular locations compared to the transformer's pre-segmented patches.
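To make the traversal concrete, here is a minimal sketch of this kind of BFS region growing (an illustration only, not our actual implementation; `satisfies_hr` is a hypothetical placeholder for the three HR conditions, and patches are indexed on a regular grid, ignoring the location-adjustment step):

```python
from collections import deque

def saccade(grid_shape, seed, satisfies_hr):
    """Grow a homeomorphism region from a seed patch by BFS.

    grid_shape: (rows, cols) of the patch grid
    seed: (r, c) index of the initial hypothesis patch
    satisfies_hr: callable(patch) -> bool, a stand-in for the
        local-consistency / topological-consistency / connectivity tests
    """
    rows, cols = grid_shape
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # examine the 4-neighborhood of the current patch
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and nb not in region and satisfies_hr(nb)):
                region.add(nb)
                queue.append(nb)
    return region
```

Starting from the hypothesis patch, the frontier expands to any neighboring patch that passes the HR test, mirroring the advancing-front behavior described above.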
Patch Location Adjustment: You've hit the mark; it indeed computes the smallest enclosing bounding box of the matched key points.
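Concretely, that adjustment reduces to two NumPy reductions; a hypothetical helper for illustration (the name `adjust_patch` and the (x, y) key-point format are our assumptions, not from the paper):

```python
import numpy as np

def adjust_patch(keypoints):
    """Return the minimal axis-aligned bounding box
    (x_min, y_min, x_max, y_max) enclosing the matched key points."""
    pts = np.asarray(keypoints, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max
```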
On 'Hop 1' and 'Hop 2': We will certainly clarify this in the revised manuscript. To explain them, during image retrieval diffusion or propagation, it's unfeasible to diffuse across the entire database due to computational constraints. As a workaround, we limit the graph and diffuse among, say, the hop 3 neighbors of the query image, optimizing retrieval speed with minimal accuracy compromise. 'Hop 1' and 'Hop 2' denote propagation among hop 1 or hop 2 neighbors for each query, respectively. In our experiments, the hyperparameter was adjusted from hop 3 to hop 2 to curtail offline computational demands. To ensure fairness in comparison, we've included the hop 3 results in the rebuttal PDF. These further emphasize TP's superiority over SP.
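For readers unfamiliar with the terminology, the hop-k neighbors of a query are the nodes reachable within k edges of it in the retrieval graph; a minimal illustrative sketch (not our actual implementation) is:

```python
from collections import deque

def hop_k_neighbors(adj, query, k):
    """Nodes within k edges of `query` in an adjacency-list graph,
    excluding the query itself."""
    dist = {query: 0}
    queue = deque([query])
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # do not expand beyond depth k
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v, d in dist.items() if 0 < d <= k}
```

Diffusion restricted to `hop_k_neighbors(adj, query, 2)` rather than the whole database is what 'Hop 2' refers to above.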
Once again, thank you for your invaluable feedback.
Warm regards,
Authors | Summary: This paper proposes an approach in the area of image retrieval, more specifically landmark retrieval. Authors propose a new method for spatial verification that replaces standard and commonly used spatial model in RANSAC-based approaches, with topological one. Experiments show SOTA performance using handcrafted features, that even beats pretrained and is comparable to fine-tuned approaches.
Strengths: + Strong contributions: novel approach in hypothesis testing with RANSAC that is very different than traditional approaches and refreshing, inspiration from bio processes.
+ Very strong results with hand-crafted SIFT features and well established ranking framework with spatial verification.
+ Well written paper
+ Good experimental section, properly set baselines, good comparisons
Weaknesses: - Time and memory analysis missing. It would be interesting how does this approach compares with standard RANSAC, and also how does the whole framework compare with competitive frameworks during inference
- This approach could be combined with other local features, even the trained or fine-tuned ones. It would make paper stronger if it was shown how well does it work with features beyond SIFT
- Spatial verification can be applied and help in visual geo-localizations on benchmarks such as Pittsburg30k and Tokyo24/7, it would be interesting to see how this method works there
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: My first weakness is something that should be addressed in the rebuttal, ie the time and memory analysis. Other two weaknesses: ablate different features and visual geo-localization are improvements that would make paper stronger, but are not necessary.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer X5kj,
Thank you for your thorough feedback and for recognizing the strength of our contribution. We value your insights and would like to address each of your questions:
Time and Memory Cost: We have provided details on time costs in lines 246-248 of our paper. Our TP, implemented in Python, averages 1.23s per image pair, making it slower than the C-implemented RANSAC SP at 0.53s. Nonetheless, transitioning to a C-based implementation promises substantial speed enhancements. The time distribution across the entire retrieval pipeline can be seen in Table 4 of the attached rebuttal PDF. Compared to the initial ranking stage (0.57s), the reranking stages of SP (53.2s) and TP (123.4s) appear relatively time-consuming. However, this disparity can be attributed to implementation choices: parallelizing the SP/TP verification can drastically cut down time, particularly if the top 100 candidates are verified concurrently. Regarding memory, costs can be broken down into model size and local-feature storage. Both SP and TP are lightweight and mobile-friendly, avoiding the millions of parameters seen in models like ResNet, and the memory cost for local features is identical for SP and TP. For tasks where memory consumption is a critical factor, this cost can be mitigated by computing features on the fly rather than storing them: stored SIFT features average 0.43 MB per image (in NumPy array format) on the Oxford set, while on-the-fly computation takes about 0.16s per image pair.
Compatibility with Other Local Features: We demonstrated the efficiency of integrating TP with DELG local features in Table 2 of the paper; our TP+DELG scored 86 and 71 on ROxford, surpassing the 85 and 59 of SP+DELG. In the attached rebuttal PDF, we also provide a comparison between SP+SuperPoint and TP+SuperPoint. The results show TP+SuperPoint scoring 69.5 and 46 on ROxford, as opposed to SP+SuperPoint's 66.1 and 42.4. Based on these comparisons across SIFT, DELG, and SuperPoint, we posit that TP is a superior alternative to SP for rigid-body retrieval tasks.
Geo-localization Experiments: At your suggestion, we conducted an evaluation on geo-localization using the renowned MSLS dataset. Our method's efficacy on the challenging MSLS validation set can be seen in Table 3 of the attached PDF. Our approach notably outperforms SP when using SIFT features (top-1 recall 0.59 vs. 0.74). Considering SP's prowess in geo-localization tasks, our TP's significant performance boost (a 25% improvement) was a pleasant surprise. We believe the reason is that street views in geo-localization often deviate considerably from the planar assumption inherent in SP. This deviation means that SP might primarily identify just one side or facet of the street, leading to its limitations in this scenario. In contrast, our TP demonstrates more adaptability under these conditions, making it a more versatile choice for such applications. Notably, our TP paired with SIFT even rivals the performance of the cutting-edge, fine-tuned R2Former model (CVPR '23), with top-1 recalls of 0.74 vs. 0.81, respectively. This experiment underscores our method's versatility.
We are grateful for your thoughtful review and hope our clarifications prove satisfactory.
Warm regards,
Authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
First and foremost, we'd like to thank all the reviewers for the time and effort dedicated to reviewing our work. Your feedback has been instrumental in highlighting areas of improvement, and we've conducted additional experiments in response to your insightful comments.
Additional Experiments:
Table 1 offers a comparison encompassing more SIFT-based methods and those that are pre-trained, providing a broader landscape of performance benchmarks.
In Table 2, we delve into the performance of SP and our TP on the widely-recognized geo-localization benchmark MSLS.
Table 3 elucidates the results when SP and TP are amalgamated with DELG, specifically focusing on the hypergraph propagation among hop 3 neighbors.
Lastly, Table 4 provides a granular look at the time distribution throughout the retrieval process.
Your detailed feedback has not only enabled us to refine our work but has also guided our direction in conducting these supplementary experiments. We're optimistic that these additions provide a more comprehensive and robust foundation to our research.
Once again, thank you for your invaluable insights. We look forward to any further comments or suggestions you might have.
Warm regards,
Authors
Pdf: /pdf/dabd44b9ba194d4616904641b77729f363d4cf30.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Long Sequence Hopfield Memory | Accept (poster) | Summary: This paper proposes to introduce nonlinear interaction terms in the Amari-Hopfield type recurrent networks for learning long binary sequences. Analytical results on the network capacity and a new learning algorithm are provided.
Strengths: 1. The analytic calculation of the network capacity on random sequences in Section 2 is a very interesting theoretical result, extending previous work on the capacity of storing static patterns in the networks. The simulation results match the theoretical calculation, which is excellent.
2. The paper is well-written and easy to follow.
3. Code is provided for the reproduction of the experiments.
Weaknesses: 1. It is technically incorrect to state that "a major limitation of the Hopfield Network and related associative memory models is its capacity". In [1], it has been shown Hopfield model can store sequences of maximal length 2^N. The short length of sequences is largely due to the temporal asymmetric Hebbian learning algorithm, rather than the model.
2. There is a vast literature [2,3,4,5] on overcoming the Hebbian algorithm's limitation in storing correlated patterns in the Hopfield model, which is not mentioned in this paper. The ideas can be extended to store sequences. The authors should do a literature review on them and compare the pros and cons with their approach in solving the problem.
3. The theory is conducted on random sequences, rather than correlated pattern sequences, which should be addressed. In the experiments in Section 2.4, the MNIST digit images in a sequence are in essentially random order, so the correlation between patterns is not strong enough to validate the proposed method.
4. Robustness of sequence retrieval is missing in the paper. A major feature of the Hopfield model is its robustness in recovering static patterns under noise. Can the proposed method recover the rest of a sequence, given a perturbed pattern? An evaluation is needed.
5. In Line 65-68, the $\xi$ should be bold-faced. In Line 95, $\xi_2^1$ should be $\xi_1^2$.
[1] Exponentially Long Orbits in Hopfield Neural Networks, Muscinelli, Gerstner, Brea. Neural Computation, 2017.
[2] Learning of correlated patterns in spin-glass networks by local learning rules. Diederich, Opper. PRL, 1987.
[3] The space of interactions in neural network models. Gardner. Journal of Physics A, 1988.
[4] Perceptron-like learning in time-summating neural networks. Bressloff, Taylor. Journal of Physics A, 1999.
[5] Dynamics of Memory Representations in Networks with Novelty-Facilitated Synaptic Plasticity. Blumenfeld, Preminger, Sagi, Tsodyks. Neuron, 2006.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I appreciate the theoretical part of this paper, which is valuable to the community per se. However, I find it inconsistent with the problem posed at the beginning of this paper: learning long and correlated pattern sequences. The experiments do not convince me that correlated pattern sequences can be handled by the proposed method either.
However, I would like to raise my rating by 1-2 points, if the authors can in the rebuttal or revised paper
1) evaluate their method on the Moving MNIST dataset, in which the successive patterns are highly correlated.
https://www.cs.toronto.edu/~nitish/unsupervised_video/
2) evaluate their method for the robustness of sequence retrieval under noise.
Moreover, if the authors can additionally provide some theoretical justification on storing correlated sequence patterns (a shortcoming which I have pointed out in the review), plus all my concerns are well-addressed, I could raise the rating by 3-4 points.
### Rebuttal
I have read the rebuttal and believe the authors have addressed my points. I therefore raise my rating to "6. Weakly Accept".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the limitation of the analytic calculation is mentioned in Line 353-358.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the clear criticism and insightful comments, especially the suggestion to use the Moving MNIST dataset and include simulations demonstrating robust sequence retrieval, which we have included in the global rebuttal. We have also sketched an outline for the capacity of biased patterns.
**Weaknesses:**
> It is technically incorrect to state that "a major limitation of the Hopfield Network and related associative memory models is its capacity". In [1], it has been shown Hopfield model can store sequences of maximal length 2^N. The short length of sequences is largely due to the temporal asymmetric Hebbian learning algorithm, rather than the model.
Thank you for this comment. By Hopfield Network, we specifically refer to a network with a Hebbian-like learning rule, and will clarify this in the camera-ready version. The paper mentioned in [1] describes an algorithm such that given a network of size $N$, one can construct a sequence of $2^N$ patterns and a specific update rule to recall these patterns. However, it will not work for arbitrary sequences and does not have error correction capabilities, which is a core feature of our model. We will be more precise in the camera-ready version.
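For context, the temporal asymmetric Hebbian rule under discussion stores each transition $\xi^{\mu} \to \xi^{\mu+1}$ as an outer product; a minimal NumPy sketch of this textbook construction (illustrative only; it is not our DenseNet model):

```python
import numpy as np

def store_sequence(patterns):
    """Temporal asymmetric Hebbian rule:
    W = (1/N) * sum_mu outer(xi^{mu+1}, xi^mu),
    where `patterns` is a (P, N) array of +/-1 patterns."""
    P, N = patterns.shape
    W = np.zeros((N, N))
    for mu in range(P - 1):
        W += np.outer(patterns[mu + 1], patterns[mu])
    return W / N

def recall(W, start, steps):
    """Synchronous updates s(t+1) = sign(W s(t)); returns the trajectory."""
    s = start.astype(float).copy()
    traj = [s.copy()]
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break exact ties deterministically
        traj.append(s.copy())
    return traj
```

At small load ($P \ll N$) this rule steps the network through the stored sequence; its limited capacity for long or correlated sequences is precisely the regime our paper analyzes.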
> There is a vast of papers [2,3,4,5] on overcoming the limitation of storing correlated in Hopfield model by the Hebbian algorithm, which are not mentioned this paper. The ideas can be extended to store sequences. The authors should do a literature review on them and compare the pros and cons with their approach in solving the problem.
This is a fair point. We tried to do a brief survey in the introduction of the paper, but the literature on associative memory is vast and some papers were accidentally omitted. We have included a brief comparison to other methods in the global rebuttal and will include a deeper one in the camera-ready version, alongside citing these papers. If you believe any other models are relevant, we would be grateful for pointers.
> The theory is conducted on random sequences, rather than the correlated pattern sequences which should be addressed. In the experiments in Section 2.4, the MNIST digits images in a sequence are in essentially random orders. Therefore the correlation of patterns is not strong enough to validate the propose method.
Thank you for this suggestion. We ran simulations on the Moving MNIST dataset, which has higher overlap between patterns, and included the results in the global rebuttal.
> Robustness of sequence retrieval is missing in the paper. A major feature of Hopfield model is its robustness of recovering static patterns under noise. Can the proposed method recover the rest of a sequence, given a perturbed pattern? An evaluation is needed.
This is a great suggestion, and we have run simulations, which we discuss further in the global rebuttal. In Figure 3, we add noise to each transition in the sequence, which is even more difficult than perturbing only the first pattern.
> In Line 65-68, the $\xi$ should be bold-faced. In Line 95, $\xi_2^1$ should be $\xi_1^2$.
Thank you for pointing that out; we have fixed these typos.
**Questions:**
> Overall, I appreciate the theoretical part of this paper, which is valuable to the community per se. However, I found it is not consistent with the problem addressed in the beginning of this paper, learning long and correlated pattern sequences. The experiments do not convince me that correlated pattern sequences can be handled by the proposed method either. However, I would like to raise my rating by 1-2 points, if the authors can in the rebuttal or revised paper:
1. evaluate their method on the Moving MNIST dataset, in which the successive patterns are highly correlated. https://www.cs.toronto.edu/~nitish/unsupervised_video/
2. evaluate their method for the robustness of sequence retrieval under noise.
Thank you for the valuable suggestions, and we appreciate you pointing us to the Moving MNIST dataset, of which we were not previously aware. We have run both of these simulations and included them in the global rebuttal.
> Moreover, if the authors can additionally provide some theoretical justification on storing correlated sequence patterns (a shortcoming which I have pointed out in the review), plus all my concerns are well-addressed, I could raise the rating by 3-4 points.
Thank you for this question. We agree that it would be very interesting to extend our theoretical arguments to correlated patterns. Our approximate Gaussian computations should be easily extensible to the case in which the patterns are biased (not correlated in the statistical sense, but leading to non-zero average overlap, which is the relevant notion in this context); the effect would be to shift the mean and variance of the crosstalk distributions. Concretely, consider patterns with
$$\mathbb{P}[\xi_{j}^{\mu} = \pm 1] = \frac{1 \pm r}{2},$$
such that
$$\mathbb{E}[\xi_{j}^{\mu}] = r$$
is the bias. This setup mirrors that considered in the classic work of Gardner (JPA 1988), which has been extended to simple nonlinear networks (Zavatone-Veth & Pehlevan PRE 2021); for that unconstrained learning rule the bias does not alter the scaling of capacity with $N$. We propose to do this computation for the final version of our paper, as it should not be challenging.
To sketch the computation, we recall that our goal is to analyze the crosstalk and approximate its distribution as Gaussian.
$$ C = \sum_{\mu=2}^P \xi^{2}_{1} \xi_1^{\mu+1} f \left( X^{\mu} \right), $$
where
$$ X^{\mu} = \frac{1}{N-1} \sum_{j=2}^{N} \xi_{j}^{\mu} \xi_{j}^{1}$$
Since the patterns are no longer symmetrically distributed, we must now also track the non-zero mean of the crosstalk. However, the computation is still reasonably straightforward.
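The mean shift can also be checked numerically. Below is a hedged Monte Carlo sketch of the crosstalk for biased patterns (our illustration only; the cubic separation function and all parameter values are assumptions chosen for demonstration, not choices from the paper):

```python
import numpy as np

def crosstalk_samples(N, P, r, f, trials, seed=0):
    """Monte Carlo samples of the crosstalk
    C = sum_{mu=2}^{P} xi_1^2 xi_1^{mu+1} f(X^mu)
    for patterns with bias E[xi] = r, i.e. P(xi = +1) = (1 + r) / 2."""
    rng = np.random.default_rng(seed)
    out = np.empty(trials)
    for t in range(trials):
        # rows 0..P hold the patterns xi^1 .. xi^{P+1}
        xi = np.where(rng.random((P + 1, N)) < (1 + r) / 2, 1.0, -1.0)
        # X^mu = (1/(N-1)) sum_{j>=2} xi_j^mu xi_j^1, for mu = 2..P
        X = (xi[1:P, 1:] * xi[0, 1:]).mean(axis=1)
        out[t] = np.sum(xi[1, 0] * xi[2:P + 1, 0] * f(X))
    return out

samples = crosstalk_samples(N=200, P=50, r=0.5, f=lambda x: x**3, trials=500)
unbiased = crosstalk_samples(N=200, P=50, r=0.0, f=lambda x: x**3, trials=500)
```

With bias r = 0.5 the sample mean of the crosstalk shifts clearly away from zero, while the unbiased samples concentrate around zero, matching the shifted-Gaussian picture sketched above.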
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and believe the authors have addressed my points. I therefore raise my rating to "6. Weakly Accept".
However, the authors should provide more quantitative empirical analysis for the capacity/robustness trade-off. I hope the authors can add this in the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestions and the updated score. We will certainly include more empirical analysis of the capacity/robustness trade-off in the final version. We have already run simulations examining the effect of a non-zero error threshold when computing the transition and sequence capacities.
Strengths: * This work extends the basic Hopfield network in several important directions while retaining the original model's simplicity and (to some extent) analytic tractability.
* The paper itself is well-written and eminently readable, and is likely to have broad appeal in related disciplines. Although it covers several ideas related to sequential Hopfield networks, the paper still feels cohesive and flows well.
* The theory and simulations complement each other well.
Weaknesses: * The theoretical analysis crucially relies on nonrigorously approximating a nonlinear function of i.i.d. random variables as Gaussian. The authors make a substantial attempt to provide justification for this, but this does dilute the utility of the theoretical analysis.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: * Can you elaborate on the relationship between the models presented here and the Karuvally et al. work mentioned on line 46?
* What kind of loss in sharpness results if you circumvent the Gaussian approximation (lines 98-105) by using McDiarmid's inequality to bound the tail instead? I believe this should also allow the capacity to be bounded for any $L$-Lipschitz nonlinearity.
* In lines 201-213, it seems like the patterns are not actually "correlated", as each pattern is still i.i.d. but the distribution is no longer uniform. If so, care should be taken to distinguish this meaning from the term "correlation" as it is used in Section 2.4, meaning that the patterns are not i.i.d. On that note, a synthetic version of the MNIST experiment where there is a ground truth sequence, and different sequences are generated by flipping each of its bits with some small probability, would be interesting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions, especially those on improving the rigor of the technical analysis. Since the original submission, we have proven rigorous bounds for the Polynomial DenseNet and are working on rigorous bounds for the Exponential DenseNet, which has proven to be substantially more difficult.
**Weaknesses:**
> The theoretical analysis crucially relies on nonrigorously approximating a nonlinear function of i.i.d. random variables as Gaussian. The authors make a substantial attempt to provide justification for this, but this does dilute the utility of the theoretical analysis.
Thank you for this comment; as we acknowledge, the analysis presented is approximate. A small clarification: the crosstalk is not simply a nonlinear function of a sum of i.i.d. random variables, but a sum of functions of i.i.d. sums of i.i.d. random variables. For the Polynomial DenseNet, we have in hand a rigorous analysis that adapts the analysis of the symmetric MHN by Demircigil et al. 2017 to bound the bitflip probability. We would be happy to add this to our manuscript if the referee believes it would be helpful. Please see also the discussion below under your question about McDiarmid's inequality.
**Questions:**
> Can you elaborate on the relationship between the models presented here and the Karuvally et al. work mentioned on line 46?
Karuvally et al. 2022 takes an interesting approach toward extending Modern Hopfield Networks to store sequences. While we forego the energy function formulation altogether, they develop a way to analyze an energy function in terms of an adiabatic limit. We believe that their model, GSEMM, can most closely be described as the biologically-plausible implementation of our proposed Mixed Network. Furthermore, they analyze the model in the setting of continuous-time dynamics, allow for different timescales between the hidden and feature layers, and also allow for intralayer synapses within the hidden layer. On the other hand, our paper is focused on providing a theoretical analysis of the model and its sequence capacity, alongside introducing the generalized pseudoinverse rule to store correlated patterns. Karuvally et al. 2022, however, do not derive an analytic expression for the sequence capacity, which is the central result in our paper.
> What kind of loss in sharpness results if you circumvent the Gaussian approximation (lines 98-105) by using McDiarmid's inequality to bound the tail instead? I believe this should also allow the capacity to be bounded for any Lipschitz nonlinearity.
Thank you for the suggestion. We have in fact previously attempted to estimate the capacity by using McDiarmid's inequality to bound the bitflip probability. For the polynomial DenseNet, the sensitivity to flipping one of the pattern elements inside the nonlinear separation function led to a bound of the form $\mathbb{P}[C>1] \leq [1+o(1)] \exp[-n/(2p)]$, which only recovers the capacity of the classic SeqNet. If the reviewer has any idea for circumventing this loss of sharpness, we would greatly appreciate their feedback. As an alternative approach to rigorously controlling the bitflip probability for the polynomial DenseNet, we believe that the argument used by Demircigil et al. (2017) to prove their theorem on symmetric networks can be adapted step-by-step to prove the Gaussian prediction. We would be happy to include this argument if the reviewer thinks it would enhance our paper. For the exponential DenseNet, the situation is substantially more difficult because the tail decays more slowly, and we are investigating alternative derivations.
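For completeness, the inequality in question, in its standard bounded-differences form (the application to the crosstalk is only sketched here, not the derivation in the paper): if changing the $i$-th input alone changes $f$ by at most $c_i$, then

$$\mathbb{P}\left[ f(X_1, \dots, X_n) - \mathbb{E}\, f(X_1, \dots, X_n) \geq t \right] \leq \exp\left( -\frac{2 t^2}{\sum_{i=1}^{n} c_i^2} \right).$$

The loss of sharpness arises because the worst-case sensitivities $c_i$ grow with the interaction degree $p$ of the separation function: flipping a single pattern bit can move a degree-$p$ term by an amount increasing in $p$, which inflates $\sum_i c_i^2$ and weakens the resulting exponent.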
> In lines 201-213, it seems like the patterns are not actually "correlated", as each pattern is still i.i.d. but the distribution is no longer uniform. If so, care should be taken to distinguish this meaning from the term "correlation" as it is used in Section 2.4, meaning that the patterns are not i.i.d.
Thank you for mentioning this, as the imprecise wording here is a historical artifact. Within the Hopfield Network literature, the term "correlated patterns" is often used to refer to "overlapping patterns". As it is the overlap between patterns which increases crosstalk, increasing the overlap will reduce the signal-to-noise ratio when computing sequence transitions. In other words, we are concerned with the raw second moment of the distribution of patterns rather than the centered second moment of the distribution. One can increase the overlap by increasing the bias or by increasing the correlation between the patterns.
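To make the distinction concrete, here is a small self-contained sketch (the bias value and pattern length are arbitrary illustrative choices, not taken from the paper): two *independent* biased $\pm 1$ patterns have zero centered correlation, yet their raw overlap concentrates near the squared bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n, bias = 100_000, 0.3
p_plus = (1 + bias) / 2  # probability of +1 under the biased distribution

# Two independent biased +/-1 patterns: no statistical dependence at all.
xi = rng.choice([1, -1], size=n, p=[p_plus, 1 - p_plus])
eta = rng.choice([1, -1], size=n, p=[p_plus, 1 - p_plus])

overlap = np.mean(xi * eta)                    # raw second moment, ~ bias**2 = 0.09
centred = np.mean((xi - bias) * (eta - bias))  # centred second moment, ~ 0.0
print(overlap, centred)
```

It is the raw overlap that drives the crosstalk, which is why bias alone, without any dependence between patterns, already reduces the signal-to-noise ratio.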
> On that note, a synthetic version of the MNIST experiment where there is a ground truth sequence, and different sequences are generated by flipping each of its bits with some small probability, would be interesting.
This is a great suggestion. We assume the intent is to introduce a higher degree of correlation into the network. Due to limited space for figures, we were not able to display this exact experiment, but we believe that Figures 1 and 2 on the Moving MNIST dataset showcase the model's ability to handle pattern overlap and Figure 3 shows the model's robustness to noise/corruption.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and the new experiment! I do think that the rigorous bound would substantially strengthen the paper and I hope that you will include it in future versions. | Summary: This is well presented paper one the storage of sequence memories in Hopfield-like networks. It builds on the recent modern Hopfield networks, but now with asymmetric weights connecting adjacent memories within a sequence. Theoretical capacity limits are calculated and then compared to simulations. Adaptations to improve correlated inputs are proposed using a pseudoinverse rule.
Strengths: Well presented paper. Easy to read. Technically solid.
Weaknesses: 1) This is a small update paper to the recent spate of modern Hopfield network papers (Krotov&Hopfield 2020, Ramsauer et al 2019, etc.,). The proposed method of sequence memory only works when no elements of the sequence are repeated (big limitation), and for unstructured sequences (i.e. it would have no idea if the sequence was drawn from an underlying distributions).
2) There are also papers that use modern Hopfield networks for sequence memories (e.g. Whittington et al 2022) that don’t have these limitations as they essentially control the sequence retrieval via an external controller. There should be some sort of comparison to existing sequence memory models.
3) The connection to motor skills is difficult to mesh with what we know about motor learning and episodic memory systems. Episodic memory is a hippocampal phenomenon, whereas motor skills / learning are the classic example of procedural knowledge.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Relating to the weaknesses above:
What happens when two patterns are identical in different sequences?
What about when the sequence is drawn from a distribution?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the main limitations are that this paper is a small advancement from existing modern Hopfield nets., and it’s not clear that the proposed update is a general solution. It is also not clear what the asymmetric weights offers more than other sequence memory models – being that the main point of this paper is that the proposed models is a good model, it really does need some comparisons to existing models. I’m really not saying this proposal is bad - I like it, the presentation is v good, and the theory/simulations concordance is great - it’s just not clear where it stands in the overall landscape, so it’s hard to draw any conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your compliments and insightful suggestions. Indeed, this is an extension of the existing literature on Modern Hopfield Networks, but we believe that it introduces a general solution for error-correction and the robust retrieval of long sequences. We were primarily focused in this work on theoretically analyzing the model and numerically verifying it via simulation.
**Weaknesses:**
> This is a small update paper to the recent spate of modern Hopfield network papers (Krotov&Hopfield 2020, Ramsauer et al 2019, etc.,). The proposed method of sequence memory only works when no elements of the sequence are repeated (big limitation), and for unstructured sequences (i.e. it would have no idea if the sequence was drawn from an underlying distributions).
The first part of this statement is correct. Thank you for clarifying it, and we will mention it explicitly in the camera-ready submission. We are actively exploring extensions of the model to store sequences with repeating elements for future work, but we restricted the scope of this paper to non-repeating elements for theoretical tractability. One route we have considered is adding additional associations across longer time steps (e.g. $\xi^{\mu} \to \xi^{\mu+2}$).
Regarding the second part of the statement, we assume that the sequences are drawn from a Rademacher distribution to compute theoretical capacity as is commonly done in the literature (Amit, Gutfreund, Sompolinsky 1985; McEliece et al. 1987; Krotov, Hopfield 2016). However, the model itself still works when the stored patterns come from a structured underlying distribution, as is demonstrated in the MNIST experiments and Overlapping Patterns experiments in the paper (Figures 5 and 6) and all of the Moving MNIST experiments in the global rebuttal. It simply will have a smaller capacity due to overlap between patterns, although we proposed the generalized pseudoinverse rule as a way to address this.
> There are also papers that use modern Hopfield networks for sequence memories (e.g. Whittington et al 2022) that don’t have these limitations as they essentially control the sequence retrieval via an external controller. There should be some sort of comparison to existing sequence memory models.
We assume the reviewer is referring to "Relating transformers to models and neural representations of the hippocampal formation." This paper was incredibly insightful in connecting the transformer architecture to the Tolman-Eichenbaum machine, and indeed should have been cited when surveying related sequence memory models. However, we believe that this model takes a different approach, in which the positional encoding for each element of the sequence is learned. We assume this is what the reviewer means by "controlling sequence retrieval via an external controller." We were focused on the setting where one can recall a sequence without access to an external controller. However, while not exactly the same, we briefly suggest an analogous approach for context-dependent gating in the context of motor sequence generation in lines 314-317:
*"In particular, the role of the basal ganglia in this network suggests a novel mechanism of context-dependent gating within Hopfield Networks. Rather than modulating synapses or feature neurons in a network, one can directly inhibit (activate) memory neurons in order to decrease (increase) the likelihood of transitioning to the associated state."*
If there are other connections the reviewer sees, we are open to learning about them and further discussing them.
> The connection to motor skills is difficult to mesh with what we know about motor learning and episodic memory systems. Episodic memory is a hippocampal phenomenon, whereas motor skills / learning are the classic example of procedural knowledge.
Indeed, we do not believe that this model is connected to episodic memory as is found in the hippocampus, but rather are interested in motor sequences that are memorized through repetitive training such as a tennis serve. We seek to point out similarities between the model and recent developments in motor action selection and control, which suggest the role of thalamocortical feedback loops in initiating and automatically preparing motor motifs:
- Logiaco, Abbott, Escola 2021
- Kao, Sadabadi, Hennequin 2021
- Moll et al 2023
We acknowledge that there are differences between our model and the actual neurobiology underlying motor sequence control such as chunking. We will add a sentence to the camera-ready version to clarify this point.
**Limitations:**
> I think the main limitations are that this paper is a small advancement from existing modern Hopfield nets, and it’s not clear that the proposed update is a general solution. It is also not clear what the asymmetric weights offers more than other sequence memory models – being that the main point of this paper is that the proposed models is a good model, it really does need some comparisons to existing models. I’m really not saying this proposal is bad - I like it, the presentation is v good, and the theory/simulations concordance is great - it’s just not clear where it stands in the overall landscape, so it’s hard to draw any conclusions.
We appreciate the reviewer's comments and concerns. We present a broader survey of related models in the global rebuttal and will modify the introduction in the camera-ready version. If there are other models the reviewer believes should be included in the comparison, we would be happy to investigate them further. Finally, we simply want to reiterate that the focus of the work is the introduction and theoretical analysis of the DenseNet, which we believe provides a general solution for error correction and robust retrieval of long sequences.
---
Rebuttal Comment 1.1:
Title: Many thanks for the response
Comment: Many thanks for the responses. I have appreciated the additional results on Moving MNIST, as well as appreciating that this is a bigger advance than I had previously thought. I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful criticism and the updated score. | Summary: This paper focuses on computational memory that stores sequence data. Existing work that considers Hopfield-like neural networks suffer from limited sequence capacity due to the crosstalk issue. To this end, this paper introduces a nonlinear interaction term inspired by Dense Associative Memories, enhancing pattern separation. The authors develop novel scaling laws for sequence capacity relative to network size, outperforming traditional Hopfield-based models. The authors also propose a generalized pseudoinverse rule for recalling sequences with highly correlated patterns. Additionally, this paper presents a biologically plausible implementation with connections to motor neuroscience to store sequences with variable timing.
Strengths: 1. Designing computational memory for sequence data is interesting and novel.
2. The paper is well-written and easy to follow.
3. The technical part of this paper seems sound to me.
4. Theoretical analyses such as memory capacity round up good work.
Weaknesses: 1. Besides Hopfield-like associative memory (AM), there is another class of AM namely predictive coding network (PCN)-based memory. Some recent works including [1,2,3] have shown superior performance of PCN-based approaches compared with Hopfield-like baselines. Seems [3] also considers associative memories for storing and retrieving sequential manner input. I am interested in both theoretical and empirical comparisons between the proposed method and the PCN-like methods.
2. Hopfield network is susceptible to convergence to local minima during the pattern retrieval process. Local minima are stable states that are not the desired patterns. Just wondering, if there are any theoretical guarantees/analyses of convergence of the proposed method.
3. Scaling up Hopfield networks to handle large-scale problems can be challenging. As the number of neurons increases, the computational and memory requirements grow rapidly. Is there any remedy for the efficiency concern, and what is the scalability of the proposed method?
4. Can the authors comment on the utility/challenges in applying their proposed method to real-world datasets/tasks beyond the MNIST datasets used in their experiments? e.g., using them in large-scale language modeling tasks where transformers are now popular.
[1] Salvatori, Tommaso, et al. "Associative memories via predictive coding." Advances in Neural Information Processing Systems 34 (2021): 3874-3886.
[2] Yoo, Jinsoo, and Frank Wood. "BayesPCN: A Continually Learnable Predictive Coding Associative Memory." Advances in Neural Information Processing Systems 35 (2022): 29903-29914.
[3] Tang, Mufeng, Helen Barron, and Rafal Bogacz. "Sequential Memory with Temporal Predictive Coding." arXiv preprint arXiv:2305.11982 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Comparison with PCN-based approaches.
2. Is there any guarantee or analysis of the convergence?
3. Scalability of the proposed method.
4. Performance on real-world large-scale datasets/settings.
My final score will largely depend on the rebuttal and discussion with the other reviewers.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No negative societal impact was found in this paper. I have put my concerns in the "Weaknesses" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and suggestions. We have gone through the weaknesses and addressed them point by point. We are happy to go into more detail for any of these responses.
> Besides Hopfield-like associative memory (AM), there is another class of AM namely predictive coding network (PCN)-based memory. Some recent works including [1,2,3] have shown superior performance of PCN-based approaches compared with Hopfield-like baselines. Seems [3] also considers associative memories for storing and retrieving sequential manner input. I am interested in both theoretical and empirical comparisons between the proposed method and the PCN-like methods.
We have seen the work on the Predictive Coding Network as an alternative mechanism for associative memory, and found it to be very interesting. [3] does indeed consider a similar model and is contemporaneous with our submission, as it was posted to the arXiv on May 19, two days after the NeurIPS paper submission deadline. However, given its relationship to our model, we are ready to cite it in the camera-ready work. Our paper is primarily focused on extending Modern Hopfield Networks, deriving theoretical bounds on capacity and verifying them with numerical simulation. [3] introduces a related model, and even mentions that temporal predictive coding can be viewed as a "classical Asymmetric Hopfield Network (AHN) with an implicit statistical whitening process, which leads to more stable performance in sequential memory tasks of structured inputs." We would be interested in exploring theoretical comparisons between the proposed models and the PCN in future work. [1] provides a brief comparison for the symmetric setting, which we anticipate will extend to the sequence setting as well:
- "\[Modern Hopfield Networks\] are able to exactly retrieve data points, while \[the Predictive Coding Network\] always presents a tiny amount of error, even if not visible by the human eye. However, the retrieval process of our model is significantly better, as it always converges to a plausible solution, even when provided with a tiny amount of information, or a large amount of corruption. For example, our model never converges to wrong data points: when tested on complex tasks, it simply outputs fuzzier memories, instead of perfect but wrong reconstructions, as the MHNs."
> Hopfield network is susceptible to convergence to local minima during the pattern retrieval process. Local minima are stable states that are not the desired patterns. Just wondering, if there are any theoretical guarantees/analyses of convergence of the proposed method.
The Hopfield network will converge to these local minima, which we refer to in the paper as "metastable states," if there is too much crosstalk when computing the update. This crosstalk is diminished by adding the nonlinear interaction term, and increasing the nonlinearity generally increases the probability of appropriate recall. The theoretical calculations done in the appendix focus on bounding this crosstalk. There is also previous work (Krotov, Hopfield 2016; Ramsauer et al. 2020; Demircigil et al. 2017) on symmetric Modern Hopfield Networks which provides theoretical guarantees of convergence, and these calculations will likely carry over to the asymmetric case, at least for the polynomial DenseNet.
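To illustrate the mechanism in a minimal form (this is a simplified sketch with arbitrary sizes, not the paper's implementation): the update votes for the successor of every stored pattern, weighted by the separation function applied to the overlaps, so a strongly separating nonlinearity drives the crosstalk terms toward zero and the correct transition dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1024, 20                          # network size, sequence length
xi = rng.choice([-1, 1], size=(P, N))    # stored +/-1 sequence patterns

def step(x, f):
    """One transition: overlaps -> separation function -> vote for next patterns."""
    m = xi[:-1] @ x / N                  # overlap of the state with each stored pattern
    return np.sign(xi[1:].T @ f(m))      # terms with mu != t are the crosstalk

poly = lambda m, d=15: m ** d            # polynomial separation (odd d keeps the sign)

x = xi[0].copy()
for _ in range(P - 1):
    x = step(x, poly)
print(np.array_equal(x, xi[-1]))         # True: crosstalk ~ m**15 is negligible here
```

Note that no weight matrix is materialized; the update is computed directly from the stored patterns.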
> Scaling up Hopfield networks to handle large-scale problems can be challenging. As the number of neurons increases, the computational and memory requirements grow rapidly. Is there any remedy for the efficiency concern, and what is the scalability of the proposed method?
The proposed method massively increases the capacity and computational efficiency of the Hopfield Network without increasing the network size. This is easily implemented since it simply requires applying a polynomial or exponential nonlinearity. However, Hopfield Networks, along with any other associative memory method, suffer from the fact that in order to retrieve patterns, those patterns have to be stored somewhere. In the case of the DenseNet, there are no weights that would need to be stored, only the patterns. Furthermore, the distinguishing factor of the DenseNet is its ability to provide error-correction capabilities for robust recall of extremely long sequences, and it can be easily implemented for real-time control applications.
> Can the authors comment on the utility/challenges in applying their proposed method to real-world datasets/tasks beyond the MNIST datasets used in their experiments? e.g., using them in large-scale language modeling tasks where transformers are now popular.
In the global rebuttal, we include simulations on the Moving MNIST dataset. The model currently proposed and analyzed has been formulated for discrete-valued patterns, but can be extended to store continuous-valued patterns as required for the transformer architecture. We believe that the model holds potential, which we will investigate in future work, but the focus of the current paper is to introduce the model, derive theoretical capacity limits, and verify them with empirical simulations. In particular, there has been an interest in utilizing state space models as an alternative to transformers for long sequence modeling, which we are exploring for potential connections.
- Albert Gu, Karan Goel, Christopher Ré 2022
- Poli et al. 2023
- Orvieto et al. 2023
---
Rebuttal 2:
Title: Response to Authors
Comment: I appreciate the authors' response and the newly added experiments, which look promising to me. Based on that, I have increased my score to 6. | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and suggestions. We found that there were some common themes across the reviewers' comments: sequence retrieval for correlated patterns, robust recall of patterns under noise, and a comparison with other sequence recall methods. We individually respond to each reviewer's detailed comments point-by-point.
To address the problem of correlated patterns, Reviewer 2PbW suggested we test our model on the Moving MNIST dataset, which we were not previously aware of and are grateful for the suggestion. The dataset is constructed so that there are $10,000$ sequences of $20$ images, in which two handwritten digits from the MNIST datasets are slowly moving around. This leads to significant amounts of overlap between images within a sequence and should be a great test for robust recall of correlated patterns.
In Figure 1, we demonstrate the effect of pattern overlap on sequence recall and the impact of nonlinear interaction functions on overcoming this effect. In the top row, we show a portion of the ground truth sequence of digits $0$ and $3$ moving around. Despite using a network of size $N = 64 * 64 = 4096$, the linear SeqNet is unable to successfully recall the sequence. For the Polynomial DenseNet, increasing the polynomial degree $d$ slowly increases recall until $d=15$ when there is finally perfect recall. The Exponential DenseNet perfectly recalls the pattern.
In Figure 2, we test the limits of our model by taking all $10,000$ sequences of $20$ Moving MNIST images and concatenating them into a single sequence of $200,000$ images. Here, we display a portion of the ground truth sequence taken from the center, starting in the middle of one subsequence of $2$ and $5$ and ending in the middle of another subsequence of $7$ and $4$. The SeqNet and Polynomial DenseNet are not able to recall anything at all until $d=50$, and the Polynomial DenseNet does not perfectly recall the sequence until $d=70$. The Exponential DenseNet again perfectly recalls the entire sequence.
To address the problem of robust recall, Reviewer 2PbW suggested we test our model by perturbing the initial pattern in the sequence and Reviewer 6yTi suggested we generate new sequences with noise in each pattern in the sequence. In Figure 3, we combined both of these into a single simulation of a 20-image sequence of $4$ and $9$ with randomly flipped bits added at each transition. Note that a bitflip probability of 0.5 corresponds to complete noise, as a bitflip probability of 1.0 will simply invert the picture. In the top row, we show the input into the network, where the probability of a pixel being flipped at random is 0.2. The second row shows the target pattern the network should transition to. The next few rows show SeqNet and the Polynomial DenseNet failing to recall the sequence until $d=20$. Finally, the last row shows a modified form of the Exponential DenseNet with a smaller base, replacing the nonlinear interaction function $f(x) = e^{(N-1)(x-1)}$ with $f(x) = b^{(N-1)(x-1)}$ where $b = 1.01$. This is because the network size $N = 64*64 = 4096$ results in extremely large negative values in the exponent and hence floating-point underflow to zero. Reducing the base preserves the desired robust recall while circumventing these numerical limitations.
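The numerical issue and its fix can be reproduced in a few lines (the overlap values below are arbitrary examples, not taken from the experiment): for any imperfect overlap, the exponent $(N-1)(x-1)$ is a large negative number, so the standard separation function vanishes in float64, while the reduced base $b = 1.01$ keeps the dominant term representable.

```python
import numpy as np

N = 4096
m = np.array([0.6, 0.02, -0.03])  # noisy correct overlap vs. two crosstalk overlaps

standard = np.exp((N - 1) * (m - 1))    # every entry underflows to exactly 0.0
reduced = 1.01 ** ((N - 1) * (m - 1))   # reduced base keeps the correct term alive

print(standard)   # [0. 0. 0.]
print(reduced)    # first entry dominates the others by many orders of magnitude
```

With the standard base, even the noisy correct pattern contributes nothing to the sign update; with $b = 1.01$ the correct term survives and still dwarfs the crosstalk.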
Finally, we provide a brief survey of related models:
- The Predictive Coding Network (Tang, Barron, Bogacz 2023) is a contemporaneous model which the authors describe as a "classical Asymmetric Hopfield Network (AHN) with an implicit statistical whitening process, which leads to more stable performance in sequential memory tasks of structured inputs." While this model always results in some error, it generally converges to a plausible solution and avoids metastable states. On the other hand, our model provides perfect recall when it works but for insufficient nonlinearity converges to metastable states.
- Whittington et al. 2022 introduces a connection between the Transformer architecture and the Tolman-Eichenbaum machine, a model of the hippocampus, and propose the usage of an external network to learn a positional encoding via path integration. We propose a model that does not require an external network, although we briefly explore the possibility of adding an external network for context-dependent gating in the context of motor sequence control in lines 314-317.
- Karuvally et al. 2022 extends Modern Hopfield Networks to the sequence setting by developing a way to analyze an energy function in terms of an adiabatic limit. The GSEMM is related to the biologically-plausible implementation of our Mixed Network. Furthermore, they analyze the model in the setting of continuous-time dynamics, allowing for different timescales between the hidden and feature layers, and also allow for intralayer synapses within the hidden layer. For theoretical tractability, we focus on the discrete-time setting in order to assess the sequence capacity of the model, but our model can be easily extended to continuous time and continuous patterns.
- Muscinelli et al. 2017 propose a way to construct a sequence of $2^N$ patterns and an update rule to perfectly recall this sequence, but this model is unable to store arbitrary patterns and lacks the error-correction capabilities that our model possesses.
- The following works focus on overcoming capacity limitations of correlated patterns: Diederich and Opper (PRL, 1987) propose two local learning rules to store correlated patterns, where each pattern must be learned sequentially. Gardner 1987 and Bressloff, Taylor 1992 demonstrate ways to store temporal sequences by utilizing perceptron-style learning rules. Blumenfeld et al. 2006 stores correlated patterns by changing weights proportionally to the difference between input and stored memories. These models all overcome pattern correlation via an iterative learning rule which explicitly accounts for correlation, whereas our model can store patterns without doing so.
Pdf: /pdf/cec509c19581a86dbb97dc15fc0e92bb4acdbabd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
An Efficient Doubly-Robust Test for the Kernel Treatment Effect | Accept (poster) | Summary: This paper proposes Augmented Inverse Propensity Weighted cross Kernel Treatment Test (AIPW-xKTE), which is a doubly robust test with provably valid type-I error based on kernel mean embeddings to test for distributional treatment effect. The paper has one result, Theorem 4.1, showing the asymptotic normality of the proposed test statistic, and demonstrates its performance on synthetic and real datasets.
Strengths: The paper has one clear goal, i.e. to provide a testing procedure for distributional treatment effect. As the authors mention, distributional treatment effect has been an important topic of research for a while, and their proposal, AIPW-xKTE, has several advantages over the previous methods, most prominently that it has an analytical asymptotic null distribution, circumventing the need for permutation to get the null distribution, as well as being doubly robust.
The paper is very clearly written, setting out its goal in the backdrop of previous works and carrying out that goal with minimal fuss. The paper only offers one result, Theorem 4.1, but I think its conciseness is its value. I haven't gone through every detail of the proof in the appendix but a scan convinced me of its soundness. The paper was a pleasure to read.
Weaknesses: Perhaps one thing to count against this paper is its lack of novelty, in that its proposal is not something completely novel, rather it combines two ideas, namely cross U-statistics and augmented inverse probability weighting. However, it combines them to good effect and conducts thorough analysis of it, both theoretically and empirically, and I do not think this should count heavily against the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The bottleneck of this procedure would probably be the estimation of kernel conditional mean embeddings, which has $n^3$ complexity? Perhaps it would be worth looking at speeding this up, through approximate kernel ridge regression methods (e.g. Nystrom method proposed in [Grunewalder et al., 2012] or FALKON in [Rudi et al., 2017]).
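The Nystrom idea the question alludes to can be sketched generically. The following is an illustrative toy implementation of Nystrom-approximated kernel ridge regression (the landmark count `m`, regularizer `lam`, kernel bandwidth, and toy data are all assumptions for the example, not the paper's estimator): the linear system is solved in the m-dimensional landmark space, costing O(n m^2) rather than the O(n^3) of exact kernel ridge regression.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krr(X, y, m=50, lam=1e-3, gamma=1.0, rng=None):
    """Nystrom-approximated kernel ridge regression.

    Solves (K_mn K_nm + n*lam*K_mm) a = K_mn y over m landmark points,
    costing O(n m^2) instead of the O(n^3) of exact KRR.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    idx = rng.choice(n, size=min(m, n), replace=False)
    Z = X[idx]                      # landmark points
    Kmn = rbf(Z, X, gamma)          # m x n cross-kernel
    Kmm = rbf(Z, Z, gamma)          # m x m landmark kernel
    A = Kmn @ Kmn.T + n * lam * Kmm
    a = np.linalg.solve(A + 1e-10 * np.eye(len(Z)), Kmn @ y)
    return lambda Xt: rbf(Xt, Z, gamma) @ a

# toy usage: regress y = sin(x) + noise from 500 points with 60 landmarks
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
f = nystrom_krr(X, y, m=60, lam=1e-4, gamma=0.5, rng=1)
pred = f(np.array([[0.0]]))
```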
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The conclusion section has listed a few of its limitations and interesting possible directions for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments.
- The bottleneck of this procedure would probably be the estimation of kernel conditional mean embeddings, which has n3 complexity? Perhaps it would be worth looking at speeding this up, through approximate kernel ridge regression methods.
We agree that looking at approximate kernel ridge regression methods for faster approximation of kernel conditional mean embeddings would be an interesting point to consider in future work.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I have no more comments to make, and would like to keep my evaluation of the paper. Good luck!
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for their time and comments. | Summary: The paper proposes a test of the null hypothesis that a binary treatment has no effect on the the potential outcome distribution. The test combines ideas of kernel mean embedding, double robustness, and cross U-statistics.
Strengths: Originality
-The connection between kernel embeddings of effect distributions and the cross U-statistic appears to be new.
-The main difference from Shekhar et al. (2022), who combine kernel mean embedding and cross U-statistics, appears to be the connection to double robustness.
-The main difference from Fawkes et al (2022), who combine kernel mean embedding and double robustness, appears to be the connection to cross U-statistics.
-For the kernel embedding of the potential outcome distribution, Muandet et al. (2021) use an IPW-style estimator while Singh et al. (2020) use a regression-style estimator. Similar to Fawkes et al. (2022), this paper combines IPW and regression estimators into a doubly robust AIPW estimator.
Quality - The results are clear and appear to be correct, with some minor comments given below.
Clarity - The paper is well written, especially its appendix.
Significance - Ultimately, this is a paper that combines building blocks that have been partially combined before. The combination is well executed, and contributes to the literature.
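As a concrete reference point for the AIPW construction the review describes (IPW augmented with a regression approach), here is a minimal sketch of the doubly robust estimator in the familiar scalar ATE case; this is illustrative only, since the paper's estimator targets kernel mean embeddings rather than means, and the toy data-generating process is our own assumption.

```python
import numpy as np

def aipw_ate(y, a, m1, m0, pi):
    """Doubly robust AIPW estimate of the average treatment effect.

    y  : observed outcomes
    a  : binary treatment indicator
    m1 : outcome-regression predictions E[Y | A=1, X]
    m0 : outcome-regression predictions E[Y | A=0, X]
    pi : propensity scores P(A=1 | X)

    Consistent if EITHER the regressions (m1, m0) OR pi is correct.
    """
    return np.mean(m1 - m0
                   + a * (y - m1) / pi
                   - (1 - a) * (y - m0) / (1 - pi))

# toy check: true effect = 2, correct regressions, misspecified propensities
rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))   # true propensity depends on x
y = x + 2 * a + rng.normal(size=n)
est = aipw_ate(y, a, m1=x + 2, m0=x, pi=np.full(n, 0.5))  # wrong pi, correct m
```

Despite the wrong propensity model, `est` recovers the true effect because the outcome regressions are correct, which is the double robustness being invoked.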
Weaknesses: I will raise the score if these items are addressed.
Statistical concepts
-The paper advertises efficiency, which, in the context of tests, usually refers to certain statistical properties. However, the efficiency being described here is computational, achieved by avoiding permutations. The framing should clarify this.
-Asymptotic equicontinuity and Glivenko-Cantelli class are not well explained in the main text. The former is well explained in the appendix, so a pointer would suffice. The latter is not; please provide more explanation of what this condition means and why it is reasonable in this context.
-Some statements about the asymptotic variance are too strong or poorly worded: on line 19 “the asymptotic variance…” and line 164 “the asymptotic variance…”
Comparisons
-The references given for average treatment effect are actually for the local average treatment effect on line 21. Please update here and elsewhere.
-The doubly robust kernel mean embedding estimator can be viewed as augmenting IPW (Muandet et al. 2021) with regression (Singh et al. 2020) approaches to kernel mean embeddings of potential outcome distributions, just as AIPW augments IPW with regression approaches to treatment effects. It would be worthwhile to point this out.
-It would be good to see brief comparisons to Shekhar et al. (2022) and Fawkes et al (2022) following Theorem 4.1.
Notation
-Sometimes notation is overloaded, which is unnecessary and a bit confusing. For example, mu refers to a kernel mean embedding, a regression, and something else in the appendix.
-Another notation issue is that the norm for beta is not defined in Theorem 4.1, and the cross fitting is poorly explained compared to Algorithm 1.
-It is not a good notation choice to write k(w,y) when x and y are variables with specific meanings in the paper.
-Finally, replace O(100n^2) with O(Bn^2).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The authors write “We were unable to control the type 1 error of the test presented in Fawkes (2022)…” What does this mean? Why not include Fawkes (2022), the most closely related work, in the simulations?
The authors show computational efficiency, but how about statistical efficiency?
In inequality (iii) of line 656, shouldn’t there be a 2 on the last term?
I had some issues with the proof of Step 3. Shouldn’t the final expression have lambda_1^2 in (13)? This correction would continue throughout the proof. At the bottom of page 27, how does the previous display imply lambda_1>0?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are no issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We would like to address the following weaknesses and questions raised by the reviewer.
- 1
We have replaced the word "efficient" by "computationally efficient" in lines 7 and 307 to clarify that we are referring to computational efficiency.
- 2
We have replaced lines 190-191 by "…we ought to refer to asymptotically equicontinuous empirical processes (Park and Muandet, 2023) and Glivenko-Cantelli classes. We refer the reader to Appendix C for a presentation of such concepts, clarification of the norms used, and the proof of the following theorem."
and we have included the following definition of a Glivenko-Cantelli class in the same appendix:
Definition (Glivenko-Cantelli). We say that a class of integrable real-valued functions $\mathcal{F}$ is a Glivenko-Cantelli class for $\mathbb{P}$ if
\begin{equation*}
\sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{i=1}^n f(X_i) - \mathbb{E}[f(X)] \right|
\end{equation*}
converges to zero in probability as $n \to \infty$.
We highlight that the Glivenko-Cantelli condition is required on the squared norm of the estimator (line 195), which is a scalar, hence we are dealing with the classical Glivenko-Cantelli concept. This condition appears to be a light assumption, as it is frequently satisfied by parametric function classes in low dimensional settings, and we are imposing it on the norm of the estimator, hence retrieving a one-dimensional setting. It is used to control the asymptotic behavior of the residuals in line 654.
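To make the Glivenko-Cantelli condition concrete, here is a small Monte Carlo illustration (our own toy example, not from the paper) using the classical class of indicator functions $\{x \mapsto \mathbf{1}\{x \le t\}\}$, for which the supremum in the definition is the Kolmogorov-Smirnov distance between the empirical and true CDFs, and converges to zero as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_deviation(n, grid):
    """sup_t |F_n(t) - F(t)| for standard-uniform data, where
    F = {1{x <= t}} is the classical Glivenko-Cantelli class."""
    x = rng.uniform(size=n)
    # empirical CDF minus true CDF F(t) = t, evaluated on a grid
    Fn = (x[:, None] <= grid[None, :]).mean(axis=0)
    return np.abs(Fn - grid).max()

grid = np.linspace(0, 1, 201)
# average sup-deviation over 50 repetitions, for small and large n
devs = {n: np.mean([sup_deviation(n, grid) for _ in range(50)])
        for n in (100, 10_000)}
```

The deviation shrinks roughly at the $n^{-1/2}$ rate, matching the uniform convergence that the definition requires.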
- 3
We have replaced line 29 by "Under certain conditions (e.g., consistent nuisance estimation at $n^{-1/4}$ rates), the asymptotic mean squared error of the AIPW estimator is smaller than that of the IPW and PI estimators, and minimax optimal in a local asymptotic sense (Kennedy, 2022), hence... ", with the following citation:
Kennedy, Edward H. "Semiparametric doubly robust targeted double machine learning: a review." arXiv preprint arXiv:2203.06469 (2022).
We have replaced lines 164-166 by "Under certain conditions (e.g., consistent nuisance estimation at $n^{-1/4}$ rates), the asymptotic variance of the AIPW estimator is minimized for $\mu^1 = \mu_1, \mu^0 = \mu_0$, thus the IPW estimator is generally dominated by the AIPW if $\mu^1, \mu^0$ are consistent."
- 4
We have replaced the Angrist and Imbens (1995) reference by "Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC."
We understand that Imbens (2004) may still be appropriate, given that it is mainly about ATEs/ATTs, but we are happy to include a different reference if you believe there are better suited options.
- 5
Thanks! We have included "In fact, the doubly robust kernel mean embedding estimator may be viewed as an augmented version of the KTE (which is a kernelized IPW) using regression approaches to kernel mean embeddings (Singh et al. 2020), just as AIPW augments IPW with regression approaches" in line 214.
- 6
We have included "Note that the proposed statistic is, at heart, a two sample test (with a nontrivial causal twist); in contrast to Shekhar et al. (2022), the two samples are not independent and are potentially confounded." in line 222.
We refrained from including Fawkes et al (2022) in the discussion given that we were unable to control the type 1 error of their test (see comment below). They did not present any theoretical guarantee for their current test, and it is still a preprint version (so we understand their method is subject to change).
- 7
We agree, sorry about that. We have changed the $\mu$'s in section 3.4 to $\theta$'s, in accordance with the introduction. We have changed $\mu_1$ and $\mu_2$ (and $\hat \mu_1$ and $\hat \mu_2$) from Appendix C to $\tau_1$ and $\tau_2$ (and $\hat \tau_1$ and $\hat \tau_2$). Consequently, $\mu$ will refer solely to kernel mean embeddings. We have further changed the generic $\phi$'s in Appendix C to $\omega$'s so that $\phi$ only refers to the AIPW estimator of the mean embedding.
- 8
With the aforementioned change in lines 190-191, we now point the reader to the appendix where we clarify the notation used.
Further, we have included the sentence "Condition (iv) is equivalent to two-fold cross-fitting i.e. training $\hat \phi ^{(r)}$ on only half of the data and evaluating such an estimator on the remaining half."
in line 209 before "Condition (v)...".
- 9
Thanks, we have replaced it by $k(y, \tilde y) = \langle k(\cdot, y), k(\cdot, \tilde y) \rangle$ in line 176 (given that the kernel is going to be evaluated in $\mathcal Y$).
- 10
Thanks, done.
- 11
We implemented the test presented in Fawkes (2022), but we obtained rejection rates close to 35% under the null (i.e. we were unable to control the type 1 error at the targeted 5%). They did not prove that their test controls type 1 error, and indeed we found that it does not. Consequently, we left such a test out of the discussion.
- 12
If the actual embedding were known, the test would be minimax rate optimal against $L_2$ alternatives, a property inherited from cross U-statistics (see Proposition E.1 and subsequent comments in Kim and Ramdas, 2023). Given that we have to estimate the embeddings, the analysis is not straightforward. We refer the reviewer to the paragraph spanning lines 238 to 245, where we comment on the statistical efficiency.
- 13
Yes, we have included the 2, thanks.
- 14
Indeed, the final expression in (13) should have \lambda_1^2. The correction continues throughout the proof without further implications. We have replaced \lambda_1 by \lambda_1^2 in the remaining lines of step 3 and highlighted that \lambda_1 > 0 implies \lambda_1^2 > 0. Thank you for spotting the typo!
- 15
We have that the square of the sum of non-negative eigenvalues is strictly greater than zero, hence one of them has to be strictly positive, so the first one is strictly positive (by decreasing order of the eigenvalues). We have included this remark in the proof. | Summary: The paper focuses on studying Augmented Inverse Probability Weighting (AIPW) for distributions instead of means. The outline of the paper is as follows:
1. The authors provide motivation for the problem.
1. They review several tools used to solve the problem, including Maximum Mean Discrepancy, Conditional Mean Embeddings, Kernel Treatment Effect, xMMD, and the asymptotics of AIPW.
1. The main results for AIPW in Hilbert spaces are presented, and the authors discuss practical details of the proposed test.
1. Discussion of the experiments.
Strengths: The paper is technical but clearly written. Authors prove non-trivial (to me) technical results, which convinces me that if I were to utilize AIPW for distributions, I would choose this particular implementation. More broadly, if I needed to test treatment effects on distributions and lacked access to propensity scores, I would opt for this test.
Weaknesses: The main limitation of this paper is that I struggle to think of a practical scenario in which I would have an interest in testing differences in the distribution of treatment effects. While the authors briefly mention that this question arises in various applications, none of those applications are utilized in the experiments. I'm uncertain whether investigating the effect of specialist home visits on cognitive test scores, beyond an increase in the mean, is a particularly relevant question to explore.
It might be more appropriate to compare this method with conditional average treatment effect tests. I can imagine situations where there are heterogeneous treatment effects that, on average, cancel each other out but work in opposing directions within two populations.
edit: see the list attached by authors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I'd ask authors to discuss applications in which this test would be useful.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We would like to address the following weaknesses and questions raised by the reviewer.
- The main limitation of this paper is that I struggle to think of a practical scenario in which I would have an interest in testing differences in the distribution of treatment effects. I'd ask authors to discuss applications in which this test would be useful.
Our work extends kernel two-sample tests [1] to the observational causal inference context. In non-causal settings, kernel distributional tests have had a very large impact in the ML community over the last decade (see, for instance, the citations of [1]). The main advantage of these tests is that they look beyond the mean effect. In causal inference, testing whether the average treatment effect equals zero is certainly the most popular way to assess if a treatment is different from a placebo. But one may argue that this is not suitable in many settings, and that we should deem a treatment different if (for example) the variance of the response differs from that for the placebo. Kernel treatment effects are a general way to answer these types of questions (with means and variances being special cases obtained with the linear or quadratic kernels).
[1] Gretton, Arthur, et al. "A kernel two-sample test." The Journal of Machine Learning Research 13.1 (2012): 723-773.
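The "beyond the mean" point can be made concrete with a toy computation (our own illustrative sketch with an RBF kernel, not the paper's AIPW-xKTE): two samples with identical means but different variances are essentially indistinguishable to a mean-based statistic, yet produce a clearly positive MMD.

```python
import numpy as np

def mmd2_rbf(x, y, gamma=0.5):
    """Biased estimate of squared MMD between 1-D samples with an RBF kernel."""
    kxx = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2).mean()
    kyy = np.exp(-gamma * (y[:, None] - y[None, :]) ** 2).mean()
    kxy = np.exp(-gamma * (x[:, None] - y[None, :]) ** 2).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)   # mean 0, sd 1
y = rng.normal(0, 2, 2000)   # same mean, larger spread
mean_gap = abs(x.mean() - y.mean())   # what a mean-based (ATE-style) test sees
mmd_gap = mmd2_rbf(x, y)              # what a kernel distributional test sees
```

Here `mean_gap` is near zero while `mmd_gap` is clearly positive, which is the qualitative distinction between testing the average effect and testing the full distributions.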
- While the authors briefly mention that this question arises in various applications, none of those applications are utilized in the experiments.
We mention that the test may be used to understand whether a treatment simply shifts the distribution of the outcome or, in turn, it also affects higher order moments. This is exactly what we did when investigating the effect of specialist home visits on cognitive test scores.
- I'm uncertain whether investigating the effect of specialist home visits on cognitive test scores, beyond an increase in the mean, is a particularly relevant question to explore.
If a treatment simply shifts the distribution of the outcome (in this case, specialist home visits increase cognitive test scores on average) but the higher order moments are unchanged, we can conclude that the treatment is always desired. Otherwise, the treatment may shift the mean but hugely increase the variance, for instance, which may be harmful for the children that are negatively affected by the treatment.
- It might be more appropriate to compare this method with conditional average treatment effect tests. I can imagine situations where there are heterogeneous treatment effects that, on average, cancel each other out but work in opposing directions within two populations.
Our test does not attempt to estimate conditional average treatment effects of any form, so we believe that the comparison would not be suitable. Tests for conditional average treatment effects are interesting, but complementary to the current paper.
---
Rebuttal Comment 1.1:
Comment: Respectfully, I don't think authors have provided a practical scenario or hypothetical in which I would like to use this test. Perhaps this theoretical construction will find its uses in the future.
If the only criterion would be technical novelty I would accept this paper.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for their time. Although we focused on the specialist home visit example as a novel use of our test, we would like to highlight other potential uses, such as:
- Determining subgroups of patients that respond differently to medication and establish treatment policies, referring the reader to [1].
- Conducting feature selection for discovering treatment effect modifiers, referring the reader to [2] and [3].
- Studying the effect of various features on Google advertisers' spending, which has very heavy tails since there are a few advertisers who spend a lot [4]. For this reason means are not very useful summaries and instead distributional effects make a lot more sense.
We are happy to include any of these references if the reviewer considers that they shed light on the usefulness of our test. Furthermore, we can point the reader to [5], whose introduction discusses some specific motivation for studying distributional effects beyond the mean, as well as the vast literature on quantile treatment effects, which treat distributional effects with the exact same motivation while considering a different distributional target.
[1] Chikahara, Yoichi, Makoto Yamada, and Hisashi Kashima. "Feature selection for discovering distributional treatment effect modifiers." Uncertainty in Artificial Intelligence. PMLR, 2022.
[2] Bellot, Alexis, and Mihaela van der Schaar. "A kernel two-sample test with selection bias." Uncertainty in Artificial Intelligence. PMLR, 2021.
[3] Biesecker, Leslie G. "Hypothesis-generating research and predictive medicine." Genome research 23.7 (2013): 1051-1053.
[4] Díaz, Iván. "Efficient estimation of quantiles in missing data models." Journal of Statistical Planning and Inference 190 (2017): 39-51.
[5] Kennedy, Edward H., Sivaraman Balakrishnan, and Larry Wasserman. "Semiparametric counterfactual density estimation." arXiv preprint arXiv:2102.12034 (2021). | Summary: The paper introduces a test for the treatment effect which also takes distributional changes into account. The test builds strongly upon the recent works of Kim and Ramdas (2023) and Muandet et al. (2021). The main novelty arises from extending the test in Kim and Ramdas (2023) to the setting of treatment effect estimation. In comparison with previous tests in the literature (Figure 4), the test proposed in this paper is computationally more efficient at a moderate price in power.
Strengths: Testing for treatment effects is an important problem. The proposed test is computationally efficient and appears to be practically useful.
Weaknesses:
1) Figure 2 is misleading to some degree. BART and Causal Forests are estimating the mean of the treatment effect and therefore necessarily fail in scenarios III and IV.
2) I am expecting a comparison against the KTE (Muandet et al., 2021) and the AIPW extension from Fawkes et al. (2022) in the main text. This should come instead of the current Figure 2. As far as I understand from Figure 4, the main contribution of this paper is not a better test in terms of power but in terms of computational run-time. This does not sufficiently come across in the main text.
3) The contribution of the paper feels fairly limited and more like a straightforward extension of Kim and Ramdas, 2023. The theoretical results appear to me to follow from standard well-established arguments. Can the authors comment more on the technical challenges behind the proofs/ similarities to prior works?
4) While 3) itself is not a reason for a low grade, based on this limitation I would expect a more exhaustive experimental analysis of the method demonstrating the practical usefulness and its limitations on real-world data sets.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How does the corresponding plot for Scenario (I) look for the setting in Figure 2 and 4?
- I am willing to increase my score if the authors can address the shortcomings above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We would like to address the following weaknesses and questions raised by the reviewer.
- Figure 2 is misleading to some degree. BART and Causal Forests are estimating the mean of the treatment effect and therefore necessarily fail in scenarios III and IV.
Given that our work is the first to test the KTE in observational settings, there is no algorithm to use for benchmarking, as commented in line 266. We included algorithms designed for the average treatment effect, given that we saw no other choice. And we did say in line 274: "However, and as expected, such methods show no power if the distributions differ but have equal means".
- I am expecting a comparison against KTE ( Muandet et al. (2021)) and the AIPW extension from Fawkes et al. (2022) in the main text. This should come instead of the current Figure 2. As far as I understand from Figure 4, the main contribution of this paper is not a better test in terms of power but in terms of computational run-time. This does not sufficiently come across in the main text.
As explained to Reviewer 1 as well, the test presented in (Muandet et al., 2021) cannot be used with unknown propensity scores, and hence it does not apply to our problem (observational studies). We only included a comparison between KTE and AIPW-xKTE in Appendix A to show that, in the specific case of having known propensity scores (experimental studies), our test loses a little power for a large computational gain.
Further, we implemented the test presented in Fawkes (2022) (still a preprint), but we obtained rejection rates close to 35% under the null (i.e. we were unable to control the type 1 error). This is not surprising: if you read carefully, Fawkes (2022) does not present any form of theoretical guarantee for the test. They do not claim that their test controls type-1 error, and indeed we find that it does not. Consequently, we left such a test out of the discussion in order not to be overly critical of a preprint, but we can add a note about our experiments with it.
Our test is the only test with theoretical type 1 error control in observational studies (where propensity scores are not known).
- The contribution of the paper feels fairly limited and more like a straightforward extension of Kim and Ramdas, 2023. The theoretical results appear to me to follow from standard well-established arguments. Can the authors comment more on the technical challenges behind the proofs/ similarities to prior works?
Although we presented the contribution as if it were a natural extension of previous works, we would like to highlight the nontrivial technical challenges addressed in Appendix C. The proof of Theorem 4.1, which is around 6 pages long, is completely novel and cannot be reduced to invoking any existing theorems. It combines the main idea presented in Kim and Ramdas (2023) with a variety of techniques including kernel ideas, functional data results and causal inference results (e.g., the novel Lemma C.7 and Theorem C.9).
We highlight that the proof heavily differs from the work presented in Kim and Ramdas (2023) and any other paper that we have read on the topic. In fact, the only relevant part from their paper is Equation 14 and Equation 15 (part of step 4). We thus emphasize that the theoretical contribution of the work is far from being straightforward. We have now added a paragraph in the main paper to highlight these advances.
- While 3) itself is not a reason for a low grade, based on this limitation I would expect a more exhaustive experimental analysis of the method demonstrating the practical usefulness and its limitations on real-world data sets.
We considered, in total, 8 different scenarios with synthetic data and 6 different scenarios with real data. Further, we highlight that it is nearly impossible to assess our test in real-life observational studies, given that we do not know the ground truth. The test may reject or accept the null hypothesis, but we have no information on whether the null hypothesis is actually true or not. These experiments vary different aspects of the considered method and competitors. But if there is a particular aspect that you would like to see better explored, we would be happy to add an experiment for it.
- How does the corresponding plot for Scenario (I) look for the setting in Figure 2 and 4?
The Gaussian behavior (Subfigure A and Subfigure B) is only expected with our proposed test, so those subfigures would not be interesting for the remaining tests. Subfigure C is very similar for all the tests studied (around 0.05 with some noise), which is expected given that they are all provably well calibrated (hence we refrained from including those in the paper). We can add a note about these to the paper.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I would like to thank the authors for their response. I have increased my score to a 7.
Furthermore, I agree with the following concern raised by a different reviewer:
>The main limitation of this paper is that I struggle to think of a practical scenario in which I would have an interest in testing differences in the distribution of treatment effects. While the authors briefly mention that this question arises in various applications, none of those applications are utilized in the experiments. I'm uncertain whether investigating the effect of specialist home visits on cognitive test scores, beyond an increase in the mean, is a particularly relevant question to explore.
It would be beneficial to provide a concrete application as an example to highlight the necessity of this test. However, considering the limited access to public datasets, this might be an ambitious request.
---
Reply to Comment 1.1.1:
Comment: Following the concerns raised by Reviewer gJy9, we have provided further examples of use of our test in the respective official comment. We would like to thank the reviewer again for their insightful comments. | Rebuttal 1:
Rebuttal: We upload a PDF containing the changes in Figure 1, Figure 2 and Table 1, now with error bars / standard errors, as suggested by one of the reviewers. We have included error bars in the remaining figures of the paper, not shown here due to space constraints.
Pdf: /pdf/ebebf945a9d1b8803c3ab71b9b56b99bfe2fd83c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a statistical test to determine whether the distributions of the two counterfactuals are the same. This goes beyond the well-known average treatment effect, which only tries to understand whether the means of the distributions are the same. The first work in this direction was by (Muandet et al., 2021), who introduced the Kernel Treatment Effect (KTE). However, their test statistic is degenerate, which means that they cannot use the CLT to derive an asymptotic threshold, and they need to resort to a permutations approach in order to compute the threshold. This paper introduces the AIPW-xKTE test, which generalizes KTE by including a plug-in estimator in analogy with AIPW vs IPW, and most importantly by using the same approach as the Cross MMD test from (Kim and Ramdas, 2023), which does yield a statistic with asymptotically normal distribution.
Strengths: The contribution of the paper is clear, and the authors do a reasonable job at placing it within the literature.
Weaknesses: The main weakness is that the contribution is not highly novel, in that the test proposed is basically a combination of the KTE test from (Muandet et al., 2021) and the Cross MMD technique from (Kim and Ramdas, 2023).
The explanation would be more transparent if some important concepts were clearly defined. See more details in the questions section. There are some more detailed weaknesses that I also point out in the questions section.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: - What is double robustness? It is a relevant concept in the paper, as it is mentioned twice in the contributions part of the introduction, and many more times later on. However, it is never defined.
- Line 23: The plug in estimator is not well defined: what does it actually look like? In lines 222-225, the estimator appears again under a different notation (\hat{\beta} instead of \hat{\theta}). The authors currently say that “At this time, not so many choices exist for estimators…“ and they cite a work on this. It would be good to give a more detailed explanation of how the estimator \hat{\beta} is computed. Similarly, it would be helpful to give more insight on how \hat{\pi} is computed.
- Theorem 3.1: What is \hat{\pi}? What is \hat{\psi}_{DR}? These quantities have not been defined before as far as I can tell.
- Figure 1: Show error bars in subfigure c, to show that the discrepancy from 0.05 can be attributed to a statistical error. The key question that needs to be answered here is: how large does n need to be for the CLT to kick in and for the Type I error guarantee to hold. Without error bars, the current figure does not provide an answer to this question.
- Figure 2: Although not as critical, error bars in this figure would be appreciated too.
- Table 1: Show standard error.
- Comparison with (Muandet et al., 2021): The authors do not show any experimental comparison with the KTE test proposed by (Muandet et al., 2021). They argue that “Due to the fact that the KTE (Muandet et al., 2021) may not be used in the observational setting, where the propensity scores are not known, there is no natural benchmark for the proposed test.” Since AIPW-xKTE makes use of estimators of propensity scores, it seems natural to me to compare it with KTE using the same propensity score estimators. Is there a reason not to do this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We would like to address the following weaknesses and questions raised by the reviewer.
- The main weakness is that the contribution is not highly novel, in that the test proposed is basically a combination of the KTE test from (Muandet et al., 2021) and the Cross MMD technique from (Kim and Ramdas, 2023).
Although we presented the contribution as if it were a natural extension of previous works, we would like to highlight the nontrivial technical challenges addressed in Appendix C. The proof of Theorem 4.1, which is around 6 pages long, is completely novel and cannot be reduced to invoking any existing theorems. It combines the main idea presented in Kim and Ramdas (2023) with a variety of techniques, including kernel ideas, functional data results, and causal inference results (e.g., the novel Lemma C.7 and Theorem C.9).
We highlight that the proof differs heavily from the work presented in Kim and Ramdas (2023) and any other paper that we have read on the topic. In fact, the only relevant parts from their paper are Equations 14 and 15 (part of step 4). We thus emphasize that the theoretical contribution of the work is far from straightforward. We have now added a paragraph in the main paper to highlight these advances.
- What is double robustness? It is a relevant concept in the paper, as it is mentioned twice in the contributions part of the introduction, and many more times later on. However, it is never defined.
We have changed line 33 from
"... we highlight that double-robustness is a property that is shared with many other estimators."
to
"... we highlight that double-robustness is an intriguing property of an estimator that makes use of two models, in which the estimator is consistent even if only one of the two models is well-specified and the other may be misspecified; we refer the reader to [1] for a discussion on doubly-robust procedures."
[1] Kang, Joseph DY, and Joseph L. Schafer. "Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data." Statistical Science 22.4 (2007): 523-539.
- Line 23: The plug in estimator is not well defined: what does it actually look like? In lines 222-225, the estimator appears again under a different notation ($\hat\beta$ instead of $\hat\theta$). The authors currently say that “At this time, not so many choices exist for estimators…“ and they cite a work on this. It would be good to give a more detailed explanation of how the estimator $\hat\beta$ is computed. Similarly, it would be helpful to give more insight on how $\hat\pi$ is computed.
The plug-in estimator is well defined (line 24), but its form depends on the choice of the regression functions (denoted $\hat\theta$). Throughout the work, we denote univariate or multivariate regressors by $\hat\theta$, and infinite-dimensional regressors (such as conditional mean embeddings) by $\hat\beta$, which is introduced in line 183. Thank you for pointing out that we had not explicitly stated this anywhere; we now do so in Section 4, after line 179.
Many different options exist for $\hat\theta$, $\hat\beta$, and $\hat\pi$. Linear regression, logistic regression, random forests, or neural nets may be used for $\hat{\theta}$ and $\hat{\pi}$; conditional mean embeddings or the work cited in line 225 may be used for $\hat\beta$.
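To make this concrete, here is a minimal, self-contained sketch (our illustration, not the paper's code; the simulated data-generating process and all variable names are ours) of computing $\hat\psi_{AIPW}$ with a logistic-regression $\hat\pi$ and per-arm least-squares $\hat\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates

# Hypothetical simulation with a known average treatment effect of 2.
pi_true = 1.0 / (1.0 + np.exp(-(X @ np.array([0.0, 0.5, -0.25]))))
A = rng.binomial(1, pi_true)
Y = 2.0 * A + X @ np.array([0.0, 1.0, -1.0]) + rng.normal(size=n)

# pi-hat: logistic regression fitted by Newton-Raphson iterations.
b = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ b)))
    w = p * (1.0 - p)
    b += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (A - p))
pi_hat = 1.0 / (1.0 + np.exp(-(X @ b)))

# theta-hat: separate ordinary-least-squares fits on treated and control units.
def ols_fit_predict(X_arm, y_arm, X_all):
    coef, *_ = np.linalg.lstsq(X_arm, y_arm, rcond=None)
    return X_all @ coef

theta1 = ols_fit_predict(X[A == 1], Y[A == 1], X)
theta0 = ols_fit_predict(X[A == 0], Y[A == 0], X)

# AIPW: plug-in regression difference plus propensity-weighted residual correction.
psi_aipw = np.mean(
    theta1 - theta0
    + A * (Y - theta1) / pi_hat
    - (1 - A) * (Y - theta0) / (1 - pi_hat)
)
print(round(psi_aipw, 1))
```

With both nuisance models well-specified here, the printed estimate is close to the true effect of 2; double robustness means consistency is retained even if one of $\hat\pi$ or $\hat\theta$ is misspecified, as long as the other is correct.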
- Theorem 3.1: What is $\hat\pi$? What is $\hat\psi_{DR}$?
$\hat{\pi}$ is defined in line 27. $\hat\psi_{DR}$ should be $\hat\psi_{AIPW}$ instead, defined in line 28 (we thank the reviewer for spotting this typo!).
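For the reader's convenience, we also note the standard textbook forms of these estimators (notation chosen to match lines 24-28; $\hat\theta_1, \hat\theta_0$ denote the outcome regressions for the treated and control arms):

```latex
\hat\psi_{IPW} = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{A_i Y_i}{\hat\pi(X_i)} - \frac{(1-A_i)\,Y_i}{1-\hat\pi(X_i)}\right],
\qquad
\hat\psi_{AIPW} = \frac{1}{n}\sum_{i=1}^{n}\left[\hat\theta_1(X_i) - \hat\theta_0(X_i)
  + \frac{A_i\bigl(Y_i-\hat\theta_1(X_i)\bigr)}{\hat\pi(X_i)}
  - \frac{(1-A_i)\bigl(Y_i-\hat\theta_0(X_i)\bigr)}{1-\hat\pi(X_i)}\right].
```

The augmentation by $\hat\theta$ is what debiases IPW and yields double robustness: the bias of $\hat\psi_{AIPW}$ involves products of the errors of the two nuisance models, so the estimator remains consistent if either model is well-specified.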
- Figure 1: Show error bars in subfigure c, to show that the discrepancy from 0.05 can be attributed to a statistical error. The key question that needs to be answered here is: how large does n need to be for the CLT to kick in and for the Type I error guarantee to hold. Without error bars, the current figure does not provide an answer to this question
Figure 2: Error bars in this figure would be appreciated too
Table 1: Show standard error.
The outcome of the test is either a 0 or a 1, hence its distribution is fully specified by its mean, which is estimated by Monte Carlo (repeating the test B times for a large B). The only error is Monte Carlo error, and we have now added error bars to show that our observations are not a result of Monte Carlo error. We uploaded a PDF with some plots in the Author Rebuttal section.
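For concreteness, the error bars are standard binomial Monte Carlo error bars; a minimal sketch follows (the counts below are hypothetical, not our actual results):

```python
import numpy as np

def rejection_rate_with_error_bar(outcomes):
    """Mean of binary test outcomes with a 95% normal-approximation error bar."""
    outcomes = np.asarray(outcomes, dtype=float)
    B = outcomes.size
    p_hat = outcomes.mean()
    se = np.sqrt(p_hat * (1.0 - p_hat) / B)  # Monte Carlo (binomial) standard error
    return p_hat, 1.96 * se

# Hypothetical counts: 26 rejections observed in B = 500 repetitions of a level-0.05 test.
outcomes = [1] * 26 + [0] * 474
p_hat, half_width = rejection_rate_with_error_bar(outcomes)
print(p_hat, round(half_width, 4))  # rejection rate 0.052, error bar about +/- 0.02
```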
- Comparison with (Muandet et al., 2021): The authors do not show any experimental comparison with the KTE test proposed by (Muandet et al., 2021). They argue that “Due to the fact that the KTE (Muandet et al., 2021) may not be used in the observational setting, where the propensity scores are not known, there is no natural benchmark for the proposed test.” Since AIPW-xKTE makes use of estimators of propensity scores, it seems natural to me to compare it with KTE using the same propensity score estimators. Is there a reason not to do this?
The test proposed in (Muandet et al., 2021) is not valid if the propensity scores are misspecified (which is usually the case when the propensity scores are unknown). In causal inference, there is a huge difference between randomized experiments (where the propensity scores are known) and observational settings (where they are unknown). Plugging in estimated propensity scores for the true ones does not work; that would be too easy a solution to the latter problem. This is why the AIPW estimator is not simply the IPW estimator with a plug-in estimate of the propensity score: one has to "debias" IPW in some sense and also reduce its variance. Muandet et al.'s test only works in the randomized-experiments setting, while ours is designed for the much harder observational setting. Consequently, we refrained from including it in the main body of the paper, where we discuss observational studies. We included a comparison between KTE and AIPW-xKTE in Appendix A, for the specific case of having known propensity scores (experimental studies). | null | null | null | null | null | null |
Tools for Verifying Neural Models' Training Data | Accept (poster) | Summary: This paper proposes a protocol in which a model trainer submits training data and learned weights, and a verifier checks whether the weights were correctly learned from the submitted data.
Such a protocol is useful for trustworthy AI. The paper defines the problem of Proof-of-Training-Data (PoTD). It is inspired by the previous work on "Proof-of-Learning," but the setting of PoTD is more demanding. The paper gives a formal definition of the PoTD protocol and argues that the protocol needs to satisfy some conditions to achieve the guarantee in a practical setting.
Strengths:
The motivation of the paper is interesting. The proposed PoTD protocol seems reasonable.
Weaknesses:
1. The paper does not sufficiently explain the motivation for the PoTD protocol or its impact. The paper says that some attacks can be handled by existing Proof-of-Learning methods. This seems trivial, since PoTD poses stronger requirements, as written in the paper. The paper should discuss the problems of existing PoL methods and how the proposed protocol solves them.
2. The only attack that the existing method cannot deal with but the proposed method can is a data subtraction attack. How important it is to deal with such an attack is not explained in enough detail.
Moreover, the experimental results show how the proposed method performs but do not compare it with baseline methods. Therefore, the experiments are not sufficient to show the effectiveness of the proposed methods.
3. The presentation of the paper is hard to follow. The paper seems to consist of fragments of text whose relationships to one another are unclear.
For example, there is a formal definition of the PoTD protocol in Section 2, but the definition is never mentioned in the subsequent sections. Therefore, it is hard to judge whether the definition is reasonable or not.
The memorization heuristic introduced in Section 3.2 seems overly complex. If we use $-L(d, W)$ instead of $M(d, W)$ in (3), then the values of PBQ and FBQ would not change, since the first term of (2) does not depend on the data $d$ and has no effect when evaluating $\Delta_M(d^\prime, W) > \Delta_M(d, W)$.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: n/a
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper mentions its limitations at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below, we respond to the specific points made in your review:
**W1:** *The paper says that some attacks can be treated by existing Proof-of-Learning methods. It seems trivial since PoTD poses stronger requirements, as written in the paper. The paper should discuss the problems of existing PoL methods and how the proposed protocol can solve them.*
**Response:** We are not sure which statements you are referring to. Most of the attacks raised in the paper cannot be affordably addressed using existing Proof-of-Learning methods, and even for those that can, our solutions are more efficient or have other benefits (e.g. only requiring inference and not retraining). We provide specific examples of attacks that PoL methods cannot address, but that our new methods can:
- **PoL-style retraining is known to be potentially vulnerable to synthetic data attacks** [1], which allow the Prover to swap the real data out for carefully-crafted fake data. This is fatal for PoTD, and can also enable PoL spoofs (see [1]). Our new verified-data-shuffling defense in Section 3.3 stops this attack.
- **Retraining is inefficient for catching small data lies**. Given 100 checkpoints, if a Prover wants to misreport 1 checkpoint’s worth of data, then to catch them with 80% probability the Verifier would have to rerun 80% of the original training run. However, with our method, the Verifier needs to spend as little as 1% (and only inference compute) to catch subtraction/interpolation attacks, and slightly more for addition attacks (which require retraining a couple of segments to calibrate segment magnitudes, but still yield a roughly order-of-magnitude improvement). Even if a Verifier is only worried about Provers misreporting 10 checkpoints, the Verifier would need to retrain $\approx 16\%$ of the checkpoints (see Appendix C of [2] for a similar derivation) to catch the lying Prover with 80% probability, much larger than the 1% or few percent required by our method.
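These detection probabilities follow from uniform random sampling of segments; this sketch (our illustration, not code from the paper) reproduces the calculation:

```python
from math import comb

def detection_prob(total_segments, misreported, retrained):
    """P(at least one misreported segment falls in a uniform random
    retraining sample), via the hypergeometric zero-overlap probability."""
    return 1 - comb(total_segments - misreported, retrained) / comb(total_segments, retrained)

# 1 bad checkpoint out of 100: the catch probability is exactly retrained/100,
# so an 80% catch probability requires retraining 80 checkpoints.
print(round(detection_prob(100, 1, 80), 3))

# 10 bad checkpoints out of 100: retraining ~16 already catches with >80% probability.
print(round(detection_prob(100, 10, 16), 2))
```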
We will include the above examples to better highlight the shortcomings of prior methods at achieving PoTD.
[1] Zhang, Rui, et al. "“Adversarial Examples” for Proof-of-Learning." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
[2] Shavit, Yonadav. "What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring." arXiv preprint arXiv:2303.11341 (2023).
**W2a:** *The only attack that the existing method cannot deal with but the proposed method can is a data subtraction attack. How important to deal with such an attack is not explained enough.*
**Response:** To clarify, our methods not only enable detection of data subtraction attacks, but also block synthetic-data-substitution attacks (see Section 3.3, and [3] for the paper that originally provided the attack), and are the first to enable cost-effective detection of interpolation attacks and large-scale data addition attacks. Together, these block an attacker from far-underclaiming the training data they used, which is directly relevant for, e.g., policies around the amount of compute used for training. We will make this clearer in the paper.
[3] Zhang, Rui, et al. "“Adversarial Examples” for Proof-of-Learning." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
**W2b:** *Moreover, experimental results show how the proposed method performs but do not compare with baseline methods.*
**Response:** Most of our defenses have no relevant baselines, such as data subtraction attacks on individual segments. In principle, one could run PoL-style retraining at a massive scale, but this would be far too expensive to be practically relevant.
**W3a:** *There is a formal definition of PoTD protocol in section 2, but the definition is never mentioned in the subsequent sections.*
**Response:** To clarify the paper’s structure, Section 2 provides a theoretical definition for PoTD, but proving such a theoretical guarantee is currently impossible in practice for large-scale NN training. Instead, just as in the original PoL paper, in Sections 3.3 and 5 we propose a set of attacks, which are intended to approximate the full set of attacks on PoTD. We then show that our solution is robust to these attacks, and thereby demonstrate that our defenses heuristically satisfy Definition 1 for the current attacks in the literature. We will clarify the set of considered attacks explicitly in Section 2, and enumerate all the attacks in a single location.
**W3b:** *The memorization heuristic introduced in Section 3.2 seems overly complex. If we use $-L(d, W)$ instead of M(d, W) in (3), then the values of PBQ and FBQ would not change since the first term of (2) does not depend on data d and has no effect when evaluating $\Delta_\mathcal{M}(d’,W) > \Delta_\mathcal{M}(d, W)$.*
**Response:** You are correct that the existence of the normalization term $E_{d' \in D_v}[L(d', W)]$ in our memorization heuristic $\mathcal{M}$ does not affect the values of PBQ and FBQ. This could represent a 2x efficiency gain, and we will mention it in the final manuscript. This normalization may still be important in other cases for preserving the meaning of the Memorization Delta $\Delta_\mathcal{M}$, which is used to produce memorization charts. Without the normalization term, it becomes hard to compare $\Delta_\mathcal{M}$ across segments, especially in the earlier stages of training, when the changes in loss between checkpoints vary considerably across segments. To illustrate this point, we created Memorization Delta plots with and without the normalization term (see attached PDF).
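To spell out the cancellation: writing $c(W)$ for the normalization term, the heuristic takes the form

```latex
\mathcal{M}(d, W) = \mathbb{E}_{d' \in D_v}\left[L(d', W)\right] - L(d, W) = c(W) - L(d, W),
```

where $c(W)$ does not depend on $d$. Any comparison $\Delta_\mathcal{M}(d', W) > \Delta_\mathcal{M}(d, W)$ evaluated at the same checkpoints therefore adds identical $c(W)$ contributions to both sides, which cancel, leaving PBQ and FBQ unchanged; but $c(W)$ differs across segments, which is why dropping it distorts comparisons of $\Delta_\mathcal{M}$ between segments.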
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the response. The claims in the response are all strong, and I understand that the proposed method has many interesting properties compared with PoL methods.
My question is, "Why did the authors not put these things in the original paper?"
I think the paper's introduction does not clearly show the motivation of PoTD. It says PoTD is "inspired by PoL" (line 33), and that the authors "provide several verification strategies ... all published attacks in the PoL literature" (line 37). Moreover, the paper also says PoTD is a "stricter requirement than PoL" (line 52).
However, the introduction says nothing about why we need PoTD instead of PoL. These things made it hard for me to understand the position of PoTD.
I will raise my score, but I still think this paper needs a major revision to improve the presentation by including the material shown in the response. | Summary: The authors propose Proof of Training Data (PoTD), a variant of Proof-of-Learning (PoL) protocols that focuses on training set attacks, rather than the training algorithm itself. A valid PoTD protocol should be able to, at least in theory, spot when a machine learning model has been trained on a different training set than the one declared by the learner. As in PoL, the learner is required to provide a full transcript of the training process, including training data, code and intermediate checkpoints. Unfortunately, the task of verifying a training transcript is as computationally intensive as re-training the model from scratch. For this reason, the authors propose several heuristic strategies for PoTD, which rely on the fact that stochastic gradient ascent tends to first memorize and then forget the data it observes in each batch.
Strengths: From the methodology standpoint, I truly appreciate the memorization heuristics proposed in the papers. In particular, they seems to be able to defend against a large number of different threat models.
Weaknesses: The PoTD protocol proposed by the authors have considerable overlap with the existing PoL proposals. I am inclined to see it as a variant of these existing efforts, rather than a novel, independent idea.
Furthermore, the defenses proposed in the paper are heuristics and have been tested on large language models only. The authors mention that their techniques may work differently on other neural architectures, learning tasks and training procedures. Therefore, I believe more experimentation is needed to confirm their usefulness.
Additionally, the PoTD protocol requires the learner to disclose their complete learning process. Thus, there is no way to protect the intellectual property of the learner. The authors are honest about this limitation, and claim it will be addressed as future work. However, I feel this makes the paper weaker.
The data addition attack presented in lines 256-267 seems the most interesting scenario to me. It is unfortunate that the heuristic technique proposed by the authors cannot defend against it.
The writing style of the paper could be improved. I report here some specific examples.
Line 6, "and flag if the model specific harmful or beneficial data sources" is not a syntactically-correct English sentence.
Line 30, the authors start discussing how to solve proof-of-training-data before giving a precise enough definition that the reader can follow.
Line 87, c2 is missing from the definition of V (compare with Line 82). Also, why is the probability taken over c1, when c1 does not appear in any of the terms?
Line 172-173, please number all equations.
Section 3.3 is very dense and many important details are left for future work.
Footnotes 4-9 occupy almost a quarter of the page and contain important information. I would prefer having them merged with the main text.
Line 225, "on trained" should be "trained on".
Section 5 introduces new concepts, new attacks, new defenses, new notation. Since this happens so late in the paper, it ends up being a bit overwhelming. Why not explicitly organise the whole paper by threat model?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Would it be possible to bypass the memorization heuristic by changing both the model and the data? Theoretical work has shown that only a small portion of a large neural network is important for achieving good predictive performance (lottery tickets). Once a learner has trained a good model on dataset D*, it should be possible to add (a large number of) redundant neurons and retrain those on D. Would this be a valid attack on PoTD?
The Equation between lines 172 and 173 is not obvious at first sight and should be clarified. Also, please number all equations in the paper.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I commend the authors for being upfront about the limitations of their work. All my concerns have been listed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below, we respond to the specific points made in your review:
**W1:** *The PoTD protocol proposed by the authors have considerable overlap with the existing PoL proposals. I am inclined to see it as a variant of these existing efforts, rather than a novel, independent idea.*
**Response:** We emphasize that though our solution shares a common structure with techniques from the PoL literature, the PoTD problem is fundamentally different, and requires different tools.
First, we clarify the difference between PoL and PoTD. **PoL only checks if the Prover is capable of spending the compute** to perform a large training run, but does not check whether the training transcript disclosed **actually corresponds** to the training run that yielded $W^*$. For example, a Prover can do an original training run, and then replace 10% of the original training data with synthetic data as in [1] (e.g. to hide that data from the Verifier), and the resulting modified transcript **would still be a valid PoL**. PoTD is more ambitious in that it **checks whether the exact reported data transcript would actually result in** $W^*$. We will better highlight this distinction in the camera ready.
We also emphasize that existing methods from the literature, while being close to achieving PoL, are far from achieving PoTD, and our new methods are important for closing this gap. For example, segment-wise retraining fails to stop replacement-with-synthetic-data attacks, which are a major problem for PoTD (but less fatal for PoL); we address this with our defense in Section 3.3. Prior methods are also too inefficient to practically catch attacks that target a small fraction of segments (e.g. addition, subtraction, or few-segment interpolation), as a PoL-style Verifier could not afford to retrain a large fraction of original training segments to spot a few spoofed ones.
**W2:** *Furthermore, the defenses proposed in the paper are heuristics and have been tested on large language models only. The authors mention that their techniques may work differently on other neural architectures, learning tasks and training procedures. Therefore, I believe more experimentation is needed to confirm their usefulness.*
**Response:** We emphasize that our experiments on Pythia models provide validation of our method’s performance “in the wild”, as they used different architectures and training procedures (and thus are as close to a “random alternative draw of hyperparameters” as we can practically get). We encourage future work on additional modalities and architectures, and have prioritized large transformers as they are the primary focus of current regulatory discussions.
**W3:** *Additionally, the PoTD protocol requires the learner to disclose their complete learning process. Thus, there is no way to protect the intellectual property of the learner. The authors are honest about this limitation, and claim it will be addressed as future work. However, I feel this makes the paper weaker.*
**Response:** This assumption of Verifier access to training data and model checkpoints at verification-time is the standard assumption used in all prior works in the Proof-of-Learning literature. In fact, in contrast to prior defenses (such as retraining), our memorization-defense and data-order-defense do not require knowledge of the training hyperparameters, and are thus essentially out-of-the-box verifiable using only black-box API access, preserving confidentiality. We will update the Discussion to better clarify this contribution. (Catching data addition attacks still does require weights and training-hyperparameters access, hence our comment on the need for future work.)
**W4:** *The data addition attack presented in lines 256-267 seems the most interesting scenario to me. It is unfortunate that the heuristic technique proposed by the authors cannot defend against it.*
**Response:** We wish to clarify that our method does allow us to catch at-scale data addition attacks, so long as they are a significant fraction of the training data in that segment, e.g. >5%. We strongly concur that better defenses against data addition attacks would be an excellent focus of future work.
**W5-end:** *The writing style of the paper could be improved. I report here some specific examples.*
**Response:** Thank you for pointing out these edits. We will incorporate them into the final draft.
**Q1:** _Would it be possible to bypass the memorization heuristic by changing both the model and the data? [...]Once a learner has trained a good model on dataset $D^*$, it should be possible to add (a large number of) redundant neurons and retrain those on D. Would this be a valid attack on PoTD?_
**Response:** Your idea is a good one, and we suspect there may be a viable attack in this direction that would require new defenses. In its current form, this attack would not be resistant to segment-wise retraining, because when retraining any specific segment, either the real neurons or the fake neurons (or both) would not be correctly reproduced. An additional trick a Prover could add to this attack could be adding so many neurons that they dominate the reproduction-error term, and the “true” neurons appear as “noise” smaller than $\epsilon$. One defense would just be to require higher levels of reproducibility (e.g. small epsilon, or even perfect reproducibility if possible). We are also not sure that it is at all straightforward to get two parts of a neural network to memorize two different datasets, and never interfere with each others’ predictions; the tricks required to do so may introduce additional artifacts a Verifier could spot.
**Q2:** *The Equation between lines 172 and 173 is not obvious at first sight, and should be clarified.*
**Response:** We will add a step-by-step derivation in the appendix.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarification. | Summary: The paper presents a novel protocol called Proof-of-Training-Data, which a third party auditor can verify the data used to train a model. Here, the auditor will require training data, training code, and intermediate checkpoints. Experiments on two language models have demonstrated that known attacks from the Proof-of-Learning literature can be caught by this new protocol.
Strengths: This paper attempts to tackle an important security problem on trained neural network models. The proposed heuristics (i.e., memorization-based tests) are appealing and can efficiently catch spoofed checkpoints using a small amount of data. The paper is adequately structured, and solid experiments have been carried out to empirically justify the effectiveness of the proposed protocol.
Weaknesses: The concept of Proof-of-Learning has been well studied. Although the authors have discussed various Proof-of-Learning literature in the related work and the experiments section, it is still not immediately clear to me why we need this brand-new protocol (Proof-of-Training-Data). If I understand correctly, the authors are attempting to solve an even harder problem where the adversaries can have more computing power.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Line 77, could there be a comparison between the formal formulation of Proof-of-Learning and Proof-of-Training-Data?
- Section 4, as most of the Proof-of-Learning experiments have been done on image datasets like CIFAR, I am curious if PoTD could perform similarly well on those image datasets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below, we respond to the specific points made in your review:
**W1:** *It is still not immediately clear to me why we need this brand-new protocol (Proof-of-Training-Data).*
**Response:** We were uncertain as to whether you were unclear about “the difference between the definitions of PoL and PoTD”, or unclear about “why existing methods from the PoL literature are insufficient to guarantee PoTD”. Just in case, we provide answers to both.
First, we clarify the difference between PoL and PoTD. **PoL only checks if the Prover is capable of spending the compute** to perform a large training run, but does not check whether the training transcript disclosed **actually corresponds** to the training run that yielded $W^*$. For example, a Prover can do an original training run, and then replace 10% of the original training data with synthetic data as in [1] (e.g. to hide that data from the Verifier), and the resulting modified transcript **would still be a valid PoL.** PoTD is more ambitious in that it **checks whether the exact reported data transcript would actually result in** $W^*$. We will better highlight this distinction in the camera ready.
Next, we clarify why existing methods from the PoL literature are insufficient to solve our PoTD problem:
- **Segment-wise retraining is known to be potentially vulnerable to synthetic data attacks** [1], which allow the Prover to swap the real data out for carefully-crafted fake data. Our new verified-data-shuffling defense in Section 3.3 stops this attack.
- **Retraining is inefficient for catching small data lies**. Given 100 checkpoints, if a Prover wants to misreport 1 checkpoint’s worth of data, then to catch them with 80% probability the Verifier would have to rerun 80% of the original training run. However, with our method, the Verifier needs to spend as little as 1% (and only inference compute) to catch subtraction/interpolation attacks, and slightly more for addition attacks (which require retraining a couple of segments to calibrate segment magnitudes, but still yield a roughly order-of-magnitude improvement). Even if a Verifier is only worried about Provers misreporting 10 checkpoints, the Verifier would need to retrain $\approx 16\%$ of the checkpoints (see Appendix C of [2] for a similar derivation) to catch the lying Prover with 80% probability, much larger than the 1% or few percent required by our method.
We will make this direct comparison between PoL and PoTD clearer in the final version of the paper, and use the above examples to better highlight the shortcomings of prior methods at achieving PoTD.
[1] Zhang, Rui, et al. "“Adversarial Examples” for Proof-of-Learning." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
[2] Shavit, Yonadav. "What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring." arXiv preprint arXiv:2303.11341 (2023).
**Q1:** *Line 77, could there be a comparison between the formal formulation of Proof-of-Learning and Proof-of-Training-Data?*
**Response:** We will include this comparison, see our response to W1.
**Q2:** *Section 4, as most of the Proof-of-Learning experiments have been done on image datasets like CIFAR, I am curious if PoTD could perform similarly well on those image datasets.*
**Response:** As mentioned in lines 312-316, we are also interested in future work testing PoTD on new modalities. Our efforts were particularly focused on scaling PoTD to much larger models than the traditional PoL literature, to demonstrate its practicality. Specifically, works like [3] consider ResNets on CIFAR (ResNet-50 has 25M parameters, and CIFAR-10 has 60K examples and is 163 MB), whereas our results are shown on LLMs with large text corpora (125M to 1B parameters; OpenWebText has ~9B tokens and is ~17GB).
[3] Jia, Hengrui, et al. "Proof-of-learning: Definitions and practice." 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks a lot for the clarification! | Summary: This paper describes techniques and tools that can be used for verifying the "provenance" of large neural models, to evaluate their risks. These techniques and tools are part of "protocols" used by a model trainer to convince a "verifier" that the training data was used to produce the model parameters. The authors show experimentally that their prescribed procedures can catch a variety of known attacks from the "proof-of-learning" literature.
Strengths: * The paper addresses an increasingly important problem, as large neural models are becoming very popular, and advances practical techniques that can be used by regulators to check the provenance of large models.
* The authors present convincing evaluation using GPT-2, demonstrating that the proposed procedures are effective in catching a variety of attacks, such as glue-ing and interpolation as well as data addition and subtraction.
Weaknesses: * I find that the use of "proofs" in the title and throughout the paper is misleading as the authors do not present techniques that amount to an actual proof.
* I think it is great that the techniques presented by the authors are practical and effective wrt several attacks but I wonder if this is enough for regulators. I mean they would possibly need stronger guarantees for such techniques.
* It is unclear to me how the verification strategies presented in section 3 relate to definition 1. The authors should work on adding a theorem that clearly states that their strategies achieve the desired properties, i.e., the verifier accepts/rejects true witnesses/spoofs with the desired probabilities.
* The technical contribution beyond "proof-of-learning" is unclear.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: In conclusion this is very interesting work but may be too preliminary for publication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Below, we respond to the specific points made in your review:
**W1:** *I find that the use of "proofs" in the title and throughout the paper is misleading as the authors do not present techniques that amount to an actual proof.*
**Response:** The use of the word “Proof” in Proof-of-Training-Data comes from a large body of existing literature (e.g. “Proofs-of-Learning”, which are themselves also not “proofs”). However, we agree that this may be confusing, so we have changed the paper’s title for the final version to “Tools for Verifying Neural Models’ Training Data”.
**W2:** *I think it is great that the techniques presented by the authors are practical and effective wrt several attacks but I wonder if this is enough for regulators. I mean they would possibly need stronger guarantees for such techniques.*
**Response:** In developing this work, we spoke with US regulators who expressed interest in using these techniques, as they substantially improve the state of verifiability relative to the current baseline (which is blindly trusting Provers). Indeed, much real-world regulatory verification is done heuristically rather than with proofs.
**W3:** *It is unclear to me how the verification strategies presented in section 3 relate to definition 1. The authors should work on adding a theorem that clearly states that their strategies achieve the desired properties, i.e., the verifier accepts/rejects true witnesses/spoofs with the desired probabilities.*
**Response:** Thank you for bringing this to our attention. We will add a subsection in Section 2 better clarifying Definition 1’s connection to our contribution, summarized below:
- Definition 1 is intended to formalize the PoTD problem and defines robustness over all computable adversaries $\mathcal{A}$, but mathematically proving such guarantees is not yet possible given the lagging state of neural-network theory. Indeed, Jia et al.'s PoL protocol [1] itself does not claim provable robustness, as evidenced by the attacks that have since been found.
- Instead, we follow the heuristic approach common in the PoL literature by enumerating a common-sense list of possible attacks, which provides a first approximation to the full space of attacks $\mathcal{A}$ in the definition.
- We then show a solution that is robust to these attacks, and thereby demonstrate that our defenses heuristically satisfy Definition 1. As mentioned in lines 283-291, we hope future work will find new attacks (thereby better approximating $\mathcal{A}$) which could need to be addressed with new methods, just as the original PoL paper inspired further attacks and defenses.
[1] Jia, Hengrui, et al. "Proof-of-learning: Definitions and practice." 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021.
**W4:** *The technical contribution beyond "proof-of-learning" is unclear.*
**Response:** To clarify, beyond defining the new problem of Proof-of-Training-Data (which is distinct from the existing PoL problem), we also contribute the following:
- We propose two new defenses, memorization-tests and verifiable data shuffling, which successfully address all published (and currently unaddressed) attacks on PoL. We also highlight several new attacks specific to PoTD (data addition & subtraction) and show that our methods can be effective at catching these too.
- Our methods are substantially more efficient than prior work, which means that we are also the first to scale PoL/PoTD methods to LLM-scale using academic compute budgets. (Prior PoL papers scaled only to ResNet-50 on CIFAR.)
---
Rebuttal 2:
Comment: Thank you for your response which clarifies my questions. I will raise my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their useful feedback, and are glad that many of them enjoyed the paper. We have written detailed responses to each reviewer’s comments, and thank the reviewers for their recommendations.
**For Reviewer pydU** we attach the figure referred to in our response.
Pdf: /pdf/1a0a5d082ccc32483dac5635bbacac8882cd75a1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection | Accept (poster) | Summary: This paper contributes new parameter and neuron pruning methods for OOD detection. Built upon the energy-based score, it defines the sensitivity of a parameter (or neuron) with respect to the energy score using gradients.
Strengths: The arguments are clear and easily understood. The method is well motivated by the removal of insensitive weights and neurons. The experiments are comprehensive and clear, and the results are promising.
Weaknesses: $\bullet$ The major weakness is the theoretical explanation between the sensitivity and OOD performance, but this is clearly pointed out in the paper.
$\bullet$ The usage of the sensitivity is based on intuition, though it is a good technique for solving OOD problems.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: $\bullet$ What are the techniques (feature ensemble or/and input preprocessing) used in Mahalanobis score in comparison?
$\bullet$ This pruning method seems to be very promising. I understand this was discussed as limitation and future work, but is it possible to share the insights why the sensitivity term(s) will work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The limitations are fully discussed with explanations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment** We thank Reviewer qR61 (R4) for the helpful suggestions.
> **W:** The major weakness is the theoretical explanation between the sensitivity and OOD performance, but this is clearly pointed out in the paper. The usage of the sensitivity is based on intuition, while this is a good technique for solving OOD problems.
**A:** We provide three remarks to explain the insight behind our work:
## Insight Justification
**Remark 1: Parameter and neuron pruning avoid overconfident predictions**
Over-parameterized deep neural networks often generate overconfident predictions, even for OOD samples [1,2,3]. Therefore, most existing methods improve OOD performance by avoiding overconfident predictions [4,5]. For a deep network, the last fully connected layer can be regarded as a linear classifier. To prevent the classifier from overfitting, the most widely used techniques are $\ell_1$ and $\ell_2$ regularization, which can be formulated as
$\min_{\theta}\mathbb{E}_{(x,y)\sim D}\Vert\theta^\top\cdot h(x)-y\Vert_2^2 + \lambda\mathcal R(\theta)$
Here $\mathcal{R}(\theta)$ represents the $\ell_1\text-$ or $\ell_2\text{-norm}$ of $\theta$. As sensitivity-based parameter pruning reduces $\mathcal{R}(\theta)$, it can be regarded as an effective post-hoc regularization technique that reduces model complexity and avoids overconfident outputs. Moreover, neuron pruning is similar to the dropout technique [6], which has also been demonstrated to reduce overfitting. Therefore, the proposed OPNP can avoid overconfident predictions and potentially improve the OOD detection performance.
**Remark 2: Pruning the least sensitive parameters and neurons improve separability between ID and OOD samples**
We denote by $f_j(x)$ the logit output of the $j\text{-th}$ class, by $\mathbf{W}$ the output weights, and by $\mathbf{M}$ the sensitivity matrix of $\mathbf{W}$. After pruning the least sensitive parameters, the logit reduction of the $j\text{-th}$ class can be estimated as
$\Delta f_j(x) = \sum_{\mathbf{M}\_{jk}<\Omega\_{min}^w}\mathbf{M}\_{jk}\cdot\lvert\mathbf W_{jk}\rvert\cdot h_k(x)$
where the sum runs over the pruned connections among the $L$ hidden neurons, and $\Omega_{min}^w$ denotes the sensitivity threshold. Eq. 2 shows that the logit reduction is positively correlated with the average sensitivity of the pruned weights. As the parameter sensitivity is computed over the training ID set, the parameters that are least sensitive on the ID distribution should be more sensitive for OOD samples on average, i.e.,
$\sum_{\mathbf{M}\_{jk}<\Omega\_{min}^w}\mathbf{M}\_{jk}^{OOD} > \sum_{\mathbf{M}\_{jk}<\Omega\_{min}^w} \mathbf{M}_{jk}^{ID}$
Therefore, the logit reduction on OOD samples is larger than on ID samples, which tends to improve the separability between ID and OOD samples, and leads to better OOD detection performance.
We also show the parameter sensitivity distribution (on the ID and OOD sets) of the pruned weights in Fig. 1 (see the PDF), which demonstrates that: (1) the pruned weights contain many highly sensitive connections for the OOD set; (2) the average sensitivity on the OOD set (0.00024) is larger than that on the ID set (0.00018). This experimentally verifies Eq. 3.
**Remark 3: Pruning the most sensitive parameters and neurons improves generalization**
We follow [9] to define the first-order flatness as
$R_\rho(\theta) \triangleq \rho\cdot\max_{\theta'\in B(\theta,\rho)}\Vert\nabla f(\theta')\Vert, \quad \forall\theta\in\Theta$
where $\nabla f(\theta')$ denotes the gradient at point $\theta'$, $B(\theta,\rho) = \{\theta': \Vert\theta-\theta'\Vert<\rho\}$ denotes the open ball of radius $\rho$ centered at $\theta$ in Euclidean space, and $\rho$ denotes the perturbation radius that controls the magnitude of the neighbourhood. The flatness $R_\rho(\theta)$ describes how flat the function landscape is [9]. It has been demonstrated that a flatter landscape leads to better generalization [7,8,9]. Eq. 4 indicates that the first-order flatness is determined by the largest gradient norm; therefore, a smaller gradient norm yields a flatter landscape, and many recent works penalize the gradient norm to obtain better generalization [7,8]. By pruning the most sensitive parameters, our method improves the flatness of the function landscape and thus generalization. However, according to Remark 2, pruning the most sensitive parameters and neurons may also hurt the separability between ID and OOD samples, so there is a trade-off between better generalization and better ID-OOD separability. This explains why OOD performance improves when very few sensitive parameters are pruned and drops at large pruning ratios (see Fig. 3 of our paper).
> **Q:** What are the techniques (feature ensemble or/and input preprocessing) used in Mahalanbis score in comparison?
**A:** For the results of the Mahalanobis score, we follow [1,2] and report the performance from their papers, since we use the same evaluation setting. The mean and covariance matrix are computed on the training set. The implementation details of the Mahalanobis score can be found in [3].
[1] React: Out-of-distribution detection with rectified activations. NeurIPS 2021.
[2] DICE: Leveraging Sparsification for Out-of-Distribution Detection. ECCV 2022.
[3] A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NeurIPS 2018.
[4] Energy-based Out-of-distribution Detection. NeurIPS 2020.
[5] DICE: Leveraging Sparsification for Out-of-Distribution Detection. ECCV 2022
[6] Learning sparse networks using targeted dropout. Arxiv 2019.
[7] Sharpness-aware minimization for efficiently improving generalization. ICLR 2021
[8] Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning. ICML 2022
[9] Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization. CVPR 2023
---
Rebuttal Comment 1.1:
Title: Thank you for your reply
Comment: Thank you for your reply. I like the explanation. Thank you again and good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your further review and immediate feedback. We are happy to provide more explanations if you have any other questions. | Summary: This paper proposes a parameter and neuron pruning strategy to enhance out-of-distribution detection. The approach involves removing near-zero- and high-sensitivity parameters, which are measured by the average gradient corresponding to all training in-distribution samples. Empirical results demonstrate the superior performance of the proposed algorithm.
Strengths: The proposed algorithm is characterized by its simplicity and remarkable effectiveness in out-of-distribution detection tasks, exhibiting consistently high performance.
Weaknesses:
While the proposed principle shows promising results, providing theoretical explanations for its success would be valuable. Additionally, selecting the appropriate hyper-parameters beforehand presents challenges.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Although empirical evidence indicates performance gains when pruning the largest sensitivity value (as seen in Figure 3), understanding its behavior and identifying optimal values require further investigation. Users would benefit from discussions clarifying such cases and receiving intuitive suggestions to aid in parameter selection. Additionally, exploring the impact of using a fixed threshold instead of a percentile for pruning sensitivity values may lead to improved performance.
The reliance on the average sensitivity of connection weights for neuron pruning lacks an intuitive explanation. Other statistics, such as min, max, or median, could also be considered as potential alternatives.
An explicit explanation of how to combine OPNP with ReAct should be provided.
To enhance efficiency, it is advisable to consider pruning based on a subset of the training set rather than the entire set.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have addressed some limitations of their work, and there are additional suggestions for improvement in the 'Paper Weakness'' part and "Questions" part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment** We thank Reviewer tCft (R3) for the insightful questions and suggestions, which really helped us improve our paper. Here, we respond to the questions and suggestions point by point.
> **W1:** While the proposed principle shows promising results, providing theoretical explanations for its success would be valuable. Users would benefit from discussions clarifying such cases and receiving intuitive suggestions to aid in parameter selection
***
**A:** Thanks for your suggestion. We provide three remarks to explain the insight into why the gradient-based sensitivity works. Please refer to our response to **Reviewer qR61** for the details of the remarks.
**Remark 1: Parameter and neuron pruning avoid overconfident predictions**
**Remark 2: Pruning the least sensitive parameters and neurons improve separability between ID and OOD samples**
**Remark 3: Pruning the most sensitive parameters and neurons improves generalization**
***
> **Q1:** Selecting the appropriate hyper-parameters beforehand presents challenges. Exploring the impact of using a fixed threshold may lead to improved performance.
**A:** As we do not know the distribution of sensitivities a priori, it is difficult to determine a fixed threshold. We follow ReAct [3] and DICE [4] in using a pruning ratio, and explore the ratio on the validation set. The results in Tables 1 and 2 show that the selected pruning ratio works across different OOD sets.
***
> **Q2:** The reliance on the average sensitivity of connection weights for neuron pruning lacks an intuitive explanation. Other statistics, such as min, max, or median, could also be considered as potential alternatives.
**A:** It is difficult to prove theoretically which statistic works best. When we designed the sensitivity metric for neuron pruning, we tried different statistics, such as the mean, variance and norm; the results showed that the mean works best.
The $i\text{-th}$ neuron in the penultimate layer contributes to all output neurons, so its sensitivity should be defined based on the sensitivities of the connection weights between the $i\text{-th}$ hidden neuron and all output neurons. The $\ell_1$ or $\ell_2$ norm of the weights is commonly used to measure the importance of units in deep networks [1, 2]. We define the neuron sensitivity as the average sensitivity of its connections, which is equivalent to using the $\ell_1$ norm of the weight sensitivities and reflects the average sensitivity across all classes.
**Ablation study on different statistics**
In the main experiments, we utilize the average sensitivity (equivalent to the $\ell_1$ norm) as the neuron sensitivity. Other statistics, such as the $\ell_2$ norm, variance, max, min and median, may also be feasible measures of neuron sensitivity. We therefore performed a comparison experiment on the performance of different statistics. The results are presented below: using the mean, min or median of the weight sensitivities achieves similar performance, while the variance performs worst. As the mean is more robust and less susceptible to noisy connections, we suggest users utilize the mean sensitivity for neuron pruning.
FPR95 and AUROC in SUN and Places datasets are reported, ResNet50 model is utilized.
| Statistics | Mean | Max | Min | Median | Norm | Variance |
|:----------:|:---------:|:---------:|:----------:|:-----------:|:---------:|:-----------:|
| SUN | 26.7/94.7 | 32.4/93.1 | 26.0/95.1 | 26.3/94.6 | 31.2/92.7 | 46.1/88.5 |
| Places | 32.7/92.9 | 39.8/91.2 | 32.6/92.8 | 33.4/92.5 | 34.5/91.2 | 51.1/87.4 |
We will revise the submission (lines 172-174) to add a more intuitive explanation, and will discuss the results of using different statistics for neuron pruning in the appendix.
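To make the comparison concrete, the candidate statistics can be sketched as reductions over the weight-sensitivity matrix (a minimal illustration with synthetic data; shapes and names are ours, not the paper's code):

```python
import numpy as np

def neuron_sensitivity(M, statistic="mean"):
    """Collapse a (num_classes, num_neurons) weight-sensitivity matrix M
    into one score per hidden neuron via the chosen statistic."""
    reducers = {"mean": np.mean, "max": np.max, "min": np.min,
                "median": np.median, "norm": np.linalg.norm, "variance": np.var}
    return reducers[statistic](M, axis=0)

rng = np.random.default_rng(0)
M = np.abs(rng.normal(size=(1000, 2048)))        # e.g. 1000 classes x 2048 features
scores = neuron_sensitivity(M, "mean")           # one sensitivity per neuron
prune_mask = scores < np.quantile(scores, 0.10)  # least-sensitive 10% of neurons
```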
***
> **Q3:** An explicit explanation of how to combine OPNP with ReAct should be provided.
**A:** Thanks for your suggestion. ReAct is a clipping operation that truncates activations above c to limit the effect of noise. It can be defined as $\overline h(x) = \min(h(x), c)$, where $c$ is the clipping threshold. Therefore, ReAct can be integrated into our method easily by clipping the neuron outputs after our neuron pruning stage. The clipping ratio is set to 10% in ResNet50 and 5% in ViT-B/16.
We will add the information in the revised paper as suggested.
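A minimal sketch of the combination (function names are ours; the thresholding follows the ReAct definition quoted above):

```python
import numpy as np

def react_clip(h, ratio=0.10, c=None):
    """ReAct rectification h_bar = min(h, c); if c is not given, choose it
    so that roughly `ratio` of the activations are clipped."""
    if c is None:
        c = np.quantile(h, 1.0 - ratio)
    return np.minimum(h, c)

def opnp_then_react(h, neuron_mask, ratio=0.10):
    """OPNP neuron pruning (zero the masked-out neurons), then ReAct."""
    return react_clip(h * neuron_mask, ratio=ratio)
```

Per the response above, the clipping ratio would be 0.10 for ResNet50 and 0.05 for ViT-B/16.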
***
> **Q4:** To enhance efficiency, it is advisable to consider pruning based on a subset of the training set rather than the entire set.
**A:** Good suggestion. That is actually what we do when we try different strategies to get the parameter sensitivity.
We performed comparison experiments using different numbers of training samples for parameter sensitivity estimation. The results are presented below and show that estimating the sensitivity with only 1% of the training samples also achieves promising performance. These ablation results will be added to the appendix of our paper.
FPR95 and AUROC in SUN and Places datasets are reported, ResNet50 model is utilized. w/o pruning denotes baseline without pruning.
| Sampling Ratio | w/o pruning | 1% | 5% | 20% | 100% |
|----------------|-------------------|-------------|-------------|-------------|-------------|
| SUN | 59.3/85.9 | 32.57/92.83 | 32.06/92.78 | 30.92/93.05 | 30.40/93.17 |
| Places | 64.9/82.9 | 42.30/90.00 | 41.96/90.08 | 41.21/90.33 | 40.76/90.65 |
***
**Reference**
[1] Pruning Filters For Efficient ConvNets. ICLR 2017.
[2] Learning sparse networks using targeted dropout. Arxiv 2019.
[3] React: Out-of-distribution detection with rectified activations. NeurIPS 2021.
[4] DICE: Leveraging Sparsification for Out-of-Distribution Detection. ECCV 2022.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you for your rebuttal; it has effectively addressed most of my concerns. | Summary: This submission proposes a post-hoc method for detecting out-of-distribution samples, by pruning the final classification layer, using a sensitivity metric based on the gradients of the energy scores. The proposed method is mainly validated on residual networks and visual transformers based on the ImageNet dataset.
Strengths: - The submission proposes an interesting application using pruning to out-of-distribution detection, the latter being an important research problem
- The proposed method is simple, and can be applied with a fairly small computational overhead
- The results suggest that the proposed method, also when used in conjunction with other methods, can achieve state-of-the-art results on several tasks.
Weaknesses: - The submission is not very well written, particularly the introduction, with quite a few typos and grammar issues (e.g. using “post-hot” instead of “post-hoc” in quite a few places). This makes reading a bit difficult.
- The proposed method is not very novel, as other OOD detection methods have used pruning (e.g. references 20, 35 seem quite similar)
- The performance of the method depends quite heavily on the pruning thresholds (see Figure 3) and can substantially decrease the accuracy on the original ID classification task (see Table 3). This makes the method less likely to be used “out of the box”.
- While the authors introduce their method as "post-hoc", in fact it has a similar cost to training the model for one epoch, since the gradients for all the samples have to be computed.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Suggestion: I would advise the authors to go through the submission and correct the typos and grammar issues, for the next revision
- I believe Equation (4) is an approximation, rather than an identity. Have the authors considered what would be a theoretical explanation that would connect the approximation in Equation (4) and the removal of least and most sensitive weights?
- What is the norm in Equation (6)? Should it be an absolute value instead?
- Can the authors provide more explanations on the intuition that “parameters and neurons with exceptionally large sensitivity can easily lead to overconfidence” (51-52)?
- Lines 145-147: it is hard to see these differences from Figure 1, I would suggest adding numbers
- Lines 181-182: this is not really a Gaussian (e.g it is not symmetric)
- Can you provide more details on how OPNP is combined with ReAct?
- In Table 2, can you provide results for DICE + ReAct?
- Some of the plots and tables do not include information about the model and dataset used. For example, what is the model used in Table 3 or Table 4? What is the model and dataset from Figure 4?
- The authors show results when (independently) pruning other layers in a ResNet50 model (Appendix Table 3). What happens if the weights are pruned globally, instead of each layer at a time?
- Have the authors considered second order information for computing the sensitivity metric? For example, a second order Taylor approximation could be used instead of Equation (4), and since pruning is only done for the final layer, the Hessian could be more easily calculated. This is a similar approach to Optimal Brain Surgeon (Hassibi et al.)
- Can you clarify what is meant by "(subset of) iNaturalist, SUN, Place and Texture" (line 211)? Are the same corresponding datasets used for evaluation when comparing with the other methods from Tables 1 and 2?
- Can you please confirm that the ID test set used for validation (and determining the pruning hyperparameters and OOD threshold) was different than the one used for the final testing?
- Can you please clarify the differences between Algorithm 1 and the text which specifies that the parameter and neuron sensitivities are computed based on the entire training set? In line 5 from Algorithm 1, it sounds like neuron pruning is done individually for each sample, but from the text the neuron sensitivities are computed on the entire training set. In my understanding, the model is pruned only once, and the same sparse model is used for evaluation on different OOD tasks. Can you please confirm whether my understanding is correct?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: While the authors have addressed some of the limitations of their work in Section 5, I believe the proposed method, in its current form, has some technical limitations that would make it more difficult to use “plug-and-play” as the authors hope. For example, the method relies quite heavily on the optimal hyperparameters (thresholds) for pruning and it can have a substantial negative impact on the ID classification accuracy for the original task. I would advise the authors to consider how the search of the pruning hyperparameters could be automatized, or at least made more efficient.
--------------------------------
**Edited after rebuttals**
After reading the authors' answer, I have raised my score from 4 to 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment** We thank Reviewer 8qxy (R2) for the careful reviews and insightful suggestions, which really helped us improve our paper. Due to space constraints, part of the responses can be seen in our global response.
***
> **W1:** Typos and grammar issues.
**A:** We have corrected the typos in the revised paper, and will polish the whole paper carefully in the next version.
***
> **W2:** The proposed method is not very novel, as other OOD detection methods have used pruning (e.g. Ref 20, 35)
**A:** In our global response, we further explain our insight through three remarks and clarify the novelty of our proposal.
***
> **W3:** The performance depends heavily on the thresholds (Fig.3) and can substantially decrease the ID accuracy (Tab.3).
**A:** (1) According to Fig. 3, of all the hyperparameters, only $\rho_{max}^w$ is relatively sensitive, which can be explained by **Remark 3** (see the global response). (2) The performance is consistently improved over a wide range of pruning ratios; the optimal ratios can be set to $\rho_{min}^w\in[10,30]$ and $\rho_{max}^w\in[0.5,3]$ across different OOD sets (see Fig. 3). (3) As we only modify the last FC layer, we can always use the original FC layer for classification, which ensures classification accuracy identical to the unpruned model. Existing ReAct and DICE also rely on using the original FC layer for classification.
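As an illustration of how the two-sided pruning ratios act (a sketch with our own variable names, using the ranges quoted above):

```python
import numpy as np

def prune_by_sensitivity(W, M, rho_min=20.0, rho_max=1.0):
    """Zero out weights whose sensitivity M is below the rho_min-th
    percentile (least sensitive) or above the (100 - rho_max)-th
    percentile (most sensitive); ratios are in [0, 100]."""
    lo, hi = np.percentile(M, [rho_min, 100.0 - rho_max])
    return W * ((M >= lo) & (M <= hi))

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 2048))      # last FC layer weights
M = np.abs(rng.normal(size=W.shape))   # parameter sensitivities
W_pruned = prune_by_sensitivity(W, M)  # ~21% of connections removed
```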
***
> **W4:** The authors introduce their method as "post-hoc", but it has a similar cost to training the model for one epoch
**A:** "Post-hoc" is relative to training-based methods. The cost of parameter sensitivity estimation is much cheaper than training the model, since we only compute gradients for the last FC layer, and the parameter sensitivity can be estimated on a subset of the training data (see the global response). Therefore, the cost is less than 1% of that of training-based methods (even for one epoch).
Although our method is more costly than MSP and Energy, it has a similar cost to other popular post-hoc methods (ReAct and DICE) while achieving better performance.
***
> **Q1:** Equation (4) is an approximation, rather than an identity. Any theoretical explanation of removing the least and most sensitive weights?
**A:** Thanks for your suggestion; we have revised Eq. 4 as
$E(x_k; \theta+\delta) - E(x_k; \theta) \approx \sum_{i,j}g_{ij}(x_k)\delta_{ij}$
We have provided three remarks in the global response to explain why sensitivity based pruning works.
***
> **Q2:** What is the norm in Equation (6)? Should it be an absolute value instead?
**A:** Yes, it should be an absolute value; we have revised it as
$\mathbf{M}_{ij} = \frac{1}{m}\sum_{k=1}^m \lvert g_{ij}(x_k)\rvert$
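For illustration, the revised metric can be computed in closed form for the last FC layer. The sketch below is ours (not the authors' code) and assumes the energy score $E(x) = -\log\sum_c e^{z_c}$ with logits $z = W f(x)$, for which $\partial E / \partial W_{ij} = -\mathrm{softmax}(z)_i\, f_j(x)$:

```python
import numpy as np

def energy_grad_fc(W, feat):
    """Gradient of the energy score E(x) = -logsumexp(W @ feat) w.r.t. the
    last FC weights W (closed form: dE/dW_ij = -softmax(z)_i * feat_j)."""
    z = W @ feat
    p = np.exp(z - z.max())
    p /= p.sum()                                   # softmax over classes
    return -np.outer(p, feat)                      # shape (C, D)

def sensitivity(W, feats):
    """M_ij = (1/m) * sum_k |g_ij(x_k)|, the revised Eq. 6."""
    return np.mean([np.abs(energy_grad_fc(W, f)) for f in feats], axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32))                      # toy FC layer: 10 classes, 32-d features
feats = rng.normal(size=(64, 32))                  # m = 64 ID feature vectors
M = sensitivity(W, feats)
print(M.shape)                                     # one sensitivity score per weight
```

Averaging the absolute per-sample gradients, rather than the signed gradients, is what makes the metric reflect how strongly each weight can move the OOD score in either direction.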
***
> **Q3:** Provide more explanations on the intuition that “parameters and neurons with exceptionally large sensitivity can easily lead to overconfidence” (51-52)?
**A:** We have provided three remarks to explain the insight in our global response. The loss landscape is flat if the model output does not change drastically in the neighbourhood of the model parameters. Many existing works have shown that a flat landscape leads to better generalization. We have revised the sentence as "Parameters with exceptionally large sensitivity can lead to a sharp landscape, which has been shown to hurt model generalization [1]".
***
> **Q4:** Lines 145-147: it is hard to see these differences from Figure 1, I suggest adding numbers
**A:** We have revised Fig.1 as suggested.
***
> **Q5:** Lines 181-182: this is not really a Gaussian
**A:** The sentence has been revised as "As observed, before neuron pruning, there are several risky neurons with exceptionally large sensitivities."
***
> **Q6:** More details on how OPNP is combined with ReAct?
**A:** We have provided the details of OPNP+ReAct in the global response.
***
> **Q7:** In Table 2, can you provide results for DICE + ReAct?
**A:** We have added the results for DICE+ReAct in Table 2 as suggested. Results are as follows.
FPR95/AUROC are reported.
| OOD Dataset | iNaturalist | SUN | Places | Texture | Average |
|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
| DICE+ReAct | 2.65/99.38 | 29.45/93.52 | 38.45/91.17 | 33.78/93.27 | 26.08/94.34 |
***
> **Q8:** Some of the plots (Fig. 4) and tables (Table 3, 4) do not include information about the model and dataset used.
**A:** The ResNet50 model is used in Table 3, Table 4, and Fig. 4. In Fig. 4, we utilize the prediction results on ImageNet-1K to evaluate the calibration performance. We have added this information in the revised paper.
***
> **Q9:** What happens if the weights are pruned globally?
**A:** Please see the **Ablations on global pruning** in our global response.
***
> **Q10:** Have the authors considered second order information for computing the sensitivity?
**A:** Yes, we have considered using the eigenvalues of the Hessian matrix to measure parameter sensitivity. However, for ResNet50 on ImageNet-1K, the Hessian matrix is 2,048,000 x 2,048,000, which would require about 8 TB of memory in float16.
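The quoted figure can be sanity-checked with a line of arithmetic: the last FC layer of ResNet50 on ImageNet-1K has 2048 × 1000 = 2,048,000 parameters, so a dense Hessian over those parameters alone occupies:

```python
# Sanity check of the ~8 TB figure: a dense Hessian over the 2,048,000
# last-FC parameters (2048 features x 1000 classes), stored in float16.
n_params = 2048 * 1000
bytes_per_entry = 2                 # float16 = 2 bytes
hessian_tb = n_params ** 2 * bytes_per_entry / 1e12
print(hessian_tb)                   # ≈ 8.39 TB
```

This is why first-order (gradient-based) sensitivity is the practical choice at this scale.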
***
> **Q11:** Can you clarify what is meant by "(subset of) iNaturalist, SUN, Place and Texture" (line 211)? Please confirm that the ID test set used for validation was different than the one used for testing?
**A:** Please see the **Evaluation Setting** in our global response.
***
> **Q12:** In line 5 from Algorithm 1, the neuron pruning is done individually, but from the text the neuron sensitivities are computed on the entire training set.
**A:** The pruned parameter and neuron indices are computed once, based on the training set. But neuron pruning for different test samples is done individually, by setting the corresponding features to zero at inference time (see the revised pseudocode in the PDF).
[1] Sharpness-aware minimization for efficiently improving generalization
---
Rebuttal 2:
Comment: Thank you for addressing my questions! After reading the answers and the other reviews, I have raised my score from 4 to 5.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer again for evaluating our work and carefully reading our response. You are more than welcome to post any further comments. | Summary: This paper proposes to adopt weight and neuron pruning for OOD detection. The proposed method can be combined with training-based approaches, demonstrating SOTA performance.
Strengths: 1. The motivation of the method is reasonable and sound.
2. The results of OPNP+ReAct are strong.
Weaknesses: 1. It is not new for the ML community that sparsity can help improve OOD detection. Numerous related works have been proposed that adopt sparsity/pruning to improve OOD detection. While they might use different pruning approaches, it is necessary to discuss and compare them in the submission.
[1] Cheng, Zhen, et al. "Average of pruning: Improving performance and stability of out-of-distribution detection." arXiv preprint arXiv:2303.01201 (2023).
[2] Djurisic, Andrija, et al. "Extremely simple activation shaping for out-of-distribution detection." arXiv preprint arXiv:2209.09858 (2022).
[3] Sun, Yiyou, and Yixuan Li. "Dice: Leveraging sparsification for out-of-distribution detection." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[4] Liu, Shiwei, et al. "Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity." arXiv preprint arXiv:2106.14568 (2021).
2. The pseudocode of Algorithm 1 looks very trivial, and can be significantly improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the above limitation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment**
We thank Reviewer Dntp (R1) for the feedback and suggestions. Here, we respond to the concerns point by point.
> **W1:** It is not new for the ML community that sparsity can help improve OOD detection. Numerous related works have been proposed that adopt sparsity/pruning to improve OOD detection [1,2,3,4]. While they might use different pruning approaches, it is necessary to discuss and compare them in the submission.
**Ans:** This is a good point, and the recommended papers are indeed related to our work; we will improve Section 3.4 according to the suggestion.
In our global response, we provided three remarks to explain the insight of our work, and discussed the relationship/difference between our method and these related methods.
We acknowledge that sparsity and pruning are common strategies in machine learning, and some previous works have adopted sparsity/pruning to improve OOD detection [1,2,3,4]. However, our proposal differs significantly from those methods:
(1) The proposed OPNP is training-free, while Deep Ensemble and AoP rely on training multiple sparse networks from scratch with dynamic sparsity regularization [1,3].
(2) We propose a gradient-based sensitivity metric for post-hoc pruning, whereas those methods utilize the magnitude of features [2] or weights [1,3] as the pruning metric. The sensitivity-based pruning metric is more intuitive and technically sound compared to those magnitude-based metrics.
(3) Our proposal prunes both weights and neurons, and prunes both the most and the least sensitive units for OOD detection, whereas existing methods only prune the weights or features with small magnitude [1,2,3,4].
(4) Our proposal is simple, intuitive, and effective, and demonstrates much better performance than DICE [4] (see Table 1 and Table 2 of our paper).
(5) We explain why the sensitivity-based metric is reasonable for improving OOD detection performance, and why pruning both the most and the least sensitive units works.
Besides, AoP (released on arXiv in March) is a work contemporaneous with ours.
Based on our insights and contributions, we believe our work is novel enough and should be known by the community.
*******************
> **W2:** The pseudocode of Algorithm 1 looks very trivial, and can be significantly improved.
**A:** Thank you for your suggestion. We have improved the pseudocode (see the pdf), and will revise the submission accordingly.
******************
We have carefully provided our response to your questions and concerns. We look forward to your reply to see whether our response has resolved your concerns. If you have any other questions, please let us know and we will be happy to provide more explanations. We would be grateful if you could raise your rating of our paper.
******************
**Reference**
[1] Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity. ICLR 2022
[2] Extremely simple activation shaping for out-of-distribution detection. ICLR 2022
[3] Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection. Arxiv 2023.
[4] Dice: Leveraging sparsification for out-of-distribution detection. ECCV 2022.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I thank the authors for the explanation. I would like to raise the score to Borderline accept.
Could the authors explain why pruning weights and neurons that are the most and the least sensitive units for OOD detection is a better choice?
Also, could the authors explain what we can observe from the Sensitivity distribution in Figure 2?
---
Reply to Comment 1.1.1:
Comment: Thank you again for your further review and immediate feedback. And we appreciate that you can increase your rating of this submission. The explanations are provided below.
> **Q1:** Could the authors explain why pruning weights and neurons that are the most and the least sensitive units for OOD detection is a better choice?
**A:** Existing pruning-based methods mainly remove the weights or features with low magnitude [1,2,3,4], which helps avoid overconfident predictions and tends to improve OOD performance. The limitations of those methods are: (1) The magnitude of weights/features is not directly related to OOD scores, which makes magnitude-based pruning less intuitive. (2) It is not guaranteed that the separability between ID and OOD samples can be improved by removing smaller weights/features, since the magnitude-based metric does not exploit prior information about the ID/OOD distributions. (3) Most existing magnitude-based weight pruning is used in training-based methods [2,4], which is relatively costly.
In contrast, the proposed sensitivity metric is computed on the ID distribution, which makes it able to identify both the redundant weights (the least sensitive weights) and the risky weights (those with exceptionally large sensitivities). The advantages of sensitivity-based pruning are: (1) The sensitivity metric makes use of prior information about the ID distribution. (2) As the sensitivity is measured through the energy score, it better reflects the impact of weights on the OOD scores, making sensitivity-based pruning more intuitive. (3) The separability between ID and OOD distributions can be improved by pruning the least sensitive weights (see **Remark 2**). (4) Sensitivity-based pruning is able to identify and remove the risky weights, which leads to a flatter landscape and better generalization (see **Remark 3**). (5) Sensitivity-based pruning is low-cost compared to training-based methods [2,4].
Therefore, sensitivity-based pruning is a better choice for improving OOD detection performance. The experimental results in Table 1 and Table 4 demonstrate the superiority of our sensitivity-based pruning over other pruning methods.
[1] Dice: Leveraging sparsification for out-of-distribution detection. ECCV 2022.
[2] Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity. ICLR 2022.
[3] Extremely simple activation shaping for out-of-distribution detection. ICLR 2022.
[4] Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection. Arxiv 2023.
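As a concrete illustration of this two-sided selection, the sketch below (plain numpy, illustrative only; the helper name and percentile values stand in for $\rho_{min}^w$ and $\rho_{max}^w$ and are not the authors' exact implementation) keeps only the weights whose sensitivity falls between a low and a high percentile:

```python
import numpy as np

def two_sided_prune(W, M, rho_min=20.0, rho_max=1.0):
    """Zero out the rho_min% least sensitive and the rho_max% most
    sensitive weights, keeping the middle of the sensitivity range."""
    lo = np.percentile(M, rho_min)            # redundant-weight threshold
    hi = np.percentile(M, 100.0 - rho_max)    # risky-weight threshold
    keep = (M >= lo) & (M <= hi)
    return W * keep, keep

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 32))                 # toy FC weights
M = np.abs(rng.normal(size=(10, 32)))         # stand-in sensitivity map
W_pruned, keep = two_sided_prune(W, M)
print(keep.mean())                            # fraction of weights kept
```

Pruning the low tail removes redundant weights (Remark 2), while pruning the high tail removes the risky weights that sharpen the landscape (Remark 3).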
***
> **Q2:** Could the authors explain what we can observe from the Sensitivity distribution in Figure 2?
**A:** In our global response (see **Ablations on global pruning**), we demonstrate the performance of pruning the whole model with a global threshold. The sensitivity distribution in Fig. 2 explains why pruning the whole model with a global threshold does not work better than pruning only the last FC layer. Fig. 2 shows that the parameter sensitivities of the last FC layer and the shallow Conv layers are not on the same scale (the sensitivity of the FC layer is less than 1/10 of that of the shallow layers). This indicates that: (1) Pruning the weights in the Conv layers might lead to model collapse due to their extremely high sensitivities. (2) The most sensitive weights in the FC layer cannot be pruned when pruning the whole model with a global threshold. (3) There are very few redundant weights (low-sensitivity weights) in the Conv layers. Therefore, pruning the weights in the Conv layers does not benefit OOD detection performance, which explains why pruning the whole model does not work better than pruning only the last FC layer.
***
We are happy to provide further explanations if needed. Thanks! | Rebuttal 1:
Rebuttal: ## Comment
We sincerely thank all the reviewers for their careful reviews and constructive suggestions, which helped us improve our submission. Here, we provide the response to several common concerns and suggestions raised by reviewers.
## Insight Justification
We provide three remarks to explain why OPNP improves OOD detection performance.
**Remark 1: Parameter and neuron pruning avoid overconfident predictions**
**Remark 2: Pruning the least sensitive parameters and neurons improve separability between ID and OOD samples**
**Remark 3: Pruning the most sensitive parameters and neurons improves generalization**
Due to space limitation, we put the details of the remarks in the response to **Review qR61 (R4)**
## Novelty Justification
We summarize the contributions and highlights of our work and discuss the relationship between our proposal and other methods.
**Contributions and highlights**
(1) We introduce a gradient-based approach to estimate the sensitivity of parameters and neurons in deep models, and propose a sensitivity-based pruning method for OOD detection. The sensitivity-based pruning is technically sound, and we explain why it works.
(2) We are the first to prune both the most sensitive and the least sensitive parameters and neurons for OOD detection, and explain the insight behind it.
(3) The whole method is intuitive, training free, easy to implement and is able to be combined with different post-hoc approaches to achieve promising performance.
(4) Extensive experiments and ablations have been performed to show the effectiveness and efficiency of our proposal.
**The relationship between our proposal and other methods**
We have discussed the relationship between our proposal and the Energy Score, ReAct, and DICE in Section 3.4. Here, we also discuss the relationship between our proposal and Deep Ensemble [1], ASH [2], and AoP [3], recommended by Reviewer Dntp. Both Deep Ensemble and AoP rely on training multiple sparse networks from scratch with a dynamic sparsity constraint, whereas our OPNP is training-free and prunes both the most and the least sensitive weights and neurons. ASH clips or binarizes features based on their magnitude to improve OOD performance, whereas our OPNP prunes both weights and neurons based on sensitivity. Compared to those methods that also explore sparsity to improve OOD detection performance, our proposal is more intuitive, comprehensive, and user-friendly.
## Common Response
**Parameter sensitivity estimation with lower cost**
To reduce the cost of parameter sensitivity estimation, it is advisable to compute the parameter sensitivity on a subset of the training set rather than the entire set. To demonstrate the effectiveness of using a subset, we utilize 1%, 5%, 20%, and 100% of the training samples for sensitivity estimation, and perform parameter pruning based on the resulting sensitivities. The experimental results are shown in Table 1 and demonstrate that using only 1% of the training set (ImageNet-1K) already achieves promising performance.
FPR95 and AUROC on the SUN and Places datasets are reported; the ResNet50 model is used. "w/o pruning" denotes the baseline without pruning.
| Sampling Ratio | w/o pruning | 1% | 5% | 20% | 100% |
|----------------|-------------------|-------------|-------------|-------------|-------------|
| SUN | 59.3/85.9 | 32.57/92.83 | 32.06/92.78 | 30.92/93.05 | 30.40/93.17 |
| Places | 64.9/82.9 | 42.30/90.00 | 41.96/90.08 | 41.21/90.33 | 40.76/90.65 |
**Implementation details of OPNP\+ReAct**
ReAct is a clipping operation that truncates activations above a threshold $c$ to limit the effect of noise. It can be defined as $\overline h(x) = \min(h(x), c)$, where $c$ is the clipping threshold. Therefore, ReAct can easily be integrated into our method by clipping the neuron outputs after our neuron pruning stage. The clipping ratio is set to 10% for ResNet50 and 5% for ViT-B/16.
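A minimal sketch of this clipping step (numpy, illustrative only; it assumes $c$ is chosen as a quantile of the activations, so that the clipping ratio is the fraction of activations truncated):

```python
import numpy as np

def react_clip(h, clip_ratio=0.10):
    """ReAct-style rectification: hbar = min(h, c), with c set to the
    (1 - clip_ratio) quantile so the top clip_ratio of activations
    are truncated."""
    c = np.quantile(h, 1.0 - clip_ratio)
    return np.minimum(h, c)

h = np.array([0.1, 0.5, 2.0, 9.0, 0.3])   # toy penultimate-layer activations
print(react_clip(h, clip_ratio=0.2))      # the outlier 9.0 is truncated
```

In the combined OPNP+ReAct pipeline described above, this clipping would be applied to the neuron outputs that survive the neuron pruning stage.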
**Ablations on global pruning**
To help understand what happens when weights are pruned globally, we also estimate the parameter sensitivity of the whole model and prune the weights with two different global pruning strategies: (1) Global threshold pruning (GTP): pruning all the weights with a single global threshold. (2) Layer-wise pruning (LWP): pruning the weights in each layer individually with the same pruning ratio. The results are as follows. We find that both global pruning methods perform worse than pruning only the FC layer. We believe the reason is that the sensitivity magnitude differs greatly across layers; the sensitivity of the FC layer is less than 1/10 of that of the preceding Conv layers (see Fig. 2 in the PDF). Therefore, it is unreasonable to use a global threshold for all layers. It might work with different thresholds for different layers, but determining the optimal thresholds is too tricky.
| Pruning strategy | SUN | Places |
|:----------------:|:---------:|:---------:|
| OPP | 30.4/93.2 | 40.8/90.7 |
| GTP | 39.7/91.9 | 49.1/88.3 |
| LWP | 40.5/92.4 | 44.9/90.3 |
**Evaluation Setting**
We utilize the same evaluation setting as ReAct and DICE. The "(subset of) iNaturalist, SUN, Places and Texture" sets were selected by previous methods and are widely used for OOD detection evaluation. The hyperparameters are determined on the ID validation set and the OOD validation set.
## Reference
[1] Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity. ICLR 2022
[2] Extremely simple activation shaping for out-of-distribution detection. ICLR 2022
[3] Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection. Arxiv 2023.
Pdf: /pdf/ddd837f6a08751ce008ef025180f2048848a238a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning | Accept (poster) | Summary: This paper introduces GIMLET, a unified graph-text model for instruction-based molecule task pretraining. It leverages natural language instructions and decoupled graph encoding techniques to improve the interaction of graphs and texts. Through experiments and evaluations, the paper demonstrates the effectiveness and robustness of GIMLET in various downstream tasks, showcasing its potential for molecule zero-shot learning.
Strengths: - This paper proposed GIMLET, a novel unified graph-text model that leverages natural language instructions for pretraining. It takes each node as a token, alleviating the problem of losing structural information using separate graph encoding.
- The authors collected a set of datasets for pretraining, which can be a useful resource for the community.
- The empirical results look promising compared to previous methods.
- The paper is overall well-written and easy to follow.
Weaknesses: - The authors didn't present the training details, e.g. total iterations, hyper-parameters, etc.
- GIMLET was trained from scratch with a large dataset. What are the computational requirements (GPU/memory/time) for training such a model? It seems the training can be quite expensive, especially considering the graph encoding part.
Minor:
- Page 4, line 126: need to explain what is $o_m$.
- Page 9, line 333: nature language -> natural language.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - For the pretraining dataset, I understand the authors isolated the tasks for training/testing, but are the molecules also isolated for train/test, i.e. will the same molecule appear in both the training and testing set?
- For graph, node embeddings are used as input tokens. If the number of nodes varies a lot for different graphs, how do the authors handle the computational efficiency as a max length (padding) will be set for the input?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: I'm not an expert in biology/chemistry, so I'm not sure if the procedures for constructing the dataset are correct.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to question 1 in weaknesses:
Thank you for your reminder regarding reproducibility. We will open-source both our pretraining dataset and code to ensure that reproducibility is achievable. We also introduce some important hyperparameters here:
| Hyperparameters | Value |
| ------------------ | -------- |
| sample number | 23.8M |
| batch size | 64 |
| epoch | 1 |
| total steps | 370K |
| learning_rate | 5.00E-05 |
| dropout_rate | 0.1 |
| dense_act_fn | relu |
| layer_norm_epsilon | 1.00E-06 |
We will include these hyperparameters in the future version of the paper.
### Response to question 2 in weaknesses:
Sorry for the confusion. We continue pretraining from the existing T5 parameters rather than training from scratch. This approach allows us to leverage the strong text understanding capabilities of the pretrained T5 model, resulting in better instruction following.
The pretraining process is relatively efficient in terms of computational resources. It requires approximately 1.5 GPU days on a single V100 32G. Through techniques like gradient accumulation, it can also be performed on GPUs with smaller memory, albeit with slightly extended training times.
The computation cost for graph encoding is not high, as GIMLET utilizes the same computational framework as the standard T5 model. Specifically, the graph-text mixture positional embeddings incur costs equivalent to the normal T5 relative positional embeddings.
### Response to question 3 in weaknesses:
Thanks for pointing out the omission! The $[o_1,\dots,o_m]$ denotes tokens in the instruction text, where each $o$ represents a token, and $m$ represents the length of the instruction text. We will correct these omissions in the next version of the paper.
### Response to question 1 in Questions:
This is indeed an interesting question! To verify the transferability of our model to both novel tasks and new data, we split the Chembl dataset with both isolated tasks and isolated molecules, i.e., the Chembl Zero-Shot split, as the pretraining and downstream testing datasets. The experiment in Table 2 demonstrates the remarkable capability of GIMLET to transfer well to Chembl Zero-Shot, thereby validating our approach of instruction-based molecule zero-shot learning for both novel tasks and new molecules.
### Response to question 2 in Questions:
Thanks for the interesting question! Currently, there's a relatively small variance in molecule data lengths compared to text lengths. For instance, while text length varies from 30 to 500, most molecule sizes do not exceed 60. Therefore, the size of the molecules doesn't pose a significant challenge to computational efficiency. For extreme cases, the computation cost may slightly increase due to the increased input length.
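The handling described here can be sketched as follows (plain numpy, illustrative only, not the GIMLET implementation): node-token embeddings of different molecules are padded to a common length, and an attention mask marks the real tokens so padded positions do not contribute.

```python
import numpy as np

def pad_node_tokens(graphs, pad_len=None):
    """Pad per-graph node-token embeddings to a common length and build
    the attention mask used when graph nodes are fed to a transformer
    alongside text tokens."""
    n_max = pad_len or max(g.shape[0] for g in graphs)
    d = graphs[0].shape[1]
    batch = np.zeros((len(graphs), n_max, d))
    mask = np.zeros((len(graphs), n_max), dtype=bool)
    for i, g in enumerate(graphs):
        batch[i, :g.shape[0]] = g          # real node tokens
        mask[i, :g.shape[0]] = True        # True = attend, False = padding
    return batch, mask

graphs = [np.ones((5, 8)), np.ones((60, 8))]   # two molecules: 5 and 60 atoms
batch, mask = pad_node_tokens(graphs)
print(batch.shape, mask.sum(axis=1))           # padded batch and true node counts
```

Since molecule sizes in the datasets rarely exceed ~60 nodes while instruction texts run to hundreds of tokens, the padding overhead contributed by graphs stays small in practice.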
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: The authors have addressed my questions. I'm maintaining my previous score and recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! We remain available to address any further questions you may have. | Summary: The paper proposes a new unified language model for graph and text data for instruction-based molecule zero-shot learning. The paper tries to address the problem of the supervised fine-tuning approach, which requires labeled data, by instruction tuning. The paper first treats both graph nodes and instruction tokens as input tokens. The paper then proposes a new relative position embedding to represent the edges in the graph and the relative positions in the text. The paper then retrains the model on Chembl and evaluates on several tasks from MoleculeNet. The instructions are based on websites and databases. In a zero-shot setting, the model is compared against several baselines, including KVPLM, Galactica, and MoMu. The model is also compared to graph-based models, including GCN, GAT, GIN, and Graphormer, in a supervised setting. The paper applies few-shot instruction-based tuning on several datasets and compares it against KVPLM, MoMu, and GIN. The paper then conducts ablation and exploration studies.
Strengths: 1. The paper proposes a new unified text-graph T5-based model to unify molecular graphs and task instructions. The idea of relative position embedding for molecular graphs is interesting.
2. The paper achieves strong performance against several zero-shot baselines with fewer parameters. The paper also conducts experiments over multiple different tasks. The model also performs well in the few-shot settings. The paper supports its model design by conducting an ablation study to analyze the effectiveness of unified inputs and decoupled encoding. The paper also shows the effectiveness of pretraining over two different pretraining datasets. The paper also examines the robustness of instructions via GPT-3.5 paraphrasing. Finally, the paper ablates the explanations included in the instructions for downstream tasks.
3. The paper includes a very detailed appendix to explain some parts of the paper. The paper also visualizes attention on molecular to help readers understand the model.
Weaknesses: 1. The idea of encoding graphs with a seq2seq transformer has been introduced previously. This idea has been applied to graph-to-text tasks since 2021 (Ribeiro et al., 2021), despite using an unchanged T5 structure. One contribution of this paper is the new relative position embedding. However, the paper fails to include it in the ablation study.
2. The input of the model needs to be clarified. It would be better to include a sample input-output pair in the Appendix. Does the model use each node's chemical element as the molecular graph node's textual form? The datasets used for testing are hard to understand for readers who are not familiar with them. It would be better to include a small explanation for each testing dataset in the Appendix.
3. The paper fails to include a limitation section and an ethical section for potential social impact.
Ribeiro, L., Schmitt, M., Schutze, H., & Gurevych, I. (2021). Investigating Pretrained Language Models for Graph-to-Text Generation. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI (pp. 211–227). Association for Computational Linguistics.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does the model use each node's chemical element as the molecular graph node's textual form?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper fails to include a limitation section. The paper needs to include a social impact section to discuss the potential misuse of the current system.
Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to question 1 in weaknesses:
**Graph-to-text tasks related work**
Thanks for the reminder about this related work. We will add this and other related works in the next version of the paper.
The difference between the mentioned work and ours lies in the task objectives. The mentioned work processes a serialized graph and translates it into the corresponding text, whereas our task utilizes both molecular graph data and task-specific instruction text as input, with the objective of generating the corresponding task outcomes as output.
**Ablation study of the position embedding**
We conducted an ablation study to assess the impact of our unified graph-text encoding method augmented with mixture position embedding. In this study (refer to Subsection 4.3 Ablation and Exploration Studies, Table 4), we introduced a simplified model labeled as "w.o. unifying." This model employs a GNN to derive graph features, which are then fed into the language model as tokens, without our mixture position embedding. The ablation experiment clearly demonstrates that this "w.o. unifying" model underperforms compared to GIMLET. This result underscores the significance of the unified graph-text encoding method that we have introduced.
We further extend the ablation analysis on the position embedding by introducing the GIMLET-SMILES baseline. In GIMLET-SMILES, a regular T5 model is employed to process instructional text and sequential graph information, represented as SMILES sequences of molecules. To evaluate the performance of both GIMLET and GIMLET-SMILES, we employed instruction-based zero-shot testing on the Chembl zero-shot dataset. The experimental result is as follows:
| Model | Chembl zero-shot ROC-AUC $\uparrow$ |
|---------------|------------------------------------|
| GIMLET | 0.7860 |
| GIMLET-SMILES | 0.7414 |
The results clearly indicate that the model utilizing the graph encoding method (GIMLET) surpasses the ablated GIMLET-SMILES. This ablation study further illustrates the advantages of our graph-encoding method.
### Response to question 2 in weaknesses (and question 1 in Questions):
Sorry for the confusion, and your understanding is right. Each graph node corresponds to an atom within the molecule, identified by its respective chemical element. The representation of our input data is provided in Figure 1. We will include more detailed illustrations of data in the appendix in the next version of the paper.
### Response to question 3 in weaknesses:
Thanks for your reminder. We intend to include a section on limitations and ethical considerations in the upcoming version of the paper.
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: Thanks for the rebuttal. The authors have answered most of my questions. I'm maintaining my previous score.
However, I want to clarify the position embedding ablation study part. So basically, I want to see a GIMLET with most of its original structures but exclude the position embedding part (eq3). It would be great if the authors could provide the results of it.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and the suggestion regarding the ablation study on position embeddings. Our previous ablated models, "w.o. unifying" and "GIMLET-SMILES", were designed to exclude the unified position embedding method while still providing the model with molecular structure information by other means. Following your recommendation, we explored another interesting baseline, "GIMLET w.o. P," which omits the position embeddings and thus disregards the structural information of the molecule. In this case, the model is only provided with the set of molecule nodes. The result is as follows:
| Model | Chembl zero-shot ROC-AUC $\uparrow$ |
|---------------|------------------------------------|
| GIMLET | 0.7860 |
| GIMLET w.o. P | 0.6227 |
It is not surprising that the performance drops significantly when the position embeddings for graph structure are removed. This occurs because the molecular structure information is disregarded, leaving the model with only the set of nodes. Deprived of sufficient information about the input molecule, the model struggles to perform the tasks effectively. | Summary: This paper proposes a unified language model for both graph and text data with two main tech contributions: (1) a unified graph-text transformer encoder with a distance-based joint position embedding to encode graphs, and (2) textual instructions to enhance transferability among tasks. Zero-shot tests on classification and regression tasks are conducted to show its effectiveness.
Strengths: The high-level motivations of a unified graph-text model for molecular understanding and enhancing transferability with instructions are interesting.
Weaknesses: 1. It doesn’t make sense to evaluate the model on classification and regression tasks. The model structure is based on T5, which is for generation/translation tasks. In fact, the results in Table 1 show a huge performance gap between GIMLET and GNNs. Why not use those GNNs and Graphormers with better performance and fewer parameters? I think additional experiments of fine-tuning GIMLET on the train datasets of Bio-activity, Toxicity, and Pharmacokinetic tasks in Table 1 are needed to make a fair comparison with supervised GNNs.
2. Claims need to be further clarified. For example, in line 150-154, I don’t understand “the dense vectors encoded by GNN have a limited capacity to carry structure information”. According to the results in your Table 1, those dense vectors encoded by GNN actually carry more information than GIMLET. And “training the additional module is difficult due to the increased layers”, actually, in MoMu and CLAMP (cited in line 146), the graph and text encoders are two-tower structure like CLIP, which I don’t think increased layers.
3. The writing needs to be improved.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Critical question: Do we really need a text-decoder model instead of conventional GNNs to perform classification and regression tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to question 1 in Weaknesses and question 1 in Questions:
**Why use the language model (T5)?**
Thank you for your question regarding the task setting. This study focuses on instruction-based zero-shot learning for molecule property prediction. Our approach aims to predict properties by natural language task instructions, **without the need for supervision from training data**. This methodology aligns with the instruction-based zero-shot learning paradigm [1], as introduced in Section 2: Related Work - Instruction-based Zero-Shot Learning.
In comparison to supervised training for molecule tasks, instruction-based zero-shot learning offers several key advantages. Firstly, it eliminates the need for task-specific labeled data, which can often be complex and expensive to annotate for molecules. Secondly, it allows the execution of new tasks without the resource and time-intensive process of retraining on new task samples. Lastly, it offers a more user-intuitive way of interacting with the model through natural language task descriptions, as opposed to relying solely on task samples. This is why instruction-based task execution has gained significant popularity not only in natural language processing [2] but also in the realm of visual-text multimodal tasks [3].
To enable the model to perform novel molecule tasks **without the need for supervised training, task instructions for these new tasks are crucial**. By comprehending these instructions, the model can directly carry out the new tasks.
The need to comprehend textual instructions underscores why a language model is required. In pursuit of this goal, we extend the capabilities of the existing T5 language model to encode both graph and text data.
**Is the task format suitable for T5?**
Regarding the classification and regression task types, we unify the outputs of different tasks into the text modality. This is explained in Subsection 3.1: Problem Formulation and Framework Overview, and depicted in Figure 1. The outputs of diverse tasks are presented as text, such as "yes," "no," or "3.11." This approach unifies various tasks into conditioned text generation, which aligns well with the nature of the T5-type model, an encoder-decoder model for conditional text generation.
**Why not utilize GNNs?**
The primary limitation is that graph-only models are unable to execute the instruction-based learning approach, involving both text and graph modalities. GIMLET, however, unifies the two modalities within a single model, enhancing its capacity for graph encoding and textual comprehension.
**The performance gap between GIMLET and GNNs?**
First, it is important to note that in Subsection 4.1 (Tables 1, 2, and 3), the results of GIMLET and all the molecule-text model baselines are evaluated using instruction-based zero-shot learning. In contrast, the results of GNNs are obtained through supervised learning. We present the supervised learning results as a reference to gauge the difficulty of the tasks.
To illustrate the model performance improvement when supervised training data is available, we conduct few-shot experiments in Subsection 4.2: Instructions-Based Few-Shot Fine-tuning. Our experiment demonstrates that with a small amount of training data, GIMLET's performance can be further enhanced.
Additionally, we present the experimental results under the fully finetuned setting here:
Supervised result (ROC-AUC $\uparrow$ ) over Bio-activity, Toxicity, and Pharmacokinetic tasks.
| | bace | hiv | muv | Avg.bio | tox21 | toxcast | Avg.tox | bbbp | cyp450 | Avg.pha |
| ---------- | ------ | ------ | ------ | ------- | ------ | ------- | ------- | ------ | ------ | ------- |
| GIN | 0.701 | 0.753 | 0.718 | 0.724 | 0.740 | 0.634 | 0.687 | 0.658 | 0.821 | 0.739 |
| Graphormer | 0.7760 | 0.7452 | 0.7061 | 0.7424 | 0.7589 | 0.6470 | 0.7029 | 0.7015 | 0.8436 | 0.7725 |
| GIMLET | 0.8280 | 0.7834 | 0.7267 | 0.7794 | 0.7676 | 0.6591 | 0.7134 | 0.7315 | 0.8809 |0.8062|
Supervised performance (RMSE $\downarrow$) on Physicochemical datasets.
| | ESOL | lipo | FreeSolv | Avg.phy |
| ---------- | ----- | ----- | -------- | ------- |
| GIN | 1.243 | 0.781 | 2.871 | 1.632 |
| Graphormer | 0.901 | 0.740 | 2.210 | 1.284 |
| GIMLET | 0.850 | 0.945 | 1.881 | 1.226 |
It is evident that with full supervised training data, GIMLET's performance surpasses or is comparable to supervised baselines. This reveals the potential of GIMLET in supervised fine-tuning.
### Response to question 2 in weaknesses:
Sorry for the confusion. In lines 150-154, we discuss a simple and common method for combining text with other modalities, i.e., the features obtained from the GNN are input into the language model as tokens. This simple method is also tested in our ablation study (Table 4) and is denoted as "w.o. unifying." The ablation experiment reveals that it performs worse than GIMLET.
The purpose of lines 150-154 is to analyze the drawbacks of this simple method. First, the dense vectors encoded by the GNN have limited capacity to carry structural information as input for language models. Furthermore, the additional GNN is appended before the initial layer of the language model, resulting in an increased layer count and potential training difficulties.
Our intention is not to refer to GNN capacity in the general case, nor to contrastive baselines like MoMu. We will clarify the scope of these claims to avoid confusion in the future version.
### Response to question 3 in weaknesses:
Thanks for pointing this out! We will improve the writing in the future version.
[1] Multitask Prompted Training Enables Zero-Shot Task Generalization. In ICLR 2022.
[2] Finetuned Language Models are Zero-Shot Learners. In ICLR 2022.
[3] Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. preprint 2023.
---
Rebuttal Comment 1.1:
Comment: The authors have answered most of my questions. Thus, I would like to raise my score by one point.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your valuable feedback. We were wondering if there might be any additional concerns or unresolved issues that are preventing the paper from achieving a positive rating. We remain available to address any further questions you may have.
Thank you for your time and consideration. | Summary: This paper presents GIMLET, a unified graph-text model for instruction-based molecule zero-shot learning. The proposed model uses natural language instructions to tackle molecule-related tasks in a zero-shot setting. GIMLET overcomes existing limitations and significantly outperforms molecule-text models. The paper includes experimental results that demonstrate the superiority of GIMLET over molecule-text models. The paper also explores the use of pretraining tasks and the robustness of GIMLET to instructions. Overall, the paper provides a comprehensive approach to molecule zero-shot learning using natural language instructions.
Strengths: This paper introduces GIMLET, a pioneering, integrated graph-text model specifically designed for instruction-based, zero-shot learning in molecular sciences. Leaping beyond the confines of current methodologies, GIMLET leverages natural language instructions to proficiently navigate molecule-related tasks in a zero-shot context. It surpasses the performance of conventional molecule-text models, reflecting substantial advancements in this field. Through rigorous experimental analysis, the superiority of GIMLET over conventional models is vividly illustrated.
Weaknesses: 1.One thing I'm not quite sure about is why the authors chose only a few molecule-text models as baselines, with MoMu and Galactica not even being specifically designed for predicting molecular properties. I understand that the authors might have done this because the model proposed in the paper is based on both molecules and text, but for a fair comparison, shouldn't other types of models (like those involving only molecules and not text) that predict molecular properties also be used as baselines? Otherwise, how can we highlight the advantage of incorporating text into the model for this task?
2.The paper uses a graph-text position encoding method to encode molecular graphs. Given that molecules already have their serialized representations, such as SMILES and SELFIES, isn't this approach superfluous and potentially introducing unnecessary computations and noise?
3.Based on the description in this paper, I can't clearly understand the specific definition of molecule zero-shot learning, which doesn't seem to differ much from other pre-training-fine-tuning training methods? Perhaps I misunderstood, could the authors provide a more precise explanation?
4.In Figure 1, the authors don't seem to have clearly described the specific meaning of different colored lines and boxes, which makes it difficult for readers to interpret this image.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your questions. We would like to respond to question 3 first, which pertains to the fundamental aspect of the task our paper is working on.
### Response to question 3:
Sorry for the confusion. In this work, we propose to investigate the feasibility of employing instructions to accomplish molecule tasks in a zero-shot setting. This approach aligns with the paradigm of instruction-based zero-shot learning [1], which we introduced in Section 2: Related Work - Instruction-based Zero-Shot Learning.
**What is instruction?** The instructions, also referred to as prompts, refer to natural language descriptions of the tasks to be executed. For example, in the case of the text abstract generation task, the instruction could be "Write a summary for the following text." In our context, these are natural language instructions for molecule property prediction tasks. For instance, in the "toxicity to ARE pathway" task (from Tox21), the instruction is as follows: "Oxidative stress has been implicated in the pathogenesis of a variety of diseases ranging from cancer to neurodegeneration. The antioxidant response element (ARE) signaling pathway is important in the amelioration of oxidative stress. Is this molecule agonists of antioxidant response element (ARE) signaling pathway?"
**What is instruction-based molecule zero-shot learning?** Instruction-based molecule zero-shot learning aims to enable the model to perform unseen molecule tasks **without supervised training**. This is enabled by the task instructions provided for new tasks. By understanding the instructions, the model can perform the unseen tasks directly. For example, the "toxicity to ARE pathway" task mentioned above has not been seen by GIMLET before, but GIMLET is able to perform it without supervision by understanding the task instruction. This approach eliminates the need for labeled data and leverages the textual knowledge available for downstream tasks.
**How can we enable the model to perform instruction-based molecule zero-shot learning?** It has been shown that language models exhibit strong instruction-based zero-shot performance after pretraining on a large number of tasks with instructions [1]. For the molecule domain, we likewise perform instruction-based task pretraining for our graph-text model GIMLET. As illustrated in Figure 2 (left), we partition tasks into pretraining tasks and downstream testing tasks, without overlap. After pretraining, we zero-shot test the model on the downstream tasks with task instructions, as presented in Subsection 4.1: Instruction-Based Zero-Shot Learning, which is our main experiment.
**Difference from the pretrain-finetune paradigm?** The main difference lies in the testing stage: instruction-based zero-shot learning does not require supervised training data, but executes new tasks using only instructions.
### Response to question 1
We employ molecule-text models as baselines due to our focus on the instruction-based zero-shot learning setting. In this context, the input comprises both molecule data and task instruction text, necessitating the utilization of molecule-text models rather than graph-only models.
Besides the molecule-text models, we have also presented supervised results of graph-only models in Tables 1 and 3. These models include GCN, GAT, GIN, and Graphormer, serving as references for supervised performance.
### Response to question 2
This is indeed an insightful question. While serialized representations of molecules, such as SMILES, can convey molecular information, they might fall short in accurately reflecting the actual structural features of the molecule's graph.
To better illustrate the impact of employing graph representations on performance and speed, we conducted an additional experiment. We pretrained another T5 model using the same pretraining settings as GIMLET, but utilizing the SMILES representation of molecules rather than the graph. This version is referred to as GIMLET-SMILES. We evaluate the performance of both GIMLET and GIMLET-SMILES using instruction-based zero-shot testing on the Chembl zero-shot dataset. Additionally, we report the average FLOPs per data point as a metric to compare computational efficiency. The experimental results are as follows:
| Model | Chembl zero-shot ROC-AUC $\uparrow$ | Avg. FLOPs per data $\downarrow$ |
|---------------|------------------------------------|----------------------------------|
| GIMLET | 0.7860 | 5.03E+09 |
| GIMLET-SMILES | 0.7414 | 5.12E+09 |
It is evident that when utilizing the molecular graph, the model outperforms the use of molecule SMILES in instruction-based zero-shot learning tasks. Remarkably, the computational costs of the two models are nearly identical. This is due to GIMLET employing the same computational framework as the regular T5 model, with the graph-text mixture positional embeddings incurring costs equivalent to the normal T5 relative positional embeddings. There is no need for additional modules or computations within the model.
### Response to question 4:
Sorry for the confusion. We suppose the colors and lines you mentioned appear in the illustration of distance-aware encoding, where lines of different colors denote different distance values. In the upcoming version, we intend to provide a more comprehensive explanation of the figure for clarity.
[1] Sanh, V., Webson, A., Raffel, C., Bach, S. H., Sutawika, L., Alyafeai, Z., ... & Rush, A. M. (2022, April). Multitask Prompted Training Enables Zero-Shot Task Generalization. In ICLR 2022-Tenth International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Comment: We sincerely hope that our response adequately addresses your questions. We remain available to address any further questions you may have.
Thank you for your time and consideration.
---
Rebuttal 2:
Comment: Does our response adequately address your concern? If there are any remaining issues, we will continue to work on resolving them. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper addresses the challenge of molecule property prediction, especially the label insufficiency caused by costly lab experiments.
It uses natural language instructions to handle molecule-related tasks in a zero-shot setting. The authors propose GIMLET, a model that unifies language processing for both graph and text data. GIMLET uses generalized position embedding to encode graph structures and instruction text without additional modules. It also decouples the encoding of the graph from task instructions, thereby improving graph feature generalization across different tasks. The authors create a dataset of over two thousand molecule tasks with corresponding instructions and pretrain GIMLET on these tasks, which allows the model to transfer effectively across tasks. GIMLET demonstrates good performance in instruction-based zero-shot learning, even closely matching supervised Graph Neural Network (GNN) models on tasks such as toxcast and muv.
Strengths: - The paper is well-written and clearly presented;
- The authors create a comprehensive dataset for molecule tasks with instructions, which can be a valuable resource for future research.
- The ideas of leveraging instruction plus graph structures are novel ways for molecule property pretraining and the proposed unified GIMLET for graph and text data could contribute to the field of cheminformatics.
- The zero-shot performance of proposed methods is strong across baselines and tasks. Detailed ablations including pretraining, model design, and instructions are presented to illustrate the design choices.
Weaknesses: - While GIMLET demonstrates strong performance in zero-shot learning, how it would perform in more supervised learning scenarios (beyond 256 shots, or even with the full training set) is questionable. Instruction-tuned LLMs demonstrate strong generalization performance on single-task finetuning in NLP; whether this also holds for GIMLET remains a question;
- How model size (only one size is presented), the scale of pretraining, and the diversity/quality of instructions/pretraining tasks affect GIMLET's performance is unclear; it could also be valuable to see the performance of instruction-aided evaluation / qualitative examples on held-out validation tasks (whether GIMLET could actually follow the instructions).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How do model size and the scale of pretraining (not only the domain) contribute to the final performance of GIMLET?
- Could GIMLET be extended to using frozen / parameter-efficient fine-tuned language models? / How to validate if the instruction-based pretraining / the proposed GIMLET model is scalable?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to question 1 in Questions and question 2 in Weaknesses:
Thank you for your insightful question. We conduct experiments to illustrate the impact of varying pretraining scale and model size on GIMLET. To manipulate the pretraining scale, we explore different task numbers (tasks selected randomly) in the pretraining phase. Our evaluation includes instruction-based zero-shot performance on the large scale Chembl zero-shot testing set, as well as testing GIMLET across various model sizes on Bio-activity, Toxicity, and Pharmacokinetic tasks. The summarized results are presented below:
Zero-shot performance (ROC-AUC) over Chembl zero-shot
| Pretraining task ratio | 1/4 | 1/2 | Full (1.1K) |
| ---------------- | ------ | ------ | ------ |
| GIMLET (T5-small) | 0.6650 | 0.7343 | 0.7860 |
| GIMLET (T5-base) | 0.6384 | 0.7318 | 0.8041 |
| GIMLET (T5-large) | 0.6921 | 0.7562 | 0.8178 |
Zero-shot performance (ROC-AUC) over Bio-activity, Toxicity, and Pharmacokinetic tasks
| | bace | hiv | muv | Avg.bio | tox21 | toxcast | Avg.tox | bbbp | cyp450 | Avg.pha |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| GIMLET (T5-small) | 0.6957 | 0.6624 | 0.6439 | 0.6673 | 0.6119 | 0.5904 | 0.6011 | 0.5939 | 0.7125 | 0.6532 |
| GIMLET (T5-base) | 0.7240 | 0.6636 | 0.6322 | 0.6733 | 0.6136 | 0.5811 | 0.5974 | 0.7087 | 0.7174 | 0.7131 |
| GIMLET (T5-large) | 0.6855 | 0.6986 | 0.6421 | 0.6754 | 0.5988 | 0.5773 | 0.5880 | 0.6758 | 0.7365 | 0.7062 |
From our experiment, we can draw the following conclusions:
(a) The principal bottleneck in the current GIMLET framework appears to be the number of tasks, or instructions, used during pretraining. As the number of pretraining tasks increases, the performance of GIMLET consistently improves. Despite already incorporating 1.1k tasks for pretraining, there's still room for enhancement, especially considering the variability and complexity of molecule property prediction tasks. Thus, increasing the number of tasks is a primary direction for enhancing GIMLET's performance.
(b) Enlarging the model size also leads to performance gains across most tasks. However, the improvements are relatively minor compared to increasing the scale of the dataset. This phenomenon can be attributed to the constrained size of pretraining instructions, which prevents full utilization of the capacity of larger models.
In summary, both scaling up pretraining and increasing model size have the potential to enhance GIMLET's performance. However, in the current context, the primary bottleneck lies in the scale of pretraining tasks.
### Response to question 2 in Questions:
This is indeed a key question. Our model architecture is adaptable for direct application to language models of varying scales, but the training method needs to be adjusted accordingly.
First, the model architecture we have introduced, which includes a mixture of distance-based position embeddings and causal attention between graph and text, represents a general approach that empowers language models to effectively understand graph data. This approach is applicable to language models of various scales.
Second, a significant aspect related to model scalability is the training method employed. While we trained the T5 model through continuous pretraining, for larger language models, opting for parameter-efficient fine-tuning techniques such as adapters would likely be a more optimal solution.
### Response to question 1 in Weaknesses:
Thanks for the question. This study mainly focuses on zero-shot learning for molecule property prediction based on instructions, enabling generalization to unseen tasks without the requirement of labeled data, which typically necessitates expensive wet experiments. While we have shown that the performance of GIMLET can be enhanced through finetuning on few-shot examples in Subsection 4.2: Instructions-Based Few-Shot Finetuning, we also include the experimental results under the fully finetuned setting here:
Supervised result (ROC-AUC $\uparrow$ ) over Bio-activity, Toxicity, and Pharmacokinetic tasks.
| | bace | hiv | muv | Avg.bio | tox21 | toxcast | Avg.tox | bbbp | cyp450 | Avg.pha |
| ---------- | ------ | ------ | ------ | ------- | ------ | ------- | ------- | ------ | ------ | ------- |
| GIN | 0.701 | 0.753 | 0.718 | 0.724 | 0.740 | 0.634 | 0.687 | 0.658 | 0.821 | 0.739 |
| Graphormer | 0.7760 | 0.7452 | 0.7061 | 0.7424 | 0.7589 | 0.6470 | 0.7029 | 0.7015 | 0.8436 | 0.7725 |
| GIMLET | 0.8280 | 0.7834 | 0.7267 | 0.7794 | 0.7676 | 0.6591 | 0.7134 | 0.7315 | 0.8809 |0.8062|
Supervised performance (RMSE $\downarrow$) on Physicochemical datasets.
| | ESOL | lipo | FreeSolv | Avg.phy |
| ---------- | ----- | ----- | -------- | ------- |
| GIN | 1.243 | 0.781 | 2.871 | 1.632 |
| Graphormer | 0.901 | 0.740 | 2.210 | 1.284 |
| GIMLET | 0.850 | 0.945 | 1.881 | 1.226 |
It is evident that with a complete set of supervised training data, GIMLET's performance surpasses or is comparable to other supervised baselines. This reveals the potential of GIMLET in supervised fine-tuning.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttals, which have answered many of my questions. And I raise my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback on our paper. We kindly inquire whether there may exist any additional concerns or unresolved questions that might be impeding the paper's attainment of a higher rating. We remain available to address any further questions you may have. | null | null | null | null | null | null |
Meta-Adapter: An Online Few-shot Learner for Vision-Language Model | Accept (poster) | Summary: For the Vision-Language Model task, which usually requires a small number of samples for fine-tuning, this work proposes a Meta-Adapter method. The Meta-Adapter method is based on the gated multi-head attention mechanism, and can be generalized to unseen categories without additional fine-tuning after a small amount of training by Few-Shot. The effectiveness of this work is experimentally demonstrated.
Strengths: 1. From the experimental results, this work performs better in cross-dataset generalization and has good performance on downstream tasks.
2. This work is well-written, the related works are well summarized and the contributions are clearly demonstrated. The empirical performance is promising.
Weaknesses: 1. The explanation of the formulas in this work could be more clear to help the reader understand the content.
2. The connection between the meta-learning method and Meta-Adapter is not well explained.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is it called Meta-Adapter? In my understanding, the method of this work is to generalize to new samples by using a small number of training samples, and this method is not called meta-learning.
2. The motivation for this work is that "few-shot learning methods based on CLIP typically require offline fine-tuning of the parameters on few-shot samples ", and the same motivation for Tip-Adapter. It seems to me that Meta-Adapter is an improved approach based on Tip-Adapter, is there a new improvement in the motivation for the approach to this work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The explanation of the formulas could be more clear.**
**A1:**
Many thanks. Accordingly, we will provide a more comprehensive explanation of the formulas in Section 3.2, including additional descriptions and analyses of the symbols and their functions.
**Q2: Why is it called Meta-Adapter? The connection between the meta-learning method and Meta-Adapter.**
**A2:**
The Meta-Adapter aims to build a new CLIP adapter with general few-shot learning abilities instead of specializing in certain tasks and domains. To achieve this, as stated in Lines 46-50, we adopt the meta-testing mechanism. Specifically, we split the data into seen and unseen sets according to category, domain, or task. During training, we randomly sample few-shot image/text pairs from the seen set to optimize the parameters of the Meta-Adapter, endowing it with a general few-shot learning ability. During testing, we first sample some few-shot image/text pairs per category from the unseen set, then feed them to the frozen Meta-Adapter to generate embeddings for each category, and finally predict classification scores by calculating the similarity between these embeddings and the features of other images in the unseen set. We will clarify this in the revision.
**Q3: The improvement in the motivation against Tip-Adapter.**
**A3:**
The motivation of the Meta-Adapter is to learn how to learn from few-shot samples. It acquires a general ability to learn in few-shot settings rather than domain-specific capabilities. Previous methods, including CLIP-Adapter and Tip-Adapter, require fine-tuning or hyper-parameter search tailored to the target domain in order to achieve good performance. Our method, on the other hand, can be applied to other domains or tasks seamlessly and demonstrates significant efficacy and performance advantages.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I am very grateful to the author for his efforts in the rebuttal process. The author's rebuttal partly resolved my doubts, and I decided to maintain the previous rating unchanged. | Summary: This paper proposes a meta-adapter structure for CLIP like vision-language backbone. Specifically, a cross-attention with a gate mechanism are used to construct the meta-adapter. It aims to improve the few-show learning ability for current CLIP backbone. Compared with other baselines such as CLIP-adapter and Tip-adapter, the proposed meta-adapter obtains improvements on several experimental settings.
Strengths: 1. This paper studies a popular question about CLIP few-shot learning.
2. The motivation and idea are well presented.
3. Meta-adapter is straightforward and easy to follow.
Weaknesses: 1. The meta-adapter uses a cross-attention and a gate module, which are commonly used in this field, resulting in limited technical contribution. Simply adding these modules to improve the final performance is not very surprising.
2. The proposed module is general for vision-language models, but the evaluation covers only classification. Supplementing a retrieval evaluation would further support this method.
3. Some figures may mislead readers. For example, in Figure 1, please unify the number scales on the performance axes; the current version makes the performance gain confusing.
4. As shown in Tab. 1 and the limitation section discussed by the authors, the general performance gain is not very significant.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please check the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Limited technical contribution of the meta-adapter.**
**A1:**
The main contribution of this paper is to introduce meta-learning into CLIP adapters for the first time, achieving online few-shot learning for vision-language models. We mainly pursue a new framework for CLIP adapters rather than a new network architecture. Although the structure of the Meta-Adapter is simple, as shown in Figure 1, it not only has higher efficiency but also stronger generalization performance than the Tip-Adapter. Besides, as shown in Table 7, compared to other sophisticated offline methods, our method demonstrates clear advantages in generalization capability and overall performance.
**Table 7. Comparison of Zero-Shot CLIP, CLIP-Adapter, CoOp, CoCoOp, and Meta-Adapter on ImageNet, UCF101, Caltech101, DTD, and FGVCAircraft datasets in the in-domain generalization setting. H: Harmonic mean.**
| Dataset | ImageNet | ImageNet | ImageNet | UCF101 | UCF101 | UCF101 | Caltech101 | Caltech101 | Caltech101 | DTD | DTD | DTD | FGVCAircraft | FGVCAircraft | FGVCAircraft |
|----------------|----------|----------|----------|--------|--------|--------|------------|------------|------------|------|-------|------|--------------|--------------|--------------|
| Model | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** |
| Zero-shot CLIP | 71.9 | 32.8 | 45.0 | 79.4 | 21.1 | 33.4 | 95.4 | 60.6 | 74.1 | 59.3 | 8.2 | 14.3 | 23.9 | 0.6 | 1.2 |
| CLIP-Adapter | 76.3 | 15.1 | 25.3 | 89.4 | 5.4 | 10.2 | 97.3 | 39.3 | 54.0 | 70.2 | 2.0 | 3.9 | 32.1 | 0.3 | 0.6 |
| CoOp | 75.3 | 2.7 | 5.2 | 89.3 | 1.0 | 2.0 | 97.2 | 31.3 | 47.4 | 71.6 | 1.5 | 2.9 | 32.7 | 0.3 | 0.6 |
| CoCoOp | 75.5 | 33.9 | 46.8 | 86.5 | 9.1 | 16.5 | 96.8 | 60.9 | 74.8 | 69.1 | 3.0 | 5.8 | 30.0 | 0.8 | 1.6 |
| Meta-Adapter | 76.3 | 40.8 | **53.2** | 82.4 | 47.7 | **60.4** | 94.9 | 76.1 | **84.4** | 64.1 | 49.1 | **55.6** | 30.8 | 17.1 | **21.9** |
**Q2: The evaluation is only for classification.**
**A2:**
As shown in Table 5 and Section 4.3 in the manuscript, we evaluate our Meta-Adapter for object detection and segmentation tasks. Without further fine-tuning, our method demonstrates superior generalization capabilities, achieving much higher performance than the Tip-Adapter and zero-shot baseline. Additionally, in Table 8 we provide the quantitative results (recall@10) of retrieval evaluation on several classification datasets. The results show that our Meta-Adapter obtains consistent and considerable improvements over Tip-Adapter across datasets.
**Table 8. Average Recall of Zero-Shot CLIP and Meta-Adapter on several datasets for retrieval evaluation.**
|Recall@10 | ImageNet | FGVC | Oxford Pets | SUN397 | UCF101 | DTD |
| ----------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| Zero-shot CLIP |57.5 | 15.1 | 77.0 | 51.9 | 52.4 | 29.8 |
| Tip-Adapter |62.5 | 18.8 |77.3 | 57.8 | 59.0 | 39.3 |
| Meta-Adapter |**64.2**| **22.4** | **77.6** | **61.5** | **64.5** |**46.0** |
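For reference, Recall@k as reported in Table 8 can be computed from a query-candidate similarity matrix roughly as follows (an illustrative sketch with a toy matrix, not the actual evaluation code used in the rebuttal):

```python
import numpy as np

def recall_at_k(similarity, labels, k=10):
    """Fraction of queries whose correct candidate appears among the
    top-k most similar items. `similarity` is (n_queries, n_candidates);
    `labels[i]` is the index of the correct candidate for query i."""
    topk = np.argsort(-similarity, axis=1)[:, :k]
    hits = (topk == np.asarray(labels)[:, None]).any(axis=1)
    return hits.mean()

# Toy 3-query, 3-candidate example.
sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.8, 0.5],
                [0.4, 0.6, 0.1]])
print(recall_at_k(sim, labels=[0, 1, 0], k=1))  # 2 of 3 queries hit -> 2/3
```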
**Q3: Some figures are relatively misleading to the readers.**
**A3:**
Thanks for the suggestion. We will unify the number scales of Figure 1 in the revision.
**Q4: The general performance gain is not very significant in Table 1.**
**A4:**
Please note that our Meta-Adapter improves over Tip-Adapter by 4.96% on average, while having lower inference cost. Additionally, $\Delta$ represents the gap between optimization on individual datasets and cross-dataset generalization performance. The results reflect that our algorithm has strong generalization capabilities, enabling the cross-dataset performance to be comparable to individually optimized performance. In contrast, Tip-Adapter not only has inferior absolute performance but also notably weaker generalization abilities.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: Thanks for the authors' feedback. Even though I agree that this draft focuses on a new framework rather than a new structure, I am still concerned that the proposed framework is more of a technical combination. Thank you for pointing out the experimental settings other than classification; actually, my concern was more about whether a retrieval task could be added.
I also checked the other reviews. My concerns have been partially resolved, and I hope the authors will revise/supplement this draft accordingly. I have raised my score to 5.
Thanks! | Summary: The main goal of the paper is to explore a light-weight approach that allows a CLIP-pretrained model to perform well in few-shot settings. The proposed approach (called Meta-Adapter) essentially learns an additional multi-head attention network with an additional gating function. The approach is simple and seems to achieve interesting results.
Strengths: The addressed problem is of great practical relevance and pretraining with CLIP is highly popular these days
Overall the idea is quite simple (which is positive) and intuitive
The experiments give a variety of comparisons to zero-shot CLIP and TIP-Adapter
Weaknesses: The paper mentions on line 61 that it performs ablation studies - but honestly the reported experiments in the main paper are not proper ablations - in particular the paper should have ablated the design choices of the approach, for which I could not find any experiment - since the approach is so simple, that would have been very easy to do and would have helped to understand the approach better
The paper reports slightly lower "base class" performance (e.g. Table 2) for most datasets - which is then "compensated" for by higher "novel class" performance. This is a classic trade-off many works have - but I would have liked to see not just a single pair of base/novel class performances but rather a curve/set of base/novel class performances - that would be more interesting and telling
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses section above
The authors argue that the work is doing some sort of "meta-learning" (lines 116-118) - however, section 3 (method) does not talk about any meta-learning setting as far as I can tell - can you please give an explanation what the meta-learning part is here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: ok for me
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: More ablation studies of the Meta-Adapter.**
**A1:**
Thanks for this insightful suggestion. Accordingly, we conduct further ablation studies to demonstrate the advantages of our design. As shown in Table 7, the results demonstrate that multi-head attention contributes most significantly to improving accuracy. The proposed learnable gating block further enhances performance, while introducing a value projection leads to decreased generalization capability. Besides, as shown in Table 8, we increase the model scale of the meta-adapter by widening the projection layers (Wider) or cascading multiple modules (Deeper). The results show that cascading more modules slightly improves accuracy at the cost of more parameters, but brings a significant efficiency decrease. We will add these tables to the revision.
**Table 7. Ablation study on different components. The LGB, VP, and MHA indicate the learnable gating block, value projection layer, and multi-head attention block, respectively.**
|Method | ImageNet | SUN397 | UCF101 | DTD |
|----------- | :------: | :---: | :----: |:----: |
|Meta-Adapter w/ VP | 25.6 |18.2 | 5.9 | 5.0 |
|Meta-Adapter w/o MHA| 32.9 |28.9 | 21.4 | 10.0 |
|Meta-Adapter w/o LGB| 40.2 |51.3 | 50.2 | 55.0 |
|Meta-Adapter | **40.8** |**52.6** |**51.0** | **55.8** |
**Table 8. Quantitative results of Meta-Adapter's variants on several datasets.**
|Method | ImageNet | SUN397 | UCF101 | DTD | #Param | Latency |
| ----------- | :-----------: | :-----------:| :---------:| :-------: | :-------:| :----: |
|Meta-Adapter | 40.8 | 52.6 | 51.0 | 55.8 | 2.1M | 3ms |
|Wider (X2) | 40.1 | 51.4 | 50.0 | 55.0 | 4.2M | 5ms |
|Wider (X4) | 40.5 | 51.3 | 50.4 | 55.0 | 8.4M | 9ms |
|Deeper (X2) | 40.4 | 52.9 | 52.2 | 56.7 | 4.2M | 6ms |
|Deeper (X4) | 38.9 | 52.7 | 51.4 | 56.3 | 8.4M | 11ms |
**Q2: The performance trade-off between base and novel classes.**
**A2:**
Thanks for this valuable suggestion. To demonstrate the generalization ability of Meta-Adapter, similar to CoCoOp [22], we introduce the harmonic mean as the evaluation criterion, which reflects the overall performance on both seen and unseen sets [57]. As shown in Table 9, although Tip-Adapter can overfit the seen data on some datasets, its overall performance is significantly inferior to Meta-Adapter. We will update the table in the revision.
**Table 9. Comparison of Zero-Shot CLIP, Tip-Adapter, and Meta-Adapter on ImageNet, UCF101, Caltech101, DTD, and FGVCAircraft datasets in the in-domain generalization setting. H: Harmonic mean.**
| Dataset | ImageNet | ImageNet | ImageNet | UCF101 | UCF101 | UCF101 | Caltech101 | Caltech101 | Caltech101 | DTD | DTD | DTD | FGVCAircraft | FGVCAircraft | FGVCAircraft |
|----------------|----------|----------|----------|--------|--------|--------|------------|------------|------------|------|-------|------|--------------|--------------|--------------|
| Model | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** |
| Zero-shot CLIP | 71.9 | 32.8 | 45.0 | 79.4 | 21.1 | 33.4 | 95.4 | 60.6 | 74.1 | 59.3 | 8.2 | 14.3 | 23.9 | 0.6 | 1.2 |
|Tip-Adapter | 73.3 | 36.5 | 48.7 | 85.2 | 40.3 | 54.7 | 96.3 | 73.3 | 83.2 | 66.8 | 41.7 | 51.3 | 30.7 | 14.9 | 20.0 |
| Meta-Adapter | 76.3 | 40.8 | **53.2** | 82.4 | 47.7 | **60.4** | 94.9 | 76.1 | **84.4** | 64.1 | 49.1 | **55.6** | 30.8 | 17.1 | **21.9** |
[22] Zhou K, Yang J, Loy C C, et al. Conditional prompt learning for vision-language models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 16816-16825.
[57] Xian, Y., Schiele, B. and Akata, Z., 2017. Zero-shot learning-the good, the bad and the ugly. In
Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4582-4591).
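As a quick sanity check, the harmonic-mean column in Table 9 can be reproduced from the Base/Novel numbers with the standard formula used in [57]:

```python
def harmonic_mean(base, novel):
    """H = 2 * Base * Novel / (Base + Novel), as in Xian et al. [57]."""
    return 2 * base * novel / (base + novel)

# Meta-Adapter on ImageNet: Base 76.3, Novel 40.8 -> H = 53.2 (Table 9)
print(round(harmonic_mean(76.3, 40.8), 1))  # prints 53.2
```

The same formula recovers the other entries, e.g. UCF101 (82.4, 47.7) gives 60.4.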
**Q3: Which part explains the meta-learning settings?**
**A3:**
Sorry for the confusion. We have presented the meta-learning strategy of our method in Lines 46-50. Specifically, we split the data into seen and unseen sets according to category, domain, or task. During training, we randomly sample few-shot image/text pairs from the seen set to optimize the parameters of the meta-adapter, endowing it with a general few-shot learning ability. During testing, we first sample a few few-shot image/text pairs per category from the unseen set, then feed them to the frozen meta-adapter to generate embeddings for each category, and finally predict classification scores by calculating the similarity between these embeddings and the features of the other images in the unseen set. We will clarify this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, which essentially addresses my main questions. Please also include that information in the final paper!
My assessment - also after reading the other reviews and rebuttals - remains the same. | Summary: This paper proposes Meta-Adapter which can refine the CLIP features guided by the few-shot samples in an online manner. The major challenge of adapting CLIP with few-shot samples is over-fitting. Compared with offline approaches CoOp or online approaches TIP-Adapter, Meta-Adapter alleviates the over-fitting problem and demonstrates superior generalization across datasets. The author further adapts the Meta-Adapter to an open-vocabulary object detector, ViLD, and also finds decent improvements
Strengths: + This paper is generally well-written and easy to follow.
+ The proposed Meta-Adapter shows improved generality over different datasets compared with the Tip-adapter baseline.
+ Meta-Adapter can be served as a plugin module for open-vocabulary models, such as ViLD.
Weaknesses: - Limited comparisons with existing approaches. Even though there are few online approaches like Tip-Adapter, the authors are encouraged to include more competitors, such as offline approaches, in the experimental discussion.
- The authors claim that Meta-Adapter uses 'a lightweight residual style adapter'. However, the optimal size of the adapter is not justified by an ablation.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The captions of each figure are short. I recommend adding more detailed explanations for each figure.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Limited comparisons with existing approaches.**
**A1:**
Thanks for the suggestions. As shown in Table 7, we provide comparisons between our Meta-Adapter and other offline methods. For a fair comparison, similar to CoCoOp, the experiments adopt a base-to-novel generalization setting. The results demonstrate that our method has significant advantages in generalization to novel classes, e.g., improving over CoCoOp by 6.9% on ImageNet. Additionally, as in previous works, we introduce the harmonic mean to measure overall performance on base and novel classes. It can be observed that our approach also shows clear superiority in terms of overall performance. Moreover, as shown in Table 8, we provide training-time comparisons between different methods, where the duration is based on the default settings in the original papers. The results show that our method not only surpasses these offline approaches in generalization performance but also has noticeable advantages in training speed. More importantly, fine-tuning is not needed for our method when applied to new datasets or tasks. We will add these tables to the revision.
**Table 7. Comparison of Zero-Shot CLIP, CLIP-Adapter, CoOp, CoCoOp, and Meta-Adapter on ImageNet, UCF101, Caltech101, DTD, and FGVCAircraft datasets in the in-domain generalization setting. H: Harmonic mean.**
| Dataset | ImageNet | ImageNet | ImageNet | UCF101 | UCF101 | UCF101 | Caltech101 | Caltech101 | Caltech101 | DTD | DTD | DTD | FGVCAircraft | FGVCAircraft | FGVCAircraft |
|----------------|----------|----------|----------|--------|--------|--------|------------|------------|------------|------|-------|------|--------------|--------------|--------------|
| Model | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** |
| Zero-shot CLIP | 71.9 | 32.8 | 45.0 | 79.4 | 21.1 | 33.4 | 95.4 | 60.6 | 74.1 | 59.3 | 8.2 | 14.3 | 23.9 | 0.6 | 1.2 |
| CLIP-Adapter | 76.3 | 15.1 | 25.3 | 89.4 | 5.4 | 10.2 | 97.3 | 39.3 | 54.0 | 70.2 | 2.0 | 3.9 | 32.1 | 0.3 | 0.6 |
| CoOp | 75.3 | 2.7 | 5.2 | 89.3 | 1.0 | 2.0 | 97.2 | 31.3 | 47.4 | 71.6 | 1.5 | 2.9 | 32.7 | 0.3 | 0.6 |
| CoCoOp | 75.5 | 33.9 | 46.8 | 86.5 | 9.1 | 16.5 | 96.8 | 60.9 | 74.8 | 69.1 | 3.0 | 5.8 | 30.0 | 0.8 | 1.6 |
| Meta-Adapter | 76.3 | 40.8 | **53.2** | 82.4 | 47.7 | **60.4** | 94.9 | 76.1 | **84.4** | 64.1 | 49.1 | **55.6** | 30.8 | 17.1 | **21.9** |
**Table 8. Training time on ImageNet of Meta-Adapter and other offline methods on a single GeForce RTX 3090.**
|CLIP-Adapter (200 epochs) | CoOp (200 epochs) | CoCoOp (10 epochs) | Meta-Adapter (10 epochs) |
| ----------- | :-----------: | :-----------: | :-----------: |
| 17h 30min | 15h 10min | 23h 30min | **20min** |
**Q2: The captions of each figure are short.**
**A2:**
Many thanks. We will add more details to the captions according to the reviewer's suggestion.
**Q3: The optimal size of the adapter is not justified in ablation.**
**A3:**
We appreciate the comments. Since we pursue online few-shot learning, our method needs to balance accuracy and efficiency. As shown in Table 9, we increase the model scale of the meta-adapter by widening the projection layers (Wider) or cascading multiple modules (Deeper). The results show that cascading more modules slightly improves accuracy at the cost of more parameters, but brings a significant efficiency decrease. We will clarify this and add the table to the revision.
**Table 9. Quantitative results of Meta-Adapter's variants on several datasets.**
|Method | ImageNet | SUN397 | UCF101 | DTD | #Param | Latency |
| ----------- | :-----------: | :-----------:| :---------:| :-------: | :-------:| :----: |
|Meta-Adapter | 40.8 | 52.6 | 51.0 | 55.8 | 2.1M | 3ms |
|Wider (X2) | 40.1 | 51.4 | 50.0 | 55.0 | 4.2M | 5ms |
|Wider (X4) | 40.5 | 51.3 | 50.4 | 55.0 | 8.4M | 9ms |
|Deeper (X2) | 40.4 | 52.9 | 52.2 | 56.7 | 4.2M | 6ms |
|Deeper (X4) | 38.9 | 52.7 | 51.4 | 56.3 | 8.4M | 11ms |
---
Rebuttal Comment 1.1:
Title: Good rebuttal, please keep additional experiments in the paper
Comment: Thank the authors for the detailed rebuttal. It resolved all my concerns and I highly encourage the authors to include the additional experiments in the main paper or appendix.
I have raised my score to 5: Borderline accept. | Summary: This paper proposes an online adaptation method for CLIP (no fine-tuning on few-shot samples of unseen categories is required, unlike CoOp, CoCoOp, and CLIP-Adapter), called Meta-Adapter. The main claim seems to be that Meta-Adapter is more robust than the most related approach, Tip-Adapter, which relies heavily on a hyperparameter search strategy on the target dataset – alpha and beta in Eq (2), which help adjust the weight between category embeddings and few-shot visual embeddings.
More specifically, the high-level idea of Meta-Adapter can be found in Figure 2 and Eq. (3). One can compare Eq. (2) and Eq. (3) to see how Meta-Adapter simplifies the online adaptation. The architecture of Meta-Adapter is based on the gated multi-head attention mechanism [26].
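A minimal single-head sketch of a gated cross-attention block of the kind summarized above, assuming a zero-initialized learnable gate and no value projection (both details are illustrative; [26] and the paper's actual parametrization may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

W_q = rng.normal(scale=dim ** -0.5, size=(dim, dim))  # query projection
W_k = rng.normal(scale=dim ** -0.5, size=(dim, dim))  # key projection
gate = np.zeros(dim)  # learnable gate, initialized closed (tanh(0) = 0)

def gated_cross_attention(text_emb, support_feats):
    """Refine one category's text embedding by attending over its
    few-shot visual features, with a gated residual connection."""
    q = text_emb @ W_q                        # (dim,)
    k = support_feats @ W_k                   # (n_shots, dim)
    attn = softmax(q @ k.T / np.sqrt(dim))    # (n_shots,)
    update = attn @ support_feats             # no value projection here
    return text_emb + np.tanh(gate) * update

text = rng.normal(size=dim)
shots = rng.normal(size=(4, dim))
out = gated_cross_attention(text, shots)
# With the gate closed, the block reduces to the identity on text_emb.
```

With the zero-initialized gate the module starts as the identity on the zero-shot text embedding, which matches the intuition that few-shot refinement should only gradually deviate from zero-shot CLIP.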
Strengths: S1: Even though the design choice for Meta-Adapter seems arbitrary, it is simple and sound.
S2: Experimental results show the superiority of Meta-Adapter to Tip-Adapter across tasks/settings and architectures (Table 1-5).
Weaknesses: W1: Experimental settings seem to deviate from those in Tip-Adapter, making them less convincing. Examples: (i) This paper considers 8 image classification datasets instead of 11. (ii) The efficiency analysis is lacking. (iii) Related to (ii), there are no CoOp, CoCoOp, or CLIP-Adapter baselines. (I understand that these are offline approaches while the proposed method is online, but it would be nice to discuss, for example, an effectiveness/efficiency tradeoff).
W2: Clarity. This point is minor, but overall the paper seems to jump into details too quickly and would benefit from more coarse-to-fine writing. For example, Section 3.2 could benefit from describing the different components before jumping into details.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Experiments on more image classification datasets.**
**A1:**
Thanks for the valuable suggestion. As shown in Table 7, we conduct experiments on the other 3 datasets the reviewer mentioned. Similar to the results reported in the main paper, our method also achieves consistent gains over the Tip-Adapter. We will add this table to the revision.
**Table 7. Quantitative results of Meta-Adapter and other methods on Food101, Stanford Cars, and Oxford Flowers datasets.**
|Method | Food101 | Stanford Cars | Oxford Flowers |
| ----------- | :-----------: | :-----------: | :-----------: |
| Zero-Shot CLIP| 77.4 | 55.7 | 66.0 |
| Tip-Adapter | 77.8 | 66.7 | 89.9 |
| Meta-Adapter | **79.0** | **67.3**| **93.5** |
**Q2: Efficiency analysis is lacking.**
**A2:**
We have reported the comparison of inference time for online methods in Figure 1(b). The results show that the inference time of Tip-Adapter increases linearly with the number of input shots, due to its individual encoding for each shot. In contrast, our method can maintain a much lower and constant time consumption, while achieving a higher accuracy.
**Q3: Comparison with offline methods.**
**A3:**
As shown in Table 8, we provide comparisons between our Meta-Adapter and other offline methods. For a fair comparison, similar to CoCoOp, the experiments adopt a base-to-novel generalization setting. The results demonstrate that our method has significant advantages in generalization to novel classes, e.g., improving over CoCoOp by 6.9% on ImageNet. Additionally, as in previous works, we introduce the harmonic mean to measure overall performance on base and novel classes. It can be observed that our approach also shows clear superiority in terms of overall performance. Moreover, as shown in Table 9, we provide training-time comparisons between different methods, where the duration is based on the default settings in the original papers. The results show that our method not only surpasses these offline approaches in generalization performance but also has noticeable advantages in training speed. More importantly, fine-tuning is not needed for our method when applied to new datasets or tasks. We will add these tables to the revision.
**Table 8. Comparison of Zero-Shot CLIP, CLIP-Adapter, CoOp, CoCoOp, and Meta-Adapter on ImageNet, UCF101, Caltech101, DTD, and FGVCAircraft datasets in the in-domain generalization setting. H: Harmonic mean.**
| Dataset | ImageNet | ImageNet | ImageNet | UCF101 | UCF101 | UCF101 | Caltech101 | Caltech101 | Caltech101 | DTD | DTD | DTD | FGVCAircraft | FGVCAircraft | FGVCAircraft |
|----------------|----------|----------|----------|--------|--------|--------|------------|------------|------------|------|-------|------|--------------|--------------|--------------|
| Model | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** | Base | Novel | **H** |
| Zero-shot CLIP | 71.9 | 32.8 | 45.0 | 79.4 | 21.1 | 33.4 | 95.4 | 60.6 | 74.1 | 59.3 | 8.2 | 14.3 | 23.9 | 0.6 | 1.2 |
| CLIP-Adapter | 76.3 | 15.1 | 25.3 | 89.4 | 5.4 | 10.2 | 97.3 | 39.3 | 54.0 | 70.2 | 2.0 | 3.9 | 32.1 | 0.3 | 0.6 |
| CoOp | 75.3 | 2.7 | 5.2 | 89.3 | 1.0 | 2.0 | 97.2 | 31.3 | 47.4 | 71.6 | 1.5 | 2.9 | 32.7 | 0.3 | 0.6 |
| CoCoOp | 75.5 | 33.9 | 46.8 | 86.5 | 9.1 | 16.5 | 96.8 | 60.9 | 74.8 | 69.1 | 3.0 | 5.8 | 30.0 | 0.8 | 1.6 |
| Meta-Adapter | 76.3 | 40.8 | **53.2** | 82.4 | 47.7 | **60.4** | 94.9 | 76.1 | **84.4** | 64.1 | 49.1 | **55.6** | 30.8 | 17.1 | **21.9** |
**Table 9. Training time on ImageNet of Meta-Adapter and other offline methods on a single GeForce RTX 3090.**
|CLIP-Adapter (200 epochs) | CoOp (200 epochs) | CoCoOp (10 epochs) | Meta-Adapter (10 epochs) |
| ----------- | :-----------: | :-----------: | :-----------: |
| 17h 30min | 15h 10min | 23h 30min | **20min** |
**Q4: Minor issues about clarity.**
**A4:**
Thanks for the constructive comments. Accordingly, we will improve the presentation by adding more high-level descriptions and backgrounds before Section 3.2. | null | null | null | null | null | null |
Block-local learning with probabilistic latent representations | Reject | Summary: The present work proposes a block-wise learning strategy, whereby the architecture is split into several blocks, with each block receiving an error signal stemming from a local (block-wise) loss. As this technique makes use of a parametrized twin network to compute these error signals, it also bypasses the so-called "weight transport problem" of the standard backprop algorithm. The motivation of this work is the hardware-friendliness of the resulting algorithm.
More precisely:
- Section 3.1 formulates the learning objective in a probabilistic fashion (likelihood maximization) and how to do it (variational inference)
- Section 3.2 explains the parametrization of the base model $p(z|x) := \alpha(z)$ to be trained and that of the variational posterior $q(z|x, y)$. The variational posterior itself decomposes into the product of bottom-up and top-down messages: $q(z|x, y) \propto p(z|x) p(y|z) := \alpha(z) \times \beta(z)$. The base model and the variational posterior are parametrized within the *exponential family*, and the neural networks are made to output the natural parameters associated with these distributions. No stochastic quantity is ever propagated through the nets -- e.g., it does not operate as a VAE.
- Section 3.3 grounds the previous idea in an encoder-decoder setting where the last layer of the decoder is augmented with extra channels to output pixel-wise variances in the image space. Training this architecture on F-MNIST with the proposed approach (the learning objective is not yet introduced at this stage), it is shown that the resulting uncertainties are good proxy to the reconstruction error.
- Section 3.4 introduces the proposed local (block-wise) learning objectives -- the derivation is in Appendices 1.2 -> 1.3.4. Heuristically: the learning rule reads as the forward/posterior mismatch backpropagated into the feedforward block (first term of Eq. 7) and its feedback counterpart (second term of Eq. 7) through the natural-parameter gradients. Importantly, a heuristic called "data mixing" is introduced to mildly take into account the residual terms of the upper bound of the loss (details in Appendix).
- Section 4.1 presents results obtained on MNIST, F-MNIST and CIFAR-10 with the proposed technique on ResNet architectures, benchmarked against feedback alignment and standard backprop. The proposed approach performs comparably with backprop on MNIST and F-MNIST, slightly outperforms FA on CIFAR-10 but is considerably degraded with respect to backprop.
- Section 4.2 finally presents results on a 20-layers deep transformer architecture on a toy task (reverting a random permutation of numbers ranging between 0 and 9) where the proposed approach is shown to perform comparably with backprop.
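For concreteness, the factorized posterior of Section 3.2, $q(z|x,y) \propto \alpha(z)\beta(z)$, is convenient within a shared exponential family because multiplying densities amounts to adding natural parameters (a standard fact, stated here in generic notation rather than the paper's):

```latex
% Assuming both messages lie in the same exponential family with
% sufficient statistics T(z):
\alpha(z) \propto \exp\!\big(\eta_\alpha^{\top} T(z)\big),
\qquad
\beta(z) \propto \exp\!\big(\eta_\beta^{\top} T(z)\big)
\;\Longrightarrow\;
q(z \mid x, y) \;\propto\; \alpha(z)\,\beta(z)
\;\propto\; \exp\!\big((\eta_\alpha + \eta_\beta)^{\top} T(z)\big)
```

So the bottom-up and top-down networks only need to output $\eta_\alpha$ and $\eta_\beta$, and the posterior's natural parameters are obtained by summation.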
Strengths: - The paper tackles an interesting problem: block-wise local training cast into a probabilistic setting.
- The idea of the paper is interesting: the starting point is the same as Predictive Coding (Whittington & Bogacz, 2017), but: i/ picking a different variational family ii/ amortizing inference with a single forward pass (rather than minimizing energy for the E-step at each batch iteration) iii/ having a whole block of layers (where backprop applies) to compute the parameters of the distributions. I like the idea of stitching several algorithms together to solve a problem.
Weaknesses: - The writing and the structure of the paper make it extremely difficult to deeply understand the proposed approach. To my eyes, ideas do not appear in the right order in the main, important ideas are in the appendix and the notations are confusing, important details are missing.
- It is *not* true that the algorithm parallelizes the forward and backward pass (L.40: "forward and backward propagation can work in parallel"). The underlying algorithm is just variational inference applied to a generative model conditioned on a given input $x$. I do not see any valid (theoretically grounded) argument as to why the first block could start processing a novel input $x'$ while the upper blocks process an input $x$. **The block-wise locality of the learning rule does not suffice as an argument here**.
- In the same vein, it is *not* true either that the algorithm allows for parallelization of the backward pass *across different blocks*. Here again, as the underlying algorithm simply is variational inference, I see no valid argument as to why each (feedforward) block could do without a top-down error signal. However, it is true that the training of the *feedback* parameters can be parallelized.
- The derivation of the learning rule -- which should be, to my eyes, the *central piece* of the paper -- is unfortunately not presented in a sufficiently clear and detailed fashion.
- Section 3.3 on uncertainty estimation is weak and orthogonal to the scope of the paper in my eyes. The uncertainty estimation is carried out on a single simple task and does not abide by the standards of the uncertainty estimation literature: are the uncertainties well calibrated? Can they be used for anomaly detection? What is the quantitative performance of the anomaly detection in terms of binary classification metrics? A good task to consider (and not too difficult) is the MVTec AD dataset (https://www.mvtec.com/company/research/datasets/mvtec-ad).
- The baselines chosen in the experimental section are not very relevant. The proposed technique applies to block-wise training, but not a single block-wise training baseline (e.g. Belilovsky (2019)) is considered. Why? A relevant choice would have been to consider a VGG-11 architecture with 3 layers per block and try to reach $\approx 67.6 \\%$ top-1 performance on ImageNet (Belilovsky et al, 2019: https://arxiv.org/pdf/1812.11446.pdf).
- Given that the use of backprop is allowed within each block and that ResNet architectures are considered, the experimental results are disappointing: the performance obtained on CIFAR-10 is very poor ($\approx 70 \\%$ accuracy on CIFAR-10 is achievable by a 4 layers-deep convnet), ImageNet32 (or even ImageNet) has not been considered, and some results are surprising (see one of the questions below).
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - It is more a suggestion for clarity improvement than a question: for the sake of clarifying the derivation of the learning rule, could you consider presenting things **in the main** closer to the following:
+ $-\log p(y|x) \leq L_b^k (^*)$ with $L_b^k:=-\mathbb{E}_q[\log p(z^k, y|x)] - H(q(z^k |x,y)) = -\log p(y|x) + KL(q(z^k |x,y) || p(z^k|x,y))$.
+ Sum $(^*)$ over $k$ and divide by $N$ to obtain the upper bound $L_b := -\log p(y|x)+ \frac{1}{N}\sum_{k=1}^N KL(q(z^k |x,y) || p(z^k|x,y))$.
+ Re-write $L_b$ as: $L_b=\frac{1}{N}\sum_{k=1}^N\ell(x, y, z^k)$ with $\ell(x, y, z^k) := \mathbb{E}_{q(z^k|x,y)}\left[\log \frac{q(z^k|x,y)}{p(z^k|x)p(y|z^k)}\right]$
+ Finally rewrite $\ell(x, y, z^k) = KL(q(z^k|x,y) || p(z^k|x)) - \mathbb{E}_{q(z^k|x,y)}\left[\log p(y|z^k)\right]$
For next questions onward, we denote $\ell^{(1)}(x, y, z^k) := KL(q(z^k|x,y) || p(z^k|x))$ and $\ell^{(2)}(x, y, z^k) := - \mathbb{E}_{q(z^k|x,y)}\left[\log p(y|z^k)\right]$ such that $\ell(x, y, z^k) = \ell^{(1)}(x, y, z^k) + \ell^{(2)}(x, y, z^k)$.
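For concreteness, the identity behind the bound $(^*)$ can be verified numerically on a toy discrete model. The sketch below is purely illustrative (an arbitrary random joint over 5 latent states and 3 labels, not anything from the paper):

```python
import numpy as np

# Toy discrete check of the identity behind (*): for any variational q,
#   -E_q[log p(z, y | x)] - H(q) = -log p(y|x) + KL(q || p(z | x, y)),
# so the left-hand side upper-bounds -log p(y|x). Illustrative only; the
# model below is an arbitrary random joint, not the paper's network.
rng = np.random.default_rng(0)
p_joint = rng.random((5, 3))
p_joint /= p_joint.sum()            # joint p(z, y | x): 5 latents, 3 labels

y = 1                               # observed label index
p_y = p_joint[:, y].sum()           # marginal p(y | x)
posterior = p_joint[:, y] / p_y     # true posterior p(z | x, y)

q = rng.random(5)
q /= q.sum()                        # arbitrary variational q(z | x, y)

entropy_q = -(q * np.log(q)).sum()                      # H(q)
L = -(q * np.log(p_joint[:, y])).sum() - entropy_q      # -E_q[log p(z,y|x)] - H(q)
kl = (q * np.log(q / posterior)).sum()                  # KL(q || p(z|x,y))
```

Since the KL term is non-negative, `L` always sits at or above `-np.log(p_y)`, which is exactly the bound used in the derivation.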
- In the light of the previous remark, I don't think it is optimal for clarity purposes to directly give the formula in Eq. (7). Also, Eq. 7 is misleading as it stands, since you simply write $\partial_\theta \ell^{(1)}(x, y, z^k)$ and *not* $\partial_\theta \ell (x, y, z^k)$ (i.e. the gradient of the total ELBO).
Another thing which is very unclear is that you make no distinction between the parameters of the *base network* and those of the *target* / feedback network. In L. 104, you define $\theta$ as the "network parameters", but in Eq. 8, you are explicitly taking the gradient of the output of the feedback network with respect to $\theta$. There is an ambiguity here that hinders clarity. Please distinguish between the two sets of parameters.
- Following up on the previous bullet: there is *no* explanation in the main of why you are discarding $\ell^{(2)}(x, y, z^k)$ from the ELBO gradient. It is only when looking at Section 1.3.4 of the supplementary material that we understand that: 1/ $\ell^{(2)}(x, y, z^k)$ is intractable; 2/ "data mixing", which appears to be a heuristic optimization trick in the main, is in fact intended to approximate/emulate the intractable contribution of $\ell^{(2)}(x, y, z^k)$. Even after reading L. 72-80 multiple times, I still do not understand this trick. If I missed something important, please clarify it inside the main.
- I'm doubtful about the experimental results: why does the proposed technique overfit so well on CIFAR-10 while being so poor at generalization, in spite of using a ResNet? Do you train all layers (4 layers would be enough to overfit CIFAR-10)? Do you not apply optimization tricks to avoid overfitting (i.e. weight decay, dropout, data augmentation)? This is very surprising.
- One other question which is absent in the paper (perhaps hinted by Fig. 1) is how **the last block**, with the classification error signal, is trained? My assumption is that it is trained by mere backprop.
- **A potential interpretation for your CIFAR-10 results**. If the previous point holds true, my interpretation of your surprising CIFAR-10 results is that you might end up training *only* the last block (4 layers for ResNet-18), which is sufficient to overfit CIFAR-10, but generalizes like a 4-layer-deep architecture (consistently with my remark above) because the error signal received by previous layers might be irrelevant. To sanity-check this, I would suggest performing a low-dimensional projection (e.g. using t-SNE) of the activations of the last layer of the penultimate block (e.g. $a_2$ on Fig. 1) and visualizing whether the classes are well separated. I assume (but perhaps I'm wrong) they are not.
- **Why the previous blocks might not learn?** This is something to be investigated further. I see several possibilities:
+ The "data mixing" trick is too heuristic and does not suffice to "emulate"/take into account the intractable part of your ELBO gradient.
+ If you indeed parallelize gradient computation *across different blocks*, it might be that some blocks never receive top-down error information.
- A really minor point at this stage would be to mention Predictive Coding (Whittington & Bogacz, 2017), whose starting point is the same as yours (e.g. variational inference on a generative model), but using a much simpler variational family.
- Coming back on uncertainty estimation: I'm not sure it is a desirable property that the model exhibits high pixel-wise variance for *in-distribution* features. A more desirable property is rather to have high uncertainty on *out-of-distribution* features -- namely: segmentation anomalies/defects. Although I still think this direction is orthogonal to the scope of the paper and should be removed, if this is of interest to you, consider the MVTec AD dataset and see if the pixel uncertainties can be leveraged to detect object anomalies.
- Another minor point: Frenkel et al (2021) does not suffice as a reference to Target Prop algorithms. Could you please add Lee et al (2015) -- the seminal target prop paper --, Meulemans et al (2020), Ernoult et al (2022).
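The class-separation sanity check suggested for the penultimate block's activations could be sketched roughly as follows. This is purely illustrative: the activations are synthetic stand-ins, and a plain PCA projection substitutes for the suggested t-SNE:

```python
import numpy as np

# Synthetic stand-in for penultimate-block activations (two classes in 64-d);
# project to 2D with PCA (a simple substitute for t-SNE) and measure a crude
# class-separation score. Everything here is hypothetical illustration.
rng = np.random.default_rng(0)
acts = np.vstack([rng.normal(0.0, 1.0, (100, 64)),      # class 0 activations
                  rng.normal(2.0, 1.0, (100, 64))])     # class 1 activations
labels = np.repeat([0, 1], 100)

centered = acts - acts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T                              # 2D PCA projection

mu0 = proj[labels == 0].mean(axis=0)
mu1 = proj[labels == 1].mean(axis=0)
spread = proj.std(axis=0).mean()
separation = np.linalg.norm(mu0 - mu1) / spread
# if upstream blocks learned nothing useful, `separation` would stay near 0
```

On real activations, a separation score near zero (or a visually unstructured scatter plot) would support the hypothesis that only the last block is learning.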
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: To summarize my points of advice above:
- In terms of presentation, I would recommend:
+ you clarify the derivation of your learning rule along the lines suggested above,
+ state clearly the intractability of $\ell^{(2)}$ for the ELBO gradient and better explain the heuristic used ("data mixing"),
+ distinguish between the feedback and feedforward block parameters, e.g. $\theta_f$ and $\theta_b$,
+ remove the claim that your approach allows for forward/backward pass parallelization and backward pass parallelization across layers. While you *can* do this in practice, the theoretical approach itself (i.e. variational inference) does not prescribe doing this. This might explain why the penultimate and upstream blocks don't learn.
+ **Please write a detailed pseudo-algorithm** for the proposed procedure for a given training batch. That would be extremely helpful.
- Try to check if the previous blocks are really learning. My hypothesis is that they are not, and that this requires fixing the algorithm itself.
- If uncertainty estimation matters to you, I would suggest considering the MVTec AD dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very detailed and valuable feedback. We made multiple changes to make the paper more accessible for a broad audience. We also clarified that the proposed method in fact combats the locking problem by adding pseudo code and additional explanations. We provide additional details on individual concerns below.
**Response to "weaknesses":**
1) The writing and the structure of the paper ...
*Response:* Based also on comments by other reviewers we updated the theory part of the paper. We shortened some parts of the uncertainty estimation example in Fig.2 to make more space to describe the details of the model better. We also added additional details to the supplement (see Sections 3.1, 3.4, S1.3, S1.3.4 and S1.4).
2) It is *not* true that ...
*Response*: The proposed BLL model is not just any application of variational inference (VI), but a very specific application of VI that allows us to separate the forward and backward propagation. The key point is that the linear combination of forward and backward messages in Eq.S10 forms the variational posteriors, which reflects the conditional independence structure of (S3). This property allows us to split inference paths and parameter spaces, but is usually not exploited in other VI models. Hence, forward and backward messages can start propagating in parallel from both ends (our claim in L.40), and parameter updates can be computed as soon as both streams of information arrive at individual blocks. In contrast, in standard error back-propagation, backward updates are completely stale until losses are computed at the end of the forward pass. We added additional details to Sec. S2.2 to clarify and explain this feature of our algorithm in greater detail.
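A minimal numeric sketch of this claimed scheme (our own illustration, not the authors' model or code; the linear-tanh blocks, the squared-error local loss, and the update rule are all assumptions made for the example):

```python
import numpy as np

# Minimal numeric sketch of the claimed scheme (our illustration, not the
# authors' model): forward messages depend only on x, backward messages only
# on the label y, so the two sweeps are independent; each block then updates
# from its local (a_k, b_k) pair. Blocks, losses and updates are assumptions.
rng = np.random.default_rng(0)
K, d = 3, 8
Wf = [0.5 * rng.normal(size=(d, d)) for _ in range(K)]   # forward blocks
Wb = [0.5 * rng.normal(size=(d, d)) for _ in range(K)]   # feedback blocks

def train_step(x, y, lr=0.05):
    a = [x]
    for W in Wf:                       # forward sweep: needs only x
        a.append(np.tanh(W @ a[-1]))
    b = [y]
    for W in reversed(Wb):             # backward sweep: needs only y
        b.append(np.tanh(W @ b[-1]))
    b = b[::-1]                        # after reversal, b[k+1] targets a[k+1]
    losses = []
    for k in range(K):                 # block-local squared-error updates
        err = a[k + 1] - b[k + 1]
        losses.append(float(err @ err))
        Wf[k] -= lr * np.outer((1.0 - a[k + 1] ** 2) * err, a[k])
    return losses
```

The two message loops touch disjoint state, which is what would make them schedulable in parallel; each block's update then needs only its local pair of messages, not a full backward error sweep.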
3) In the same vein, it is *neither* true that ...
*Response:* We put more emphasis on the derivation as outlined above.
3) Section 3.3 on uncertainty estimation is weak ...
*Response:* This example was meant for didactic reasons and we think that it helps to better understand the model. We will consider the MVTec AD dataset to evaluate uncertainty estimation in future work.
4) The baselines chosen in the experimental ...
*Response:* Belilovsky et al. 2019 uses a block-wise learning approach, but with greedy learning, where blocks are trained sequentially (rather than simultaneously). While this is somewhat orthogonal to our approach, where all parts of the network are trained in parallel, we agree that it is interesting to point out the existing different approaches. We included this and other baselines as suggested.
5) Given that the use of backprop is allowed ...
*Response:* After the submission we found that the poor results on CIFAR-10 and Fashion MNIST were due to an error in the implementation. The error concerned the generation of splits between blocks and local losses and prevented meaningful learning in all but the last block. We re-ran all experiments and get significantly better results now. We included these experiments in the updated version. We will also run additional experiments on ImageNet as suggested for the camera-ready version.
**Response to "Questions":**
1) It is more a suggestion for clarity improvement ...
*Response:* We restructured the math as suggested.
2) In the light of the previous ...
*Response:* Our initial idea was to treat the model for the general case where parameter spaces are not separated. We see now that this only led to confusion, and we made this point clear from the beginning as suggested.
3) Following up on the previous ...
*Response:* Yes, the data mixing as presented is a heuristic based on the recursive decomposition of the VI gradient. A theoretically exact version of data mixing is possible but produces exponentially many terms in the number of splits. For a small number of splits this is still tractable, but our first experiments suggested that the much cheaper approximate version included in the paper suffices. We clarified this.
4) I'm doubtful about the experimental results ...
*Response:* This difference is gone now after fixing the code as outlined above. We updated the results.
5) One other question which is absent in ...
*Response:* Exactly. For the last block the exact gradients are straightforward.
6) A potential interpretation for your CIFAR-10 results ...
*Response:* We added this analysis to visualize the parameter space in Fig. S1.
7) Why the previous blocks might not learn ...
*Response:* The newly added t-SNE analysis suggests that gradients do propagate through the network. We are investigating the impact of the data mixing heuristic, but a complete analysis seems to extend beyond the scope of the paper (or at least was not ready for this rebuttal). We are implementing the more complex recursive approximations L2, L3, … to see if they give significantly better results on tasks like CIFAR10, ImageNet, etc. We will include first results in the camera-ready version of the paper.
8) A really minor point ...
*Response:* We added a mention of and relation to predictive coding, as suggested here and by other reviewers.
9) Coming back on uncertainty estimation ...
*Response:* Thank you, we will consider this data set.
10) Another minor point ...
*Response:* We added these references.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal answer
Comment: I hereby acknowledge I thoroughly read your rebuttal and took into account the updated version of the paper.
**Responses to weaknesses**
1. OK
2. *"Hence, forward and **backward messages** can start propagating in parallel from both ends (our claim in L.40), and **parameter updates can be computed as soon as both streams of information arrive at individual blocks***. So you acknowledge that you *cannot* update the **feedforward** parameters until a "backward" stream of information reaches "individual blocks": **therefore feedforward updates are locked**, thank you for acknowledging this point. This being said, I do agree that **feedback** parameter updates are not locked. I already pointed this out in my initial review: *please refine the discussion about update locking separating out clearly the cases for feedforward and feedback parameters*.
3. Same.
3. Yet, it would still deserve further explorations and is still, to my eyes, orthogonal to your work. I would rather picture this part in Appendix.
4. Again: in spite of the peculiar topology of the underlying probabilistic model, your approach does *not* prescribe updating the *feedforward parameters* in parallel - or at least you haven't yet convinced me about this. So Belilovsky et al (2019), which embraces the sequentiality of the feedforward updates, stands as a relevant comparison to your work.
5. Thank you for acknowledging that my interpretation of your initial results was exact. Also, I really appreciate the amount of effort here. This being said, 84.2\% top-1 test accuracy on CIFAR-10 and *most importantly* 87.6\% top-1 train accuracy is still low and points to an **optimization issue**: you should be able to reach $\approx 95\%$ top-1 train accuracy. Therefore, there is still an unresolved issue here which I think might be due to *the bias induced by the estimation of the intractable term of your gradient formula*, or because the parallelization of your feedforward weight updates carries your feedforward weights in irrelevant directions.
**Responses to questions**
1. Thank you, it is highly appreciated.
2. The update locking narrative depends on whether you consider feedforward or feedback parameters, so it is **crucial** to distinguish them.
3. OK, making this clearer is really important. Thank you.
4. I already addressed this point above.
5. OK, it is important to mention it explicitly somewhere.
6. *"The error concerned the generation of splits between blocks and local losses and prevented meaningful learning in all but the last blocks."*. Thank you for acknowledging my interpretation of your results.
7. This answer holds *after your code fix*. As to the results currently standing in the updated pdf: as I said above, I'm not convinced that optimization is working well.
8. Thank you.
9. OK
10. OK
Bottom line:
- Thank you a lot for the tremendous amount of effort put into your updated version. I really appreciate it.
- Still, I strongly dispute the claim that you can parallelize feedforward parameter updates for the aforementioned reasons. I'm still deeply unconvinced about it.
- While the updated experimental results are better than the previous ones (where the bug is *exactly* the one I suspected), the train accuracy is still low, which hints at an **optimization issue** which, I hypothesize, might be due to: 1/ updating your feedforward weights in irrelevant directions as you update them in parallel *while you should wait for a top-down error information signal*, as you *yourself* acknowledge, or 2/ the way you treat the intractable term of your gradient formula.
---
Reply to Comment 1.1.1:
Title: Reply to post rebuttal answer
Comment: 2.) The backward network only needs the (typically very sparse) labels to compute the required b_k terms, and not the errors. Therefore, the activations of the backward network can be calculated in parallel with the forward network. This means the weight updates can be calculated as soon as the forward pass in the forward network is done, since the backward activations have already been calculated. In standard backprop, by contrast, the backward pass can only begin once the forward pass is done, i.e. it is locked. For a supervised training method, updating the parameters of each block with greater parallelism than this introduces a tradeoff in terms of staleness of parameters, because information from the labels will always be required. Given these two constraints, we think we have achieved very high parallelism, so for the class of models that use tunable backward weights, there is not more that can be done.
5.) We added Belilovsky et al (2019) as a comparison despite it being a somewhat different approach as described in the rebuttal.
6.) We have very carefully checked the implementation now; we don’t think there are any more issues with it. Also, our analysis demonstrates that gradients propagate across blocks, suggesting that propagation of backward information works. Clearly the performance is below that of end-to-end training, but previous approaches have also reported lower training losses on CIFAR-10. As pointed out, we are running ablation studies and augmentations to the model to further improve the performance (also on ImageNet). They look quite promising and will be included in the camera-ready version. | Summary: The authors present a block-local learning rule as an alternative to end-to-end gradient backpropagation to train neural networks. They present a probabilistic view of neural network representations and, assuming an exponential family of distributions, derive a learning rule that can be understood as forward and backward message passing between blocks. Notably, their message passing interpretation allows them to formulate auxiliary local losses that can then be optimized using gradient descent at the block-local level. Furthermore, they claim that their algorithm is a more principled way of performing algorithms like Feedback Alignment and Target Propagation. Finally, they demonstrate that their algorithm can be used to train ResNets (ResNet-18 & ResNet-50) on certain vision datasets and a Transformer architecture on a sequence prediction task. Overall, the proposed algorithm seems a promising alternative to backpropagation with local learning properties that enable better memory footprint and neuroscientific realism than backpropagation.
Strengths: 1. The paper does a good job in explaining the probabilistic interpretation and message passing view of the forward and backward phases in neural networks.
2. The authors also empirically demonstrate the efficacy of training neural networks in the autoencoder setting as well as in image classification and sequence prediction settings.
3. The proposed probabilistic interpretation of activations allows uncertainty estimation in neural network predictions. Although these uncertainty estimates are not benchmarked in the paper, I feel this is a strength of the proposed method.
4. Owing to its block-local nature, the proposed algorithm can be thought to be a biologically-plausible credit assignment algorithm for hierarchical neural networks. In doing so, this paper also offers a potential solution to memory-efficient distributed training of neural networks.
Overall, I think it's a strong submission and is very relevant for the NeurIPS community.
Weaknesses: 1. The writing and presentation in the paper are sometimes hard to read and understand, leading to a lack of in-depth understanding of the proposed algorithm.
2. The current version of the paper does a great job in introducing the probabilistic interpretation of the neural network activations and the message passing view of forward and backward passes. However, the main learning rule, described in Section 3.4, is not presented as clearly. Unfortunately, the reader is deferred to the Appendix for key details of the derived learning rule, which adds to the lack of clarity regarding the proposed learning rule.
3. In contrast to the Feedback Alignment algorithm, the block-local learning algorithm seems to overfit significantly to the CIFAR-10 dataset. Although this is an interesting finding in itself, the paper doesn't offer an explanation of this phenomenon or which components of the algorithm contribute to it.
4. The discussion section offers more conjectures and falls short of discussing the implications of the results. For instance, it doesn't revisit the issue of overfitting or dive deeper into strategies for choosing the blocks & their backward counterparts and how these choices could affect the performance of the algorithm. The discussion section could also highlight the potential biological plausibility of the proposed algorithm.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In Line 115 (top of page 4), there seems to be a $\log$ missing after the equals sign. Also, it seems that the right hand side of the expression would be a log-sum term. It is not clear how you write it as a sum of log terms on the right hand side of Eq. 2.
2. In Eq. 4, you describe $\beta_k(z_k)$ to be the backward messages and $\rho_k(z_k)$ to be the estimated posterior. However, in Eq. 7, you describe passing posterior messages $\rho_k(z_k)$ using backward network activations. Could you please clarify this apparent change of notation and/or interpretation of backward messages?
3. In Eq. 7, the partial derivative wrt $\theta$ is computed only for the block-local parameters, right? But in the true formulation, gradients should be computed for all parameters in the computational graph. Is that correct? Or does the formulation of the probabilistic graph enable inferring the true gradients by just computing the block-local parameter gradients? The variational local loss also probably plays a role here. Could you kindly clarify this part, as it seems central to the entire proposal?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for recognising the novelty of the proposed twin-network architecture and for pointing out several ways to improve the paper. We have made a number of changes to make the paper more accessible as suggested, which we detail below.
We would also like to thank all the reviewers for their constructive comments and questions. Please note that we have uploaded an updated version of our main text as well as the supplement. We have indicated all major changes using blue-colored text. We have also responded to each reviewer separately and in detail.
**The updates are summarised as follows:**
1. Updates to the notation and theoretical description of the model
2. Updated results across the board in Table 1. Large change in performance for CIFAR-10 due to a discovered bug that affected just those experiments and caused overfitting. The bug affected the construction of splits in the forward and backward networks to create blocks and local losses. It resulted in effectively only generating meaningful training signals in the last block, as suspected by reviewer 5.
3. Additional explanatory text and t-SNE results in the supplement.
**Questions:**
1) In Line 115 (top of page 4), there seems to be a missing ...
*Response:* Yes, correct. The log in line 115 is a typo. We fixed that, thank you! However, Eq.2 is correct. The identity in Eq.2 is commonly exploited in the EM literature. The trick is to take the derivative on the left side to get 1/p(y|x) and then absorb this term into the expectation to get p(z|x,y). We included a step-by-step explanation in Sec. S1.4 in the supplement for completeness.
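The step described here (differentiate, then absorb the $1/p(y|x)$ factor into the expectation) amounts to the identity $\partial_\theta \log p(y|x) = \mathbb{E}_{p(z|x,y)}[\partial_\theta \log p(z,y|x)]$. A small finite-difference check on a toy discrete model (our illustration, unrelated to the paper's code):

```python
import numpy as np

# Finite-difference check of the EM identity used here (toy model, unrelated
# to the paper's code):  d/dθ log p(y|x) = E_{p(z|x,y)}[ d/dθ log p(z, y|x) ].
rng = np.random.default_rng(1)
A = rng.random((4, 3))               # fixed "logit directions" over (z, y)

def log_joint(theta):                # log p(z, y | x; θ), normalized over (z, y)
    logits = theta * A
    return logits - np.log(np.exp(logits).sum())

def log_marginal(theta, y):          # log p(y | x; θ) = log Σ_z p(z, y | x; θ)
    return np.log(np.exp(log_joint(theta)[:, y]).sum())

y, theta, eps = 2, 0.7, 1e-6
post = np.exp(log_joint(theta)[:, y] - log_marginal(theta, y))  # p(z | x, y)

grad_joint = (log_joint(theta + eps)[:, y]
              - log_joint(theta - eps)[:, y]) / (2 * eps)       # per-z gradients
grad_marginal = (log_marginal(theta + eps, y)
                 - log_marginal(theta - eps, y)) / (2 * eps)
```

The expectation of the per-latent joint gradients under the posterior matches the marginal gradient, which is the identity the rebuttal appeals to.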
2) In Eq. 4, you describe to be the backward messages and ...
*Response:* The model uses the forward and backward messages a_k and b_k that correspond to the distributions (3) and (4). The variational posteriors \rho_k are formed by combining a_k and b_k locally. Eq.(7) is the form for a general posterior and Eq.(8) the construction for the specific choice of a_k and b_k. Many of these details were hidden in the supplement, and we re-structured the paper to make this clear.
3) In Eq. 7, the partial derivative wrt is computed only ...
*Response:* Correct, the proposed method is an approximation to the true gradient. We described this in greater detail in section S1.3.4 where we provide more details to posterior mixing. We also moved more details to the main text to make clear what assumptions were made to arrive at the result in Eq.7.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the clarification and updating the manuscript based on the reviews.
> The log in line 115 is a typo. We fixed that, thank you! However, Eq.2 is correct. The identity in Eq.2 is commonly exploited in the EM literature.
Thank you for this clarification. If I understand correctly from the supplementary material, you used $p(y|x) = \sum_{z_k} p(y|z_k)\, p(z_k|x)$ to get to Eq. 2. I think it would be clearer to write this definition in Line 115 (now Line 121-122).
> The model uses the forward and backward messages a_k and b_k that correspond to the distributions (3) and (4).
I think I understand this point a bit better now. It was very difficult for me to understand this from the original writing. The updated manuscript probably is a better way of presenting this. However, since the NeurIPS instructions were to upload a 1-page rebuttal only, I haven't been able to get through the updated paper and supplementary closely.
> Correct, the proposed method is an approximation to the true gradient.
Thanks for this clarification. I must admit this was a misunderstanding on my part, wherein I believed that you had somehow computed the true gradient (with some variance in estimation). But I now understand that this is a biased estimate of the gradient (along with the variance in gradient estimation).
Furthermore, based on the comments (+discussion) of reviewer xCFC, I believe that there are some issues in the scalability of this method, specifically around the Cifar-10 training numbers being low. Taken together, I believe that the proposed approach is interesting but probably requires more work/tweaks to be considered as a truly bio-plausible learning rule. Nevertheless, this work could be interesting to the wider NeurIPS community and could serve as a precursor to other bio-plausible learning rules. Therefore, I have readjusted my score. | Summary: This paper introduces a novel framework for block-local training of deep networks. It proposes a twin network design that propagates information backwards from targets to the input to provide auxiliary local losses. This design allows forward and backward propagation to occur in parallel preventing the problems of weight transport and locking across blocks. This design is applied to training ResNets and transformers on several tasks.
Strengths: - Overall this paper is clearly written and proposes a novel interesting idea. I think this is an exciting direction that others will be able to build upon.
- The proposed twin architecture and the treatment of block outputs as uncertainty is novel
- The empirical results are strong and show a clear advantage of the block-local learning method particularly on CIFAR-10
Weaknesses: - One selling point of this work is improved training efficiency. I would like to see an analysis, even a theoretical one, of what type of speedups can be achieved with the proposed block-local training method.
- Although a few different architectures were evaluated, the effect of block size on performance was not discussed. This seems like an important parameter to address, as it affects both how parallelized/distributed training can be and biological plausibility.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How would this method scale with block size?
What sort of practical performance speedups can be expected from using your block-local learning? (for example on a system with 1 gpu vs on a system with several gpus)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: I think the limits of the biological plausibility of the twin architecture, where the backward network requires the same number of parameters as the forward network, should be discussed more.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and recognising the novelty of the twin-network architecture, and the strength of the empirical results. We have added an analysis of the speedup achievable with our model and results related to block-size.
**Responses to specific questions:**
1) How would this method scale with block size?
*Response:* In the transformer example in Fig.3 we investigated the impact of the number of blocks and found that the model seems to scale quite well. Meanwhile, we have also run initial experiments for ResNet and found quite promising results (shown in supplementary section S.2.2.1 and the corresponding figure).
2) What sort of practical performance speedups can be expected from
using your block-local learning? ...
*Response:* Thank you for this suggestion. In our model inference can be started in both the forward and backward paths in parallel which will lead to a speed up both on single GPU (with sufficient memory) as well as on multiple GPUs. We included pseudo-code and a detailed description of the expected level parallelization compared to backprop in the updated version in Supplementary section S2. | Summary: In this work, the authors address the problem of weight transport and weight locking issue in backprop by introducing a new bio-plausible algorithm known as block-learning to train NNs. The model uses different forward and backward weights, creating a twin network-like scheme to learn efficient signals via local losses. The proposed learning algorithm is tested on convolution and transformer-based architectures to show that the proposed framework can scale to complex architectures.
Strengths: 1. Well-written paper
2. Experiments on convolution and transformers show that the proposed work can scale to complex architectures.
Weaknesses: 1. Novelty is limited, given the current framework shares several similarities with other approaches, such as Local Representation Alignment.
2. Related work is missing several key citations.
3. Experimental setup is restricted, as several SotA bio-plausible approaches, including PC-based approaches, are not compared
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Other examples of the block-learning framework are Local Representation Alignment (LRA-E [4], Rec-LRA [5]), Difference Target Propagation (DTP [2], DTP-sigma [4], DTP with backward targets [1], DTP with fixed weights [3]), weight mirroring [11] and Neural Generative Coding (Conv-NGC [6], NGC [8], Act-NGC [7]). The authors should compare and contrast against these existing lines of work, given all these frameworks have shown scaling results on various domains and architectures.
Second, several works on predictive coding are not cited [9,10], given that they have been shown to approximate BP and to achieve similar performance on various benchmarks. Can the authors compare against PC-based approaches? FA is known to struggle on complex architectures.
Backprop is known to struggle whenever the bias is set to a high value; it is not clear whether the experiments with FA and BP were performed with this setting. If yes, then how does the model perform when you set the biases to some low number, such as 0.01? Can the authors report these numbers?
What is the benefit of the current approach? Do you observe faster convergence? Better features (for instance, one can use t-SNE plots to visualize separations between classes, or visualize features of intermediate conv layers)? It would be beneficial if the authors could report the advantages of the proposed approach.
As reported by Lillicrap et al. and [4], can the authors show the update angle compared to BP for the proposed framework? Do the model updates lie within 90 degrees, or even 45 degrees, of BP? Such an analysis would further strengthen the proposed work.
How robust is the model? Can authors report model performance across various settings/hyperparameter settings?
1. Ernoult, M.M., Normandin, F., Moudgil, A., Spinney, S., Belilovsky, E., Rish, I., Richards, B. and Bengio, Y., 2022, June. Towards scaling difference target propagation by learning backprop targets. In International Conference on Machine Learning (pp. 5968-5987). PMLR.
2. Lee, D.H., Zhang, S., Fischer, A. and Bengio, Y., 2015. Difference target propagation. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2015, Porto, Portugal, September 7-11, 2015, Proceedings, Part I 15 (pp. 498-515). Springer International Publishing.
3. Shibuya, T., Inoue, N., Kawakami, R. and Sato, I., 2023, June. Fixed-Weight Difference Target Propagation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 8, pp. 9811-9819).
4. Ororbia, A.G. and Mali, A., 2019, July. Biologically motivated algorithms for propagating local target representations. In Proceedings of the aaai conference on artificial intelligence (Vol. 33, No. 01, pp. 4651-4658).
5. https://ojs.aaai.org/index.php/AAAI/article/view/26118
6. Ororbia, A. and Mali, A., 2022. Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images. arXiv preprint arXiv:2211.12047.
7. Ororbia, A.G. and Mali, A., 2022, June. Backprop-free reinforcement learning with active neural generative coding. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 1, pp. 29-37).
8. Ororbia, A. and Kifer, D., 2022. The neural coding framework for learning generative models. Nature communications, 13(1), p.2064.
9. Millidge, B., Salvatori, T., Song, Y., Bogacz, R. and Lukasiewicz, T., 2022. Predictive coding: towards a future of deep learning beyond backpropagation?. arXiv preprint arXiv:2202.09467.
10. Salvatori, T., Pinchetti, L., Millidge, B., Song, Y., Bao, T., Bogacz, R. and Lukasiewicz, T., 2022. Learning on arbitrary graph topologies via predictive coding. Advances in neural information processing systems, 35, pp.38232-38244.
11. Akrout, M., Wilson, C., Humphreys, P., Lillicrap, T. and Tweed, D.B., 2019. Deep learning without weight transport. Advances in neural information processing systems, 32.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. Comparison is needed with relevant methods.
2. Analysis and ablation studies are missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and pointers to previous literature. We disagree that our algorithm is limited in novelty; in fact, there are several novel and key differences between our work and the works referred to by the reviewer, as described below. Since the primary goal of our approach is more scalable training, the PC approaches are somewhat orthogonal. We have nevertheless included references to them and a result comparison.
**Responses to questions:**
1) Other examples of Block-learning framework is Local representation alignment ...
*Response:* We included these references and also the suggested t-SNE analysis in the updated version (see Figure S1 in the supplement). Note that Ororbia et al. 2023 was published after the NeurIPS 2023 submission deadline, and therefore we could not have known about this result. The PC literature is a bit orthogonal but gives a very nice additional view on the broader topic of block-local learning. We believe that the probabilistic formulation of the framework, which comes with explicit per-block uncertainty estimates, is conceptually novel and beneficial for distributed training of the model compared to previous methods. We further discussed this and provided additional analysis. We also added additional baselines as suggested. See the updated version of related work section 2, the updated Table 1 with additional benchmarks, and the updated experimental results.
2) As reported by Lillicrap and [4], can authors show update angle ...
*Response:* Thank you for this suggestion. We made a first test based on the ResNet-18 experiments and found that in fact the angles were consistently below 90 degrees but rarely below 45. We will include a detailed analysis in the camera ready version of the paper.
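For concreteness, the angle we refer to is the standard one between the flattened BP gradient and the block-local update; a minimal sketch of the metric (an illustrative helper, not our actual analysis code):

```python
import math

def update_angle_deg(update_a, update_b):
    """Angle in degrees between two flattened parameter-update vectors,
    e.g. the backprop gradient and the block-local update."""
    dot = sum(x * y for x, y in zip(update_a, update_b))
    norm_a = math.sqrt(sum(x * x for x in update_a))
    norm_b = math.sqrt(sum(y * y for y in update_b))
    cos = max(-1.0, min(1.0, dot / (norm_a * norm_b)))  # clamp rounding error
    return math.degrees(math.acos(cos))
```

An angle below 90 degrees means the update still has a positive component along the BP direction.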
3) How robust is the model? Can ...
*Response:* We added first additional experiments with a varying number of blocks (and thus different block sizes). The results look quite promising: the network seems to scale well with the number of blocks. We included this analysis in the updated version of the paper and will add additional ablation studies for other tasks in the camera-ready version. | Rebuttal 1:
Rebuttal: **General response to the reviewers:**
We would like to thank all the reviewers for their constructive comments and questions. Please note that we have uploaded an updated version of our main text as well as supplement. We have indicated all major changes using blue color text. We have also responded to each reviewer separately and in detail.
The updates are summarised as follows:
1. Updates to the notation and theoretical description of the model
2. Updated results across the board in Table 1. There is a large change in performance for CIFAR-10 due to a discovered bug that affected just those experiments and caused overfitting. The bug affected the construction of the splits in the forward and backward networks used to create blocks and local losses. It resulted in effectively only generating meaningful training signals in the last block, as suspected by reviewer 5.
3. Additional explanatory text and t-SNE results in the supplement.
Pdf: /pdf/9ed5d993bd304cf454d0d50494f4c87f058d5848.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose a novel approach to the estimation of deep neural network parameters using block-localized backpropagation in conjunction with belief propagation. This approach is much more parallelizable and thus should help with distributed training, enabling horizontal scaling across devices.
The proposed approach works using a twin *backward* network, and by incorporating the belief messages into the block-local losses which are optimized using gradient descent.
Strengths: - The proposed approach is an interesting combination between belief propagation and back-propagation. I particularly appreciated the view of a neural network as a Markov chain.
- The proposed approach is significant, especially considering the current state of deep learning research. Large neural networks are steadily becoming the norm, and the development of specific learning algorithms for this kind of models is a valid and important research direction.
Weaknesses: - The paper is sometimes not clear. I personally had difficulties understanding the following sections;
- 3.1, especially after equation (2)
- 3.4, which seems to require the supplementary material to be correctly comprehended.
- I think with the current state of the paper it might be difficult to reproduce the reported results. The algorithm itself is not completely clear, and I think the submission would improve from a clear explanation (maybe provided in the supplementary material).
- A more in-depth analysis of the BLL algorithm convergence would improve the paper's strength.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How big should the twin network be? As much as the forward network, or have you found that it can be smaller?
- This is a suggestion, but I think the submission will benefit from a clear description of the algorithm (possibly written in pseudo-code). This can be appended in the supplementary material and then referenced in the main paper.
- Can a Gaussian distribution with non-constant variance be used to model the layer probabilities, or would the algorithm not work with this assumption?
- I believe there is an error in the equations S9, S10, S11, S13. I imagine index *k* should not be one of the variables of the summation.
- I believe there is an error in line [115]. I imagine it should be *p(y|x)* instead of *log p(y|x)*.
- This is a simple suggestion, but in my experience (in the ML community) it seems to be more common to use 𝔼[…] and ∇ to indicate the expected value and gradient respectively. For a more standardized notation, I personally would favor that nomenclature. If you think that would be detrimental to the paper presentation in any way, you are free to ignore this suggestion.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations:
- The algorithm requires to train an additional twin network, thus the number of total parameters optimized is greater than with standard gradient descent.
- The message-passing operation could affect convergence time. Fig. 3 seems to suggest otherwise, but this may not always hold true for other datasets or hyper-parameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors would like to thank the reviewer for the valuable comments and recognising the novelty and significance of the approach. We have improved clarity of the paper and added more details about the algorithm and hyper-parameters.
**Further responses inline:**
1) How big should the twin network be? As much as the forward network, or have you found that it can be smaller?
*Response:* The transformer model in the submitted version already had a simple backward structure that was not as large as the forward network, which still worked well. We will include further experiments with the resnet architectures as well to study this issue further.
2) This is a suggestion, but I think the submission will benefit from a ...
*Response:* We added pseudo-code to Supplement section S2.
3) Can a Gaussian distribution with non-constant variance be used ...
*Response:* Yes, that is also possible. Multi-parameter exponential family distributions can easily be accommodated in the theoretical framework by splitting the network outputs into multiple parts when computing the local loss; each part then represents, e.g., the mean or the variance, respectively.
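As an illustrative sketch (hypothetical code, not taken from the paper): splitting a block's output into a mean half and a log-variance half (log space keeps the variance positive) gives a Gaussian local loss with non-constant variance:

```python
import math

def gaussian_local_loss(block_output, target):
    """Gaussian negative log-likelihood (up to an additive constant) where
    the first half of the block output predicts the mean and the second
    half predicts the log-variance of each target dimension."""
    half = len(block_output) // 2
    mean, log_var = block_output[:half], block_output[half:]
    return sum(0.5 * (lv + (t - m) ** 2 / math.exp(lv))
               for m, lv, t in zip(mean, log_var, target))
```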
4) I believe there is an error in the equations S9, S10, S11, S13. I imagine index *k* should not be one of the variables of the summation.
*Response:* Yes, the index is used twice here. Thank you for pointing this out. We fixed the notation in the updated version.
5) I believe there is an error in line [115]. I imagine it should be *p(y|x)* instead of *log p(y|x)*.
*Response:* Correct, we fixed that. Thank you!
6) This is a simple suggestion, but in my experience (in the ML ...
*Response:* We updated the notation to the more standard version in the updated version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response and the changes introduced in the rebuttal. I appreciate the effort the authors put into improving the manuscript (I find that the pseudo-code in Table S4 is particularly useful in my opinion). I however feel that the paper might still need work regarding the overall clarity of the proposed approach, so I have decided to maintain my current rating. | null | null | null | null | null | null |
SatLM: Satisfiability-Aided Language Models Using Declarative Prompting | Accept (poster) | Summary: This paper aims at improving reasoning with large language models (LLMs) by prompting them in a way that they parse the problem into a language that is understandable by a SAT solver, and then employ an off-the-shelf SAT solver to solve the problem. Empirical results on multiple datasets show the benefit of the proposed approach.
Strengths: * The idea of obviating the need for planning by employing a SAT solver is neat.
* Empirical results are strong.
* The approach works on various tasks/datasets.
* The analysis in Table 5 is quite interesting.
Weaknesses: * I believe the benchmarks used in this work may slightly overestimate the performance of the proposed model and the approach may not work as well on more realistic use cases (see the limitations section for more detail)
* It is not clear to me how some bodies of work can fit within the parse-plan-execute framework described on Page 4. Could you comment on how the approaches such as [1, 2] that use LLMs as a tool within a reasoning algorithm can be described in this framework? How about decomposition-based approaches such as [3, 4]?
* [minor] The descriptions in Section 3 and, to some extent, Section 2 could be significantly shortened and some experimental details could be moved to the supplementary (e.g., decoding strategy, temperature, etc.). This makes room for a more elaborate description of different categories of related work (as opposed to putting everything into one category) and for moving some example failures from the appendix to the main text.
[1] LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
[2] Selection-inference: Exploiting large language models for interpretable logical reasoning
[3] Decomposed prompting: A modular approach for solving complex tasks
[4] Least-to-most prompting enables complex reasoning in large language models
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Could you comment on the applicability of the approach on more realistic use cases (see limitations section for detail)?
* It has been shown in [1] that in the case of logical reasoning, dealing with “unknown” examples is difficult for CoT. How does SatLM deal with unknowns (e.g., for the ProofWriter dataset)? Could the “selective prediction” analysis be extended to deal with “unknown” labels? e.g., the UNSAT class.
* Any intuition why SatLM works sub-par to ProgLM on the GSM dataset, but significantly outperforms it on the GSM-SYS subset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Many of the current LLM reasoning datasets (including some of the benchmarks in this work) have been created by first generating a puzzle in a formal language and then turning it into text either using templates or using human annotators. Therefore, the performance of approaches that translate back to a formal language (including the current work) may be overestimated on these benchmarks. Could the authors comment on the applicability of their approach to more realistic applications in the following two cases as examples:
1- Consider a logical reasoning puzzle described below:
Fiona assassinated the mayor. If somebody killed the mayor, they must go to prison. Should Fiona go to prison?
My guess is that if you translate to a formal language, you will get something like this:
assassinated(Fiona, mayor)
killed(X, mayor) -> go(X, prison)
go(Fiona, prison)?
which makes the solver not be able to produce the correct answer.
2- Consider the mathematical puzzle below:
Fiona has 10 apples, 15 dragon fruits, and 11 coconuts. How many tropical fruits does Fiona have?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback.
**Q1: It is not clear how the approaches such as [1, 2] that use LLMs as a tool within a reasoning algorithm and how the decomposition-based approaches such as [3, 4] can be fit into the parse-plan-execute framework.**
A: At a high level, all the approaches [1,2,3,4] generally follow a CoT paradigm and rely on LLMs to perform parsing, planning, and execution. ([3] involves calling external APIs such as search in some settings). But unlike the standard streamlined CoT prompting that performs the parsing, planning, and executing with a single LLM call, these approaches are different in that 1) the plans are built iteratively by LLMs through multiple LLM calls, and 2) the execution of individual reasoning steps are performed using designated prompts.
In particular, [1,2] iteratively builds the plans by prompting LLMs to select the relevant facts, and iteratively executes individual steps by prompting LLMs to make the inference (e.g., deduce new facts). [3,4] alternatively calls a decomposer prompt and specialized prompts for solving sub-problems. Overall, these approaches all use LLMs to generate the plans and also use LLMs to execute them.
**Q2: Could you comment on the applicability of the approach on more realistic use cases?**
A: Thanks for the question and detailed examples! Powerful LLMs can indeed generate consistent specifications to handle NL statements that involve non-trivial NL-to-formula parsing (required in the first case) and some commonsense reasoning (in the second case).
We’ll now dive into each of the two use cases.
> Fiona assassinated the mayor. If somebody kills the mayor, they must go to prison. Should Fiona go to prison?
Yes, this kind of reasoning is in scope for our method. When parsing the NL into specifications, the generation of one formula is conditioned on previously generated formulas as opposed to being generated independently. We observe empirically that LLMs can recognize such flexibility and generate consistent specifications.
Concretely, when giving our prompt used for ProofWriter, SatLM returns:
```
kill(fiona, mayor)
ForAll([x], Implies(kill(x, mayor), prison(x)))
solve(prison(fiona))
```
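In the actual pipeline this specification is handed to the Z3 solver; purely for illustration, the entailment in this tiny example can be reproduced in plain Python with one round of forward chaining (a stand-in, not the solver call itself):

```python
# Facts and the ForAll rule from the specification above, checked by
# simple forward chaining instead of an actual SMT solver.
facts = {("kill", "fiona", "mayor")}

def apply_rule(facts):
    """Instantiate ForAll x: kill(x, mayor) -> prison(x) on the known facts."""
    derived = {("prison", f[1]) for f in facts
               if len(f) == 3 and f[0] == "kill" and f[2] == "mayor"}
    return facts | derived

closure = apply_rule(facts)
print(("prison", "fiona") in closure)  # True: Fiona must go to prison
```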
> Fiona has 10 apples, 15 dragon fruits, and 11 coconuts. How many tropical fruits does Fiona have?
Yes, we can solve this. When given our arithmetic reasoning prompt, SatLM returns:
```
apples = 10
dragon_fruits = 15
coconuts = 11
tropical_fruits = dragon_fruits + coconuts
result = tropical_fruits
solve(result)
```
Powerful LLMs will be able to perform commonsense reasoning implicitly and output correct formulas. The LLM successfully ignores apples when generating the specification in this case.
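Since this particular specification has no free variables, the solver's task degenerates to evaluating a straight-line program; re-running the generated assignments in plain Python (a sanity check, not the actual Z3 call) confirms the answer:

```python
# The specification above, executed directly as Python assignments.
apples = 10
dragon_fruits = 15
coconuts = 11
tropical_fruits = dragon_fruits + coconuts  # the apples are correctly ignored
result = tropical_fruits
print(result)  # 26
```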
Lastly, we give some additional output examples from GSM that reflect how LLMs can perform non-trivial parsing that requires commonsense reasoning.
- For the NL statement “a rectangle with an area of 160”, SatLM generates the constraint `length * height = 160`.
- For the NL query “How much does she spend on food in the month of May?”, the SatLM produces code that assigns 31 to variable days_in_may (`days_in_may = 31`).
- For the NL query “Farmer Brown has 60 animals on his farm, all either chickens or cows. He has twice as many chickens as cows. How many legs do the animals have, all together?” SatLM generates the constraints `legs_chickens = animals_chickens * 2. legs_cows = animals_cows * 4`.
**Q3: It has been shown in [1] that in the case of logical reasoning, dealing with “unknown” examples is difficult for CoT. How does SatLM deal with unknowns (e.g., for the ProofWriter dataset)? Could the “selective prediction” analysis be extended to deal with “unknown” labels?**
A: While we follow [2] and primarily focus on the “closed-world” setting for the ProofWriter dataset, SatLM can naturally handle the “unknowns”.
Specifically, in the “open-world” setting, the value of a statement is unknown if we cannot prove the statement has to be true or has to be false (in other words, there exists an assignment that makes the statement evaluate as true as well as an assignment that makes the statement evaluate as false). This essentially indicates an “AMBIG” solution. Therefore, our SatLM can naturally deal with the “unknowns”, if we let SatLM output “unknown” when spotting an AMBIG solution instead of signaling an error.
We note that the data points in the “closed-world” setting used in our work and [2] have ensured that there are no ambiguous solutions. As a result, the SatLM always makes a Yes/No prediction if successfully executed.
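The check described above ("unknown" iff both a satisfying and a falsifying assignment exist) can be brute-forced for a tiny propositional theory; a plain-Python stand-in for the two solver calls (illustrative only):

```python
from itertools import product

def classify(statement, constraints, variables):
    """Label a statement True/False/Unknown by brute-force satisfiability:
    unknown iff both the statement and its negation hold in some model
    of the theory's constraints."""
    models = [dict(zip(variables, bits))
              for bits in product([False, True], repeat=len(variables))
              if all(c(dict(zip(variables, bits))) for c in constraints)]
    pos = any(statement(m) for m in models)
    neg = any(not statement(m) for m in models)
    if pos and neg:
        return "Unknown"
    return "True" if pos else "False"

# Theory: a -> b, with fact a. Query b is provable; query c is ambiguous.
constraints = [lambda m: (not m["a"]) or m["b"], lambda m: m["a"]]
print(classify(lambda m: m["b"], constraints, ["a", "b", "c"]))  # True
print(classify(lambda m: m["c"], constraints, ["a", "b", "c"]))  # Unknown
```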
**Q4: Any intuition why SatLM works sub-par to ProgLM on the GSM dataset, but significantly outperforms it on the GSM-SYS subset?**
A: Intuitively, our approach is a particularly good fit for settings where the problem description is declarative, i.e., the NL description specifies what should be solved rather than how to solve it. Our approach works better on GSM-SYS because its task descriptions have this property, which is difficult ground for ProgLM. In contrast, many problems in the rest of the GSM dataset contain more step-by-step instructions.
We would also like to note that, while SatLM works less well than ProgLM on the GSM dataset with greedy decoding, SatLM outperforms ProgLM with self-consistency decoding. By drawing multiple samples, SatLM can increase its coverage and achieve higher accuracy than ProgLM, since its predictions are more accurate (see the detailed discussion in lines 275-278 of the paper).
---
[1] LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
[2] Selection-inference: Exploiting large language models for interpretable logical reasoning
[3] Decomposed prompting: A modular approach for solving complex tasks
[4] Least-to-most prompting enables complex reasoning in large language models
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thanks for your responses and for providing the model outputs for the examples I provided. The fact that both of them were effectively solved using your fewshots for other tasks is quite exciting. Accordingly, I have increased my score.
One final recommendation: To strengthen the argument regarding the performance of the proposed model on datasets with incomplete information, I encourage the authors to consider providing results on one such dataset. One possibility is the no-conflict subset of the BoardgameQA dataset (the dataset was seemingly made public only recently, so my score will not be affected if the authors don't incorporate it).
---
Reply to Comment 1.1.1:
Comment: Thanks for your response and for bringing up the BoardgameQA dataset, which is relevant and interesting. We will look into it and consider adding this evaluation. | Summary: This paper proposes SATLM, which aims to address the problem of planning errors when using CoT and ProgramLM. Specifically, SATLM uses an LLM to translate natural language problems into a formal language that is accepted by a solver, and lets the solver do the planning as well as the calculation. The results on several reasoning datasets show the effectiveness of this method.
Strengths: 1. The paper proposes to translate a problem into a formal form and then directly call a SAT solver, which is a good idea. The experiment results in Table 1 show the effectiveness of the method.
2. The structure of the paper is clear and the paper is easy to follow. Figure 1 is informative enough to describe the proposed method.
3. I like the ablation studies in this paper, which makes it clear which parts are important to the performance gain.
Weaknesses: 1. My main concern is that the method may only perform well on simpler tasks such as GSM (I think it's GSM8K?) or ProofWriter. The questions in ProofWriter are nearly in the format of formal reasoning and only require line-by-line translation, which in turn makes the accuracy as high as 99.7%. On more complicated tasks such as MathQA and MATH, I think (1) the problems might not be expressible in first-order logic, and (2) even if they can be, it is not easy to do the translation.
2. There are other works that combine symbolic solvers with LLMs in similar or different ways. The authors have mentioned some in the related work part. However, none of them is empirically compared in this paper, and only listing the differences from them is not enough.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses. I don't have any further questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper has discussion of limitations in section 6, which I think is enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback.
**Q1: Can more complicated tasks such as those in MathQA and MATH be expressed in first-order logic? If expressible, how intuitive would it be to translate more complicated tasks to first-order logic formulas?**
A: Our work can handle most of the problems in MathQA since the operations in these problems are mainly equalities and arithmetic, which are supported in first-order logic theories. We believe most of the problems are intuitive to translate to FOL as well.
We show some example outputs of SatLM (obtained using our arithmetic reasoning prompt) on data instances from MathQA below.
> What quantity of water should be added to reduce 9 liters of 50% acidic liquid to 30% acidic liquid?
```
liters_initial = 9
percent_initial = 50
percent_desired = 30
water_added = Variable()
liters_after = liters_initial + water_added
percent_after = percent_initial * liters_initial / liters_after
percent_after = percent_desired
```
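As a plain-Python sanity check of the specification above (outside the actual solver pipeline), the single constraint can be rearranged by hand:

```python
liters_initial, percent_initial, percent_desired = 9, 50, 30
# percent_initial * liters_initial / (liters_initial + w) = percent_desired
# => liters_initial + w = percent_initial * liters_initial / percent_desired
water_added = percent_initial * liters_initial / percent_desired - liters_initial
print(water_added)  # 6.0 liters of water
```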
> From a pack of 52 cards, 1 card is drawn at random. What is the probability that a red king is drawn
```
cards_total = 52
red_kings = 2
probability = red_kings / cards_total
```
In contrast, for the MATH dataset, we agree with the reviewer that it might be difficult to encode some of the problems as a FOL formula, in particular those problems that involve reasoning on graphs and geometry, for instance:
> [Figure] Beginning at point A in the diagram, Dora selects one of the four possible directions with equal probability. Each time she comes to an intersection, she again randomly selects one of the possible directions. What is the probability that, in her first four steps, she will walk completely around the gray square? Express your answer as a common fraction.
Many of the problems in MATH also require symbolic computations, such as:
> The equation x^2 + 2x = i has two complex solutions. Determine the product of their real parts.
While symbolic computation is currently not supported in SMT solvers, we could potentially handle these problems in the same framework using a richer logic and a more powerful solver like Mathematica. Furthermore, we would like to point out that the main purpose of the MATH dataset is to evaluate the ability of machine learning models to do complicated reasoning, while we show how to offload complicated reasoning to symbolic solvers. Our framework shows the capability of LLMs in the presence of strong solvers; we believe that solvers will always be the most reliable way to handle problems like this at the frontiers of LLMs' capabilities.
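For illustration only (a hand-written numerical check, not SatLM output): the specific equation above can already be solved numerically with Python's standard-library cmath, which hints at what a richer solver would return:

```python
import cmath

# x^2 + 2x = i  <=>  x^2 + 2x - i = 0; quadratic formula over the complexes.
a, b, c = 1, 2, -1j
disc = cmath.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
product_of_real_parts = r1.real * r2.real  # equals (1 - sqrt(2)) / 2
print(round(product_of_real_parts, 4))  # -0.2071
```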
**Q2: There are other works that combine symbolic solver with LLMs in a similar or different way. The authors have mentioned some in the related work part. However, none of them is empirically compared in this paper, and only listing the difference from them is not enough.**
A: Actually, we do **directly compare** against a line of program-aided LMs [1,2,3] in the evaluation, all of which are techniques that equip LLMs with symbolic executors (program interpreters). In particular, we use the implementation from [1] for the ProgLM results on GSM and COLOR, use the implementation from [2] for the results on CLUTRR, and use our own implementation for other tasks where existing implementations are not available (see Appendix B for more details of implementation). These program-aided systems set the state-of-the-art performance on these tasks at the time of our experiments. We would be happy to consider evaluating more techniques as suggested by the reviewer.
Furthermore, in **Appendix D**, we also include an apples-to-apples comparison between our work and concurrent work [4] that focuses on using symbolic solvers for solving math problems. Our approach achieves better performance than theirs on arithmetic reasoning (see Table 10 in Appendix D) and is more widely applicable across a diverse range of tasks beyond arithmetic reasoning.
[1] PAL: Program-Aided Language Models. Gao et al., 2023.
[2] Faithful Chain-of-thought Reasoning. Lyu et al., 2023.
[3] Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Chen et al., 2022.
[4] Solving Math Word Problems by Combining Language Models with Symbolic Solvers. He-Yueya et al., 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I still find your description of the baseline methods a little bit confusing. For example, you compare with different baselines for different datasets, though I understand that there might not be ready-made prompts. Also, I would recommend moving some key points from Appendix B to the main text to avoid confusion.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply and suggestions.
> For examples, you are comparing with different baselines for different datasets, though I understand that there might not be ready-made prompts.
Yes. The baseline, ProgLM, covers implementations of symbolic-executor-aided approaches from several papers [1,2], and each of them only includes ready-made prompts for a subset of the datasets studied in our paper. We also note that, because they all follow exactly the same paradigm of producing a Python program and employing a Python interpreter to execute it, we unify them under a single framework called ProgLM.
We will make this clearer in any future version. We will also move the comparison with concurrent symbolic solver-aided work to the main text if page limits allow. | Summary: This paper presents a framework to augment LLMs with symbolic solvers to compensate known flaws in LLMs' reasoning (e.g., planning and arithmetic). The main novelty of this paper is that LLMs is prompted to generate declarative specifications (rather programs) so that off-the-shelf automatic theorem provers like Z3 can be deployed to solve the symbolic problems.
Strengths: Augmenting LLMs with symbolic solvers is a super valuable direction to explore. Upgrading the symbolic solver from a programming language interpreter to an automatic theorem prover surely will benefit LLMs further. This paper is well motivated, nicely structured and has compelling experiments.
Weaknesses: The main weakness to me is that SatLM does not differentiate itself enough from the previous [Faithful CoT paper](https://github.com/veronica320/Faithful-COT) as a framework. Besides parsing sub-problems into a programming language like Python, Faithful CoT can also put those problems into Planning Domain Definition Language (PDDL), which is, in a sense, a specification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - line 101, [Maieutic Prompting](https://arxiv.org/abs/2205.11822) appears to be relevant here.
- line 118, I know that authors decided to use Python rather than a specialized DSL here, but it would be great if the authors can elaborate on the syntax of the specification.
- Table 1, it would be interesting to see results on more advanced models like gpt-3.5-turbo and gpt-4, as the performance of [Faithful CoT](https://github.com/veronica320/Faithful-COT) increased quite a bit on these models.
Minor:
- Figure 1, 'an NL input'
- line 318, did you miss a citation here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback.
**Q1: How does SatLM differentiate itself enough from the previous Faithful CoT paper as a framework?**
A: SatLM uses SAT specifications that can encode a wide range of reasoning problems spanning arithmetic reasoning, logical reasoning, and symbolic reasoning, which are further solved with a unified solver. In contrast, the PDDL specification and planner used in Faithful-CoT or LLM+P are specifically designed for planning problems.
Furthermore, we'd like to point out that for the math word problems (GSM8K, specifically) and logical inference problems (CLUTRR, specifically) used in both our work and Faithful-CoT, Faithful-CoT uses an imperative specification (a python program), which is fundamentally different from ours.
**Q2: How does SatLM relate to Maieutic Prompting?**
A: Maieutic prompting differs in two fundamental ways. First, it elicits statements from the language model, including both correct and incorrect statements, and tries to identify a logically-consistent subset. Our approach only generates correct statements. Second, maieutic prompting uses soft logical relations between statements derived from an entailment model, making the execution very different from using a Z3-based solver.
**Q3: Please elaborate on the syntax of the specification.**
A: We include the answer to Q5 to reviewer Gm6N in the following.
> The specification used in our work largely follows and simplifies the syntax for specifying constraints used in [z3 python](https://z3prover.github.io/api/html/namespacez3py.html).
>
> We list concrete example statements as follows. See Figure 1, Figure 2 and Appendix G for more examples.
>
> ```
> # declare a variable
> x = Variable()
> # declare enum set
> People = [Alice, Bob]
> Cities = [Austin, Boston]
> Food = [Apple, Banana]
> # declare function
> visit = Function(People, Cities)
> eats = Function(People, Food)
>
> # logic
> visit(Alice) != visit(Bob)
> # quantifier
> ForAll(x: People, Implies(visit(x) == Austin, eats(x) == Banana))
> ```
>
> We note that these formulas are close to the actual python-code formulas used by z3 python but are slightly modified to be more amenable to prompting, so we need a postprocessing step to form the actual Z3 input. We implemented a simple parser that transforms these formulas into actual specifications used by z3 python via string transformation (using regexes). For example, we transform `ForAll(x: People, Implies(visit(x) == Austin, eats(x) == Banana))` into `x = Variable(People); ForAll([x], Implies(visit(x) == Austin, eats(x) == Banana))`.
**Q4: What happens if you run the evaluations on more advanced models such as GPT-3.5-turbo or GPT-4?**
A: We include the results of SatLM and baselines (CoT, ProgLM) on gpt-3.5-turbo-0613 below. The results suggest the same trend as the results on other LLMs. SatLM outperforms both CoT and ProgLM across all datasets except on GSM.
| | GSMSys | GSM | Algebra | LSAT | CLUTRR | Proof |
|--------|--------|------|---------|------|--------|-------|
| CoT | 44.8 | 74.4 | 29.3 | 23.9 | 41.2 | 82.3 |
| ProgLM | 51.2 | 77.9 | 62.6 | - | 45.9 | 76.4 |
| SatLM | 63.4 | 76.4 | 77.9 | 30.0 | 50.5 | 98.8 |
---
Rebuttal Comment 1.1:
Title: Thanks a lot for the clarification
Comment: I appreciate the detailed response from the authors.
> A: SatLM uses SAT specifications that can encode a wide range of reasoning problems spanning arithmetic reasoning, logical reasoning, and symbolic reasoning, which are further solved with a unified solver. In contrast, the PDDL specification and planner used in Faithful-CoT or LLM+P are specifically designed for planning problems.
I admit that there are subtle differences between planning and SMT solving, but I don't think they are drastically different in the benchmarking scenarios in this paper. As a specification language, both PDDL and SMT-LIB (I assume you are calling Z3 through this interface) are heavily underutilized here.
I understand that the authors want to emphasise that SatLM uses a unified solver (for the symbolic tasks), but I have yet to see a clear advantage of using just one solver against various symbolic solvers including programming language interpreters (e.g., Python) and PDDL solvers (which, again, can be an SMT solver like Z3).
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. We’d like to note two things:
1. While this work uses few-shot prompting, we believe that future work can exploit various types of fine-tuning (e.g., reinforcement learning with outcome-based feedback) to improve performance at the conversion to our declarative specifications. In this case, having a unified format across problems enables a single fine-tuned model to more effectively multi-task.
2. We do want to note that in addition to unifying the solver, our work particularly emphasizes the use of declarative specifications, whereas Faithful CoT uses a mix of imperative and declarative specifications across problems. In particular, on arithmetic and logical reasoning tasks, where our work uses declarative specifications as opposed to the imperative specifications used in Faithful CoT, our choice of specifications achieves significant performance gains compared to Faithful CoT across LLMs (see Table 1, Table 6, Table 10, and the initial author response). | Summary: This work looks at using an LLM to generate a declarative task specification from a natural language specification for reasoning tasks and leverage an automated SAT solver to solve the problem. They showcase that SATLM performs better than using Chain of thought or ProgramLM (which converts natural language into python programs) in various reasoning tasks.
Strengths: The paper is well-written, the experiments are thorough.
Weaknesses: The main contribution of this work seems to be about using LLMs as a semantic parser (for mostly short-context problems) which essentially maps a problem from NL specification to a formal specification (SAT problem specification here). This is a well-known problem in NLP for which LLMs (particularly LLMs which are trained on text and code) are known to perform comparably to the supervised neural semantic parsers [1,2]. The results of this work essentially indicate whether LLMs can perform NL-to-SAT semantic parsing given that once the parsing is done correctly, the SAT solver solves it. This makes the comparison between SATLM and ProgLM/CoT a bit unfair, as the core ability that has to be compared is the parsing ability and not the planning/execution ability, and these are intertwined in the latter. Comparisons between SATLM and other semantic parsers (other LLMs as done in Section 4.5 and other neural task-specific semantic parsers) for NL-to-SAT semantic parsing have to be highlighted in my opinion.
[1] Shin, R., & Van Durme, B. (2022, July). Few-Shot Semantic Parsing with Language Models Trained on Code. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5417-5425).
[2] Zhuo, T. Y., Li, Z., Huang, Y., Li, Y. F., Wang, W., Haffari, G., & Shiri, F. (2023). On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex. arXiv preprint arXiv:2301.12868.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is the LLM producing anything that is semantically new and not present in the natural language specification?
2. Shouldn’t COT also be tested with text models as opposed to code models given that code models are optimized for more formal representations?
3. Just for clarification, in Table 5, does “Answered” for ProgLM mean that the program was executable without runtime errors (but it need not return the right answer)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I have provided the limitations of the work in the weaknesses section. The authors do address important limitations such as problems being less suitable to declarative prompting, limitations of SAT solvers when dealing with complex formulas and potential future work on re-prompting the LLM with the SAT solver feedback.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback.
> The main contribution of this work seems to be about using LLMs as a semantic parser.
We disagree a bit with this interpretation. Our main point is that, for these types of reasoning problems, it makes more sense to decompose the problem into two parts: (1) Parsing the query into a declarative specification; and (2) Solving it using a theorem prover. This is in contrast to the ProgLM approach that generates a program to solve the end-to-end task. As such, we view the contribution of this paper as a reasoning framework, powered by LLMs and SAT solvers, as opposed to an LLM-based semantic parser.
We note that both the SatLM framework and ProgLM framework follow a semantic parsing paradigm, where LLMs parse an NL problem into a formal specification that is executed by a symbolic executor. But the use of declarative specifications allows SatLM to more accurately parse the descriptions than ProgLMs. Furthermore, we note that most prior work in semantic parsing handles single-sentence queries, not long and complex problems like those in our datasets (e.g., each SAT specification for GSM8k involves more than 7 constraints on average).
> The comparison between SatLM and ProgLM/CoT is a bit unfair as the core ability that has to be compared is the parsing ability and not the planning/execution ability
The comparison between SatLM and ProgLM essentially compares LLMs' parsing abilities in terms of producing SAT-style specifications and program-style specifications. We note that both SatLM and ProgLM use symbolic executors to execute the parsed specifications, which ensures correct execution with respect to the parsed outputs; the improvements of SatLM over ProgLM can therefore be attributed to the fact that using declarative specifications leads to more accurate parsing compared to using imperative specifications.
We agree that the solver we use (Z3) does have a greater planning capacity than the Python interpreter used in much of the past ProgLM work. However, we view that as an inherent strength of the framework proposed here.
> Comparisons between SatLM and other semantic parsers for NL-to-SAT semantic parsing have to be highlighted in my opinion.
This work uses LLMs to parse NL into SAT specifications via few-shot prompting, similar to [1]. We are not claiming novelty on the idea of building a semantic parser this way. To the best of our knowledge, there aren't any other existing NL-to-SAT semantic parsers that we can compare to, but if the reviewer is aware of tools that can be used here, we would be happy to evaluate them in our framework.
[1] Few-Shot Semantic Parsing with Language Models Trained on Code. Shin & Van Durme. 2022.
---
**Q1: Is the LLM producing anything that is semantically new and not present in the natural language specification?**
A: Interpreting the natural language to produce a SAT specification does require making inferences. Here are some examples:
- For the NL statement “a rectangle with an area of 160”, SatLM generates the constraint `length * height = 160`.
- For the NL query “How much does she spend on food in the month of May?”, SatLM produces code that assigns 31 to variable days_in_may (`days_in_may = 31`).
- For the NL query “Farmer Brown has 60 animals on his farm, all either chickens or cows. He has twice as many chickens as cows. How many legs do the animals have, all together?”, SatLM generates the constraints `legs_chickens = animals_chickens * 2. legs_cows = animals_cows * 4`.
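To make the last example concrete, here is a toy stand-in of our own (not the authors' actual pipeline, which hands the constraints to Z3): it treats the Farmer Brown statements purely as declarative constraints and lets a brute-force search play the solver's role.

```python
def solve_farmer_brown():
    # Declarative reading: find (chickens, cows) satisfying both constraints,
    # then evaluate the query expression for the total number of legs.
    for cows in range(61):
        chickens = 60 - cows                 # "60 animals on his farm"
        if chickens == 2 * cows:             # "twice as many chickens as cows"
            return chickens * 2 + cows * 4   # legs_chickens + legs_cows

print(solve_farmer_brown())  # → 160
```

The point of the declarative style is visible even in this toy: the constraints are stated independently of any solution procedure, and the "solver" (here a trivial search) is interchangeable.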
**Q2: Shouldn’t COT also be tested with text models as opposed to code models given that code models are optimized for more formal representations?**
A: On these tasks, even with text-focused models, CoT usually lags ProgLM by a large margin (see Lyu et al., 2023; Gao et al., 2023). We have also included the results of CoT on text-davinci-003 below. As seen in the table, CoT generally underperforms both ProgLM and SatLM on text-davinci-003, which agrees with past work (Lyu et al., 2023; Gao et al., 2023).
We have additionally tested on chat-focused models (gpt-3.5-turbo), which shows the same trend. Please see Q4 in the response to Reviewer Zc4x for details.
| | GSMSys | GSM | LSAT | CLUTRR | Proof |
|--------|--------|------|------|--------|-------|
| CoT | 42.8 | 66.5 | 21.7 | 34.5 | 83.5 |
| ProgLM | 51.2 | 71.7 | - | 41.2 | 83.7 |
| SatLM | 63.4 | 70.3 | 30.4 | 58.2 | 99.7 |
**Q3: In Table 5, does “Answered” for ProgLM mean that the program was executable without runtime errors (but it need not return the right answer)?**
A: Yes, “Answered” means the program interpreter or SAT solver successfully executes the program or SAT problem, respectively, and returns a not necessarily correct answer. We see that SatLM makes fewer predictions but has substantially higher selective accuracy.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and results of COT on text-based models. I have updated the score slightly. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper describes a new approach to solving NL reasoning tasks using large language models (LLMs). Specifically, the key idea is to combine LLMs with SAT solving, where LLMs are only used in the first parsing step. The authors of the paper call this approach satisfiability-aided language modeling (SATLM). The authors also point out that by SAT, they mean all kinds of tools for automated reasoning, including traditional SAT solvers, SMT, and first-order theorem provers. The idea is that the declarative specification that is generated as output from prompting is closer and more accurate compared to more imperative problem formulation. The work is evaluated on several data sets with standard prompting, chain-of-thought (COT) prompting, and executor-augmented LLMs (PROGLM).
Strengths:
- The paper is relatively easy to read and strikes a good balance between formality and accessibility.
- The main idea of the paper is clear, and it seems to be a new direction for encoding NL tasks using LLMs.
- The paper covers relevant related work and includes relevant benchmarks
Weaknesses: - There is an interesting discussion on how the proposed approach can catch errors better than PROGLM (see e.g., the paragraph starting on line 166). This is good, but there is no discussion on how this kind of error can be handled. How easy is it to debug such errors? How can a user generate a fix or solution if such an error occurs?
- The results are not always that easy to interpret. For instance, in table 2, if an answer is incorrect, does that include answers that are incorrect because of a parse error, or only those incorrect because the SAT solver generated the wrong answer? This is much clearer in table 5.
- The level of detail of how the approach has actually been implemented could be improved. More specifically, how the different tools are chained together, whether all steps are automated, etc.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: How do we know that SatLM generated a correct SAT problem? Can we get guarantees? If not, how can we debug the program?
On lines 127-127, you explain that you use Python instead of a declarative DSL. Please provide more details on what the target language looks like for the SAT problem specification. In what ways does it differ for traditional SAT solvers, SMT, and first-order theorem provers?
It would be useful to get a deeper understanding of the benchmarks. In the cases where there is a pure SAT problem (true or false), what is then the distribution of TRUE and FALSE answers? If the distribution is 50/50, would not then a random generator with uniform distribution perform better than many of the solutions that are compared?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There is a clear paragraph on the limitations of the approach at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback.
**Q1: While SatLM can capture more types of errors (specifically, UNSAT and AMBIG) than PROGLM, there is no discussion on how this kind of error can be handled. How easy is it to debug such errors? How can a user generate a fix or solution if such an error occurs?**
A: The reviewer brings up very interesting questions about user interaction with SatLM. Our current prototype just produces UNSAT or AMBIG, but we can think of several directions for providing more feedback in these cases:
- UNSAT errors: We can show the user an UNSAT core, which is a (small) subset of the constraints that are unsatisfiable. This can help the user to more quickly identify any problems.
- AMBIG errors: We could potentially use abductive reasoning to suggest additional constraints. In particular, abduction can generate additional constraints under which the constraint system would have a unique solution.
This work opens the door to explore exciting questions like these. We believe there is potential both to allow more effective intervention by human users (e.g., by recognizing errors in the specification) as well as using iterative prompting of LLMs to improve the specification once the error has been localized. However, a full treatment of this topic is outside the scope of this paper.
**Q2: In Table 2, if an answer is incorrect, does it then include both incorrect because of parse error, or just incorrect because of the SAT solver generating the wrong answer?**
A: The accuracy in Table 2 refers to the accuracy of the final answer caused by either parsing errors or solving errors made by LLM solvers (CoTSolver and NoSolver). We will make this clearer in any future version.
**Q3: How do we know that SatLM generated a correct SAT problem? Can we get guarantees? If not, how can we debug the program?**
A: To be 100% sure that SatLM generated the correct SAT problem, the user would need to inspect the generated output. We believe that inspecting the SAT problem is much easier than inspecting a program that produces the solution because it is declarative and much higher-level.
**Q4: How are the different tools chained together, are all steps automated etc.**
A: First, we prompt an LLM using declarative prompting (described in Section 3.2) to output the SAT specification (see Section 4.1 for details of the decoding method). Next, the output (SAT specification) is post-processed into actual z3 python code (see the response to the next question for details). Lastly, we execute the z3 python code to obtain the final answer. These steps are chained together automatically without needing manual intervention. We can clarify this more in any future version!
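Schematically, the chaining of these three stages can be sketched as follows. All three functions are hypothetical stubs of our own (the real pipeline calls an LLM, the regex-based rewriter, and the z3 solver); only the automated chaining is illustrated.

```python
def prompt_llm(problem: str) -> str:
    """Stage 1 (stub): few-shot declarative prompting of the LLM,
    which returns a SAT specification for the NL problem."""
    return "visit(Alice) != visit(Bob)"  # canned output for illustration

def postprocess(spec: str) -> str:
    """Stage 2 (stub): rewrite the specification into actual z3 python
    code; the real step applies regex-based string transformations."""
    return spec

def execute(code: str) -> str:
    """Stage 3 (stub): run the z3 python code and read off the answer."""
    return "SAT"  # canned answer for illustration

def satlm(problem: str) -> str:
    # The three stages run automatically, with no manual intervention.
    return execute(postprocess(prompt_llm(problem)))
```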
**Q5: Please provide more details on what the target language looks like for the SAT problem specification. In what ways does it differ from traditional SAT solvers, SMT, and first-order theorem provers?**
A: The specification used in our work largely follows and simplifies the syntax for specifying constraints used in [z3 python](https://z3prover.github.io/api/html/namespacez3py.html).
We list concrete example statements as follows. See Figure 1, Figure 2 and Appendix G for more examples.
```
# declare a variable
x = Variable()
# declare enum set
People = [Alice, Bob]
Cities = [Austin, Boston]
Food = [Apple, Banana]
# declare function
visit = Function(People, Cities)
eats = Function(People, Food)
# logic
visit(Alice) != visit(Bob)
# quantifier
ForAll(x: People, Implies(visit(x) == Austin, eats(x) == Banana))
```
We note that these formulas are close to the actual python-code formulas used by z3 python but are slightly modified to be more amenable to prompting, so we need a postprocessing step to form the actual Z3 input. We implemented a simple parser that transforms these formulas into actual specifications used by z3 python via string transformation (using regexes). For example, we transform `ForAll(x: People, Implies(visit(x) == Austin, eats(x) == Banana))` into `x = Variable(People); ForAll([x], Implies(visit(x) == Austin, eats(x) == Banana))`.
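To make the regex step concrete, here is a toy sketch of the quantifier rewriting; the function name and the exact pattern are our own assumptions, not the authors' actual parser, which handles many more statement forms.

```python
import re

def rewrite_forall(stmt: str) -> str:
    """Rewrite the prompt-friendly quantifier `ForAll(x: Sort, <body>)`
    into the z3py-style form `x = Variable(Sort); ForAll([x], <body>)`."""
    m = re.match(r"ForAll\((\w+):\s*(\w+),\s*(.*)\)$", stmt)
    if m is None:
        return stmt  # leave non-quantifier statements untouched
    var, sort, body = m.groups()
    return f"{var} = Variable({sort}); ForAll([{var}], {body})"

print(rewrite_forall(
    "ForAll(x: People, Implies(visit(x) == Austin, eats(x) == Banana))"
))
# → x = Variable(People); ForAll([x], Implies(visit(x) == Austin, eats(x) == Banana))
```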
**Q6: In the cases where there is a pure SAT problem (true or false), what is then the distribution of TRUE and FALSE answers? If the distribution is 50/50, would not then a random generator with uniform distribution perform better than many of the solutions that are compared?**
A: We’d like to clarify that the SAT problem used in our work is not restricted to “pure SAT” problems but can refer to problems that require checking the satisfiability of formulas in formal logic. Most problems studied in our paper are not binary classification tasks where the final answer is to assess whether a statement is true or false.
Only the ProofWriter dataset is a binary classification dataset. As the labels are balanced for the ProofWriter datasets, all the solutions perform much better than random guessing (50%).
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications and the concrete examples. When you iterate the manuscript, I think it would be great to include more details on how the tools are chained together and to be more specific about the target language.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response and suggestions. We will integrate these details into the main text in any future versions. | null | null | null | null | null | null |
Predict-then-Calibrate: A New Perspective of Robust Contextual LP | Accept (poster) | Summary: The authors study a risk-averse variant of contextual linear optimization, using VaR as the risk measure. The authors develop two heuristic approaches. In both of these approaches, the problem of minimizing the VaR is approximated by a robust optimization problem. The authors propose two different ways of specifying the uncertainty sets, both of which involve building a regression model to predict the conditional mean of the uncertainty, and then calibrating the size of uncertainty with the intent of satisfying a coverage guarantee. The authors provide bounds on the probability that the coverage guarantee will be satisfied, and computational experiments that measure the performance of the proposed method against several baselines. In addition, the authors briefly discuss a distributionally robust variant of contextual linear optimization, and show that there exists a policy that converges to an optimal policy with a specific convergence rate.
Strengths: - The problem selected by the authors, namely risk-sensitive contextual linear optimization, is an interesting and relevant problem.
- The design of the experiments provided is reasonable and the proposed methods show good performance.
Weaknesses: - The exact form of the uncertainty set suggested by the authors seems poorly motivated. It would be helpful if the authors could explain why they chose the approach that they did. In the rebuttal, the authors provided an explanation that helped to motivate the form of the uncertainty set.
- The coverage guarantees provided by the authors' methods are not the relevant coverage guarantees for the contextual problem (4). In particular, the conditional probability P(c ∈ U_α(z) | z) should be equal to α (approximately), but the authors instead have designed the methods to guarantee that P(c ∈ U_α(z)) is equal to α (approximately). I view this as a serious flaw in the method, as someone solving a contextual linear optimization problem is presumably specifying their risk for that context, and the authors' approach could in theory severely underestimate the risk in heteroskedastic settings. The authors should redesign the method to ensure that the correct coverage guarantee is met. In the rebuttal, the authors explained how the method could be adapted to provide a conditional coverage guarantee, and explained their rationale for using a global coverage guarantee rather than a conditional one.
- Even if the goal were to produce an unconditional coverage guarantee rather than a conditional one, the theoretical guarantees in Proposition 1 and Corollary 1 are weak. Some very bad ways of choosing uncertainty sets would satisfy these guarantees. For example, you could specify your uncertainty set as a ball with a center of 0 and with radius chosen so that the proportion of samples from the validation set falling in the ball is equal to α, and it would have the same guarantees. It would be better if the authors could show some stronger properties that their choice of uncertainty set satisfies. The authors address this in their rebuttal by acknowledging that the theoretical guarantees could be satisfied by bad choices of uncertainty sets, while pointing to other evidence in the paper that suggests their choice of uncertainty set is effective.
- In Proposition 3, the authors should ensure that "P(c ∈ U_α(z) | z) = α", not that "P(c ∈ U_α(z)) = α" since that is the relevant coverage guarantee for the contextual problem. I considered the possibility that this is a typo, but it can be confirmed that the coverage guarantee for the contextual problem is not satisfied. For example, when z=1, the probability P(c ∈ U_α(z) | z) = 1 - sqrt(2-2α)/2, which is not generally equal to α. This example should be reworked. This was also addressed by the author's discussion of the conditional and global coverage guarantees in the rebuttal.
- Section 4 feels like a summary of a different paper that was inserted into this paper. The proposed method for the contextual DRO problem seems only loosely connected to the methods for the risk-sensitive contextual linear optimization. Furthermore, the authors relegate their proposed method for contextual DRO problems to the appendix and provide no computational results for this method. I recommend removing Section 4. The authors addressed this in their rebuttal by clarifying the connections between the contextual DRO problem and the risk-sensitive problem.
- (A minor point) The authors are inconsistent as to whether the values A and b are random or constant. The methodology seems to assume that these are either fixed or, at the very least, independent of the values of c and z, so I would recommend treating these as constant throughout.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In the box uncertainty quantification method, why did the authors choose to perform conditional quantile estimation in two steps, where first the mean is estimated and then quantiles are estimated from the residuals? I would note that in existing works where the mean is estimated and then a density is fit on the residuals, there is usually some assumption that justifies this. For example, if it is assumed that c = f(x) + ε, where the errors ε for different observations are i.i.d., then you can perform unconditional density estimation on the residuals (rather than conditional density estimation). However, I don't see any justification in this work. The authors answered this question in their rebuttal.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - One limitation (mentioned in the "Weaknesses" section) is that the coverage guarantees provided by the method are not the ones that would be desired in the stated problem. This is mentioned in passing, but the authors should investigate this limitation further.
- Even in the absence of uncertainty, the proposed method by the authors is a heuristic in the sense that it does not identify the optimal solution nor are any approximation ratios known. It would be worth exploring how close (or far) the solutions are from optimal solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and the raised questions. We believe the clarification of these questions will make the positioning of our work clearer.
Individual/conditional coverage guarantee:
We thank the reviewer for noting this point, and we have also mentioned this in our paper as a limitation of the proposed algorithms. However, we note that the predict-then-calibrate (PTC) framework is more of a general framework to tailor a prediction-aimed ML model for a robust task. The performance guarantee of the current two algorithms only holds for global coverage but not for individual coverage, yet this is NOT caused by the framework, but rather by the chosen calibration/uncertainty quantification algorithm and the corresponding assumption. If we change the calibration algorithm to one that aims for an individual/conditional objective, the whole framework will be able to fulfill individual coverage. We now present such an algorithm and the corresponding theoretical results:
- Calculate the residual $r_t=c_t-\hat{f}(z_t)$
- Perform a nonparametric regression of $r_t$ on $z_t$ using a chosen kernel $k$ (such as a simple window kernel or a Gaussian kernel). This yields the following function
$\hat{Q}(z;\tau) = Q\left(\sum_t \delta_{r_t} k(z_t,z) \big/ \sum_t k(z_t,z);\ \tau\right)$
where $\delta_{r_t}$ denotes a point-mass distribution at $r_t$, and $Q(\cdot;\tau)$ outputs the $\tau$-quantile of a distribution
- For a new $z_{new}$, output the confidence interval by $[\hat{Q}(z_{new};(1-\alpha)/2), \hat{Q}(z_{new};(1+\alpha)/2)]$
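As a concrete illustration of the steps above, here is a minimal numpy sketch of the kernel-weighted conditional quantile (our own code, not the authors'; the Gaussian kernel and the bandwidth value are illustrative choices):

```python
import numpy as np

def kernel_conditional_quantile(z_train, r_train, z_new, tau, bandwidth=0.5):
    """Kernel-weighted empirical tau-quantile of residuals r given covariates z."""
    d2 = np.sum((z_train - z_new) ** 2, axis=1)   # squared distances to z_new
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights k(z_t, z)
    w = w / w.sum()
    order = np.argsort(r_train)                   # weighted empirical quantile:
    cum_w = np.cumsum(w[order])                   # sort residuals, accumulate mass
    idx = min(np.searchsorted(cum_w, tau), len(r_train) - 1)
    return r_train[order][idx]

def conditional_interval(z_train, r_train, z_new, alpha, bandwidth=0.5):
    """Interval [Q_hat((1-alpha)/2), Q_hat((1+alpha)/2)] for the residual at z_new."""
    lo = kernel_conditional_quantile(z_train, r_train, z_new, (1 - alpha) / 2, bandwidth)
    hi = kernel_conditional_quantile(z_train, r_train, z_new, (1 + alpha) / 2, bandwidth)
    return lo, hi
```

With standard normal residuals independent of $z$, the returned interval approaches $[\pm 1.645]$ at $\alpha=0.9$, as expected.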
Theoretical analysis:
Under a Lipschitzness condition on the quantile function,
$|Q(c|z;\tau)-Q(c|z';\tau)|\le L\|z-z'\|$
for any $(z,z')$, together with some boundedness assumptions, the produced confidence interval ensures both finite-sample and asymptotically optimal coverage for each $z_{new}$ almost surely.
The analysis is built upon nonparametric theory. Please let us know if you or the other reviewers would like more details on the analysis; we will respond promptly during the coming week.
We note that the existing works on robust contextual optimization [7,20] also fail to achieve an individual coverage guarantee, and more importantly, these two existing works are restricted to certain prediction models such as k-NN or neural networks. Comparatively, our paper aims to encourage a more flexible choice of the prediction model, and the usage of uncertainty calibration methods for robust optimization tasks. Achieving an individual coverage guarantee is a secondary goal, but as shown above, the PTC framework does not exclude the possibility of achieving it.
Two-step procedure:
Generally speaking, an ML prediction problem becomes more sample-efficient and easier when the conditional expectation/quantile function $E[Y|X]$ or $Q_{\tau}(Y|X)$ is smoother. In our context, the two-step procedure first predicts the conditional mean with $\hat{E}[c|z]$ and then predicts quantiles/the distribution of $c-\hat{E}[c|z]\,|\,z$. Fitting the error $c-\hat{E}[c|z]\,|\,z$ can be easier than fitting the original $c|z$ because the error's conditional distribution is usually smoother. To see this, note that the conditional expectation function $E[c|z]$ is highly related to, and very likely to wax and wane together with, the quantile function $Q_{\tau}[c|z]$. In this light, subtracting $E[c|z]$ can very likely smooth out the quantile function $Q_{\tau}[c|z]$. As a metaphor, this is quite like the method of control variates in Monte Carlo simulation, which introduces a control random variable correlated with the target random variable so as to reduce the variance of the estimator. Here, the conditional expectation function $E[c|z]$ works as the "control variate" for the original "target variate" $Q_{\tau}[c|z]$. Another example is the nonparametric estimator presented above: the convergence rate of the coverage guarantee scales linearly with the Lipschitz constant $L$, and the corresponding Lipschitz constant for the conditional distribution $c|z$ will generally be much larger than that of the error distribution $c-\hat{f}(z)\,|\,z$.
While such a two-step procedure is commonly adopted in the literature on conformal prediction and ML model calibration, we provide the above explanation to justify the approach, and we will include this discussion in the next version of our paper.
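To make the control-variates analogy concrete, here is a small self-contained Monte Carlo sketch (ours, not from the paper) showing the variance reduction at work:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Target: estimate E[exp(Z)] for Z ~ N(0,1); the true value is exp(1/2).
z = rng.normal(size=n)
y = np.exp(z)
naive = y.mean()

# Control variate: Z itself, with known mean 0 and high correlation with exp(Z).
beta = np.cov(y, z)[0, 1] / z.var()      # (near-)optimal coefficient
cv_estimate = (y - beta * (z - 0.0)).mean()

# The adjusted samples have strictly smaller variance than the raw ones.
var_naive = y.var()
var_cv = (y - beta * z).var()
```

Subtracting the correlated, known-mean variable plays the same role as subtracting $\hat{E}[c|z]$ before calibrating the residual distribution.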
Section 4:
We agree that Section 4 is less coherent with the other sections of the paper, because the majority of our paper studies the robust optimization problem, while this section concerns distributionally robust optimization (DRO). Yet they all share one theme, which is to make the choice of the ML model and uncertainty calibration methods more flexible. When we reviewed the related literature, we found that the DRO papers on contextual optimization usually impose very strong assumptions to achieve performance guarantees. To this end, while Section 4 develops no new algorithm beyond the existing literature, we believe it significantly relaxes the realizability assumption and the error distribution assumptions in the existing works.
We defer our remaining responses to the raised questions to the Author Rebuttal. We are sorry for the inconvenience and we really appreciate your time in reading our paper and our responses. Please let us know if there is any further confusion; we look forward to further discussion with you.
---
Rebuttal Comment 1.1:
Comment: Regarding the "PTC framework":
While the authors occasionally use phrases such as "our predict-then-calibrate framework" within the paper, this framework is not clearly defined. As currently written, the paper presents a fairly specific problem, develops two specific methods for that problem, and then presents computational results for those methods. For this reason, I agree with the authors that my criticisms apply to the specific methods and not to the framework, but if the authors want to invoke this as a response, then they need to provide details of their framework. It seems to me that rewriting the current paper to present a framework rather than addressing a specific problem would be creating an entirely different paper. In a paper proposing a framework, I would expect a rigorous definition of the framework, some explanation of how the framework can be applied in different contexts, and some demonstration of the benefit the framework provides, such as theoretical results and/or broad experiments in a wide variety of settings where the framework might apply. I would recommend that the authors fix the concrete issues with their presented methods rather than appealing to a framework that is not clearly defined.
Regarding the individual coverage guarantee:
The method described in the rebuttal to produce an individual coverage guarantee makes sense. I think that the paper would be improved if this method and the associated guarantees were presented rather than the current methods.
Regarding coverage guarantees in existing works:
As far as I can tell, Ohmori (2021) provides no coverage guarantees of any kind, but I suspect that it should be possible to provide individualized coverage guarantees for their k-nearest-neighbors-based method, at least asymptotically, if k were scaled at an appropriate rate as the sample size increases. It is true that Chenreddy et al. (2022) provide global coverage guarantees, but the risk-averse optimization problem in Chenreddy et al. (2022) is defined differently than the one that the authors study. In particular, the CVO problem presented by Chenreddy et al. (2022) aims to find a policy with minimal CVaR under random context, while that presented by the authors aims to find an action with minimal VaR conditioned on a context. For the former problem, the global coverage guarantee is appropriate; for the latter problem, it is not. This perhaps provides an alternative avenue for the authors to fix the issue of the incorrect coverage guarantee: instead of changing the method to provide an individual coverage guarantee, the authors could perhaps redefine the main problem to identify a policy with minimal VaR under random context.
Regarding value of proposed work relative to existing work:
I agree with the authors that their method appears to provide some value over existing methods by allowing additional flexibility in choice of prediction method.
Regarding the two-step procedure:
The authors' justification seems reasonable and I look forward to seeing the changes.
Regarding Section 4:
The authors claim that the method in Section 4 shares a common theme with those presented elsewhere in the paper, but this theme is not clearly communicated in the current paper, and it is difficult to see the connections. I suppose that the authors are claiming that all methods fall within the predict-then-calibrate framework, but as I mentioned earlier, this framework is not clearly defined, and the material in Sections 2, 3, and 5 all deals with a specific problem rather than discussing a general framework. I still recommend removing this section unless the authors can provide a much stronger connection to the other sections than currently exists.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the prompt feedback, which we appreciate in particular because, as reviewers ourselves, we haven't yet gotten to read the authors' rebuttals for the papers we reviewed :P
Regarding the PTC framework:
We are sorry for the confusion caused by the wording of “framework”. In fact, we intentionally avoided using the word “framework” in our paper as we don't want to overclaim our contribution relative to the existing literature. The main motivation for writing the paper was that (i) the existing works on RO such as Ohmori (2021) and Chenreddy et al. (2022), while providing elegant solutions to the problem, are tied to certain ML methods; and (ii) the existing works on DRO require strong assumptions such as realizability. We use the name “predict-then-calibrate” to emphasize
- The two-step procedure frees up the choice of the ML model and delegates the remaining work to uncertainty quantification methods.
- The quantification of uncertainty should be made aware of downstream robust tasks, such as outputting box- or ellipsoid-shaped uncertainty sets.
We used the word “framework” a bit loosely in our response (again, our apologies for that), and we didn't mean to claim that our framework subsumes all the existing methods/works. Yet, on the other hand, this disentanglement of the prediction from the calibration does give the flexibility to cover the CVaR problem (by outputting a generative model for distributional prediction) and to extend to the case where optimality is ensured, as in Gupta (2019) (by outputting an ellipsoid-shaped uncertainty set).
Regarding the existing works:
We thank the reviewer for the detailed discussion of Ohmori (2021) and Chenreddy et al. (2022). This is indeed helpful! We agree with the comments on random context; an easier way to improve the presentation of the global coverage guarantee is to position it under a random context. Regarding the comment on Ohmori (2021), we agree that conditional coverage can be derived in a manner similar to the roadmap in our last response for nonparametric estimators. Yet one advantage of calibrating the error distribution over the k-NN method in Ohmori (2021) is the two-step procedure argument. | Summary: The author(s) propose a novel method for robust contextual LP, which extends the conventional contextual LP problem by allowing for uncertainty in the prediction model. The paper is very well written, stating clearly the problem setting and the direction in which this work advances. The model properties are discussed in detail and the implication of each theory/statement is explained in a very careful manner.
Strengths: 1. Clearly written and well organized
2. Provides a clear problem statement, and the algorithms are described in good detail
3. Strong algorithm analysis with additional details to provide context for the statements
4. Good empirical work demonstrates the method
5. The author proposes two algorithms to construct the contextual uncertainty sets $\mathcal{U}$ (which are based on $z$) for the prediction model. They mention that the choice of the prediction model is quite flexible, and that doing so brings a couple of advantages.
Weaknesses: 1. It would be nice if the author could provide the audience with some motivations for this work. For example, under what circumstances would it be beneficial to incorporate a risk-sensitive objective into contextual LP.
2. All theoretical guarantees mentioned in Section 3.1 require $\mathcal{D}_2\sim \mathcal{P}$, i.e., the validation set is correctly specified. It would be useful to understand how misspecification impacts these results (either empirically or theoretically).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Proposition 1 provides a coverage guarantee for Algorithm 1 and Algorithm 2. I was a bit confused by the statements. On the left-hand side, the inequality goes to 0 as $|\mathcal{D}_2|$ increases, while on the right-hand side, the inequality goes to 1 as $|\mathcal{D}_2|$ increases. Does it mean the more observations in $\mathcal{D}_2$, the more uncertainty there will be? It would be nice if the author could provide more explanation.
2. As stated above, all theoretical guarantees mentioned in Section 3.1 require $\mathcal{D}_2\sim \mathcal{P}$, i.e., the validation set is correctly specified. For practical application purposes, could the author provide some strategies to make sure this requirement is satisfied?
3. In the Experiment Section, Simple LP Visualization paragraph, the author mentions both PTC-B and PTC-E. Maybe I missed it, but it seems in Figure 2, the author only provides visualization for PTC-B. Where is PTC-E?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: See above for list of areas for improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the comments and feedback.
Typo in Proposition 1:
The inequality (8) should be “>=” instead of “<=”. We thank the reviewer for noting this mistake. We agree with the intuitions mentioned by the reviewer and now the inequality becomes aligned with these intuitions.
PTC-E in Figure 2:
We are sorry for the confusion. In the setting of Figure 2, the PTC-E algorithm is exactly the same as PTC-B. Specifically, Figure 2 illustrates the benefits of a better prediction or calibration model in a one-dimensional case, where PTC-B coincides with PTC-E because both the box-shaped and ellipsoid-shaped uncertainty sets degenerate into line segments. We will mention this point in the next version of our paper.
Validation set:
That's a great point. The theoretical results hinge on the fact that the validation set comes from the same distribution as the test data. Practically, just as in the standard machine learning setup, if one has a training dataset from the same distribution as the test set, then one can reserve part of the training data as validation data. If this is not true, which is known as the out-of-domain problem, where the training data and the test data may come from different distributions, the situation is not entirely pessimistic. Indeed, our algorithms essentially calibrate the uncertainty of $c|z$. One type of out-of-domain problem is covariate shift, where the distribution of $z$ (the covariates) changes between the training data and the test data, but the conditional distribution of the target variable $c|z$ remains the same. In this case, the theoretical guarantee of our algorithms still holds. Of course, other out-of-domain setups may make the algorithms fail. In our opinion, this type of setup, which studies the potential discrepancy between the training data and the test data, deserves more attention from the literature on data-driven robust optimization and contextual optimization, which often assumes a standard i.i.d. setting. To this end, we will include more discussion to call for more awareness of the problem.
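The covariate-shift claim can be illustrated with a tiny simulation (our own hypothetical sketch, assuming a well-specified predictor and noise independent of $z$, so that a single calibrated radius transfers across domains):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: c = f(z) + noise, with the conditional law c|z identical
# in both domains; only the distribution of z shifts.
f = lambda z: 2.0 * z + 1.0

# Calibration domain: z ~ N(0, 1)
z_cal = rng.normal(size=5000)
c_cal = f(z_cal) + rng.normal(scale=0.5, size=5000)
q = np.quantile(np.abs(c_cal - f(z_cal)), 0.9)   # 90% residual radius

# Test domain under covariate shift: z ~ N(3, 1), same c|z
z_test = rng.normal(loc=3.0, size=5000)
c_test = f(z_test) + rng.normal(scale=0.5, size=5000)
coverage = np.mean(np.abs(c_test - f(z_test)) <= q)   # stays near 0.9
```

Because the residual law is unchanged, the calibrated radius keeps its coverage even though every test covariate comes from a shifted distribution.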
We hope our response addresses the raised questions. We refer to our response to Reviewer 1GB8 for more discussion of the motivation for the formulation. If there are any follow-up questions or concerns, we will get back to you promptly during the discussion week. | Summary: The authors study a risk-sensitive contextual LP setting. They seek to predict the objective function of the LP from a context vector using a generic machine-learning algorithm, and then use this prediction to achieve a low (good) objective value in the LP. Their insight is that this can be done cleverly with calibration: instead of changing the ML prediction as others have done, they calibrate the output. In theory and empirical evidence, they show their method has advantages over competitors.
Strengths: - There is substantial interest in the predict-then-optimize setting for LPs (see the citations on Grigas and Elmachtoub). This setting (while not mentioned by the authors) seems to have clear applications in finance.
- The authors' work is clearly presented and claims are backed up by theory and experiments. The theory and experiments support not only the claim that the approach achieves better objective values, but also give a more detailed view of why (I particularly like 3.2).
Weaknesses: - The method combines tools that we understand pretty well. In particular, it leverages the detailed picture of robust optimization linear programs that we have. The results are naturally limited to LPs.
- There isn't much discussion of limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Have the authors' investigated the application to conditional value at risk?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not much discussion, a little bit in Future Directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for bringing up the finance application. We are not experts in the finance domain, which is why we did not mention it originally. However, in the next version of our paper, we will include more discussion of it. Generally, this financial application leads naturally to a CVaR objective, for its convexity and time consistency.
CVaR objective instead of VaR:
The predict-then-calibrate (PTC) framework is a generic approach that first predicts the objective vector and then quantifies the uncertainty of the prediction model for the downstream robust task. In particular, PTC specifies a procedure to characterize the conditional distribution $c|z$. For the VaR objective, it outputs box- or ellipsoid-shaped confidence sets so as to keep the robust task tractable. In comparison, for the CVaR objective, one should replace the calibration part with a finer characterization of the conditional distribution. This indeed can be done with minor modifications of Algorithm 2 or other algorithms of a similar spirit. Note that Algorithm 2 fits a multivariate Gaussian distribution for $c|z$; with the fitted model, for newly observed covariates $z_{new}$, one can generate a number of independent samples
$\tilde{c}_1,\ldots,\tilde{c}_k$
from the conditional distribution of $c|z_{new}$. Then one optimizes the following empirical CVaR objective
$\min_{x} \min_{\gamma}\ \gamma+\frac{1}{(1-\alpha)k}\sum_{j=1}^k (\tilde{c}_j^\top x - \gamma)^+$
$\text{s.t. } Ax\le b,\ x\ge 0$
This is a convex program that has an equivalent linear program. We haven’t thought about this in the submitted version of our paper. However, we do find it interesting, so we will include more discussion of the CVaR objective as an extension in the next version of our paper. Moreover, we remark that the method to fit the distribution $c|z$ can be quite flexible and it can be replaced with any generative probabilistic model that outputs a distributional prediction.
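For concreteness, the empirical CVaR program above can be cast as a single LP via the standard Rockafellar-Uryasev reformulation; a sketch (our code, with the usual $1/((1-\alpha)k)$ scaling on the positive parts) using scipy might look like:

```python
import numpy as np
from scipy.optimize import linprog

def empirical_cvar_lp(C, A, b, alpha):
    """Minimize empirical CVaR_alpha(c^T x) over {Ax <= b, x >= 0} given k
    sampled cost vectors C (k x n), via the LP reformulation:
        min_{x, gamma, s}  gamma + 1/((1-alpha) k) * sum_j s_j
        s.t. s_j >= c_j^T x - gamma,  s >= 0,  A x <= b,  x >= 0.
    """
    k, n = C.shape
    m = A.shape[0]
    # Decision vector: [x (n), gamma (1), s (k)]
    obj = np.concatenate([np.zeros(n), [1.0], np.full(k, 1.0 / ((1 - alpha) * k))])
    # c_j^T x - gamma - s_j <= 0
    A_ub1 = np.hstack([C, -np.ones((k, 1)), -np.eye(k)])
    # A x <= b (pad the gamma and s columns with zeros)
    A_ub2 = np.hstack([A, np.zeros((m, 1 + k))])
    A_ub = np.vstack([A_ub1, A_ub2])
    b_ub = np.concatenate([np.zeros(k), b])
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * k
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.fun
```

With deterministic cost samples the program collapses to the nominal LP, which gives an easy sanity check of the reformulation.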
General optimization problem beyond LP:
We presented the PTC framework for the LP problem because LP seems to be the most natural playground for studying robust contextual optimization. One key point here is that for the context-free case (without $z$, or $z=1$ a.s.), the robust VaR formulation should have a tractable (approximate) solution; then we can utilize the structure to tailor the uncertainty calibration part. Specifically, for the robust LP problem, the context-free case uses box- or ellipsoid-shaped uncertainty sets, so the PTC algorithms inherit this structure and have the calibration model output (contextual) uncertainty sets of the corresponding shapes. For more general problems such as quadratic programs, convex programs, or even multi-stage stochastic programs, as long as the context-free robust problem has a tractable solution, we can design the PTC algorithm accordingly. Compared to the context-free case, such a PTC design contextualizes the uncertainty set with the context information. To this end, PTC behaves more like a plug-in module that can fit into the existing robust optimization literature. We should have included more discussion of this in our paper; we thank the reviewer for noting the point and we will include it in the next round.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks to the authors for their responses. I am generally satisfied with them (to me an other reviewers). I will keep my score. | Summary: This paper considers the contextual linear optimization problem, where one is given a vector of covariates $z$ that can be used to predict a cost vector $c$, and one wishes to solve the following LP:
$ \max \ E[ c \mid z]^T x$
$ \text{subject to}: A x = b, x \geq 0 $
This is the risk-neutral version of the contextual LP problem. The risk-sensitive version instead considers the $\alpha$-value-at-risk with respect to the conditional distribution of $c^T x$ given $z$:
$ \max \ VaR_{\alpha}( c^T x \mid z )$
$ \text{subject to}: A x = b, x \geq 0 $
The paper proposes an approximate method for solving this problem, where one approximates the VaR with a max over $c$'s in a fixed uncertainty set that depends on the given covariate vector $z$. The uncertainty set itself is obtain from a procedure that in the abstract works like this: take a prediction model $\hat{f}$ and make predictions of $c$ in a validation set. Calculate the errors/residuals, and build another model, called the calibration model, to predict these residuals. One then constructs an uncertainty set conditional on $z$ where the center comes from the prediction model and the width comes from the calibration model. This approach is shown with high confidence over the validation set to satisfy the $\alpha$ probability coverage requirement. The paper also tests this approach using synthetic toy instances as well as instances based on the contextual shortest path problem, and shows that it leads to better (lower) VaRs than existing robust-contextual proposals.
Strengths: - From a novelty standpoint, I think this paper does present something new; to my knowledge the use of value-at-risk within contextual optimization has not been done before.
- The method seems reasonably simple and could be implemented easily. It is also nice to see that there is a high-confidence guarantee on the coverage probability of the set $\mathcal{U}$ produced by Algorithms 1 and 2. Another nice aspect is that the validation data set does not need to be the same data set used to train the predictive model $\hat{f}$; this goes along nicely with the goal of robustness in the paper.
- I appreciate that the authors provided the simple examples in Section 3.2 to illustrate how the approach works and the difference between having a good prediction model vs. calibration model.
Weaknesses: - I think the motivation of the paper is a little weak. While I understand the motivation for predict-then-optimize / contextual optimization, and the motivation for optimizing value-at-risk, it is not clear to me why one would want to couple these two things together. The introduction very quickly brings up the "risk-sensitive" aspect of the problem without providing an accompanying problem from the literature or from practice that would need this type of machinery.
- From a practical standpoint, there are a lot of questions that are unanswered, namely how good is the method (can one do better than this method in certain cases? see my suggestion #2 below) and how should one choose the prediction model and the calibration model?
- The numerical evaluation feels incomplete, and the method should be compared to stronger benchmarks; for this type of contextual problem, it feels like there should be lots of ways that one could represent the conditional VaR with respect to a given predictive model (see my suggestion #1 below, on using the conditional distribution outputted by the predictive model).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: _Question/Suggestion 1_: If the goal is to maximize the alpha-VaR conditional on a given covariate vector, could the following simpler strategy not work: in Bertsimas and Kallus (2020), the authors propose solving the contextual problem by using a machine learning model to obtain an estimate of the conditional distribution of $c$ given $z$. (For example, for a CART tree, one would run the given $z$ down the tree to determine the corresponding leaf, look at the historical $(z',c')$ pairs that are in that leaf, and consider the distribution with a weight of $1 / M$ on the $M$ pairs of $(z', c')$ values that are in that leaf, and a weight of zero on all other observations.) Bertsimas and Kallus discusses how this type of strategy can be used in conjunction with other machine learning models, such as random forests and k-nearest neighbors.
My question here is why could one not take a similar strategy here; specifically, why not output the $x$ that optimizes $\alpha$-VaR with respect to the estimated conditional distribution? So for example, for CART, this would mean optimizing $\alpha$-VaR with respect to the same discrete distribution that puts a weight of $1/M$ on the $M$ points that are mapped to the same leaf as the given $z$. (This would likely be some kind of simple MIP problem.)
In addition to the connection to the existing work of Bertsimas and Kallus, this simpler strategy is also appealing in that one does not need to specify a prediction and a calibration model separately; there is only one machine learning model used that needs to be specified.
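A sketch of the leaf-weighting strategy the reviewer describes, using scikit-learn's CART implementation (our hypothetical code; the model and data are illustrative, not from Bertsimas and Kallus):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
z = rng.uniform(-1, 1, size=(1000, 1))
c = np.where(z[:, 0] > 0, 1.0, -1.0) + rng.normal(scale=0.1, size=1000)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50).fit(z, c)
leaf_of = tree.apply(z)                      # leaf id of each training point

def leaf_weights(z_new):
    """Weight 1/M on the M training points sharing z_new's leaf, 0 elsewhere."""
    leaf = tree.apply(np.atleast_2d(z_new))[0]
    mask = leaf_of == leaf
    return mask / mask.sum()

# Empirical 90%-VaR of the cost under the leaf-conditional distribution
w = leaf_weights(np.array([0.5]))
var_90 = np.quantile(c[w > 0], 0.9)
```

The resulting discrete distribution could then feed the VaR optimization the reviewer mentions, e.g. via a MIP over scenarios.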
_Question/Suggestion 2_: Is it possible to say something about how "optimal" the proposed predict-then-calibrate approach is? To elaborate, in Bertsimas, Gupta and Kallus (2018), one deals with a robust linear constraint of the form
$\mathbf{u}^T \mathbf{x} \leq b, \quad \forall \ \textbf{u} \in \mathcal{U}$
and one assumes that the actual value of $\mathbf{u}$ is the random variable $\mathbf{\tilde{u}}$. One then seeks to select $\mathcal{U}$ so that
$ \text{if} \ \mathbf{u}^T \mathbf{x} \leq b \quad \forall \mathbf{u} \in \mathcal{U}, \ \text{then} \ \mathbb{P}( \mathbf{\tilde{u}}^T \mathbf{x} \leq b ) \geq 1 - \epsilon$
Bertsimas, Gupta and Kallus (2018) note that this guarantee can be met if $\mathcal{U}$ is chosen as a $1 - \epsilon$-confidence region for $\mathbf{\tilde{u}}$, i.e., if $\mathbb{P}( \mathbf{\tilde{u}} \in \mathcal{U}) \geq 1 - \epsilon$, but this is not the only way that the guarantee can be met, and by choosing a smaller $\mathcal{U}$ one can still have the same probabilistic guarantee and be less conservative. Gupta (2019) shows that a number of uncertainty sets based on this confidence region idea are unnecessarily conservative, and that one can obtain much smaller uncertainty sets with the same probabilistic guarantee. The proposed methodology in the paper (replace the intractable problem on lines 106-107 with problem (4), and then apply either Algorithm 1 or 2) seems more aligned with this confidence region approach, so it would be great if the authors could discuss how tight/optimal this approach is.
_Question/Suggestion 3_: It would be nice to see better motivation for the contextual VaR formulation that the paper studies, both in the introduction and the numerics.
_Other comments/suggestions_:
Page 5, line 143: "but rather the only two options" $\to$ "but these are not the only two options".
Page 9, line 264: the word "better" is repeated here.
References:
Bertsimas, D., Gupta, V., & Kallus, N. (2018). Data-driven robust optimization. Mathematical Programming, 167, 235-292.
Bertsimas, D., & Kallus, N. (2020). From predictive to prescriptive analytics. Management Science, 66(3), 1025-1044.
Gupta, V. (2019). Near-optimal Bayesian ambiguity sets for distributionally robust optimization. Management Science, 65(9), 4242-4260.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The discussion of limitations seems adequate; as noted in my questions above, it would be nice to have some discussion of how optimal this approach is.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the comments, and in particular for the detailed suggestions for improving our paper.
Motivation for the robust formulation:
We thank the reviewer for raising the point. Our problem setup lies at the intersection of robust optimization and contextual optimization, and thus can be positioned from the perspective of either line of literature. The notion of contextual optimization has become popular in the recent decade. Our work is among the earliest efforts on the risk-sensitive contextual optimization problem; compared to [7] and [20], our work emphasizes (i) flexibility in choosing the prediction model, and (ii) the disentanglement of the prediction and the uncertainty quantification. As for the motivation, we believe it is better to take the perspective of robust optimization. The literature and methodology of robust optimization (RO) have been widely applied in various domains such as manufacturing, control systems, energy, and finance. Each of these RO applications can also become an application of robust contextual optimization, and of the PTC framework, as long as covariates are present, which is more and more commonly the case nowadays. Compared to traditional RO methods, the PTC framework enables contextualized uncertainty sets and thus provides more dynamic and less conservative solutions.
Practically, robust contextual optimization has been used across many applications, such as transportation (Guo et al., 2023), portfolio management (Wang et al., 2022), and healthcare (Gupta et al., 2020), with a few more applications mentioned in [7,23]. We will include more discussion of the motivation in the next version of our paper.
Hardness of the problem and MIP formulation:
Robust optimization, even without covariates, is generally NP-hard due to the joint optimization over the decision variable and the uncertainty set. We adopt an approximation approach that gives up the optimization over the uncertainty set and sticks to one single uncertainty set. The mixed-integer approach in Bertsimas and Kallus (2020) takes another route, reformulating the problem as a mixed-integer program (MIP). The MIP reformulation does not change the hardness of the problem but is exact, and for small-scale problems it offers an exact solution. We should have mentioned this in our paper, and we appreciate the reviewer pointing it out.
Compatibility with other existing results:
In the paper, we mainly present the PTC framework for approximately solving the robust optimization (RO) problem, and such a route generally lacks a theoretical guarantee on optimality (even in the context-free case). Meanwhile, the PTC framework is also compatible with the MIP reformulation, and whenever an optimality guarantee exists for context-free robust optimization, the PTC framework is capable of migrating the result to the contextual case.
- One-step approach vs. two-step approach: Both approaches aim to predict the distribution of $c|z$. The literature on uncertainty quantification/conformal prediction usually adopts the two-step approach, and this in fact has a theoretical justification. The two-step procedure first predicts the conditional mean with $\hat{E}[c|z]$ and then predicts quantiles/the distribution of $c-\hat{E}[c|z]\,|\,z$. Fitting the error $c-\hat{E}[c|z]\,|\,z$ can be easier than fitting the original $c|z$, because the conditional distribution of the error is usually smoother, and ML models can be more sample efficient when fitting smoother functions. To see this, the conditional expectation function $E[c|z]$ is generally highly related to, and very likely to wax and wane together with, the quantile function $Q_{\tau}[c|z]$. In this light, subtracting $E[c|z]$ is very likely to smooth out the quantile function $Q_{\tau}[c|z]$. As a metaphor, this is much like the method of control variates in Monte Carlo simulation, which introduces a control variable correlated with the target random variable to reduce the variance of the estimator. Here, the conditional expectation function $E[c|z]$ acts as the "control variate" for the original "target variate" $Q_{\tau}[c|z]$. Between these two approaches, we do not argue for the superiority of one over the other; one can follow a training-validation approach to pick the better one for a specific dataset.
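To make the two-step idea concrete, here is a minimal, hypothetical sketch (toy linear data and a constant residual quantile, not the models used in the paper): step 1 fits $\hat{E}[c|z]$, step 2 estimates a quantile of the residual $c-\hat{E}[c|z]$.

```python
import random

random.seed(0)
# Toy data (hypothetical): c = 2*z + noise, so E[c|z] = 2*z.
zs = [random.uniform(0, 1) for _ in range(2000)]
cs = [2.0 * z + random.gauss(0.0, 0.5) for z in zs]

# Step 1 (predict): least-squares fit of the prediction model c ~ a*z.
a = sum(z * c for z, c in zip(zs, cs)) / sum(z * z for z in zs)

# Step 2 (calibrate): empirical tau-quantile of the residuals c - a*z,
# i.e., the minimizer of the pinball loss over constant predictors.
tau = 0.9
residuals = sorted(c - a * z for z, c in zip(zs, cs))
q = residuals[int(tau * len(residuals))]

# The prediction a*z + q then upper-bounds c with probability about tau.
print(round(a, 2), round(q, 2))
```

Because the toy noise here is homoscedastic, a constant residual quantile suffices; a covariate-dependent quantile model $\hat{h}(z)$ generalizes this constant to capture heteroscedasticity.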
- Our algorithms are compatible with the MIP formulation of the RO problem. For example, once we obtain the output of Algorithm 2, we can use it as a generative model to generate samples from the conditional distribution of $c|z$. And then we can plug these samples in the MIP formulation in Bertsimas and Kallus (2020).
- We thank the reviewer for bringing our attention to Gupta (2019). We were not aware of the work and read it with great interest. The uncertainty set constructed by Gupta (2019) has an ellipsoid form. For the contextual setting, the PTC framework can also predict the conditional mean and conditional variance of the posterior distribution as the parameters of the ellipsoid for some fixed prior distribution. Then, one can construct the uncertainty set in a similar manner to Gupta (2019). Consequently, under some additional assumptions, one can also prove optimality results similar to those in Gupta (2019) for the contextual LP problem. We will include more discussion on this in the next version of our paper.
We also thank the reviewers for pointing out some typos in our writing. In the coming week, we will respond promptly to any follow-up questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for these clarifications. | Rebuttal 1:
Rebuttal: We thank the reviewers for spending the time reading our paper, and for all the helpful comments. The questions raised inspire us to think about important aspects that we had not considered when writing the paper. We look forward to further discussions in the coming week.
We'd like to take the extra space here to further address a few comments made by Reviewer tVPZ. Our apologies for the confusion and any inconvenience caused.
Scalar adjustment:
One very simple adjustment that appears in both of our algorithms is the scalar adjustment, i.e., the choice of $\eta$. It is a very simple yet effective way to ensure the guarantee. As far as we know, this (surprisingly) seems to be the first such design in robust optimization, while many other works need a complicated procedure to ensure the coverage guarantee.
Motivation for our algorithm design:
As noted above and in our paper, our goal is to advocate for a more flexible choice of the prediction model and the uncertainty calibration method, and also to show the value of contextual information in robust optimization. The framework is compatible with many algorithm designs, such as the above nonparametric design, our response to reviewer VdM1 on the CVaR objective, and our response to reviewer 1GB8 on the mixed integer reformulation. We choose Algorithm 1 and Algorithm 2 as exemplars of the PTC framework because they naturally inherit the box- and ellipsoid-shaped uncertainty sets of context-free robust optimization. Algorithm 1 fits an ML model to predict the side length of the box, while Algorithm 2 predicts the shape of the ellipsoid. We agree with the reviewer that the theoretical guarantees in Proposition 1 and Corollary 1 do not exclude the possibility of a bad uncertainty set. First, we do not observe such a phenomenon in our numerical experiments. Second, the PTC framework is generally compatible with other algorithm designs such as parametrized uncertainty sets (Wang et al. 2023, Learning for Robust Optimization).
Optimality ratio:
The robust optimization problem, even in the context-free case, is NP-hard in general due to the joint optimization over the decision variable and the uncertainty set. We did not provide any optimality ratio guarantee in our paper; nor does the general robust optimization literature. In other words, the lack of an optimality ratio guarantee is not caused by the contextual setup, the PTC framework/algorithms, or our analysis; it is determined by the nature of the robust optimization problem. However, we do not exclude the possibility of achieving an optimality guarantee or a finer analysis. For example, as in our response to reviewer 1GB8, we believe that under the same conditions as Gupta (2019), a similar optimality ratio guarantee can be achieved under the PTC framework, as per our examples and discussions in Section 3.2. We thank the reviewer for raising the point and will include more discussion on this.
Again, we appreciate the time spent by the reviewers reading our paper and all the comments. We look forward to further discussions in the coming week.
Pdf: /pdf/a8d06d7e9da540fc6b2eaa33f081cbce50031edd.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper considers a risk-sensitive version of the contextual LP problem by replacing the original risk-neutral expected cost objective with VaR. The authors propose a new paradigm termed "predict-then-calibrate" that first learns a prediction model, and then uses calibration to quantify the uncertainty of the prediction. They present two algorithms that output a box/ellipsoid uncertainty set. The authors then provide a coverage guarantee for both algorithms, and conduct empirical experiments that show favorable properties of the proposed algorithms.
Strengths: - This paper considers contextual linear optimization problems with a risk-sensitive objective, which is a good complement to existing risk-neutral contextual linear optimization literature. It also complements the conditional robust optimization literature by providing a flexible algorithm in terms of the modeling choices. The idea of adding a calibration step in contextual linear optimization is novel.
- The two algorithms provided are intuitive and flexible to implement.
- The paper is overall clearly written and well-structured. The authors use simple intuitive examples to illustrate the value of better prediction and calibration, which I find really helpful.
Weaknesses: - The modeling choice for $\hat{h}$ seems vague to me. More elaborations on this subject would be helpful.
- I have some concerns regarding the comparisons in the empirical section. See "questions" for details.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Can the authors provide more guidelines for the modeling choice of $\hat{h}$? In practice, how to determine which model is better for $\hat{h}$?
- In Figure 4 and Table 1, the authors use the Kernel Ridge method with the RBF kernel as the prediction model and the NN model as the preliminary calibration model in both PTC-B and PTC-E (I found this in the appendix; please state it in the main text). Since the RBF kernel works best in Figure 3, this looks like a somewhat unfair comparison, since we are comparing the best among PTC-B/PTC-E with other algorithms. Can the authors elaborate more on this? Also, how does the run time of the different algorithms compare?
- Is it possible to empirically demonstrate how good the proposed algorithms do in terms of individual-level coverage?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the comments and feedback, and we hope our response to the raised questions further clarifies the positioning of the predict-then-calibrate framework, and in what way it is connected with the existing results and potential future works.
The choice of model $\hat{h}:$
In short, the vector function $\hat{h}$ specifies the shape of the confidence set. In other words, the vector function $\hat{h}$ captures the heteroscedasticity (with respect to the covariates $z$) of the prediction residual. Essentially, the confidence interval reduces to a quantile prediction, which motivates the choice of the pinball loss here. The optimization/learning problem (5) aims to fit a quantile model to the residuals $r_{ti}$. So $\hat{h}$ can be chosen as any model that is compatible with the optimization of the pinball loss, including linear models, NNs, and gradient boosting regression. A natural criterion for selecting $\hat{h}$ is to pick the model with the smallest empirical loss in (5), as this leads to the most accurate quantile prediction (and hence the most accurate confidence set prediction).
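As a minimal sketch of this selection rule (with toy residuals and constant candidate models, all hypothetical): among candidate quantile models for the residuals, keep the one with the smallest empirical pinball loss.

```python
def pinball_loss(residuals, preds, tau):
    # Average pinball (quantile) loss at level tau.
    return sum(
        max(tau * (r - p), (tau - 1.0) * (r - p))
        for r, p in zip(residuals, preds)
    ) / len(residuals)

tau = 0.9
residuals = [0.1, -0.3, 0.8, 0.5, -0.1, 1.2]
# Two constant candidate models for the tau-quantile of the residuals.
h1 = [0.8] * len(residuals)  # near the empirical 0.9-quantile
h2 = [0.0] * len(residuals)  # ignores the spread entirely
best = min((h1, h2), key=lambda h: pinball_loss(residuals, h, tau))
print(best is h1)  # True: h1 achieves the smaller empirical loss
```

In practice the constant models would be replaced by learned quantile models (linear, NN, gradient boosting), with the same selection criterion applied on held-out data.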
More discussions on Figure 4 and Table 1:
The reviewer raised a nice point that the implementation of PTC-B/PTC-E uses several different prediction models such as linear models, the RBF kernel, and NNs. Meanwhile, for the benchmark methods, the method "kNN" is indeed tied to the use of k-NN as the prediction model, and the methods "DCC" and "IDCC" are tied to the use of the neural network model. They cannot be adapted to other prediction models. This in fact highlights that, in handling robust contextual optimization problems, predict-then-calibrate (PTC) proceeds in two steps, first predicting and then quantifying the prediction uncertainty, while these existing methods are one-step procedures. The two-step procedure offers flexibility in choosing the prediction model and thus can best utilize the power of off-the-shelf ML toolboxes. To some extent, we think the PTC framework is not so much about a new method outperforming existing benchmarks as about extending these existing methods to a larger scope: first, develop the best ML prediction model, and then properly characterize the prediction uncertainty. Importantly and ideally, such disentanglement of prediction and uncertainty quantification provides better empirical performance and theoretical tractability.
Run time:
For the uncertainty quantification part, the run time of PTC-B and PTC-E algorithms depends on the selected models in prediction and calibration.
For the downstream robust optimization problem, the run time depends on the shape of the specified uncertainty set. There are three types of uncertainty set in the experiments. (1) Box-shaped uncertainty sets for the PTC-B algorithm, which lead to an LP after reformulating the RO problem and are thus the most computationally efficient. (2) Ellipsoid-shaped uncertainty sets for the kNN and PTC-E algorithms, which lead to a second-order conic program in the reformulation of the RO problem. (3) Specifically shaped uncertainty sets for the DCC and IDCC algorithms proposed in (Chenreddy et al. 2022), for which a decomposition-based solution method is presented in their paper; its termination criterion is defined by a tolerance gap or a maximum number of iterations, so the run time also depends on these user-defined settings. Generally speaking, the run times satisfy (1) < (2) < (3). Since the run times of (1) and (2) differ little in our experiments, and the run time of (3) depends on its own settings while remaining short (each instance takes essentially no more than half a minute), we did not compare run times in the paper.
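To illustrate point (1) on a hypothetical toy instance: with a box uncertainty set and a nonnegative decision over the simplex, the inner maximization has a closed form, so the robust problem collapses to an ordinary LP.

```python
# min_x max_{c in [c_lo, c_hi]} c.x  s.t.  x >= 0, sum(x) = 1.
# Since x >= 0, the worst case is attained at c = c_hi, so the robust
# problem reduces to the plain LP min_x c_hi.x over the simplex,
# whose optimum puts all mass on the smallest entry of c_hi.
c_lo = [1.0, 0.5, 2.0]  # lower bounds of the box (do not affect the optimum)
c_hi = [3.0, 4.0, 2.5]  # upper bounds of the box
i_star = min(range(len(c_hi)), key=lambda i: c_hi[i])
x = [1.0 if i == i_star else 0.0 for i in range(len(c_hi))]
robust_cost = sum(ci * xi for ci, xi in zip(c_hi, x))
print(x, robust_cost)  # [0.0, 0.0, 1.0] 2.5
```

General polyhedral feasible sets would require an LP solver, but the structure is the same: the box shape makes the worst-case cost vector explicit.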
Individual coverage:
We mentioned the limitation of global coverage in our paper. In fact, this is not an intrinsic problem associated with the PTC framework. If one aims for individual coverage, one can choose accordingly an uncertainty quantification method that satisfies such an objective. We invite the reviewer to check our response to Reviewer tVPZ for more details if you are interested and have time.
We hope the above response addresses the raised questions; if there are any further questions or points of confusion, we will respond promptly during the discussion week. | null | null | null | null | null | null
Kernel Stein Discrepancy thinning: a theoretical perspective of pathologies and a practical fix with regularization | Accept (poster) | Summary: This paper explores two pathologies associated with KSD thinning: the inability to distinguish mixing weights and concentration on low-probability regions of the target. The authors provide formalisations of when these pathologies can occur (Theorem 2.3 and Theorem 2.4), supported by empirical evidence. To address these issues, they propose the addition of two penalty terms: entropy regularisation and a Laplacian correction. The effectiveness of the proposed regularised KSD approach in mitigating these pathologies is demonstrated through both theoretical studies and numerical experiments.
Strengths: **Novelty**: Although the pathologies of blindness to mixing proportions and spurious concentration in low-probability regions have been acknowledged in the community, this paper stands out by providing formalizations and solutions to these issues. Solutions to Pathology I have been previously proposed in other applications of KSD (e.g., Zhang et al., 2022; Liu et al., 2023), but no solution or formalisation has been provided for Pathology II. The proposed regularisation for Stein thinning represents a novel and principled solution, supported by strong theoretical guarantees.
**Applicability**: The paper thoroughly discusses and provides experimental evidence on the impact of the regularisation parameter, the degree of freedom, on the performance of the proposed approach. Practical guidelines are given to aid in choosing appropriate parameter values in practice.
**Writing**: This paper is well motivated and clearly written.
Zhang, M., Key, O., Hayes, P., Barber, D., Paige, B., & Briol, F. X. (2022). Towards healing the blindness of score matching. *arXiv preprint arXiv:2209.07396*.
Liu, X., Duncan, A. B., & Gandy, A. (2023). Using Perturbation to Improve Goodness-of-Fit Tests based on Kernelized Stein Discrepancy. ICML 2023.
Weaknesses: This paper is well written overall. I only found a couple of minor areas that could potentially be improved:
**Limitations**: It would be beneficial to include more in-depth discussions on any limitations of the L-KSD approach. E.g., whilst a convergence guarantee was given when $n \to \infty$ (Theorem 3.5), for finite $n$ and $m_n$, how prominent is the bias introduced due to the addition of regularisation?
**Clarity**: Whilst this paper is mostly clearly written, some parts deserve elaboration and clarification. E.g.,
1. In Section 3.2, the intuition behind why the entropic regularisation solves Pathology I is not very clearly explained (also see Q1).
2. In Theorem 3.3, it would be helpful to clarify the precise meaning of “concentrated at $x_0$”.
3. Also in Theorem 3.3, it might be beneficial to provide an interpretation of the inequality for $p(x_0)$ on L313. Specifically, is this condition satisfied by the mixture of Gaussian example used in the paper?
4. In Section 4, it would be helpful to clarify the choice of the kernel (and any hyper-parameters) in the MMD used to evaluate the results.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: **Q1. Entropic regularisation**: Could you elaborate on why the entropic regularisation helps alleviate Pathology I? In particular, L242 says “The main idea of this entropic regularization is that $− \log(p(x))$ takes higher values in modes of smaller probability, and therefore provides the relative mode weight information”. Whilst I agree with the first half of this claim, I am unsure how this sensitivity to low-density regions relates to the ability to detect mis-specification of the mixing weights, and why this regularisation would lead to small values when the mixing weights are correctly specified.
**Q2. Figure 3**: The proportion of particles selected by L-KSD that lie in the left mode, while better complying with the true proportion of $w=0.2$ compared to the result of KSD, still differs from the true value by a statistically significant gap (since the reported estimated proportion is $0.11 \pm 0.03$). Could you explain whether this gap is expected? Does it mean that the regularised penalty suffers from a significant bias in the finite sample scheme, despite being consistent in the infinite limit?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: More discussions on any limitations of the approach should be discussed; see the question in "Weaknesses".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for the detailed review and the questions raised. We answer the main points below. See also the global rebuttal for additional insights.
**Theorem 3.3.** Regarding Theorem 3.3, "concentrated at $x_0$" means that all particles are located at $x_0$. We will improve the clarity of this statement in the final version of the article. Thank you for the comment. The inequality on $p(x_0)$ tells us that the Laplacian regularization penalizes particles located around stationary points of the distribution in low probability regions. Since several intricate terms in this inequality depend on the distribution $p$, it is unfortunately not possible to give a precise interpretation of the threshold. However, many experiments show that such regularization is indeed effective in avoiding Stein thinning solutions with particles located in low probability regions. In particular, this is true for Example 1 with Gaussian mixtures. While it is possible to derive formulas in this specific case (in the same spirit as Corollary 2.6), their interpretation is difficult.
**Entropic regularization.** Modes with smaller weights take smaller density values, and are therefore more penalized by $-\log(p(x))$ than modes of higher weights. Hence, with such entropic penalization, regularized Stein thinning tends to select particles in modes of higher weights more frequently than in modes of smaller weights, and we recover appropriate proportions.
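A small numeric check of this intuition, on a hypothetical two-component Gaussian mixture: $-\log p(x)$ is larger at the center of the lighter mode, so particles there incur a larger entropic penalty.

```python
import math

def mixture_density(x, w=0.2, mu=5.0, sigma=1.0):
    # p = w * N(-mu, sigma^2) + (1 - w) * N(+mu, sigma^2)
    phi = lambda t: math.exp(-0.5 * (t / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return w * phi(x + mu) + (1.0 - w) * phi(x - mu)

penalty_light = -math.log(mixture_density(-5.0))  # center of the 0.2-weight mode
penalty_heavy = -math.log(mixture_density(5.0))   # center of the 0.8-weight mode
print(penalty_light > penalty_heavy)  # True: the lighter mode is penalized more
```

Note that the score of this mixture is nearly identical around each mode center regardless of the weights, which is precisely why the unregularized KSD is blind to the proportions while the entropic term is not.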
**Mode proportions in Figure 3 and finite sample bias.** The mode proportion achieved by L-KSD, although much better than the standard KSD one, is indeed not the true proportion in this example. This is not due to a significant bias from the regularized penalty, but this is related to our heuristic for $\lambda$. As mentioned in the global answer, we only have the guarantee that an optimal $\lambda$ exists to recover the true proportion, but unfortunately, a precise tuning is impossible, and our rule of thumb does not ensure such exact recovery. It only gives a default value, which consistently outperforms the standard KSD, with much closer proportions to the target distribution, more representative of the true balance between the modes.
**MMD settings.** All details about the MMD settings are given in Appendix 1.2. We will add these details in the main paper thanks to the additional page if the paper is accepted.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses, which have answered my questions. I would like to keep the scores. | Summary: In this article the author(s) studied the Stein thinning method, an algorithm for post-processing outputs of MCMC based on the kernelized Stein discrepancy (KSD). This article first theoretically analyzed two pathologies of KSD, and then proposed methods to mitigate the two issues by regularizing the KSD objective function. Both theoretical analysis and numerical experiments show that the regularized Stein thinning method has improved the sampling quality.
Strengths: The blindness of score-based methods to multimodal distributions is a long-standing issue, and this paper presents a careful theoretical analysis in the context of Stein thinning. The author(s) also point out another pathology of KSD caused by spurious minima of the objective function. Both analyses deepen our understanding of KSD-based methods. The proposed regularized Stein thinning method also shows promising performance.
Weaknesses: 1. For the theoretical analyses, some questions need to be clarified. See the questions section.
2. For the numerical experiments, the target densities explored so far are somewhat simplified. I would suggest considering the following additional scenarios:
- Sampling from energy-based models, e.g. [1].
- Gaussian mixtures with more than two distant modes, e.g. the experiments in [2].
- Bayesian neural networks, as an extension of the linear logistic regression.
These are all "difficult" target distributions, and I would not expect regularized Stein thinning solving them perfectly. But exploring at least some of the cases above would be helpful.
3. If my understanding is correct, the truncated Laplacian operator requires computing the eigen decomposition of the Hessian matrix for every point in the MCMC sample, and this seems to be a huge computational cost. (After author rebuttal: **this point was my misunderstanding**, and the actual computing cost was clarified by the author(s). The truncated Laplacian operator only requires computing the diagonal elements of the Hessian matrix.)
[1] Che, T., Zhang, R., Sohl-Dickstein, J., Larochelle, H., Paull, L., Cao, Y., & Bengio, Y. (2020). Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling. Advances in Neural Information Processing Systems.
[2] Qiu, Y., & Wang, X. (2023). Efficient Multimodal Sampling via Tempered Distribution Flow. Journal of the American Statistical Association.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Both Theorem 2.3 and Theorem 3.2 are based on radial basis function kernels, whereas Theorem 2.4 and Theorem 3.3 are using inverse multi-quadratic kernels. What considerations is this based on?
2. Assumption 2.2 seems too hard to verify. Can you give an example for it to hold? Considering Example 1 of this article with only one dimension, is it possible to given an expression for $\eta$? I would expect $\eta$ to be a function of $\mu$ and $w$, and $\eta\rightarrow 0$ as $\mu\rightarrow\infty$.
3. As mentioned in the weakness section, the computing time may be taken into account and reported in the experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: After rebuttal: my previous concern on computational cost is no longer a major problem, and other limitations may include the tuning of $\lambda$, and some technicalities such as the choice of kernel functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for the detailed review, and the many suggestions provided. We hope that we tackle the main points in the global answer above. We answer to the remaining specific points below.
**Assumption 2.2.** Notice that Assumption 2.2 is satisfied when the two modes of $q$ have a similar KSD distance with respect to $p$. In particular, this is true for symmetric distributions. Unfortunately, it is not possible to give an expression for $\eta$, since it strongly depends on the distributions $p$ and $q$, and the result is stated for any continuous distributions $p$ and $q$.
**Additional experiments.** Actually, we conducted unreported experiments with Gaussian mixtures with several distant modes as in [2] and in a setting similar to the first energy model considered in [1]. They show thinned samples of good quality even for such difficult target distributions, as you can see in the attached pdf in the global rebuttal. We will add such cases in the article to show the good behavior of the regularized Stein thinning. Thank you for this suggestion.
Another suggestion is to conduct experiments with Bayesian neural networks, which is definitely an exciting idea. However, the scope of such application is very large, involves ultra-high dimensional distributions, and thus deserves a full separate paper in our opinion, since this setting has not been considered yet in the previous KSD literature.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the author(s) for their rebuttal that addresses most of my concerns. The clarification on computational cost (apologies for the previous misunderstanding) and the additional experiments better demonstrate the strength of the proposed method, and hence I have raised my score.
I have one additional mild comment that the author(s) may consider to further improve the quality of the article: Can the author(s) also show the average proportion of the particles in each mode in the first figure of rebuttal PDF, similar to Figure 3? I feel the smaller modes tend to be over-shrunk. Does tuning $\lambda$ help to approach the correct mode weights?
---
Reply to Comment 1.1.1:
Comment: Thank you for reconsidering your score. You are right about the proportions in the first figure of the rebuttal PDF. We found that approximately 6% of the particles are located in each of the smaller modes. We ran a few experiments and found that it is possible to get closer to 10% by slightly reducing the value of lambda. With lambda = 0.4/m, a run gives weights of 0.103 and 0.083 in the smaller modes, for instance.
Thank you again for your questions and suggestions. | Summary: This paper proposes a regularized version of Stein thinning for post-processing the output of Markov Chain Monte Carlo algorithms. The regularization addresses two common challenges of Stein thinning: insensitivity to mode proportions and samples concentrating at stationary points. The paper analyzes the above two pathologies under a theoretical framework which is then utilized to justify the proposed regularization technique. Numerical experiments demonstrate gains over standard Stein thinning in both toy examples and in a logistic regression setting.
Strengths: - The paper is well-written and easy to follow. The structure of the paper is logical.
- I found the study of pathologies illuminating and interesting. The theoretical framework appears to be sound.
- The proposed technique shows promise both in the toy case and in the simple logistic regression setup.
Weaknesses: - More intuition on the assumptions of the theoretical formulation would be helpful, with some discussion on the limitations of the results with more focus on practical applications of the method.
- An extra hyperparameter is introduced, for which a heuristic tuning method is provided; however, a systematic approach is missing.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Why do we see an initial sharp increase of MMD with respect to $d$ followed by gradual decrease in the Gaussian mixture example (Fig. 5) for RST, but strict decrease in MMD for ST?
- Is there a systematic way to set $\lambda$ beyond the heuristics provided in the paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for the review, and the questions raised. Notice that we tackle the main points in the global answer above.
**MMD variations.** The variations of the MMD with respect to the dimension d in Figure 5 indeed depend on the considered examples, with different observed behaviors for the Gaussian mixture and the banana-shaped one. This is quite difficult to interpret, since these variations are due to the intrinsic complexity of summarizing a high-dimensional distribution with just m=300 points (as in Fig. 5), and are thus case dependent. But we agree that it would be interesting to investigate more precisely what distribution properties could imply such differences.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional clarification. I raised my score to accept. | Summary: The goal of Stein thinning is to post-process the outputs of Markov chain Monte Carlo (MCMC) methods by minimizing the kernelized Stein discrepancy (KSD) between the produced chain and the target distribution. It is useful and has become fast a quite popular method in Bayesian inference because it automatically removes the burn-in period, corrects the bias, and has asymptotic properties of convergence towards the target.
However, it suffers from two pathologies, namely Pathology I of mode proportion blindness and Pathology II of mode collapse. Pathology I originates from the score's insensitivity to mode weights, while Pathology II arises from the over-representation of a few modes. To mitigate these pathologies, the paper proposes a regularization technique called regularized Stein thinning, based on adding an entropy term and a Laplacian term to the KSD.
Strengths: The paper does a good pedagogic job on the issues related to the optimization of the KSD, and offers a complete picture by providing theoretical negative results about the regular KSD (eg Theorem 2.3, 2.4).
For instance, Th 2.4 (through Cor 2.5) says that one can attain a better KSD than empirical samples of the true target mixture by putting all the points between the modes, because this is a region where the score is close to zero.
Moreover the proposed regularised KSD is shown to enjoy good properties regarding Pathologies I and II and has theoretical guarantees and extensive experiments to demonstrate its efficiency.
Weaknesses: Theorem 2.4 is still limited to the IMQ kernel.
The Laplacian correction involves second-order derivatives of the target.
Their regularised KSD introduces an additional hyperparameter lambda that a user should tune; however, it is not clear how to fix this parameter in advance (depending on the dimension, the target, etc.). Nor is it clear from the theoretical result Theorem 3.2 which lambda recovers the true mixture weights and fixes Pathology I. Still, the authors conducted a reasonable number of experiments with different values of lambda.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do the authors have any intuition on a general recipe or dependence of lambda on the parameters of the problem?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors reasonably discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for the review and the positive comments. We hope that we address the identified weaknesses in the global answer above.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I read the authors' rebuttal and the other reviewers' comments. Other reviewers seem to share some of my concerns (e.g. extension of some theoretical results to other radial kernels, tuning of lambda), as well as others (e.g. experimental settings not considered in the submission).
The authors' rebuttal reasonably addressed my concerns, and their additional experiments on mixtures of Gaussians advocate for their regularized Stein thinning. I still have a positive opinion on the paper and maintain my score. | Rebuttal 1:
Rebuttal: We greatly thank the reviewers for their positive comments about our article and relevant suggestions. We explain below how we will improve the article clarity following the reviewer guidelines. We tackle the main points below, and also provide specific answers to each reviewer.
**Computational complexity.** The truncated Laplacian operator is simply the trace of the Hessian matrix, where negative components are set to $0$, and no eigendecomposition is needed. Therefore, the computational cost of the regularized Stein thinning is similar to the original Stein thinning. We will improve clarity of this important point in the final version of the article.
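To make the low cost of this operator concrete, here is a minimal, purely illustrative 1D sketch (not code from the paper): the truncated Laplacian of log p is the second derivative with negative values clipped to 0, here approximated by finite differences for a hypothetical two-component Gaussian mixture. The mixture, the step size `h`, and the helper names are assumptions for illustration only.

```python
import numpy as np

def log_p(x):
    # Hypothetical target: 1D two-component Gaussian mixture, modes near -4 and 4
    return np.log(0.5 * np.exp(-0.5 * (x + 4) ** 2)
                  + 0.5 * np.exp(-0.5 * (x - 4) ** 2))

def truncated_laplacian_1d(x, h=1e-4):
    # Second derivative of log p via central finite differences,
    # with negative values clipped to 0 (the "truncation"); in d dimensions
    # this is the trace of the Hessian with negative diagonal terms zeroed,
    # so no eigendecomposition is ever needed.
    d2 = (log_p(x + h) - 2 * log_p(x) + log_p(x - h)) / h ** 2
    return max(d2, 0.0)

print(truncated_laplacian_1d(0.0))  # ~15: log p is convex between the modes
print(truncated_laplacian_1d(4.0))  # 0.0: log p is concave at a mode
```

The sketch also shows why the regularizer penalizes the pathological "between-modes" region: that is exactly where the clipped second derivative is strictly positive.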
**Tuning of $\lambda$.** In our Bayesian setting, it is unfortunately not possible to tune the parameter $\lambda$, since there is no metric to assess the quality of the samples obtained over a grid of $\lambda$ values. This is the exact same limitation encountered in practice for the bandwidth $\ell$ of the IMQ kernel, for which, to the best of our knowledge, there is no satisfying and systematic tuning procedure. However, as opposed to the bandwidth parameter $\ell$, we are able to provide a simple heuristic for $\lambda$, directly guided by the careful analysis of Theorem 3.5. In fact, it turns out that setting $\lambda$ to $1/m$ is consistent in practice, and we show in various experiments that this heuristic is highly efficient. Additionally, we provide two theoretical guarantees for this setting of $\lambda$: the algorithm converges (Theorem 3.5), and the bias for samples truly sampled from the target does not increase with this regularization. Notice that Figure 6 in the Supplementary Material provides experimental results for lower and higher values of $\lambda$ than $1/m$: performance strongly decreases in both cases. Besides, the goal of Theorem 3.2 is to show that the regularized Stein thinning is now sensitive to the weights of multimodal distributions, as opposed to the original Stein thinning. However, we cannot theoretically determine which exact range of values of $\lambda$ leads to good thinned samples in finite-sample regimes. As suggested by the reviewers, we will comment more on this limitation of the L-KSD in the paper.
**IMQ kernel for Theorem 2.4.** The most efficient kernel for KSD-based algorithms is the IMQ kernel, as often stated in the literature, since using other types of kernels strongly degrades performance (Riabiz et al., 2022; Chen et al., 2018). Therefore, it was of utmost importance that our theoretical results hold for the IMQ kernel used in practice. In our opinion, it was of secondary importance to extend the analysis to radial kernels. In the case of Theorem 2.3, it happens that the result holds for radial kernels without any additional assumption beyond the IMQ kernel case. We thus decided to state the result in full generality. On the contrary, extending Theorem 2.4 to radial kernels requires intricate additional assumptions about the kernel properties. More importantly, it would strongly increase the complexity of the inequality relating the sample size m and the threshold s, leading to an obscure and unintuitive condition. For the sake of clarity and the practical scope of the theory, we thus believe that Theorem 2.4 should only be stated for IMQ kernels.
**Additional details in the theoretical analysis.** We will take advantage of the additional page, if the paper is accepted, to add details about the assumptions of the theoretical results, to improve clarity. We separately answer the points raised by the reviewers in dedicated posts.
Riabiz, M., Chen, W. Y., Cockayne, J., Swietach, P., Niederer, S. A., Mackey, L., & Oates, C. J. (2022). Optimal thinning of MCMC output. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(4), 1059-1081.
Chen, W. Y., Mackey, L., Gorham, J., Briol, F. X., & Oates, C. (2018, July). Stein points. In International Conference on Machine Learning (pp. 844-853). PMLR.
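To ground the discussion of KSD-based algorithms above, here is a minimal, self-contained 1D sketch (an illustration under assumed settings, not the paper's implementation) of the squared empirical KSD with the IMQ kernel $k(a,b) = (c^2 + (a-b)^2)^{-1/2}$, for a standard Gaussian target whose score is $s(x) = -x$. The helper name `ksd_imq`, the bandwidth $c=1$, and the IMQ exponent $-1/2$ are all assumptions for illustration.

```python
import numpy as np

def ksd_imq(x, score, c=1.0):
    """Squared empirical KSD (V-statistic) of points x against a target
    with score function `score`, using the IMQ kernel
    k(a, b) = (c^2 + (a - b)^2)^(-1/2)."""
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]
    base = c ** 2 + d ** 2
    k = base ** -0.5
    dk_da = -d * base ** -1.5                       # d k / d a
    dk_db = d * base ** -1.5                        # d k / d b
    dk_dadb = base ** -1.5 - 3 * d ** 2 * base ** -2.5
    s = score(x)
    # Langevin Stein kernel: s(a)s(b)k + s(a) dk/db + s(b) dk/da + d2k/dadb
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * dk_db + s[None, :] * dk_da + dk_dadb)
    return u.mean()

rng = np.random.default_rng(0)
score = lambda x: -x                    # score of N(0, 1)
good = rng.standard_normal(300)         # samples from the target
bad = good + 2.0                        # shifted samples
print(ksd_imq(good, score) < ksd_imq(bad, score))  # True
```

Stein thinning greedily selects points to minimize exactly this kind of objective; the sketch shows the discrepancy correctly prefers samples matching the target.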
Pdf: /pdf/fa42587ab2a11eaaf272c9cf22b9ff260a0c0c7e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Exact recovery and Bregman hard clustering of node-attributed Stochastic Block Model | Accept (poster) | Summary: This paper extends the analysis of exact recovery of the Stochastic Block model to node attributed graphs (CSBM). This opens up a new dimension as the desired clustering must not only be optimal in the sense of the SBM but also respect the node attributes (which are assumed to be distributed based on the block of the SBM). First, a binary setting is presented, where edges are either present or not. For this setting, the authors present a tight bound on the information theoretical (im-)possibility of recovering the blocks of the SBM using the Chernoff information of the model. Second, a setting is analysed where the graph structure is sampled from an SBM and node attributes and edge weights are sampled from a distribution from the exponential family. For this setting, the authors present the Chernoff information of the model given its parameters allowing for a concrete characterization of the phase transition between possibility and impossibility of recovery. An iterative likelihood maximization algorithm is then proposed and evaluated on synthetic data.
Strengths: - This paper provides a nice perspective on the relevance of information gained from the network structure and that gained from the node attributes.
- The claims of the paper seem sound.
- The sections themselves have clear statements.
Weaknesses: - The experiments are not convincing.
- All experiments are carried out on synthetic graphs generated in a way that benefits this algorithm.
- The presented algorithm is compared with 2 baseline algorithms that only optimize for one of the two dimensions and thus fail at the other. Instead, one could have compared with other approaches to node-attributed community detection, which should pose more of a challenge. [19,20] could have been used as a comparison.
- The related work section is quite sparse and is used to distinguish the contribution rather than present related work. E.g. section 2.2 "Algorithms for clustering node-attributed networks" contains 2 citations. This should be augmented.
- In the presentation of the results it should be made much clearer that the developments presented here, heavily depend on [3]; with the current writing the reader gets a wrong impression of the novelty of the here presented ideas.
- The structure and the presentation of this paper are sometimes confusing.
- The reader is often left guessing where we are headed next. E.g. section 4, which is framed as the main contribution (extending [19] by allowing for sparse networks) simply starts by introducing zero-inflated distributions from the exponential family. Or 4.2 gives the derivation for the negative log-likelihood without previously mentioning that this will be used in the iterative likelihood maximization algorithm.
- The numbering in the appendix overwrites the numbering in the paper (e.g. Theorem 1 in the paper is not Theorem 1 in the appendix) which makes referencing the paper from the appendix or the appendix from the paper very hard.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Can you find any guarantees for your algorithm such as recovery with high probability in a certain regime?
- In the experiments, why don't you compare your approach to the other papers [19,20] that you mentioned in your related work?
- In the experiments, how close is the performance of your algorithm to the theoretical bound you presented?
- Why did you not compare the algorithms on real-world data?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations of the setting or the approach are not discussed -- this should be rectified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thank you for the time spent reviewing our paper. Please find below answers and comments to the weaknesses and questions you have raised. We hope this will help clarify and strengthen our contribution.
Weaknesses:
* This is not the case. Please note that Figs. 2 and 3 of the paper compare the performance of Alg. 1 (our proposal) to 4 other algorithms. Two of them (SC and EM) only optimize for one of the dimensions (network and attributes, respectively), while the other two (attSBM and IRsls) optimize over both the network and attributes. Note that Figure 2 deals with binary networks with Gaussian weights, which is exactly the model for which both attSBM and IRsls were designed. Hence this setting is not advantageous for our Algorithm. Yet, even in this case, Alg. 1 outperforms them.
Figure 3 shows that attSBM and IRsls perform terribly when the attributes are non-Gaussian and the network is non-binary, while Alg. 1 has a relatively good performance. Thus, Alg. 1 performs well in both scenarios. Moreover, we have performed experiments that show that Algorithm 1 is not too dependent on the choice of $\psi$, as questioned by reviewer coB1 (see plots in the global rebuttal file). This robustness to the choice of $\psi^*$ and $\phi^*$ confirms the superiority of Alg. 1 with respect to attSBM and IRsls.
Indeed, Algorithm 1 is similar to [19] and the main (and only) difference is that their model assumes a dense weighted network (an edge is present between all node pairs with non-zero weight). As most real networks are sparse (most node pairs do not have an edge), we propose a model for sparse weighted networks (see the answer to reviewer cu9t). In any case, we have performed experiments that indicate that using a sparse model (Alg. 1) yields much better accuracy than the dense model assumed in [19, 20] when the network is sparse, as is the case for most real networks (see plots in the global rebuttal file). Moreover, since [19, 20] assume a dense network (all possible edges are present, $O(n^2)$) their algorithms do not scale to large networks. In contrast, the complexity of Alg. 1 depends on the number of edges, which is often $O(n\log n)$. We plan to revise the exposition of Section 4 and include these results and observations on complexity.
* Indeed, page limitation forced us to severely restrict the discussion of the related work. The revised version will improve this discussion. Note that Section 2.2 cites the two relevant algorithms to cluster (dense) weighted networks: [9] and [20]. We mentioned [19], as it inspired Algorithm 1, but we should have described [8] and [28] as well.
* The proof techniques indeed rely on [3], but also on [33] and [5]. Yet, some arguments here are new (refer to our answer to reviewer cu9t). However, the paper clearly states that the Chernoff-Hellinger (CH) divergence was first introduced in [3] (see the introduction, Section 2.1 and Remark 1). Nonetheless, as defined in [3], the expression of the CH divergence does not extend easily to non-binary interactions (contrary, say, to the Renyi divergence, which appears when the interactions follow a homogeneous SBM).
It appears that the only work generalising the CH divergence to a non-binary setting is [32]. Yet, in [32] the divergence is expressed as a minimisation of KL divergences, which again does not easily extend to more general interactions (say, real-valued probability distributions for edge weights). Moreover, in [32] the link with the CH divergence requires a technical lemma (see Claim 4 of [32]), whose proof requires sparse interactions.
Thus, in our paper expression (3.3) of the CH is novel, and not directly evident from prior works.
* Indeed, the submitted paper had some typos, and we appreciate your pointing them out. A thorough proofreading will correct them for the revised version. We also plan to better organize the exposition of Section 4.
Questions:
* Unfortunately, Alg. 1 comes without any theoretical guarantees. It does not necessarily compute the MLE for the instance at hand. However, Fig. 2 in the global rebuttal file shows that Alg. 1 achieves exact recovery in a region very close to the information-theoretic limit. Indeed, two-stage algorithms have been shown to achieve optimal accuracy in binary SBMs, and K-means achieves optimal accuracy in a Gaussian mixture model (see e.g., additional references [1]-[4] in comments to reviewer cu9t).
* Note that the algorithms used for comparison, attSBM and IRsls in Figures 2 and 3 of the paper, are recent proposals for clustering networks with attributes (App. Net. Sci 2019 and ICML 2022, respectively). Note that algorithms proposed in [19, 20] are much older and are designed for dense networks (all possible network edges are present). In any case, we have performed experiments that indicate that using a sparse model (Alg. 1) yields much better accuracy than the dense model assumed in [19, 20] when the network is sparse, as is the case for most real networks (see plots in the global rebuttal file). Moreover, since [19, 20] assume a dense network (all possible edges are present, $O(n^2)$) their algorithms do not scale to large networks. In contrast, the complexity of Alg. 1 depends on the number of edges, which is often $O(n\log n)$. We plan to revise the exposition of Section 4 and include these results and observations on complexity.
* This is a very good question. We have performed experiments with Alg. 1 comparing its exact recovery performance (fraction of times the algorithm correctly recovers the community of all nodes) with the theoretical threshold for exact recovery proved in the paper (Theorem 1). Results are shown in Fig. 2 of the global rebuttal file. Interestingly, Alg. 1 achieves exact recovery in a region very close to the information-theoretic limit. This indicates that Alg. 1 is very promising!
* We have not considered real datasets. Please see comments for reviewer coB1 and hHhv.
---
Rebuttal Comment 1.1:
Title: response
Comment: I thank the authors for the response and clarifications, which have improved my understanding of the paper.
I thus increased my score to 6.
I still disagree with the statement that the experiments are not favorably stacked for the algorithm considered. Indeed, only 50% of the other baselines even consider attributes. Of those that do, all assume Gaussian attributes, so it is not surprising that such algorithms fail when the attributes are not Gaussian. In the case of Gaussian attributes, one of the two attribute-aware baselines is in fact not much worse than the method considered here. Moreover, those baseline algorithms are designed for dense networks, not for sparse networks.
Ultimately, I think it would be much more convincing if the authors could represent results on real-world networks.
---
Reply to Comment 1.1.1:
Title: Preliminary results using real datasets
Comment: Thanks for the additional comment.
Indeed, numerical experiments using real datasets will clearly strengthen our contribution, indicating that the proposed algorithm can also be successfully applied in the wild. Thus, we conducted preliminary experiments using the following three publicly available benchmark datasets (all without edge weights):
* CiteSeer [1]: $n=3279$, $m=9104$, $K=6$, $d=3703$.
* Cora [1]: $n=2708$, $m=10556$, $K=7$, $d=1433$.
* Cornell [2]: $n=183$, $m=298$, $K=5$, $d=1703$.
For each network, the original node attribute vector was reduced to dimension $d=10$ by selecting the 10 best features according to the chi-square test. Since the node attribute vectors in these datasets are binary, Alg. 1 assumes a Bernoulli distribution with $d=10$ for node attributes and Bernoulli edges (no edge weights). The initialization for Alg. 1 and attSBM used the spectral clustering of both the node similarity matrix (built from node attributes) and the network edges.
Average ARI results (over independent runs) for the three datasets (in the order CiteSeer, Cora, Cornell) were as follows:
* Alg. 1: 0.20, 0.12, 0.49
* attSBM: 0.17, 0.09, 0.46
* EM-GMM: 0.13, 0.06, 0.37
* SC: 0.00, 0.00, 0.02
In all scenarios, Alg. 1 outperformed the other three algorithms. Note that SC (spectral clustering of the network) has near-zero performance, indicating that the network in these datasets is not informative of the node clusters. Both Alg. 1 and attSBM (which leverage the network and node attributes) outperform EM-GMM, which uses only node attributes. These preliminary results indicate that Alg. 1 is promising, even on real datasets.
Finally, unlike the node attributes, the network was not preprocessed in the above experiments. By preprocessing the network (e.g., removing or adding edges or edge weights), it is expected that the network can also provide information, further improving the results of Alg. 1.
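For reference, the ARI scores reported above can be computed from the standard pair-counting formula; the following self-contained sketch is illustrative (`adjusted_rand_index` is a hypothetical helper, not code from the paper), and assumes both partitions are non-trivial so the denominator is non-zero.

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """ARI between two labelings via the pair-counting formula:
    (index - expected index) / (max index - expected index)."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())      # agreeing pairs
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 (same up to relabeling)
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # -0.5 (worse than chance)
```

Because the ARI is invariant to label permutations and adjusted for chance, a near-zero score (as for SC above) means the clustering carries essentially no information about the ground truth.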
[1] Datasets available at https://linqs.org/datasets/
[2] Datasets available at https://github.com/graphdml-uiuc-jlu/geom-gcn | Summary: This paper studies clustering of the node-attributed Stochastic Block Model (SBM). The authors provide an information-theoretic threshold for exact recovery under generic distributions for both edge weights and node attributes. In addition, the authors propose a clustering algorithm based on iterative likelihood maximization when edge weights and node attributes are drawn from exponential family distributions. The authors carry out experiments on synthetic data and show that the proposed algorithm outperforms existing state-of-the-art methods.
Strengths: - The theoretical setting (i.e. clustering of node attributed SBM) considered in this work is fairly general. The threshold for exact recovery is presented in a general setting without assuming any particular family of distributions for either edge weights or node attributes. The examples and implications of Theorem 1 provided on page 5 are nice.
- The threshold which involves Equation (3.3) captures an intuitive additive signal from edge connections and node attributes.
- The overall writing and presentation is very clear and generally easy to follow (although there are many typos here and there). In particular, the presentation of background and related works seem reasonably thorough.
- The empirical results, though limited to synthetic data, are promising.
- Overall, I think this paper studies a well-known problem in a more general setting than those have been considered in prior work. The new results and algorithm can be a nice addition.
Weaknesses: - Algorithm 1 requires knowing the functions $\psi^*$ and $\phi^*$. This requirement often does not hold in practice. The authors should discuss the practical implications of such a requirement and what happens if these functions are unknown.
- Though not necessary for a theory paper, it would be nice to have at least some elementary experiments on real data.
- There are many typos throughout the paper. Here are some:
- Line 95: standard deviation $\sigma^2$ -> standard deviation $\sigma$
- Line 124: while his work -> while this work
- Line 129: is find the community -> is finding the community
- Equation (4.2): subscripts for $\theta$ do not match, similarly, subscripts for $\eta$ do not match
- Lemma 2: $p_{k\ell}$ in the Equation should be $p_{ab}$?
- Line 284: in Nor(...) I think the placements for mean and variance should be reversed
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In practice, how should one use Algorithm 1 when $\psi^*$ and $\phi^*$ are unknown?
- Does Example 2 imply that an oracle to the true community is almost useless unless the oracle gives almost full access to all labels?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I could not find where the authors discuss the limitations or potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thank you for the time spent reviewing our paper. Please find below answers and comments to the weaknesses and questions you have raised. We hope this will help clarify and strengthen our contribution.
Weaknesses:
* Indeed, the distributions used to compute $\psi^*$ and $\phi^*$ are parameters of Alg. 1 and must be determined a priori, through an educated guess, for example. Note that given the distributions for edge weights and node attributes, $\psi^*$ and $\phi^*$ are uniquely determined. Nonetheless, we have performed experiments indicating that the performance of Alg. 1 does not strongly depend on the right choice of $\psi^*$ (see plots in the global rebuttal file). We will add this assumption and these results to the revised paper. Interestingly, a similar observation (that the right choice of distribution leads to marginal performance gains) was also made in [6] (see Tables 3 and 4 of [6]).
* We have not considered real datasets because this work focuses on characterizing (theoretically) and measuring (empirically) the influence of different model parameters (e.g., edge probability, edge weights, and node attributes) on the performance of recovering the clusters. Measuring this kind of influence is quite challenging when real datasets are used. In contrast, Fig. 2 in the global rebuttal file shows the relationship between the theoretical threshold for exact recovery (theorem proved in this paper) and the performance of Alg. 1 (proposed in the paper).
* Indeed, the submitted paper had some typos, and we appreciate your pointing them out. A thorough proofreading will correct them for the revised version.
Questions:
* This is a tough question and outside the scope of this paper (since we assume them to be known or guessed informatively). However, here is an idea. Since the initial clustering does not require knowledge of $\phi^*$ and $\psi^*$, one could treat the Bregman divergence as unknown within a set of distributions, and then learn the distribution (divergence) in the iterative steps. We will mention this as potential future work, and cite the papers [1,2] that are recent advancements in this direction.
* Indeed, the exact recovery threshold is not modified unless the oracle gives almost all the labels. A high-level explanation is the following.
Consider the planted partition model (two communities of equal size $n/2$, intra- and inter-community edge probabilities $p$ and $q$ with $p = a n^{-1} \log n$ and $q = b n^{-1} \log n$). Exact recovery is impossible if (at least) one node has more neighbours in the opposite community than in its own community. The probability that a given node has more neighbours in the opposite community is $P_e = e^{- (1+o(1)) \frac12 ( \sqrt{a} - \sqrt{b} )^2 \log n }$ (see Section 4.1 (Lemma 4) in the last arxiv version of the review by E. Abbé, where he names this the 'genie-aided hypothesis').
In the unsupervised setting, since there are n nodes to label, exact recovery is impossible if $n P_e$ does not go to zero. This provides the condition $(\sqrt{a} - \sqrt{b} )^2 > 2$.
In the semi-supervised setting, there are $\eta\, n$ nodes left to cluster (where $1 - \eta$ is the fraction of labels revealed by the oracle). If $\eta$ is constant, then the condition for exact recovery is unchanged.
While this negative result is not new (we refer to [25]), we highlighted it here as an example since it is a special case that we find to be counter-intuitive.
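The threshold derived above can be checked mechanically. Below is a tiny illustrative helper (an editorial sketch, not code from the paper or rebuttal) evaluating the exact-recovery condition $(\sqrt{a} - \sqrt{b})^2 > 2$ for the planted partition model.

```python
from math import sqrt

def exact_recovery_possible(a, b):
    """Planted partition model with p = a*log(n)/n and q = b*log(n)/n:
    exact recovery is information-theoretically possible iff
    (sqrt(a) - sqrt(b))^2 > 2."""
    return (sqrt(a) - sqrt(b)) ** 2 > 2

print(exact_recovery_possible(9, 1))  # True:  (3 - 1)^2 = 4 > 2
print(exact_recovery_possible(4, 1))  # False: (2 - 1)^2 = 1 < 2
```

As the rebuttal notes, a constant-fraction oracle leaves this condition unchanged, since replacing $n$ by $\eta n$ does not affect the $\log n$ scaling.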
Additional references:
[1] Shah, Shah, Wornell (2021). A computationally efficient method for learning exponential family distributions. Advances in neural information processing systems, 34, 15841-15854.
[2] Siahkamari, Xia, Saligrama, Castañón, Kulis (2020). Learning to approximate a Bregman divergence. Advances in Neural Information Processing Systems, 33, 3603-3612. | Summary: This paper studies community recovery in sparse, weighted networks, which is an important setting that is more general than the commonly-studied undirected, unweighted networks. The authors' first main contribution is to establish the information-theoretic conditions for exact community recovery in this setting, which is a form of the Chernoff information (and cleanly reduces to well-known info-theoretic thresholds in unweighted settings). Next, assuming that edge and node attributes belong to exponential families, an expression for the likelihood in terms of Bregman divergences is given, which then leads to a natural clustering procedure. Extensive numerical experiments are provided comparing the proposed procedure to other approaches.
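As a concrete illustration of the Chernoff information mentioned in the summary, here is a hedged sketch for the simplest binary-edge case, Bernoulli($p$) vs Bernoulli($q$) (this is an editorial example, not the paper's general CH divergence); the grid search over $t$ is a simple assumed implementation choice.

```python
import numpy as np

def chernoff_information(p, q, grid=10001):
    """Chernoff information between Bernoulli(p) and Bernoulli(q):
    C = -min_{0<t<1} log( p^t q^(1-t) + (1-p)^t (1-q)^(1-t) ),
    computed here by a simple grid search over t."""
    t = np.linspace(1e-6, 1 - 1e-6, grid)
    moment = p ** t * q ** (1 - t) + (1 - p) ** t * (1 - q) ** (1 - t)
    return -np.log(moment).min()

print(abs(chernoff_information(0.5, 0.5)) < 1e-9)  # True: identical distributions
print(round(chernoff_information(0.8, 0.2), 3))    # 0.223, i.e. -log(0.8)
```

The quantity is zero exactly when the two edge distributions coincide (communities indistinguishable) and grows as they separate, which is why it governs the exact-recovery threshold.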
Strengths: Deriving the information-theoretic threshold for exact recovery via the Chernoff information is perhaps not surprising, but it is a solid contribution as it encompasses a wide range of important examples. The clustering procedure derived, based on a connection between exponential families and Bregman divergences, is clean and intuitive. The simulations on synthetic data show that the clustering procedure has favorable performance in terms of the ARI, compared to other methods, which is a good validation of the theory.
Weaknesses: The paper could be improved if the authors filled in some gaps regarding a few central topics.
- Please discuss in more detail how similar / different your approach is to [19], since both leverage the connection between exponential families and Bregman divergences to come up with clustering procedures.
- The information-theoretic results are not too surprising. Does the proof require new techniques to handle the more general setting under consideration? If so, they should be discussed in the main text, as it would be of interest to theoreticians in this field.
- Please provide additional discussion around the accuracy and sample complexity of the clustering algorithm. Is it efficient? Is it guaranteed to output the MLE or similar?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - You use the term "homogeneous" a lot: can you define precisely what you mean by this?
- Section 2: You can also consider citing work on Censored Block Models, in which edges take on 3 values (present, absent, unknown)
- Theorem 1: How crucial is the assumption on the convexity of CH_t (a^*, b^*) in t?
- A clarification question: if the attribute distributions were not from an exponential family, would the likelihood be intractable to compute?
- For clarity, I would suggest defining ARI.
- In the simulations, is the ARI measured between the estimated and ground-truth communities? If so, please clarify in the main text.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
thank you for the time spent reviewing our paper. Please find below answers and comments to the weaknesses and questions you have raised. We hope this will help clarify and strengthen our contribution.
Weaknesses:
* Indeed, Alg. 1 is similar to [19] and the main (and only) difference is that their model assumes a dense weighted network (an edge is present between any pair of nodes and has non-zero weight). As most real networks are sparse (most node pairs do not have an edge), we propose a model for sparse weighted networks. Therefore, their model assumes edge weights $w_{ij}$ are drawn from an exponential family for every node pair $(i,j)$; our model assumes edges can be present or absent (Bernoulli) and a present edge $(i,j)$ has weight $w_{ij}$ drawn from an exponential family. Note that Lemma 2 established a relationship between the log-likelihood of such zero-inflated distribution and Bregman divergences. In any case, we have performed experiments that indicate that using a sparse model (Alg. 1) yields much better accuracy than the dense model assumed in [19, 20] when the network is sparse, as is the case for most real networks (see plots in the global rebuttal file). Moreover, since [19, 20] assume a dense network (all possible edges are present, $O(n^2)$) their algorithms do not scale to large networks. In contrast, the complexity of Alg. 1 depends on the number of edges, which is often $O(n\log n)$. We plan to revise the exposition of Section 4 and include these results and observations on complexity.
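To illustrate the exponential-family-to-Bregman-divergence connection invoked above, here is a generic sketch (an editorial illustration, not the paper's Lemma 2): the Bregman divergence generated by a convex $\phi$ recovers the squared Euclidean distance for $\phi(x)=x^2$ (Gaussian case) and the generalized KL divergence for $\phi(x)=x\log x$ (Poisson case).

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

# phi(x) = x^2 generates the squared Euclidean distance (Gaussian likelihood)
sq = bregman(lambda x: x ** 2, lambda x: 2 * x, 3.0, 1.0)
print(sq)  # 4.0 == (3 - 1)^2

# phi(x) = x log x generates the generalized KL divergence (Poisson likelihood)
kl = bregman(lambda x: x * np.log(x), lambda x: np.log(x) + 1, 3.0, 1.0)
print(np.isclose(kl, 3 * np.log(3) - 3 + 1))  # True: x log(x/y) - x + y
```

Maximizing an exponential-family log-likelihood is then equivalent to minimizing the matching Bregman divergence, which is what makes the hard-clustering updates tractable.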
* The novelty in this proof is the usage of the Plachky-Steinebach theorem (a generalisation of Cramér's large deviation theorem) to derive the asymptotic behaviour of the probability that a given node $u$ is predicted to be in a wrong community by the MLE (let us call this event $A_u$). Another novelty is the usage of the FKG inequality to show that the events $A_u$ and $A_v$ are positively correlated. Previous works on binary or edge-labelled SBMs typically computed this probability using ad-hoc calculations that only work for Bernoulli interactions (or interactions on a discrete, finite space). We plan to explicitly mention the Plachky-Steinebach theorem in the main text, since it is essential for our theorem and may also be of interest to some readers.
* Unfortunately, Alg. 1 comes without any theoretical guarantees. It does not necessarily compute the MLE for the instance at hand. However, Fig. 2 in the global rebuttal file shows that Alg. 1 achieves exact recovery in a region very close to the information-theoretic limit. Indeed, two-stage algorithms have been shown to achieve optimal accuracy in binary SBMs (see additional refs. [1]-[2]), and K-means achieves optimal accuracy in a Gaussian mixture model (additional refs. [3]-[4]). Thus, it is possible that the proposed algorithm can be analysed rigorously, but this challenging task is outside the scope of this conference paper.
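The sparse zero-inflated model described in the first point above (edges present with a Bernoulli probability, a present edge carrying an exponential-family weight) can be sketched as follows. This is only an illustrative sampler; all sizes and parameters here are hypothetical, not taken from the paper, and the exponential distribution is just one convenient exponential-family choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparse_weighted_sbm(n, labels, p_in, p_out, scale_in, scale_out):
    """Zero-inflated weighted network: edge (i, j) is present with a
    Bernoulli probability depending on the community labels, and a present
    edge gets a weight drawn from an exponential distribution."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            same = labels[i] == labels[j]
            if rng.random() < (p_in if same else p_out):      # Bernoulli presence
                scale = scale_in if same else scale_out
                W[i, j] = W[j, i] = rng.exponential(scale)    # edge weight
    return W

labels = np.repeat([0, 1], 50)  # two equal communities (hypothetical sizes)
W = sample_sparse_weighted_sbm(100, labels, p_in=0.2, p_out=0.02,
                               scale_in=2.0, scale_out=1.0)
```

With these parameters most node pairs have no edge, so the resulting weight matrix is sparse, matching the regime the rebuttal argues is typical of real networks.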
Questions:
* Homogeneous refers to the setting where two distributions $f$ and $g$ determine the interactions (edge weights) within and between communities, respectively. This provides for a very simple and symmetric setting. We defined it in Section 2.1, but will emphasise its meaning also in the text.
* Indeed, the Censored Block Model is a nice additional example, we will add it as an example and mention recent results such as [5].
* We noticed a typo in Theorem 1: the assumption should be that $CH_t(a,b)$ is strictly concave (which in turn implies the strict convexity of the function $\beta$ defined in line 71 p.4 of the Appendix). This strict convexity of $\beta$ is required to apply the Plachky-Steinebach theorem. However, this is not a constraining assumption for our setting, since the quantity $CH_t(a,b)$ is concave and will be strictly concave except in degenerate cases where all the probability distributions are equal and the divergence is zero. For example, if $f$ and $g$ are Gaussian with mean $\mu_1$, $\mu_2$ and variance 1, then $(1-t) D_t( f \| g ) = \frac12 t(1-t) (\mu_1-\mu_2)^2$ (where $D_t$ is the Renyi divergence of order $t$). This quantity is indeed strictly concave in $t$, except in the degenerate case where $\mu_1 = \mu_2$. This same argument can be made for any distribution in the exponential family. This technical comment was not mentioned in the paper due to lack of space, but we do plan to add a comment in the revised version.
* Indeed, the exponential family provides some advantages when considering the MLE. For example, the Pitman-Koopman-Darmois Theorem states (under some smoothness assumptions on the probability density) that sufficient statistics with bounded dimensionality (i.e., not growing with the sample size) exist if and only if the distribution belongs to an exponential family. However, computing the MLE in weighted SBMs is computationally challenging even when restricting to exponential families, and this is the goal of Alg. 1 (not necessarily met).
* Indeed, the performance of all algorithms is evaluated using the ground truth, since we have synthetic network models. We will clarify this and add a definition of the metric used (ARI).
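For readers unfamiliar with the metric mentioned above, a minimal self-contained implementation of the Adjusted Rand Index (the standard chance-corrected agreement score between two clusterings) might look like this. The formula is the usual contingency-table one; this is an illustration, not code from the paper, and degenerate single-cluster inputs are not handled:

```python
import numpy as np
from math import comb

def adjusted_rand_index(true_labels, pred_labels):
    """ARI between two partitions: 1.0 for identical partitions (up to a
    relabeling), approximately 0 for random ones."""
    true_ids, pred_ids = np.unique(true_labels), np.unique(pred_labels)
    # contingency table: co-occurrence counts of (true cluster, predicted cluster)
    cont = np.array([[np.sum((true_labels == t) & (pred_labels == p))
                      for p in pred_ids] for t in true_ids])
    sum_ij = sum(comb(int(c), 2) for c in cont.ravel())
    sum_a = sum(comb(int(a), 2) for a in cont.sum(axis=1))
    sum_b = sum(comb(int(b), 2) for b in cont.sum(axis=0))
    expected = sum_a * sum_b / comb(len(true_labels), 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)

y = np.array([0, 0, 0, 1, 1, 1])
print(adjusted_rand_index(y, np.array([1, 1, 1, 0, 0, 0])))  # prints 1.0
```

Note the invariance to label permutation, which is why ARI is preferred over raw accuracy for community detection.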
[1] Chao Gao, Zongming Ma, Anderson Y. Zhang, Harrison H. Zhou. Community detection in degree-corrected block models. The Annals of Statistics, 46(5) 2153-2185 2018.
[2] Arash A. Amini, Aiyou Chen, Peter J. Bickel, Elizaveta Levina. "Pseudo-likelihood methods for community detection in large sparse networks." The Annals of Statistics, 41(4) 2097-2122 2013.
[3] Yu Lu, Harrison H. Zhou (2016). Statistical and computational guarantees of Lloyd's algorithm and its variants. arXiv preprint arXiv:1612.02099.
[4] Chao Gao, Anderson Y. Zhang. Iterative algorithm for discrete structure recovery. The Annals of Statistics, 50(2) 1066-1094 2022.
[5] Dhara, Gaudio, Mossel, Sandon. Spectral recovery of binary censored block models. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022. | Summary: This paper studies community detection in node-attributed stochastic block models. Although these models have been studied before, this work has two main contributions: (i) the edge weights in the model are now not necessarily binary, but can also be weighted. The node attributes don’t have to be Gaussian, and can come from a more general exponential family. (ii) An iterative clustering algorithm which maximizes the likelihood by placing nodes in correct cluster based on attributes and edge weights.
Experiments on synthetic datasets are also presented.
Strengths: • The assumptions in this work are weaker and more general than those of previous work, which is a significant strength. The model's assumptions (weighted edges, non-Gaussian node attributes) are more realistic than what previous work has considered, and I consider this a nice contribution.
• To the best of my knowledge the theoretical work is sound and contains some non-trivial insights. Some of the technical techniques presented might have impact on future work.
• Experimental results on synthetic data suggest that the proposed algorithm 1 performs well compared to previous work.
Weaknesses: • Given that a motivation for this paper is to provide an algorithm for a more realistic setting, I would have expected to see experiments on real world datasets. In general, stochastic block models enjoy a very symmetric structure, which in some sense makes it nice to cluster. It would be interesting to see how the performance of this algorithm generalises to real-world graphs.
• Experimental results on synthetic data are only presented for $k=2$. Higher values of $k$ should be considered.
## Minor:
Line 73: conditioned of --> conditioned on
Line 129: is find --> is to find
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for the time spent reviewing our paper. In order to address the weaknesses pointed out in your review, we have performed novel experiments using synthetic data sets with a larger number of clusters ($k=3$ and $k=4$). Results using more clusters are qualitatively similar to the $k=2$ case but do strengthen the results in the paper (see plots in the global rebuttal file).
We have not considered real datasets because this work focuses on characterizing (theoretically) and measuring (empirically) the influence of different model parameters (e.g., edge probability, edge weights, and node attributes) on the performance of recovering the clusters. Measuring this kind of influence is quite challenging when real datasets are used. In contrast, Fig. 2 in the global rebuttal file shows the relationship between the theoretical threshold for exact recovery (theorem proved in this paper) and the performance of Alg. 1 (proposed in the paper).
---
Rebuttal Comment 1.1:
Title: Acknowledgement Response
Comment: I thank the authors for their additional experiments on larger $k$ which do improve the experimental results in this work.
I do still think that real-world experiments (albeit in a slightly adjusted form) would significantly strengthen the paper. I do appreciate the theoretical analysis of this work which is non-trivial and important (hence my positive evaluation of the paper). However, a large part of the introduction is spent justifying the additional parameters (for continuous edge weights) in the CSBM because of real-world observations. One would therefore also expect some form of real-world experiments.
---
Reply to Comment 1.1.1:
Title: Preliminary results using real datasets
Comment: Thanks for the additional comment.
Indeed, numerical experiments using real datasets will clearly strengthen our contribution, indicating that the proposed algorithm can also be successfully applied in the wild. Thus, we conducted preliminary experiments using the following three publicly available datasets used as benchmarks (all have no edge weights):
* CiteSeer [1]: $n=3279$, $m=9104$, $K=6$, $d=3703$.
* Cora [1]: $n=2708$, $m=10556$, $K=7$, $d=1433$.
* Cornell [2]: $n=183$, $m=298$, $K=5$, $d=1703$.
For each network, the original node attribute vector was reduced to dimension $d=10$ by selecting the 10 best features according to the chi-square test. Since the node attribute vectors in these datasets are binary, Alg. 1 assumes a Bernoulli distribution with $d=10$ for node attributes and Bernoulli edges (no edge weights). The initialization for Alg. 1 and attRBM used the spectral clustering of both the node similarity matrix (using node attributes) and the network edges.
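The feature-selection step described above (keeping the 10 best binary features by chi-square score) can be sketched from scratch as follows. This is a generic illustration of the classic chi-square feature score on synthetic data, not the authors' actual pipeline, and all sizes are hypothetical:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square score of each non-negative feature against class labels:
    observed per-class feature sums vs. the sums expected under independence
    (the construction behind SelectKBest(chi2)-style selectors)."""
    classes = np.unique(y)
    Y = np.array([(y == c).astype(float) for c in classes])  # k x n indicators
    observed = Y @ X                                         # per-class feature sums
    class_prob = Y.mean(axis=1)                              # fraction of samples per class
    expected = np.outer(class_prob, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
X = rng.integers(0, 2, size=(200, 5)).astype(float)  # binary attributes
X[:, 0] = y                                          # feature 0 perfectly tracks the label
scores = chi2_scores(X, y)
top = np.argsort(scores)[::-1][:2]                   # indices of the 2 best features
```

On this synthetic example the label-aligned feature receives by far the highest score, so truncating to the top-$k$ features keeps the informative coordinates.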
Average ARI results (over independent runs) for the three datasets (in the order CiteSeer, Cora, Cornell) were as follows:
* Alg. 1: 0.20, 0.12, 0.49
* attRBM: 0.17, 0.09, 0.46
* EM-GMM: 0.13, 0.06, 0.37
* SC: 0.00, 0.00, 0.02
Note that in all scenarios, Alg. 1 outperformed the other three algorithms. SC (spectral clustering of the network alone) has near-zero performance, indicating that the network in these datasets is not informative of the node clusters. Both Alg. 1 and attRBM (which leverage the network and node attributes) outperform EM-GMM, which uses only node attributes. These preliminary results indicate that Alg. 1 is promising, even when used on real datasets.
Unlike the node attributes, the network received no preprocessing in the above experiments. By preprocessing the network (e.g., removing or adding edges or edge weights), it is expected that the network could also provide information, further improving the results of Alg. 1.
[1] Datasets available at https://linqs.org/datasets/
[2] Datasets available at https://github.com/graphdml-uiuc-jlu/geom-gcn | Rebuttal 1:
Rebuttal: First and foremost, we would like to thank all four reviewers for their time spent reviewing our paper and their valuable comments.
Some of the questions raised by the reviewers can be addressed with further experiments, such as the following:
* how does Algorithm 1 perform when the network has more clusters?
* how sensitive is the performance of Algorithm 1 to the choice of the divergences $d_{\psi^*}$ and $d_{\phi^*}$?
* how does the performance of Algorithm 1 compare to the theoretical threshold for exact recovery derived in Section 3?
* how does Algorithm 1 compare to algorithms published in [19] and [20]?
The attached PDF file has figures with the results of these questions. More precisely:
* Figures 1(a) and 3(c) have 4 clusters, and Figure 2(b) has 3 clusters. (Note that all these new experiments were run with clusters of equal size and with intra-cluster and inter-cluster edge densities $f_{in}$ and $f_{out}$, respectively.)
* Figure 1 shows that using a divergence (distribution) for edge weights (Fig. 1(a)) and node attributes (Fig. 1(b)) different from the distribution used to generate the data does not impact the results.
* Figure 2 compares the performance of Alg. 1 in terms of exact recovery (fraction of times the algorithm correctly recovers the community of all nodes) with the theoretical threshold for exact recovery proved in the paper (red curve in the plots) in two settings:
(2a) binary weight with Gaussian attributes, and (2b) zero-inflated Gaussian weights with Gaussian attributes. Solid black and white squares represent fraction zero (no trial was recovered exactly) and one (all trials were exactly recovered) over 50 trials.
This is a very important numerical validation of Algorithm 1, and we plan to include it in the paper (to replace Figure 1 of the article, which does not explicitly show the theoretical curve). Due to limited time in the rebuttal phase, this new Figure 2 has a relatively large granularity and each pixel is averaged over 50 runs, but we can increase these numbers for the final version.
* Figures 3(a) and (b) compare Algorithm 1 to the V-EM algorithm of [20] and to the Bregman algorithm of [19]. Note that the algorithm of [19] is a special case of Algorithm 1 for the case where the network is dense (all possible edges are present). Results indicate that when the network is sparse, [19] has poor performance (not surprising, since it assumes a dense network). The V-EM of [20] is also designed for dense networks, and while its performance on sparse networks is not poor, Algorithm 1 is superior (also not surprising, since it was designed for sparse networks). Lastly, since the models in [19, 20] assume a dense network (all possible edges are present, $O(n^2)$), their algorithms do not scale to large networks. In contrast, the complexity of Alg. 1 depends on the number of edges, which is often $O(n\log n)$. (The three algorithms had the same initialization, and the performance of the initialization is given as a baseline to beat.)
* Finally, Figure 3(c) is similar to Figure 2(b) of the paper (binary network with Gaussian attributes) but with a larger number of clusters. We see that Algorithm 1 performs better than the algorithm of [8], which was a recently proposed algorithm to tackle sparse networks with binary edges and Gaussian attributes.
References:
[6] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, Joydeep Ghosh, and John Lafferty. Clustering with Bregman divergences. Journal of machine learning research, 6(10), 2005.
[8] Guillaume Braun, Hemant Tyagi, and Christophe Biernacki. An iterative clustering algorithm for the contextual stochastic block model with optimality guarantees. In International Conference on Machine Learning, pages 2257–2291. PMLR, 2022.
[19] Bo Long, Zhongfei Mark Zhang, and Philip S Yu. A probabilistic framework for relational clustering. In ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 470–479, 2007.
[20] Mahendra Mariadassou, Stéphane Robin, and Corinne Vacher. Uncovering latent structure in valued graphs: A variational approach. The Annals of Applied Statistics, 4(2):715 – 742, 2010.
Pdf: /pdf/63dfcd369ff87dc62f218918e583d9fb5d8c69eb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Loss Dynamics of Temporal Difference Reinforcement Learning | Accept (poster) | Summary: In summary, they study how learning dynamics and plateaus depend on feature structure, learning rate, discount factor, and reward function in the case of batch TD(0) for policy evaluation.
This paper applies concepts from statistical physics to study typical case learning curves for TD learning in linear FAs with nonlinear but fixed features (specifically, their study is targeted at policy evaluation with batch TD(0)). They conjecture a Gaussian feature equivalence: roughly, having a high-dim feature vector, the learning curves of a TD learner are conjectured to be equivalent to that of one with Gaussian features having matching mean and correlations. Further, this view allows them to use a Spectral Perspective to find a proxy for what would constitute a harder reward function to optimize (showing alignment between learning rate progress vs. the measure; shaped reward being simpler than sparse reward).
Strengths: - A step in an important direction: towards better understanding the learning dynamics of TD methods
- Proposing a creative tool for the analysis of TD methods
- Good exposition of related works
- Paper is well-motivated and clearly written
- Claims are carefully stated and supported: no overstated claims
- Experiments are pretty interesting and carefully designed
Weaknesses: - Considering tasks have a finite horizon (fixed T), it has not been made clear if the experiments are run using features that have a good representation of the remaining time in them or not. Of course, this could be my misunderstanding (to be clarified by authors in the Questions section).
- Not sure whether the comparison between fixed and annealing learning rates make too much sense in the context of this study (to be clarified by authors in the Questions section).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Are the results of the Discount Factor analysis (Fig. 3b) for a dense reward scenario? Wouldn’t this be a very different case with sparse rewards due to the issues studied by van Seijen et al. (2019)?
2. Isn’t annealing the learning rate a condition (even for tabular TD(0)) to guarantee asymptotic convergence to the optimal V with probability 1? If yes, what is the comparison of Fig. 5c vs 5d really showing us in the context of this paper?
- van Seijen *et al.* [NeurIPS 2019], "Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning".
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations have been discussed sufficiently well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive review and good questions. We attempt to address the questions below.
### Response to Questions
1. About the discount factor analysis: The scenario we study is slightly different from the one studied by van Seijen et al. (2019). In our simulations, we fix the reward function and compute the corresponding value function for each discount factor. We then compute the empirical and theoretical learning curves associated with each discount factor (which have different target value functions). We are therefore not studying the 'regularizing' effect of the discount factor, where there is a discrepancy between the metric used for performance and the discount factor used in the learning rule, as investigated in van Seijen et al. (2019) or Amit, R., Meir, R. and Ciosek, K., 2020, November. Discount factor as a regularizer in reinforcement learning. ICML. Using our theory to understand this process would be an interesting avenue for future studies. In the experiments, the reward function was a sparse localized bump near the corner of the grid world.
2. About learning rate annealing: Learning rate annealing can be a condition to guarantee asymptotic convergence, but here we examine a slightly different issue. The usual condition is that the annealing schedule for the learning rate $\eta_n$ at step $n$ should satisfy $\sum_{n=1}^{\infty} \eta_n = \infty$ and $\sum_{n=1}^{\infty} \eta_n^2 < \infty$. But many possible annealing schedules satisfy these conditions, and although they will all eventually converge, they will do so at different speeds. In Figure 3d-e, we show that our theory allows us to estimate the effect of the annealing schedule on the full learning dynamics and helps choose a schedule that converges faster. In Figure 3c, there is no annealing and the dynamics reach a plateau which our theory can predict. In the additional simulations in Figure 2 of the rebuttal PDF, we show that the predicted scaling of these plateaus is verified in a MountainCar environment. More generally, this point highlights how our approach differs from some theoretical work in RL, which analyses bounds at convergence but does not provide a description of the learning dynamics.
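The Robbins-Monro conditions quoted above can be illustrated on a toy stochastic-approximation problem, where two schedules that both satisfy the conditions converge to the same target at different speeds. This is only a generic illustration of the conditions (hypothetical step counts and schedules), not the paper's annealed TD analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_sa(schedule, steps=20000, target=3.0):
    """Toy stochastic approximation w <- w + eta_n * (sample - w).
    Any schedule with sum(eta_n) = inf and sum(eta_n^2) < inf converges
    to `target`, but different valid schedules do so at different speeds."""
    w = 0.0
    for n in range(1, steps + 1):
        sample = target + rng.standard_normal()  # noisy observation of target
        w += schedule(n) * (sample - w)
    return w

w_a = run_sa(lambda n: 1.0 / n)     # eta_n = 1/n: the iterate is the running mean
w_b = run_sa(lambda n: n ** -0.6)   # slower-decaying schedule, still valid
```

Both runs end near the target, but the residual fluctuation at a given step depends on the schedule, which is the effect the theory in the paper is said to quantify for TD learning.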
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the author's comments as soon as possible, latest tomorrow, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Title: Thanks for the clarifications
Comment: Thank you for your response. I remain positive about the paper and would like to see it accepted. My questions were clarified by the authors and as such I do not require any additions to the paper. | Summary: This paper looks to introduce a new theory for the learning dynamics in the online batch policy evaluation setting for temporal difference learning. The paper introduces the Gaussian Equivalence Conjecture, which postulates that the learning curves in TD can be modeled by Gaussian features (with per-time-step mean and covariance over features and all trajectories). Along with this modeling assumption, the work does not prove this conjecture, but instead assumes this conjecture to be true to show empirical results in modeling policy evaluation learning curves analytically on a simple gridworld MDP with “place cell” features.
Using this assumption and the example MDP, the work first shows results fitting their model with the learning curve of online policy evaluation, which is interesting in itself. Using this modeling assumption, the authors provide insight into the “hardness” of certain policy evaluation tasks and how it relates to their TD learning model with an example comparing sparse vs dense reward functions in policy evaluation. Beyond this, the work also shows examples (on the same domain) of their theory of learning dynamics predicting the effects of batch size, discount factor, learning rate and learning rate annealing. Finally, the work also uses this model to explain and predict what happens to learning dynamics in the case of reward shaping. The work uses this proposed model to reshape rewards to improve timescales of convergence by rotating the reward-scaling weights to be more aligned with features of high variance.
Strengths: Overall, I think the work is of great interest to the reinforcement learning community with a good amount of both theoretical and empirical results to back their proposed model of learning dynamics. The paper poses and answers well-founded questions, and is also well written. I only have a few (albeit important) questions I’ll pose below.
Weaknesses: My biggest issue with this paper in its current state is its set up. Currently, the proposed model is just a conjecture that seems to fit well with this one set of features (that are Gaussian it seems!) in this one gridworld environment. All the results seem to hinge on this conjecture being true, as evinced by the claim “We do not aim to provide a rigorous proof of this conjecture for TD learning but instead compute the learning curve implied by this assumption and compare to experiments on simple Markov Decision Processes”. Different forms of feature construction have been proposed throughout reinforcement learning history, including tile coding, polynomials, fourier basis etc. I’m still not convinced that this Gaussian model will carry over to learning dynamics with different features and different environments. While this is essentially my only concern with this paper (it’s very well written!), I think it is quite an important concern to address. Due to this concern, I have decided to only give this paper a weak accept. If more feature construction methods and more environments were included in these results, I would definitely be open to raising this score!
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Throughout reading this work, I only have a few section-by-section questions:
******3.1******
Could you give more specific details with regards to this environment? Maybe even in the appendix? I don’t exactly understand what you mean when you say ‘The feature map is parameterized by the bandwidth of individual “place cells”’.
******3.2******
For Remark 2 and 5, could you elaborate a bit on “fully explainable by features”? Is this referring to the case of partial observability? or is it just the most general formulation of R as a function of (s, a, s’)?
The notation for the proof for Proposition 3.1 is a bit confusing. I would suggest either simplifying by removing the notation that’s not defined here in the main paper and moving this to the appendix, or moving more of the appendix here.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see the general comment made in weaknesses to address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their support and useful questions and suggestions.
### Response to Weaknesses
We thank the reviewer for their comment. These concerns were shared by others and we have added new theoretical results and simulations to show the generality of our approach.
Specifically, we have extended our theory to include a general formulation (see global response) in terms of fourth-order moments of the features, and we show in Figure 1a-b of the rebuttal PDF that the Gaussian model is a good approximation.
In Figure 1c-d of the rebuttal PDF, we show that our theory also predicts the learning curves for agents with polynomial and Fourier features.
In Figure 2 of the rebuttal PDF, we show that the scaling in terms of hyperparameters ($\eta, B$) predicted by our theory (equation B.24) is verified in a MountainCar environment.
### Response to Questions
1. About Section 3.1: The 'place cell' terminology comes from neuroscience and refers to basis functions where each feature is a 2-D Gaussian bump localized at a single location in the 2-D space (neurons with such receptive fields are found in the hippocampus). These are also called RBF features. In the simulations, we parametrize the feature maps by the width of the individual Gaussian bumps. The environment is a 2-D grid in which the agent performs an unbiased random walk. We will give an extended description of the environment and simulation details in the Appendix.
2. About Section 3.2: Note that we will merge these remarks into a single one. Here, we do not investigate the case of a partially observable MDP. Instead, 'fully explainable' refers to the case where the true value (or reward) function can be described with zero error within the space spanned by the features. If the true value (or reward) function has components that lie outside of the space spanned by the features, these components will never be explainable by a TD algorithm using this feature space. This is analogous to a target function outside the hypothesis class in supervised learning. In the Appendix, we provide the full formula taking into account the unexplainable case.
3. About the proof of Proposition 3.1: We will simplify our description of the proof outline in the main text. We will convey the intuition and methods used and we will link to the relevant equations in the Appendix instead.
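The 'place cell' (RBF) feature map described in point 1 above could be sketched as follows; the grid size and bandwidth are hypothetical, and this is an illustration of the general construction rather than the paper's exact parametrization:

```python
import numpy as np

def place_cell_features(positions, centers, bandwidth):
    """RBF ('place cell') feature map: feature j evaluated at position x is
    exp(-||x - c_j||^2 / (2 * bandwidth^2)), a 2-D Gaussian bump at c_j."""
    d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

side = 8  # hypothetical grid side length
states = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
Phi = place_cell_features(states, states, bandwidth=1.5)  # one cell per state
```

Each row of `Phi` is the feature vector of one grid state; widening the bandwidth makes neighbouring features overlap more, which is the knob the experiments reportedly vary.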
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the additional results and responses.
It seems that most other reviewers have the same qualm with the work: the Gaussian Equivalence Conjecture. Given the new results presented in the rebuttal PDF, I believe the Gaussian Equivalence Conjecture is a decent modelling assumption that covers a representative range of feature representations. With these new results, I am bumping my score up to a 7. If accepted, I believe the work to be a valuable contribution to the reinforcement learning community, and opens up new directions of research in trying to understand and model TD(0) learning curves.
On that note, a final suggestion: instead of assuming a _conjecture_ over features, why not phrase it as a _modelling assumption_ instead? Much of reinforcement learning is just (good or bad) modelling assumptions made over the world. This change of argument might serve the paper better.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their appreciation of the new results we presented and for raising their score. The reviewer's suggestion is valuable; we will consider it while preparing the final submission. A related approach could be to first phrase the most general recursion, which depends on fourth cumulants; then state that our results simplify under a Gaussian features modeling assumption; and then state that the simplified form is more broadly applicable under the Gaussian Equivalence conjecture and present numerical evidence for it. We do not want to totally remove the Gaussian Equivalence conjecture, because we believe it should apply for high-dimensional features. Further, it is much simpler to analyze than the most general recursion, which depends on fourth moments (see rebuttal). Lastly, we think this approximation has the potential to trigger follow-up work. We would also like to hear other reviewers' responses.
Strengths: The paper studies the learning dynamics of TD(0) from a novel perspective, which could provide readers with new insights and understandings
Weaknesses: 1. The whole framework relies on an important hypothesis that is not sufficiently verified in the TD(0) context. The Gaussian equivalence conjecture is only justified using a toy MDP (Fig.1) without explaining how accurate this hypothesis is for general MDPs. The provided example is using RBF features and RBF reward, which could be favorable to the given theory.
2. The connection to least-squares TD methods is missing. In the same regime as the current paper, LSTD has been extensively studied for its convergence and learning dynamics (see, for example, Tagorti and Scherrer (2015) and Pan et al. (2017)). It does not rely on the Gaussian assumption and should be compared with the convergence result in the current paper.
Ref:
- Tagorti, M. and Scherrer, B., 2015, June. On the Rate of Convergence and Error Bounds for LSTD (λ). In *International Conference on Machine Learning* (pp. 1521-1529). PMLR.
- Pan, Y., White, A. and White, M., 2017, February. Accelerated gradient temporal difference learning. In *Proceedings of the AAAI Conference on Artificial Intelligence* (Vol. 31, No. 1).
3. It is unclear what Sec.4 is trying to demonstrate. The matrix A naturally appears in LSTD and has nothing special w.r.t. the current Gaussian equivalence setup.
Minor
- L136: correlation here means autocorrelation instead of Pearson product-moment correlation
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Q1: Any comments on why the Gaussian equivalence assumption would hold for general MDPs?
Q2: How are the current convergence results different from the typical results from the LSTD literature?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: The paper did not sufficiently explain its connection to the literature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their good questions and for pointing us to the LSTD literature. Below we address the weaknesses and questions.
### Response to Weaknesses
We thank the reviewer for their comment and have added both derivations and simulations to show the generality of our framework. We analyzed non-RBF features, including polynomial and Fourier features (see Figure 1c-d of the rebuttal PDF). We also verify the predicted scaling of the fixed point of the dynamics with hyperparameters ($\eta, B$, equation B.24) for policy evaluation using TD(0) in a Mountain Car environment (Figure 2 of the rebuttal PDF).
We also extended our theory to account for the non-asymptotic and non-Gaussian cases. Further, we investigated more generally the question of Gaussian equivalence and looked at some simple exactly solvable models (at any finite $N,B$) where the joint limit is well described by a Gaussian learning curve (i.e., it is insensitive to higher moments of the features; see Figure 1a-b of the rebuttal PDF). There is a rich literature trying to establish universality in these high dimensional learning settings (see for instance Goldt et al., 2020, arXiv:2006.14709; Hu and Lu, 2020, arXiv:2009.07669; Gerace et al., 2022, arXiv:2205.13303).
Second, we will add a detailed comparison with the LSTD literature to our related works section. In summary, we are merely studying online stochastic temporal difference learning where randomly sampled episodes (state sequences $\{ s_\mu(t) \}$) are used to estimate the gradient at each incremental step (more like SGD dynamics). This means at each iteration, the weights $w_n$ are updated incrementally with fresh samples of data. This is in contrast to LSTD algorithms where after observing $n$ samples, the best linear fit to the weights is performed (finding weights $w$ so that $\sum_{t=1}^n [\psi_t - \gamma \psi_{t+1}] \psi_t \cdot w = \sum_{t=1}^n \psi_t R_t$). In the papers cited by the reviewer, the authors are computing bounds on the error of the linear solution as a function of $n$. This generates a different kind of convergence rate (for example $O(n^{-1/2})$ in the first paper cited) than what we observe in the online/SGD setting, which are very dependent on feature structure and SGD noise. Further, the style of the analysis carried out in these papers (often worst case bounds) is different than what we pursue in the present work (average/typical case analysis in high dimension). Therefore, we believe our work presents a novel and complementary approach to study dynamics of convergence in reinforcement learning, specifically by studying the online setting of TD learning.
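To make this contrast concrete, here is a minimal sketch under hypothetical synthetic features (random Gaussian features with rewards linear in the features; this is not the model fitted in the paper): online TD(0) updates the weights incrementally from fresh episodes, while LSTD accumulates the linear system $Aw=b$ (in the form written above) and solves it once.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, gamma, eta, B = 10, 10, 0.9, 0.1, 8
w_star = rng.standard_normal(N)  # hypothetical "true" value weights

def sample_episode():
    # Fresh random features psi(s_t) each episode; rewards linear in features.
    psi = rng.standard_normal((T + 1, N)) / np.sqrt(N)
    return psi, psi[:T] @ w_star

# Online TD(0): incremental stochastic-gradient-style updates with fresh data.
w_td = np.zeros(N)
for _ in range(500):
    grad = np.zeros(N)
    for _ in range(B):
        psi, R = sample_episode()
        for t in range(T):
            delta = R[t] + gamma * psi[t + 1] @ w_td - psi[t] @ w_td  # TD error
            grad += delta * psi[t]
    w_td += eta * grad / (B * T)

# LSTD: accumulate A and b over the same data budget, then solve A w = b once.
A, b = np.zeros((N, N)), np.zeros(N)
for _ in range(500 * B):
    psi, R = sample_episode()
    for t in range(T):
        A += np.outer(psi[t] - gamma * psi[t + 1], psi[t])
        b += R[t] * psi[t]
w_lstd = np.linalg.solve(A, b)
```

In this toy setting both estimators approach `w_star`, but the paths there differ: TD(0) carries SGD noise at every step, whereas LSTD is a one-shot least-squares solve.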
Section 4 is trying to demonstrate what kinds of dynamics are observed when the SGD noise is negligible (The setting used in most LSTD papers and the approach taken by Lyle et al arXiv:2206.02126). This gives an ordering of reward/value functions which are easier or harder to learn. We agree that the $A$ matrix will also appear as the limiting value of the matrix in the LSTD algorithm and will comment on this similarity. This is a consequence of convergence of empirical covariances $\frac{1}{B} \sum_{\mu} \psi_\mu \psi_\mu^\top \to \Sigma$ to the population average of the feature covariance.
We will specify in line 136 that the correlations are not Pearson correlations.
### Response to Questions
1. We have worked out a general solution to the learning curves which closes at the level of the fourth cumulants (see global response). We also looked at some special cases where we can see why Gaussian equivalence should hold in high dimension. First, consider the case where each feature $\psi_i$ is statistically independent. The variable $\hat{V} = \psi \cdot w$ should obey a central limit theorem and behave as a Gaussian random variable when $N$ is large. Similarly, the random variable $\frac{1}{\sqrt B}\sum_{\mu=1}^B \Delta_\mu \psi_{\mu i}$ is also a sum of a large number of independent variables. We would thus expect the algorithm to depend only on the mean and variance of these variables. A concrete example where $T=1$ is provided in Figure 1 of the rebuttal PDF. We provide an exact solution and show that dependencies on higher cumulants vanish as $N,B\to\infty$.
2. The current convergence results are for online learning with SGD rather than for LSTD which can be considered as solving a full least squares problem at each iteration $n$. In the first case, the model does not perfectly fit the examples it has seen at finite $n$, whereas in LSTD, the model is fitting the observed samples as well as possible by solving the linear system
$A w = b \ , \ A = \sum_{t} (\phi_t - \gamma \phi_{t+1}) \phi_t^\top \ , \ b=\sum_t \phi_t R_t$
Both algorithms are interesting as methods to learn a value function from samples, but the dynamics are both conceptually and practically distinct. We will clarify this in the paper.
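The central limit argument in answer 1 can be illustrated numerically; the following is a toy sketch (i.i.d. uniform features, which are non-Gaussian, and a fixed readout not tied to the paper's model) showing that the projection $\hat V = \psi \cdot w$ loses its excess kurtosis as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def vhat_excess_kurtosis(N, samples=50_000):
    # Independent non-Gaussian features: uniform on [-sqrt(3), sqrt(3)], unit variance.
    psi = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(samples, N))
    w = np.ones(N) / np.sqrt(N)          # fixed readout weights
    v = psi @ w                          # \hat{V} = psi . w
    z = (v - v.mean()) / v.std()
    return np.mean(z ** 4) - 3.0         # excess kurtosis; 0 for a Gaussian

low_dim = vhat_excess_kurtosis(2)        # clearly non-Gaussian (about -0.6)
high_dim = vhat_excess_kurtosis(100)     # nearly Gaussian (about 0)
```

A single uniform feature has excess kurtosis $-1.2$; a sum of $N$ independent standardized features has excess kurtosis $-1.2/N$, so the Gaussian description becomes accurate in high dimension.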
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the author's comments as soon as possible, latest tomorrow, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: Thank you for the additional examples and results. However, the discussion regarding LSTD remains unsatisfactory. To begin with, LSTD can certainly be applied to incremental settings (Geramifard et al., 2006) where samples are freshly generated at each step. More importantly, the current paper does not provide a clear and comparable result on convergence rates. For example, the annealing strategy gives the rate in L251, which is similar to Dalal et al. (2018, Theorem 3.1), and their results are non-asymptotic. Finally, a similar analysis exists using a Gaussian approximation (or CLT) for the convergence of LSTD (Konda 2002, Chapter 6).
Overall, the paper did not discuss the existing literature in RL sufficiently, and the theoretical results are not significant enough. Thus I keep my score.
Reference:
- Geramifard, A., Bowling, M., Zinkevich, M. and Sutton, R.S., 2006. iLSTD: Eligibility traces and convergence analysis. Advances in Neural Information Processing Systems, 19.
- Dalal, G., Szörényi, B., Thoppe, G. and Mannor, S., 2018, April. Finite sample analyses for TD (0) with function approximation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
- Konda, V., 2002. Actor-critic algorithms (Ph.D. thesis). Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
---
Reply to Comment 1.2.1:
Title: Response to Comment
Comment: We thank the reviewer for their response. We hope to clarify some of the issues raised above.
1. We are not claiming that LSTD cannot be used with incremental samples. We are stating that the **updates to the weights** in LSTD are not an instantaneous stochastic gradient update as in TD learning (what we study), but rather the instantaneous solution to a linear system of equations. Compare Sections 2.1 and 2.2 of the paper [1]. For TD($\lambda$) (Section 2.1) the updates to the weights are incremental given the new sample. In Section 2.2 the new weights are solved as $w = A^{-1} b$.
2. We will cite Dalal et al. regarding the upper bound they obtain for annealed learning rates. The bound in their Theorem 3.1 is distinct from our result (our typical-case theory does not imply a scaling of $e^{- (\lambda / 2) n^{1-\sigma}}$ when annealing is sufficiently fast). However, the second term in their bound (when annealing slowly) would match the scaling of our derived asymptote $M \sim O(\eta_n) \sim O(n^{-\sigma})$. We will mention this.
3. The argument in Konda (2002) uses the central limit theorem to estimate the error of the solution to the above linear system $w=A^{-1}b$. This is a standard tool in the analysis of linear systems where the matrix $A$ and vector $b$ converge to steady state values. We instead study the dynamics of stochastic gradient TD updates through iterations. The vector $w_n$ has a different mean and covariance in TD learning compared to LSTD.
As we mentioned in our rebuttal, we are planning on adding a related works section that describes the comparison of our work to LSTD and explains why the two are different (like the difference between Sections 2.1 and 2.2 in [1]).
[1] Geramifard, A., Bowling, M., Zinkevich, M. and Sutton, R.S., 2006. iLSTD: Eligibility traces and convergence analysis. Advances in Neural Information Processing Systems, 19.
---
Rebuttal 2:
Title: Review Discussion
Comment: We are following up to hear if our rebuttal addressed the main concerns of this reviewer or if they have any remaining questions that should be answered before the discussion period ends today. Any comments would be greatly appreciated. Thank you for your time. | Summary: This paper provides a theoretical model that predicts the dynamics of TD learning. The theory assumes that the distribution of feature vectors is, in some sense, equivalent to a Gaussian distribution and predicts the value estimate at each iteration. The theory reveals a rich set of phenomena such as plateaus in TD learning.
Strengths: The complete dynamics of TD learning have not been previously studied. And the paper provides a unique perspective on this problem.
The dynamics of TD learning according to the theoretical prediction are close to those in experiments, verifying the usefulness of the theory in these experiments.
The theory is also consistent with some existing tricks to improve TD learning such as reward shaping and step-size annealing.
Weaknesses: The paper does not mention the limitations of its main assumption: the Gaussian Equivalence Conjecture. It verifies it using experiments in some small MDPs but these experiments do not tell when this assumption holds.
The paper does not explain clearly its theoretical results, making it difficult to understand these results.
The writing of the paper needs to be improved. Specifically, many sentences and notations remain confusing to readers. See Questions for details.
typos:
line 73: the features -> the vector of features
line 78: the offline setting
line 79: width -> with
line 245: two "learning"s
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Is T a constant or a random variable? If T is a constant you are considering the finite-horizon setting and the value function is horizon-dependent. If T is a random variable you need to specify how it is defined.
"the algorithms bootstraps its current predictions to estimate future states" Not sure what you mean here. I think you mean algorithm, not algorithms. And I think you mean bootstrapping from the estimated next state's value.
line 134: what are "the learning curves"? "high dimensional features" how many dimensions are needed? What do you mean when you say a set of learning curves is equivalent to another set of learning curves?
eqn 3: what is the meaning of <x>_{y} in your notation?
Line 138: "higher order cumulants of the features is negligible in high dimensional feature spaces under the square loss". Again, what do you mean by square loss. What do you mean by "higher" order "cumulants" of the features?
Figure 1: what is this diffusion process?
Proposition 3.1: what is the meaning of <>_s in the definition of L_n?
Line 191: this remark seems to be much more useful. Why not present your result without assuming that B -> infty? Could you briefly explain under what conditions the theory holds when B is not infinity?
Line 195: this remark also seems to be much more useful. Again, why not present your result without assuming that the value function is representable?
Line 203: how is this remark different from remark 2?
Line 205: what is w (without _n)?
Line 209: "which can be learned easily and which require more sampled trajectories" Why is it "easily" when it requires more sampled trajectories?
Line 216: "The theory predicts that," I do not see which part of your theory gives this result. Furthermore, how is this result different from the rate of convergence in a linear dynamical system?
Proposition 5.1: what is a fixed point of a learning curve?
How is eqn B.24 derived?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their careful reading of our paper and their detailed questions. We tried our best to clarify and address each of the weaknesses and questions raised below.
**Response to Weaknesses**
1. We will add an acknowledgement that our paper relies on an assumption (Gaussian equivalence) which cannot be validated or proven in general. Since the original submission, we have been able to derive more general error estimates (see Global response) and, in some simple cases, establish that Gaussian equivalence will hold in high dimension (learning curves do not depend on higher order cumulants; see Figure 1 of the rebuttal PDF). Based on this new result, we expect Gaussian equivalence under a variety of conditions, including: (a) the fourth moments are close to those of the corresponding Gaussian, or (b) the features are close to independent (in some basis) and $N,B$ are both large. However, a more general set of conditions under which universality should hold would require a more careful proof, so we will acknowledge this limitation explicitly in the discussion.
2. We thank the reviewer for pointing out areas where we can improve our writing and explanation of our results. We address the points raised by the reviewer in the Responses to Questions below.
### Responses to Questions
We thank the reviewer for such detailed reading and questions. Below are our answers which we will include in the revised manuscript:
1. In our model $T$ is a constant that does not scale with the size of the features $N$ or the batch size $B$. We will state this more clearly in the problem setup.
2. We agree that we wrote this in a confusing way. We will fix this sentence so it reads *"the algorithm bootstraps from the current estimate of the next state's value"*
3. Line 134: by learning curve, we mean $\mathcal{L}_n$, the value estimation error as a function of iteration $n$. Our new theoretical work and simulations in Figure 1 of the rebuttal PDF show that, for high dimensional features, the contribution of the higher order cumulants is negligible.
4. The notation $\left< x \right>_y$ is borrowed from physics and denotes an average of the function $x$ which depends on the random variable $y$.
5. Line 138: by the square loss, we mean that the evaluation metric is a squared error between predicted and true value, $(\hat{V} - V)^2$. By cumulants of features we mean the fourth, fifth, etc. cumulants of the random variables $\psi_i(s_t)$. For example, the second cumulant is the covariance $\left< \psi_i(s_t) \psi_j(s_{t'}) \right> - \left< \psi_i(s_t) \right> \left<\psi_j(s_{t'}) \right>$. The fourth cumulant for mean zero random variables $\{ \psi_i \}$ is $\left< \psi_i \psi_j \psi_k \psi_l \right> - \left< \psi_i \psi_j \right>\left< \psi_k \psi_l \right> - \left< \psi_i \psi_k \right>\left< \psi_j \psi_l \right> -\left< \psi_i \psi_l \right>\left< \psi_j \psi_k \right>$. Generally, the cumulants are the coefficients in the Taylor series of the function $m(c) = \ln \mathbb{E}_{\psi} \exp\left( c \cdot \psi \right)$ around $c = 0$. Our new theoretical work shows we can handle arbitrary fourth cumulants $\kappa$, and our simulations show (Figure 1 of the rebuttal PDF) that, as assumed, the contribution of higher order cumulants becomes negligible as dimension increases.
6. The diffusion process is an unbiased (isotropic) random walk in the 2D grid world state space.
7. $\left< \right>_s$ denotes average over states.
8. Line 191: We expect our result can hold at small $B$ if either the features are close to Gaussian (small fourth cumulant $\kappa$) or if the learning rate is small so that the SGD noise is small. In general, we expect the Gaussian equivalence to only kick in for large $N,B$, but we empirically observe that it can work well at small $N,B$ as well.
9. Line 195: We present the main text result with a representable value function to simplify the main text expression. We provide the full formula in the Appendix.
10. Thank you for pointing out the redundancy in line 203. We will combine these two remarks.
11. Line 205: nice catch! It should be $w_n$. We will change this.
12. Line 209: This was a badly worded sentence. What we mean to say is that our theory can distinguish between easy and hard reward functions. We will reword the sentence appropriately.
13. Line 216: the theory predicts this expression since the equation for $M$ in the zero SGD noise limit is $M_{n+1} = (I-\eta A) M_n (I-\eta A)^\top$. Diagonalizing $A$ in its right eigenvectors, we can arrive at the spectral equation in line 216. We will add a detailed derivation in the appendix.
14. In proposition 5.1, the fixed point of the learning curve or the dynamics would correspond to a value where $L_{n+1}=L_{n}$ and $M_{n+1} = M_n$. We show that there can exist non-zero solutions to these equations that have a scale of $\mathcal{O}(\eta \gamma^2 /B)$.
15. There are many ways to derive equation B.24. Perhaps the simplest is to expand $M$ and $Q$ in a power series in $\eta$ so that $M = M_0 + \eta M_1 + \eta^2 M_2 + ...$ and $Q = Q_0 + \eta Q_1 + ...$. Plugging this into the fixed point condition in equation B.23 and solving order by order yields $M_0 = 0$ and $A M_1 + M_1 A = \frac{1}{T^2 B} \sum_{tt'} Q_0(t,t') \Sigma(t,t')$
where $Q_0$ is independent of $M_1$.
We see that the leading order scaling (in $\eta$) is thus $M \sim \mathcal{O}(\eta)$. This procedure can be repeated by expanding $M, Q$ in power series in $\gamma$ and $B^{-1}$ and extracting the leading order scalings, namely that $M\sim \mathcal{O}(\gamma^2)$ and $M \sim \mathcal{O}(B^{-1})$. Lastly, one can verify that $M \sim \mathcal{O}(\eta \gamma^2 /B )$ is self-consistent. We will add a more detailed set of arguments for this scaling in the Appendix.
We hope these questions helped clarify our results. We will aim to improve the writing around these topics in the text and provide additional explanation of our notation and assumptions.
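As a quick numerical illustration of the zero-SGD-noise recursion discussed in answer 13 — a sketch using a small hypothetical symmetric $A$ (not a matrix from the paper) — each eigen-mode of $M$ decays exactly as $(1-\eta\lambda_i)^{2n}$:

```python
import numpy as np

N, eta, n_steps = 3, 0.1, 50

# Hypothetical symmetric positive-definite A with well-separated eigenvalues.
A = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.5, 0.1],
              [0.0, 0.1, 2.0]])
M = np.eye(N)
P = np.eye(N) - eta * A

# Zero-noise dynamics: M_{n+1} = (I - eta A) M_n (I - eta A)^T
for _ in range(n_steps):
    M = P @ M @ P.T

# Project M onto each eigenvector of A and compare with (1 - eta lambda_i)^(2n).
lam, V = np.linalg.eigh(A)
ratios = np.array([V[:, i] @ M @ V[:, i] / (1 - eta * lam[i]) ** (2 * n_steps)
                   for i in range(N)])   # all ratios are 1 up to float error
```

This is the diagonalization step behind the spectral equation in line 216: for symmetric $A$, $v_i^\top M_n v_i = (1-\eta\lambda_i)^{2n}\, v_i^\top M_0 v_i$.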
---
Rebuttal Comment 1.1:
Title: Discussion Follow Up
Comment: We are following up to hear if our rebuttal addressed the main concerns of this reviewer or if they have any remaining questions that should be answered before the discussion period ends. Thank you for your time. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and suggestions for additional theoretical justifications and experiments. Based on the reviews, we have added a more in-depth analysis of non-Gaussian features at arbitrary dimension and can show that the learning curves close under fourth moments of the features and that the predicted plateau of $\mathcal{O}(\frac{\eta \gamma^2}{B})$ is preserved. We have added more experiments testing the range of validity of our theory's predictions, along with figures that we will include in the revised manuscript (see the attached PDF), showing:
1. Simulations showing why the Gaussian equivalence ansatz is reasonable in the high dimension limit based on a toy model where $T=1$ (SGD) and each feature is independent. We solve the model for arbitrary fourth cumulant $\kappa$ at any dimension $N$ and find that the learning curve scales as $\left[(1-\eta)^2 + \frac{\eta^2(N + 1+\kappa)}{B} \right]^n$, which loses dependence on $\kappa$ in the large $N,B$ limit. This is intuitive since $\hat{V} = \frac{1}{\sqrt N} \sum_i \psi_i w_i$ should behave as a Gaussian random variable during training even if $\psi_i$ is non-Gaussian.
2. More experiments showing the accuracy of our theory on the random walk MDP with polynomial features of varying degrees and Fourier features with different lengthscales/bandwidths. This provides more evidence that our theory (and the Gaussian approximation) is reasonable for many types of features.
3. We have run policy evaluation using TD(0) with a pre-trained policy on Mountain Car (we perform policy evaluation with Fourier features) and show that the loss curves exhibit asymptotes that scale linearly with $\eta$ and inversely with $B$. Since the feature space dimension $N$ and episode length $T$ are both very large, we do not attempt to estimate the $\Sigma(t,t')$ matrix. However, our theory's prediction of the asymptote allows us to accurately capture the scaling of the plateau with various hyperparameters.
We hope that these simulations give more evidence that our setting and assumptions are reasonable. Given more time, we can add additional tests of other types of features.
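The toy-model learning curve quoted in point 1 can also be probed by direct evaluation (illustrative numbers only): the fourth-cumulant contribution to the per-step contraction factor is $\eta^2\kappa/B$, which vanishes in the large-$B$ limit.

```python
def rate(eta, N, B, kappa):
    # Per-step factor of the toy learning curve [(1-eta)^2 + eta^2 (N+1+kappa)/B]^n
    return (1 - eta) ** 2 + eta ** 2 * (N + 1 + kappa) / B

eta, kappa = 0.1, 6.0  # kappa = 6 mimics a strongly non-Gaussian feature
gap_small = rate(eta, 10, 10, kappa) - rate(eta, 10, 10, 0.0)          # eta^2 * 6 / 10   = 6e-3
gap_large = rate(eta, 1000, 1000, kappa) - rate(eta, 1000, 1000, 0.0)  # eta^2 * 6 / 1000 = 6e-5
```

So at $N=B=10$ the non-Gaussianity visibly shifts the rate, while at $N=B=1000$ its effect is two orders of magnitude smaller, matching the claim that the dependence on $\kappa$ is lost in the large $N,B$ limit.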
In addition, we also will make several changes to the paper
1. In the new version of our paper's Appendix we will also provide a derivation of a theoretical learning curve which is non-asymptotic (does not require $N,B$ large) and for any distribution. The SGD noise term in the resulting theory depends on the fourth moments of features $\kappa_{ijkl}(t_1,t_2,t_3,t_4) = \left< \psi_i(t_1)\psi_j(t_2)\psi_k(t_3)\psi_l(t_4) \right>$. The loss is still $\mathcal{L}_n = \frac{1}{N} \text{Tr} M_n \bar{\Sigma}$ but with the dynamics
\begin{align}
M_{n+1} &= M_n - \eta A M_n - \eta M_n A^\top + \frac{\eta^2 (B - 1)}{B} A M_n A^\top \nonumber \\
&+ \frac{\eta^2}{B T^2} \sum_{t,t'} \text{Tr}\, \kappa(t,t',t,t') \left< (w_R - w_n)(w_R - w_n)^\top \right> \nonumber \\
&+ \frac{\gamma \eta^2}{B T^2} \sum_{t,t'} \text{Tr}\, \kappa(t,t',t,t'+1) \left< w_n (w_R - w_n)^\top \right> \nonumber \\
&+ \frac{\gamma \eta^2}{B T^2} \sum_{t,t'} \text{Tr}\, \kappa(t,t',t+1,t') \left< (w_R - w_n) w_n^\top \right> \nonumber \\
&+ \frac{\gamma^2 \eta^2}{B T^2} \sum_{t,t'} \text{Tr}\, \kappa(t,t',t+1,t'+1) \left< w_n w_n^\top \right>
\end{align}
The trace is taken over the last two of the feature indices of $\kappa$. All weight averages can still be expressed in terms of $M_n$ like in the current Appendix. This prediction still recovers a potential fixed point in the dynamics at scale $\mathcal{O}(\eta\gamma^2/B)$, so this prediction is actually universal. Computing this theory is even less tractable than our original result due to the number of entries associated with the $\kappa$ object $\sim N^4 T^4$.
2. We also will provide more room to explain and introduce our notation (for instance that $\left< \right>$ denotes averaging) and will more carefully introduce the mathematical objects which appear in our main text.
3. We will provide a more extensive comparison of the algorithm considered in our work (online TD learning) to other algorithms in the literature including TD Least Squares (TDLS).
We hope that the reviewers will take these updates into consideration when re-evaluating our work.
Pdf: /pdf/901fdd8160a51edd175694cb4b1d6c26221ef820.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces the concept of statistical physics to analyze reinforcement learning models. The authors propose a theory of learning dynamics for RL, with an emphasis on the role of linear function approximation. They investigate how strategies such as learning rate annealing and reward shaping can positively modify learning dynamics and surmount plateaus. The paper concludes by revealing new tools for developing a theory of learning dynamics in RL, thereby setting the stage for future research in this domain.
Strengths: 1. This paper introduces a novel method using statistical physics to formulate a theory of learning dynamics in RL.
2. The authors derive an analytical formula for the typical learning curve, demonstrating how their theory can predict the scaling of both learning convergence speed and performance plateaus based on problem parameters. This predictive ability is a significant advantage of the paper.
3. The paper outlines how this theory can assist in understanding and guiding design principles when selecting meta-parameters in RL algorithms. Such insights can enable practitioners to gain a deeper comprehension of how these factors influence the RL learning process.
Weaknesses: 1. The paper does have limitations regarding practical implications. Although it presents a theoretical framework and makes predictions about learning dynamics, the methods utilized, such as linear approximation, deviate significantly from current reinforcement learning (RL) methodologies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there a particular reason for choosing batch sizes that are not powers of 2, such as N=3, N=30, N=20, for the experiments? As far as I am aware, batch sizes that are powers of 2 are most commonly used for practical considerations.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No concerns about negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and useful questions. We respond to the weaknesses and questions below.
**Responses to Weaknesses**
The reviewer is correct that our theory is limited to linear function approximation. This allows us to make strong predictions, but does not capture interesting aspects of deep learning such as feature learning. We address this in our limitations section but will expand on it further.
However, we think that even in the linear model, some interesting statistical and dynamical phenomena such as SGD effects arise that are worthy of study (Paquette et al 2022 arXiv:2205.07069, Simon et al arXiv:2303.15438 2023, Mignacco et al 2020 arXiv:2006.06098).
Further, if the Gaussian equivalence/universality ideas hold in RL, this could allow for future analyses of two-layer neural networks which learn features as they optimize the value function (such as in Goldt et al., arXiv:2006.14709, 2020).
**Response to Questions**
There is no particular reason we chose the batch sizes we did. We can add a plot with powers of two if the reviewer thinks this is important. It would look very similar with plateaus evenly spaced on the log-scaled y-axis (see also scaling as a function of batch size on the MountainCar simulation in Figure 2 of rebuttal PDF).
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the author's comments as soon as possible, latest tomorrow, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: Thank you for the additional responses.
I acknowledge the potential for extending the linear function approximation into future studies.
Regarding the batch sizes, I wanted to confirm that the authors haven't selectively chosen specific batch sizes that could unduly favor their hypothesis. I comprehend that the results have exhibited consistency across various batch sizes, and that the numbers have not been selectively chosen. As a suggestion, opting for numbers that are powers of two could help demonstrate that the choices are not biased.
---
Reply to Comment 1.2.1:
Title: Discussion Response
Comment: Thank you for your response!
*Regarding the batch sizes, I wanted to confirm that the authors haven't selectively chosen specific batch sizes that could unduly favor their hypothesis. As a suggestion, opting for numbers that are powers of two could help demonstrate that the choices are not biased.*
Yes, we confirm that these batch size choices are not biased. If the paper is accepted we will use powers of 2 for the batches in this figure. The plot still shows good agreement between experiment and theory.
---
Rebuttal 2:
Title: Batchsize Powers of Two Data
Comment: Though we cannot add new links to figures at this point due to the response period policy, below we provide a table of the experimental and theoretical value errors $\mathcal{L}_n$ at times $n \in [1,10,100]$. The experimental losses are averaged over 10 random training experiments. The full loss curve $\mathcal{L}_n$ will be included in the final version of the paper.
- batch = 1: expt [0.62140151, 0.04056274, 0.03076534]; theory [0.62642096, 0.03724296, 0.02816423]
- batch = 2: expt [0.61854012, 0.02638373, 0.01317829]; theory [0.62018211, 0.0263546, 0.01381489]
- batch = 4: expt [0.58340318, 0.02736459, 0.00678785]; theory [0.61706269, 0.02101523, 0.00684711]
- batch = 8: expt [0.6271198, 0.01924012, 0.00419846]; theory [0.61550298, 0.01837126, 0.00341322]
- batch = 16: expt [0.62514669, 0.01700518, 0.00193409]; theory [0.61472312, 0.01705564, 0.00170856]
- batch = 32: expt [0.62464086, 0.01665385, 0.00081263]; theory [0.61433319, 0.01639942, 0.00085928]
Bayesian target optimisation for high-precision holographic optogenetics | Accept (spotlight) | Summary: This paper proposes a new method for limiting off-target optogenetic stimulation based on Gaussian Process modeling. In holographic photostimulation, the goal is to excite specific neurons via targeted laser light, but widespread expression of opsins may result in additional neurons not in the desired population ("off-target" neurons) also firing action potentials. The proposed method uses approximate GP inference (MAP estimation) in combination with a novel gradient-based optimization method to refine target locations in order to minimize the $L_2$ distance between the desired and evoked patterns of stimulation.
This is a very nice paper that combines good modeling with a very thoughtful approach to experimental realities. While direct experiments will be needed to fully test its efficacy, it stands to make an important contribution to this particular neuroscience application.
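To make the optimization idea concrete, here is a toy sketch with entirely hypothetical numbers, where a logistic Gaussian-bump response stands in for the paper's GP-based model of optogenetic response fields: gradient descent on the stimulation target location trades a little on-target drive for much less off-target activation.

```python
import numpy as np

# Hypothetical 2D toy: 5 neurons with Gaussian optogenetic response fields
# and a logistic spike probability; neuron 1 sits close to target neuron 0.
mu = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0], [9.0, 1.0]])
a, s, theta = 4.0, 1.0, 2.0                      # gain, ORF width, threshold
desired = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # spike only neuron 0

def evoked(c):
    # Spike probability of each neuron when stimulating at location c.
    d2 = np.sum((mu - c) ** 2, axis=1)
    return 1.0 / (1.0 + np.exp(-(a * np.exp(-d2 / (2 * s ** 2)) - theta)))

def loss(c):
    # L2 distance between desired and evoked activation patterns.
    return np.sum((evoked(c) - desired) ** 2)

# Gradient descent on the target c (finite-difference gradient),
# starting from the naive choice: aim directly at neuron 0.
c = mu[0].copy()
for _ in range(200):
    g = np.array([(loss(c + 1e-5 * e) - loss(c - 1e-5 * e)) / 2e-5
                  for e in np.eye(2)])
    c -= 0.1 * g
```

The optimized target shifts away from the off-target neighbor (neuron 1), substantially lowering the total loss relative to aiming straight at the target neuron; the paper's method does this with analytic gradients through a GP posterior rather than the toy response model used here.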
Strengths: - Careful modeling of features of real experiments.
- Use of a scalable GP approach.
- Clear exposition.
Weaknesses: - Much of the inference algorithms seems specific to some of the particular modeling choices made (particularly the need for a convex loss in Line 1 of Algorithm 1).
- Figs 4-5 should clearly be labeled as _simulation experiments_ based on real data; this is clearly explained in ll. 212-214 but elided in, e.g., ll. 254-56, which makes it seem as if the optimization was performed and validated as part of data collection.
- I was surprised not to see references to Fletcher and Rangan (2014) and Draelos and Pearson (2020) in the related work section.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Line 102 defines $\sigma$ as the sigmoid function. Does that mean logistic? (E.g., $\tanh$ is also sigmoid in shape but clearly not intended.)
- It is not stated in the paragraph surrounding Eq. 2 that $I$ is the laser power. It might help to simply state $\mathbf{x} = (c_1, c_2, I)$.
- Line 105 states that one should have $g_n(x) \ge 0$ so that stimulation should always have a positive effect, but in the supplement, this condition is waived in the single-target case. I assume this is because one can compensate in this case by taking $\theta = \inf_{x} g_n(x)$? If so, it might help to articulate this in the supplement.
- What is the MAP version of the ORF $\hat{g}_n$? Is this just $\hat{g}_n(\mathbf{X})$, with $\mathbf{X}$ the matrix of test data points?
- Why use MSE and not cross-entropy, which would be the standard assumption for a desired set of observations under a Bernoulli model?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - Much of the inference algorithm is quite closely tied to the experimental setup. This is both a strength and a limitation, since the proposed work is somewhat unlikely to find broader applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks again for your helpful feedback.
**Figs 4-5 should clearly be labeled as _simulation experiments_ based on real data**
Thanks for noting this ambiguity. We will update the figure captions and lines 254-256 accordingly.
**Missing references**
We thank you for pointing out that we had not cited these papers; these were poor and unintended omissions on our part that we will fix. Notably, Draelos and Pearson (2020) is a recent inspiration for our work that we had absolutely meant to cite in the related work section.
**Clarification of which sigmoid function is being used**
Yes, we did indeed mean the logistic sigmoid and we will update the main text to clarify this.
**Clarification of laser power parameter**
Thanks for pointing out that this is unclear -- we will follow your suggestion!
**Non-negativity is waived in the single-target case**
When we waive the non-negativity constraint in the single-target case, we actually drop the threshold parameter entirely, much like in Gaussian process models of visual receptive fields [14]. This is because now if a point on the optogenetic receptive field (ORF) becomes inhibitory (by taking a negative value), it will not conflict with excitation from any other hologram, and therefore will still effectively model the response to optogenetic stimulation. We will clarify this in the supplement.
**What is the MAP version of the ORF $\hat g_n$?**
The MAP version of the ORF is $\hat g_n(\mathbf{X})$ (i.e. the ORF evaluated at $\mathbf{X}$), but where $\mathbf{X}$ is the set of "training" points probed during the ORF mapping phase (perhaps this is what you meant already). We will be explicit about this in section 3.2.
**Why use MSE and not cross-entropy?**
No particular reason, other than that it appeared to be an effective loss function in practice!
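For intuition, here is a small numerical sketch (our own illustration, not the paper's model or code; all names and values are invented) comparing MSE and binary cross-entropy on a toy desired spike pattern under a Bernoulli model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical desired spike pattern and model-predicted spike probabilities
# (illustrative values, not from the paper).
desired = np.array([1.0, 1.0, 0.0, 0.0])   # target / off-target neurons
logits = np.array([3.0, 2.5, -2.0, -1.0])  # model outputs before the sigmoid
p = sigmoid(logits)                        # predicted spike probabilities

# Mean squared error vs. binary cross-entropy on the same predictions.
mse = np.mean((p - desired) ** 2)
eps = 1e-12
bce = -np.mean(desired * np.log(p + eps) + (1 - desired) * np.log(1 - p + eps))

print(f"MSE: {mse:.4f}, BCE: {bce:.4f}")
```

Both losses are minimised by the same probabilities here; cross-entropy penalises confident mistakes more heavily, which is why it is the standard choice under a Bernoulli model, though (as the rebuttal notes) MSE can still be effective in practice.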
Thanks again for all your comments.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thoughtful replies to my questions and congratulate them on a very nice result. | Summary: The problem the authors tackle in their manuscript is the problem of target selection in holographic optogenetics. Briefly, of the many neurons in a field of view, many experiments require the selective stimulation of a small subset of cells, while minimizing off-target stimulation that may muddy the interpretation of the data. This question is of considerable importance to the field of systems neuroscience, and several experimental methods have been pursued trying to solve it, such as the use of soma-localized opsins or sparse expression of optogenetic protein. The authors present in this manuscript a complementary, Bayesian computational approach that takes into account differences in sensitivity of stimulation of different neurons in a given FOV to optimize laser intensity and position to minimize off-target stimulation while maintaining sufficient target stimulation. The authors use Gaussian processes (GPs) to infer the response properties of neurons based on stimulation parameters. The authors validate this approach in simulation and in real optogenetic experiments.
Strengths: Originality: The authors introduce a novel computational algorithm to address a pressing issue in systems neuroscience: that of off-target stimulation. While the authors note that similar methods have been used to infer receptive fields for other types of stimulation, the authors cleverly employ these methods in a new way for inferring the responses of neurons to optogenetic stimulation to address a technical challenge facing the field. This software approach complements experimental approaches and can be readily combined with them. Similar computational approaches have thus far remained lacking, so the work is pioneering in this regard.
Clarity: The manuscript is clear with respect to how it poses the question being addressed, how it delineates its approach, and its statement of results.
Significance: The authors make a significant contribution to a timely question for systems neuroscience; the method will likely see significant use by labs performing holographic optogenetic stimulation, especially for applications in which minimizing off-target stimulation is crucial. The method could also see use in designing stimulation protocols to precisely pattern the activity of an ensemble of neurons, although that is a future direction.
Weaknesses: Optimizing stimulation according to one parameter (in this case, minimization of off-target stimulation) often necessitates trade-offs with other parameters. The authors bring up one such parameter, the time required to map the ORF and perform the optimization. While this consideration will vary from experimenter to experimenter, as the authors suggest, other considerations might not. For example, by moving stimulation locations off-center relative to the soma, stimulation reliability might become more sensitive to sample motion. Furthermore, the authors assume an ORF that is fixed in time; the manuscript would benefit from additional consideration of how the ORF changes with sustained stimulation, as desensitization of the channelrhodopsin could significantly affect the ORF. In general, the authors should further explore the limitations and trade-offs of this method. This weakness is minor, however, since I assume that these other considerations could be incorporated as parameters to be optimized.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Are the real-world validations provided using soma-localized channelrhodopsin, or is there channelrhodopsin in the processes? Does the method provide a greater benefit in one situation versus the other?
As stated in weaknesses section:
How sensitive is an OTS-minimized stimulation protocol to sample motion? How does this compare to soma-targeting? How about to changes in neuron sensitivity to stimulation? Can these be incorporated into optimization?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you again for spending the time to review our submission and for providing your comments!
**Sensitivity to sample motion**
Thanks for raising this relevant point. It is true that sample motion could affect the optimality of the computationally identified stimuli. In practice, we would therefore recommend performing online motion correction [11] immediately before stimulation, so that the learned optogenetic receptive field (ORF) posterior is as closely aligned in space to the true "real-time" receptive field as possible.
**Sensitivity to opsin nonstationarity**
Desensitisation of opsins following prolonged illumination has indeed been observed [12]. However, after the ORFs have been mapped and holographic stimuli have been optimised, it is straightforward to periodically recompute the posterior as the experiment proceeds. This would provide an updated estimate of the laser power required to maintain optimal precision. Note that this does not require redoing the whole ORF mapping phase as we would not expect the shape of the ORF to change drastically beyond desensitisation.
We will update the manuscript to discuss these important points.
**Are the real-world validations provided using soma-localized channelrhodopsin?**
Yes, the validation using experimental data is performed with ChroME2f, a state-of-the-art soma-targeted opsin [13].
**Does the method provide a greater benefit in one situation versus the other?**
We have not explored applying our technique in the case of an opsin that is not localised to the soma, because in such experiments the precision of any optogenetic manipulation is extremely poor [7, 8]. We therefore recommend that any experimenter wanting to achieve single-cell precision first change to a soma-localised opsin before attempting any computational optimisation.
Thanks again for your comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply! My opinion that the paper is technically solid and impactful remains, as does my score. | Summary: The authors present a method for reducing off-target stimulation of neurons during photostimulation by modifying the laser power and target locations. They use a Bayesian optimization approach to determine neuron responses (ORFs) to stimulations at different targets and laser powers and then choose the optimal target parameters to attain a desired neural response to stimulation. In contrast to typical stimulation targeted to the desired neuron’s soma, stimulating somewhat ‘off-target’ can still produce spikes in the desired neurons and avoid stimulating its nearby neighbors.
Strengths: Originality: This work is well-motivated by the growing number of methods used for direct neuronal stimulation using photostimulation to determine e.g. causal effects of single neuron activity or the functional structure of a neural circuit. Here, rather than address the inference of neural function or connectivity itself, the authors present a solution to the problem of off-target stimulation (OTS), an experimental reality where neural targets not selected for stimulation are stimulated by the laser anyway (due to e.g. poor spatial resolution). The idea of using Bayesian optimization (BO) and finding locations for off-soma stimulations that reduce OTS is very appealing and the method proposed here with BO, GPs to model neural responses, and optimization of stimuli location shows a nice original combination of techniques to address this problem.
Quality: The methods used (GPs, BO, gradient descent for optimization) are all well-established and robust. The inference method for the ORFs with the additional non-negativity constraint is nicely explained. The results clearly demonstrate an advantage of this method over traditional nuclear stimulation for the single-neural-target case in particular.
Clarity: Overall the paper is very well written. Ideas from the literature are well-sourced and the authors make it clear how their approach uses some similar techniques and where their method is novel. The figures are good quality as well. The code is relatively clear, though it could use additional documentation or commenting for ease of use.
Significance: As more neuroscience researchers turn to photostimulation for causal testing of hypotheses, this method should prove useful for the cases where laser accuracy is insufficient for selective stimulations.
Weaknesses: One weakness is the lack of experimental data, though it is understandably still difficult to obtain. While the authors mentioned slice data, this was only used as input to their simulations. Looking at the various ORFs mentioned in the supplement, is it reasonable to think that only 4 types of ORFs in the simulation is enough? And if these ORFs that were experimentally obtained were done so via stimulating a grid nearby, would that change the estimation of 1 neuron's ORF if another neuron nearby was also being stimulated (in an ensemble)? In vivo, one would expect that confounding neural activity could occur due to connectivity in addition to laser power spillover.
It is unclear if, at the end, this method is useful for ensemble stimulation in addition to single target paradigms. While the initial motivation and explanation of the method appears to be geared towards both cases (including Algorithm 1), it is not clear if in Figures 4-5 the optimization of each 'single-target hologram' is the end result of an optimization across multiple targets, or if the optimization was made per target. The different factors examined here ('density', size of ensemble, xy plus z dimensions) are all nicely presented but showing the method works in the 'worst case' (high density, larger ensembles, fully 3D, ...) would make this work stronger.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. How well does this scale from an optimization (time) perspective? Can this be done for thousands of neurons? Ensembles of 50?
2. How does the exact neuron expression matter (to the ORF shape)? Can a simplified version be used during the 2nd step of target optimization?
3. Similarly, what do ORFs look like? Are there other references used for establishing the prior beyond the few shown here?
4. Would it be possible to integrate spatial information (from e.g. calcium imaging) into the ORF prior in a useful way? To minimize the number of required mapping stimulations before the target optimization step.
5. It appears that the method uses random locations to first map the ORFs. Would it be possible to optimize the selection of locations for the ORF mapping based on the desired targets for later stimulation?
6. If Algorithm 1 needs to be run many times with different seeds, how long does it take to determine the optimal stimulation pattern? How practical is this in an experimental setup?
7. Clarification on Figure 3: Line 199 says increasing density of neurons, and Fig. 3a, shows 50 - 150 neurons (in the same 250 x 250 um plane) – would be nice to explicitly state that is what was done in simulation, and state what the density therefore is. Why were these density ranges chosen (for different example brain regions)? Are the number of neuron-neuron connections a potential factor here?
8. Is the performance increase the same for lower laser power in the nuclear condition? Ie if we used nuclear stimulation at lower power would we do just as well as optimized? Figure 4 & 5 indicate the optimization chose overall lower laser powers than initial settings.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discussed some trade-offs between the time spent mapping the ORFs and the need for single-cell precision stimulations. They mention potential future work to minimize the time needed for Bayesian target optimization, but fail to detail exact time requirements for the current method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your insightful comments and feedback! Unfortunately we had to cut much of our response to meet the character limit -- we would have liked to address every point and with more detail.
**Is it reasonable to think that 4 ORFs are enough?**
We do not have a sense of the variability of optogenetic receptive field (ORF) shapes beyond the slice recordings that have been made. However, we have attempted to account for this by sampling over many different GPs in our simulations, which should extensively explore the space of ORFs.
**It is not clear if in Figures 4-5 the optimization of each single-target hologram is the end result of an optimization across multiple targets, or if the optimization was made per target**
Thank you for highlighting this ambiguity. The optimisation was made for each target individually, and we will further clarify this in the figure caption and main text.
**How does this scale? Can this be done for thousands of neurons? Ensembles of 50?**
Thank you for raising this important point. We first note that experiments of such scale (with thousands of neurons being stimulated) are beyond what we expect the current implementation of Bayesian target optimisation can handle. Such scales could become feasible computationally if more advanced GP estimation techniques were employed, though it's not clear to us that biological experiments of this size (where a single SLM can flexibly deliver 2p excitation to thousands of neurons individually, within a single stimulation field, and with high spatiotemporal fidelity) are feasible with existing technology.
While our objective in this submission was to establish the feasibility of overcoming off-target stimulation computationally rather than achieving the fastest possible implementation, we have provided some preliminary runtime data in the attached supplement (see **Table 1** in the rebuttal pdf), and we plan to extend the characterisation for the camera-ready copy.
Also note that both ORF inference and target optimisation are performed sequentially, and thus major speed gains could be made by parallelising ORF inference over different neurons, and target optimisation over different desired ensembles.
The runtimes given in **Table 1** demonstrate suitability for typical all-optical experiments using GCaMP6 indicators [6], the most widely used class of calcium sensors. We believe there is room for further optimisation (e.g. with more parallelisation), and in future work we plan to push the efficiency of the technique for use with the class of faster GCaMP8 indicators. We hope this also answers Question 6 from the reviewer, which we do not have space to respond to in a separate comment.
**How does the exact neuron expression matter to the ORF shape? Can a simplified version be used during the 2nd step of target optimization?**
While at a fixed power and in two dimensions some ORFs might be reasonably well-described by a simpler parametric model (e.g. a circle), in three dimensions the shape of the ORF can change so drastically and with unpredictable asymmetries that a single parametric model is unlikely to capture their varying shape with much accuracy at all (see supplemental figures S3-S6). Thus, we believe we have taken the simplest nonparametric approach (an RBF Gaussian process) that adequately describes the data.
During the second step of target optimisation, a simple parametric ORF model might go some way in better repositioning holograms compared to naive nuclear stimulation, but for achieving the highest performance possible our experiments have found that it is certainly advantageous to exploit asymmetries in the ORF shapes, especially in three dimensions. For example, with the neuron shown in supplemental figure S4, it is clearly better to learn during an experiment that one should stimulate this neuron deeper in the tissue (if trying to avoid other neurons) than shallower.
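As a small illustration of why a nonparametric RBF GP can express the varied, asymmetric profiles described above while a single parametric template cannot, here is a toy sketch (our own code, with invented kernel hyperparameters) drawing differently shaped 1D "ORF profiles" from an RBF GP prior:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D grid of stimulation offsets (um) and an RBF (squared-exponential) kernel.
x = np.linspace(-30, 30, 61)[:, None]
lengthscale, variance = 8.0, 1.0
sq_dists = (x - x.T) ** 2
K = variance * np.exp(-0.5 * sq_dists / lengthscale**2)

# Each draw from the GP prior is a smooth but differently shaped profile,
# unlike a fixed circle/Gaussian template with a few parameters.
jitter = 1e-8 * np.eye(len(x))
samples = rng.multivariate_normal(np.zeros(len(x)), K + jitter, size=3)
print(samples.shape)  # three distinct candidate profiles over the grid
```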
**Are there other references used for establishing priors beyond the few shown here?**
The convention in the two-photon optogenetics field has been to characterise the ability to stimulate a neuron by moving only in one dimension at a time (radially or axially, i.e. "left and right" or "up and down"; see e.g. [7-9]). The only existing reference of ORFs profiled at this level of detail (i.e. a complete grid) is [10, Figure 1G], which is similar data to what we present in the supplement.
**Would it be possible to integrate spatial information (from e.g. calcium imaging) into the ORF prior?**
We thank the reviewer for making this interesting suggestion. In most cases, calcium sensors and opsins are expressed through separate viral vectors, and neurons are not guaranteed to express both proteins at once. However, some research groups have developed constructs that fuse the two proteins together directly [10], in which case the relative brightness of a calcium transient across the cell soma and proximal dendrites might provide some hint at different locations that the cell could be better activated, though it remains to be seen how useful this would actually be in practice.
**Would it be possible to optimize the selection of locations for the ORF mapping based on the desired targets for later stimulation?**
Yes! We definitely think this can and should be done in practice. If one only wishes to stimulate a desired subset of neurons, then ORF mapping should only be performed in neighbourhoods surrounding those neurons.
**If we used nuclear stimulation at lower power would we do just as well as optimized?**
We performed a control analysis by matching the nuclear stimulation laser power to the average optimised laser power (see **Figure 1** in the rebuttal supplement) and reran the mapping and target optimisation phases multiple times while randomly reassigning neurons different ORFs. This process showed that we maintain a very similar improvement in performance as what was shown in the main text.
Thanks again!
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their extensive replies to my and other reviews. I have read over the other reviews and the author responses, and think the paper is strengthened by the improvements the authors will make. | Summary: The authors present a set of methods to efficiently characterize the activation field under optogenetics and optimize the optogenetic stimulation patterns to target certain cells while avoiding the others.
Strengths: The paper tackles an important problem for reproducing the neural responses with high resolution in a neural circuit. The presented approach is direct, principled and effective.
Weaknesses: The experimental validation of the method is limited. It is unclear how the real data were used in the analysis (Sec 4.2), and whether they represent what would happen in a real experiment.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: • How is linear summation of activation from different optogenetic stimulation sites justified, especially if the sites are very close to each other?
• Could a simple stimulation optimization approach, where the location is at the intersection of high g(x) for one cell and low g(x) for other cells, work?
A characterization of the efficiency (reduction in the number of measurements for the same estimation quality) is missing.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks again for spending the time to review our submission and for providing your comments.
**The experimental validation of the method is limited. It is unclear how the real data were used in the analysis (Sec 4.2) and whether they represent what would happen in a real experiment.**
Thanks for noting that there is room to improve the clarity in the text about how the real data was used in the analysis. As we commented on in the global rebuttal, the ideal experimental data for validating these techniques are not currently available to us (i.e., an _in vivo_ demonstration of the technique, though we are actively collaborating to acquire such data). Therefore, we are limited to working with the data that we have, which come from slice experiments.
We made use of two kinds of experimental data: (1) detailed optogenetic receptive field (ORF) maps for single neurons, obtained by making a loose-patch recording of a single neuron, stimulating at regularly spaced locations surrounding the patched neuron and at multiple powers, and then fitting the GP-Bernoulli model; and (2) a fluorescence image of opsin expression from a separate experiment showing a typical distribution of opsin-expressing neurons in space. To use these data to simulate an experiment, we randomly assigned each opsin-expressing neuron in (2) an ORF from (1), and used these ORFs to sample responses to neural stimulation at arbitrary locations and powers.
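The simulation recipe described above can be sketched roughly as follows (a toy illustration with invented ORF shapes, thresholds, and variable names — not the authors' code or fitted models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "library" of mapped ORFs: each maps a 2D offset from the soma and a
# laser power to an activation value (illustrative Gaussians, not real fits).
def make_orf(width, gain):
    return lambda dx, dy, power: gain * power * np.exp(-(dx**2 + dy**2) / width**2)

orf_library = [make_orf(w, g)
               for w, g in [(10.0, 0.08), (15.0, 0.05), (8.0, 0.1), (12.0, 0.06)]]

# In the paper, neuron positions come from a fluorescence image of opsin
# expression; here we just scatter points in a 250x250 um plane.
neuron_xy = rng.uniform(0, 250, size=(50, 2))
assigned = [orf_library[rng.integers(len(orf_library))] for _ in neuron_xy]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_response(stim_xy, power, threshold=2.0):
    """Sample each neuron's Bernoulli spike response to one stimulation."""
    spikes = []
    for (x, y), orf in zip(neuron_xy, assigned):
        drive = orf(stim_xy[0] - x, stim_xy[1] - y, power)
        spikes.append(rng.random() < sigmoid(drive - threshold))
    return np.array(spikes)

spikes = simulate_response(stim_xy=(125.0, 125.0), power=50.0)
print(spikes.sum(), "of", len(spikes), "neurons spiked")
```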
**How is linear summation from multiple nearby sites justified?**
To our knowledge, an experimental characterisation of the photocurrent evoked by multiple closely positioned holograms has not yet been performed. However, the photocurrent depends directly on the number of opsin molecules illuminated [4], and we do not expect an interaction effect between two-photon laser pulses that would recruit opsin molecules nonlinearly, beyond approaching the saturation point more quickly (which is currently accounted for by the sigmoid nonlinearity).
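To illustrate how the sigmoid nonlinearity caps the effect of linearly summed drive from multiple holograms, here is a toy numerical sketch (invented per-hologram drive and threshold values, not the authors' parameterisation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed constant photocurrent drive contributed by each hologram,
# and an assumed firing threshold (both illustrative).
per_hologram_drive = 1.5
theta = 2.0

probs = []
for k in [1, 2, 3, 5, 10]:
    total = k * per_hologram_drive   # linear summation of hologram drives
    p_spike = sigmoid(total - theta) # spike probability saturates toward 1
    probs.append(p_spike)
    print(f"{k} holograms -> spike prob {p_spike:.3f}")
```

Adding holograms keeps increasing the summed drive linearly, but the spike probability saturates, matching the expectation that extra pulses mainly push the neuron more quickly toward a saturation point.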
That being said, some prior experiments using two-photon glutamate uncaging have found cases of nonlinear summation when stimulating multiple (_distal_) dendritic spines on the same branch [5]. However, in the experiments that our technique is designed for, opsin expression is restricted to the soma and (at most) the _proximal_ dendrites. We expect that instances of simultaneously stimulating multiple points along the same segment of proximal dendrite immediately adjacent to a target neuron's soma will be very rare, and we do not currently know whether this would measurably change the ability to optogenetically stimulate the neuron.
**What about a simpler approach of finding high $g(x)$ for one cell and low $g(x)$ for other cells?**
We thank the reviewer for this intuitive suggestion, and note that for the single-target case this is similar to what the proposed gradient descent solution finds (though the supplementary algorithm for single-target optimisation further accounts for posterior uncertainty in the inferred ORFs). However, this does not generalise to the ensemble stimulation case because it does not account for the contribution from multiple holograms at once.
**A characterisation of the efficiency is missing**
While we show the effect of probing the ORFs with fewer stimuli (i.e. using downsampled grids) in the supplement, we will provide further data related to statistical efficiency in the final version of the manuscript and make an explicit reference to the result in the main text.
Thanks again for your comments! | Rebuttal 1:
Rebuttal: Thank you all for your excellent feedback and positive evaluation of our submission! We presented Bayesian target optimisation, a computational approach to overcoming off-target stimulation in two-photon optogenetics experiments. We are delighted that every reviewer clearly understood the motivation, methodological contributions, and experimental relevance of our work.
Below we consider the comments that are specific to each review, but first we would like to address the fact that multiple reviewers had concerns regarding experimental data. In most cases, the ideal data for calibrating the internal model components unfortunately do not yet exist or are not currently available to us. We are actively working with experimental collaborators to validate our technique _in vivo_. However, obtaining new experimental data to address the specific concerns of individual reviewers is (hopefully understandably) beyond the scope of this NeurIPS submission. We have therefore done our best to provide, where possible, additional references to what is known in the literature. Nevertheless, we believe that our chosen validation approach (combining data from separate real experiments) comes as close to a direct experimental validation as is currently feasible, and is more closely tied to real experimental data than any existing computational work at NeurIPS in this area [1-3].
Please note that we have attached some additional data in response to Reviewer 4tSm, and have collected all references in this general rebuttal in order to meet character limits. Thanks again all for your work in reviewing our paper.
**References**
1. Hu, T., & Chklovskii, D. (2009). Reconstruction of sparse circuits using multi-neuronal excitation (RESCUME). Advances in Neural Information Processing Systems, 22
2. Shababo, B., Paige, B., Pakman, A., & Paninski, L. (2013). Bayesian inference and online experimental design for mapping neural microcircuits. Advances in Neural Information Processing Systems, 26.
3. Draelos, A., & Pearson, J. (2020). Online neural connectivity estimation with noisy group testing. Advances in Neural Information Processing Systems, 33, 7437-7448.
4. Rickgauer, J. P., & Tank, D. W. (2009). Two-photon excitation of channelrhodopsin-2 at saturation. Proceedings of the National Academy of Sciences, 106(35), 15025-15030.
5. Losonczy, A., & Magee, J. C. (2006). Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron, 50(2), 291-307.
6. Russell, L. E., Dalgleish, H. W., Nutbrown, R., Gauld, O. M., Herrmann, D., Fişek, M., ... & Häusser, M. (2022). All-optical interrogation of neural circuits in behaving mice. Nature Protocols, 17(7), 1579-1620.
7. Baker, C. A., Elyada, Y. M., Parra, A., & Bolton, M. M. (2016). Cellular resolution circuit mapping with temporal-focused excitation of soma-targeted channelrhodopsin. Elife, 5, e14193.
8. Shemesh, O. A., Tanese, D., Zampini, V., Linghu, C., Piatkevich, K., Ronzitti, E., ... & Emiliani, V. (2017). Temporally precise single-cell-resolution optogenetics. Nature neuroscience, 20(12), 1796-1806.
9. Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., ... & Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature neuroscience, 21(6), 881-893.
10. Bounds, H. A., Sadahiro, M., Hendricks, W. D., Gajowa, M., Gopakumar, K., Quintana, D., ... & Adesnik, H. (2023). All-optical recreation of naturalistic neural activity with a multifunctional transgenic reporter mouse. Cell Reports, 42(8).
11. Pnevmatikakis, E. A., & Giovannucci, A. (2017). NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. Journal of neuroscience methods, 291, 83-94.
12. Marshel, J. H., Kim, Y. S., Machado, T. A., Quirin, S., Benson, B., Kadmon, J., ... & Deisseroth, K. (2019). Cortical layer–specific critical dynamics triggering perception. Science, 365(6453), eaaw5202.
13. Sridharan, S., Gajowa, M. A., Ogando, M. B., Jagadisan, U. K., Abdeladim, L., Sadahiro, M., ... & Adesnik, H. (2022). High-performance microbial opsins for spatially and temporally precise perturbations of large neuronal networks. Neuron, 110(7), 1139-1155.
14. Park, M., Horwitz, G., & Pillow, J. (2011). Active learning of neural response functions with Gaussian processes. Advances in neural information processing systems, 24.
Pdf: /pdf/6a6acedc4f3519dfc8a79fc4a89505c0671eeb77.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Sparse Modular Activation for Efficient Sequence Modeling | Accept (poster) | Summary: The paper introduces Sailboat which builds upon MEGA. MEGA uses a combination of an Exponential Moving Average (EMA) block (which can be interpreted as a specific parameterization of the kernel from SSM models) and Gated Attention Units (GAU). MEGA explores both full attention and chunked attention. Sailboat instead takes a modular/MoE approach. Each layer in Sailboat linearly combines the outputs of multiple EMA + GAU modules. Moreover, the GAU layer in each module dynamically takes as input only a subset of the original input; thus reducing per-module computation cost. Sailboat-mem is also proposed as a variant of Sailboat that uses local attention with a sliding window in each module. Sailboat-mem performs better than MEGA-chunked on LRA while being more efficient.
Strengths: 1. The routing of a subset of inputs for attention per module is an interesting way to simplify the complexity introduced by soft module selection per layer. The method used in the paper is relatively straightforward and can be used as a general strategy for modular networks and modular Transformers.
2. The paper explores the space of models hybridizing SSMs and Transformers which is an interesting space to explore.
3. The empirical performance upshot is decent in some tasks and there is some efficiency gain over MEGA/MEGA-chunk
Weaknesses: 1. There are several points of technical unclarity - see questions
2. A few more ablations could be done - what if we use selective attention without modularity? What if we use modules with just local attention (no softmax-based dynamic subset input selection)? Or just chunks (like MEGA)? --- particularly, simpler alternatives for input subset selection seem to be missing.
3. In my interpretation the main technical contribution seems to be in the space of modular architectures. The fact that the authors use SSM+GAU as modules is just an implementation detail (although the empirical results of the implementation are noteworthy). In that sense, it would have been good to have more comparison/discussion/analysis with respect to other existing strategies involving MoEs or modules.
4. The value of the strategy in the space of efficient attention strategies is also a bit obscure. For example, what if we use BigBird/Performer etc. attention strategy to replace the attention in GAU in MEGA?
5. The empirical results are good on 2-3 tasks in LRA compared to MEGA-chunk, but otherwise Sailboat/Sailboat-Mem performs close to MEGA with limited efficiency gain.
6. It seems MEGA uses a feed-forward MLP after the GAU-like block whereas Sailboat-mem does not. Uncontrolled variables like that can be potential confounders preventing a clear comparison. It could be good to show some results on LRA with a MEGA-chunk variant (with controlled parameters) using the same kind of block as Sailboat, with the same activation and everything.
-------
**Post-Rebuttal score update:**
I increased the soundness score to 3 (from 2), but kept the overall score still to 5. Reasons below.
* Some of the issues are resolved by the rebuttal and the additional experiments (e.g., points 2 and 6).
* Some of the important technical details are now clearer, mitigating my concern in 1., though hopefully they will be clarified further in the paper.
* I was initially not too convinced by the rebuttal about the lack of comparison against other efficient attention in MEGA-framework (point 5). But now I think, the contribution of the adaptive input sparsification can be seen as a sort of orthogonal mechanism to efficient attention. It does not modify the attention itself but the input to the attention - and thus can be stacked with other efficient attention. So I think, under that perspective, point 5 is not as critical. Although perceiver is a relatively less orthogonal possible comparison. For this point and the above points, I am increasing the soundness score to 3.
* Given the clarification that $M=1$ (no. of pre-defined modules), I think on the flip side some of my points under the strengths section do not really apply anymore. A modular mechanism with just one module amounts to just lacking any modular mechanism. So I wouldn't say that the method makes much of a move in the space of modular deep learning or MoE. A theoretical framework is introduced but not tested empirically. Having more modules can also bring forth other engineering challenges and increase compute - which can result in a worse trade-off than Mega.
* Overall, the input compression mechanism is still interesting but my main concern at present is the presentation and the overall positioning of the paper. The paper places considerable emphasis on the modular setup, but it seems that all those parts can be skipped and only the layer-input sparsification via tempered softmax is relevant to the experiments. Given that the fact that $M=1$ is hard to find (or missing), the paper also feels misleading and confusing. This is mainly a presentation issue, but it still seems substantial to me. Without it, the paper would be a $6$ for me, but for the last two points I am keeping the score at $5$ for the moment.
**Update 2**
I increased the score to $6$ based on further discussions and the authors' expressed willingness for increased clarification on the positioning.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How are c$_i$ and a$_i$ computed for modules?
2. I did not understand the hyperparameter setup for the modules. For example, how many modules are used per layer (what is $M$?)? Wouldn't adding multiple modules per layer increase the parameters (according to your notation it seems your modules are layer specific as opposed to having a common set of modules for every layer to select/combine or is that an incorrect interpretation)? How do you control for parameter counts then?
3. Can be good to contrast/compare (even if not empirically), in the paper, the difference between GShard [4] and this approach. As far as I understand, GShard also creates local token groups for the parallel processing of modules.
4. MEGA uses Feed forward after the GAU block. Is that removed in Sailboat? Any specific reason?
Missing references:
[5] uses Gumbel sigmoid as a selector mechanism to dynamically discard input before attention although not in a modular context and without exact discrete selection during training.
Minor:
* Are ζ$_i$ in the eqn. near L120 learnable parameters?
* Is the $\forall$ in the eqn. near L120 a typo?
* Is there any reason why the function/module weights (ζ$_i$c$_i$) are not normalized?
* Another extra comparison could be to use something like Perceiver [3] for creating some constant number of latent states per module from the SSM outputs.
* “second-order self-attention” -- second-order in which sense?
* Should be "instantiated" in L144
* I assume $a^l ∈ \mathbb{R}^n$ should be more specifically $a^l \in \\{0,1\\}^n$ in L168?
* Probably better to have an extended related works section in the appendix with more discussions. The current related work is a bit sparse.
* Note that there are pure SSM-based approaches (they are very recent works so I am not factoring this point into my decision) that show competitive performances too, like BiGS [1] and Hyena [2]. The cited paper that shows SSMs perform poorly on machine translation doesn't seem to use GLUs or other features that were found to be crucial for better NLP performance from SSMs.
* It could be good to clarify/discuss/contrast better on your paper the difference between your approach and "sparse attention" approaches
------
[1] Pretraining Without Attention, Wang et al. ArXiv 2023: https://arxiv.org/abs/2212.10544
[2] Hyena Hierarchy: Towards Larger Convolutional Language Models, Poli et al. ArXiv 2023: https://arxiv.org/abs/2302.10866
[3] Perceiver: General perception with iterative attention. Jaegle et al. ICML 2021
[4] GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding, Lepikhin et al. ICLR 2021
[5] How Does Selective Mechanism Improve Self-Attention Networks?, Geng et al. ACL 2020
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations and social impacts are discussed in the appendix. It's more or less adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive review of our work and the constructive feedback on our manuscript. In the following, we address the remaining concerns to hopefully motivate a clear acceptance score.
**Technical clarity:** Please refer to our answers to Q1, Q2 and Minors. We will also update the Related Work section to include extra comparisons with the sparse attention approaches and the missing reference.
**More Ablations:** We add the new ablations in the Additional Ablation Study section in **Global Response**. Comparing Sailboat-mem with Sailboat-local and Sailboat-chunk, we can see that our model obtains better accuracy than the models without SMA. If we constrain the memory size to a smaller value (see Sailboat-mem32 vs. Sailboat-local32), this performance gap becomes more substantial (more than 10 percentage points absolute difference on Pathfinder). This evidence demonstrates the effectiveness of our SMA mechanism.
**Comparison with MoEs:** We add this comparison in the Comparison with Mixture of Experts section in **Global Response**. We also include the additional ablation results with X-MoE routing as an alternative design choice of latent configurator. We can see that X-MoE performs much worse than our Tempered Softmax in our use case.
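For concreteness, a temperature-controlled softmax used for sparsification could be sketched as follows (the function names, threshold, and numbers here are illustrative assumptions, not the exact recipe in the paper):

```python
import math

# Illustrative sketch only: a tempered softmax where a lower temperature
# sharpens the distribution, so thresholding it selects a sparser subset
# of tokens.

def tempered_softmax(scores, temperature):
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_tokens(scores, temperature, threshold):
    # Tokens whose tempered probability clears the threshold are activated.
    probs = tempered_softmax(scores, temperature)
    return [1 if p >= threshold else 0 for p in probs]

scores = [2.0, 0.5, 0.1, 1.5]
print(select_tokens(scores, temperature=0.25, threshold=0.1))  # [1, 0, 0, 1]
print(select_tokens(scores, temperature=5.0, threshold=0.1))   # [1, 1, 1, 1]
```

With a low temperature the probability mass concentrates on the highest-scoring tokens and only those clear the threshold; a high temperature flattens the distribution and keeps everything.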
**Comparison with efficient attention strategies:** As discussed in **Global Response**, our SMA mechanism enables a dynamic architecture that can learn to drop a sub-module entirely for better efficiency when it is adapted to the target task, and we empirically observe such behavior for different tasks in LRA. This capability is not possible for the previous efficient attention strategies because their architectures are static and thus an efficient attention layer is always applied before the feedforward layer.
**Limited performance improvements on LRA compared with MEGA-chunk:** Please refer to the Updated Results for Long Range Arena section in **Global Response**. We do have substantial performance improvement over MEGA-chunk on 5 out of 6 tasks on LRA, and the average accuracy is 1.96 points higher. In Figure 5 of **Appendix D** in our paper, we also prove that our model can provide substantially better speed-quality trade-off than MEGA-chunk when varying different memory sizes.
**Results of Sailboat-mem with feed forward MLP:** As shown in the Additional Ablation Study section in **Global Response**, both Sailboat-mem-ffn and Sailboat-chunk produce significantly worse performance than Sailboat-mem. The Sailboat-chunk model uses the same block as Sailboat but follows MEGA-chunk in using the chunking strategy for sequence subset selection. Sailboat-mem-ffn follows MEGA-chunk by adding an additional FFN after each Sailboat layer.
---
**Q1:** *How are $c_i$ and $a_i$ computed for modules?*
**A1:** The latent configurator will produce $c_i$ and $a_i$ for each of the module. Please refer to L167-L172 for how the configurator is implemented in Sailboat.
**Q2:** *I did not understand the hyperparameter setup for the modules ... ... How do you control for parameter counts then?*
**A2:** As stated in L109, $M$ is the number of predefined functions (or modules), and we set $M=1$ to include only one GAU module per layer, whose activation is controlled by the latent configurator. Yes, an interesting future direction is to investigate how to scale up SMA with a large number of modules. It is promising to try some primitive ideas such as setting an upper bound on the activation time of each module, or setting a maximum for the number of modules that can be activated at the same time. However, this direction is out of the scope of our paper, and we leave it as future work.
**Q3:** *... ... difference between GShard [4] and this approach?*
**A3:** GShard divides tokens into groups of the **same** size, while our SMA allows different modules to have different group size. This means SMA can support an adaptive neural architecture whose sub-modules can be completely dropped if no tokens are selected for that module. Also, while GShard is a routing mechanism, SMA is an activation mechanism that can still function when there is only one module under consideration.
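As a toy illustration of this difference (pure Python with invented names, not our actual implementation), GShard-style routing packs tokens into groups of the same size, while SMA builds a variable-size group per module and skips a module whose group is empty:

```python
# Toy contrast between fixed-size and variable-size token grouping.

def gshard_groups(tokens, num_groups):
    """Fixed-size groups: every group receives len(tokens) // num_groups tokens."""
    size = len(tokens) // num_groups
    return [tokens[i * size:(i + 1) * size] for i in range(num_groups)]

def sma_groups(tokens, decisions):
    """decisions[m][t] == 1 routes token t to module m; empty modules are dropped."""
    groups = {}
    for m, row in enumerate(decisions):
        selected = [tok for tok, d in zip(tokens, row) if d == 1]
        if selected:  # an empty group means the module is not executed at all
            groups[m] = selected
    return groups

tokens = ["t0", "t1", "t2", "t3"]
decisions = [[1, 0, 0, 1],   # module 0 gets two tokens
             [0, 0, 0, 0]]   # module 1 gets none and is skipped entirely
print(gshard_groups(tokens, 2))       # [['t0', 't1'], ['t2', 't3']]
print(sma_groups(tokens, decisions))  # {0: ['t0', 't3']}
```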
**Q4:** *Is FFN removed in Sailboat? Any specific reason?*
**A4:** Yes, it is removed. Its design is largely redundant with the GAU module, and it does not provide any empirical performance gain for Sailboat according to our ablation study.
**Minor:**
- *Are $\zeta_i$ in the eqn. near L120 learnable parameters?*
As in L117, they are intended to represent a linear combination over the pre-defined functions. In the implementation of our Sailboat architecture, it is instantiated as the last linear layer in the GAU module.
- *Is \forall $\forall$ in the eqn. near L120 a typo?*
No, it is not. $L’$ is the function space of a layer equipped with SMA and $M$ number of predefined functions. Thus, $\forall \zeta_i $ indicates the coefficients for a linear combination of the predefined functions.
- *Is there any reason why the function/module weights ($\zeta_i c_i$) are not normalized?*
As in L121-123, we want to show that $L'$, the function space of a layer equipped with SMA, can recover the original function space $L$ defined in L109, if all the functions are activated. This won't be true if the module weights are normalized.
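As a toy numeric illustration of this recovery argument (the functions and values below are invented for illustration): with unnormalized weights, activating all modules and placing weight 1 on a single function $f_i$ recovers $f_i$ exactly, which normalized weights would preclude.

```python
# Toy check: unnormalized linear combination over predefined functions.

def sma_layer(x, funcs, zetas, acts):
    """Linear combination sum_i zeta_i * a_i * f_i(x) over predefined functions."""
    return sum(z * a * f(x) for f, z, a in zip(funcs, zetas, acts))

f1 = lambda x: 2 * x   # an arbitrary predefined function
f2 = lambda x: x + 1   # another one

# All modules active, all weight on f1: the layer reduces to f1 exactly.
assert sma_layer(3, [f1, f2], zetas=[1.0, 0.0], acts=[1, 1]) == f1(3)
```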
- *... ... extra comparison could be to use something like perceiver [3] ... ...*
Nice catch! We will try this idea and see if it works out, but it seems out of the scope of this paper.
- *“second-order self-attention” -- second-order in which sense?*
In the sense of the interactions between the input elements. Self-attention does pairwise comparison between tokens, which is a second-order interaction.
- *Should be "instantiated" in L144*
Thanks! Fixed it.
- *I assume $a^l \in \mathbb{R}^n$ should be more specifically $a^l \in \\{0, 1\\}^n$ in L168?*
Yes, you are right! Fixed it.
---
Rebuttal Comment 1.1:
Title: Part 1
Comment: > A2: As said in L109, $M$ is the number of predefined functions (or modules), and we set $M=1$ to include only one GAU module per layer, whose activation is controlled by the latent configurator. Yes, an interesting future direction is to investigate how to scale up SMA with a large number of modules. It is promising to try some primitive ideas such as setting an upper bound for the activation time of each module, or setting a maximum for the number of modules that can be activated at the same time. However, this direction is out of the scope of our paper, and we leave it as the future work.
This part has me a bit worried. Under M=1, it seems like the focus is not on the modular aspect and that most of Page 3 is not particularly relevant to what is experimentally demonstrated. The contents on page 3 seems mainly relevant and interesting if there are multiple modules to combine. (If I am understanding correctly in all your experiments you have only $f^l_1$ in Figure 1). The main experimental demonstrations, now, seem to boil down to testing a form of adaptive sparsification of input before attention (with the possibility to ignore all input - resulting in layer skipping).
The theoretical framework for the utilization of multiple modules is still interesting but seems to mainly remain at the theoretical level. Normally, modular frameworks are relevant for introducing some choice protocol for selecting some sparse set of modules from a pre-defined set or larger modules or parallel utilization of multiple modules. Thereby, experimental demonstration of the effectiveness of modular frameworks mainly makes sense with M > 1. But this aspect is lost in practice here if the module set just has a single module.
Overall, under this light, I think the paper could have been cleaner and stronger if it focused solely on the adaptive sparsification strategy through tempered softmax and compared it to other efficient transformers (e.g., local attention, BigBird, Perceiver, Linear Transformer, etc.) within the EMA+GAU framework. Because right now, it's hard to disentangle how much it is ahead of other efficient transformers due to the incorporation of a MEGA-like framework and how much due to the novel adaptive sparsification strategy.
The contents from page 3 with generalization for multi-module mixing can be introduced in a separate paper and investigated empirically there.
> More Ablations: We add the new ablations in the Additional Ablation Study section in Global Response. Comparing Sailboat-mem with Sailboat-local and Sailboat-chunk, we can see that our model obtains better accuracy than the models without SMA. If we constrain the memory size to a smaller value (see Sailboat-mem32 v.s. Sailboat-local32), this performance gap becomes more substantial (more than 10 percent absolute difference on Pathfinder). This evidence demonstrates the effectiveness of our SMA mechanism
Thank you for the additional ablations. They are helpful and allay some of my concerns about the effect of FFN and others. The experiments against local attention show some evidence of the superiority of the proposed adaptive input sparsification compared to more basic efficient attention baselines.
> Comparison with efficient attention strategies: As discussed in Global Response, our SMA mechanism enables a dynamic architecture that can learn to drop a sub-module entirely for better efficiency when it is adapted to the target task, and we empirically observe such behavior for different tasks in LRA. This capability is not possible for the previous efficient attention strategies because their architectures are static and thus an efficient attention layer is always applied before the feedforward layer.
This is a good point. But:
1. There is still an empirical question about the accuracy/time trade-off. Yes, other efficient attention methods cannot fully drop the layer but how much speed gain is achieved because of that? Moreover, even if they do not drop layers - they still may or may not get higher accuracies within the SSM/EMA-GAU setup. The question still remains - how much we are gaining from adaptive sparsification as proposed?
2. While efficient attention by itself does not drop layers, there is a literature on layer skipping/early stopping; a few works are cited below [1,2]. Thus, it is again a bit unclear where the proposed method stands against some prior layer-skipping strategy + efficient attention.
[1] Reducing Transformer Depth on Demand with Structured Dropout - Fan et al. ICLR 2020
[2] Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping - Zhang et al. NeurIPS 2020
(Point 2. above is minor for me because I acknowledge considering all these variables is difficult in a single paper)
> Limited performance improvements on LRA compared with MEGA-chunk
> Results of Sailboat-mem with feed-forward MLP
I acknowledge the responses to these points. I am not as much concerned with them anymore.
---
Rebuttal Comment 1.2:
Title: Part 2
Comment: > Q1: How are $c_i$ and $a_i$ computed for modules?
> A1: The latent configurator will produce $c_i$ and $a_i$ for each of the module. Please refer to L167-L172 for how the configurator is implemented in Sailboat.
L167-L172 seems to explain how $a_l \in \\{0,1\\}^n$ ($n$ being the sequence size) is computed for input sparsification, but my question is how $a_t \in \\{0,1\\}^M$ (L114) is computed for *module sparsification*.
Would it be correct to say $a^i_t = max(a^i_l)$ (where $i$ is the module id, $a^i_t \in \\{0,1\\}$ is the decision value for module $i$, and $a^i_l \in \\{0,1\\}^n$ is the decision values for the input tokens going into module $i$)?
In other words, this would mean a module is not selected iff no input token is selected for that module (If no input token is selected $max(a^i_l)$ would be $0$, and if at least one input token is selected $max(a^i_l)$ would be $1$).
If my interpretation is correct, I think this should be more explicitly and formally stated in the paper.
-------
Thank you for the other clarificatory points. I think A3 would be good to include in related works.
---
Rebuttal 2:
Comment: Thanks for acknowledging both our mechanism as an orthogonal contribution to efficient attention, and the theoretical contribution of our framework to the field of MoE.
**Regarding the positioning of our paper:** We want to emphasize that we provide a general and unified framework for multiple lines of work including adaptive input sparsification, adaptive computation time, dynamic routing, sparse attention and mixture of experts. We theoretically prove that, under this framework, our SMA mechanism provides full coverage of the search space over the predefined modules. However, we acknowledge that it is infeasible to empirically validate the effectiveness of SMA in all of these research fields and tasks in one paper, and, as indicated in the title of our paper, we narrow our scope to applying SMA for efficient sequence modeling. We successfully demonstrated that Sailboat, as a preliminary application of SMA (sparsely activating a GAU based on the state representation from the SSM), can offer a substantially better quality-efficiency trade-off than previous hybrid models. We also explore the application of SMA with multiple heterogeneous modules, identify the current engineering challenges, and advocate future work on scaling up SMA for empirical impact on the MoE community.
We will include this discussion into our paper for a clearer understanding of our contributions.
**Regarding the concern of $M=1$:** We did try adding another FFN module in parallel with the GAU module to have $M=2$ with two modules in the function space $\mathcal{F} =\\{GAU, FFN\\}$ for our SMA mechanism to control the activations. The results are shown in the table below.
| Method | Image | Speed | ListOps | Speed | Path. | Speed |
|--------------------|--------|------|--------|------|--------|------|
| Sailboat-mem-{GAU,FFN} | 89.74 | 1.20× | 59.10 | 1.97× | 96.10 | 1.06× |
| Sailboat-mem-ffn | 89.60 | 1.13× | 56.45 | 2.05× | 96.62 | 0.97× |
| Sailboat-mem | 90.36 | 1.33× | 59.05 | 2.26× | 96.34 | 1.31× |
We can see that Sailboat-mem-{GAU,FFN} obtains better trade-offs than Sailboat-mem-ffn (which simply appends an extra FFN layer after each Sailboat layer) on both the Image and the ListOps tasks. However, Sailboat-mem-{GAU,FFN} still falls behind Sailboat-mem (which does not include an FFN anywhere at all). This is because the current implementation of SMA is I/O-bound due to copying large tensors with the scatter operator, and, for a fair comparison with MEGA, we do not include any fused kernel to optimize memory-bandwidth utilization. We acknowledge that scaling up SMA with multiple modules needs more engineering effort, and leave this as future work.
We will include this discussion into the ablation study section of our paper. We will also explicitly indicate that the effective number of modules $M=1$ is used for Sailboat in the method section for a clearer presentation.
**Regarding the strength of our paper:** We want to emphasize that the strength of the contribution to modular networks mentioned in the review still holds. SMA is a straightforward modular activation method and can be used as a general strategy for modular networks. SMA also simplifies the complexity of the attention module by creating subsets of inputs, which is not explored by previous MoE works.
**Comparison with Perceiver:** Perceiver utilizes a predefined number of latent representations to repeatedly attend to the input array, which can be understood as conducting soft selection from the input array to a fixed subset. Our SMA operates on a different level to directly conduct a hard selection from the input to form a dynamic subset. Plus, our SMA can be applied to causal language modeling, while it is unclear how Perceiver can be adapted to this important use case in the era of large language models.
We will include this discussion into the related works section for a clearer connection with previous works.
**Q5:** *Would it be correct to say $a_t^i =max(a_l^i)$ (where $i$ is the module id, $a_t^i \in \\{0, 1\\}$ is the decision value for module $i$, and $a_l^i \in \\{0, 1\\}^n$ is the decision values for the input tokens going into module $i$)?*
**A5:** Yes, this interpretation is correct. If only one input token is selected for module $i$, the module $i$ will still be activated and to process that token only.
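In code, this confirmed interpretation amounts to the following one-liner (an illustrative sketch, not the paper's implementation): a module is active iff at least one of its per-token decisions is 1.

```python
# a_t^i = max(a_l^i): module i is active iff any token is routed to it.

def module_active(token_decisions):
    """token_decisions: list of 0/1 values, one per input token, for one module."""
    return max(token_decisions, default=0)  # empty selection also deactivates it

assert module_active([0, 0, 1, 0]) == 1  # one selected token keeps the module on
assert module_active([0, 0, 0, 0]) == 0  # no tokens selected -> module dropped
```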
Thanks for pointing out the valuable clarification questions. We have incorporated them into the method sections of our paper for clearer presentation.
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: Thank you for the additional confirmation and clarification.
I have decided to increase the score to $6$ assuming the authors update the paper as discussed. | Summary: This paper introduces a framework for representation learning, which involves multiple modules that can be applied dynamically on different inputs. The authors implement this framework using a combination of linear state space models (SSMs) and a gated attention unit (GAU), also combining ideas from adaptive computation, and mixture of experts. Combined, the network, dubbed Sailboat, obtains a better speed-accuracy tradeoff compared to multiple strong baselines on a range of tasks.
==============================
Post rebuttal:
Thank you for your response and for the clarification.
I now have a better understanding of the main contributions, and have raised my score to 6. It would be great to include this discussion in the next version of the paper.
As to the specific responses, please note that results important enough to be included in the introduction and/or the conclusion should be included in the main text, not the appendix.
Strengths: The technical and empirical contributions of this work more than justify acceptance to NeurIPS.
These include a very interesting and clever approach, combining multiple lines of research, and addressing many technical challenges, as well as an extensive and diverse set of experiments, showing the superiority of the proposed approach over previous approaches.
Weaknesses: At the same time, the paper gives the impression of having been hastily written. There are some clarity issues regarding the proposed method (see below); the introduction mentions details that are never discussed in the paper, and there are some inconsistencies between the introductory paragraph of section 2 and the rest of the section. Combined, these raise some concern that other problems might have been overlooked.
Moreover, the human working memory is only loosely connected to the Sailboat-Mem idea. In particular, $w$ is within hundreds or thousands, which is far larger than 5-7, which is the human working memory, and thus completely unrelated.
Finally, there are unclear gaps between the superb results on LRA compared to the results in tables 3 & 4, which are comparable and not substantially different from the baselines (e.g., MEGA).
See more details below.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: + I had a hard time understanding the main method:
  - The introduction mentions (#57-62) a method for efficient parallel implementation of the method using the scatter operation, but this is never discussed later.
  - The introductory paragraph of section 2 seems to describe something broader than what is currently below it in this section. E.g., section 2 does not touch on Sparse Modular Activation at all.
  - The F function in 3.1 is not entirely clear to me. First, if I understand correctly, the mapping assigns each word in the vocabulary a boolean value, which indicates whether it is used or not? Does this mean that the selection process works at the type level (i.e., each word is always selected or ignored, rather than this decision being dependent on the context)? Further, how is the vocabulary defined for non-textual tasks? Also, how does this function relate to the next part (e.g., $a_t$)?
  - In 3.2, what is $M$?
- I didn’t fully understand the Sailboat-Mem approach. Do the authors simply attend to the nearest $w$ tokens, and in this sense the idea is similar to standard hybrid approaches, or do they attend to these tokens and then apply SMA? Also, the comment in #207 (“... maintaining the ability of attending to the key tokens …”) wasn’t clear to me as well.
- The results in table 2 (LRA) show that the proposed method is dramatically faster and consumes far less memory than all baselines (including MEGA). However, in tables 3 & 4, the speed and memory values of the two approaches are comparable. Can you please explain the source of these differences? Is this because the context length is larger for these tasks? The speech experiments discuss a large context length of 16K, so that doesn’t seem like the reason.
- why do different models have different numbers of layers (#275)?
- In what sense is the proposed method able to improve interpretability? (conclusion, limitations section)
Typos and such:
- #52: sparsely map -> sparsely *maps*
- #86: is optimized is optimized
- #144: are instantiate -> are *instantiated*
- #236: LRA consists of five tasks: -> *six* tasks
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive review of our work and the constructive feedback on our manuscript. In the following, we address the remaining concerns to hopefully motivate a clear acceptance score.
**Presentation clarity:** Please refer to our Answers to the Q1, Q2, Q3, Q4.
**Loose connection between human working memory and the Sailboat-Mem idea:** Thanks for pointing this out. We will weaken the claim, “mimic the human working memory”, in the abstract, but we still want to emphasize the effectiveness of our working memory mechanism. In Figure 5 of **Appendix D**, we demonstrate that our working memory size can be as small as **16** while still maintaining reasonably good performance (>90%) on the Pathfinder task of LRA. In contrast, the traditional chunking-based method simply fails the task, performing at random-guess level when the chunk size is less than 128. We use a larger working memory size for the final Sailboat-mem model simply because we want to achieve better performance under the settings of the LRA benchmark.
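A sliding-window working memory of this kind can be sketched as follows (the helper name and window sizes are invented for illustration): each position attends only to itself and its $w-1$ predecessors, so attention cost scales with $w$ rather than with sequence length.

```python
# Illustrative sketch of a causal sliding-window "working memory".

def window_indices(t, w):
    """Positions visible to position t under a working memory of size w."""
    return list(range(max(0, t - w + 1), t + 1))

print(window_indices(5, 3))   # [3, 4, 5]
print(window_indices(1, 16))  # [0, 1]  (window clipped at sequence start)
```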
**Unclear performance gaps between LRA and Speech Recognition/Language Modeling:** This is because we follow the model notation of the original MEGA paper in Tables 3 & 4 of our paper, and the so-called MEGA models listed in the tables are actually MEGA-chunk variants. We have fixed this by explicitly denoting these models as MEGA-chunk for clarity. Also, in the updated results table for Speech Recognition in the **Global Response**, we achieve much better performance and speedup than MEGA-chunk.
---
**Q1:** *The introduction mentions (#57-62) a method for efficient parallel implementation of the method using the scatter operation, but this is never discussed later.*
**A1:** As noted in **Appendix A.1**, we do discuss the detailed parallel implementation using the scatter operator in PyTorch-like code snippets, and these snippets are well documented with comments explaining how they work. The existence of these code snippets is also explicitly mentioned in Lines 131-132 of our paper.
**Q2:** *The introductory paragraph of section 2 seems to describe something broader than what is currently below it in this section. E.g., section 2 does not touch on Sparse Modular Activation at all.*
**A2:** Section 2 provides the background and motivates us to propose a general formulation of Time-Variant Sequence Modeling in Section 3.1 that supports a dynamic neural architecture. Sparse Modular Activation (SMA) is then built upon the two assumptions proposed in Line 104-111 of Section 3.1, and we further show that, in Line 121-123, SMA can recover the original function space $L$ mentioned in Line 109 to justify the design of SMA.
**Q3:** *The F function in 3.1 is not entirely clear to me... ... Also, how does this function relate to the next part (e.g., $a_t$?)*
**A3:** Thanks for pointing this out. We acknowledge that $[0,1]^V$ in Line 105 should be corrected to $[0,1]^{n \times V}$, the space of $n \times V$ matrices whose entries lie between 0 and 1. The $\mathcal{F}$ function thus represents the whole chain-structured model that takes a sequence of tokens as input and outputs probability distributions over the target vocabulary. This function introduces the intermediate function space $L$, which is then used to show that our SMA can recover the original function space (as shown in Line 121).
**Q4:** *In 3.2, what is $M$? – I didn’t fully understand the Sailboat-Mem approach. Do the authors simply attend to the nearest $w$ tokens, ... ... Also, the comment in #207 (“...maintaining the ability of attending to the key tokens …”) wasn’t clear to me as well.*
**A4:** As shown in Line 109, $M$ is the number of pre-defined functions. As stated in Line 201, we apply local attention on the **compressed sequence**, which only contains the activated inputs for the GAU module. In this sense, we first apply SMA and then attend to the nearest $w$ tokens in the compressed sequence. Regarding Line 207, the attention span between a query and a key token is unbounded because two consecutive tokens in the compressed sequence can come from positions that are far apart in the original sequence.
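This order of operations (SMA first, then windowed attention over the surviving tokens) can be sketched in plain Python; the function names and toy data are purely illustrative, not from our actual implementation:

```python
def compress(tokens, decisions):
    """SMA step: keep only the tokens whose activation decision is 1.

    Returns (original_position, token) pairs so the compressed sequence
    remembers where each element came from.
    """
    return [(i, t) for i, (t, a) in enumerate(zip(tokens, decisions)) if a == 1]

def local_window(compressed, idx, w):
    """A window of up to `w` neighbors around position `idx` of the
    *compressed* sequence. Consecutive entries here can be far apart in
    the original sequence, which is why the effective attention span is
    unbounded even though the window itself is small.
    """
    lo = max(0, idx - w // 2)
    return compressed[lo:lo + w]

tokens = list("abcdefgh")
decisions = [1, 0, 0, 1, 0, 0, 0, 1]  # configurator activates positions 0, 3, 7
comp = compress(tokens, decisions)    # -> [(0, 'a'), (3, 'd'), (7, 'h')]
```

With a window of size 2, the token at original position 7 attends to the one at position 3, four steps away in the original sequence.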
**Q5:** *The results in table 2 (LRA) show that the proposed method is dramatically faster ... ... so that doesn’t seem like the reason.*
**A5:** Please refer to the **Unclear performance gaps between LRA and Speech Recognition/Language Modeling** paragraph above.
**Q6:** *why do different models have different numbers of layers (#275)?*
**A6:** This is because prior works (e.g. MEGA, S4) use different numbers of layers for different tasks in LRA; we follow their settings to ensure a fair comparison.
**Q7:** *In what sense is the proposed method able to improve interpretability? (conclusion, limitations section)*
**A7:** As shown in Figure 4 in **Appendix D**, our method produces discrete and dynamic module activation patterns for each data sample. In this sense, our mechanism allows people to find correlations between the activation of a specific module and the input tokens/model predictions, giving a better understanding of the properties of the tasks or the behavior of the models. In fact, as shown in Figure 3, SMA allows us to answer questions about task properties such as "How much attention is needed for different sequence modeling tasks?" Also, since our latent decisions are discrete, our mechanism may provide extra controllability over the model predictions, achievable by manually modifying the module activations at inference time. Generally, our work opens new possibilities for interpretability that are worth exploring in future work.
**Typos and such:** Thanks for pointing out the typos. Fixed them. | Summary: This method introduces a novel architecture for long text modeling, which builds upon traditional linear state space models (SSMs). Since SSMs have shown inferior performance, combining SSMs with self-attention has become a popular approach. In this paper, several efficiency-related questions are considered in such settings, such as the amount of additional attention required for SSMs on a per-task basis and whether neural networks can dynamically activate their attention modules to achieve improved quality and efficiency trade-offs. These questions pose an interesting inquiry.
To address these questions, the authors propose a new architecture module that sparsely activates a Gated Attention Unit based on the state representations learned from an SSM. Through experiments on diverse tasks, the proposed approach demonstrates competitive results when compared to strong baselines.
Strengths: The research question presented is intriguing. The hybrid combination of SSM and attention has gained popularity. However, the extent of additional attention required for SSMs on a per-task basis remains unclear. By reducing the reliance on attention, significant inference speedup can be achieved.
The selected baseline for comparison appears to be adequate. The authors conduct comprehensive experiments to demonstrate the improvements brought by their proposed approach. Additionally, the ablation study effectively showcases the effectiveness of specific components of the method.
Weaknesses: The method section is difficult to comprehend. The notation should be clearly defined before its usage. Furthermore, the complex settings depicted in Figure 3 lack sufficient discussion on the motivation behind each function. It is unclear how these functions contribute to the final performance and the necessity of including them. If possible, it would be helpful to have a simplified implementation that still achieves competitive results.
More evidence is needed to support the performance of the proposed method. While the method shows similar effectiveness compared to other approaches, the claim of higher training speedup requires further explanation. The evaluation of speedup is not adequately discussed, and the observed speedup appears to be marginal when compared to strong baselines.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Could you please provide more information on how the training speedup is computed?
In line 119, what does "c" represent? Additionally, in Figure 1, what does "h" denote?
It seems that Figure 2 is listed before Figure 1. The correct order should be Figure 1 followed by Figure 2.
What is the motivation behind the intricate setting of the latent configurator? Could you elaborate on how each function contributes to the final performance?
How does the method perform when applied to widely-used pre-trained settings?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive feedback on our manuscript. In the following, we address the remaining concerns to hopefully encourage a positive evaluation.
**Presentation clarity of the method section:** Please refer to our answers to Question 2 and 3. We will provide more details in the next version and improve the presentation clarity.
**Motivation of the architecture design:** The design principle behind our Sailboat model is to allow a faster system with weaker capability (e.g., an SSM) to learn to activate a slower system with stronger capability (e.g., a GAU) on demand. We explain the design motivations of our latent configurator as follows.
For a sequence of length $n$, the design goal of the latent configurator is to generate the binary decision vector and the confidence probability vector, $a^l\in \mathbb{R}^n$ and $c^l \in \mathbb{R}^n$ respectively, for the GAU module, so that we can sparsely activate the module with the decision vector while allowing gradients to be backpropagated to the latent configurator through the confidence vector. The first Linear layer projects the contextualized representation $\mathbf{H}^l\in \mathbb{R}^{n \times d_m}$ to a matrix whose last dimension is two. This matrix is then normalized by a Tempered Softmax function on its last dimension to produce a probability matrix in $\mathbb{R}^{n \times 2}$, whose first column is the probability of not activating the module and whose second column is the probability of activating it. The decision vector $a^l$ and the confidence vector $c^l$ are then calculated by applying the argmax and the max operator, respectively, on the last dimension.
From the above descriptions, we can see that **all the functions are necessary** to produce the final decision and the confidence vectors, so that the latent configurator can work as we expected. In this sense, it is not possible to ablate some functions in the configurator to see how they contribute to the final performance. However, we do try different alternative designs such as using Gumbel Softmax/X-MoE/Tempered Sigmoid instead of Tempered Softmax for probability matrix calculation, which are further explained in L300-304 and the additional ablation study in **Global Response**.
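The configurator pipeline (Linear projection to two logits, Tempered Softmax, then argmax/max) can be sketched in a few lines of pure Python; here each $d_m$-dimensional row of $\mathbf{H}^l$ is replaced by a single scalar for brevity, and all names are illustrative rather than taken from our code:

```python
import math

def tempered_softmax(logits, tau):
    """Softmax with temperature tau over a short list of logits."""
    exps = [math.exp(z / tau) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def latent_configurator(hidden, w, b, tau=1.0):
    """Per-token decision vector a and confidence vector c.

    `hidden` stands in for the rows of H in R^{n x d_m} (one scalar per
    token here); (w, b) are the two rows of the hypothetical linear
    projection to two logits per token.
    """
    a, c = [], []
    for h in hidden:
        logits = [w[0] * h + b[0], w[1] * h + b[1]]  # Linear layer -> R^2
        p = tempered_softmax(logits, tau)            # [P(skip), P(activate)]
        a.append(1 if p[1] > p[0] else 0)            # argmax -> binary decision
        c.append(max(p))                             # max    -> confidence
    return a, c
```

The argmax yields the hard, sparse activation pattern, while the max-probability confidence stays differentiable and carries the gradient signal.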
**More evidence of model performance:** Please refer to the Updated Results section in **Global Response**. Generally, with updated results, our Sailboat-mem is now the **new state-of-the-art** among models with linear inference complexity, outperforming the pure SSM-based model, S5, with remarkably faster training speed and much less GPU memory consumption. We also add more detailed ablation studies to explain the speedup and performance.
**Explanation of training speedup measurement:** Since the dynamic module-level sparsity of our Sailboat model affects the training speed throughout the training process, we first measure the training time as the average per-step wall time across the full training stage. The training speed is then calculated as the inverse of the training time. To ensure fair comparisons, the relative training speedups between different models are calculated from training times measured on the same hardware with the same batch size settings. All experiments are conducted on a mixed cluster with 8 NVIDIA V100 32GB GPUs and 2 NVIDIA A5000 24GB GPUs.
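Concretely, the measurement described above amounts to the following small computation (a sketch with hypothetical timing lists, not our benchmarking harness):

```python
def relative_speedup(baseline_step_times, model_step_times):
    """Relative training speedup: speed = 1 / (average per-step wall
    time), both sides measured on the same hardware with the same batch
    size across the full training run.
    """
    baseline_speed = len(baseline_step_times) / sum(baseline_step_times)
    model_speed = len(model_step_times) / sum(model_step_times)
    return model_speed / baseline_speed

# e.g. a model averaging 0.10 s/step vs a baseline averaging 0.21 s/step -> 2.1x
```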
---
**Question 1:** *Could you please provide more information on how the training speedup is computed?*
**Answer:** Please refer to the **Explanation of training speedup measurement** paragraph above.
**Question 2:** *In line 119, what does "c" represent? Additionally, in Figure 1, what does "h" denote?*
**Answer:** As explained in line 118, $\mathbf{c}_t$ is the confidence probability vector which stores the probability of decisions for all the modules at the time step $t$, so the symbol $\mathbf{c}$ represents a confidence matrix storing the probabilities for all the modules at all the time steps. In Figure 1, $\mathbf{h}\in \mathbb{R}^{d_m}$ is one of the vectors of the sequence representation matrix $\mathbf{H}\in \mathbb{R}^{n \times d_m}$.
**Question 3:** *It seems that Figure 2 is listed before Figure 1. The correct order should be Figure 1 followed by Figure 2.*
**Answer:** Thanks for pointing this out. We have fixed this by re-ordering the Figures.
**Question 4:** *What is the motivation behind the intricate setting of the latent configurator? Could you elaborate on how each function contributes to the final performance?*
**Answer:** Please refer to the **Motivation of the architecture design** paragraph above.
**Question 5:** *How does the method perform when applied to widely-used pre-trained settings?*
**Answer:** While SMA is generally applicable to activating any sub-modules in neural networks, we narrow the scope of our paper (due to limited resources) to developing a more efficient neural architecture with SMA for long sequence modeling. It would be interesting to see how SMA can boost the downstream performance of a pre-trained model, but we consider it out of the scope of this paper and leave it as future work worth exploring. For the setting of Language Model (LM) pre-training, we do test our model on the competitive LM task, enwik8, and obtain better results than the baseline MEGA model.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will raise the score to 5. | Summary: The authors employ a hybrid model combining linear state space models and attention modules, while incorporating sparse attention within the attention modules to reduce memory usage and improve speed. The proposed method, Sailboat, outperforms previous approaches in terms of quality, speed, and memory efficiency. The sparse activation module in Sailboat utilizes temperature softmax to activate modules and aggregates output from each activated module weighted by softmax values. Additionally, the authors propose a variant called Sailboat-Mem, which performs window-sized attention on tokens assigned to each module, resulting in speed improvements and memory optimization, with a slight reduction in quality.
Strengths: The paper demonstrates a clever utilization of a mixture of experts (MoEs) and routing techniques, resulting in notable speed improvements.
The proposed method achieves results comparable to state-of-the-art approaches while achieving increased speedups and reduced memory requirements.
The simplicity of the proposed method is a significant strength, as it only requires tuning a single temperature parameter in the softmax activation.
The ablation studies conducted in the research paper providing comparisons with other variants, such as gumbel-softmax and complete activation of modules, highlight the method's trade-off in terms of performance, speed, and memory utilization.
Weaknesses: One notable weakness is the absence of a baseline comparison with Flash attention which performs exact attention but provide greater speedups while utilizing hardware aware techniques [https://arxiv.org/pdf/2205.14135.pdf].
Despite achieving speedups and reduced memory compared to MEGA, the proposed method does not demonstrate an improvement in performance over MEGA.
There are other variants of routing, such as Top-k and REINFORCE, which could potentially be compared with the proposed method. The absence of such comparisons limits the scope and comprehensiveness of the work. Please refer Top-k https://arxiv.org/abs/2101.03961, REINFORCE https://arxiv.org/abs/2202.01169
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The equation introduced at line 120 lacks defined variables beforehand, making it challenging to interpret. Could you provide more clarity on this equation by defining all the variables involved?
The improvement of linear state-space models over transformers appears to be significant. Can you explain why this is the case? Are the number of parameters comparable between the two models?
Are the SSM kernels in the hybrid models identical? Maintaining consistency in the SSM kernels would offer clearer insights into the improvements achieved by sparse activation. Could you provide further information on this aspect?
Could you provide more details about the schedule used for the temperature softmax function in the work?
Can MEGA be considered as having only one attention module where all tokens are processed by it? In contrast, Sailboat divides tokens across a set of attention modules and performs attention only on those assigned to each module. As Sailboat's performance still lags behind MEGA, is there still potential for further improvements in routing? Can you provide the associated FLOPs (floating-point operations) for Sailboat and compare them with MEGA? Additionally, could you justify the memory reduction achieved by Sailboat? Is the reduction because the memory bottleneck is bs x num_tokens x num_tokens tensor?
In Section 3.2, binary decisions in SMA can be accomplished by computing p_i=0 using only w_0 and b_0, and p_i=1 can be deduced as 1-p_i=0. Could you please provide a justification for your implementation, particularly regarding the usage of w_0, b_0, w_1, and b_1, as it seems to deviate from conventional approaches?
There is a typo in Line 816 with the repeated phrase "is optimized." Could you please correct this error?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There are no explicit limitations beyond those discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive review of our work and the constructive feedback on our manuscript. In the following, we address the remaining concerns to hopefully motivate a clear acceptance score.
**Comparison with Flash Attention (FA):** FA and SMA improve model efficiency at orthogonal levels. SMA is a module-level activation strategy that is independent of how the attention module actually performs its computation. In fact, we could also apply FA to the GAU module in our Sailboat architecture for more efficient self-attention computation, but we decided **not** to use any custom fused kernels in our implementation for a fair comparison with MEGA and its variant.
**Lack of performance improvement over MEGA:** The MEGA model has quadratic inference complexity, while our Sailboat has only sub-quadratic complexity. It is unfair to compare performance alone while ignoring the better scalability provided by Sailboat. The linear-complexity variant of our model, Sailboat-mem, does provide significantly better average accuracy than MEGA-chunk on LRA, with better speedup and memory consumption. In L598-L607 of **Appendix D**, we further demonstrate that Sailboat-mem provides a much better quality-speed trade-off than MEGA-chunk on the Image and Pathfinder tasks of LRA.
**Comparison with other routing variants:** We want to first point out that our method is **not** a routing mechanism, but instead an activation mechanism for neural modules. For more details on the comparisons with MoEs, please refer to **Global Response**. Moreover, although SMA and MoE are not directly comparable, we add an additional ablation study in **Global Response** by adapting the X-MoE routing mechanism to our use case, and show that it actually provides much worse performance than our SMA mechanism.
---
**Question 1:** *The equation introduced at line 120 lacks defined variables beforehand, making it challenging to interpret. Could you provide more clarity on this equation by defining all the variables involved?*
**Answer:** $L'$ represents the function space of a layer equipped with SMA and $M$ predefined functions. According to Line 117, the function space $L'$ is a linear combination of the activated functions at layer $l$, so $\zeta_i$ denotes the coefficients of the linear combination. As stated in Lines 113-120, $a_t^i$ is the decision value for each sub-module, $c_t^i$ is the confidence value for the decision, and $I$ denotes the set of indices of the activated functions.
**Question 2:** *The improvement of linear state-space models over transformers appears to be significant. Can you explain why this is the case? Are the number of parameters comparable between the two models?*
**Answer:** According to the S4 paper, the numbers of parameters are matched between the two models. While the exact reasons behind the performance improvement of linear state-space models over transformers are out of the scope of our paper, we provide some related papers [1,2] for the reviewer's further reading. Note that our work is orthogonal to previous works on SSMs, and, as stated in Lines 165-167, we use an SSM simply because it provides an efficient computation of the recurrent states, and because it is interesting to investigate how much extra attention is needed beyond the SSM module.
**Question 3:** *Are the SSM kernels in the hybrid models identical? Maintaining consistency in the SSM kernels would offer clearer insights into the improvements achieved by sparse activation. Could you provide further information on this aspect?*
**Answer:** Yes, we use the same MH-EMA kernel for all the layers as indicated in Line 164-165 of our paper.
**Question 4:** *Could you provide more details about the schedule used for the temperature softmax function in the work?*
**Answer:** As said in Line 170-172 of our paper, we didn’t apply any scheduling for the temperature parameter and set it as a learnable parameter with a specific initialization.
**Question 5:** *Can MEGA be considered as having only one attention module where all tokens are processed by it? ... ... Is the reduction because the memory bottleneck is bs x num_tokens x num_tokens tensor?*
**Answer:** Yes, MEGA can be considered as that. Yes, there is further potential for improvement, since our design of the latent configurator is rather simple. Since the major speed bottleneck of SMA lies in copying a large matrix (which is not included in FLOPs calculations) through the *scatter* operator, we follow previous works and measure the speedup based on actual wall time for a stricter and fairer comparison. The memory reduction achieved by Sailboat comes from the GAUs processing a shorter sequence, and for some layers of the Sailboat model the GAUs are skipped entirely because they are deactivated by the latent configurator all the time. This kind of skipping results in both a smaller bs x num_tokens x num_tokens tensor and fewer SiLU activation operations inside the GAUs.
**Question 6:** *In Section 3.2, binary decisions in SMA can be accomplished by ... ... as it seems to deviate from conventional approaches?*
**Answer:** While using Softmax is theoretically equivalent to using Sigmoid in the binary decision case, we empirically find that using Softmax with both $w_0$ and $w_1$ gives much better results under our hyper-parameter settings. Please refer to the performance of the Sailboat-mem-sigmoid model in the Additional Ablation Study section of the **Global Response** for more details.
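The theoretical equivalence is easy to verify numerically: a two-logit softmax reduces to a sigmoid of the logit difference, so the two parameterizations express the same function and differ only in their optimization behavior. A quick check (variable names ours):

```python
import math

def softmax2_p_activate(x, w0, b0, w1, b1):
    """P(activate) from the two-logit softmax parameterization."""
    z0, z1 = w0 * x + b0, w1 * x + b1
    return math.exp(z1) / (math.exp(z0) + math.exp(z1))

def sigmoid_p_activate(x, w0, b0, w1, b1):
    """The same probability folded into a single sigmoid, since
    e^{z1} / (e^{z0} + e^{z1}) = 1 / (1 + e^{z0 - z1}) = sigmoid(z1 - z0).
    """
    return 1.0 / (1.0 + math.exp(-((w1 - w0) * x + (b1 - b0))))
```

Both forms agree to floating-point precision for any input, so the empirical gap reported above comes from training dynamics rather than expressiveness.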
**Question 7:** *There is a typo in Line 816 ... ... this error?*
**Answer:** Thanks for pointing this out. We have fixed this typo by deleting the additional "is optimized".
---
[1] Resurrecting recurrent neural networks for long sequences. Orvieto, Antonio, et al. arXiv 2023.
[2] Simple hardware-efficient long convolutions for sequence modeling. Fu, Daniel Y., et al. arXiv 2023. | Rebuttal 1:
Rebuttal: # Global Response
## Updated Results
For the Sailboat-mem model, we find that down-scaling the attention matrix $QK^T$ with the window size $w$ instead of the compressed sequence length $r$ can lead to substantially better results (as shown in Equation (5) near the Line 521 of **Appendix A.2**). Thus, we apply this change to Sailboat-mem for both the Long Range Arena and the Speech Command benchmark.
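As a rough sketch of this change (the exact form is Equation (5) in Appendix A.2, which we do not reproduce here; variable names below are illustrative): the divisor of the raw attention scores switches from the compressed sequence length $r$, which grows with the input, to the constant window size $w$, which keeps score magnitudes stable as sequences get longer.

```python
def scaled_scores(raw_scores, divisor):
    """Divide the raw QK^T scores by `divisor` (r before the change, w after)."""
    return [[s / divisor for s in row] for row in raw_scores]

raw = [[8.0, 4.0], [2.0, 6.0]]
by_r = scaled_scores(raw, 1024)  # large compressed length: scores shrink toward 0
by_w = scaled_scores(raw, 16)    # fixed window size: magnitudes stay usable
```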
The table below shows the updated results for Long Range Arena (LRA), where * indicates a hybrid model. Our Sailboat-mem is now the new **state-of-the-art** among models with linear inference complexity, outperforming the pure SSM-based model, S5, with remarkably faster training speed and much less GPU memory consumption. Note that for a fair comparison with MEGA, we use the same MH-EMA based SSM for Sailboat-mem, and MH-EMA is generally considered a simpler version of S4 with weaker modeling power than S5. The fact that we can still achieve performance improvements over S5 under this disadvantaged setting demonstrates the effectiveness of our proposed SMA mechanism. We also include detailed efficiency measures for each task in the PDF.
| Model | ListOps | Text | Retr. | Image | Path. | Path-X | Avg. | Speed | Mem. |
|-|-|-|-|-|-|-|-|-|-|
| S4 | 59.10 | 86.53 | 90.94 | 88.48 | 94.01| 96.07 | 85.86 | 4.8× | 0.14× |
| S4D-LegS | 60.47 | 86.18 | 89.46 | 88.19 | 93.06 | 91.95 | 84.89 | 6.1× | 0.14× |
| S5 | _62.15_ | 89.31 | **91.40** | 88.00 | _95.33_ | **98.58** | _87.46_ | 6.1× | 0.14× |
| Liquid-S4 | **62.75** | 89.02 | 91.20 | _89.50_ | 94.8 | 96.66 | 87.32 | 1.2× | 0.17× |
| H3* | 57.50 | 88.20 | 91.00 | 87.30 | 93.00 | 91.80 | 84.80 | 6.0× | 0.24× |
| MEGA-chunk* | 58.76 | **90.19** | 90.97 | 85.80 | 94.41 | 93.81 | 85.66 | 7.0× | 0.09× |
| **Sailboat-mem*** | 61.70 | _89.60_ | _91.28_ | **90.10** | **96.35** | _96.68_ | **87.62** | **10.4×** | **0.05×** |
| | | | | | | | | | |
The table below shows the updated results for Speech Command.
| Model | #Param. | Acc. | Speed | Mem. |
|-|-|-|-|-|
| S4 | 300K | 97.50 |-|-|
| MEGA-chunk | 300K | 96.92 | 1.00× | 1.00× |
| **Sailboat-mem** | 293K | 97.35 | 1.32× | 0.44× |
||||||
## Additional Ablation Study
We present additional ablation studies on Sailboat-mem to justify the effectiveness of both the proposed working memory mechanism and the design choices of our architecture. We explain the meaning of the postfixes in the table as follows:
- “-ffn” means we add an additional feedforward layer (with the same architecture as MEGA) after each Sailboat layer
- “-sigmoid” means we use sigmoid instead of Tempered Softmax for latent decision probability calculation.
- “-moe” means that instead of using our Tempered Softmax for module activation decisions, we adapt the X-MoE [1] routing mechanism to our use case by only considering two modules, a GAU module and a module that always outputs zero. We do not apply load balancing loss in this case because there is only one module that needs to do actual computation.
- “-local” means that we don't use SMA and always activate the GAU module with local attention.
- “-chunk” means that we don't use SMA and always activate the GAU module to process the equal-sized chunks of the input sequence.
- “-mem32” means we restrict the memory size of Sailboat-mem to 32.
- “-local32” means we restrict the sliding window size of Sailboat-local to 32.
| Model| Image| Speed| ListOps | Speed | Path.|Speed|
|-|-|-|-|-|-|-|
| Sailboat-mem| 90.36 | 1.33×| 59.05| 2.26× | 96.34 | 1.31× |
| Sailboat-mem-ffn|89.60| 1.13×| 56.45| 2.05× | 96.62 | 0.97× |
| Sailboat-mem-sigmoid | 87.34 |1.77×| 58.20 | 2.52× | 91.35 | 1.67× |
| Sailboat-mem-moe | 85.60 | 1.66× | 58.10 | 2.19× | 91.41| 1.80× |
| Sailboat-local | 90.12 | 1.46×| 59.00| 1.78× | 96.08 | 1.54× |
| Sailboat-chunk | 86.46 | 2.17× | 58.35| 2.79×| 93.91 | 2.97×|
| Sailboat-mem32 | 78.16 | 1.73×| 58.70| 2.14× | 92.97 | 1.86×|
| Sailboat-local32 | 75.58 | 1.67× | 53.90| 2.19× | 82.05 | 1.80× |
||||||||
## Comparison with Mixture of Experts (MoE)
The motivation behind our Sparse Modular Activation (SMA) mechanism is to enable neural networks to contextually skip modules of any architectures for efficient sequence modeling, while MoE aims to efficiently scale up the models with more parameters (usually by adding homogeneous modules). This difference of motivation results in a fundamental mechanism difference:
- SMA leverages a latent configurator to decide if each module needs to be **activated** for each sequence element, while MoE is designed to choose a predefined number of modules from a large group of modules.
This difference on the core design of the mechanisms further leads to the following consequences:
- SMA supports a dynamic model architecture that can learn to drop a sub-module entirely (for better efficiency) based on the task it is trained on, while MoE only selects the parameters of the modules within the same architecture. As shown in Figure 3 of our paper, when the Sailboat model is trained on the Text task of LRA, the first three layers learn to **never** activate the GAU module. This means that these layers degenerate from the initial SSM+GAU architecture to pure SSM layers after adapting to the target task.
- SMA is guaranteed to have a full coverage of the combinatorial search space ranging from zero activation to full activation of modules, while MoE can only cover a subset of the space by choosing a fixed number of modules to activate. This is further explained in L113-L123 of our paper.
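The coverage argument above can be made concrete with a toy enumeration (illustrative code, not from our paper):

```python
from itertools import product

def sma_patterns(M):
    """All activation patterns SMA can express: one independent binary
    decision per module, from zero activation to full activation."""
    return set(product((0, 1), repeat=M))

def topk_patterns(M, k):
    """Patterns a fixed top-k router can express: always exactly k active."""
    return {p for p in product((0, 1), repeat=M) if sum(p) == k}

# With M = 3 modules, SMA covers all 2^3 = 8 patterns,
# while top-1 routing covers only C(3,1) = 3 of them.
```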
To the best of our knowledge, SMA is the **first** mechanism that successfully enables a neural network to obtain practical efficiency and complexity gains from sparsely activating a self-attention-like module, while none of the previous works on MoE ever achieved this.
---
[1] On the Representation Collapse of Sparse Mixture of Experts. Chi et al. NeurIPS 2022.
Pdf: /pdf/906ebb61b55fd2d4ebc173310aa9e2084ea32cb7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |